Search Results

Search found 13692 results on 548 pages for 'bad practices'.


  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I ran into a problem importing 3D models from FBX files: sometimes a model is only partially imported. That is, when you draw a 3D model loaded from an FBX file processed by the content pipeline, you get only some of its meshes. "Sometimes" means the error occurs only for certain files. My findings are below. For example, I have a 10 MB binary FBX file with a model. When I load it, the resulting Model instance contains only part of the meshes. Because models from other files import normally, I assumed the file itself was in a "bad" format. When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format, while my file is in the FBX 2012 format. So I needed to convert it to the 2006 format, which can be done with the Autodesk FBX Converter 2012.1. I tried converting it to other FBX versions as well, but without success. I also imported the FBX file into 3ds Max, where it imported correctly. I then exported the model from 3ds Max, which produced another FBX file, and added that to my XNA project. After that I got the full model, and it rendered well! So the internal data structure of an FBX file matters more for a correct XNA import than its version. Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX directly, you should use the Autodesk FBX SDK; that way you can read the content of an FBX file manually and use it however you like. I also tried converting my source FBX file to DAE (Collada) and the resulting DAE file back to FBX using the FBX Converter (FBX -> DAE -> FBX). The resulting FBX file imports normally. Conclusion: correct operation of the XNA FbxImporter does not depend on the version (2006, 2011, etc.) or form (binary, ASCII) of the FBX file; the internal FBX data structure is much more important. To make an FBX "readable" for the XNA importer you can use a double conversion such as FBX -> Collada -> FBX, or you can use the FBX SDK to load data from the FBX manually. P.S. The Autodesk FBX Converter 2012 is more than a simple converter. It provides tools such as FBX Explorer, which shows the structure of an FBX file; FBX Viewer, which renders the content of an FBX file and provides basic interaction such as moving and zooming the model; and FBX Take Manager, which allows you to work with embedded animations.
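    If you run into the same partial-import symptom, a quick way to see how much of the model actually survived the content pipeline is to enumerate the meshes on the loaded Model and compare the count with what FBX Explorer shows for the source file. A minimal sketch, assuming XNA 4.0 and a placeholder asset name:

    ```csharp
    // Inside LoadContent() of your Game class (Microsoft.Xna.Framework.Graphics
    // is already imported by the default Game template).
    Model model = Content.Load<Model>("MyModel");   // placeholder asset name

    // List every mesh the importer actually produced; a short list here, compared
    // with the mesh count shown in FBX Explorer, confirms the partial import before
    // you try the FBX -> Collada -> FBX round trip described above.
    System.Diagnostics.Debug.WriteLine("Imported {0} meshes:", model.Meshes.Count);
    foreach (ModelMesh mesh in model.Meshes)
    {
        System.Diagnostics.Debug.WriteLine(" - {0} ({1} mesh parts)", mesh.Name, mesh.MeshParts.Count);
    }
    ```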

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started at full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning, Brad Adelberg, VP of Development for Data Integration products, presented Oracle's data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are: provide access to any data at any source, on premise or on cloud; enable zero-downtime operations and maximum performance; leverage real-time data for accurate business insights; and ensure high-quality data is used across the enterprise. During the session Brad walked through how Oracle's data integration products (Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator) deliver on these requirements and how recent product releases build on this strategy. Soon after Brad's session we heard from a panel of Oracle GoldenGate customers (St. Jude Medical, Equifax, and Bank of America) about how they achieved zero-downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from active-active replication to offloading reporting. St. Jude Medical's implementation in particular, which involves the alert management system for patients who use their pacemakers, reminded me that in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly reliable continuous availability for life-saving medical systems. In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with a review of Oracle GoldenGate 11gR2's new features. Many of the questions we received from the audience were about GoldenGate's new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate's unique capability to capture changed data through a direct integration with the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained. Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise, and providing timely data is a critical requirement. In this solution, Oracle GoldenGate performs log-based change data capture for Oracle Data Integrator's near real-time data integration between heterogeneous systems. As Raymond James' ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and "always-on" integration processes. Their presentation included two demonstrations focused on CDC patterns deployed with ODI and on automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very well-prepared presentation at OpenWorld this year. Day 2 (Tuesday) will also be a busy day in our track.
    In addition to the Fusion Middleware Innovation Awards ceremony at 11:45am at Moscone West 3001, we have the following DI sessions:
    - Real-World Operational Reporting Customer Panel, 11:45am, Moscone West 3005
    - Oracle Data Integrator Product Update and Future Strategy, 1:15pm, Moscone West 3005
    - High-Volume OLTP with Oracle GoldenGate: Best Practices from Comcast, 1:15pm, Moscone West 3005
    - Everything You Need to Know about Monitoring Oracle GoldenGate, 5pm, Moscone West 3005
    If you are at OpenWorld, please join us in these sessions. For a full review of the data integration track at OpenWorld, please see our Focus-On document.

    Read the article

  • Top 10 Linked Blogs of 2010

    - by Bill Graziano
    Each week I send out a SQL Server newsletter and include links to interesting blog posts. I've linked to over 500 blog posts so far in 2010. Late last year I started storing those links in a database so I could do a little reporting. I tend to link to posts related to the OLTP engine, and I try to link to the individual blogger in group blogs; unfortunately that wasn't possible for the SQLCAT and CSS blogs. I also have a real weakness for posts related to PASS. These are the top 10 blogs that I linked to during the year, ordered by the number of posts I linked to.
    1. Paul Randal – Paul writes extensively on the internals of the relational engine. Lots of great posts around transactions, the transaction log, disaster recovery, corruption, indexes and DBCC. I also linked to many of his SQL Server myths posts.
    2. Glenn Berry – Glenn writes very interesting posts on how hardware affects SQL Server. I especially like his posts on the various CPU platforms. These aren't necessarily topics that I'm searching for, but I really enjoy reading them.
    3. The SQLCAT Team – This Microsoft team focuses on the largest and most interesting SQL Server installations. They regularly publish white papers and best practices.
    4. SQL Server CSS Team – These are the top engineers from the Microsoft Customer Service and Support group, the folks you finally talk to after your case has been escalated about 20 times. They write about the interesting problems they find.
    5. Brent Ozar – The posts I linked to mostly focused on the relational engine: CPU, NUMA, SSD drives, performance monitoring, etc. But Brent writes about a real variety of topics including blogging, social networking, speaking, the MCM, SQL Azure and anything else that seems to strike his fancy. His posts are always well written and thought provoking.
    6. Jeremiah Peschka – A number of Jeremiah's posts weren't about SQL Server. He's very active in the "NoSQL" area and I linked to a number of those posts. I think it's important for people to know what other technologies are out there.
    7. Brad McGehee – Brad writes about being a DBA, including maintenance plans, DBA checklists, compression and audit.
    8. Thomas LaRock – I linked to a variety of posts, from PBM to networking to 24 Hours of PASS to TDE. Just a real variety of topics. Tom always writes with an interesting style, usually mixing in a movie theme and/or bacon.
    9. Aaron Bertrand – Many of my links this year were about Denali features. He also had a great series on bad habits to kick.
    10. Michael J. Swart – This last one surprised me. There are some well-known SQL Server bloggers below Michael on this list. I linked to posts on indexes, hierarchies, transactions, I/O performance and a variety of other engine-related topics. All are interesting and well thought out. Many of his non-SQL posts are also very good. He seems to have an interest in puzzles and other brain teasers. Michael, I won't be surprised again!

    Read the article

  • Do MORE with WebCenter

    - by Michael Snow
    We've been extremely busy here on the Oracle WebCenter team. We hope that you've all been keeping up with the interesting news each week. Last week was jammed full of Gartner PCC and Gartner 360 buzz. If you missed any of the highlights, be sure to check out both Kellsey's post from last week, Gartner PCC: A Shovel & Some Ah-Ha's, and Christie's overview of Loren Weinberg's PCC presentation, "Here Today, Gone Tomorrow: Engage Your Customers or Lose Them". This week we'll be focusing on "Doing More with WebCenter", leading up to a great webcast scheduled for Thursday, March 22 (invite and registration link below). This is the second in a series of three webcasts dedicated to expanding the understanding of the full capabilities of WebCenter. Yes, that might mean that you are not getting the full benefits of the software you already own, or of the expansion potential via an upgrade to the full WebCenter Suite Plus. Tune in on Thursday, 10 a.m. PT / 1 p.m. ET.
    Want to be a speaker at Oracle OpenWorld 2012? Oracle OpenWorld planning has already kicked off. We know that it is only March and next October is far in the distance, but planning has already started for Oracle OpenWorld 2012. So if you want to be a speaker and propose your own session for this year's event in San Francisco on September 30th - October 4th, start thinking now! The annual OpenWorld Call for Papers is now open until April 9th. All of the details to submit a paper are available here. Of course, the WebCenter team here is interested in sessions including case studies, thought leadership, and customer stories around any of the Oracle WebCenter solutions, but the Call for Papers is open to all Oracle topics. When submitting your topic, be sure to describe what you plan to discuss and the value of the presentation to other attendees. Sell your session, because there will be a lot of competition to be selected. Bonus news: speakers for selected sessions receive a complimentary full conference pass! Get your papers in and we'll see you in San Francisco!
    Webcast Series: Do More with Oracle WebCenter - Expand Beyond Content Management. Enable employees, partners, and customers to do more with your content. Did you know that, in addition to content management, Oracle WebCenter now also includes comprehensive portal, composite application, collaboration, and Web experience management capabilities? Join us for this webcast and learn how you can provide a new level of user engagement. Learn how Oracle WebCenter:
    - Drives task-specific application data and content to a single screen for executing specific business processes
    - Enables mixed internal and external environments where content can be securely shared and filtered with employees, partners, and customers, based upon role-based security
    - Offers Web experience management, driving contextually relevant, social, and interactive online experiences across multiple channels
    - Provides social features that enable sharing, activity feeds, collaboration, expertise location, and best-practices communities
    Learn how to do more with Oracle WebCenter. Register now for the second webcast in the series "Do More With Oracle WebCenter": March 22, 2012, 10 a.m. PT / 1 p.m. ET. Presented by Michelle Huff, Senior Director, WebCenter Product Management, Oracle, and Greg Utecht, Project Manager, IT Operations, TIES.

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community, and to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter". During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the health insurance industry in the recent past, and she wanted me to tell her about the specific business needs of that industry. She was in for quite a surprise when she found out that I had spent the better part of a decade managing content within the health insurance industry, and I had discovered a great topic for my first blog post! I offer some insights from health insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What should solution providers be aware of when offering solutions to these industries? The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs. The first thing a solution provider must be aware of about the health insurance industry is that it tends to be transaction intensive. These are the companies that manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance, not technology. You may find the mindset conservative in comparison to the IT industry; however, the health insurance industry has benefited, and will continue to benefit, from the efficiency that technology brings to traditionally paper-driven processes. We are all aware that the healthcare reform bill has had a significant impact on the health insurance industry. Insurers are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, the administrative costs of health insurance include the insurer's cost to administer the health plan and the costs borne by employers, health-care providers, governments and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanation of benefits are some manifestations of technological advancement in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans.
The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management.  At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • Most Unprofessional Workplace

    - by TehGrumpyCoder
    I've worked lots of places in lots of roles: Delivery truck driver, Boilermaker, antenna rigger, Professional Musician, Electronic Technician, Electrical Engineer, and for most of my career: Software Turkey. I want to say this large company is the most unprofessional place I've ever worked, but then I think about other jobs such as TTI that stiffed us all for 10 months salary -- or had us work 2-1/2 years at 66% however you want to look at it, or maybe NeoPlanet with a cast from a bad sitcom running the show, I could go on, but I digress (as usual). So maybe this place isn't the *most* unprofessional, but the personnel rank up there. I'm in a small room off a factory. There are 3 managerial offices, and 36 common-folk of various skill-sets in a variety of single to quad cubicles. No matter where you sit though, because of the layout and location, you've got a hard wall as one wall of your cubicle. Because of that hard wall, everything echoes. I get off the phone, and the guy in the next cubicle makes a comment in response to my phone conversation... I hate that it can be heard and I hate that they do that! These people have no problem yelling from cube to cube to carry on running conversations some of which are actually work-related. There's a lady two cubes away that talks so loud I can clearly hear every phone conversation she has... all work-related but still... Then the one in the next cubicle must have been raised on a farm because there's only one volume setting: LOUD... "HEY MARGE, CAN I GET IN FOR A QUICK APPOINTMENT AFTER WORK TONIGHT?" ... sigh Also that cube is the 'party cube' so that's where all the candy, cake, donuts, and leftovers sits. Anything MzLoud brings in has to have a verbal recipe associated with it at least 10 times during the day, and of course at volume. I've had running conversations over the top of my cube from people in the next one on each side. The weird thing is... the boss sits with an open door closer to this whole fiasco than me. So I wear a pair of Bose noise-cancelling headphones, and crank up Kenny Burrell, Herb Ellis, Wes Montgomery, or Jimmy Smith to the point I can't hear the racket... what the heck, I already have a hearing loss from playing guitar.

    Read the article

  • What constitutes a "substantial, good-faith effort to remove the links"

    - by Luke McCallum
    We engaged the services of a third-party SEO consultant to assist us in managing our meta data and to write regular blogs on our site http://cyberdesignworks.com.au. Without our authorisation, the SEO also ran a link-building campaign which has seen us Penguin-slapped, and we no longer appear in Google for a number of our core keywords. Since notification by Google that we have "unnatural links" back in March, we have undertaken a significant campaign to rid ourselves of these dodgy backlinks by a number of methods. I have just received feedback on my 4th or 5th resubmission, which still advises that we need to make a "substantial, good-faith effort to remove the links" before Google will reconsider us for inclusion. After the effort that I have gone through to get links removed, I am now at a loss as to what else I can do to demonstrate a "substantial, good-faith effort to remove the links". Below is a summary of the actions that we have taken to date.
    - According to http://removem.com we had about 5584 back-linking domains.
    - Of those, we have successfully contacted and had links removed from 344 domains.
    - We ignored links from 625 domains as they were either legitimate press releases, natural backlinks or client websites containing an attribution link in the footer that points back to us.
    - Due to our efforts, or the sites simply becoming defunct, removem.com reports that links from 3262 domains have been removed.
    - We have contacted but are yet to receive feedback from 1666 domains, so we can assume that those backlinks remain. We have configured an automatic 301 redirect for each of the links from these 1666 domains to point to http://redirects.sanscode.com/, which we are calling our Bad Link Catcher (a stroke of genius, I thought). e.g. http://www.mysimplewebdesign.com/create-a-perfect-webpage-with-four-important-tips-from-sydney-web-development-service-companies.php
    - As we are a web design agency, we have a large number of client websites which contain an attribution link in their footer that points back to us. We have gone through the vast majority of these and updated the links to replace the anchor text with an image and a rel="nofollow" link, e.g. <a rel="nofollow" target="_blank" href="http://www.cyberdesignworks.com.au/"><img src="https://sessions.sanscode.com/site/assets/media/badges/Badge_CDW_SANSCODE.png"></a> (see http://www.milkatwork.com.au/).
    - An export from http://removem.com detailing the number of times we have contacted each link and whether it is still found or not was also supplied with each resubmission.
    - The total number of back links reported in Google Webmaster Tools has dropped from over 100K to 87K, and I expect it to drop significantly lower once Google re-crawls each back-linking page.
    Based on all of the above, I am not sure what else I can do to demonstrate a "substantial, good-faith effort to remove the links". I would sincerely appreciate any feedback or suggestions that you may have, as I am out of ideas.

    Read the article

  • Tap Into Tier 1 ERP

    - by Christine Randle
    By: Larry Simcox, Senior Director, Accelerate Corporate Programs. Your customers aren't satisfied with so-so customer service. Your employees aren't happy with below-average salaries. So why would you settle for second-rate or tier 2 ERP? A recent report from Nucleus Research found that usability improvements and rapid implementation tools are simplifying deployments, putting tier 1 enterprise applications well within reach for midsize companies. So how can your business tap into the power of tier 1 ERP? And what are the best ways to manage a deployment? The Reputation of ERP Implementations: Overhauling internal operations and implementing ERP can be a challenging endeavor for organizations of all sizes. Midsize companies often shy away from enterprise-class ERP, fearing complexity, limited resources and perceived challenging deployments. Many forward-thinking executives experienced ERP implementations in the late 90s and early 2000s and now embrace a strategy of growing their business by investing in a foundation for innovation and growth via ERP modernization projects. In recent years there has been a strong consumerization of IT, with enterprise applications and their delivery methods evolving to become more user-friendly. Today, usability improvements and modern implementation tools have made top-tier ERP solutions more accessible for growing companies. Nucleus found that because enterprise-class software can now be rapidly deployed, the payback is quicker, the risks are lower, the software is less disruptive and, overall, companies can differentiate themselves from their competitors and achieve more success with the advantages these types of systems deliver. Tapping into the power of tier 1 ERP can be made much easier with Oracle Accelerate solutions. Created by Oracle's expert partners and reviewed by Oracle, Oracle Accelerate solutions are simple-to-deploy, industry-specific, packaged solutions that provide a fast time to benefit, which means getting the right solution in place quickly and inexpensively, with a controlled scope and predictable returns. How are growing midsize companies successfully deploying tier 1 ERP? According to Nucleus Research, companies can increase success in their tier 1 ERP deployments by limiting customization, planning a rapid go-live, improving communication across departments, and considering different delivery options. Oracle Accelerate solutions incorporate industry best practices and encourage rapid deployments. Even more, Nucleus found that customers deploying tier 1 ERP with Oracle that had used Oracle Business Accelerators, Oracle's rapid implementation tools, reduced the time to deploy Oracle E-Business Suite by at least 50 percent. Industrial manufacturer L.H. Dottie is one company that needed ERP with enhanced capabilities to support its growth and streamline business processes. Using out-of-the-box configuration of Oracle E-Business Suite modules (provided by Oracle Business Accelerators and delivered by Oracle Partner C3 Business Solutions), L.H. Dottie was able to speed its implementation and went live in just six and a half months. With tier 1 ERP, the company was able to grow and run its business better, automating a variety of processes, accelerating product delivery and gaining powerful data analysis capabilities that helped drive its business into further regions. See more details about their ERP implementation here. Tier 1 enterprise-class applications have proven to boost the success of Oracle's midsize customers.
    As Nucleus Research notes, companies poised for growth or seeking to compete against larger competitors absolutely can tap into the power of tier 1 ERP and position themselves as enterprise-class by leveraging Oracle Accelerate solutions. You can learn more here about The Evolving Business Case for Tier 1 ERP in Midsize Companies in our exclusive webcast with Nucleus.

    Read the article

  • Everything You Need to Know About Monitoring Oracle GoldenGate

    - by Irem Radzik
    By Joe deBuzna. Having over 16 years of database replication experience, with 6 of those split between complex Oracle GoldenGate installations across three continents and researching monitoring requirements for both GoldenGate core replication and the GoldenGate monitoring GUIs, I've seen GoldenGate used and monitored in every way conceivable. And definite patterns have emerged. Next week at OpenWorld, on Tuesday, Oct 2nd at 5pm, please come by Moscone West 3005 for the "Everything You Need to Know about Monitoring Oracle GoldenGate" session to hear me discuss how GoldenGate customers are monitoring their implementations today, common methods and tricks, what's new in the GUIs, and what's on the roadmap ahead. As you may have seen in previous blog posts and in our launch webcast, we now have a Plug-in for Oracle Enterprise Manager in addition to the new Oracle GoldenGate Monitor product. For those of you who won't be at OpenWorld, please check out our Management Pack for Oracle GoldenGate data sheet and the Oracle GoldenGate 11gR2 New Features white paper to learn more about the new Oracle GoldenGate 11gR2 release. In this latest release we also have enhanced conflict detection and resolution. It is a cornerstone of any active-active database replication solution, and in the latest release we just took ours to the next level with built-in optimized resolution routines (no more dependency on sqlexec!). At OpenWorld we have a session, CON8557 - Best Practices for Conflict Detection & Resolution, 3:30-4:30 on Wed, Oct 3rd at Moscone West 3005. Oracle Development Manager Bharath Aleti and I will highlight the most commonly used options and best practices gained from our interaction with numerous customers and consultants. Hope you can join us next week.

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents the domain well, and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.
    1 - The model does not reflect the problem you are trying to solve. For example, you may be trying to solve the problem "What product should I recommend to this customer?" but your model learns on the problem "Given that a customer has acquired our products, what is the likelihood for each product?". In this case the model you built may be too distant a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the results from actual recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you use the [bad] proxy model until the recommendation model converges.
    2 - The data is not predictive enough. If the inputs are not correlated with the output, then the models may be unable to provide good predictions. For example, if the inputs are the phase of the moon and the weather, and the output is which car the customer bought, there may be no correlations found. In this case you should see a low-quality model. The solution in this case is to include more relevant inputs.
    3 - Not enough cases seen. If the training data does not include enough cases, at least 200 positive examples for each output, then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as the output, use the product category, price and brand name, and then combine these models.
    4 - Output leaking into the input, giving the false impression of a good-quality model. If the input data used in training includes values that have changed, or that are available only because the output happened, then you will find some strong correlations between the input and the output, but these strong correlations do not reflect the data that you will have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn from data where this variable has already been set, you will probably see a big correlation between having a zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input, or make sure they reflect the value as of the time of decision and not after the result is known.
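    To make the leakage case (number 4) concrete, here is a small, product-agnostic C# sketch, not Oracle RTD code, that measures the correlation between a suspect input and a 0/1 outcome on historical rows. The data is invented for illustration; the point is that a correlation at or near 1.0 for something like DaysSinceRegistration is the tell-tale sign that the input is only populated after the outcome is known.

    ```csharp
    using System;
    using System.Linq;

    class LeakageCheck
    {
        // Pearson correlation between two equal-length numeric columns.
        static double Correlation(double[] x, double[] y)
        {
            double mx = x.Average(), my = y.Average();
            double cov = x.Zip(y, (a, b) => (a - mx) * (b - my)).Sum();
            double sx = Math.Sqrt(x.Sum(a => (a - mx) * (a - mx)));
            double sy = Math.Sqrt(y.Sum(b => (b - my) * (b - my)));
            return cov / (sx * sy);
        }

        static void Main()
        {
            // Hypothetical training rows: DaysSinceRegistration is only ever set
            // after a successful registration, so "is it set?" mirrors the outcome.
            double[] daysSinceRegistration = { 0, 3, 0, 1, 0, 7, 2, 0 };
            double[] registered            = { 0, 1, 0, 1, 0, 1, 1, 0 };

            double[] inputIsSet = daysSinceRegistration.Select(d => d > 0 ? 1.0 : 0.0).ToArray();
            Console.WriteLine("Correlation with outcome: {0:F2}", Correlation(inputIsSet, registered));
            // Prints 1.00: a strong hint that the variable leaks the result and should
            // be removed or recomputed as of decision time.
        }
    }
    ```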

    Read the article

  • How can I thoroughly evaluate a prospective employer?

    - by glenviewjeff
    We hear much about code smells, test smells, and even project smells, but I have heard no discussion about employer "smells" outside of the Joel Test. After much frustration working for employers with a bouquet of unpleasant corporate-culture odors, I believe it's time for me to actively seek a more mature development environment. I've started assembling a list of questions to help vet employers by identifying issues during a job interview, and am looking for additional ideas. I suppose this list could easily be modified by an employer to vet an employee as well, but please answer from the interviewee's perspective. I think it would be important to ask many of these questions of multiple people to find out whether consistent answers are given. For the most part, I tried to put the questions in each section in the order they could be asked; an undesired answer to an early question will often make follow-ups moot.
    Values: What constitutes "well-written" software? What attributes does a good developer have? Same question for a manager.
    Process: Do you have a development process? How rigorously do you follow it? How do you decide how much process to apply to each project? Describe a typical project lifecycle. Ask the following if they don't come up otherwise: Waterfall/iterative: how much time is spent on upfront requirements gathering? On upfront design?
    Testing: Who develops tests (developers or separate test engineers)? When are they developed? When are the tests executed? How long do they take to execute? What makes a good test? How do you know you've tested enough? What percentage of code is tested?
    Review: What is the review process like? What percentage of code is reviewed? Of design? How frequently can I expect to participate as a code/design reviewer/reviewee? What are the criteria applied to reviews, and where do the criteria come from?
    Improvement: What new tools and techniques have you evaluated or deployed in the past year? What training courses have your employees been given in the past year? What will I be doing for the first six months in your company (hinting at what kind of organized mentorship/training has been thought through, if any)? What changes to your development process have been made in the past year? How do you improve and learn from your mistakes as an organization? What was your organization's biggest mistake in the past year, and how was it addressed? What feedback have you given to management lately? Was it implemented? If not, why not? How does your company use "best practices"? How do you seek them out from the outside or within, and how do you share them with each other?
    Ethics: Tell me about an ethical problem you or your employees experienced recently and how it was resolved. Do you use open-source software? What open-source contributions have you made?
    Follow-Ups: I liked what @jim-leonardo said on this Stack Overflow question: really a thing to ask yourself is, "Does this person seem like they are trying to recruit me and make me interested?" I think this is one of the most important bits. If they seem to be taking the attitude that the only one being interviewed is you, then they probably will treat you poorly. Good interviewers understand they have to sell the position as much as the candidate needs to sell themselves. @SethP added: Glassdoor.com is a good web site for researching potential employers. It contains information about how specific companies conduct interviews...

    Read the article

  • How to convince my boss that quality is a good thing to have in code?

    - by Kristof Claes
    My boss came to me today to ask me if we could implement a certain feature in 1.5 days. I had a look at it and told him that 2 to 3 days would be more realistic. He then asked me: "And what if we do it quick and dirty?" I asked him to explain what he meant by "quick and dirty". It turns out he wants us to write code as quickly as humanly possible by (for example) copying bits and pieces from other projects, putting all code in the code-behind of the WebForms pages, not caring about DRY and SOLID, and assuming that the code and functionality will never ever have to be modified or changed. What's even worse, he doesn't want us to do it for just this one feature, but for all the code we write. We can make more profit when we do things quick and dirty. Clients don't want to pay for you taking into account that something might change in the future. The profit for us is in delivering code as quickly as possible. As long as the application does what it needs to do, the quality of the code doesn't matter; they never see the code. I have tried to convince him that this is a bad way to think as the manager of a software company, but he just wouldn't listen to my arguments: Developer motivation: I explained that it is hard to keep developers motivated when they are constantly under the pressure of unrealistic deadlines and budgets to write sloppy code very quickly. Readability: when a project gets passed on to another developer, cleaner and better-structured code will be easier to read and understand. Maintainability: it is easier, safer and less time consuming to adapt, extend or change well-written code. Testability: it is usually easier to test and find bugs in clean code. My co-workers are as baffled as I am by my boss's standpoint, but we can't seem to get through to him. He keeps on saying that by doing things more quickly we can sell more projects, ask a lower price for them and still make a bigger profit. And in the end these projects pay the developers' salaries. What more can I say to make him see he is wrong? I want to buy him copies of Peopleware and The Mythical Man-Month, but I have a feeling they won't change his mind either. A lot of you will probably say something like "Run! Get out of there now!" or "I'd quit!", but that's not really an option since .NET web development jobs are rather rare in the region where I live...

    Read the article

  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though it was the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you've answered "Yes" to any of the above questions then you've suffered from 'drift'. Database drift is where the state of a database (particularly its schema) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it's time to release a new version of the database. A deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update. A big issue with drift is that it can be hard to spot and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you'll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process? Red Gate's Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We've started to get a really good idea of how big a problem it can be and what database professionals need to know and do in order to deal with it. It's fair to say we're pretty excited at the prospect of creating a tool that will really help, and we've had some great feedback on our initial ideas. We're now well underway with the development of our new drift-spotting product, SQL Lighthouse, and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you'd like help solving and you are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list.
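    Until a dedicated drift tool is in place, a very rough smoke test can be scripted by hand: fingerprint the schema in each environment and compare. A hedged C# sketch (the connection strings are placeholders, and this only looks at column-level metadata, nothing like the provenance tracking described above):

    ```csharp
    using System;
    using System.Data.SqlClient;

    class DriftSmokeTest
    {
        // Coarse fingerprint of a database schema: every column's table, name, type and position.
        // CHECKSUM_AGG is order-insensitive, so identical schemas produce identical values.
        static int GetSchemaFingerprint(string connectionString)
        {
            const string sql = @"
                SELECT CHECKSUM_AGG(CHECKSUM(TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME,
                                             DATA_TYPE, ORDINAL_POSITION))
                FROM INFORMATION_SCHEMA.COLUMNS;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                return Convert.ToInt32(cmd.ExecuteScalar());
            }
        }

        static void Main()
        {
            // Placeholder connection strings; point these at your real environments.
            string staging    = "Server=staging;Database=AppDb;Integrated Security=true";
            string production = "Server=production;Database=AppDb;Integrated Security=true";

            if (GetSchemaFingerprint(staging) != GetSchemaFingerprint(production))
                Console.WriteLine("Schemas differ: something has drifted since the last deployment.");
            else
                Console.WriteLine("No drift detected at the column level.");
        }
    }
    ```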

    Read the article

  • PayPal India Problems Continues

    - by Ravish
    The Reserve Bank of India has been giving a hard time to PayPal and its users in India. RBI has previously blocked PayPal transactions in India a few times, and it has made it difficult to withdraw payments by enforcing export and forex-related compliance. Here is yet another piece of bad news for Indian PayPal users. With effect from March 1st, Indian users cannot receive payments of more than $500 per transaction into their PayPal account. Moreover, you cannot keep or use any funds in your PayPal account: you may not use your PayPal balance to pay for goods or services, and you must withdraw it to your bank account within 7 days of receipt. These changes have rendered PayPal almost useless for small businesses, webmasters and publishers. Most webmasters and publishers rely on PayPal to receive payments from advertisers and clients. It has also made it impossible to buy anything online with PayPal. Sending payments abroad via other channels is already a pain: sending a bank wire requires too many formalities, too much documentation and too much time. Moreover, you are even required to deduct TDS on payments you make for any products or services. The restrictions take effect on March 1st, so you have 30 days to complete any pending transactions you may have. This step by RBI is yet another gimmick by a corrupt Indian government to make life difficult for entrepreneurs, kill innovation, slap on more taxes and create more channels for bribes. The following is the notification from PayPal about this issue: As part of our commitment to provide a high level of customer service, we would like to give you a 30-day advance notice on changes to our user agreement for India. With effect from 1 March 2011, you are required to comply with the requirements set out in the notification of the Reserve Bank of India governing the processing and settlement of export-related receipts facilitated by online payment gateways ("RBI Guidelines"). In order to comply with the RBI Guidelines, our user agreement in India will be amended for the following services as follows: Any balance in and all future payments into your PayPal account may not be used to buy goods or services and must be transferred to your bank account in India within 7 days from the receipt of confirmation from the buyer in respect of the goods or services; and Export-related payments for goods and services into your PayPal account may not exceed US$500 per transaction. We seek your understanding as we continue to employ our best efforts to comply with the RBI Guidelines in a timely manner.

    Read the article

  • Best Ati Radeon x1200 drivers for 12.10

    - by Jaclyn
    [Long story short is at the bottom if you don't care about my ranting] Ok, well, I have the unfortunate distinction of having an Acer Extensa 4420 (yes, the model with the faulty motherboard, and no, I do not know how it is still working either). Long story short, I need the best drivers for my ATI Radeon X1200 integrated graphics card. The ATI proprietary driver for 12.10 no longer supports my video card. Currently when I try to play Minecraft, or any game really, the framerate is quite terrible despite the fact that my handy dandy system load monitor says that I have plenty of memory and CPU power (mind you, this is when I'm not playing Minecraft; that kind of uses up all of my resources unless it's on the worst graphical settings, and even then I have terrible framerate, but plenty of resources left over). I tried fglrx through a workaround guide, and it completely killed my display and I had to uninstall it. I'm considering just trying to install fglrx through Synaptic, but I am hesitant to do so since I don't want a repeat of the BSBDD!!! (Black Screen of Bad Drivers 'o DOOOM!!!), so I will wait until after I get some input from you fine ladies and gentlemen on what your advice is. Ok, so I'm running Xubuntu 12.10 64-bit. I upgraded from 10.10 to 11.04. Then about a month ago I went from Natty to Quantal via the update option in the package updater, and then I decided that I was sick of GNOME so I installed Xfce. I did not install new drivers from Natty to when I wound up at 12.10, so that shouldn't have changed, and they were indeed quite terrible back then, but now I'm using Xubuntu as my main OS, so I actually need good drivers. Uuuugh. Anyway, long story short: I need to know what the best drivers available for my card are, and how to install them, because I am a Linux novice, and I have tried everything that I can think of, including searching Google. Small edit: I forgot to note that since I upgraded to 12.10, my VGA out does not work (I haven't had a chance to try the S-Video yet), and possibly related, the USB port on that side of my laptop does not work anymore either.

    Read the article

  • Why learn Flash Builder 4 (Flex) when I can just use Flash Professional?

    - by Jason McKenna
    I want to learn Flash Builder 4 (Flex) because I see sooo many jobs requesting experience with it. I also just like knowing stuff. I am also very interested in focusing on RIA development now. BUT... can anyone tell me CLEARLY why the heck I would ever use Flex over Flash Pro?? It is a time investment, so is it worth it? All I read are misguided posts about how Flash Pro is for games and banner ads, and Flex is for programmers and RIAs, blah blah... this simply isn't so from my 9 years of contracting experience. I'm 99.9% certain that I can build anything a Flex developer can build, but using Flash Pro. I can build powerful AS3-driven apps for the desktop, mobile device, or browser, I can link to databases with XML, and I can import text files and communicate with ColdFusion and everything. The advantage with Flash Pro is that I can also easily and clearly animate transitions and build custom elements that look the way I want/need them to look for my specific client. Why would I want to use a bunch of pre-built components that drive my file sizes to the moon?? Who is happy with a drag-n-drop button?? Is Flex just a thing made for programmer people with no artistic inclination? What is the advantage of using it?? It takes me back to Visual Basic class. It seems like a pain to have to use multiple tools to import crap from Flash Pro into Flex and yada yada... why, when I can do it all nicely in Flash Pro to begin with? Am I clueless, or am I missing some major piece of the puzzle? Thanks for any clarity. PS, I couldn't care less about the code editors. It ain't that bad, people. They make it out like the thing doesn't even respond to keyboard input or something. It does everything I need it to do anyway. Please help out here. If I just don't need to learn it, I don't want to waste the time. Jase

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    - Software Architecture for High Availability in the Cloud | Brian Jimerson: Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.
    - SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount: Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well represented in the session schedule. And now you can save on registration with a special Oracle discount code.
    - Progress 4GL and DB to Oracle and cloud | Tom Laszewski: "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle."
    - Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget: TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.
    - Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache | Ricardo Ferreira: Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.
    - WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena: The latest chapter in Oracle ACE Yannick Ongena's ongoing series.
    - How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai: Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.
    - Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't: This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.
    - Worst Practices for Big Data | Dain Hansen: Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.
    - Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald: "The online conference will include seminars, hands-on lab and live chats with our technical staff including me!" says Grant Ronald. "And the best bit, it doesn't cost you a single penny. It's free and available right on your desktop."
    - Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch: Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.
    - Amazon Web Services (AWS) Autoscaling | Frank Munz: "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.
    - Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin: "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.
    - How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable: Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.
    Thought for the Day: "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier (Source: SoftwareQuotes.com)

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn't want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite. Today's post could fall into the category of, "Hey, as long as you have that fancy tool…" Unfortunately, it also falls into the category of an overloaded worker taking someone else's word for the truth, not verifying it with independent fact-checking, and then making decisions based on that. Here's my story…
    I'm not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it). I came to this company five years ago as a lead application developer with extensive experience in database design and development. I worked my way into management, and along the way, took over the DBA responsibilities. Fortunately, our systems run pretty smoothly most of the time, but I'm always looking for ways to make them better and to fit into my understanding of best practices. When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances. Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter. Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks. He stated that they did, and I thought that all was well.
    Then one day, I'm poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section. The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic. But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS. I figured that I must be reading it wrong so I scanned the Help file, but that just seemed to confirm my interpretation. Then I thought, "there must be something wrong with the demo version of the software! This can't be right!" But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth!
    I was stunned! I quickly went through the grieving process…denial…anger…reconciliation. Here was something that I thought was such a basic truth that was turned upside down. OK, granted, this wasn't disastrous. Our databases didn't suddenly grind to a halt. I didn't get calls late at night inquiring about the sudden downturn in performance. But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify! Yes, before someone else points it out, I know that there are "free" disk management tools built in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that, but the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but you actually never verified?
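    As the post says, the check itself needs no paid tool. A hedged C# sketch using the built-in WMI classes (add a reference to System.Management; a hardware RAID array shows up here as a single disk) that simply counts physical disks against local drive letters; three drive letters on two spindles is exactly the surprise described above:

    ```csharp
    using System;
    using System.Management;   // reference System.Management.dll

    class DiskLayoutCheck
    {
        static void Main()
        {
            var physical = new ManagementObjectSearcher("SELECT DeviceID, Model FROM Win32_DiskDrive").Get();
            var logical  = new ManagementObjectSearcher("SELECT DeviceID FROM Win32_LogicalDisk WHERE DriveType = 3").Get();

            Console.WriteLine("Physical disks (or arrays): {0}", physical.Count);
            foreach (ManagementObject disk in physical)
                Console.WriteLine("  {0}  {1}", disk["DeviceID"], disk["Model"]);

            Console.WriteLine("Local drive letters: {0}", logical.Count);
            // If drive letters outnumber physical disks, some of those "separate"
            // data and log volumes are sharing spindles.
        }
    }
    ```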

    Read the article

  • SharePoint 2013 Developer Ramp-Up - Part 1

    Today I had a little spare time during the morning hours and I decided that after checking MVA that I'm going to query the available course material over at Pluralsight. Wow, thanks to fantastic corporations and acquisitions there are lots of courses available. Nicely split by SharePoint version as well as particular interest group. Additionally, I found a couple of online blogs and community sites that I'm going to visit regularly during the next couple of weeks. Today's resource(s) Of course, I'm "all in" for the latest developer resources: SharePoint 2013 Developer Ramp-Up - Part 1 - Understanding the Platform and Developer Experience SharePoint 2013 Developer Ramp-Up - Part 2 SharePoint 2013 Developer Ramp-Up - Part 3 SharePoint 2013 Developer Ramp-Up - Part 4 SharePoint 2013 Developer Ramp-Up - Part 5 SharePoint 2013 Developer Ramp-Up - Part 6 I guess, I'm going to stick to the Pluralsight library until the end of this week. We'll see... Anyway, apart from the video material I came across a couple of other websites which I'd like to list here, too. That's mainly for personal reference instead of bookmarking in the browser, I'll use my own blog for that purpose. Atkinson's SharePoint Blog Düsseldorfer Jung Doerflers SharePoint Blog SharePoint Community Absolute SharePoint The links are in no preferential order and I added them as soon as I found them. Most probably, I'm going to report about specific articles from those resources during this challenge. So, stay tuned and I try to provide more details on certain topics. Takeaway First contact with the 'real stuff' in order to get an idea about software development in Microsoft SharePoint and beyond. Unfortunately and as already expected, the marketing department over at Microsoft seemed to have nothing better to do than to invent new names and baptise literally the same product with every release. Luckily, the release cycles between versions have been three years (roughly) - 2007, 2010, and 2013. Nonetheless, there will be a lot of version-specfic issues to tackle during this learning phase. Especially, when it's about historical expressions like 'WSS'* like I had it yesterday... It's going to be exciting and demanding to catch up with roughly 6-7 years of development and changes. Okay, let's face it. * WSS stands for Windows SharePoint Services 3.0 which forms the 'core engine' of SharePoint 2007. Part 1 of Andrew Connell's series on SharePoint 2013 for developers provides a brief history and overview of the various product names and their relation to the actual SharePoint version. I guess, I might create a cheat-sheet or something comparable in order to reduce the level of confusion while reading through other material: SharePoint 2007 (aka SharePoint v3 aka SharePoint 12) Windows SharePoint Services (WSS) 3.0 Microsoft Office SharePoint Server (MOSS) 2007 .NET Framework 3.0, 32-bit or 64-bit OS SharePoint 2010 (aka SharePoint v4 aka SharePoint 14) Microsoft SharePoint Foundation (SPF) 2010 Microsoft SharePoint Server (SPS) 2010 .NET Framework 3.5 SP1, 64-bit OS only SharePoint 2013 Microsoft SharePoint Foundation (SPF) 2013 Microsoft SharePoint Server (SPS) 2013 .NET Framework 4.5, 64-bit OS only After this quick excursion it is getting more interesting. SharePoint 2013 has a number of Development Practices and Techniques under the hood, and it will be quite a decision process depending on the task requirements to choose the correct path to go. 
At the moment, the following two options seem to be my future fields of operation:
Client-Side Object Model (CSOM)
REST API and OData syntax
As part of my job assignment, I see myself developing within Visual Studio 2012/2013. Most probably the client development in C# will use CSOM, but of course I'll keep an eye on the REST API, too. JavaScript has had quite a lot of momentum for a while now, and it would be a shame to ignore that kind of opportunity and those possibilities.
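    To get a first feel for the REST/OData option, here is a minimal sketch that asks the SharePoint 2013 REST endpoint for the lists in a site. It is not taken from the course material; it uses Python purely for brevity (the same call works from C# or JavaScript), and the site URL and the auth object are placeholders you would replace with your own environment and authentication method.

        # Minimal sketch: query the SharePoint 2013 REST API for all lists in a site.
        # SITE_URL and `auth` are assumptions/placeholders, not real values.
        import requests

        SITE_URL = "https://intranet.example.com/sites/dev"  # hypothetical site

        def get_list_titles(auth):
            # /_api/web/lists returns the lists of the current web; with
            # odata=verbose the payload is wrapped in d/results.
            response = requests.get(
                SITE_URL + "/_api/web/lists",
                headers={"Accept": "application/json;odata=verbose"},
                auth=auth,
            )
            response.raise_for_status()
            return [item["Title"] for item in response.json()["d"]["results"]]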

    Read the article

  • Increasing deadlocks with NoLock

    - by Dave Ballantyne
    One of my personal pet issues is the inappropriate use of the NOLOCK hint (and read uncommitted). Don’t get me wrong, I have used it in exceptional circumstances, but as a general statement it is a bad thing. Mostly, when NOLOCK is used, the discussion is around a single statement: “it runs faster with nolock for XYZ reason”. However, IMO, this is quite a short-sighted view. What about the transaction? What about other concurrent users? What is good for one statement in isolation is not necessarily good for the system as a whole. I have seen on a number of occasions deadlocks happen when tasks that would have (and should have) been blocked continue to execute, only for a deadlock to occur at a later data-writing (INSERT, UPDATE, DELETE) statement. Writers will block writers regardless of isolation level. By way of a (fairly contrived) example, let’s generate some dummy tables and populate them with some data:
drop table a
go
drop table b
go
Create Table a ( col1 integer )
go
insert into a values(1)
insert into a values(2)
go
Create Table b ( col1 integer )
go
insert into b values(1)
insert into b values(2)
go
Now make two connections. In connection one execute:
set transaction isolation level read committed
BEGIN TRAN
Select * from a
Select * from b
delete from a
In connection two execute:
set transaction isolation level read committed
BEGIN TRAN
Select * from a
Select * from b
delete from b
Right now the ‘select from a’ in connection two is being blocked by the ‘delete from a’ in connection one. This is, IMO, quite a healthy and natural thing to be happening; some see this as a ‘slow down’, a drop in performance. So, let’s reach for our ‘NOLOCK’ magic pill. Cancel the blocked query and ROLLBACK both transactions, then in connection one execute:
set transaction isolation level read uncommitted
BEGIN TRAN
Select * from a
Select * from b
delete from b
and then in connection two execute:
set transaction isolation level read uncommitted
BEGIN TRAN
Select * from a
Select * from b
delete from a
We have now solved our performance problem: no more blocking. Let’s finish the work required by the transaction. In connection one, execute:
delete from a
Oh, ‘performance problem’ again, it’s now being blocked. Still, let’s complete the work in connection two…
delete from b
DEADLOCK!! It is important to be clear about the role of the select statements. They do not participate within the deadlock, but they are preventing code from executing that would have. Additionally, without the select readers to block, a deadlock would occur on the deletes even with READ COMMITTED. Naturally, other isolation levels will exhibit different behaviour as to where and when they will and won’t block, and I would encourage you to read BOL and satisfy yourself that you really do NEED to NOLOCK.

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is the ability to support multi-threading. The multi-threading support allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads. This means each thread has fine-grained control over a segment of the total data volume at any time. The idea behind the threading is based upon the notion that "many hands make light work". Each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of the number of records across the threads and to minimize resource and lock contention. The best way to visualize the concept of threading is to use a "pie" analogy. Imagine the total workset for a batch job is a "pie". If you split that pie into equal-sized segments, each segment would represent an individual thread. The concept of threading has advantages and disadvantages:
Smaller elapsed runtimes - Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, jobs with threading will finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime.
Threads can be managed individually – Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X then that is the only thread that needs to be resubmitted.
Threading can be somewhat dynamic – The number of threads that are run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted.
Failure risk due to data issues with threading is reduced – As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads.
Number of threads is not infinite – As with any resource there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time.
Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtime for the threads should all be the same. In other words, when executing in multiple threads, theoretically all the threads should finish at the same time. Whilst this is possible, it is also possible that individual threads may take longer than other threads, for the following reasons:
Workloads within the threads are not always the same - Whilst each thread is operating on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate which requires more processing, or a meter may have a complex amount of configuration to process. If a thread has a higher proportion of objects with complex processing, it will take longer than a thread with simple processing. The amount of processing is dependent on the configuration of the individual data for the job.
Data may be skewed – Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, there are some jobs that use additional factors to select records for processing. If any of those factors exhibit any data skew then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule, then threads in that schedule may finish later than other threads.
Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1) available from My Oracle Support.
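    To make the "pie" analogy concrete, here is a small illustrative sketch of carving a set of keys into equal segments and letting a pool of threads work on one segment each. It is written in Python purely for illustration (the framework does this internally and allocates work by object identifier), and the process_segment function is a hypothetical stand-in for the per-record work of a real batch thread.

        # Illustrative "pie" sketch: split the keys into equal slices, one per thread.
        # The names and the per-record work are assumptions, not framework APIs.
        from concurrent.futures import ThreadPoolExecutor

        def process_segment(keys):
            # Placeholder for whatever a real batch thread would do per record.
            return len(keys)

        def run_threaded(all_keys, thread_count):
            # Carve the total workload into thread_count roughly equal segments.
            size = (len(all_keys) + thread_count - 1) // thread_count
            segments = [all_keys[i:i + size] for i in range(0, len(all_keys), size)]
            with ThreadPoolExecutor(max_workers=thread_count) as pool:
                return list(pool.map(process_segment, segments))

        # Example: run_threaded(list(range(100000)), 8) processes eight slices in parallel.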

    Read the article

  • OpenWorld Day 1

    - by Antony Reynolds
    A Day in the Life of an OpenWorld Attendee, Part I
Lots of people are blogging insightfully about OpenWorld, so I thought I would provide some non-insightful remarks to buck the trend! With 50,000 attendees I didn’t expect to bump into too many people I knew; boy, was I wrong! I walked into the registration area and immediately was hailed by a couple of customers I had worked with a few months ago. Moving to the employee registration area in a different hall, I bumped into a colleague from the UK who was also registering. As soon as I got my badge I bumped into a friend from Ireland! So maybe OpenWorld isn’t so big after all!
First port of call was Larry’s keynote. As always, Larry was provocative and thought-provoking. His key points were announcing the Oracle cloud offering in IaaS, PaaS and SaaS, pointing out that Fusion Apps are cloud-enabled, and finally announcing the 12c Database, making a big play of its new multi-tenancy features. His contention was that multi-tenancy will simplify cloud development and provide better security by providing DB-level isolation for applications and customers.
Next day, Monday, was my first full day at OpenWorld. The first session I attended was on monitoring of OSB, a very interesting presentation on the benefits achieved by an Illinois area telco – US Cellular. It was a great discussion of why they bought the SOA Management Packs and the benefits they are already seeing from their investment in terms of improved provisioning and time to market, as well as better performance insight and assistance with capacity planning. Craig Blitz provided a nice walkthrough of where Coherence has been and where it is going.
Last night I attended the BOF on Managed File Transfer, where Dave Berry replayed Oracle’s thoughts on providing dedicated Managed File Transfer as part of the 12c SOA release. Dave laid out the perceived requirements and solicited feedback from the audience on what, if anything, was missing. He also demoed an early version of the functionality that would simplify setting up MFT in SOA Suite and make tracking activity much easier.
So much for Day 1. I also ran into scores of old friends and colleagues and had a pleasant dinner with my friend from Ireland, where I caught up on the latest news from Oracle UK. Not bad for Day 1!

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say that in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce bad precision and banding issues. Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB rendertarget is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically. But how does this work with multiple rendertargets? I only want to apply these corrections to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. Ok, so I just don't apply SRGBTexture = true; to my non-color textures, but when using SRGBWriteEnable = true; I'll do a gamma correction to all the values I write out to my rendertargets, no matter what I actually store there. I found some info on gamma over at Microsoft (http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx): "For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written." If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it isn't; it is applied to all rendertargets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have any negative impact on color correctness, e.g. if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion... Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know, I DO need it, because otherwise the texture filtering and mip sampling happen in sRGB space. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
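    For reference, the manual correction discussed above comes down to the standard piecewise sRGB transfer functions. The sketch below is plain Python, only to show the math; in practice the same per-channel formulas would go into the HLSL pixel shader for the color output. This is an illustrative sketch, not the original author's code.

        # Standard per-channel sRGB <-> linear conversions (values in the 0..1 range).
        # This is the math you would apply manually when SRGBWriteEnable cannot be
        # limited to the color render target alone.
        def srgb_to_linear(c):
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

        def linear_to_srgb(l):
            return 12.92 * l if l <= 0.0031308 else 1.055 * (l ** (1.0 / 2.4)) - 0.055

        # e.g. round-tripping a mid-grey: linear_to_srgb(srgb_to_linear(0.5)) ~= 0.5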

    Read the article

  • Social Analytics and the Customer

    - by David Dorf
    Many successful retailers put the customer at the center of everything they do, so it’s important that the customer is modeled correctly across all their systems. The path to omni-channel starts and ends with the customer, so at ARTS, our next big project is focused on ensuring a consistent representation of customers across our transactional data model, data warehouse model, and XML schemas. Further, we’ve started a new whitepaper that describes how Big Data and Social Media Analytics should be leveraged by retailers to add an additional level of customer insight.
Let’s start by taking a closer look at the meaning of social analytics. Here’s my definition: Social Analytics, in the retail context, describes the analysis of data obtained from social media sources in an effort to better comprehend and interact with the community of consumers. This discipline seeks to understand what’s being said by the community about brands and products (“monitoring”), as well as understand the behaviors of those in the community (“profiling”). The results are used to reinforce the brand image, improve product decisions, and better focus marketing, all of which lead to increased sales. To help illustrate the facets of social analytics, I drew the diagram below, which was originally published by Retail Touchpoints.
There are lots of tools on the market that allow retailers to monitor social media for brand and product mentions. These include analysis of sentiment, reach, share of voice, engagement, etc. When your brand is mentioned, good or bad, it’s an opportunity to engage with the customer and possibly lead to a sale. Because products are not always unique, it’s much more difficult to monitor product mentions, but detecting product trends early can help a retailer make better merchandising decisions, especially in fashion.
Once a retailer understands what’s being said, the next step is to learn more about who’s saying it. That involves profiling customers beyond simple demographics to understand their motivations. Much can be learned from patterns, and even more when customers voluntarily share their data. Knowing that a customer is passionate about, for example, mountain biking allows the retailer to make relevant offers on helmets, ask for opinions on hydration, and help spread marketing messages.
Social analytics has many facets that benefit retailers, some of which are easy but many of which are hard. It’s important for the CMO and CIO to work closely together to plan for these capabilities and monitor the maturity of tools on the market. This is an area that will separate winners from losers.

    Read the article

  • Where can you find the Oracle Applications User Experience team in the next several months?

    - by mvaughan
    By Misha Vaughan, Applications User Experience
November is one of my favorite times of year at Oracle. The blast of OpenWorld work is over, and it’s time to get down to business and start taking our messages and our work on the road out to the user groups. We’re in the middle of planning all of that right now, so we decided to provide a snapshot of where you can see us and hear about the Oracle Applications User Experience – whether it’s Fusion Applications, PeopleSoft, or what we’re planning for the next generation of Oracle Applications.
On the road with Apps UX...
In December, you can find us at UKOUG 2012 in Birmingham, UK:
UKOUG, UK Oracle User Group Conference 2012, December 3 – 5, 2012, ICC, Birmingham, UK
In March, we will be at Alliance 2013 in Indianapolis, and our fingers are crossed for OBUG Connect 2013 in Antwerp:
Alliance 2013, March 17 - 20, 2013, Indianapolis, Indiana
OBUG Benelux Connect 2013, March 26, 2013, Antwerp, Belgium
In April, you will see us at COLLABORATE13 in Denver:
Collaborate13, April 7 - April 11, 2013, Denver, Colorado
And in June, we round out the kick-off to summer at OHUG 2013 in Dallas and Kscope13 in New Orleans:
OHUG 2013, June 9 - 13, 2013, Dallas, Texas
ODTUG Kscope13, June 23 - 27, 2013, New Orleans, LA
The Labs & Demos
As always, a hallmark of our team is our mobile usability labs. If you haven’t seen them, they are a great way for customers and partners to get a peek at what Oracle is working on next, and a chance for you to provide your candid perspective. Based on the interest and enthusiasm from customers last year at Collaborate, we are adding more demo stations to our user group presence in the year ahead. If you want to see some of the work we are doing first-hand but don’t have a lot of time, the demo stations are a great way to get a quick update on the latest wow-factor we are researching. I can promise that you will see whatever we think is new and interesting at the demo stations first.
Oracle OpenWorld 2012 Apps UX Demostation
For Applications Developers
More and more, I get asked the question, “How do I build an application that looks like Fusion?” My answer is Fusion Applications Design Patterns. You can find out more about how Fusion Applications developers can leverage ADF and the user experience best practices we developed for Fusion at sessions led by Ultan O’Broin, Director of Global User Experience, in the year ahead.
Ultan O'Broin, On Fusion Design Patterns
Building mobile applications is also top of mind these days. If you want to understand how Oracle is approaching this strategy, check out our session on mobile user experience design patterns with Mobile ADF. In many cases, this will be presented by Lynn Rampoldi-Hnilo, Senior Manager of Mobile User Experiences, and in a few cases our ever-ready traveler Ultan O’Broin will be on deck.
Lynn Rampoldi-Hnilo, on Mobile User Experience Design Patterns
Applications User Experiences
Fusion Applications continues to evolve, and you will see the new face of Fusion Applications at our executive sessions in the year ahead, which are led by vice president Jeremy Ashley or a hand-picked presenter, such as one of our Fusion User Experience Advocates.
Edward Roske, CEO InterRel Consulting & Fusion User Experience Advocate
As always, our strategy is to take our lessons learned and spread them across the Applications product lines. A great example is the enhancements coming in the PeopleSoft user experience, which you can hear about from Harris Kravatz, Senior Manager, PeopleSoft User Experience.
Fusion Applications Extensibility
We can’t talk about Fusion Applications without talking about how to make it look like your business. If tailoring Fusion Applications is a question in your mind, and it should be, you should hit one of these sessions. These sessions will be led by our own Killian Evers, Senior Director, Tim Dubois, User Experience Architect, and some well-trained Fusion User Experience Advocates.
Find out more
If you want to stay on top of where and when we will be, you can always sign up for our newsletter or check out the events page of usableapps.

    Read the article
