Search Results

Search found 3508 results on 141 pages for 'face detection'.


  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third-party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted, building a library of code to keep track of key metrics of your server's performance. There have been several submissions already, but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of T-SQL and the SQL Monitor framework*.

    What's it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199.

    How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it, and submit it via sqlmonitormetrics.red-gate.com before 14th Sept 2012. Then sit back and wait while the judges review your code and your aims in writing the metric.

    Who are the judges and how will they judge the metrics? There are two judges for this competition: Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc.) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I'm here by good fortune). We will be looking to rate the metrics on each of 3 criteria: how the metric can help with performance tuning SQL Server; how having the metric running enables DBAs to meet best practice; and how interesting/original the idea for the metric is. Our combined decision will be final etc. etc.**

    What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You'll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like.

    How long does it take? Honestly? Once you have the T-SQL sorted then, so long as you can type your name and your email address, you are done: http://sqlmonitormetrics.red-gate.com/share-a-metric/

    What can I monitor? If you really, really want a Kindle or $199 (and let's face it, who doesn't?) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me to start the library off. There are some great pieces of T-SQL in those metrics, gathering important stats about how SQL Server is performing.

    * – framework may not be the best word here but I was under pressure and couldn't think of a better one. If you prefer, try 'engine' or 'application'? I don't know, pick something that makes sense to you.

    ** – for the full (legal) version of the rules check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified.

    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
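    To give a feel for the shape of an entry before you start: a SQL Monitor custom metric is, at heart, a T-SQL query that returns a single numeric value for the tool to collect on a schedule. As a minimal illustrative sketch of my own (an assumption about what a submission might look like, not one of the library entries), a metric counting currently blocked requests could be as short as this:

        -- Number of requests currently blocked by another session
        SELECT COUNT(*)
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;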

    Read the article

  • Floating Panels and Describe Windows in Oracle SQL Developer

    - by thatjeffsmith
    One of the challenges I face as I try to share tips about our software is that I tend to assume there are features that you just 'know about.' Either they're so intuitive that you MUST know about them, or it's a feature that I've been using for so long I forget that others may have never even seen it before. I want to cover two of those today:

    - Describe (DESC) – SHIFT+F4
    - Floating Panels

    My super-exciting desktop: SQL Developer and Describe. DESC or Describe is an Oracle SQL*Plus command. It shows what a table or view is composed of in terms of its column definition. Here's an example:

        SQL*Plus: Release 11.2.0.3.0 Production on Fri Sep 21 14:25:37 2012
        Copyright (c) 1982, 2011, Oracle. All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        SQL> desc beer;
         Name      Null?     Type
         --------- --------- --------------
         BREWERY   NOT NULL  VARCHAR2(100)
         CITY                VARCHAR2(100)
         STATE               VARCHAR2(100)
         COUNTRY             VARCHAR2(100)
         ID                  NUMBER

        SQL>

    You can get the same information – and a good bit more – in SQL Developer using the SQL Developer DESC command. You invoke it with SHIFT+F4. It will open a floating (non-modal!) window with the information you want: I can see my column definitions, constraints, stats, privs, etc. A few 'cool' things you should be aware of:

    - I can open as many as I want, and still work in my worksheet, browser, etc.
    - I can also DESC an index, user, or most any other database object.
    - I can of course move them off my primary desktop display.

    The DESC panels are read-only. I can't drop a constraint from within the DESC window of a given table. But for dragging columns into my worksheet, and checking out the stats for my objects as I query them, it's very, very handy.

    Try This Right Now. Type 'scott.emp' (or some other table you have), place your cursor on the text, and hit SHIFT+F4. You'll see the EMP object open. Now click into a column name in the columns page. Drag it into your worksheet. It will paste that column name into your query. This is an alternative for those that don't like our code insight feature or dragging columns off the connection tree (new for v3.2!). Got it?

    SQL Developer's Floating Panels. Ok, let's talk about a similar feature. Did you know that any dockable panel from the View menu can also be 'floated?' One of my favorite features is the SQL History. Every query I run is recorded, and I can recall them later without having to remember what I ran and when. And I USUALLY use the keyboard shortcuts for this. Let your trouble float away… if only it were so easy as a right-click in the real world. But sometimes I still want to see my recall list without having to give up my screen real estate. So I just right-click on the panel tab and select 'Float.' Then I move it over to my secondary display – see the poorly lit picture in the beginning of this post. And that's it. Simple, I know. But I thought you should know about these two things!
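    As an aside, if you ever need the same column details programmatically – outside SQL*Plus or SQL Developer – most of what DESC shows can be pulled from the standard data dictionary. A minimal sketch against the ALL_TAB_COLUMNS view (the table name here is just an example):

        -- Roughly the information DESC reports, straight from the dictionary
        SELECT column_name, nullable, data_type, data_length
        FROM all_tab_columns
        WHERE table_name = 'EMP'
        ORDER BY column_id;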

    Read the article

  • SQL SERVER – Speed Up! – Parallel Processes and Unparalleled Performance – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner and I will be presenting there in two different sessions. SQL Server performance tuning is a very challenging subject that requires expertise in both database administration and database development. I have always enjoyed talking about SQL Server performance tuning. Just like doctors, I like to call every attempt of mine to improve the performance of SQL Server queries and database servers a 'practice' too. I have been working with SQL Server for more than 8 years and I believe I have mastered many of the performance tuning concepts. However, performance tuning is not a simple subject: there are occasions when I feel stumped, occasions when I am not sure what the next step should be. When I face a situation where I cannot figure things out easily, it makes me most happy, because I clearly see it as a learning opportunity.

    I have been presenting at TechEd India for the last three years. This is my fourth opportunity to present a technical session on SQL Server. Just like every other year, I decided to present something different, something which I have spent years learning. This time, I am going to present about parallel processes. It is widely believed that more CPUs will improve the performance of the server. That is true in many cases. However, there are cases where limiting the CPU usage has improved the overall health of the server. I will be presenting on the subject of parallel processes and their effects. I have spent more than a year working on this subject alone. After working on various queries on multi-CPU systems, I have personally learned a few things. In the coming TechEd session, I am going to share my experience with parallel processes and performance tuning.

    Session Details
    Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)
    Abstract: "More CPU, More Performance" – a very common understanding is that the usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of the parallel processes. In this deep dive session, we will explore this complex subject with a very simple interactive demo. An attendee will walk away with a proper understanding of CXPACKET wait types, MAXDOP, the parallelism threshold and various other concepts.
    Date and Time: March 23, 2012, 12:15 to 13:15
    Location: Hotel Lalit Ashok – Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India. Add to Calendar

    Please submit your questions in the comments area and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is your gift, I commit right now – the SQL Server Interview Questions and Answers book.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: TechEd, TechEdIn
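    For anyone who wants to experiment before the session, the two levers mentioned in the abstract are easy to poke at yourself: the MAXDOP query hint caps how many CPUs a single query may use, and sys.dm_os_wait_stats shows the CXPACKET waits the instance has accumulated. A minimal sketch – the table name is a placeholder of mine, not part of the session demo:

        -- Cap this query at 4 parallel workers (dbo.LargeTable is hypothetical)
        SELECT COUNT(*) FROM dbo.LargeTable OPTION (MAXDOP 4);

        -- Accumulated parallelism-related waits since the last restart
        SELECT wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'CXPACKET';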

    Read the article

  • Vitality of Product Information Management Showcased at OpenWorld 2012

    - by Mala Narasimharajan
    By Sachin Patel

    Can you hear the countdown clock ticking? OpenWorld 2012 is almost here, and as I write this, Oracle is buzzing with fresh new ideas and solutions that will be showcased this year. What an exciting time for all of us to be in the midst of a digital revolution. Whether it is Apple fans clamoring to find every new feature that has been added to the iPhone 5, or a startup launching a new digital thermostat (has anyone looked at the new one from Nest?), product information is vital for companies to grow and compete in this cut-throat market. Customers today struggle to aggregate and enrich this product data from the myriad of systems they have in place to run their businesses and operations. Having a product information strategy is paramount to aligning your sales channels and operations with the most accurate and up-to-date product data. We have a number of sessions this year at OpenWorld where you can gain more insight into how Oracle's next generation of Fusion Applications, in this case Fusion Product Hub, can provide you with a solution to streamline and get control of your product master data.

    Enabling Trusted Enterprise Product Data with Oracle Fusion Product Hub
    Tuesday, October 2nd, 11:45 am, Moscone West 2022
    Join me, Sachin Patel, Director of Product Strategy, and Milan Bhatia, VP of Development, as we discuss how you can enable trusted product master data in your enterprise. In this session we plan to cover the challenges companies face today in mastering product data. The discussion will also include how Fusion Product Hub brings new and innovative features to empower your product data owners to create a holistic and rich product definition that can be leveraged across your enterprise. We will also be joined by Pawel Fidelus from Fideltronik, an Early Adopter for Fusion Product Hub, who will showcase their plans to implement Fusion Product Hub and the value it will bring to Fideltronik.

    Multichannel Fulfillment Excellence in Direct-to-Consumer Market
    Thursday, October 4th, 12:45 am, Moscone West 2024
    Do you have multiple order capture systems? Do you have difficulty fulfilling orders for your customers across various channels and suppliers? Mark Carson, Director, Fusion DOO, and Brad Kerr, Director, AGSS, will be showcasing the Fusion Distributed Order Orchestration solution and how companies can orchestrate orders from multiple order capture systems and route them to the appropriate fulfillment system. Sachin Patel, Director of Product Strategy for Product MDM, will highlight the business pain points in consolidating and commercializing data from a multichannel commerce point of view, and how Fusion Product Hub helps by allowing you to provide a single source of truth to drive a singular and rich customer experience.

    Oracle Fusion Supply Chain Management: Customer Adoption and Experiences
    Wednesday, October 3rd, 10:15 am, Moscone West 2003
    This is a great session to attend to learn how Fusion Supply Chain Management and Fusion Product Hub Early Adopters, including Boeing and Fideltronik, are leveraging Fusion Applications to improve their supply chain operations.

    Have a great OpenWorld and see you soon!

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP's streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

    For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.

    Application Input

    The stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache holding all credit card transactions. The cache is configured in the EPN as shown below:

        <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
        <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                     key-properties="cardID, transactionTime"
                     caching-system="CohCacheSystem">
        </wlevs:cache>

    Application Output

    The output that must be produced by the application is a fraud warning event. This event is configured in the Spring file as shown below. The source for the cardHistory property can be seen here.

        <wlevs:event-type type-name="FraudWarningEvent">
            <wlevs:properties type="tuple">
                <wlevs:property name="cardID" type="CHAR"/>
                <wlevs:property name="transactionTime" type="BIGINT"/>
                <wlevs:property name="transactionAmount" type="DOUBLE"/>
                <wlevs:property name="cardHistory" type="OBJECT"/>
            </wlevs:properties>
        </wlevs:event-type>

    Cache Data Aggregation using a Java Cartridge

    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence's query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the Spring context file as shown below:

        <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
            <property name="cache" ref="CCTransactionsCache"/>
        </bean>

    This is used by the Java class to set a static property:

        public void setCache(Map cache)
        {
            s_cache = (NamedCache) cache;
        }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more about using Coherence's API to query a cache, please refer to the Coherence Developer's Guide.

        // Note: the return type, method name and the ">" operators below were
        // garbled in this copy of the post and have been reconstructed from context.
        public static CardHistoryData createCreditHistoryData(String cardID)
        {
            ...
            Filter filter = QueryHelper.createFilter(
                "cardID = :cardID and transactionTime >= :transactionTime", map);
            CardHistoryData history = new CardHistoryData();
            Double sum = (Double) s_cache.aggregate(filter,
                new DoubleSum("getTransactionAmount"));
            history.setTotalAmount(sum);
            ...
            return history;
        }

    The Java cartridge method is used from CQL as seen below:

        select cardID, transactionTime, transactionAmount,
               CCTransactionsAggregator.execute(cardID) as cardHistory
        from inputChannel
        where transactionAmount > 1000

    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don’t Wait Much Longer

    - by Evelyn Neumayr
    By Kerrie Foy, Director PLM Product Marketing, Oracle

    Depending on severity, product compliance issues can cause various problems, from run-away budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you've been putting off a systematic approach to product compliance, it is time to reconsider that decision. Why now?

    No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access. For example, Restriction of Hazardous Substances (RoHS), a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment, was originally adopted by the European Union in 2003 for implementation in 2006, and has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in medical devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. More regulations are expected from organizations such as the Food and Drug Administration in the US and similar organizations elsewhere.

    Meeting compliance requirements while also successfully investing in eco-friendly designs can be a major challenge. It may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes. Without a single source of truth for product data, and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming. However, the risk to consumer goodwill and satisfaction, revenue, business continuity and market potential is too great not to solve the compliance challenge. Companies are beginning to adapt and thrive by implementing systematic approaches to product compliance that are more than functional bandages; they are revenue-generating engines.

    Consider working with Oracle to help you address your compliance needs. Many of the world's most innovative leaders and pioneers are leveraging Oracle's Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster. In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase. Agile PG&C is a comprehensive solution that makes product compliance per corporate initiatives and regulations more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, gain rapid visibility into non-compliance issues, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act.
Implementing an enterprise-wide systematic approach to product compliance is a competitive investment. From the start, Agile PG&C enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation.  Don’t wait any longer! To find out more about Agile Product Governance & Compliance download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738.

    Read the article

  • New Training and Support Center Coming Soon!

    - by Ruth
    The CRM On Demand Training and Support Center is getting a face lift. In May 2010 we will unveil the new and improved layout, look and feel, and even some new content. Some of you told us loud and clear that you wanted an easier way to find our training courses and other important information. Well, here you are: immediately you see the look and feel has changed and things have moved around a bit. You may ask, "How can I find the training catalog? Service requests? Downloads?" There are a few ways to find what you're looking for. You may use the search box to find training, quick guides, downloads, best practices, FAQs and more. You may also click the tabs or links in the blue bar, like Browse Training, to browse other documents and information. Here is a brief outline of the tabs and links that will help as you navigate this new tool:

    - The Support tab provides alerts and notifications specific to your application environment.
    - The Get Started tab is organized by role and contains links to resources aimed at helping you get the most out of your first 30 days with CRM On Demand.
    - The Learn More tab outlines information in key topic areas, like administration, integration, and reports. Go to this tab to get the resources you need to move beyond the basics.
    - The Release Information tab contains information specific to the current and upcoming releases of CRM On Demand. Access this tab to learn about and prepare for upgrades to your CRM On Demand application.
    - The Best Practices tab contains a compilation of knowledge gained by experts that work with CRM On Demand day in and day out. Access this knowledge to benefit from their vast experience.
    - The Communities tab offers connections to others in the CRM On Demand community through forums, communities, blogs, and more.
    - The Browse Training link opens the training catalog. Take a look at the instructor-led training, Webinars, quick guides, use cases, and tools available to you.
    - The Browse Knowledge link takes you to our knowledge base where you can get answers to frequently asked questions.
    - The Submit a Service Request link directs you to My Oracle Support where you can log a service request. The steps in that process have not changed.
    - The Web Services Library provides simple APIs and a link to Oracle Sample Code where you can get samples that can help you build custom integrations.
    - The Add-On Applications link allows access to our downloadable applications that allow you to extend the functionality of CRM On Demand.
    - The Templates and Tools link provides access to resources that can help you design and build CRM On Demand to meet your company's specific needs.

    A lot has changed and I know it is a lot to take in. To help you out, we have a printable quick guide that you can use during this transition. As always, let us know what you think: [email protected].

    Read the article

  • SQLUG Events - London/Edinburgh/Cardiff/Reading - Masterclass, NoSQL, TSQL Gotcha's, Replication, BI

    - by tonyrogerson
    We have acquired two additional tickets to attend the SQL Server Master Class with Paul Randal and Kimberly Tripp next Thurs (17th June). For a chance to win these coveted tickets, email us ([email protected]) before 9pm this Sunday with the subject "MasterClass" – people previously entered need not worry, you're still in with a chance. The winners will be announced Monday morning.

    As ever, plenty going on physically: we've got dates for a stack of events in Manchester and Leeds, and I'm looking at Birmingham if anybody has ideas. We are growing our online community with the Cuppa Corner section; to participate online, remember to use the #sqlfaq twitter tag. For those wanting to get more involved in presenting and fancy trying it out, we are always after people to do 1-5 minute SQL nuggets or Cuppa Corners (short presentations) at any of these User Group events – just email us at [email protected]. Want removing from this email list? Then just reply with "remove please" on the subject line.

    Kimberly Tripp and Paul Randal Master Class – Thurs, 17th June – London
    REGISTER NOW AND GET A SECOND REGISTRATION FREE*
    The top things YOU need to know about managing SQL Server – in one place, on one day – presented by two of the best SQL Server industry trainers! This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance.
    The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    - Debunk many of the ingrained misconceptions around SQL Server's behaviour
    - Show you disaster recovery techniques critical to preserving your company's life-blood – the data
    - Explain how a common application design pattern can wreak havoc in the database
    - Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!
    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010
    *REGISTER TODAY AT www.regonline.co.uk/kimtrippsql – on the registration form, simply quote discount code BOGOF for both yourself and your colleague and you will save 50% off each registration. That's a 249 GBP saving! This offer is limited, book early to avoid disappointment.

    Wed, 23 Jun – READING – Evening Meeting. More info and register. Introduction to NoSQL (Not Only SQL) – Gavin Payne; T-SQL Gotcha's and how to avoid them – Ashwani Roy; Introduction to Recency Frequency – Tony Rogerson; Reporting Services – Tim Leung

    Thu, 24 Jun – CARDIFF – Evening Meeting. More info and register. Alex Whittles of Purple Frog Systems talks about data warehouse design case studies; other BI-related session TBC

    Mon, 28 Jun – EDINBURGH – Evening Meeting. More info and register. Replication (Components, Administration, Performance and Troubleshooting) – Neil Hambly; Server Upgrades (Notes and Best Practice from the Field) – Satya Jayanty

    Wed, 14 Jul – LONDON – Evening Meeting. More info and register. Meeting is being sponsored by DBSophic (http://www.dbsophic.com/download), database optimisation software. Physical Join Operators in SQL Server – Ami Levin; Workload Tuning – Ami Levin; SQL Server and Disk IO (File Groups/Files, SSDs, Fusion-IO, In-RAM DBs, Fragmentation) – Tony Rogerson; Complex Event Processing – Allan Mitchell

    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group
    http://sqlserverfaq.com

    Read the article

  • WebCenter Marketing and Upcoming Events

    - by rituchhibber
    Events:

    - October 30, 2012 – ResCare Solves Content Lifecycle Challenges with Oracle WebCenter – Webcast
    - November 1, 2012 – Paper Burying Your HR Processes? Dig Your Way Out With Oracle WebCenter! – Webcast
    - November 15, 2012 – Social Business Thought Leader Webcast: Three Ways to Fix Your Broken Organization, featuring Christian Finn – Webcast

    Marketing:

    WebCenter Sites Sales eVite: Embrace the Base: Create an Exceptional Online Customer Experience with Oracle WebCenter Sites. Directs recipients to the Connected Customer Experience Resource Center to see the latest demos, analyst reports, and customer webcasts promoting WebCenter Sites. For more information click here.

    WebCenter Social Business Thought Leaders Series: Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology. Brian Solis, Altimeter Group digital analyst and futurist. December 13, 2012, 10am PDT. Registration available soon; find other content from this speaker here.

    Webcast: WebCenter Sites for Applications: Disconnected Online Customer Experience? Connect it with Oracle WebCenter. November 8, 2012. eVite | Registration Page

    WebCenter in Action Customer & Partner webcast series: Started earlier in FY13, a new webcast series featuring WebCenter customer deployments that are executed by a partner. The next webcast in the series will be November 14th: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter. Click here to learn more.

    OnDemand Webcast: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter. Complex documents must be created, assembled, reviewed, and tracked. To avoid fragmented, chaotic information processes, organizations must adopt an integrated set of strategies, standards, best practices, and technologies for managing information. Attend this webcast to learn how Oracle WebCenter has allowed ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as its primary business communication tool.

    On-Demand Assets:

    - Avoid Social Media Fatigue – Learn the 9 C's of Customer Engagement, featuring Ray Wang, Principal Analyst and CEO, Constellation Research – Webcast
    - WebCenter in Action Series: Hitachi Data Systems Improves Global Web Experience with Oracle WebCenter, presented by Hitachi Data Systems and Lingotek – Webcast
    - Managing Social Relationships for the Enterprise, featuring Jeremiah Owyang, Industry Analyst, Altimeter Group, and Reggie Bradford, Vice President, Oracle – Webcast
    - Oracle's Vision for the Social-Enabled Enterprise, presented by Mark Hurd, Thomas Kurian and Reggie Bradford – Webcast
    - WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, presented by Qualcomm and Keste – Webcast
    - Social Business Thought Leaders Series: 6 Counterintuitive Best Practices for Social Collaboration Adoption, featuring John Brunswick, Oracle – Webcast
    - Oracle WebCenter Connects Patients and Researchers in Cancer Control Mission, presented by Canadian Partnership Against Cancer and App-Systems – Webcast
    - Oracle WebCenter: Modernize, Aggregate and Extend Your Portals – Webcast
    - Top 10 Technology Trends Driving Business Innovation, featuring Andy Mulholland, CTO, Capgemini – Webcast
    - Ancestry.com Helps Families Uncover History with Oracle WebCenter – Webcast
    - Organic Business Networks: Doing Business in a Hyper-Connected World, featuring Mike Fauscette, GVP, IDC – Webcast
    - Social Business and Innovation, featuring John Mancini, President, AIIM – Webcast
    - Do More with Oracle WebCenter: Expand Beyond Web Experience Management – Webcast
    - Race Against the Machine, featuring Andrew McAfee, author and principal scientist at MIT – Webcast
    - Introducing Oracle WebCenter Sites 11gR1: Transforming the Online Experience – Webcast
    - Mobile is the New Face of Engagement, featuring Ted Schadler, Vice President & Principal Analyst, Forrester Research Inc – Webcast
    - Analyst Report: IDC Research: Oracle Debuts New Release of Oracle WebCenter Sites

    Read the article

  • F# and the rose-tinted reflection

    - by CliveT
    We're already seeing increasing use of many cores on client desktops. It is a change that has been long predicted. It is not just a change in architecture, but in our notions of efficiency in a program. No longer can we focus on the asymptotic complexity of an algorithm by counting the steps that a single-core processor would take to execute it. Instead we'll soon be more concerned about the scalability of the algorithm and how well we can increase the performance as we increase the number of cores. This may even lead us to throw away our most efficient algorithms, and switch to less efficient algorithms that scale better. We might even be willing to waste cycles in order to speculatively execute at the algorithm rather than the hardware level.

    State is the big headache in this parallel world. At the hardware level, main memory doesn't necessarily contain the definitive value corresponding to a particular address. An update to a location might still be held in a CPU's local cache and it might be some time before the value gets propagated. To get the latest value – and the notion of "latest" takes a lot of defining in this world of rapidly mutating state – the CPUs may well need to communicate to decide who has the definitive value of a particular address in order to avoid lost updates. At the user program level, this means programmers will need to lock objects before modifying them, or attempt to avoid the overhead of locking by understanding the memory models at a very deep level.

    I think it's this need to avoid statefulness that has led to the recent resurgence of interest in functional languages. In the 1980s, functional languages started getting traction when research was carried out into how programs in such languages could be auto-parallelised. Sadly, the impracticality of some of the languages, the overheads of communication during this parallel execution, and rapid improvements in compiler technology on stock hardware meant that the functional languages fell by the wayside. The one thing that these languages were good at was getting rid of implicit state, and this single idea seems like a solution to the problems we are going to face in the coming years.

    Whether these languages will catch on is hard to predict. The mindset for writing a program in a functional language is really very different from the way that object-oriented problem decomposition happens – one has to focus on the verbs instead of the nouns, which takes some getting used to. There are a number of hybrid functional/object languages that have been becoming more popular in recent times. These half-way houses make it easy to use functional ideas for some parts of the program while still allowing access to the underlying object-focused platform without a great deal of impedance mismatch. One example is F# running on the CLR which, in Visual Studio 2010, has become a first-class member of the pack. Inside Visual Studio 2010, the tooling for F# has improved to the point where it is easy to set breakpoints and watch values change while debugging at the source level.

    In my opinion, it is the tooling support that will enable the widespread adoption of functional languages – without this support, people will put off any transition into the functional world for as long as they possibly can, and it will be hard to learn these languages. One tool that doesn't currently support F# is Reflector.
The idea of decompiling IL to a functional language is daunting, but F# is potentially so important I couldn't dismiss the idea. As I'm currently developing Reflector 6.5, I thought it wise to take four days just to see how far I could get in doing so, even if it achieved little more than to be clearer on how much was possible, and how long it might take. You can read what happened here, and of the insights it gave us on ways to improve the tool.

    Read the article

  • Change the Way Google Search Results Display in Firefox

    - by Asian Angel
    Are you tired of the default look for search results at Google? If you want a different, more pleasing look for them, then join us as we look at the GoogleMonkeyR user script.

    Note: User style scripts & user scripts can be added to most browsers, but we are using Firefox & the Greasemonkey extension for our example here.

    Before

    Here is the standard look for search results at Google… not bad, but it really does not stand out that well either.

    Installing the User Script

    You may be asking yourself what makes this particular user script different from others. Take a look at the list of goodies that you get access to and you will understand:

    - Multiple columns of results
    - Removes "Sponsored Links"
    - Adds numbers to the results
    - Auto-loads more results
    - Removes web search dialogues
    - Opens links in a new tab
    - Favicons
    - GooglePreview
    - Self-updating
    - Can be configured from a simple user dialogue

    To get started, click on the webpage install button. Once you click it, you will see a window asking for confirmation to add the user script to Firefox. Click Install to complete the process.

    GoogleMonkeyR in Action

    Refreshing the same search page shown above shows a noticeable difference already: the light blue background makes the search results stand out a bit better. This is an improvement from before, but you will definitely want to have a look to see just how far you can go. Right-click on the Greasemonkey status bar icon, go to User Script Commands, and select GoogleMonkeyR Preferences. Once you have clicked on GoogleMonkeyR Preferences, the search page will be shaded out and you will have access to the user script's preferences. This is where you can really make your search results unique looking! After refreshing our search results with the changes we started out with, things looked even better, as seen in the full page of results with our browser maximized and set for two columns. If you have the Auto-load more results option enabled, new results will be added very quickly as you scroll down. Favicons & GooglePreview images round out the set.

    Conclusion

    If you have been wanting a more dramatic and pleasing look for the search results at Google, then you can not go wrong with the GoogleMonkeyR user script. Change as little or as much as you want to get that perfect look in your browser.

    Links

    Install the GoogleMonkeyR User Script
    Download the Greasemonkey extension for Firefox (Mozilla Add-ons)

    Read the article

  • SQL SERVER – UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL

    - by pinaldave
    I often see developers trying the following syntax while using ORDER BY:

        SELECT Columns
        FROM TABLE1
        ORDER BY Columns
        UNION ALL
        SELECT Columns
        FROM TABLE2
        ORDER BY Columns

    However, the above query will return the following error:

        Msg 156, Level 15, State 1, Line 5
        Incorrect syntax near the keyword 'ORDER'.

    It is not possible to use two different ORDER BY clauses in a UNION statement; UNION returns a single resultset, as per the Logical Query Processing Phases. However, if your requirement is such that you want the top and bottom queries of the UNION resultset independently sorted, but in the same resultset, you can add an additional static column and order by that column. Let us re-create the same scenario. First, create two tables and populate them with sample data:

        USE tempdb
        GO
        -- Create table
        CREATE TABLE t1 (ID INT, Col1 VARCHAR(100));
        CREATE TABLE t2 (ID INT, Col1 VARCHAR(100));
        GO
        -- Sample Data Build
        INSERT INTO t1 (ID, Col1)
        SELECT 1, 'Col1-t1'
        UNION ALL
        SELECT 2, 'Col2-t1'
        UNION ALL
        SELECT 3, 'Col3-t1';
        INSERT INTO t2 (ID, Col1)
        SELECT 3, 'Col1-t2'
        UNION ALL
        SELECT 2, 'Col2-t2'
        UNION ALL
        SELECT 1, 'Col3-t2';
        GO

    If we SELECT the data from both tables using UNION ALL:

        -- SELECT without ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        GO

    …we get the rows back simply in insertion order. However, our requirement is to have each table's rows sorted independently. If we just need the entire resultset ordered by ID, we can apply a single ORDER BY to the UNION:

        -- SELECT with ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        ORDER BY ID
        GO

    Now, to get the data independently sorted within the UNION ALL, let us add an additional column, OrderKey, and use ORDER BY on that column. I think the description alone does not do it proper justice, so let us see the example:

        -- SELECT with ORDER BY - with ORDER KEY
        SELECT ID, Col1, 'id1' OrderKey FROM t1
        UNION ALL
        SELECT ID, Col1, 'id2' OrderKey FROM t2
        ORDER BY OrderKey, ID
        GO

    The above query will give the desired result. Now do not forget to clean up the database by running the following script:

        -- Clean up
        DROP TABLE t1;
        DROP TABLE t2;
        GO

    Here is the complete script used in this example:

        USE tempdb
        GO
        -- Create table
        CREATE TABLE t1 (ID INT, Col1 VARCHAR(100));
        CREATE TABLE t2 (ID INT, Col1 VARCHAR(100));
        GO
        -- Sample Data Build
        INSERT INTO t1 (ID, Col1)
        SELECT 1, 'Col1-t1'
        UNION ALL
        SELECT 2, 'Col2-t1'
        UNION ALL
        SELECT 3, 'Col3-t1';
        INSERT INTO t2 (ID, Col1)
        SELECT 3, 'Col1-t2'
        UNION ALL
        SELECT 2, 'Col2-t2'
        UNION ALL
        SELECT 1, 'Col3-t2';
        GO
        -- SELECT without ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        GO
        -- SELECT with ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        ORDER BY ID
        GO
        -- SELECT with ORDER BY - with ORDER KEY
        SELECT ID, Col1, 'id1' OrderKey FROM t1
        UNION ALL
        SELECT ID, Col1, 'id2' OrderKey FROM t2
        ORDER BY OrderKey, ID
        GO
        -- Clean up
        DROP TABLE t1;
        DROP TABLE t2;
        GO

    I am sure there are many more ways to achieve this (one variation is sketched at the end of this post); what method would you use if you had to face a similar situation?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
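    As promised above, here is one variation I might use in a similar situation, sketched against the same sample tables: hide the ordering key in a derived table so it does not appear in the final output. In T-SQL, the outer ORDER BY may still reference a column of the derived table even though it is not in the select list:

        -- Same independent ordering, without exposing OrderKey in the results
        SELECT ID, Col1
        FROM (
            SELECT ID, Col1, 1 AS OrderKey FROM t1
            UNION ALL
            SELECT ID, Col1, 2 AS OrderKey FROM t2
        ) AS u
        ORDER BY OrderKey, ID;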

    Read the article

  • Exposed: Fake Social Marketing

    - by Mike Stiles
    Brands and marketers who want to build their social popularity on a foundation of lies are starting to face more of an uphill climb. Fake social is starting to get exposed, and there are a lot of emperors getting caught without any clothes.

    Facebook is getting ready to do a purge of "Likes" on Pages that were a result of bots, fake accounts, and even real users who were duped or accidentally Liked a Page. Most of those accidental Likes occur on mobile, where it's easy for large fingers to hit the wrong space. Depending on the degree to which your Page has been the subject of such activity, you may see your number of Likes go down. But don't sweat it, that's a good thing. The social world has turned the corner and assessed the value of a Like. And the verdict is that a Like is valuable as an opportunity to build a real relationship with a real customer. Its value pales immensely compared to a user who's actually engaged with the brand. Those fake Likes aren't doing you any good. Huge numbers may once have impressed, but they're not fooling anybody anymore. Facebook's selling point to marketers is the ability to use a brand's fans to reach friends of those fans. Consequently, there has to be validity and legitimacy to a fan count.

    Speaking of mobile, Trademob recently reported 40% of clicks are essentially worthless, because 22% of them are accidental (again with the fat fingers), while 18% are trickery. Publishers will put huge banner ads next to tiny app buttons to increase the odds of an accident. Others even hide a banner behind another to score 2 clicks instead of 1. Pontiflex and Harris Interactive last year found 47% of users were more likely to click a mobile ad accidentally than deliberately. Beyond that, hijacked devices are out there manipulating click data. But to what end for a marketer? What's the value of a click on something a user never even saw? What's the value of a seen but accidentally clicked ad if there's no resulting transaction?

    Back to fake Likes, followers and views: they're definitely for sale on numerous sites, none of which I'll promote. $5 can get you 1,000 Twitter followers. You can even get followers targeted by interests. One site was set up by an unemployed accountant out of his house in England. He gets them from a wholesaler in Brooklyn, who gets them from a 19-year-old supplier in India. The unemployed accountant is making $10,000 a day. That means a lot of brands, celebrities and organizations are playing the fake social game, apparently not coming to grips with the slim value of the numbers they're buying. But now, in addition to having paid good money for non-ROI numbers, there's the embarrassment factor. At least a couple of sites have popped up allowing anyone to see just how many fake and inactive followers you have. Britain's Fake Follower Check and StatusPeople are the two getting the most attention. Enter any Twitter handle and the results are there for all to see.

    Fake isn't good, period. "Inactive" could be real followers, but if they're real, they're just watching, not engaging. If someone runs a check on your Twitter handle and turns up fake followers, does that mean you're suspect or have purchased followers? No. Anyone can follow anyone, so most accounts will have some fakes. Even account results like Barack Obama's (70% fake according to StatusPeople) and Lady Gaga's (71% fake) don't mean these people knew about all those fakes or initiated them.
Regardless, brands should realize they’re now being watched, and users are judging the legitimacy of their social channels. Use one of any number of tools available to assess and clean out fake Likes and followers so that your numbers are as genuine as possible. And obviously, skip the “buying popularity” route of social marketing strategy. It doesn’t work and it gets you busted…a losing combination.

    Read the article

  • SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB

    - by Pinal Dave
    Let us take a look at the application you own at your business. If you pay attention to the underlying database for that application, you will be amazed. Every successful business these days processes way more data than it used to. The number of transactions and the amount of data are growing at an exponential rate. Every single day there is more data to process than before. Big data is no longer a concept; it is turning into reality. If you look around, there are so many different big data solutions that it can be quite a difficult task to figure out where to begin. Personally, I have been experimenting with a lot of different solutions which allow my database to scale immediately, without much hassle, while maintaining optimal database performance. There are for sure some solutions out there, but for many I even have to learn their specific language, and there is a lot of new exploration to do. Honestly, what I prefer is a product which works with the language I know (SQL) and follows all the RDBMS concepts which I am familiar with (ACID etc.). NuoDB is one such solution. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I will explore how one can download and install the NuoDB database.

    Step 1: Follow me and go to the NuoDB download page. Simply fill out the form, accept the online license agreement, and you will be taken directly to a page where you can select any platform you prefer to install NuoDB on. In my example below, I select the Windows 64-bit platform, as it is one of the most popular NuoDB platforms. (You can also run NuoDB on Amazon Web Services, but I prefer to install it on my local machine for the purposes of this blog.)

    Step 2: Once you have downloaded the NuoDB installer, double-click on it to install it on the Windows platform. Here is an enlarged view of the installer's icon.

    Step 3: Follow the installation wizard, as it is pretty straightforward and easy. I have selected all the options to install; the overall installation is very simple and does not take up much space. I have installed it on my C drive, but you can select your preferred drive. It is quite possible that if you do not have 64-bit Java, it will throw the following error. If you face that error, I suggest you download 64-bit Java from the following link: http://java.com/en/download/manual.jsp. Make sure that you download the 64-bit version. If you already have 64-bit Java installed, you can continue with the installation as described in the following image; otherwise, install Java and start again from Step 1. As in my case, I already have 64-bit Java installed – and you won't believe me when I say that the entire installation of NuoDB only took me around 90 seconds. Click on Finish to exit the installation.

    Step 4: Once the installation is successful, NuoDB will automatically open the following two tabs – Console and DevCenter – in your preferred browser. On the Console tab you can explore various components of the NuoDB solution, e.g. QuickStart, Admin, Explorer, Storefront and Samples. We will see the various components and their usage in future blog posts.

    If you follow the steps in this post, which I have followed to install NuoDB, you will agree that the installation of NuoDB is extremely smooth, and it was indeed a pleasure to install a database product with such ease. If you have installed other database products in the past, you will absolutely agree with me.
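    If you want a quick smoke test once the installer finishes, NuoDB speaks standard SQL, so something as small as the following should work against a database you create. This is only a sketch – the table and values here are made up for illustration, not part of the product samples:

        -- Minimal first-run sanity check (hypothetical table)
        CREATE TABLE hello (id INT, msg VARCHAR(100));
        INSERT INTO hello VALUES (1, 'NuoDB is up');
        SELECT id, msg FROM hello;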
So download NuoDB and install it today, and in tomorrow’s blog post I will take the installation to the next level. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • General website publishing questions involving domain forwarding issue

    - by Gorgeousyousuf
    Even though I have a certain level of knowledge and experience with web development, I have never been interested in obtaining a domain and publishing a website from my own server. Since today, I have been struggling with getting my own domain and configuring it using web resources. I started by learning the outline of the web publishing process, including web server installation, deploying a website for testing purposes, router port forwarding, getting a domain, and forwarding the domain to my router, which will in turn forward HTTP requests to my web server. I am confused about some parts and so far could not get the website accessed from outside the network. All I am trying to do is for learning purposes, so I am not paying much attention to security issues for now.

    I have Server 2008 and IIS 7.5 installed. I use a laptop and have access to the modem over wireless; my modem is a Zoom X6 5590. I will continue explaining what I have done so far and what I expect after each action. I have successfully accessed my website from any local computer by entering the internal IP address and port pair of the host machine in a browser. Next, I forwarded port 80 of my host machine by creating a virtual server entry like 10.0.0.x (internal static IP of the host) - tcp - start port: 80 - end port: 80 in the router options. Now I suppose every request that comes to the public IP on port 80 will be forwarded to my host machine (10.0.0.x) over port 80. So if everything went as desired, the website listening on port 80 will accept the request, process it, and finally respond.

    I expected to access my website from outside the network by entering http://MyPublicIp:80 in a browser, but I could not accomplish this, despite using GoDaddy's domain forwarding tool. I do see a small view of my website when I click the "preview" button that checks whether the address (http://publicip/Index.aspx) I entered, to which my domain will be forwarded, is available or not. I am sure that configuring the domain does not play a role in this problem, since using the public IP and port pair directly does not work either. So here is the first question: what is causing this problem?

    After that, I have a couple of questions regarding domain forwarding using the GoDaddy tool. Can I forward my domain to any port, for example port 8080, other than the default HTTP port 80? Additionally, can I use a sub-domain to forward to a different port on the host? What I want to design is this: if the client enters www.mydomain.com, website1 will respond over a specified port, and when a client enters info.mydomain.com, another website which listens on a different port will respond. I tried to add a sub-domain and forward it to an address like http://www.mydomain.com:8080/Index.aspx with no success. Can I really do that?

    Finally, what if I have an FTP site listening on the default port 21, and I create a domain like ftp.mydomain.com that forwards to that FTP site's address? Is it possible to use sub-domains for FTP site access?

    I know I am more than confused, but however you reply to me, you will help me get a clearer view of this subject. Thank you very much in advance.

    Read the article

  • What Counts For a DBA – Decisions

    - by Louis Davidson
    It's Friday afternoon, and the lead DBA, a very talented guy, is getting ready to head out for two well-earned weeks of vacation with his family, when this error message pops up in his inbox:

        Msg 211, Level 23, State 51, Line 1
        Possible schema corruption. Run DBCC CHECKCATALOG.

    His heart sinks. It's ten… no, eight… minutes till it's time to walk out the door. He glances around at his coworkers, competent to handle many problems, but probably not up to the challenge of fixing possible database corruption. What does he do? After a few agonizing moments of indecision, he clicks shut his laptop. He'll just wait and see. It was unlikely to come to anything; after all, it did say "possible" schema corruption, not definite. In that moment, his fate was sealed.

    The start of the solution to the problem (run DBCC CHECKCATALOG) had been right there in the error message. Had he done this, or at least taken two of those eight minutes to delegate the task to a coworker, then he wouldn't have ended up spending two-thirds of an idyllic vacation (for the rest of the family, at least) dealing with a problem that got consistently worse as the weekend progressed, until the entire system was down.

    When I told this story to a friend of mine, an opera fan, he smiled and said it described the basic plotline of almost every opera or 'Greek tragedy' ever written. The particular joy in opera, he told me, isn't the warbly voiced leading ladies, or the plump middle-aged romantic leads, or even the music. No, what packs the opera houses in Italy is the drama of characters who, by the very nature of their life-experiences and emotional baggage, make all sorts of bad choices when faced with ordinary decisions, and so move inexorably to their fate. The audience is gripped by the spectacle of exotic characters doomed by their inability to see the obvious. I confess, my personal experience with opera is limited to Bugs Bunny in "What's Opera, Doc?" (Elmer Fudd is a great example of a bad decision maker, if ever one existed), but I was struck by my friend's analogy. If all the DBA cubicles were a stage, I think we would hear many similarly tragic tales, played out to music:

    "Error handling? We write our code to never experience errors, so nah…"
    "Backups failed today, but it's okay, we'll back up tomorrow (we'll back up tomorrow)"

    And similarly, they would leave their audience gasping, not necessarily at the beauty of the music, or the poetry of the lyrics, but at the inevitable, grisly fate of the protagonists. If you choose not to use proper error handling, or if you choose to skip a backup because, hey, you haven't had a server crash in 10 years, then inevitably, in that moment you expected to be enjoying a vacation, or a football game, with your family and friends, you will instead be sitting in front of a computer screen, paying for your poor choices.

    Tragedies are very much part of IT. Most of a DBA's day-to-day work has limited potential to wreak havoc: paperwork, timesheets, random anonymous threats to developers, routine maintenance and whatnot. However, just occasionally, you, as a DBA, will face one of those decisions that really matter, and which has the possibility to greatly affect your future and the future of your users' data. Make those decisions count, and you'll avoid the tragic fate of many an operatic hero or villain.

    Read the article

  • Orchestrating the Virtual Enterprise, Part II

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's CSO & Vice President, SCM Product Strategy. Almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, spanning thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem, as no product solution was available at the time. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO). Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it can improve quality, speed, flexibility, and consistency without requiring an organ transplant of its highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, it turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions. Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment. Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when there are multiple trading parties involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner. No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space.
We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value. -- Jon Chorley, Chief Sustainability Officer & Vice President, SCM Product Strategy, Oracle Corporation

    Read the article

  • SQL – Business Intelligence: Derive Data or Information?

    - by Pinal Dave
    We all know the value of information in our lives. Whether it's a personal decision or a business-initiated one, people need it. But the question is: who is to make the distinction between data and information? We all come across a whole lot of data daily, which may or may not be significant. We filter what's required and forget about the rest. Information is filtered and distilled data. Filtering and distillation can also alter its actual meaning and natural state. Therefore, in this blog we discover some ways to ensure that we're using business intelligence derived from the right information for making critical management decisions. Four key questions managers must ask themselves before making a decision:
    1. Am I working with data or information?
    2. What is its context?
    3. How recent is it?
    4. How was it derived, or what is the source?
    The first question is probably the most important. You must know what you're dealing with here. If you see use of adjectives and conclusions drawn, it's information, not raw data. Your very next concern must be whether this is dressed up to present a particular viewpoint or perspective. It makes a lot of difference if you base a decision on someone's propaganda meant to distort the real facts. Therefore, the context and the intentions of the distillation process must be clear to you. The next consideration is whether the data is recent enough to hold any value. Since it has a very short shelf life, you must ensure that its context and value are not lost over time. The last and most important consideration is how it was derived in the first place. The observer effect is what calls the shots here. The source can change the context to a great extent if the collection methodology and purpose are not clear. Gathering intelligence for decision making requires users to be keen observers and not take the information provided at face value alone. These probing questions will allow you to make sure that you're working with clean and accurate data devoid of any influence or manipulation. Only then can you be sure of deriving true business intelligence for your organization. BI technology is also a great way to ensure the accuracy of reports. The SQL BI Platform provides advanced tools and techniques for all your BI needs and concerns. Koenig Solutions offers this course along with a host of other Business Intelligence and IT courses on all the latest technologies available in the market today. Reference: Pinal Dave (http://blog.sqlauthority.com)
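    To make the data-versus-information distinction concrete, here is a minimal hedged C# sketch (the sales figures and types are invented for illustration): the raw records are data; the filtered, grouped result is information, and the cutoff date and grouping keys are exactly the kind of "distillation" choices the four questions above tell you to probe.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Sale
        {
            public string Region;
            public DateTime SoldOn;
            public decimal Amount;
        }

        class DataVersusInformation
        {
            static void Main()
            {
                // Raw rows: this is data.
                var sales = new List<Sale>
                {
                    new Sale { Region = "East", SoldOn = new DateTime(2014, 4, 1), Amount = 120m },
                    new Sale { Region = "East", SoldOn = new DateTime(2014, 4, 2), Amount = 80m },
                    new Sale { Region = "West", SoldOn = new DateTime(2014, 4, 1), Amount = 200m }
                };

                // Filtered and distilled: this is information. The cutoff and
                // the grouping are choices that shape (and can slant) its meaning.
                var cutoff = new DateTime(2014, 3, 31);
                var summary = sales
                    .Where(s => s.SoldOn > cutoff)
                    .GroupBy(s => s.Region)
                    .Select(g => new { Region = g.Key, Total = g.Sum(s => s.Amount), Orders = g.Count() });

                foreach (var row in summary)
                    Console.WriteLine("{0}: {1:C} across {2} orders", row.Region, row.Total, row.Orders);
            }
        }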

    Read the article

  • How to Get Windows 7 Theme Wallpapers Without Installing Them

    - by Mysticgeek
    Are you using an older version of Windows but like the Windows 7 theme wallpapers? What if you have Windows 7 but don’t want to install the themes just to get the wallpapers? Here is how to get them without having to install the themes. This guest article was written by Ryan Dozier from the Doztech tech blog.

    Getting the Wallpaper on XP, Vista, or Windows 7: First, download and install 7-Zip on your machine (link below). After you’ve installed 7-Zip, download a Windows 7 theme (link below), right-click on the theme, select 7-Zip, and Extract to “Theme Name”… A new folder will appear with the theme name on it. When you open it, there will be a folder called DesktopBackground or something similar. Open that folder to view the wallpapers for the theme. You can delete the extra files and just keep the wallpapers!

    Getting the Wallpaper on Ubuntu: Extracting the wallpaper on Ubuntu can be a little tricky. Just follow these steps and you will be able to do it. First, go to the Ubuntu Software Center under the Applications menu. Search for 7zip and click on the arrow to go to the application’s page. Find the Install button and click it. It will take a couple of minutes for 7zip to install. After 7zip installs, close the Ubuntu Software Center and download a Windows 7 theme. Store it somewhere you can access it quickly. Right-click on the theme, select Rename, and replace the themepack extension with zip. The file should be “Theme Name.zip” after you rename it. Right-click on the theme and click Extract Here. After extracting, you will have a new folder with the theme name. Open it and go into the DesktopBackground folder to get the wallpapers. You can delete the extra files and just keep the wallpapers. If you want the new Windows 7 theme wallpapers but don’t want to search for and install the themes separately, this is a nice workaround.

    Links: Get 7-Zip for Windows here. Get Windows 7 Themes here.

    Read the article

  • Windows Phone 8, possible tablets and what the latest update might mean

    - by Roger Hart
    Microsoft have just announced an update to Windows Phone 8. As one of the five, maybe six, people who actually bought a WP8 handset, I found this interesting. Then I read the blog post about it, and rushed off to write somewhat less than a thousand words about a single picture. The blog post announces an extra column of tiles on the start screen, and support for higher resolutions. If we ignore all the usual flummery about how this will make your life better, that (and the rotation lock) sounds a little like stage setting for tablets. Looking at the preview screenshot, I started to wonder.

    What it’s called: Phablet_5F00_StartScreenProductivity_5F00_01_5F00_072A1240.jpg. Pretty conclusive. If you can brand something a “phablet” and sleep at night you’re made of sterner stuff than I am, but that’s beside the point. It’s explicit in the post that Microsoft are expecting a broader range of form factors for WP8, but they stop short of quite calling out tablet size. The extra columns and resolution definitely back that up, so why stop at a 6 inch “phablet”? Sadly, the string of numbers there doesn’t really look like a Lumia model number – that would be a bit tendentious even for a speculative blog post about a single screenshot. “Productivity” is interesting too. I get into this a bit more below, but this is a pretty clear pitch for a business device.

    What it looks like: Something that would look quite decent on a 7 inch screen, but something a bit too vertical to go toe-to-toe with the Surface. Certainly, it would look a lot better on a large-factor phone than any of the current models. Those tiles are going to get cramped and a bit ugly if the handsets aren’t getting bigger.

    What’s on it: You have a bunch of missed calls, you rarely text, use a stocks app, and your budget spreadsheet and meeting notes are a thumb-reach away. Outlook is your main form of email. You care enough about LinkedIn to not only install its app but give it a huge live tile. There’s no beating about the bush here, the implicit persona is a corporate exec. With Nokia in the bag and Blackberry pushing daisies, that may not be a stupid play. There’s almost certainly a niche there if they can parlay their corporate credentials into filling it before BYOD (which functionally means an iPhone) reaches the late adopters. The really quite slick WP8 Office implementation ought to help here. This is the face they’ve chosen to present, the cultural milieu they’re normalizing for Windows Phone. It’s an iPhone for Serious Business Grown-ups. Could work, I guess.

    Does it mean anything? Is the latest WP8 update a sign that we can expect to see tablets running Windows Phone rather than WinRT? Well, WinRT tablets haven’t exactly taken off, but I’m not quite going to make a leap like that just from a file name and a column of icons. I feel pretty safe, however, conjecturing that Microsoft would like to squeeze a WP8 “phablet” into the palm of every exec who’s ever grumbled about their Blackberry, and this release might get them a bit closer. If it scales up well to larger devices, then that could be a fair hedge against WinRT crashing and burning any harder in the marketplace.

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic. I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days. Procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of Generics, throwing System.Exception... you get the idea. This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field). How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid? Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit: This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it. I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart. I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change. The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests. 
Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on. P.S. My sincerest gratitude to those who -did- offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Adventures in Windows 8: Understanding and debugging design time data in Expression Blend

    - by Laurent Bugnion
    One of my favorite features in Expression Blend is the ability to attach a Visual Studio debugger to Blend. First let’s start by answering the question: why exactly do you want to do that? Note: If you are familiar with the creation and usage of design time data, feel free to scroll down to the paragraph titled “When design time data fails”.

    Creating design time data for your app: When a designer works on an app, he needs to see something to design. For “static” UI such as buttons, backgrounds, etc., the user interface elements are going to show up in Blend just fine. If, however, the data is fetched dynamically from a service (web, database, etc.) or created dynamically, most probably Blend is going to show just an empty element. The classical way to design at that stage is to run the application, navigate to the screen that is under construction (which can involve delays, the need to log in, etc…), measure what is on the screen (colors, margins, width and height, etc.) using various tools, go back to Blend, edit the properties of the elements, run again, and so on. Obviously this is not ideal. The solution is to create design time data. For more information about the creation of design time data by mocking services, you can refer to two talks of mine, “Deep dive MVVM” and “MVVM Applied From Silverlight to Windows Phone to Windows 8”. The source code for these talks is here and here.

    Design time data in MVVM Light: One of the main reasons why I developed MVVM Light is to facilitate the creation of design time data. To illustrate this, let’s create a new MVVM Light application in Visual Studio. Install MVVM Light from here: http://mvvmlight.codeplex.com (use the MSI in the Download section). After installing, make sure to read the Readme that opens up in your favorite browser; you will need one more step to install the Project Templates. Start Visual Studio 2012. Create a new MvvmLight (Win8) app. Run the application. You will see a string showing “Welcome to MVVM Light”. In the Solution Explorer, right-click on MainPage.xaml and select Open in Blend. Now you should see “Welcome to MVVM Light [Design]”. What happens here is that Expression Blend runs different code at design time than the application runs at runtime. To do this, we use design-time detection (as explained in a previous article) and use that information to initialize a different data service at design time. To understand this better, open the ViewModelLocator.cs file in the ViewModel folder and see how the DesignDataService is used at design time, while the DataService is used at runtime. In a real-life application, DataService would be used to connect to a web service, for instance.

    When design time data fails: Sometimes however, the creation of design time data fails. It can be very difficult to understand exactly what is happening, and Expression Blend is not giving a lot of information about what happened. Thankfully, we can use a trick: attaching a debugger to Expression Blend and debugging the design time code. In WPF and Silverlight (including Windows Phone 7), you could simply attach the debugger to Blend.exe (using the “Managed (v4.5, v4.0) code” option even for Silverlight!!). In Windows 8 however, things are just a bit different. This is because the designer that renders the actual representation of the Windows 8 app runs in its own process. Let’s illustrate that: Open the file DesignDataService in the Design folder.
    Modify the GetData method to look like this:

        public void GetData(Action<DataItem, Exception> callback)
        {
            // Simulate a failure during the creation of design time data.
            throw new Exception();

            // Use this to create design time data
            var item = new DataItem("Welcome to MVVM Light [design]");
            callback(item, null);
        }

    Go to Blend and build the application. The build succeeds, but now the page is empty. The creation of the design time data failed, but we don’t get a warning message. We need to investigate what’s wrong. Close MainPage.xaml. Go to Visual Studio and select the menu Debug, Attach to Process. Update: Make sure that you select “Managed (v4.5, v4.0) code” in the “Attach to” field. Find the process named XDesProc.exe. You should have at least two: one for the Visual Studio 2012 designer surface, and one for Expression Blend. Unfortunately in this screen it is not obvious which is which. Let’s find out in the Task Manager. Press Ctrl-Alt-Del and select Task Manager. Go to the Details tab and sort the processes by name. Find the one that says “Blend for Microsoft Visual Studio 2012 XAML UI Designer” and write down the process ID. Go back to the Attach to Process dialog in Visual Studio. Sort the processes by ID and attach the debugger to the correct instance of XDesProc.exe. Open the MainViewModel (in the ViewModel folder). Place a breakpoint on the first line of the MainViewModel constructor. Go to Blend and open MainPage.xaml again. At this point, the debugger breaks in Visual Studio and you can execute your code step by step. Simply step inside the data service call, and find the exception that you had placed there. Visual Studio gives you additional information which helps you to solve the issue.

    More info and Conclusion: I want to thank the amazing people on the Expression Blend team for being very fast in guiding me in this matter and encouraging me to blog about it. More information about the XDesProc.exe process can be found here. I had to work on a Windows 8 app for a few days without design time data because of an exception thrown somewhere in the code, and it was really painful. With the debugger, finding the issue was a simple matter of stepping into the code until it threw the exception. Laurent Bugnion (GalaSoft)

    Read the article

  • SharePoint 2013 Developer Ramp-Up - Part 1

    Today I had a little spare time during the morning hours, and after checking MVA I decided to query the available course material over at Pluralsight. Wow, thanks to fantastic corporations and acquisitions, there are lots of courses available, nicely split by SharePoint version as well as by particular interest group. Additionally, I found a couple of online blogs and community sites that I'm going to visit regularly during the next couple of weeks.

    Today's resource(s): Of course, I'm "all in" for the latest developer resources:
    SharePoint 2013 Developer Ramp-Up - Part 1 - Understanding the Platform and Developer Experience
    SharePoint 2013 Developer Ramp-Up - Part 2
    SharePoint 2013 Developer Ramp-Up - Part 3
    SharePoint 2013 Developer Ramp-Up - Part 4
    SharePoint 2013 Developer Ramp-Up - Part 5
    SharePoint 2013 Developer Ramp-Up - Part 6
    I guess I'm going to stick to the Pluralsight library until the end of this week. We'll see... Anyway, apart from the video material I came across a couple of other websites which I'd like to list here, too. That's mainly for personal reference instead of bookmarking in the browser; I'll use my own blog for that purpose.
    Atkinson's SharePoint Blog
    Düsseldorfer Jung Doerflers SharePoint Blog
    SharePoint Community
    Absolute SharePoint
    The links are in no preferential order; I added them as soon as I found them. Most probably, I'm going to report about specific articles from those resources during this challenge. So stay tuned, and I'll try to provide more details on certain topics.

    Takeaway: First contact with the 'real stuff' in order to get an idea about software development in Microsoft SharePoint and beyond. Unfortunately, and as already expected, the marketing department over at Microsoft seemed to have nothing better to do than to invent new names and baptise literally the same product with every release. Luckily, the release cycles between versions have been roughly three years - 2007, 2010, and 2013. Nonetheless, there will be a lot of version-specific issues to tackle during this learning phase, especially when it comes to historical terms like 'WSS'*, as happened to me just yesterday... It's going to be exciting and demanding to catch up with roughly 6-7 years of development and changes. Okay, let's face it.

    * WSS stands for Windows SharePoint Services 3.0, which forms the 'core engine' of SharePoint 2007. Part 1 of Andrew Connell's series on SharePoint 2013 for developers provides a brief history and overview of the various product names and their relation to the actual SharePoint version. I guess I might create a cheat-sheet or something comparable in order to reduce the level of confusion while reading through other material:
    SharePoint 2007 (aka SharePoint v3 aka SharePoint 12): Windows SharePoint Services (WSS) 3.0; Microsoft Office SharePoint Server (MOSS) 2007; .NET Framework 3.0, 32-bit or 64-bit OS
    SharePoint 2010 (aka SharePoint v4 aka SharePoint 14): Microsoft SharePoint Foundation (SPF) 2010; Microsoft SharePoint Server (SPS) 2010; .NET Framework 3.5 SP1, 64-bit OS only
    SharePoint 2013: Microsoft SharePoint Foundation (SPF) 2013; Microsoft SharePoint Server (SPS) 2013; .NET Framework 4.5, 64-bit OS only

    After this quick excursion, it is getting more interesting. SharePoint 2013 has a number of Development Practices and Techniques under the hood, and choosing the correct path will be quite a decision process depending on the task requirements.
At the moment, the following two options seem to be my future fields of operation: the Client-Side Object Model (CSOM), and the REST API with OData syntax. As part of my job assignment, I see myself developing within Visual Studio 2012/2013. Most probably the client development in C# will use CSOM, but of course I'll keep an eye on the REST API, too. JavaScript has had quite some momentum for a while now, and it would be a shame to ignore that kind of opportunity.
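    As a first taste of the CSOM path mentioned above, a minimal hedged sketch might look like the code below. The site URL is a placeholder, and the snippet assumes references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll; the point it illustrates is that CSOM queues operations locally and only talks to the server on ExecuteQuery.

        using System;
        using Microsoft.SharePoint.Client;

        // Minimal CSOM round trip: read the web's title and its list names.
        class CsomFirstContact
        {
            static void Main()
            {
                // Placeholder URL; point this at a test site collection.
                using (var ctx = new ClientContext("http://sharepoint.example.com/sites/dev"))
                {
                    Web web = ctx.Web;
                    ctx.Load(web, w => w.Title);                          // queue the retrieval
                    ctx.Load(web.Lists, ls => ls.Include(l => l.Title));  // queue the list titles
                    ctx.ExecuteQuery();                                   // single round trip

                    Console.WriteLine("Web title: {0}", web.Title);
                    foreach (List list in web.Lists)
                        Console.WriteLine("  List: {0}", list.Title);
                }
            }
        }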

    Read the article

  • Solaris 11 Live CD-Based Installation

    - by AndrasF
    Instead of the two installation procedures promised in the previous part, I am forced to cover only the Live CD variant in this post. I had not expected that demonstrating even this would require more than 50 screen captures, so I had to change my earlier plan. The Solaris 11 Live CD installation primarily targets the needs of desktop users and is supported exclusively on x86-architecture machines (despite the fact that SPARC systems also have graphics cards - e.g. the T4-1). The process can be split into two parts: first the guest machine is set up in a VirtualBox environment, and this is followed by the installation of Solaris 11 onto the virtual machine.

    HCL and utilities (DDT, DDU): Before installing the Solaris operating system, it is worth checking whether your physical system is supported. The already mentioned hardware compatibility list (HCL) serves this purpose well, as do the following two utilities: the Device Detection Tool and the Device Driver Utility. Both applications can map the hardware components of your system and verify their driver coverage. They differ in that running DDT requires Java, while DDU requires Solaris. The latter will be discussed briefly during the installation.

    Download location of the installation kits: Apart from a network installation (*), we need an installation kit, which can be downloaded from the page below. It is advisable to download all three files as well as the so-called repository medium containing the packages (the last item in the following list): sol-11-1111-live-x86.iso, sol-11-1111-text-x86.iso, sol-11-1111-ai-x86.iso, sol-11-1111-repo-full.iso. The first three variants are also available in bootable USB format - in that case the file names end in usb instead of iso. The previous blog post contains a short note on the purpose of each kit (link). If you want to install on a SPARC-architecture system, you will need the files containing 'sparc' instead of 'x86' in their names.

    (*) - It is also possible to perform the installation over the network by booting from the AI kit. This matters when the target machine has no PXE (Preboot Execution Environment) boot support.

    VirtualBox configuration: Solaris 11 can also be used as a guest in a virtual environment, without dedicating a separate physical machine to it. VirtualBox offers a convenient way to do this. The installer or package appropriate for your host (Windows, Unix, Linux) - currently version 4.1.16 is the latest - and the user manual covering the installation can be downloaded from the product page. After a successful installation, the following steps take us to the new virtual machine:
    1. After starting VBox, the central window shows the already existing virtual machines (Sol11demo, Sol11u1b07, Sol11.1B16, Sun_ZFS_Storage_7000) and the main attributes of the currently selected one (Sol11demo): name, memory size, list of virtual storage devices, etc.
    2. Clicking the New button starts the wizard that creates the virtual machine.
    3. Next, we have to name the guest machine and select the operating system type (given a descriptive name, VirtualBox can pick the operating system family itself, and we only have to set the version): enter Solaris11 as the name and choose the 64-bit variant (provided the host supports it).
    4. For the installation and the first steps, 1536 MB of memory is perfectly adequate (this can be changed later as requirements dictate).
    5. Like its physical counterparts, no virtual machine can exist without a hard disk (in this case a virtual disk). We can use an existing one (a file containing a virtual disk), or we can create a brand new instance. Let's stick with the latter (Create new hard disk).
    6. Of the possible formats - for simplicity's sake - let's go with the offered default type (VDI - VirtualBox Disk Image).
    7. During creation, the virtual disk can be allocated in one go (Fixed size) or dynamically, in several steps (Dynamically allocated). The first variant puts far less load on the system, while the advantage of the second is space efficiency. Let's choose the fixed-size variant.
    8. Now only one piece of data is unknown to VirtualBox: the size of the virtual disk to be created. An 8 GB area is suitable in this case for getting acquainted with the system.
    9. If every setting was entered correctly, pressing the Create button starts the creation of the virtual disk.
    10. Depending on the data provided, this operation finishes within a few minutes.
    11. After a similar confirmation (activating the Create button), the creation of the requested virtual machine begins as well.
    12. After successful execution, the new virtual machine immediately appears in the left-hand list of the central window, among the available virtual machines.

    The blog post is updated continuously... the remaining content of this part will appear on the site soon.

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Summit (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22-25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented there.

    Pluralsight Shades
    This was a great event, and I had fantastic fun presenting here. I was also very excited that many of my friends were presenting at the event along with me. I want to thank all of you for attending my sessions and leaving standing room only every single time. I have already sent resources in my newsletter; you can sign up for the newsletter over here.

    Indexing is an Art
    I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and performance tuning.

    Audience at GIDS
    I believe events like this provide a great platform to meet and share knowledge.

    Pinal at Pluralsight Booth
    Here are the abstracts of the sessions I presented. They were recorded, so at some point they will be available; but if you want the content immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

    Indexes, the Unsung Hero (Relevant Pluralsight Course)
    Slow-running queries are the most common problem developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies in the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and on indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning, one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate, redundant, and missing indexes, as well as best practices around indexes.

    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (Relevant Pluralsight Course)
    Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time, and there are now more tools and techniques to overcome those ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and memory. In this session we will focus on detecting and resolving high-CPU scenarios. If time permits, we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea, as well as action items, regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance tuning and the best practices associated with it. We will discuss performance tuning in this session with the help of demos.
    Pinal Dave at GIDS

    MySQL Performance Tuning – Unexplored Territory (Relevant Pluralsight Course)
    Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is a largely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but will also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance tuning and the best practices associated with it. You will also witness some impressive performance tuning demos in this session.

    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (Relevant Pluralsight Course)
    SQL Trio Session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done amazing work in making a complex relational database a breeze to work with for developers and administrators alike. Though it looks like child's play to some, the reality is far from this notion. The basics and fundamentals are simple and uniform across databases, but understanding the behavior and the nuts and bolts of SQL Server is something we need to master over a period of time. With more than 30 years of collective database experience amongst the speakers, we will try to take a unique tour of various aspects of SQL Server and bring you life lessons learnt from working with it. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, productivity tips on tools, and more. This is a highly demo-filled session for practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server. I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar, and my favorite, Govind Kanshi.

    Summary: If you missed this event, here are two action items: 1) sign up for the Resource Newsletter, and 2) watch my video courses on Pluralsight. Reference: Pinal Dave (http://blog.sqlauthority.com)
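    The abstracts above promise take-away scripts; those exact scripts are not reproduced here, but as a hedged illustration of the standard first step in the high-CPU scenario the troubleshooting session describes, the C# sketch below runs the classic plan-cache DMV query and lists the top CPU consumers. The connection string is a placeholder and the TOP count is an arbitrary choice; point it at a test server first.

        using System;
        using System.Data.SqlClient;

        // A sketch of the classic high-CPU triage step: ask the plan cache
        // which cached queries have consumed the most CPU (worker time).
        class TopCpuQueries
        {
            const string Query = @"
                SELECT TOP (10)
                       qs.total_worker_time / 1000 AS total_cpu_ms,
                       qs.execution_count,
                       LEFT(ISNULL(st.text, ''), 100) AS query_text
                FROM   sys.dm_exec_query_stats AS qs
                CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
                ORDER BY qs.total_worker_time DESC;";

            static void Main()
            {
                // Placeholder connection string; adjust server and credentials.
                using (var conn = new SqlConnection("Server=localhost;Integrated Security=true"))
                using (var cmd = new SqlCommand(Query, conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0,10} ms  {1,8} execs  {2}",
                                reader.GetInt64(0), reader.GetInt64(1), reader.GetString(2));
                        }
                    }
                }
            }
        }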

    Read the article
