Search Results

Search found 30815 results on 1233 pages for 'build xml'.


  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    A developer’s life is never easy, and a DBA’s life is even crazier. When a developer wakes up in the morning, they usually have no idea what challenges they are going to face that day. Of course, most developers know the project and the roadmap they are working on, but they have no clue which coding challenges they will face that day. A DBA’s life is crazier still. When a DBA wakes up in the morning, they are often just thankful that they were not disturbed during the night by server issues. The very next thing they wish is that the day does not bring a challenge they cannot solve. The problems a DBA faces every single day are mostly unpredictable, and they simply have to solve them as they come. Still, the life of a DBA is not always bad. There are always ways and methods to overcome the various challenges. Let us look at three of those challenges and at how a DBA can use various tools to overcome them. Challenge #1: Synchronize Data Across Servers A very common request a DBA receives is to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task, and it is nearly impossible to do with T-SQL alone. Thankfully, there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about it here: SQL SERVER – Synchronize Data Exclusively with T-SQL. Challenge #2: SQL Report Builder DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares. No matter how busy a DBA is, they are called upon to build reports at very short notice. I personally like to avoid any task that lands on me by accident, and building reports can be boring; I would rather spend time on high availability, disaster recovery, and performance tuning than on building reports. When I have to work with SQL reports, I use a third-party tool. Some of these products have extended reporting capabilities; that group includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server. Challenge #3: Work with the OTHER Database Managers often do not understand that MySQL is different from SQL Server and SQL Server is different from Oracle; for them everything is the same. Hundreds of times in my career I have been handed a database to manage, or some task to do, while the regular DBA was on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and that I can do anything. Honestly, I can’t, but I hardly dare to argue. When a database is outside my comfort zone, I fall back on a third-party tool to manage it. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL very well). To simplify the search for a problem query, I used the MySQL Profiler in dbForge Studio for MySQL, which provides commands such as Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing it: MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries. Well, that’s it! There have been many other occasions when a tool has saved me. Maybe some other day I will write part 2 of this blog post.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL Tagged: Devart, SQL Tool

    Read the article

  • Inside Red Gate - Exercises in Leanness

    - by simonc
    There's a new movement rumbling around Red Gate Towers - the Lean Startup. At its core is the idea that you don't have to be in a company with single-digit employees to be an entrepreneur; you simply have to (being blunt) not know what you should be doing. Specifically, you accept that you don't know everything you need to know in order to create a useful, successful & profitable product. This is something that Red Gate has had problems with in the past; we've created products that weren't aimed at the correct market, or didn't solve the problem the user had (although they solved the problem we thought the users had, or the problem the users thought they had). As a result, these products weren't as successful as they could have been. The ideas at the core of the Lean Startup help to combat this tendency to build large, well-engineered products that solve the wrong problem. You need to actually test your hypotheses about what the users and the market need, rather than just running a project based on those untested assumptions. Furthermore, these tests need to be done as fast as possible (on the order of a week) so that, if necessary, you can change the direction of the project without wasting effort going down a dead end. Over time, as more tests are done and more hypotheses are confirmed or refuted, the project moves towards something that solves users' actual problems. However, re-aligning the development teams that operate within Red Gate along these lines does itself have some issues; we've got very good at doing large, monolithic releases, with a feature set decided well in advance. Currently it takes about 2 weeks to do install & release testing before a release; this is clearly not practicable for a team doing weekly, or even daily releases. There are also many infrastructure issues to be solved; in our source control, build system, release mechanism, support pages & documentation, licensing system, update system, and download pages. All these need modifications to allow the fast releases necessary for each experiment. Not only do we have to change our infrastructure, we have to change our mindset. Doing daily releases means each release won't get nearly as much testing as 'standard' releases. As a team, we have to be prepared that there will be releases that have bugs and issues with them; not only do we have to be prepared to change direction with every experiment we do, but we have to be ready to fix any bugs that are reported very quickly as well. The SmartAssembly team is spearheading this move towards leanness within the company, using Feature Usage Reporting (FUR). We think this is a cracking feature that will really help developers learn how people use their products, but we need to confirm this hypothesis. So, over the next few weeks, we'll be running a variety of experiments on SmartAssembly to either confirm or refute our hypotheses concerning how people use SmartAssembly and apply FUR to their own products. In the rest of this series, I'll be documenting how the experiments we perform get on, and our experiences with applying the Lean Startup model to a mature product like SmartAssembly. Cross posted from Simple Talk.

    Read the article

  • Should I upgrade to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" and what care do I need to take if I upgrade?

    - by PHPLover
    I'm basically a Web Developer (PHP Developer) by profession. I mainly work on PHP, jQuery, AJAX, Smarty, HTML and CSS, and the Bootstrap front-end web development framework. I've also installed and am using IDEs/editors like Sublime Text and NetBeans. I'm also using a Git repository for my website development as a versioning tool. I've been using "Ubuntu 12.04 LTS" on my machine for almost the last two years. My machine configuration is as follows: Memory : 3.7 GiB Processor : Intel® Core™ i3 CPU M 370 @ 2.40GHz × 4 Graphics : Unknown OS type : 64-bit Disk : 64-bit The important software present on my machine which I'm using daily for my work is as follows: PHP : PHP 5.3.10-1ubuntu3.13 with Suhosin-Patch (cli) (built: Jul 7 2014 18:54:55) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies Apache web server : /usr/sbin/apachectl: 87: ulimit: error setting limit (Operation not permitted) Server version: Apache/2.2.22 (Ubuntu) Server built: Jul 22 2014 14:35:25 Server's Module Magic Number: 20051115:30 Server loaded: APR 1.4.6, APR-Util 1.3.12 Compiled using: APR 1.4.6, APR-Util 1.3.12 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/etc/apache2" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="/var/run/apache2/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="mime.types" -D SERVER_CONFIG_FILE="apache2.conf" MySQL : 5.5.38-0ubuntu0.12.04.1 Smarty : 2.6.18 NetBeans : NetBeans IDE 8.0 (Build 201403101706) Sublime Text 2 : Version 2.0.2, Build 2221 Yesterday suddenly a pop-up message appeared on my screen asking me to upgrade to "Ubuntu 14.04 'Trusty Tahr'". I'd also be very happy to upgrade my system to "Ubuntu 14.04 'Trusty Tahr'". Following are the issues about which I'm a little bit scared and on which I need you all talented people's expert advice/help/suggestions: Will upgrading to "Ubuntu 14.04 'Trusty Tahr'" affect the software I mentioned above? I mean, will I need to un-install and re-install this software too? Do I really need to, and is it really worth it, to upgrade to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" now? If I upgrade to "Ubuntu 14.04 'Trusty Tahr'", what advantage will I get from a web developer's point of view? Will the upgrade be hassle-free and will I be able to continue my ongoing work without any difficulties? Is "Ubuntu 14.04 'Trusty Tahr'" an LTS version, and if yes, till when is it going to be supported? These are the five crucial queries I have. If you want any further explanation from me please feel free to ask me. Thanks for spending some of your valuable time in reading and understanding my issue. Any kind of help/comment/suggestion/answer would be highly appreciated. Though if someone gives a canonical, precise and up-to-the-mark answer, it will be of great help to me as well as other web developers using Ubuntu around the world. Once again thank you so much you great people around the globe. Waiting for your precious replies.

    Read the article

  • How to fix Software Center crashes?

    - by shandna towns
    E: Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list E: The list of sources could not be read. shandanatowns@ubuntu:~$ software-center 2012-09-25 12:23:35,115 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-09-25 12:23:35,123 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:22: Not using units is deprecated. Assuming 'px'. 2012-09-25 12:23:35,472 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2012-09-25 12:23:35,477 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')' 2012-09-25 12:23:35,477 - root - ERROR - Could not find any typelib for LaunchpadIntegration 2012-09-25 12:23:35,605 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None. 2012-09-25 12:23:35,987 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 257, in open self._cache = apt.Cache(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__ self.open(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 149, in open self._list.read_main_list() SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list 2012-09-25 12:23:37,000 - softwarecenter.db.enquire - ERROR - _get_estimate_nr_apps_and_nr_pkgs failed Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/enquire.py", line 115, in _get_estimate_nr_apps_and_nr_pkgs tmp_matches = enquire.get_mset(0, len(self.db), None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__' Traceback (most recent call last): File "/usr/bin/software-center", line 182, in <module> app.run(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1385, in run self.show_available_packages(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1323, in show_available_packages self.view_manager.set_active_view(ViewPages.AVAILABLE) File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 151, in set_active_view view_widget.init_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 173, in init_view self.cache, self.db, self.icons, self.apps_filter) 
File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 81, in __init__ self.build() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 322, in build self._build_homepage_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 120, in _build_homepage_view self._append_whats_new() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 251, in _append_whats_new whats_new_cat = self._update_whats_new_content() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 236, in _update_whats_new_content docs = whats_new_cat.get_documents(self.db) File "/usr/share/software-center/softwarecenter/db/categories.py", line 132, in get_documents nonblocking_load=False) File "/usr/share/software-center/softwarecenter/db/enquire.py", line 317, in set_query self._blocking_perform_search() File "/usr/share/software-center/softwarecenter/db/enquire.py", line 212, in _blocking_perform_search matches = enquire.get_mset(0, self.limit, None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__'

    Read the article

  • Silverlight Cream for December 12, 2010 -- #1008

    - by Dave Campbell
    In this Issue: Michael Washington, Samuel Jack, Alfred Astort(-2-), Nokola(-2-), Avi Pilosof, Chris Klug, Pete Brown, Laurent Bugnion(-2-), and Jaime Rodriguez(-2-, -3-). Above the Fold: Silverlight: "Sharing resources and styles between projects in Silverlight" Chris Klug WP7: "Windows Phone Application Performance at Silverlight Firestarter" Jaime Rodriguez Training: "Silverlight View Model (MVVM) - A Play In One Act" Michael Washington Shoutouts: Koen Zwikstra announced the availability of the first Silverlight Spy 4 Preview 1. Gavin Wignall announced the launch of a festive game built with Silverlight 4, hosted on Azure ... free to play. From SilverlightCream.com: Silverlight View Model (MVVM) - A Play In One Act Michael Washington has an interesting take on writing a blog post with this 'play' version of Silverlight View Models and Expression Blend with a heaping dose of Behaviors added in for flavoring. Build a Windows Phone Game in 3 days – Day 1 Samuel Jack is attempting to build a WP7 game in 3 days including downloading the tools and an XNA book... interesting to see where he's headed with this venture. 4 of 10 - Make sure your finger can hit the target and text is legible Continuing with a series of tips from the folks reviewing apps for the marketplace via Alfred Astort is this number 4 -- touch target size and legible text. 5 of 10 - Give feedback on touch and progress within your UI Alfred Astort's number 5 is also up, and continues the touch discussion with this tip about giving the user feedback on their touch. Fantasia Painter Released for Windows Phone 7 + Tips Nokola took the release of his Fantasia Painter on WP7 as an opportunity not only to blog about the fact that we can go buy it, but has a blog full of hints and tips that he gathered while working on it. Games for Windows Phone 7 Resources: Reducing Load Times, RPG Kit; Other Nokola also blogged about the release of the new games education pack, and gives up the cursor he uses in his videos after being asked... The simplest way to do design-time ViewModels with MVVM and Blend. Avi Pilosof attacks the design-time ViewModel issue in Blend with a 'no code' solution. Sharing resources and styles between projects in Silverlight Chris Klug is talking about sharing resources and styles across a large Silverlight project... near and dear to my heart at this moment. Dynamically Generating Controls in WPF and Silverlight Pete Brown has a post up that's generated some interest... creating controls at runtime... and he's demonstrating several different ways for both Silverlight and WPF #twitter for Windows Phone 7 protips (#wp7) Laurent Bugnion was posting these great tips for Twitter for WP7 and rolled all 16 of them up into a blog post... check them and the app out... Increasing touch surface (#wp7dev) Laurent Bugnion's most current post should be of great interest to WP7 devs... providing more touch surface for your user's fat fingers, err, I mean their fat fingerings :) ... great information and samples ... and interestingly it is a fail point as listed by Alfred Astort above. Windows Phone Application Performance at Silverlight Firestarter This material from Jaime Rodriguez actually hit prior to his Firestarter presentation, but should be required reading for anyone doing a WP7 app... great Performance tips from the trenches... slide deck, cheat-sheet, and code. UpdateSourceTrigger on Windows Phone data bindings Another post from Jaime Rodriguez actually went through a couple revisions already..
how about a WP7 TextBox that fires notifications to the ViewModel when the text changes? ... would you like a behavior with that? Details on the Push Notification app limits Jaime Rodriguez has yet another required reading post up on Push Notification limits ... what it really entails and how you can be a good WP7 citizen by the way you program your app. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Building a Mafia…TechFest Style

    - by David Hoerster
    It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy.  We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012.  After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together.  I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event.  As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities.  (Hopefully I've got all my facts straight.)  TechFest started as an idea Eric Kepes and myself had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011.  Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog).  The line-up was great, but we felt our audience wasn’t as broad as it should have been.  We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference. We started contacting leaders from Pittsburgh’s various user groups.  Eric and I split up the ones that we knew about, and we just started making contacts.  Most of the people we started contacting had never heard of us, nor we of them.  But we all had one thing in common – we ran user groups whose primary goal is educating our members to make them better at what they do. Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders.  One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer.  He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby and I’m sure a handful of other languages.  He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball. Once we realized we had enough interest with the user group leaders, we decided to not have a Fall Code Camp and instead focus on this new entity. Flash-forward to October of 2011.  With the help of Jeremy Jarrell (Pittsburgh Agile leader), I set up a meeting with the leaders of many of Pittsburgh’s technical user groups.  We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people.  We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal.  As a result, the name “TechFest Mafia” was born and kind of stuck. Over the next 7 months or so, we had our starts and stops.  There were moments where I thought this event would not happen, either because we wouldn’t have the right mix of topics (was I off there!), or enough people wouldn’t register (OK, I was wrong there, too!), or we wouldn’t find an appropriate venue (hmm…wrong there, too), or we wouldn’t find enough sponsors to help support the event (wow…not doing so well).  Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting.  We had a bit of luck, too.
But in the end, the passion that we had to put together an event that was really about making ourselves better at what we do really paid off. I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012.  From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event.  I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013.  See you there!

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Form application into a SharePoint 2007 (WSS version 3.0) site. In this post, I’ll lay out our architectural thinking, and in part two, I’ll describe the technical details. Business Case Our challenge was this: we needed an easy way for a small group of our users to upload documents, in batches.  They also needed to quickly set the meta data values, as well as set security on individual files. The out-of-the-box uploads just didn’t fit.  The single file upload allows setting the meta data, but our users would be uploading dozens of files.  The multiple upload would allow our users to upload batches of files, but it doesn’t allow them to set the meta data during upload.  Also, neither upload method allows the users to set the permissions on the file. Our Solution We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly).  Another option would have been using a technology like Silverlight (or Flash?), but our team didn’t have the skills necessary to build with these. So, after looking at what was technically possible, and also what skills our team had, we settled on a Windows Form application.  We also decided to deliver it to the clients via Click Once, so we would have the ability to easily update the application in the future. Lessons Learned After deploying our solution, we’ve learned a few lessons.  First, you’ll need to have the .Net Framework installed on the client computers.  We knew this, but we still ran into issues making sure our users had the proper framework version installed.  Second, we had issues with authentication.  Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of.  (See my earlier post about Clearing Saved Passwords for the fix to our problem). Our third issue was how we dealt with uploading files that were named the same.  Our application would replace the existing file with the new file, which is the way we expected it to work.  However, our users wanted to upload weekly reports, named the same as the previous week.  We solved this by using folders within the document library to keep the sets of reports separate from previous weeks. One last thing to consider before implementing a solution like this is what browsers and platforms your users will be working from.  We only needed to support IE and Windows, which works fine.  However, if you need to support Firefox, there are add-ons that allow Click Once to work with Firefox.  This is still a Windows only solution though.  In order to support Macs, you’d have to focus on either browser techniques (AJAX?) or Silverlight/Flash. Summary Our users are happy with the Click Once app.  It allowed them to move all of their content to our SharePoint site in under a couple of hours, which they were thrilled with.  We’re happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.
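
    For readers curious what the core upload step of such a client might look like, here is a minimal, hypothetical sketch (not the actual application's code): it pushes a local file into a WSS 3.0 document library over HTTP using the caller's Windows credentials. The site URL, library, and file names are invented, and setting meta data or per-item permissions would need additional calls (for example to the Lists.asmx web service) that are omitted here.

    // Hypothetical sketch only - not the original Click Once application's code.
    using System;
    using System.Net;

    class DocumentUploader
    {
        static void Main()
        {
            // Both paths below are made-up examples.
            string localPath = @"C:\Reports\WeeklyReport.xlsx";
            string targetUrl = "http://intranet/sites/team/Shared Documents/WeeklyReport.xlsx";

            using (var client = new WebClient())
            {
                // Send the logged-on user's domain credentials, as a Click Once app would.
                client.UseDefaultCredentials = true;
                // An HTTP PUT into a document library uploads (or overwrites) the file.
                client.UploadFile(targetUrl, "PUT", localPath);
            }

            Console.WriteLine("Uploaded.");
        }
    }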

    Read the article

  • The Evolution Of C#

    - by Paulo Morgado
    The first release of C# (C# 1.0) was all about building a new language for managed code that appealed, mostly, to C++ and Java programmers. The second release (C# 2.0) was mostly about adding what there wasn’t time to build into the 1.0 release. The main feature for this release was Generics. The third release (C# 3.0) was all about reducing the impedance mismatch between general purpose programming languages and databases. To achieve this goal, several functional programming features were added to the language and LINQ was born. Going forward, new trends are showing up in the industry and modern programming languages need to be more: Declarative With imperative languages, although the eye is on the what, programs need to focus on the how. This leads to over-specification of the solution to the problem at hand, making it next to impossible for the execution engine to be smart about the execution of the program and optimize it to run more efficiently (given the hardware available, for example). Declarative languages, on the other hand, focus only on the what and leave the how to the execution engine. LINQ made C# more declarative by using higher level constructs like orderby and group by that give the execution engine a much better chance of optimizing the execution (by parallelizing it, for example). Concurrent Concurrency is hard, needs to be thought about, and is very hard to shoehorn into a programming language. Parallel.For (from the parallel extensions) looks like a parallel for because enough expressiveness has been built into C# 3.0 to allow this without having to commit to specific language syntax. Dynamic There has been lots of debate on which are the better programming languages: static or dynamic. The fact is that both have good qualities and users of both types of languages want to have it all. All these trends require a paradigm switch. C# is, in many ways, already a multi-paradigm language. It’s still very object oriented (class oriented as some might say) but it can be argued that C# 3.0 has become a functional programming language because it has all the cornerstones of what a functional programming language needs. Moving forward, it will have even more. Besides the influence of these trends, there was a decision of co-evolution of the C# and Visual Basic programming languages. Since its inception, there has been some effort to position C# and Visual Basic against each other and to try to explain what should be done with each language or what kind of programmers use one or the other. Each language should be chosen based on the past experience and familiarity of the developer/team/project/company and not on particular features. In the past, every time a feature was added to one language, the users of the other wanted that feature too. Going forward, when a feature is added to one language, the other will work hard to add the same feature. This doesn’t mean that XML literals will be added to C# (because almost the same can be achieved with LINQ To XML), but Visual Basic will have auto-implemented properties. Most of these features require or are built on top of features of the .NET Framework, and the focus for C# 4.0 was on dynamic programming. Not just dynamic types but being able to talk with anything that isn’t a .NET class. Also introduced in C# 4.0 are co-variance and contra-variance for generic interfaces and delegates. Stay tuned for more on the new C# 4.0 features.
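
    To make the declarative point concrete, here is a small illustrative snippet (an editorial example, not from the original post): the orderby and group by constructs describe what result is wanted and leave the how to the execution engine. The Product type and the sample data are invented for the example.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Product { public string Category; public decimal Price; }

    class Program
    {
        static void Main()
        {
            var products = new List<Product>
            {
                new Product { Category = "Books", Price = 12m },
                new Product { Category = "Games", Price = 40m },
                new Product { Category = "Books", Price = 30m },
            };

            // Declarative: state the grouping and ordering you want; the engine
            // decides how to execute it (and could, for example, parallelize it).
            var totals = from p in products
                         group p by p.Category into g
                         orderby g.Key
                         select new { Category = g.Key, Total = g.Sum(p => p.Price) };

            foreach (var t in totals)
                Console.WriteLine("{0}: {1}", t.Category, t.Total);
        }
    }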

    Read the article

  • Silverlight Cream for May 13, 2010 -- #861

    - by Dave Campbell
    In this Issue: Sigurd Snørteland, Jeff Prosise, DaveDev, Joe Zhou, Chris Eargle, John Papa(-2-, -3-), and David Anson(-2-). Shoutouts: In with the links I've listed below, Sigurd Snørteland also sent a link to this app he's working on which is actually pretty cool to see: ZuneLight. The code is not yet available. He also has a no-code demo of a Silverlight Media Center. Pieter Voloshyn, Luiz Thadeu, and Jhun Iti have a very nice Silverlight image editor up: Thumba From SilverlightCream.com: WP7 - Silverlight on mobile Sigurd Snørteland submitted some links for me that have been translated to English from his blog. I hope the pages come out good because he's got a lot of good stuff on there. This one has a link to a presentation he did, and 4 projects you can load up in the emulator that he's converted to the phone: weather, worldclock, coverflow, and solitaire ... pretty cool... thanks for the links Sigurd! Understanding Page Orientation in Silverlight for Windows Phone Jeff Prosise has a really nice post up on page orientation in WP7 ... what it means to your app, how to detect it, and example code for what to do then... also love a quote by Jeff: "Silverlight for Windows Phone is the hottest thing since color TV" Why you should check out Expression Blend Behaviors when using Silverlight DaveDev has a post up describing Behaviors and why we should use them, plus tons of external links to resources, blogs, videos... all good stuff... Fiddler inspector for WCF Silverlight Polling Duplex and WCF RIA Joe Zhou announces and provides a link to a new Fiddler inspector that understands the framing in Polling Duplex and also raw binary xml and binary SOAP. Windows Phone Controls v0.7 Chris Eargle reports the release of Version 0.7 of the Windows Phone Controls project on CodePlex ... this includes a Pivot Control and a Panorama Control... both very nicely done. Binding to Silverlight ComboBox and Using SelectedValue, SelectedValuePath and DisplayMemberPath John Papa responds to a user question and puts up a nice post about binding to a ComboBox and then going from the selected item to some other property ... code included No More Boxes! Exploring the PathListBox (Silverlight TV #25) Silverlight TV 25 went up on Tuesday ... thought it was going to be Thursday?? anyway ... John Papa and Adam Kinney are discussing the PathListBox and looking at some cool demos thereof. Exposing SOAP, OData, and JSON Endpoints for RIA Services (Silverlight TV 26) Since today IS Thursday, we have a new Silverlight TV, number 26, and John Papa is chatting with Deepesh Mohnani of the WCF RIA Services team about exposing all sorts of endpoints... should be something in there for everybody :) Workaround for a Silverlight data binding bug affecting various scenarios - including DataGrid+ContextMenu David Anson details the rabbit-trail he and others on the team followed in response to a problem reported via Twitter where the binding on a DataGrid seemed off by a row(!) ... weird but true, validated, and SL3/4 are bug-for-bug compatible with this too! ... But David wouldn't leave us there... he also has a workaround.
Sharing the code for a simple Silverlight 4 REST-based cloud-oriented file management app for Azure and S3 David Anson had an opportunity to build an app he's wanted to build for a while and shares it with us: Blobstore -- a small, lightweight Silverlight 4 application that acts as a basic front-end for the Windows Azure Simple Data Storage and the Amazon Simple Storage Service (S3) -- and remember I said he shared the source :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer’s blog. You can also find the WebLogic documentation here. As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB now supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I’m thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here. Result Caching in a dedicated JVM This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB Java heap. In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached and, when called again, the respective results are retrieved from the cache rather than the external system. Step 1 – Set up your Coherence Server Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I’m using the default Cache Config and Override files in the domain: -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote Just in case you need it, here is my Coherence Server classpath: /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar: /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar: /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar By default, OSB will try and create a local result cache instance.
You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers: -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server. Step 2 – Configure your Business Service Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable “Result Caching”. Additionally, you need to determine what the cache data will be keyed on. In the example below, I’m keying it on the unique Employee Id. The Results As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Results Cache.
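
    To give a flavour of the Step 2 configuration, the cache token is defined as an expression evaluated against the request message. A hypothetical cache token expression keyed on the Employee Id (the element names here are invented, not taken from the original service) might look like:

    fn:data($body/emp:GetEmployeeDataRequest/emp:EmployeeId)

    The expiration time for the cached result is set alongside it in the same Advanced Properties section, so stale cross-reference data ages out on whatever schedule suits the back-end refresh.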

    Read the article

  • Xorg does not see my monitor EDID

    - by sean farley
    Below is the output from my Xorg.0.log. X.Org X Server 1.11.3 Release Date: 2011-12-16 [ 22.311] X Protocol Version 11, Revision 0 [ 22.311] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu [ 22.311] Current Operating System: Linux sean-P55-USB3 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 [ 22.311] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=0a34603e-aee9-44d1-8982-a5a5a38c3e4d ro quiet splash [ 22.311] Build Date: 29 August 2012 12:12:33AM [ 22.311] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) [ 22.311] Current version of pixman: 0.24.4 [ 22.311] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 22.311] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 22.311] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 17 13:20:45 2012 [ 22.311] (==) Using config file: "/etc/X11/xorg.conf" [ 22.311] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 22.311] (==) No Layout section. Using the first Screen section. [ 22.311] (==) No screen section available. Using defaults. [ 22.311] (**) |-->Screen "Default Screen Section" (0) [ 22.311] (**) | |-->Monitor "<default monitor>" [ 22.311] (==) No device specified for screen "Default Screen Section". Using the first device section listed. [ 22.311] (**) | |-->Device "Default Device" [ 22.311] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 22.311] (==) Automatically adding devices I have searched all over, and followed lots of dead ends and unanswered questions on this issue. I need to get this monitor recognised so I can use the native resolution of 1600x1200. The Nvidia driver in Windows has no problem with this. The monitor is an old Iiyama HM204DT A. Is there a way of configuring Xorg manually to get this working? I have tried xrandr but this will not work. Output:- sean@sean-P55-USB3:~$ xrandr Screen 0: minimum 8 x 8, current 1152 x 864, maximum 16384 x 16384 DVI-I-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.0 + 1360x768 60.0 59.8 1152x864 60.0* 800x600 72.2 60.3 56.2 680x384 119.9 119.6 640x480 59.9 512x384 120.0 400x300 144.4 320x240 120.1 DVI-I-1 disconnected (normal left inverted right x axis y axis) DVI-I-2 disconnected (normal left inverted right x axis y axis) HDMI-0 disconnected (normal left inverted right x axis y axis) DVI-I-3 disconnected (normal left inverted right x axis y axis) Tried Nvidia Xorg.config: sean@sean-P55-USB3:~$ sudo nvidia-xconfig [sudo] password for sean: Using X configuration file: "/etc/X11/xorg.conf". VALIDATION ERROR: Data incomplete in file /etc/X11/xorg.conf. Device section "Default Device" must have a Driver line. Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.nvidia-xconfig-original' Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup' New X configuration file written to '/etc/X11/xorg.conf' How do I insert a driver line? This is a bit of a pain as I want to use my Vectorworks CAD program in a WinXP Vbox at 1600x1200 but all virtual displays are restricted to the host screen resolution. Do I need to manually create EDID info in Xorg? I am slightly confused about how Xorg and Nvidia relate. Please help
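
    For what it's worth, one commonly suggested workaround when EDID is not read is to declare the driver and a modeline by hand in /etc/X11/xorg.conf. The snippet below is only a sketch, not a configuration tested against this monitor: the modeline was produced with cvt 1600 1200 60, the identifiers are arbitrary, and whether the NVIDIA driver honours a hand-written mode can depend on its mode-validation settings.

    Section "Monitor"
        Identifier "Iiyama HM204DT"
        # Generated with: cvt 1600 1200 60 (values are illustrative)
        Modeline "1600x1200_60.00"  161.00  1600 1712 1880 2160  1200 1203 1207 1245 -hsync +vsync
    EndSection

    Section "Device"
        Identifier "Default Device"
        Driver     "nvidia"
        # Tell the driver not to rely on the (missing) EDID data.
        Option     "UseEDID" "false"
    EndSection

    Section "Screen"
        Identifier "Default Screen Section"
        Device     "Default Device"
        Monitor    "Iiyama HM204DT"
        SubSection "Display"
            Modes "1600x1200_60.00"
        EndSubSection
    EndSection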

    Read the article

  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways you can do this task. Basic data: • You can print a listing of all the items in the outline. • With your outline open, choose File/Print... • Choose the "Save document as" command on the right, and choose Excel (or xlsx). • HINT: If you have not expanded your entire outline, it's faster to use the commands in Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview. • NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view. Advanced data: If you desire a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center. To add additional columns to the HP Quality Center publishing style: 1. Make a copy of the publishing style. This process ensures that you have a good copy to revert to if something goes wrong with your customizations, and also allows you to keep your modifications when the software is upgraded. 2. Open the copy of the columnspec.xml file in your favorite XML editor - I use Notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.) 3. Scroll down the columnspec file until you find the column to include. All the metadata fields that can be added to the report are listed in the columnspec file - you just need to tell the system to include the columns. 4. You will see a series of sections like this (the original sample is not reproduced in this excerpt; a reconstructed fragment appears below). 5. Change the value for "col export" to "yes". This will include the column in the Excel file. 6. If desired, change the value for "Play_ModesColHeader" to be whatever name you wish to appear in the Excel column heading. 7. Save the columnspec file. 8. Save the publishing style package. Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI
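
    The fragment below is a reconstruction of the kind of section step 4 refers to, based only on the attribute and header names mentioned in the steps; the real columnspec.xml layout in your UPK installation may differ, so treat the element names as illustrative.

    <!-- Reconstructed illustration only; check your own columnspec.xml for the exact layout. -->
    <col export="no">
        <Play_ModesColHeader>Play Modes</Play_ModesColHeader>
    </col>

    Changing export="no" to export="yes" is what includes the column in the Excel output (step 5), and the text inside the Play_ModesColHeader element becomes the column heading shown in Excel (step 6).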

    Read the article

  • GCC 4.2.1 Compiling on Cygwin(Win7 64bit) for iPhone [closed]

    - by Kenneth Noland
    Hey This is going to take a long while to explain, but the short version is that I am currently attempting to compile the LLVM GCC frontend for ARMv7 to compile apps for the Cortex-A8(iPhone 3GS). I'm running into an error from LD when compiling libgcc(part of the gcc compilation process) that has been driving me mad! The command is this: /usr/llvm-gcc-4.2-2.8.source/build/./gcc/xgcc \ -B/usr/llvm-gcc-4.2_2.8.source/build/./gcc \ -B/usr/local/arm-apple-darwin/bin \ -B/usr/local/arm-apple-darwin/lib \ -isystem /usr/local/arm-apple-darwin/include \ -isystem /usr/local/arm-apple-darwin/sys-include \ -O2 -g -W -Wall -Wwrite-strings -wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-inline -dynamiclib -nodefaultlibs -W1,-dead_strip \ -marm \ -install_name /usr/local/arm-apple-darwin/lib/libgcc_s.1.dylib \ -single_module -o ./libgcc_s.1.dylib.tmp \ -W1,-exported_symbols_list,libgcc/./libgcc.map -compatibility_version 1 -current_version 1.0 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -Dinhibit_libc \ ... long list of .o files ... \ -lc And the result is typically a lot of undefined references to malloc, free, exit, etc. which typically indicate that libc is not getting compiled in. After going through the list of errors that ld is throwing, I see at the top that it is attempting to pull in /usr/lib/libc.a and complains that it is not the correct platform. Okay, that makes sense, so I spent 5 minutes on google and found an answer. Turns out that if I copy the libSystem.dylib and rename it to libc.dylib, that should solve the problem, but it doesn't. I couldn't find a copy of that file on my phone, so I pulled it directly from the SDK. I then get this strange error: ld64: in /usr/local/arm-apple-darwin/lib/libc.dylib, can't re-map file, errno=22 At this point, I did everything I could think of. I grabbed a fresh copy of my /usr/lib folder from my iphone and confirmed that libSystem.dylib(and libSystem.B.dylib) wasn't there. I unpacked the raw .ipsw package for iOS 4.2.1 and once again, I could not find a copy of libSystem.dylib there either. I unpacked the iPhoneSDK and MacOS SDK and I managed to find a copy of it in both, but that error just kept persisting. I copied libSystem.dylib, libSystem.B.dylib, tried all sorts of combinations of renaming to libc.dylib and still nothing but errors. I can't find a way to get it to recognize the file and link against it. I also tried linking against the libc.a located in the iphone SDK and that didn't work either. I checked what ./xgcc was firing off, and it was my freshly built copy of arm-apple-darwin-ld64 which should be fine. A little bit of background here. I built LLVM+Clang 2.8 with no errors, and I rebuilt the ODCCTools with some light modifications to get it to compile on Cygwin(I'll post my changes in a patch along with a tutorial if I can get this to work). I also grabbed the iphone-dev "includes" and "csu" project and those completed successfully, although there really is no point to them since I can't get it to link against crt0.a. I'm running out of ideas here. Can anyone help me out on this?

    Read the article

  • Code is not the best way to draw

    - by Bertrand Le Roy
    It should be quite obvious: drawing requires constant visual feedback. Why is it then that we still draw with code in so many situations? Of course it’s because the low-level APIs always come first, and design tools are built after and on top of those. Existing design tools also don’t typically include complex UI elements such as buttons. When we launched our Touch Display module for Netduino Go!, we naturally built APIs that made it easy to draw on the screen from code, but very soon, we felt the limitations and tedium of drawing in code. In particular, any modification requires a modification of the code, followed by compilation and deployment. When trying to set up buttons at pixel precision, the process is not optimal. On the other hand, code is irreplaceable as a way to automate repetitive tasks. While tools like Illustrator have ways to repeat graphical elements, they do so in a way that is a little alien and counter-intuitive to my developer mind. From these reflections, I knew that I wanted a design tool that would be structurally code-centric but that would still enable immediate feedback and mouse adjustments. While thinking about the best way to achieve this goal, I saw this fantastic video by Bret Victor: The key to the magic in all these demos is permanent execution of the code being edited. Whenever a parameter is being modified, everything is re-executed immediately so that the impact of the modification is instantaneously visible. If you do this all the time, the code and the result of its execution fuse in the mind of the user into dual representations of a single object. All mental barriers disappear. It’s like magic. The tool I built, Nutshell, is just another implementation of this principle. It manipulates a list of graphical operations on the screen. Each operation has a nice editor, and translates into a bit of code. Any modification to the parameters of the operation will modify the bit of generated code and trigger a re-execution of the whole program. This happens so fast that it feels like the drawing reacts instantaneously to all changes. The order of the operations is also the order in which the code gets executed. So if you want to bring objects to the front, move them down in the list, and up if you want to move them to the back: But where it gets really fun is when you start applying code constructs such as loops to the design tool. The elements that you put inside of a loop can use the loop counter in expressions, enabling crazy scenarios while retaining the real-time editing features. When you’re done building, you can just deploy the code to the device and see it run in its native environment: This works thanks to two code generators. The first code generator is building JavaScript that is executed in the browser to build the canvas view in the web page hosting the tool. The second code generator is building the C# code that will run on the Netduino Go! microcontroller and that will drive the display module. The possibilities are fascinating, even if you don’t care about driving small touch screens from microcontrollers: it is now possible, within a reasonable budget, to build specialized design tools for very vertical applications. Direct feedback is a powerful ally in many domains. Code generation driven by visual designers has become more approachable than ever thanks to extraordinary JavaScript libraries and to the powerful development platform that modern browsers provide.
I encourage you to tinker with Nutshell and let it open your eyes to new possibilities that you may not have considered before. It’s open source. And of course, my company, Nwazet, can help you develop your own custom browser-based direct feedback design tools. This is real visual programming…

    Read the article

  • Oracle at The Forrester Customer Intelligence and Marketing Leadership Forums

    - by Christie Flanagan
    The Forrester Customer Intelligence Forum and the Forrester Marketing Leadership Forums will soon be here.  This year’s events will be co-located on April 18-19 at the J.W. Marriott at the L.A. Live entertainment complex in downtown Los Angeles.  Last year’s Marketing Forum was quite memorable for me.  You see, while Forrester analysts and business marketers were busy mingling over at the Marriott, another marketing powerhouse was taking up residence a few feet away at The Staples Center.  That’s right, folks. Lady Gaga was coming to town.  And, as I came to learn, it made perfect sense for Lady Gaga and her legions of fans to be sharing a small patch of downtown L.A. with marketing leaders from all over the world.  After all, whether you like Lady Gaga or not, what pop star in recent memory has done more to build herself into a brand and to create an engaging, social and interactive customer experience for her Little Monsters?  While Lady Gaga won’t be back in town for this year’s Forrester events, there are still plenty of compelling reasons to make the trip out to Los Angeles.  The theme for The Forrester Customer Intelligence and Marketing Leadership Forums this year is “From Cool To Critical: Creating Engagement In The Age Of The Customer” and will tackle the important questions about how marketers can survive and thrive in the age of the empowered customer: • How can you assess consumer uptake of new innovations? • How do you build deep customer knowledge to drive competitive advantage? • How do you drive deep, personalized customer engagement? • What is more valuable — eyeballs or engagement? • How do business customers engage in new media types? • How can you tie social data to corporate data? • Who should lead the movement to customer obsession? • How should you shift your planning and measurement approaches to accommodate more data and a higher signal-to-noise ratio? • What role does technology play in customizing and synchronizing marketing efforts across channels? As a platinum sponsor of the event, there will be a number of ways to interact with Oracle while you’re attending the Forums.  Here are some of the highlights: Oracle Speaking Session Thursday, April 19, 9:15am – 9:55am Maximize Customer Engagement and Retention with Integrated Marketing & Loyalty Melissa Boxer, Vice President, Oracle CRM Marketing & Loyalty Customers expect to interact with your company, brand and products in more ways than ever before.  New devices and channels, such as mobile, social and web, are creating radical shifts in the customer buying process and the ways your company can reach and communicate with existing and potential customers. While Marketing's objectives (attract, convert, retain) remain fundamentally the same, your approach and tools must adapt quickly to succeed in this more complex, cross-channel world. Hear how leading brands are using Oracle's integrated marketing and loyalty solutions to maximize customer engagement and retention through better planning, execution, and measurement of synchronized cross-channel marketing initiatives. Solution Showcase Wednesday, April 18: 10:20am – 11:50am, 12:30pm – 1:30pm, 2:55pm – 3:40pm; Thursday, April 19: 9:55am – 10:40am, 12:00pm – 1:00pm. Solution Showcase & Networking Reception Wednesday, April 18: 5:10pm – 6:20pm. Be sure to follow the #webcenter hashtag for updates on these events.
And for a more considered perspective on what Lady Gaga can teach businesses about branding and customer experience, check out Denise Lee Yohn’s post, Lessons from Lady Gaga from the Brand as Business Bites blog.

    Read the article

  • MySQL – Video Course – MySQL Backup and Recovery Fundamentals

    - by Pinal Dave
    Data is one of the most crucial assets for any organization, and keeping data safe is the biggest challenge for any DBA. This is true for any organization. Think about a scenario where you have an extremely important database and you accidentally delete its most important table. I am sure this is a very difficult time. In times like this, people often get stressed and even make a second mistake. In my career of 10 years I have made this mistake more than once, and I have often been stressed out due to the unavailability of a database backup. In the SQL Server field, we have plenty of help on this subject, but in the MySQL domain there is not enough. For that reason I have built this MySQL course on Backup and Recovery.
    Course Outline
    Data is very important to any application and business. It is very important that every business plan for data safety. Database backup strategies are often discussed after the disaster has already happened. In this introductory course we will explore a few of the basic backup strategies every business should implement for data safety. We will explore how we can recover our server quickly after any unfriendly incident to our MySQL database. Click to View Course
    Here are the various important aspects which we have discussed in this course:
    •    How to take a backup of a single database?
    •    How to take a backup of multiple databases?
    •    How to back up various database objects?
    •    How to restore a single database?
    •    How to restore multiple databases?
    •    How to use MySQL Workbench for Backup and Restore?
    •    How to restore Point in Time for any database?
    •    What is the best time to backup?
    •    How to copy a database from one server to another server?
    All of the above concepts and many more subjects are covered in the MySQL Backup and Recovery Fundamentals course. It is available on Pluralsight.
    Scenarios
    As learning about Backup and Recovery can be rather boring, I decided to create two fictitious characters and demonstrate the entire course through their conversation. The story is about Mike and Rahul. Mike is a senior database administrator in the USA and Rahul is an intern in India. Rahul aspires to become a senior database administrator, and this is a story about his challenges and how he overcomes them. I had a great time building this course and have received very good feedback on it. I encourage all of you to learn from the MySQL Backup and Recovery Fundamentals course and this innovative effort. Your feedback will be very valuable. You will need a valid Pluralsight subscription to watch this course.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Tips and Tricks, T SQL
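    As a small, hedged illustration (a sketch, not material taken from the course): before any backup that you may later want to roll forward with point-in-time recovery, it is worth confirming that binary logging is enabled and noting the current log position. The statements below are all standard MySQL commands.
        -- Confirm that binary logging is enabled (a prerequisite for point-in-time recovery)
        SHOW VARIABLES LIKE 'log_bin';
        -- Record the current binary log file name and position before the backup starts
        SHOW MASTER STATUS;
        -- Briefly block writes so the recorded position matches the backup exactly
        FLUSH TABLES WITH READ LOCK;
        -- ... take the backup with your preferred tool, then release the lock ...
        UNLOCK TABLES;
    With the log position recorded, a later point-in-time restore can replay the binary log from that position up to just before the accidental DROP or DELETE.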

    Read the article

  • jtreg update, March 2012

    - by jjg
    There is a new update for jtreg 4.1, b04, available. The primary changes have been to support faster and more reliable test runs, especially for tests in the jdk/ repository. [For users inside Oracle, there is preliminary direct support for gathering code coverage data using jcov while running tests, and for generating a coverage report when all the tests have been run.] jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg/.
    Scratch directories
    On platforms like Windows, if a test leaves a file open when the test is over, that can cause a problem for downstream tests, because the scratch directory cannot be emptied beforehand. This is addressed in agentvm mode by discarding any agents using that scratch directory and starting new agents using a new, empty scratch directory. Successive directories use the suffixes _1, _2, etc. If you see such directories appearing in the work directory, that is an indication that files were left open in the preceding directory in the series.
    Locking support
    Some tests use shared system resources such as fixed port numbers. This causes a problem when running tests concurrently. So, you can now mark a directory such that all the tests within all such directories will be run sequentially, even if you use -concurrency:N on the command line to run the rest of the tests in parallel. This is seen as a short-term solution: it is recommended that tests not use shared system resources whenever possible. If you are running multiple instances of jtreg on the same machine at the same time, you can use a new option -lock:file to specify a file to be used for file locking; otherwise, the locking will just be within the JVM used to run jtreg.
    "autovm mode"
    By default, if no options to the contrary are given on the command line, tests will be run in othervm mode. Now, a test suite can be marked so that the default execution mode is "agentvm" mode. In conjunction with this, you can now mark a directory such that all the tests within that directory will be run in "othervm" mode. Conceptually, this is equivalent to putting /othervm on every appropriate action on every test in that directory and any subdirectories. This is seen as a short-term solution: it is recommended that tests be adapted to use agentvm mode, or use "@run main/othervm" explicitly.
    Info in test result files
    The user name and jtreg version info are now stored in the properties near the beginning of the .jtr file.
    Build
    The makefiles used to build and test jtreg have been reorganized and simplified. jtreg is now using JT Harness version 4.4.
    Other
    jtreg provides access to GNOME_DESKTOP_SESSION_ID when set. jtreg ensures that shell tests are given an absolute path for the JDK under test. jtreg now honors the "first sentence rule" for the description given by @summary. jtreg saves the default locale before executing a test in samevm or agentvm mode, and restores it afterwards.
    Bug fixes
    Fixed: jtreg tried to execute a test even if the compilation failed in agentvm mode because of a JVM crash. Fixed: jtreg did not correctly handle the -compilejdk option.
    Acknowledgements
    Thanks to Alan, Amy, Andrey, Brad, Christine, Dima, Max, Mike, Sherman, Steve and others for their help, suggestions, bug reports and for testing this latest version.

    Read the article

  • New Training and Support Center Coming Soon!

    - by Ruth
    The CRM On Demand Training and Support Center is getting a face lift. In May 2010 we will unveil the new and improved layout, look and feel, and even some new content. Some of you told us loud and clear that you wanted an easier way to find our training courses and other important information. Well, here you are: Immediately you see the look and feel has changed and things have moved around a bit. You may ask, "How can I find the training catalog? Service requests? Downloads?" There are a few ways to find what you're looking for. You may use the search box to find training, quick guides, downloads, best practices, FAQs and more. You may also click the tabs or links in the blue bar, like Browse Training, to browse other documents and information. Here is a brief outline of the tabs and links that will help as you navigate this new tool: The Support tab provides alerts and notifications specific to your application environment. The Get Started tab is organized by role and contains links to resources aimed at helping you get the most out of your first 30 days with CRM On Demand. The Learn More tab outlines information in key topic areas, like administration, integration, and reports. Go to this tab to get the resources you need to move beyond the basics. The Release Information tab contains information specific to the current and upcoming releases of CRM On Demand. Access this tab to learn about and prepare for upgrades to your CRM On Demand application. The Best Practices tab contains a compilation of knowledge gained by experts that work with CRM On Demand day in and day out. Access this knowledge to benefit from their vast experience. The Communities tab offers connections to others in the CRM On Demand community through forums, communities, blogs, and more. The Browse training link opens the training catalog.Take a look at the instructor-led training, Webinars, quick guides, use cases, and tools available to you. The Browse Knowledge link takes you to our knowledge base where you can get answers to frequently asked questions. The Submit a Service Request link directs you to My Oracle Support where you can log a service request. The steps in that process have not changed. The Web Services Library provides simple APIs and a link to Oracle Sample Code where you can get samples that can help you build custom integrations. The Add-On Applications link allows access to our downloadable applications that allow you to extend the functionality of CRM On Demand. The Templates and Tools link provides access to resources that can help you design and build CRM On Demand to meet your company's specific needs. A lot has changed and I know it is a lot to take in. To help you out, we have a printable quick guide that you can use during this transition. As always, let us know what you think: [email protected].

    Read the article

  • InSync12 and Australia Visits: UX is Global, Regional, Everywhere!

    - by ultan o'broin
    I attended the Australian Oracle User Group (AUSOUG) and Quest International User Group's InSync12 event in Melbourne, Australia: the user group conference for Oracle products in the ANZ region. I demoed Oracle Fusion Applications and then presented how Oracle crafted the world-class Fusion Apps user experience (UX). I explained the Oracle user experience design pattern strategy of uptake for all apps, not just Fusion, and what our UX pattern externalization strategy means for customers, partners, and ADF developers. It was a great conference with lots of energy. The InSync12 highlights for me were Oracle's Senior Vice President Cliff Godwin’s fast-moving Oracle E-Business Suite (EBS) roadshow with the killer Oracle Endeca user experience uptake, and Oracle ADF product outreachmeister Chris Muir’s (@chriscmuir) session on the Oracle ADF Mobile solution and his hands-on mobile app development showing how existing ADF/JDev skills can build a secure, code-once, deploy-to-many-devices hybrid app solution in minutes.
    Cliff Godwin shows off the Oracle Endeca integration with Oracle E-Business Suite.
    Chris Muir talked the talk and then walked the walk with Oracle ADF Mobile.
    Applications UX was mixing it up with the crowd at InSync12 too, showing off cool mobile UX solutions, gathering data for future innovations, and engaging with EBS, JD Edwards, and PeopleSoft apps customers and partners. User conferences such as InSync12 are an important part of our Oracle Applications UX user-centered design process, giving real apps users the opportunity to provide real input and a way for us to watch and to listen to their needs and wants and get views on current and emerging UX too.
    Eric Stilan (@icondaddy) of Applications UX uses an iPad to gather feedback on the latest UX designs from conference attendees.
    While in Melbourne, I also visited the impressive Oracle partner Callista for a major ADF and UX pow-wow, and was the, er, star of a very proactive event hosted by another partner, Park Lane Information Technology (coordinated by Bambi Price (@bambiprice) of ODTUG), where I explained what UX is about, and how partners and customers can engage, participate and deploy that Applications UX scientific insight to advantage for their entire business. I also paired up with Oracle Australia in Sydney to visit key customers while there, and back at Oracle in Melbourne I spoke with sales consultants and account managers about regional opportunities and UX strategy, and came away with an understanding of what makes the Oracle market tick in Australia.
    Mobile worker solution development and user experience are hot news in Australia, and this was a great opportunity to team up with Chris Muir and show how the alignment of the twin stars of UX design patterns and ADF technology enables developers to make great-looking, usable apps that really sparkle. Our UX design patterns--or functional (UI) patterns, to use the developer world language--mean that developers now have not only a great tool set for building apps on Oracle ADF/FMW but also proven, tested usability solutions to common problems that they can apply in the IDE too. In all, a whirlwind UX visit, packed with events and delivery opportunities, and all too short a time in the wonderful city of Melbourne. I need to get back there soon! For those who need a reminder, there's a website explaining how to get involved with, and participate in, Applications User Experience (including the Oracle Usability Advisory Board) events and programs.
Thank you to AUSOUG, Quest, InSync, Callista, Park Lane IT, everyone at Oracle Australia, Chris Muir, and all the other people who came together to make this a productive visit. Stay tuned for more UX developments and engagements in the region on the Oracle VoX blog and Usable Apps website too!

    Read the article

  • Windows 8 Apps with HTML5 and JavaScript

    - by Stephen.Walther
    Last week, I finished writing Windows 8 Apps with HTML5 and JavaScript – Yikes! That is a long title. This book is all about writing apps for Windows 8 which can be added to the Windows Store. The book focuses on building apps using HTML5 and JavaScript. If you are already comfortable building websites, then building Windows Store apps is not a huge leap.  I explain how you can create productivity apps, like a Task List app, and games, like a simple arcade game. I also explain how you can publish your app to the Windows Store and make money. To celebrate the release of Windows 8, my publisher is offering a huge 40% discount on the book until November 30, 2012. If you want to take advantage of this discount, follow the link below and enter the discount code WINDEV40 during checkout. http://www.informit.com/promotions/promotion.aspx?promo=139036&walther So what’s in the book?  Here’s an overview of each of the chapters: Chapter 1 – Building Windows Store Apps Contains a walkthrough of creating a super simple Windows app for taking pictures from your webcam. Explains how to publish your app to the Windows Store. Chapter 2 – WinJS Fundamentals Provides an overview of the Windows Library for JavaScript which is the Microsoft library for creating Windows Store apps with JavaScript. Chapter 3 – Observables, Bindings, and Templates You learn how to display a list of items using a template. For example, you learn how to create a template which can be used to display a list of products. Chapter 4 – Using WinJS Controls Overview of the core set of JavaScript controls included with the WinJS library. You learn how to use the Tooltip, ToggleSwitch, Rating, DatePicker, TimePicker, and FlipView controls. Chapter 5 – Creating Forms This chapter explains how to take advantage of HTML5 forms to display specialized keyboards and perform form validation. Chapter 6 – Menus and Flyouts You learn how to display popups, menus, and toolbars using the JavaScript controls included with the WinJS library. Chapter 7 – Using the ListView Control This entire chapter is devoted to the ListView control which is the most important control in the WinJS library. You can use the ListView control to display, sort, filter, and edit a list of items. Chapter 8 – Creating Data Sources Learn how to use a ListView control to display data from the file system, a web service, and IndexedDB. Chapter 9 – App Events and States This chapter explains the standard application events which are raised in a Windows Store app such as the activated and checkpoint events. You also learn how to build apps which adapt automatically to different view states such as portrait and landscape. Chapter 10 – Page Fragments and Navigation This chapter discusses two subjects: You learn how to create custom WinJS controls with Page Controls and you learn how to build apps with multiple pages.  Chapter 11 – Using the Live Connect API Learn how to use Windows Live Services to authenticate users, interact with SkyDrive, and retrieve user profile information (such as a user’s birthday or profile picture). Chapter 12 – Graphics and Games This chapter is devoted to building the Brain Eaters app which is a simple arcade game. Navigate a maze and eat all of the food pellets while avoiding the brain-eating zombies to win the game. Learn how to create the game using HTML5 Canvas.   If you want to buy the book, remember to use the magic discount code WINDEV40 and visit the following link: http://www.informit.com/promotions/promotion.aspx?promo=139036&walther

    Read the article

  • JustMock is here !!

    - by mehfuzh
    As announced earlier by Hristo Kosev at the Telerik blogs, we have started giving out JustMock builds today. This is the first of the early builds before the official Q2 release, and we are pretty excited to get your feedback. It's pretty early to say much about it; a lot depends on your feedback. To add a few words, with JustMock we tried to build a mocking tool with as simple and intuitive a syntax as possible, excluding noise and avoiding any smell it could add to your code [we are still trying every day], and we want to make the tool even better with your help. JustMock can be used to mock virtually anything. Moreover, we left an option open so that its features can be reduced or elevated with a single click. We tried to make a strong API that is as fluent and guided as possible, so that you never get derailed. Our syntax is AAA (Arrange – Act – Assert); we don't believe in the Record – Replay model, which some of the smarter mocking tools are planning to remove from their coming releases or never had [it's always fun to learn from each other]. Overall, more signals equals more complexity, which reminds me of 37signals :-). Currently, here are the things you can do with JustMock (I will cover these more in-depth in the coming days):
    Proxied mode
    •    Mock interfaces and classes with virtuals
    •    Mock properties, including indexers
    •    Set raise event for specific calls
    •    Use matchers to control mock arguments
    •    Assert specific occurrences of mocked calls
    •    Assert using matchers
    •    Do recursive mocks
    •    Do sequential mocking (the same method with the same arguments returns different values or performs different tasks)
    •    Do strict mocking (by default; I prefer loose, so that I can use mocks as stubs)
    Elevated mode
    •    Mock static calls
    •    Mock final classes
    •    Mock sealed classes
    •    Mock extension methods
    •    Partially mock a class member directly using Mock.Arrange
    •    Mock MsCorlib (we will support more and more members in the coming days); currently we support FileInfo, File and DateTime
    These are just a few; take a look at the test project provided with the build to find more [along with the documentation]. Also, one feature that I will be using for my next OSS projects is the ability to run it separately in proxied mode, which makes it easy to redistribute and do personal development in a more DI-oriented model, with the option to elevate as I go.
    Don't forget the URL: www.telerik.com/justmock
    Finally, a little mock code:
        var lvMock = Mock.Create<ILoveJustMock>();
        // set your goal: Echo should return whatever string it receives
        Mock.Arrange(() => lvMock.Echo(Arg.Any<string>())).Returns((string result) => result);
        // perform
        string ret = lvMock.Echo("Yes");
        Assert.Equal(ret, "Yes");
        // make sure everything is fine
        Mock.Assert(() => lvMock.Echo("Yes"), Occurs.Once());
    Hope that helps you get started; I will cover more if not :-).

    Read the article

  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year. 1. Increase organizational awareness | Oracle WebCenter Suite Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, the expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of records such as ERP and CRM systems. 2. Drive online customer engagement | Oracle Real-Time Decisions With more and more business being conducted on the Web, driving increased online customer engagement becomes a critical key to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience online, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendations. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improved critical success metrics such as shopping cart abandonment. 3. Easily build composite applications | Oracle Application Development Framework Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework components provide modular user interface components that can build rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications. 4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications. 5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable and high-performing ECM infrastructures. Two innovations in particular helped make this happen. The core ECM platform itself moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability. 
Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery. Learn more about Oracle's Enterprise 2.0 solutions.

    Read the article

  • Extending Oracle CEP with Predictive Analytics

    - by vikram.shukla(at)oracle.com
    Introduction: OCEP is often used as a business rules engine to execute a set of business logic rules via CQL statements, and to take decisions based on the outcomes of those rules. There are times when configuring rules manually is sufficient, because an application needs to deal with only a small and well-defined set of static rules. However, in many situations customers don't want to pre-define such rules, for two reasons. First, they are dealing with events with lots of columns, and manually crafting such rules for each column, or a set of columns and combinations thereof, is almost impossible. Second, they are content with probabilistic outcomes and do not care about 100% precision. The former is the case when a user is dealing with data of high dimensionality, the latter when an application can live with "false" positives, as they can be discarded after further inspection, say by a Human Task component in Business Process Management software. The primary goal of this blog post is to show how this can be achieved by combining OCEP with Oracle Data Mining® and leveraging the latter's rich set of algorithms and functionality to do predictive analytics in real time on streaming events. The secondary goal of this post is to show how OCEP can be extended to invoke any arbitrary external computation in an RDBMS from within CEP. This extensibility facility is known as the JDBC cartridge. The rest of the post describes the steps required to achieve this. We use the dataset available at http://blogs.oracle.com/datamining/2010/01/fraud_and_anomaly_detection_made_simple.html to showcase the capabilities, and use it to show how transaction anomalies or fraud can be detected.
    Building the model:
    Follow the self-explanatory steps described at the above URL to build the model. It is very simple - it uses built-in Oracle Data Mining PL/SQL packages to cleanse, normalize and build the model out of the dataset. You can also use the graphical Oracle Data Miner® to build the models. To summarize, it involves (a hedged sketch of the model-building call appears at the end of this post):
    •    Specifying which algorithms to use. In this case we use Support Vector Machines, as we're trying to find anomalies in a highly dimensional dataset.
    •    Building the model on the data in the table for the algorithms specified. For this example, the table was populated in the scott/tiger schema with appropriate privileges.
    Configuring the Data Source:
    This is the first step in building a CEP application using such an integration.  Our datasource looks as follows in the server config file.  It is advisable that you use the Visualizer to add it to the running server dynamically, rather than manually edit the file.
<data-source>         <name>DataMining</name>         <data-source-params>             <jndi-names>                 <element>DataMining</element>             </jndi-names>             <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol>         </data-source-params>         <connection-pool-params>             <credential-mapping-enabled></credential-mapping-enabled>             <test-table-name>SQL SELECT 1 from DUAL</test-table-name>             <initial-capacity>1</initial-capacity>             <max-capacity>15</max-capacity>             <capacity-increment>1</capacity-increment>         </connection-pool-params>         <driver-params>             <use-xa-data-source-interface>true</use-xa-data-source-interface>             <driver-name>oracle.jdbc.OracleDriver</driver-name>             <url>jdbc:oracle:thin:@localhost:1522:orcl</url>             <properties>                 <element>                     <value>scott</value>                     <name>user</name>                 </element>                 <element>                     <value>{Salted-3DES}AzFE5dDbO2g=</value>                     <name>password</name>                 </element>                                 <element>                     <name>com.bea.core.datasource.serviceName</name>                     <value>oracle11.2g</value>                 </element>                 <element>                     <name>com.bea.core.datasource.serviceVersion</name>                     <value>11.2.0</value>                 </element>                 <element>                     <name>com.bea.core.datasource.serviceObjectClass</name>                     <value>java.sql.Driver</value>                 </element>             </properties>         </driver-params>     </data-source>   Designing the EPN: The EPN is very simple in this example. We briefly describe each of the components. The adapter ("DataMiningAdapter") reads data from a .csv file and sends it to the CQL processor downstream. The event payload here is same as that of the table in the database (refer to the attached project or do a "desc table-name" from a SQL*PLUS prompt). While this is for convenience in this example, it need not be the case. One can still omit fields in the streaming events, and need not match all columns in the table on which the model was built. Better yet, it does not even need to have the same name as columns in the table, as long as you alias them in the USING clause of the mining function. (Caveat: they still need to draw values from a similar universe or domain, otherwise it constitutes incorrect usage of the model). There are two things in the CQL processor ("DataMiningProc") that make scoring possible on streaming events. 1.      User defined cartridge function Please refer to the OCEP CQL reference manual to find more details about how to define such functions. We include the function below in its entirety for illustration. 
<?xml version="1.0" encoding="UTF-8"?> <jdbcctxconfig:config     xmlns:jdbcctxconfig="http://www.bea.com/ns/wlevs/config/application"     xmlns:jc="http://www.oracle.com/ns/ocep/config/jdbc">        <jc:jdbc-ctx>         <name>Oracle11gR2</name>         <data-source>DataMining</data-source>               <function name="prediction2">                                 <param name="CQLMONTH" type="char"/>                      <param name="WEEKOFMONTH" type="int"/>                      <param name="DAYOFWEEK" type="char" />                      <param name="MAKE" type="char" />                      <param name="ACCIDENTAREA"   type="char" />                      <param name="DAYOFWEEKCLAIMED"  type="char" />                      <param name="MONTHCLAIMED" type="char" />                      <param name="WEEKOFMONTHCLAIMED" type="int" />                      <param name="SEX" type="char" />                      <param name="MARITALSTATUS"   type="char" />                      <param name="AGE" type="int" />                      <param name="FAULT" type="char" />                      <param name="POLICYTYPE"   type="char" />                      <param name="VEHICLECATEGORY"  type="char" />                      <param name="VEHICLEPRICE" type="char" />                      <param name="FRAUDFOUND" type="int" />                      <param name="POLICYNUMBER" type="int" />                      <param name="REPNUMBER" type="int" />                      <param name="DEDUCTIBLE"   type="int" />                      <param name="DRIVERRATING"  type="int" />                      <param name="DAYSPOLICYACCIDENT"   type="char" />                      <param name="DAYSPOLICYCLAIM" type="char" />                      <param name="PASTNUMOFCLAIMS" type="char" />                      <param name="AGEOFVEHICLES" type="char" />                      <param name="AGEOFPOLICYHOLDER" type="char" />                      <param name="POLICEREPORTFILED" type="char" />                      <param name="WITNESSPRESNT" type="char" />                      <param name="AGENTTYPE" type="char" />                      <param name="NUMOFSUPP" type="char" />                      <param name="ADDRCHGCLAIM"   type="char" />                      <param name="NUMOFCARS" type="char" />                      <param name="CQLYEAR" type="int" />                      <param name="BASEPOLICY" type="char" />                                     <return-component-type>char</return-component-type>                                                      <sql><![CDATA[             SELECT to_char(PREDICTION_PROBABILITY(CLAIMSMODEL, '0' USING *))               AS probability             FROM (SELECT  :CQLMONTH AS MONTH,                                            :WEEKOFMONTH AS WEEKOFMONTH,                          :DAYOFWEEK AS DAYOFWEEK,                           :MAKE AS MAKE,                           :ACCIDENTAREA AS ACCIDENTAREA,                           :DAYOFWEEKCLAIMED AS DAYOFWEEKCLAIMED,                           :MONTHCLAIMED AS MONTHCLAIMED,                           :WEEKOFMONTHCLAIMED,                             :SEX AS SEX,                           :MARITALSTATUS AS MARITALSTATUS,                            :AGE AS AGE,                           :FAULT AS FAULT,                           :POLICYTYPE AS POLICYTYPE,                            :VEHICLECATEGORY AS VEHICLECATEGORY,                           :VEHICLEPRICE AS VEHICLEPRICE,                           :FRAUDFOUND AS FRAUDFOUND,                           :POLICYNUMBER AS 
POLICYNUMBER,                           :REPNUMBER AS REPNUMBER,                           :DEDUCTIBLE AS DEDUCTIBLE,                            :DRIVERRATING AS DRIVERRATING,                           :DAYSPOLICYACCIDENT AS DAYSPOLICYACCIDENT,                            :DAYSPOLICYCLAIM AS DAYSPOLICYCLAIM,                           :PASTNUMOFCLAIMS AS PASTNUMOFCLAIMS,                           :AGEOFVEHICLES AS AGEOFVEHICLES,                           :AGEOFPOLICYHOLDER AS AGEOFPOLICYHOLDER,                           :POLICEREPORTFILED AS POLICEREPORTFILED,                           :WITNESSPRESNT AS WITNESSPRESENT,                           :AGENTTYPE AS AGENTTYPE,                           :NUMOFSUPP AS NUMOFSUPP,                           :ADDRCHGCLAIM AS ADDRCHGCLAIM,                            :NUMOFCARS AS NUMOFCARS,                           :CQLYEAR AS YEAR,                           :BASEPOLICY AS BASEPOLICY                 FROM dual)                 ]]>         </sql>        </function>     </jc:jdbc-ctx> </jdbcctxconfig:config> 2.      Invoking the function for each event. Once this function is defined, you can invoke it from CQL as follows: <?xml version="1.0" encoding="UTF-8"?> <wlevs:config xmlns:wlevs="http://www.bea.com/ns/wlevs/config/application">   <processor>     <name>DataMiningProc</name>     <rules>        <query id="q1"><![CDATA[                     ISTREAM(SELECT S.CQLMONTH,                                   S.WEEKOFMONTH,                                   S.DAYOFWEEK, S.MAKE,                                   :                                         S.BASEPOLICY,                                    C.F AS probability                                                 FROM                                 StreamDataChannel [NOW] AS S,                                 TABLE(prediction2@Oracle11gR2(S.CQLMONTH,                                      S.WEEKOFMONTH,                                      S.DAYOFWEEK,                                       S.MAKE, ...,                                      S.BASEPOLICY) AS F of char) AS C)                       ]]></query>                 </rules>               </processor>           </wlevs:config>   Finally, the last stage in the EPN prints out the probability of the event being an anomaly. One can also define a threshold in CQL to filter out events that are normal, i.e., below a certain mark as defined by the analyst or designer. Sample Runs: Now let's see how this behaves when events are streamed through CEP. We use only two events for brevity, one normal and other one not. This is one of the "normal" looking events and the probability of it being anomalous is less than 60%. 
Event is: eventType=DataMiningOutEvent object=q1  time=2904821976256 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=300, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.58931702982118561 isTotalOrderGuarantee=true\nAnamoly probability: .58931702982118561 However, the following event is scored as an anomaly with a very high probability of  89%. So there is likely to be something wrong with it. A close look reveals that the value of "deductible" field (10000) is not "normal". What exactly constitutes normal here?. If you run the query on the database to find ALL distinct values for the "deductible" field, it returns the following set: {300, 400, 500, 700} Event is: eventType=DataMiningOutEvent object=q1  time=2598483773496 S.CQLMONTH=Dec, S.WEEKOFMONTH=5, S.DAYOFWEEK=Wednesday, S.MAKE=Honda, S.ACCIDENTAREA=Urban, S.DAYOFWEEKCLAIMED=Tuesday, S.MONTHCLAIMED=Jan, S.WEEKOFMONTHCLAIMED=1, S.SEX=Female, S.MARITALSTATUS=Single, S.AGE=21, S.FAULT=Policy Holder, S.POLICYTYPE=Sport - Liability, S.VEHICLECATEGORY=Sport, S.VEHICLEPRICE=more than 69000, S.FRAUDFOUND=0, S.POLICYNUMBER=1, S.REPNUMBER=12, S.DEDUCTIBLE=10000, S.DRIVERRATING=1, S.DAYSPOLICYACCIDENT=more than 30, S.DAYSPOLICYCLAIM=more than 30, S.PASTNUMOFCLAIMS=none, S.AGEOFVEHICLES=3 years, S.AGEOFPOLICYHOLDER=26 to 30, S.POLICEREPORTFILED=No, S.WITNESSPRESENT=No, S.AGENTTYPE=External, S.NUMOFSUPP=none, S.ADDRCHGCLAIM=1 year, S.NUMOFCARS=3 to 4, S.CQLYEAR=1994, S.BASEPOLICY=Liability, probability=.89171554529576691 isTotalOrderGuarantee=true\nAnamoly probability: .89171554529576691 Conclusion: By way of this example, we show: real-time scoring of events as they flow through CEP leveraging Oracle Data Mining.how CEP applications can invoke complex arbitrary external computations (function shipping) in an RDBMS.
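    As promised under "Building the model" above, here is a hedged PL/SQL sketch of what the model-building step can look like using the DBMS_DATA_MINING package. The table name (CLAIMS) and case id column (POLICYNUMBER) are assumptions for illustration only; substitute the names used by the linked dataset. The model name matches the CLAIMSMODEL referenced in the cartridge function's PREDICTION_PROBABILITY call.
        -- Settings table: choose SVM; a NULL target later turns it into a one-class (anomaly detection) model
        CREATE TABLE claims_settings (
          setting_name  VARCHAR2(30),
          setting_value VARCHAR2(4000));
        BEGIN
          INSERT INTO claims_settings VALUES
            (dbms_data_mining.algo_name, dbms_data_mining.algo_support_vector_machines);
          INSERT INTO claims_settings VALUES
            (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_on);
          COMMIT;
          DBMS_DATA_MINING.CREATE_MODEL(
            model_name          => 'CLAIMSMODEL',        -- name used in the CQL above
            mining_function     => dbms_data_mining.classification,
            data_table_name     => 'CLAIMS',             -- assumption: table loaded from the dataset
            case_id_column_name => 'POLICYNUMBER',       -- assumption: unique id per case
            target_column_name  => NULL,                 -- NULL target => one-class SVM anomaly detection
            settings_table_name => 'claims_settings');
        END;
        /
    Once the model exists, the PREDICTION_PROBABILITY call shown in the cartridge function can score each streaming event against it.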

    Read the article

  • SQL SERVER – UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL

    - by pinaldave
    I often see developers trying the following syntax while using ORDER BY:
        SELECT Columns FROM TABLE1 ORDER BY Columns
        UNION ALL
        SELECT Columns FROM TABLE2 ORDER BY Columns
    However, the above query will return the following error:
        Msg 156, Level 15, State 1, Line 5
        Incorrect syntax near the keyword ‘ORDER’.
    It is not possible to use two different ORDER BY clauses in a UNION statement. UNION returns a single result set and, as per the Logical Query Processing Phases, ORDER BY is applied to that combined result set as a whole. However, if your requirement is such that you want the top and bottom queries of the UNION result set independently sorted but in the same result set, you can add an additional static column and order by that column. Let us re-create the same scenario. First, create two tables and populate them with sample data:
        USE tempdb
        GO
        -- Create tables
        CREATE TABLE t1 (ID INT, Col1 VARCHAR(100));
        CREATE TABLE t2 (ID INT, Col1 VARCHAR(100));
        GO
        -- Sample data build
        INSERT INTO t1 (ID, Col1)
        SELECT 1, 'Col1-t1' UNION ALL
        SELECT 2, 'Col2-t1' UNION ALL
        SELECT 3, 'Col3-t1';
        INSERT INTO t2 (ID, Col1)
        SELECT 3, 'Col1-t2' UNION ALL
        SELECT 2, 'Col2-t2' UNION ALL
        SELECT 1, 'Col3-t2';
        GO
    If we SELECT the data from both tables using UNION ALL:
        -- SELECT without ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        GO
    We will get the combined rows back, but not sorted the way we want. Our requirement is for each table's rows to appear together, independently sorted, within the same result set. If we only need the whole result set ordered, we can add a single ORDER BY ID at the end:
        -- SELECT with ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        ORDER BY ID
        GO
    Now, to get the data independently sorted within the UNION ALL, let us add an additional column OrderKey and use ORDER BY on that column. I think the description does not do it proper justice, so let us see the example:
        -- SELECT with ORDER BY - with ORDER KEY
        SELECT ID, Col1, 'id1' OrderKey FROM t1
        UNION ALL
        SELECT ID, Col1, 'id2' OrderKey FROM t2
        ORDER BY OrderKey, ID
        GO
    The above query will give the desired result. Now do not forget to clean up the database by running the following script:
        -- Clean up
        DROP TABLE t1;
        DROP TABLE t2;
        GO
    Here is the complete script used in this example:
        USE tempdb
        GO
        -- Create tables
        CREATE TABLE t1 (ID INT, Col1 VARCHAR(100));
        CREATE TABLE t2 (ID INT, Col1 VARCHAR(100));
        GO
        -- Sample data build
        INSERT INTO t1 (ID, Col1)
        SELECT 1, 'Col1-t1' UNION ALL
        SELECT 2, 'Col2-t1' UNION ALL
        SELECT 3, 'Col3-t1';
        INSERT INTO t2 (ID, Col1)
        SELECT 3, 'Col1-t2' UNION ALL
        SELECT 2, 'Col2-t2' UNION ALL
        SELECT 1, 'Col3-t2';
        GO
        -- SELECT without ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        GO
        -- SELECT with ORDER BY
        SELECT ID, Col1 FROM t1
        UNION ALL
        SELECT ID, Col1 FROM t2
        ORDER BY ID
        GO
        -- SELECT with ORDER BY - with ORDER KEY
        SELECT ID, Col1, 'id1' OrderKey FROM t1
        UNION ALL
        SELECT ID, Col1, 'id2' OrderKey FROM t2
        ORDER BY OrderKey, ID
        GO
        -- Clean up
        DROP TABLE t1;
        DROP TABLE t2;
        GO
    I am sure there are many more ways to achieve this; what method would you use if you had to face a similar situation?
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
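    In answer to the closing question above, one possible alternative (a sketch, not from the original post) is to push the static key into a derived table so it never has to appear in the final column list. It reuses the t1/t2 sample tables created earlier.
        -- Alternative sketch: hide the ordering key inside a derived table
        SELECT ID, Col1
        FROM (
            SELECT ID, Col1, 1 AS SourceKey FROM t1
            UNION ALL
            SELECT ID, Col1, 2 AS SourceKey FROM t2
        ) AS combined
        ORDER BY SourceKey, ID;
    The outer ORDER BY can reference SourceKey because it is a column of the derived table even though it is not selected in the outer query, so the result keeps t1's rows first and t2's rows second, each sorted by ID, without exposing the extra column.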

    Read the article

< Previous Page | 594 595 596 597 598 599 600 601 602 603 604 605  | Next Page >