Search Results

Search found 17076 results on 684 pages for 'nightly build'.


  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing Subversion server that requires us to perform path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for enabling this functionality. After fixing several other problems (such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly), I am left with one single problem: I can check out, I can update, I can commit, BUT I cannot execute an SVN COPY to perform branch/tag operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except, obviously, the path authorizations.

    Versions:

        Server OS: Debian 6.0.7 (Squeeze)
        Apache: 2.2.16-6+squeeze11
        Server Subversion: 1.6.12dfsg-7
        Clients are running Windows. Clients tried:
            TortoiseSVN 1.8.2 Build 24708 64bit
            SVN CLI client 1.8.3 (r1516576)

    Authentication is performed via AD against a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.

    Apache configuration:

        <VirtualHost *:443>
            ServerName svn-test.company.com
            ServerAlias /svn-test
            ServerAdmin [email protected]
            SSLEngine On
            SSLCertificateFile /etc/apache2/apache.pem
            ErrorLog /var/log/apache2/svn-test_error.log
            LogLevel warn
            CustomLog /var/log/apache2/svn-test_access.log combined
            ServerSignature On

            # Repository Access to all Repositories
            <Location "/">
                DAV svn
                SVNParentPath /var/svn
                SVNListParentPath on
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative Off
                AuthName "Subversion Test Repository System"
                AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
                AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
                AuthLDAPBindPassword service_account_password
                Require valid-user
                SSLRequireSSL
            </Location>

            # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
            # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
            <LocationMatch /.+>
                AuthzSVNAccessFile /etc/apache2/svn_path_auth
            </LocationMatch>
        </VirtualHost>

    SVN access file:

        [/]
        * = rw

    The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example):

        /
        /branches/
        /tags/
        /trunk/
        /trunk/somefile.txt

    Tortoise produces the following error during a tag operation in its tag result window:

        Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The svn.exe CLI client produces the following error:

        C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
        svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The Apache error log has nothing in it; however, the Apache access log has the following in it (IP addresses and usernames changed, obviously):

        10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"

    You'll see that the second-to-last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication as to why. Please note that, while yes this is a test repository on a test server, I am experiencing this same issue in this test setup, where I have eliminated all other possible causes (mixed repository configurations, externals, etc.). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.
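    For isolating the failure outside of the SVN clients, the failing WebDAV COPY can be replayed by hand; the following is an illustrative sketch only (not part of the original report), assuming Node.js 18+ run as an ES module, with placeholder host, credentials and revision number:

        // Illustrative sketch only: replay the WebDAV COPY that `svn copy`
        // issues, so the raw HTTP response can be inspected. Host, credentials
        // and revision number below are placeholders.
        import https from "node:https";

        const base = "https://svn-test.company.com/authtestbasic";
        const auth = "Basic " + Buffer.from("myuseraccount:password").toString("base64");

        const req = https.request(base + "/!svn/bc/2/trunk", {
            method: "COPY",                          // the WebDAV verb issued by `svn copy`
            headers: {
                "Authorization": auth,
                "Destination": base + "/tags/tag1",  // copy target, as sent by the client
                "Depth": "infinity",
                "Overwrite": "F",
            },
            rejectUnauthorized: false,               // assumes a self-signed test certificate
        }, (res) => {
            // expect 400 here while AuthzSVNAccessFile is enabled
            console.log(res.statusCode, res.statusMessage);
            res.resume();
        });
        req.end();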

    Read the article

  • Interesting things – Twitter annotations and your phone as a web server

    - by jamiet
    I overheard/read a couple of things today that really made me, data junkie that I am, take a step back and think, “Hmmm, yeah, that could be really interesting”, and I wanted to make a note of them here so that (a) I could bring them to the attention of anyone who happens to read this and (b) I can maybe come back here in a few years and see if either of these has come to fruition.

    Your phone as a web server

    I was listening today to Jon Udell’s (twitter) “Interviews with Innovators” podcast, in which he interviewed Herbert Van de Sompel (twitter) about his Memento project. During the interview Jon and Herbert made the following remarks:

        Jon: [some people] really had this vision of a web of servers, the notion that every node on the internet, every connected entity, is potentially a server and a client…we can see where we’re getting to a point where these endpoint devices we have in our pockets are going to be massively capable and it may be in the not too distant future that significant chunks of the web archive will be cached all over the place including on your own machine…

        Herbert: wasn’t it Opera who at one point turned your browser into a server?

    That really got my brain ticking. We all carry a mobile phone with us, and therefore we all potentially carry a mobile web server with us as well; to my mind the only things really stopping that from happening are the capabilities of the phone hardware, the capabilities of the network infrastructure, and the will to just bloody do it. Certainly all the standards required for addressing a web server on a phone already exist (to this uninitiated observer, DNS and IPv6 seem to solve that problem), so why not?

    I tweeted about the idea and Rory Street answered back with “why would you want a phone to be a web server?”. It’s a fair question and one that I would like to try and answer. Mobile phones are increasingly becoming our window onto the world as we use them to upload messages to Twitter, record our location on FourSquare or interact with our friends on Facebook, but in each of these cases some other service is acting as our intermediary; to see what I’m thinking you have to go via Twitter, to see where I am you have to go to FourSquare (I’m using ‘I’ liberally; I don’t actually use FourSquare, before you ask). Why should this have to be the case? Why can’t that data be decentralised? Why can’t we be masters of our own data universe? If my phone acted as a web server then I could expose all of that information without needing those intermediary services. I see a time when we can pass around URLs such as the following:

        http://jamiesphone.net/location/current - Where is Jamie right now?
        http://jamiesphone.net/location/2010-04-21 - Where was Jamie on 21st April 2010?
        http://jamiesphone.net/thoughts/current - What’s on Jamie’s mind right now?
        http://jamiesphone.net/blog - What documents is Jamie sharing with me?
        http://jamiesphone.net/calendar/next7days - Where is Jamie planning to be over the next 7 days?

    …and those URLs get served off of the phone in our pockets. If we govern that data then we can control who has access to it and (crucially) how long it’s available for. Want to wipe yourself off the face of the web? It’s pretty easy if you’re in control of all the data – just turn your phone off. None of this exists today, but I look forward to a time when it does. Opera really were onto something last June when they announced Opera Unite (admittedly Unite only works because Opera provide an intermediary DNS-alike system – it isn’t totally decentralised).

    Opening up Twitter annotations

    Last week Twitter held their first developer conference, called Chirp, where they announced an upcoming new feature called ‘Twitter Annotations’; in short, this will allow us to attach metadata to a tweet, thus enhancing the tweet itself. Think of it as a richer version of hashtags. To think of it another way, Twitter are turning their data into a humongous Entity-Attribute-Value, or triple-tuple, store. That alone has huge implications both for the web and for Twitter as a whole – the ability to enrich those 140 characters of data and thus make them more useful is indeed compelling. However, today I stumbled upon a blog post from Eugene Mandel entitled “Tweet Annotations – a Way to a Metadata Marketplace?” where he proposed the idea of allowing tweets to have metadata added by people other than the person who tweeted the original tweet. This idea really fascinated me, especially when I read some of the potential uses that Eugene and his commenters suggested. They included:

        Amazon could attach an ISBN to a tweet that mentions a book. Specialist client apps for book lovers could be built up around this metadata.
        Advertisers could pay to place adverts in metadata. The revenue generated from those adverts could be shared with the tweeter or the people who add the metadata.

    Granted, allowing anyone to add metadata to a tweet has the potential to create a spam problem the like of which we haven’t even envisaged, but spam hasn’t halted the growth of the web and neither should it halt the growth of data annotations. The original tweeter should of course be able to determine who can add metadata and whether it should be moderated. As Eugene says himself:

        Opening publishing tweet annotations to anyone will open the way to a marketplace of metadata where client developers, data mining companies and advertisers can add new meaning to Twitter and build innovative businesses.

    What Eugene and his followers did not mention is what I think is potentially the most fascinating use of opening up annotations. Google’s success today is built on their PageRank algorithm, which measures the validity of a web page by the number of incoming links to it and the page rank of the sites containing those links – it’s a system built on reputation. Twitter annotations could open up a new paradigm, however – let’s call it “people rank” – where reputation can be measured by the metadata that people choose to apply to links and the websites containing those links. It’s not hard to see why Google and Microsoft have paid big bucks to get access to the Twitter firehose!

    Neither of these features – phones as web servers or the ability to add annotations to other people’s tweets – exists today, but I strongly believe that they could dramatically enhance the web as we know it. I hope to look back on this blog post in a few years in the knowledge that these ideas have been put into place.

    @Jamiet

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release comes with a large number of improvements on many fronts. In this post I will try to point out some of the major improvements in Visual Studio 2010.

    1) Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for better readability. The Start Page has been redesigned and is template-based, so that anyone can change it as they wish.

    2) Multiple monitor support - Support for multiple monitors was already there in Visual Studio, but in this edition it has been improved so much that we can now place the document, design and code windows outside the IDE on another monitor.

    3) Zoom in the code editor - Rewriting the editors in WPF has brought significant improvements. The best one that I like is the zoom feature: we can now zoom the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or on windows with icons, like Solution Explorer and the Toolbox.

    4) Box selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular block by holding down the Alt key and selecting with the mouse. Within the rectangular selection we can insert text, paste the same code on different lines, etc. This is helpful if you want to convert a number of variables from public to private, and so on.

    5) New improved search - One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This has been done in the Navigate To window, which can be brought up by pressing Ctrl + ,. The Navigate To window also takes advantage of camel casing and can search by camel casing when a character is entered in upper case. For example, we can search "AOH" to find "AddOrderHeader".

    6) Call Hierarchy - This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. The call hierarchy also shows implementations of interface members and overrides of virtual or abstract methods. This window is very helpful in understanding code flow and evaluating the effect of making changes. The best part is that it is available at design time, not only at runtime like a debugger.

    7) Highlighting references - One of the very cool features in Visual Studio 2010 is that if you select a variable, all uses of that variable are highlighted alongside. This works for all the symbols returned by Find All References, and also for names of classes, object variables, properties and methods. We can use Ctrl + Shift + Down Arrow or Up Arrow to move through them.

    8) Generate from usage - The Generate From Usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, which minimizes interruption to your workflow.

    9) IntelliSense suggestion mode - IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor shows the entry that is highlighted on the members list. When an IntelliSense window is open, you can press Ctrl+Alt+Spacebar to toggle between completion mode and suggestion mode.

    10) Application Lifecycle Management - A client application for managing the application lifecycle (version control, work item tracking, build automation, team portal, etc.) is available for free (though not for the Express edition).

    11) Start Page - The Start Page has been redesigned with WPF for new functionality and a new look. Tabbed areas are provided for content from different sources, including MSDN. Once you open a project, the Start Page closes automatically. The list of recent projects also lets you remove projects from the list. And above all, the Start Page is customizable enough to be changed to individual requirements.

    12) Extension Manager - Visual Studio 2010 provides good ways to be extended; we can also use MEF to extend most of the features of Visual Studio. The new Extension Manager can now go to the Visual Studio Gallery and install extensions without even opening a browser.

    13) Code snippets - Visual Studio 2010 adds code snippet support for HTML, JScript and ASP.NET as well.

    14) Improved IntelliSense for JavaScript - JavaScript IntelliSense has been vastly improved (around 2-5 times). IntelliSense now also shows XML documentation comments as you type.

    15) Web deployment - Web deployment has been vastly improved. We can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment, and web configuration transformations.

    16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio. Deployment of a site is as easy as hitting F5.

    17) Azure - Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment.

    Vikram

    Read the article

  • Microsoft TypeScript : A Typed Superset of JavaScript

    - by shiju
    JavaScript is gradually becoming a ubiquitous programming language for the web, and its popularity is increasing day by day. Earlier, JavaScript was just a language for the browser; now we can write JavaScript apps for the browser, the server and mobile. With the advent of Node.js, you can build scalable, high-performance apps on the server with JavaScript. But many developers, especially those coming from statically typed languages, dislike JavaScript due to its lack of structure and the maintainability problems that follow. Microsoft TypeScript is trying to solve some of these problems for building scalable JavaScript apps.

    Microsoft TypeScript

    TypeScript is Microsoft's solution for writing scalable JavaScript programs with the help of static types, interfaces, modules and classes, along with greater tooling support. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. This is more productive for developers who are coming from statically typed languages. You can write scalable JavaScript apps in TypeScript in a more productive and more maintainable manner, and then compile to plain JavaScript that will run on any browser and any OS. TypeScript works with browser-based JavaScript apps and with JavaScript apps that follow the CommonJS specification. You can use TypeScript for building HTML5 apps, Node.js apps and WinRT apps. TypeScript provides tooling support for Visual Studio, Sublime Text, Vi and Emacs. Microsoft has open sourced the TypeScript language on CodePlex at http://typescript.codeplex.com/

    Install TypeScript

    You can install the TypeScript compiler as a Node.js package via npm, or as a Visual Studio 2012 plug-in, which enables better tooling support within the Visual Studio IDE. Since TypeScript is distributed as a Node.js package, it can also be installed on other OSes such as Linux and Mac OS. The following command installs the TypeScript compiler via npm:

        npm install -g typescript

    TypeScript also provides a Visual Studio 2012 plug-in as an MSI file, which installs TypeScript and provides great tooling support within Visual Studio, letting developers write TypeScript apps with greater productivity and better maintainability. You can download the Visual Studio plug-in from here.

    Building JavaScript apps with TypeScript

    You can write typed versions of JavaScript programs with TypeScript and then compile them to plain JavaScript code. The beauty of TypeScript is that it is already JavaScript: normal JavaScript programs are valid TypeScript programs, which means that you can write normal JavaScript code and use the typed version of JavaScript whenever you want. TypeScript files use the extension .ts and are compiled using a compiler named tsc. The following is a sample program written in TypeScript:

    greeter.ts

        class Greeter {
            greeting: string;
            constructor (message: string) {
                this.greeting = message;
            }
            greet() {
                return "Hello, " + this.greeting;
            }
        }

        var greeter = new Greeter("world");

        var button = document.createElement('button')
        button.innerText = "Say Hello"
        button.onclick = function() {
            alert(greeter.greet())
        }

        document.body.appendChild(button)

    The above program is compiled with the TypeScript compiler (screenshot of the compilation step omitted). The TypeScript compiler generates a JavaScript file after compiling the TypeScript program. If your TypeScript program has references to other TypeScript files, it automatically generates a JavaScript file for each referenced file. The following code block shows the compiled plain JavaScript for the above greeter.ts:

    greeter.js

        var Greeter = (function () {
            function Greeter(message) {
                this.greeting = message;
            }
            Greeter.prototype.greet = function () {
                return "Hello, " + this.greeting;
            };
            return Greeter;
        })();
        var greeter = new Greeter("world");
        var button = document.createElement('button');
        button.innerText = "Say Hello";
        button.onclick = function () {
            alert(greeter.greet());
        };
        document.body.appendChild(button);

    Tooling support with Visual Studio

    TypeScript provides a plug-in for Visual Studio with excellent support for writing TypeScript programs within the IDE, including a project template for TypeScript apps (screenshots of the template and the IDE omitted).

    Summary

    TypeScript is Microsoft's solution for writing scalable JavaScript apps, and it solves a lot of the problems involved in larger JavaScript apps. I hope that this solution will attract a lot of developers who are really looking to write maintainable, structured code in JavaScript without losing any productivity. TypeScript lets developers write JavaScript apps with the help of static types, interfaces, modules and classes, while providing better productivity.
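    The sample above shows classes; as a brief illustrative sketch (not from the original post) of the other typed constructs mentioned, namely interfaces and static type annotations, something like the following also compiles with tsc:

        // Illustrative sketch only: an interface plus static type annotations,
        // checked at compile time by tsc. Names here are made up for the example.
        interface Person {
            name: string;
            age: number;
        }

        function describe(p: Person): string {
            return p.name + " is " + p.age + " years old";
        }

        var alice: Person = { name: "Alice", age: 30 };
        alert(describe(alice));
        // describe({ name: "Bob" });  // compile-time error: 'age' is missing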
    I am a passionate Node.js developer and will definitely try to use TypeScript for building Node.js apps on the Windows Azure cloud. I am really excited about writing Node.js apps using TypeScript from my favorite development IDE, Visual Studio. You can follow me on twitter at @shijucv

    Read the article

  • MySQL Connect: What to Expect From the Wondrous Land of MySQL Cluster

    - by Mat Keep
    The MySQL Connect conference is only a couple of weeks away, with MySQL engineers, support teams, consultants and community aces busy putting the final touches to their talks. There will be many exciting new announcements and much sharing of best practices at the conference, covering the whole range of MySQL technologies. MySQL Cluster will be a big part of this, so I wanted to share some key sessions for those of you who plan on attending, as well as some resources for those who are not lucky enough to be able to make the trip, but who can't afford to miss the key news. Of course, this is no substitute for actually being there….and the good news is that registration is still open ;-)

    Roadmap: What's New in MySQL Cluster
    Saturday 29th, 1300-1400, Golden Gate room 5
    Bernd Ocklin, director of MySQL Cluster development, and myself will be taking a look at what follows the latest MySQL Cluster 7.2 release. I don't want to give too much away - let's just say it's not often you can add powerful new functionality to a product while at the same time making life radically simpler for its users. For those not making it to the conference, a live webinar repeating the talk is scheduled for Thursday 25th October at 09.00 Pacific time. Hold the date; registration will open for that soon and be published to our MySQL Webinars page.

    Best Practices: Getting Started with MySQL Cluster, Hands-On Lab
    Saturday 29th, 1600-1700, Plaza Room A
    Santo Leto, one of our lead MySQL Cluster support engineers, regularly works with users new to MySQL Cluster, assisting them with installation, configuration, scaling, etc. In this lab, Santo will share best practices for getting started.

    Delivering Breakthrough Performance with MySQL Cluster
    Saturday 29th, 1730-1830, Golden Gate room 5
    Frazer Clement, lead MySQL Cluster software engineer, will demonstrate how to translate the awesome Cluster benchmarks (remember 1 BILLION UPDATEs per minute?!) into real-world performance. You can also get some best practices from our new MySQL Cluster performance guide.

    MySQL Cluster BoF
    Saturday 29th, 1900-2000, Golden Gate room 5
    Come and get a demonstration of new tools for the installation and configuration of MySQL Cluster, and spend time with the engineering team discussing any questions or issues you may have.

    Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster
    Sunday 30th, 1145-1245, Golden Gate room 7
    In this session, JD Duncan and Andrew Morgan will present how to get started with both Memcached and the new NoSQL APIs. JD and I recently ran a webinar demonstrating how to build simple Twitter-like services with Memcached and MySQL Cluster. The replay is available for download.

    Case Studies: MySQL Cluster @ El Chavo, Latin America's #1 Facebook Game
    Sunday 30th, 1745-1845, Golden Gate room 4
    Playful Play deployed MySQL Cluster CGE to power their market-leading social game. This session will discuss the challenges they faced, why they selected MySQL Cluster, and their experiences to date. You can read more about Playful Play and MySQL Cluster here.

    A Journey into NoSQLand: MySQL's NoSQL Implementation
    Sunday 30th, 1345-1445, Golden Gate room 4
    Lig Turmelle, web DBA at Kaplan Professional and esteemed Oracle ACE, will discuss her experiences working with the NoSQL interfaces for both MySQL Cluster and InnoDB.

    Evaluating MySQL HA Alternatives
    Saturday 29th, 1430-1530, Golden Gate room 5
    Henrik Ingo, former member of the MySQL sales engineering team, will provide an overview of various HA technologies for MySQL, starting with replication and progressing to InnoDB, Galera and MySQL Cluster.

    What about the other stuff?
    Of course MySQL Connect has much, much more than MySQL Cluster. There will be lots on replication (which I'll blog about soon), MySQL 5.6, InnoDB, cloud, etc., etc. Take a look at the full Content Catalog to see more. If you are attending, I hope to see you at one of the Cluster sessions...and remember, registration is still open!

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

        Map Gallery - a set of shapefiles for the United States only that ships with Reporting Services
        ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
        SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

        US Census Bureau Website, http://www.census.gov/geo/www/tiger/
        Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
        Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations, to embed a map in a report.

    Preparing the spatial data

    First, I downloaded shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data; otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region).

    Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column, which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.

    Step 1: Using the Map Wizard to create a map of France

    You can build a map without using the wizard, but I find it rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France:

        1. On the "Choose a source of spatial data" page of the wizard, select SQL Server spatial query, and click Next.
        2. On the "Choose a dataset with SQL Server spatial data" page, select "Add a new dataset with SQL Server spatial data".
        3. On the "Choose a connection to a SQL Server spatial data source" page, select New.
        4. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial
        5. Click OK and then click Next.
        6. On the "Design a query" page, add a query for the country shape, like this: select * from fra_adm1
        7. Click Next. The map wizard reads the spatial data and renders it for you on the "Choose spatial data and map view options" page. You have the option to add a Bing Maps layer, which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names, roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature; it's a nice integration feature.
        8. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
        9. Select "Embed map data in this report" for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database; however, it does increase the size of the RDL.
        10. Click Next.
        11. On the "Choose map visualization" page, select Basic Map. We'll add data for visualization later; for now, we have just the outline of France to serve as the foundation layer for our map.
        12. Click Next, and then click Finish.
        13. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a map layer to an existing map

    The map data region allows you to add multiple layers, each associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps:

        1. If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
        2. Click the New Layer Wizard button in the Map Layers window, and we start over again with the process by choosing a spatial data source.
        3. Select SQL Server spatial query, and click Next.
        4. Select "Add a new dataset with SQL Server spatial data", and click Next.
        5. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
        6. Add a query with spatial data (like the one I included in the downloadable project), and click Next.
        7. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
        8. Select "Embed map data in this report", and click Next.
        9. On the "Choose map visualization" page, select Basic Marker Map, and click Next.
        10. On the "Choose color theme and data visualization" page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the "Single color map" checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as the sales volume of each store, but I'll save that exploration for another post on another day.
        11. Click Finish, and then click Preview to see the rendered report.

    Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance it. I'll create a series of posts to explore the possibilities.

    Read the article

  • MVP Summit 2011 summary and thoughts: The “I hope I don’t cross a line and lose my MVP status” post

    - by George Clingerman
    I've been wanting to write this post summarizing my thoughts about the MVP Summit but have been dragging my feet, since it's a very difficult one to write. However, seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx), Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully, though, because I'm going to be dancing around a ton of NDA minefields as well as having to walk the tightrope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations; these are not thoughts that Microsoft has asked me to pass along, and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP Summit.

    Let's start off with a short imaginary question-and-answer session:

        Has Microsoft's management of the App Hub forums and XBLIG been rather poor? Yes.
        Do I think we're going to see changes to that overnight? No.
        Will it continue to look bad from the outside? Somewhat.

    Confusing, right? Well, that's kind of how things are right now. Lots of confusion. XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited about and invested in the technology. That's fantastic for XNA and really should show you the future the framework has. It's here to stay.

    So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that the pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown and everyone has and wants some time with XNA. This gets XNA pulled in all directions, and as it moves onto new platforms, it plays catch-up trying to get those platforms up to speed with where Xbox LIVE Indie Games has grown. Forums, documentation, educational content: they all need to be there, because Xbox LIVE Indie Games has all of that and more.

    Along with the catch-up in features/documentation/awesomeness, there's the catch-up that the people on the team have to play. New platforms and new areas of development mean new players, and those new folks don't have the history of being around from the beginning. This leads to a lack of understanding at times of just how important some things are, because they seem so small and insignificant (Rich Text defaulting for new forum profiles would be one thing that jumps to mind). If you're not aware that the forums have become more than just a basic Q&A, if you're not aware that they're a central hub to a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and central forum site work for everyone it's now serving.

    So yeah, a lot of our pain this last year has been simply that XNA is doing well and XBLIG is doing well, so the focus was shifted to catch other things up. It hurts when a parent seems to not have any time for you because they're spending so much time with your new baby brother. Growing pains. All families, and in our case our product family, experience it to some degree. I think as WP7 matures we'll see the team figuring out how to give everyone the right amount of attention.

    While we're talking about some of our growing pains, it is also important to note (although not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things that we do. If you paid attention to talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed: http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo since we're being treated exactly the same? No way. However, it DOES show that the way we're being treated is no indication of the stability and future of the platform; it's just Microsoft dropping the communication ball on two playing fields. We're not alone, and we're not even being treated worse. Not great, but also, in a weird way, a very good sign.

    Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers that I'm not stepping over some NDA line I shouldn't be). First, I discovered that the XBLIG user base is bigger than I personally had originally estimated. I won't give the exact numbers (although we did beg Microsoft to release some of these numbers, so maybe someday?) but it was much larger than my original guesstimates and I was pleasantly surprised. Maybe some of you guys had the right number when you were guessing, but I know that mine was much too low. And even MORE importantly, the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something that I had to share.

    On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there than the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position, I have a resume waiting in the wings. I can't deny that I think that would be my dream job...

    ...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not, the single biggest pain point with the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs, and I'm even seeing more communication from them on the forums, so I'm hoping that's a long-term change. I really think they understood the issue; the problem remains how to open that communication channel in a way that is sustainable. I think they'll get it figured out, and hopefully that's sooner rather than later.

    During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You also may have noticed that Andy and Catalin both mentioned me in their summit write-ups. I may have come on a bit strong while I was there...went a little out of character for myself. I've been agitated for a while with the way things have been, and I've been listening to you guys and hearing you be agitated. I'm also watching some really awesome indie game developers looking elsewhere and leaving the platform. Some of them we might not have been able to keep even with changes, but others are only leaving because of perceptions and lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list, and I took that list and verbalized it. I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person...I felt bad about that, but I couldn't stay silent.] Hopefully it did something, guys; I really did try hard to get the message across.

    Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of: from people in the community who are doing an awesome job, to the re-launch of XboxIndies that was going on that week, and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/) who have purchased one XBLIG every day for over 100 days now. The community is freaking rocking it, and I made sure to highlight that.

    So in conclusion, I'd just like to say hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground: it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling around trying to figure things out and make improvements all around. There are quite a few new gals and guys, and it's going to take them time to catch up, and there are a lot of constantly shifting priorities. We all have one toy, one team, and we're fighting for time with it.

    It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that. We took things into our own hands, and it got noticed, and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted or wrote about the re-launch of XboxIndies.com, or the support I've gotten from them for my weekly XNA Notes). XNA is here to stay; it's time for us to stop being scared of that and figure out how to make our own games the successes they should be. There's definitely a list of things that need to be fixed and things that should be improved, and I think we should definitely keep vocal about that with Microsoft. Keep it short, focused and prioritized. There are also a lot of things we can do ourselves while we're waiting on them to fix and change things. Lots of ways we can compensate for particular weaknesses in the channel. The kind of stuff that we can step up and do ourselves. Do it on our own, you know, the way indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • Can't run Eclipse after installing ADT Plugin

    - by user89439
    So, I've installed the ADT Plugin, run a HelloWorld, restart my computer and after that the Eclipse can't run. A message appear: "An error has ocurred. See the log file: /home/todi (...)" Here is the log file: !SESSION 2011-07-26 22:51:59.381 ----------------------------------------------- eclipse.buildId=I20110613-1736 java.version=1.6.0_26 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=pt_BR Framework arguments: -product org.eclipse.epp.package.java.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product !ENTRY org.eclipse.update.configurator 4 0 2011-07-26 22:57:34.135 !MESSAGE Could not rename configuration temp file !ENTRY org.eclipse.update.configurator 4 0 2011-07-26 22:57:34.157 !MESSAGE Unable to save configuration file "C:\Program Files\eclipse\configuration\org.eclipse.update\platform.xml.tmp" !STACK 0 java.io.IOException: Unable to save configuration file "C:\Program Files\eclipse\configuration\org.eclipse.update\platform.xml.tmp" at org.eclipse.update.internal.configurator.PlatformConfiguration.save(PlatformConfiguration.java:690) at org.eclipse.update.internal.configurator.PlatformConfiguration.save(PlatformConfiguration.java:574) at org.eclipse.update.internal.configurator.PlatformConfiguration.startup(PlatformConfiguration.java:714) at org.eclipse.update.internal.configurator.ConfigurationActivator.getPlatformConfiguration(ConfigurationActivator.java:404) at org.eclipse.update.internal.configurator.ConfigurationActivator.initialize(ConfigurationActivator.java:136) at org.eclipse.update.internal.configurator.ConfigurationActivator.start(ConfigurationActivator.java:69) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:299) at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:440) at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:268) at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:107) at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:462) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216) at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:400) at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:476) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:429) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:417) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(Unknown Source) at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:345) at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:229) at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1207) at 
org.eclipse.equinox.internal.ds.model.ServiceComponent.createInstance(ServiceComponent.java:480) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.createInstance(ServiceComponentProp.java:271) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:332) at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:588) at org.eclipse.equinox.internal.ds.ServiceReg.getService(ServiceReg.java:53) at org.eclipse.osgi.internal.serviceregistry.ServiceUse$1.run(ServiceUse.java:138) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.internal.serviceregistry.ServiceUse.getService(ServiceUse.java:136) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.getService(ServiceRegistrationImpl.java:468) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.getService(ServiceRegistry.java:467) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.getService(BundleContextImpl.java:594) at org.osgi.util.tracker.ServiceTracker.addingService(ServiceTracker.java:450) at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:980) at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:1) at org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:262) at org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:185) at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:348) at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:283) at org.eclipse.core.internal.runtime.InternalPlatform.getBundleGroupProviders(InternalPlatform.java:225) at org.eclipse.core.runtime.Platform.getBundleGroupProviders(Platform.java:1261) at org.eclipse.ui.internal.ide.IDEWorkbenchPlugin.getFeatureInfos(IDEWorkbenchPlugin.java:291) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.makeFeatureDependentActions(WorkbenchActionBuilder.java:1217) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.makeActions(WorkbenchActionBuilder.java:1026) at org.eclipse.ui.application.ActionBarAdvisor.fillActionBars(ActionBarAdvisor.java:147) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.fillActionBars(WorkbenchActionBuilder.java:341) at org.eclipse.ui.internal.WorkbenchWindow.fillActionBars(WorkbenchWindow.java:3564) at org.eclipse.ui.internal.WorkbenchWindow.(WorkbenchWindow.java:419) at org.eclipse.ui.internal.tweaklets.Workbench3xImplementation.createWorkbenchWindow(Workbench3xImplementation.java:31) at org.eclipse.ui.internal.Workbench.newWorkbenchWindow(Workbench.java:1920) at org.eclipse.ui.internal.Workbench.access$14(Workbench.java:1918) at org.eclipse.ui.internal.Workbench$68.runWithException(Workbench.java:3658) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4140) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3757) at org.eclipse.ui.application.WorkbenchAdvisor.openWindows(WorkbenchAdvisor.java:803) at org.eclipse.ui.internal.Workbench$33.runWithException(Workbench.java:1595) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at 
org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4140) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3757) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2604) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2494) at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:674) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:667) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:123) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:344) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:622) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:577) at org.eclipse.equinox.launcher.Main.run(Main.java:1410) !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:15:28.049 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:15:28.049 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:15:28.049 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:15:28.644 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:15:28.644 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:15:28.644 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:27:35.152 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:27:35.158 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:27:35.159 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. !ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:27:35.215 !MESSAGE Operation details !SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:27:35.216 !MESSAGE Cannot complete the install because some dependencies are not satisfiable !SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:27:35.216 !MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable. 
!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 01:07:17.988
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 01:07:18.006
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 01:07:18.006
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.

!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 01:07:19.847
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 01:07:19.848
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 01:07:19.848
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.

I don't understand how a path like this has appeared... if anyone knows how to solve this, I would appreciate it! Thank you for all your answers! Best regards, Alexandre Ferreira.

    Read the article

  • Simplify your Ajax code by using jQuery Global Ajax Handlers and ajaxSetup low-level interface

    - by hajan
Creating web applications with a consistent layout and user interface is very important for your users. In several ASP.NET projects I’ve completed lately, I’ve been using jQuery and jQuery Ajax heavily to achieve a rich user experience and seamless interaction between the client and the server. In almost all of them, I took advantage of the nice jQuery global ajax handlers and jQuery ajax functions. Let’s say you build a web application that mainly interacts with the server using Ajax GET and POST requests to accomplish various operations. As you may already know, you can easily perform Ajax operations using the jQuery low-level $.ajax method or the jQuery shorthands $.get, $.post, etc. Simple get example:

    $.get("/Home/GetData", function (d) { alert(d); });

As you can see, this is the simplest possible way to make an Ajax call. Behind the scenes it constructs a low-level Ajax call by specifying all the necessary information for the request, filling in defaults for the required properties such as data type, content type, etc. If you want more control over what is happening with your Ajax requests, you can easily take advantage of the global ajax handlers. To register global ajax handlers, the jQuery API provides a set of global Ajax methods. You can find all the methods at http://api.jquery.com/category/ajax/global-ajax-event-handlers/, and these are: ajaxComplete, ajaxError, ajaxSend, ajaxStart, ajaxStop, and ajaxSuccess. And the low-level ajax interfaces, at http://api.jquery.com/category/ajax/low-level-interface/: ajax, ajaxPrefilter, and ajaxSetup. For global settings, I usually use ajaxSetup combined with the ajax event handlers. $.ajaxSetup is very good for setting default values that you will use in all of your future Ajax requests, so that you won’t need to repeat the same properties every time unless you want to override the defaults. I mainly use the global ajaxSetup function similarly to the following way:

    $.ajaxSetup({
        cache: false,
        error: function (x, e) {
            if (x.status == 550) alert("550 Error Message");
            else if (x.status == 403) alert("403. Not Authorized");
            else if (x.status == 500) alert("500. Internal Server Error");
            else alert("Error...");
        },
        success: function (x) {
            // do something global on success...
        }
    });

Now you can make Ajax calls using the low-level $.ajax interface without worrying about specifying any of the properties we’ve set in the $.ajaxSetup function, as in the sketch below.
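For illustration (this example is not from the original post; the URL is just a placeholder), a request that relies entirely on the $.ajaxSetup defaults above could look like this:

    // Assumes the $.ajaxSetup call above has already run:
    // the cache-busting option and the global error/success
    // handlers apply automatically, so none are repeated here.
    $.ajax({
        url: "/Home/GetData",  // placeholder URL
        type: "GET",
        dataType: "json"
    }).done(function (data) {
        // per-request logic runs in addition to the
        // global success handler from $.ajaxSetup
        alert(data);
    });

If this request fails, the global error callback from $.ajaxSetup fires, so the 403/500 handling lives in one place instead of being repeated on every call.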
So you can create your own ways to handle the various situations that occur around your Ajax requests. Sometimes some of your Ajax requests may take much longer than expected. In order to make a user-friendly UI that shows a progress bar or an animated image while something is happening in the background, you can combine the ajaxStart and ajaxStop methods to do exactly that. First of all, add one

    <div id="loading" style="display:none;">
        <img src="@Url.Content("~/Content/images/ajax-loader.gif")" alt="Ajax Loader" />
    </div>

anywhere on your master layout / master page (you can download nice ajax loading images from http://ajaxload.info/). Then, add the following two handlers:

    $(document).ajaxStart(function () {
        $("#loading").attr("style", "position:absolute; z-index: 1000; top: 0px; " +
            "left:0px; text-align: center; display:none; background-color: #ddd; " +
            "height: 100%; width: 100%; /* These three lines are for transparency " +
            "in all browsers. */ -ms-filter:\"progid:DXImageTransform.Microsoft.Alpha(Opacity=50)\";" +
            " filter: alpha(opacity=50); opacity:.5;");
        $("#loading img").attr("style", "position:relative; top:40%; z-index:5;");
        $("#loading").show();
    });

    $(document).ajaxStop(function () {
        $("#loading").removeAttr("style");
        $("#loading img").removeAttr("style");
        $("#loading").hide();
    });

Note: While you could reorganize the style in a more reusable way, since these are global Ajax start/stop handlers, it is very possible that you won’t use the same style in other places. With this in place, you will see that for any ajax request in your web site or application the loading image now appears, providing a better user experience. What I’ve shown are several useful examples of how to simplify your Ajax code by using the global Ajax handlers and the low-level ajaxSetup function. Of course, you can do a lot more with the other methods as well. Hope this was helpful. Regards, Hajan

    Read the article

  • Internationalize WebCenter Portal - Content Presenter

    - by Stefan Krantz
Lately we have been involved in engagements where internationalization has been holding the project back from success. In this post we are going to explain how to get Content Presenter and its editorials to comply with the currently selected locale for the WebCenter Portal session. As you probably know by now, WebCenter Portal leverages the localization support from Java Server Faces (JSF); in this post we will assume that localization is controlled and enforced by switching the current browser's locale between English and Spanish. There are two main scenarios when internationalizing content-enabled pages, since Content Presenter offers both presentation of information and contribution of information. In this post we will look at how to enable seamless integration of the correct localized version of the back-end content file, and how to enable the editor/author to edit the correct localized version of the file based on the current browser locale.

Solution Scenario 1 - Localization aware content presentation
Due to the number of steps required to implement the enclosed solution proposal, I have decided to share the solution with you as grouped components, one for each facet of the solution. If you want more details on each step, you can review the enclosed components. This post will guide you through the steps of enabling each component and what it enables/changes in each section of the system.

Enable Content Presenter Customization
By leveraging a predictable naming convention for the data files used to hold the content for the Content Presenter instance, we can easily develop a component that dynamically switches the name out before presenting the information. The naming convention we leverage follows industry best practice: a shared identifier as prefix (ContentABC) and a language suffix (_EN, _ES). So the assumption is that each file pair in the above example should look like the following:
- English version: ContentABC_EN
- Spanish version: ContentABC_ES
Based on this convention, regardless of the primary version assigned to the Content Presenter instance, we can now easily switch the language out by using the localization support from JSF.
The Java bean below (oracle.webcenter.doclib.internal.view.presenter.NLSHelperBean) is enclosed in the customization project available for download at the bottom of this post:

    public static final String CP_D_DOCNAME_FORMAT = "%s_%s";
    public static final int CP_UNIQUE_ID_INDEX = 0;
    private ContentPresenter presenter = null;

    public NLSHelperBean() {
        super();
    }

    /**
     * This method updates the configuration for the pageFlowScope to have the
     * correct datafile for the current Locale
     */
    public void initLocaleForDataFile() {
        String dataFile = null;
        // Checking that the state of the presenter is present; also make sure the item
        // is eligible for localization by locating the "_" in the name
        if (presenter.getConfiguration().getDatasource() != null &&
            presenter.getConfiguration().getDatasource().isNodeDatasource() &&
            presenter.getConfiguration().getDatasource().getNodeIdDatasource() != null &&
            !presenter.getConfiguration().getDatasource().getNodeIdDatasource().equals("") &&
            presenter.getConfiguration().getDatasource().getNodeIdDatasource().indexOf("_") > 0) {
            dataFile = presenter.getConfiguration().getDatasource().getNodeIdDatasource();
            FacesContext fc = FacesContext.getCurrentInstance();
            // Leveraging the current faces context to get the current localization language
            String currentLocale = fc.getViewRoot().getLocale().getLanguage().toUpperCase();
            String newDataFile = dataFile;
            String[] uniqueIdArr = dataFile.split("_");
            if (uniqueIdArr.length > 0) {
                newDataFile = String.format(CP_D_DOCNAME_FORMAT, uniqueIdArr[CP_UNIQUE_ID_INDEX], currentLocale);
            }
            // Replacing the current Node datasource with the localized datafile
            presenter.getConfiguration().getDatasource().setNodeIdDatasource(newDataFile);
        }
    }

With this bean code available to our WebCenter Portal implementation we can start the next step: overriding the standard behavior in Content Presenter by applying an MDS taskflow customization to the Content Presenter taskflow. The following taskflow customization has been applied in the customization project attached to this post:
- Library: WebCenter Document Library Service View
- Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
- File: contentPresenter.xml

Changes made in the above customization view:
1. A new method invocation activity has been added (initLocaleForDataFile)
2. The method invocation invokes the new NLSHelperBean
3. The default activity is moved to the new method invocation (initLocaleForDataFile)
4. The outcome from the method invocation goes to determine-navigation (the original default activity)

The above changes conclude the presentation modifications needed to support a compatible localization scenario for a content-driven page. In addition, this customization does not limit or disable the out-of-the-box capabilities of WebCenter Portal.
Steps to enable the above customization:
1. Start JDeveloper and open your WebCenter Portal application
2. Select "Open Project" and include the extracted project you downloaded (CPNLSCustomizations.zip)
3. Make sure the build output from the CPNLSCustomizations project is a dependency of your Portal project
4. Deploy your Portal application to your WC_CustomPortal managed server
5. Make sure the naming convention of the two data files follows the recommendation above

Example result of the solution:

Solution Scenario 2 - Localization aware content creation and authoring
As you could see in Solution Scenario 1, we require the naming convention to be strictly followed; in the hands of a user with limited technology knowledge, this can be one of the weak links in the solution. Therefore I strongly recommend that you also follow this part, since it will eliminate that risk and also greatly improve usability for editors and authors. The current WebCenter Portal architecture leverages WebCenter Content to maintain, publish and manage content, so we need to make a few efforts to bring this part of the architecture on board with our new naming practice, and also to simplify the creation of content for our end users. As you probably remember, the naming convention requires the prefix to be common, so I propose we enable a new component that helps auto-name the content item's dDocName (this means the readable title can still be in a human-readable format). The new component (WCP-LocalizationSupport.zip) built for this scenario enables a couple of things:
1. A new service where a sequential number can be generated on request - service name: GET_WCP_LOCALE_CONTENTID
2. Content Presenter leverages a specific function when launching the content creation wizard from within Content Presenter. The assumption is that users will create content by clicking the "Create Web Content" button. When clicking the button, the wizard that opens actually runs inside the WebCenter Content server; the file executed is contentwizard.hcsp.
This file uses JSON commands that generate operations in the content server; I have extended this file to create two identical data files instead of one.
- First it creates the English version by leveraging the new service and a Global Rule to set the dDocName on the original check-in screen; this global rule is available in a configuration package attached to this blog (NLSContentProfileRule.zip)
- Secondly we run a set of JSON javascripts to create the Spanish version with the same details, except for the name, where we replace the suffix with (_ES)
- Then the content creation wizard ends with its out-of-the-box behavior and assigns the Content Presenter instance the English version
See the Javascript markup below - this can be changed in WCP-LocalizationSupport.zip/component/WCP-LocalizationSupport/publish/webcenter:

    //---------------------------------------A-TEAM---------------------------------------
    WCM.ContentWizard.CheckinContentPage.OnCheckinComplete = function(returnParams)
    {
        var callback = WCM.ContentWizard.CheckinContentPage.checkinCompleteCallback;
        WCM.ContentWizard.ChooseContentPage.OnSelectionComplete(returnParams, callback);
        // Load latest DOC_INFO_SIMPLE
        var cgiPath = DOCLIB.config.httpCgiPath;
        var jsonBinder = new WCM.Idc.JSONBinder();
        jsonBinder.SetLocalDataValue('IdcService', 'DOC_INFO_SIMPLE');
        jsonBinder.SetLocalDataValue('dID', returnParams.dID);
        jsonBinder.Send(cgiPath, $CB(this, function(http) {
            var ret = http.GetResponseText();
            var binder = new WCM.Idc.JSONBinder(ret);
            var dDocName = binder.GetResultSetValue('DOC_INFO', 'dDocName', 0);
            if (dDocName.indexOf("_") > 0) {
                var ssBinder = new WCM.Idc.JSONBinder();
                ssBinder.SetLocalDataValue('IdcService', 'SS_CHECKIN_NEW');
                // Additional localized dDocName generated
                ssBinder.SetLocalDataValue('dDocName', getLocalizedDocName(dDocName, "es"));
                ssBinder.SetLocalDataValue('primaryFile', 'default.xml');
                ssBinder.SetLocalDataValue('ssDefaultDocumentToken', 'SSContributorDataFile');

                for (var n = 0; n < binder.GetResultSetFields('DOC_INFO').length; n++) {
                    var field = binder.GetResultSetFields('DOC_INFO')[n];
                    if (field != 'dID' &&
                        field != 'dDocName' &&
                        field != 'dReleaseState' &&
                        field != 'dRevClassID' &&
                        field != 'dRevisionID' &&
                        field != 'dRevLabel') {
                        ssBinder.SetLocalDataValue(field, binder.GetResultSetValue('DOC_INFO', field, 0));
                    }
                }
                ssBinder.Send(cgiPath, $CB(this, function(http) {}));
            }
        }));
    }

    // Support function to create localized dDocNames
    function getLocalizedDocName(dDocName, lang) {
        var result = dDocName.replace("_EN", ("_" + lang));
        return result;
    }
    //---------------------------------------A-TEAM---------------------------------------
3. By applying the enclosed NLSContentProfileRule.zip, the check-in screen for DataFile creation will have auto-naming enabled with a localization suffix (the default is English). You can change the default language by updating the GlobalNlsRule and assigning your preferred suffix.
See the rule markup for the dDocName field below:

    <$executeService("GET_WCP_LOCALE_CONTENTID")$><$dprDefaultValue=WCP_LOCALE.LocaleContentId & "_EN"$>

Steps to enable the above extensions and configurations:
1. Install the WebCenter component (WCP-LocalizationSupport.zip) via the AdminServer in the WebCenter Content Administration menus
2. Enable the component and restart the content server
3. Apply the configuration bundle to enable the new Global Rule (GlobalNlsRule), via WebCenter Content Administration/Config Migration Admin

New Content Creation Experience Result

Content Editing
Content editing will by default be enabled for authoring in the currently selected locale, since the content file is selected by Solution Scenario 1. This means a user can switch his browser locale and then get an editing experience adapted to the currently selected locale.

Notes
A-Team is planning to post a solution on how to switch the locale of the WebCenter Portal session inline, so that Content Presenter, the Navigation Model and other face-related features are localized accordingly. The Content Presenter examples used in this post are an extension of the following post: https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enable_content_editing_of_iterative

Downloads
- CPNLSCustomizations.zip - WebCenter Portal, Content Presenter customization: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/CPNLSCustomizations.zip
- WCP-LocalizationSupport.zip - WebCenter Content, extension component to enable localized creation of files with compliant auto-naming: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/WCP-LocalizationSupport.zip
- NLSContentProfileRule.zip - WebCenter Content, configuration update bundle to enable the Global Rule for new check-in naming of data files: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/NLSContentProfileRule.zip

    Read the article

  • Top tweets SOA Partner Community – October 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Ronald Luttikhuizen ?My latest upload: SOA Made Simple | Introduction to SOA on @slideshare http://www.slideshare.net/rluttikhuizen/soa-made-simple-introduction-to-soa … via @SlideShare OTNArchBeat ?ArchBeat Link-o-Rama for October 4, 2013 #cloud #linux #oaam #soa http://pub.vitrue.com/y4SK Lucas Jellema ?My blog article shows news on the new SOA Suite 12c release - as it was publicly available during #oow13 see: http://technology.amis.nl/2013/09/27/oow13-soa-suite-12c/ … Yogesh Sontakke ?Introducing OER's new Express Workflows - Simplified Lifecycle Management. Blog post: http://bit.ly/16JKHCf @soacommunity #soagovernance SrinivasPadmanabhuni ?"@OTNArchBeat: SOA and User Interfaces - by @soacommunity @HajoNormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp " SOA Community ?SOA and User-Interfaces http://servicetechmag.com/I76/0913-2 article published part of #industrialSOA at Service Technology Magazine #soacommunity Estafet Limited ?@Estafet win @UKOUG Middleware Partner of the Year 2013 Yogesh Sontakke ?RT @VikasAatOracle: #Oracle #B2B - written by experts #soa #soacommunity #oraclesoa - time to get a copy ! @SOAScott Danilo Schmiedel ?Thanks a lot to Juergen @soacommunity for the super interesting and well-organized Partner Advisory Council yesterday! Such a Great Value! OTNArchBeat ?Case management supporting re-landscaping application portfolios | @leonsmiers http://pub.vitrue.com/MC5j Samantha Searle ?Apply for the #GartnerBPM 2014 Excellence Awards - find out how via this link http://ow.ly/ptaNQ #Gartner #bpm #process #entarch #cio OTNArchBeat ?SOA and User Interfaces - by @soacommunity @hajonormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp Dain Hansen ?Hybrid #cloud is on the rise, but is the IT department's culture standing in the way? http://add.vc/eJN #CloudIntegration #OracleSOA OTNArchBeat #SOASuite 11g ps6 - Download your log files directly from the Enterprise Manager | @whitehorsenl http://pub.vitrue.com/KrJ2 Whitehorses ?Whiteblog: SOA Suite 11g ps6 - Download your log files directly from the Enterprise Manager (http://goo.gl/2Gqiax ) Rajesh Raheja ?Cloud integration session recap #oow13 http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 … Vikas Anand ?@Ahmed_Aboulnaga thanks for the excellent summary and kind words. #oow13 #cloud #oraclesoa http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 … Luis Augusto Weir ?REST is also SOA. Check it out http://www.soa4u.co.uk/2013/09/restful-is-also-soa.html?m=1 … #soacommunity Graham ?“@OracleBPM & @soacommunity: 5 Ways to Modernize Applications with BPM #AppAdvantage" #oracleday http://bit.ly/15yC6e3 SOA Community ?#ACED director asked me for BPM references in FSI - ever visited my #SOACommunity workspace? https://beehiveonline.oracle.com/teamcollab/overview/SOA_Community_Workspace … #soacommunity #bpm OracleBlogs ?SOA Community Newsletter September 2013 http://ow.ly/2Aj6oK OTNArchBeat ?OOW13: First glimpses of the new #SOASuite12c | @LucasJellema http://pub.vitrue.com/2YgX sbernhardt ?Just published new blog entry on OOW 2013 wrap up. http://thecattlecrew.wordpress.com/2013/09/30/oracle-open-world-2013-wrap-up/ … #oow13 @OC_WIRE @soacommunity Emiel Paasschens ?Home with family after an overwhelming #OOW week in San Francisco with lot of info & meetings. 
Special thanx to @OracleBelux & @soacommunity Robert van Mölken ?Had a awesome week at #OOW13 in SF. Highlights were the @soacommunity Wine tour, @OracleBelux meet-ups and @OracleSOA CAB. Thanks to all :) SOA Community ?The place Oracle Fusion middleware comes from - Oracle 200 - TKs office - next Oracle 100 - SOA & BPM #soacommunity pic.twitter.com/qibFOQVbRo Oracle BPM ?5 Ways to Modernize Applications with BPM #AppAdvantage http://pub.vitrue.com/l2dn Simon Haslam ?Ha ha - how did we miss that! RT @lucasjellema: Post conference announcement of a new middleware appliance? #oow13 pic.twitter.com/3NvcjPfjXb OTNArchBeat ?The OTNArchBeat Daily is out! http://paper.li/OTNArchBeat/1329828521 … ? Top stories today via @lucasjellema @myfear @TylerJewell Packt Publishing ?Get 50% off ALL our DRM-free eBooks - this weekend only! Go to http://www.packtpub.com/ and use code BIG50, as often as you like! #BIG50 OracleBlogs ?Global Perspective: ACE Director from EMEA Weighs in on AppAdvantage http://ow.ly/2Afek2 orclateamsoa ?#orclateamsoa Blog: BPM Auditing Demystified - I've heard from a couple of customers recently asking about BPM aud... http://ow.ly/2AfbAn AMIS, Oracle & Java ?Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/gb4HLbUarR SOA Community ?Let us know what was best at #OOW @soacommunity save trip home - thanks for coming to #SF ;-) see you at #OOW2014 pic.twitter.com/xbWXjRapqh Lonneke Dikmans ?Nice @dschmied is talking about the different steps in his project. He starts with explaining the user interface design #oow13 #ux #acm Lonneke Dikmans ?Saving the best for the end: managing knowledge worker processes by @dschmied and Prasen.#oow13 #acm cool stuff: adaptive case management Luis Augusto Weir ?SOA Governance is more than just OER. Requires people, processes and tools. Check it out #SOA #soacommunity http://youtu.be/Ohn06smVKVw Lonneke Dikmans ?“@OracleSOA: #oow Join us for:Enterprise SOA Infrastructure Best Practices Thu 9/26 2:00 PM - 3:00 PM Moscone West - 2020 SOA Community ?Business Process Management (BPM) 11g PS6 Awareness Course http://wp.me/p10C8u-1as Ajay Khanna ?Detect, Analyze, Act - Fast! http://wp.me/p10C8u-1ao via @soacommunity #OracleBPM Simone Geib ?It took a while, but I finally reached 500 followers. Thanks everybody and especially @soacommunity :) SOA Community ?Functional Testing Business Processes In Oracle BPM Suite 11g by Arun Pareek http://wp.me/p10C8u-1aq SOA Community Distribute the September edition of the SOA Community newsletter READ it! Didn't receive it register http://www.oracle.com/goto/emea/soa #soacommunity SOA Community ?Detect, Analyze, Act - Fast! by Ajay Khanna http://wp.me/p10C8u-1ao Robert van Mölken ?Finalised my #OOW presentation #CON8736 and live demo on wednesday 25th at 11:45am. Also giving a short version at the SOA CAB on thursday. Rajesh Raheja ?"The AppAdvantage of Oracle Cloud & On-premises Integration" http://bit.ly/14RYHmZ SOA Community ?Additional new content SOA & BPM Partner Community http://wp.me/p10C8u-1aw Dain Hansen ?Right now #oow13 SOA, BPM - Customer Advisory Boards. 'No tweeting' says @SOASimone. Instagram of funny cats still ok. leonsmiers ?Case Management with Oracle BPM Suite our presentation on #oow13 http://www.slideshare.net/leonsmiers/oracle-open-world-2013-case-management-smiers-kitson … #capgemini @nkitson72 Mark Simpson ?Flextronics reduced cost of processing an invoice to <$1 from $7 due to BPM @OracleBPM #oow13 saving millions. 
Way less than industry avg. Holger Mueller ?#Siemens Shared Services CIO says that #Fusion #Middleware made the difference for #Oracle over #Workday. #Integration matters. #OOW13 oracleopenworld ?Miss any #oow13 keynotes, or simply want to rewatch? Check out the live streaming site for keynotes on demand: http://pub.vitrue.com/RG4D SOA Community ?Analyze your m2m data and act on it! Big data Pattern matching, fast data & soa #soacommunity #oow pic.twitter.com/48Q1z4ckh7 SOA Community ?Top tweets SOA Partner Community – September 2013 http://wp.me/p10C8u-1cR Simone Geib ?#oraclesoa hands on lab at #oow13 pic.twitter.com/IJJrqXIMiu Danilo Schmiedel #oow13 CON8436: Managing Knowledge Worker Processes. Come & get a free Adaptive Case Management poster @soacommunity pic.twitter.com/FRc2CSyLwb John Sim ?Great job again Jurgen @soacommunity helping bring Ace Community together! Danilo Schmiedel ?Excellent #OracleBPM Adaptive Case Management intro by @heidibuelowBPM and Prasen at the #oow13 demo ground.Last chance today @soacommunity SOA Community ?Thanks to all our #bpm #soa and #weblogic partners for the great middleware business #oow #soacommunity pic.twitter.com/dBwZ8DMHfH Whitehorses ?Thanks @soacommunity for the party tonight. Great to meet product management & see all the talented EMEA middleware specialists. #oow13 Danilo Schmiedel ?Great tool demo from Link Consulting about managing your SOA with OER #oow13 @soacommunity Torsten Winterberg ?“@soacommunity: thanks to @dschmied and @OC_WIRE for making it happen to have our case management poster as printed version hier at #oow13 Ronald Luttikhuizen ?These were the architects involved in the diagram excitement :) just after State of SOA podcast with @OTNArchBeat pic.twitter.com/5B8jIrVTA9 SOA Community ?Tanks to AVIO for the excellent #bpmn poster and the great bpm business - visit then at #OOW & get the poster pic.twitter.com/ebTg9pFY1C Dain Hansen ?Kurian introducing Oracle Platform-as-a-Service developments. #oow13 #OracleCloud pic.twitter.com/evJLTU53rx Bruce Tierney ?API Management "multi-level pie chart" at #oow13 by Oracle's Tim Hall pic.twitter.com/q12OIRdaue Dain Hansen ?This is not your Daddy's BAM @soacommunity: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/EvwqXW9U5j SOA Community ?Is this BAM? 
Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/LybHxyF362 SOA Community ?SOA governance by @Yogesh_Sontakke at demo point sr214 many good new features - key for soa projects #oow #soa pic.twitter.com/DFK0ummsK1 SOA Community ?Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/GDKcqDGhCF SOA Community ?Adaptive Case Management demo point at #OOW visit @heidibuelowBPM get a demo and cmmn notation poster #soacommunity pic.twitter.com/T7yEyI7tdn Lonneke Dikmans ?In case you missed it: http://blog.vennster.nl/2013/09/case-management-part-1.html?spref=tw … Lucas Jellema ?SOA Suite news: Cloud Adapters RightNow and SalesForce plus SDK to develop custom cloud adapters (CY13); REST/JSON support in SB/SCA (12c) Oracle SOA ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/UfPB Dain Hansen ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/4QWA Hajo Normann ?#BigData, eventing & real time #analytics suggest timely next actions in #oracleBPM & #oracleACM; #oow13 #FastData pic.twitter.com/aFVGrTXPqu Mark Simpson ?OEP CQL engine now used in BAM12c for event stream summary computation with temporal and pattern match features to feed dashboards. #oow13 Mark Simpson ?BAM12c virtually a new product. Analytics that senses ahead of time and also compares to historical trends to guide process or case #oow13 Andrejus Baranovskis ?Enabling UI Shell 12c/11g Multitasking Behavior http://fb.me/18l9vxQfA Amit Zavery ?Oracle Fusion Middleware Empowers Business Users, EVP Thomas Kurian's session summary http://onforb.es/18Ta1jf #oow13 #oraclemiddle #oracle Vikas Anand ?#oow13 #oracleopenworld BPM on display at Middleware keynote by Thomas Kurian pic.twitter.com/PMm719S0Ui SOA Community ?BPM composer - business user empowerment #oow #soacommunity #bpmsuite pic.twitter.com/0Qgl6oVh0h SOA Community ?Model your process in BPMN - make is executable and analyze & improve them #oow #soacommunity pic.twitter.com/jkLlObDdoi Bruce Tierney ?@demed and Thomas Kurian talk mobile and cloud at #oow13 pic.twitter.com/bAAeqn5a2V Amit Zavery ?Thomas Kurian showcasing all the new features of Oracle Fusion Middleware #oraclemiddle #oow13 SOA Community ?Demo time cloud adapters in #soasuite at Thomas Kurian keynote. Build and integrate mobile apps in minutes #oow pic.twitter.com/qTnCOJLLwS SOA Community ?Soa suite cloud adapters and mobile apps by @demed at Thomas Kurian keynote #oow #oracle #soacommunity pic.twitter.com/5aMLkNH4Ng Danilo Schmiedel ?First impressions from Oracle Open World 2013 http://wp.me/p2fG8x-77 @soacommunity @OC_WIRE SOA Community ?Good morning SFO let us know if you attend #OOW & #OPN keynote - #soacommunity pic.twitter.com/hzLYGDlRgE Simon Haslam ?Had a very useful @wlscommunity PAC meeting yesterday... & probably the best swag to date! pic.twitter.com/Lqus8ysbp7 Vikas Anand ?Oracle SOA Suite - Team Blog http://bit.ly/18I1Zj7 Rajesh Raheja ?Introducing new Cloud Connectivity Adapters #soa #demopod #oow13. I'll be there Sep 23 & 24 3-6pm to meetup http://bit.ly/18I1Zj7 leonsmiers ?..and again a very successful Oracle SOA/BPM partner council on the eve of #oow13. Thanks Jurgen! @soacommunity pic.twitter.com/aM1LMlb7Yw Vikas Anand ?#oow13 #soa #oep #exalogic Canon Delivers Fast Data with Oracle Event Processing (Oracle SOA Suite) http://bit.ly/1dwPeHb #soacommunity Rolf Scheuch ?The ACM poster is a big success. Great talks and .... 
I am soon out of posters! #bpmcon #ACM pic.twitter.com/TriaUyXRWK Oracle SOA ?British Telecom Sucess with Oracle B2B #oow #soa #b2b http://pub.vitrue.com/1RWi leonsmiers ?(Oracle) Case Management supporting re-platforming, a pre-read before our presentation at #oow13 http://leonsmiers.blogspot.com/2013/09/case-management-supporting-re.html … #capgemini #yammer SOA & BPM Partner CommunityFor regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Twitter,SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 1 of 2

    - by pinaldave
This is the first part of a two-part series on the Practices for Software Startups Pluralsight course. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture-backed company Telerik, a leading vendor of developer and team productivity tools. Stephen is also a Certified Scrum Master, Certified Scrum Professional, and PMP, and speaks regularly at industry conferences around the world. He has written several books on application and database development. Stephen is also a board member of the Scrum Alliance.

Startups – Everybody's Dream
Start-up companies are an important topic right now – everyone wants to start their own business. It is also important to remember that all companies were a start-up at one point – from your corner store to giants like Microsoft and Apple. Research shows that not every start-up succeeds; in fact, most fail within their first year. There are many reasons for this, and one is that there are many stages to a start-up company, and stumbling at any of these stages can lead to failure. It is important to understand what lets a start-up clear all its hurdles and become successful. It is even important to define success. For most start-ups this would mean becoming their own independently functioning company or being bought out for a hefty profit by a larger company. The idea of making a hefty profit by living your dream is extremely important, and you can even think of start-ups as the new craze. That's why studying them is so important – they are very popular, but things have changed a lot since their inception.

Starting the Startups
Beginning a start-up company used to be difficult, but now facilities and information are widely available, and it is much easier. But that means it is much easier to fail, too. Previously, to start your own company, everything was planned and organized, and resources were secured and backed up before beginning; even the idea of starting your own business was a big thing. Now anybody can do it, and the steps are simple and outlined everywhere – you can get online software and easily outsource, cloud source, or crowdsource a lot of your material. But without the type of planning previously required, things can often go badly.

New Products – New Ideas – New World
There are so many fantastic new products, but they don't always reach success. I find start-up companies very interesting, and whenever I meet someone who is interested in the subject or already starting their own company, I always ask what they are doing, their plans, goals, market, etc. I am sorry to say that in most cases, they cannot answer my questions. It is true that many fantastic ideas fail because of bad decisions. These bad decisions were not made intentionally; people were simply unaware of what they should be doing. This will always lead to failure. But I am happy to say that all these issues can be avoided, because Pluralsight is now offering a course all about start-ups by Stephen Forte. Stephen is a start-up leader. He has successfully started many companies, and most are still going strong or have gone on to even bigger and better things.

Beginning Course on Startups
I have always thought start-ups are a fascinating subject, and decided to take his course, but it is three hours long.
This would be hard to fit into my busy work day all at once, so I decided to do half of his course before my daughter wakes up, and the other half after she goes to sleep. The course is divided into six modules, so this would be easy to do. I began the first chapter early in the morning, at 5 am. Stephen jumped right into the middle of the subject in the very first module – designing your business plan. The first question you will have to answer to yourself, to others, and to investors is: what is your product, and when will we be able to see it? So a very important concept is the "minimum viable product." This means setting goals for yourself and your product. We all have large dreams, but your minimum viable product doesn't have to be your final vision at the very first. For example: Apple is a giant company, but it is still evolving. Steve Jobs didn't envision the iPhone 6 at the very beginning. He had to start at the first iPhone and do his market research, and the idea evolved into the technology you see now. So for yourself, you should decide on a beginning and a stopping point. Do your market research. Determine who you want to reach, what audience you want for your product. You can have a great idea that simply will not work in the market due to lack of need, bottlenecks, lack of resources, or competition. There is a lot of research that needs to be done before you even write a business plan, and Stephen covers it in the very first chapter.

The Team – Unique Key to Success
After jumping right into the subject in the very first module, I wondered what Stephen could have in store for me for the rest of the course. Chapter number two is building a team. Having a team is important regardless of what your startup is. You can be a true visionary with endless ideas and energy, but one person still cannot do everything. It is important to decide from the very beginning whether you will have cofounders and team leaders, and how many employees you'll need. Even more important, you'll need to decide what kind of team you want – what personalities, skills, and type of energy you want each of your employees to bring. Do you want to have an A+ team with a B- idea, or do you have a B- idea that needs an A+ team to sell it? Stephen asks all the hard questions! I was especially impressed by his insight on development. You have to decide if you need developers, how many, and what their skills should be. I found this insight extremely useful for everyday usage, not just for start-up companies. I would apply this kind of information in management at any position. An amazing team will build an amazing product – and that holds whether you're a start-up company or a small team working for a much larger business.

Customer Development – The Ultimate Objective
Chapter three was about customer development. According to Stephen, there are four different steps to developing a customer base. The first question to ask yourself is whether you are envisioning a large customer base buying a few products each, or a small, dedicated base that buys a lot of your product – quantity vs. quality. He also discusses how to earn, retain, and get more customers. He also says that each customer should be placed in a different role – some will be like investors, who regularly spend with you and invest their money in your business. It is then your job to take that investment and turn it into a better product in the future. You need to deal with their money properly – think of it as theirs as investors, not yours as profit.
At the end of this module I felt that only Stephen could provide this kind of insight, and then he listed all the resources he took his information from. I have never seen a group of people so passionate about their customers. It was indeed a long day for me. In tomorrow's part 2 we will discuss the remaining three modules and also see a quick video of the Practices for Software Startups Pluralsight course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to Manage Your Movies in Boxee

    - by DigitalGeekery
Boxee is a free cross-platform HTPC application that plays media locally and via the Internet. Today we'll take a look at how to manage your local movie collection in Boxee. Note: We are using the most recent version of Boxee running on Windows 7. Your experience on an earlier version or a Mac or Linux build may vary slightly. If you are using an earlier version of Boxee, we recommend you update to the current version (0.9.21.11487). The latest update features significant improvements in file and media identification.

Naming your Movie Files
Proper file naming is important for Boxee to correctly identify your movie files. Before you get started, you may want to take some time to name your files properly. Boxee supports the following naming conventions:
Lawrence of Arabia.avi
Lawrence.of.Arabia.avi
Lawrence of Arabia (1962).avi
Lawrence.of.Arabia(1962).avi
For multi-part movies, you can use .part or .cd to identify the first and second parts of the movie:
Gettysburg.part1.avi
Gettysburg.part2.avi
If you are unsure of the correct title of the movie, check with IMDB.com. (An illustrative script for checking these conventions appears at the end of this article.)

Supported File Types
Boxee supports the following video file types and codecs: AVI, MPEG, WMV, ASF, FLV, MKV, MOV, MP4, M4A, AAC, NUT, Ogg, OGM, RealMedia RAM/RM/RV/RA/RMVB, 3gp, VIVO, PVA, NUV, NSV, NSA, FLI, FLC, and DVR-MS (beta support); CDs, DVDs, VCD/SVCD; MPEG-1, MPEG-2, MPEG-4 (SP and ASP, including DivX, XviD, 3ivx, DV, H.263), MPEG-4 AVC (aka H.264), HuffYUV, Indeo, MJPEG, RealVideo, QuickTime, Sorenson, WMV, Cinepak

Adding Movie Files to Boxee
Boxee will automatically scan your default media folders and add any movie files to My Movies. Boxee will attempt to identify the media and check sources on the web to get data like cover art and other metadata. You can add as many sources to Boxee as you like, from your local hard drive, external hard drives, or your network. You will need to make sure you have access to shared folders on the networked computer hosting the media you want to share. You can browse for other folders to scan by selecting Scan Media Folders. You can also add media files by selecting Settings from the Home screen, then Media, and then selecting Add Sources. Browse for your directory and select Add source. Next, you'll need to select the media type and the type of scanning. You can also change the share name if you'd like. When finished, select Add. You should see a quick notification at the top of the screen that the source was added. Select Scan source to have Boxee begin scanning your media files and attempt to properly identify them. Your movies may not show up instantly in My Movies. It will take Boxee some time to fully scan your sources, especially if you have a large collection. Eventually you should see My Movies begin to populate with cover art and metadata. You can see the progress and find unidentified files by clicking on the yellow arrow to the left, or navigating to the left with your keyboard or remote and selecting Manage Sources. Here you can see how many files (if any) Boxee failed to identify. To see which titles are unresolved, select Unidentified Files. Here you'll find your unresolved files. Select one of the unidentified files to search for the proper movie information. Next, select the Identify Video icon. Boxee will fill in the title of the file, or you can edit the title yourself in the text box. Click Search. The results of your search will be displayed. Scroll through and select the title that fits your movie.
Check the details of the film to make sure you have the correct title, and select Done.

Fixing Incorrectly Identified Files
If you find a movie has been incorrectly identified, you can correct it manually. Select the movie, then search for the correct movie title from the list and select it. When you're sure you found the correct movie, click Done.

Filtering your Movies
You can filter your movie collection by genre, or by whether it has been marked as watched or unwatched. When you've finished watching a movie, Boxee will mark it as watched. You can also manually mark a title as watched. Boxee also features a wide variety of genres by which you can filter the titles in your library.

Playing your Movie
When you're ready to start watching a movie, simply select your title. From here, you can select the "i" icon to read more information about the movie, add it to your queue, or add a shortcut. Click Local File to begin playing. Now you're ready to enjoy your movie. If you don't have a large movie collection or just need more selection, you may want to check out the Netflix App for Boxee. Looking for a Boxee remote? Check out the iPhone App for Boxee.

Links
Download Boxee
IMDB.com
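As an aside (this script is not part of the original article; it is a hypothetical illustration, and the pattern and sample filenames are assumptions), a small Node.js sketch that checks filenames against the naming conventions described above might look like this:

    // Illustrative only: checks whether a movie filename follows the
    // "Title.ext" / "Title (Year).ext" conventions Boxee understands.
    var pattern = /^(.+?)[. ]?(?:\((\d{4})\))?\.(avi|mkv|mp4|wmv|mov)$/i;

    function checkName(fileName) {
        var match = fileName.match(pattern);
        if (match) {
            var title = match[1].replace(/\./g, " "); // dots read as spaces
            console.log(fileName + " -> title: " + title +
                        (match[2] ? ", year: " + match[2] : ""));
        } else {
            console.log(fileName + " -> does not match the convention");
        }
    }

    // Hypothetical sample filenames
    ["Lawrence of Arabia (1962).avi",
     "Lawrence.of.Arabia.avi",
     "Gettysburg.part1"].forEach(checkName);

Anything the pattern rejects is a candidate for renaming before Boxee scans it.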

    Read the article

  • Meet the New Windows Azure

    - by Leniel Macaferi
Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

New Admin Portal and Command Line Tools
Today's release comes with a new Windows Azure portal that lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting of data (which makes it very easy to use for large deployments), works across all browsers, and offers a lot of great new features - including built-in support for VMs (virtual machines), Web Sites, Storage, and monitoring of cloud-hosted Services. The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be done programmatically by accessing this Web API. Today we are also releasing command line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering a downloadable set of tools for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

Virtual Machines (VMs)
Windows Azure now supports the ability to deploy and run durable/persistent VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively you can upload and run your own custom VHD images. Virtual machines are durable (meaning anything you install inside them persists across reboots) and you can run any operating system in them. Our built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance, you can easily use Terminal Server or SSH to access it in order to configure and customize the virtual machine however you want (and optionally capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run virtually any workload on the Windows Azure platform. The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and control resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple disks to VMs (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - which will cause Windows Azure to continuously replicate your storage to a secondary data center (creating a backup), located at least 640 kilometers away from your primary data center.
We use the same VHD format that is supported with Windows virtualization today (which we have released as an open specification), enabling you to easily migrate existing workloads you have already virtualized onto Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily move cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export steps are required.

Web Sites
Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale your application as your traffic grows. You can create a new web site on Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites, including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates for Visual Studio and WebMatrix that enable developers to easily deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of the web site deployment - as well as the ability to incrementally update the database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new online TFS service (which enables a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to execute automatic deployments. Once you do a deployment using TFS or Git, the deployments tab will keep track of the deployments you make and allow you to select an older (or newer) deployment so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared, multi-tenant hosting environment (where a site you deploy is one of several sites running on a shared set of server resources). This gives you an easy way to start developing projects at no cost (a minimal Node.js example of the kind of site you could deploy is sketched below).
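As a purely hypothetical illustration (this code is not part of the original post), a minimal Node.js app of the kind these Web Sites host is just a few lines:

    // Minimal Node.js web site; Windows Azure Web Sites pass the
    // listening port through the PORT environment variable.
    var http = require('http');
    var port = process.env.PORT || 3000;

    http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello from Windows Azure Web Sites!');
    }).listen(port);

Pushing a file like this through Git, FTP or Web Deploy, as described above, is all it takes to get it running.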
You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use - allowing you, for example, to increase the capacity of your reserved/private instance as your traffic grows. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis - which allows you to scale your resources up and down to meet only what you need.

Cloud Services and Distributed Caching
Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures and automated application management, and that can scale to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are now renaming this capability to "cloud services". We are also enabling a bunch of new features with them.

Distributed Cache
One of the really cool new capabilities being enabled with cloud services is a new distributed cache capability that lets you use and configure a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your application or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, this new cache capability can also now support the Memcached protocol - which allows you to point code written against Memcached at the distributed cache (with no code changes required). The new distributed cache can be set up to run in one of two ways:
1) Using a co-located cache approach. With this option you allocate a percentage of memory from your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application - regardless of whether the cached data is stored on this or another role. The big benefit of the "co-located" cache option is that it is free (you don't have to pay anything to enable it) and it lets you use what might otherwise be unused memory within your application's VMs.
2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access.
You can use these roles to cache tens or hundreds of GB of data in memory extremely effectively - and the cache can be elastically increased or decreased at runtime within your application.

New SDKs and Tooling Support
We have updated all of the Windows Azure SDKs (software development kits) with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source code is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure in particular has a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages that are offered on all of those systems - enabling developers to build Windows Azure applications using any operating system for development.

Much, Much More
The above summary is just a short list of some of the improvements being delivered today in either preview or final form - there is a lot more included in today's release. Among these improvements are new Virtual Private Networking capabilities, a new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded storage and networking hardware, SQL Reporting Services, new identity features, support for more than 40 new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I'm giving at 1pm PDT today, June 7th (later today), where I'll walk through all of the new features. We will be opening up the new features I referred to above for public use a few hours after the talk concludes. We are really excited to see the great applications you will build with these new capabilities. Hope this helps, - Scott   Translated from the original post by Leniel Macaferi.

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
There is a complete and detailed Telco Churn case study "How to" Blog Series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

- mine Star Schemas and join tables and views together to obtain a complete 360 degree view of a customer
- combine transactional data, e.g. call record detail (CDR) data, etc.
- define complex data transformation, model build and model deploy analytical methodologies inside the Database

His blog is posted as a multi-part series. Below are some opening excerpts from the first 3 blog entries. This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks Ari!

Mining a Star Schema: Telco Churn Case Study (1 of 3)

One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...

Mining a Star Schema: Telco Churn Case Study (2 of 3)

This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

1) Handling missing values for call data records

The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach.
The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

    select cust_id, cdre.tariff, cdre.month, mins
    from cdr_t cdr partition by (cust_id)
         right outer join
         (select distinct tariff, month from cdr_t) cdre
         on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

.......

Mining a Star Schema: Telco Churn Case Study (3 of 3)

Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

    create or replace view churn_data_high as
    select * from churn_prep where value_band = 'HIGH';

It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multivariate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

    begin
      dbms_predictive_analytics.explain(
        'churn_data_high', 'churn_m6', 'expl_churn_tab');
    end;
    /
    select * from expl_churn_tab where rank <= 5 order by rank;

    ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME  EXPLANATORY_VALUE  RANK
    -------------------- -----------------  -----------------  ----
    LOS_BAND                                       .069167052     1
    MINS_PER_TARIFF_MON  PEAK-5                    .034881648     2
    REV_PER_MON          REV-5                     .034527798     3
    DROPPED_CALLS                                  .028110322     4
    MINS_PER_TARIFF_MON  PEAK-4                    .024698149     5

From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest univariate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object-relational approach, many related predictors are included within a single top-level column.

.....

NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

Scene setting

A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and that is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

General tips

Here is a simple tick list you can follow in order to tune your lookups:

- Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
- Only pick the columns that you need, ignore everything else.
- Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
- Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
- Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

Thinking outside the box

It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

- There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
- Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
- Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop (see the sketch after this list).
- Taking the previous data partitioning idea further… a dataflow can contain more than one data path, so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
- Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
- Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.

Above all, experiment and be creative with different combinations. You may be surprised at what works.
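For the partition-by-region idea above, one possible shape (a sketch only; the variable names User::CurrentRegion and User::LookupSql are hypothetical) is a small Script Task inside the ForEach loop that rebuilds the statement used to charge the Lookup cache, with the Lookup's SqlCommand bound to the variable via an expression:

    // Inside an SSIS Script Task (C#). Assumes the package variables
    // User::CurrentRegion (read-only) and User::LookupSql (read-write) exist
    // and are listed in the task's variable collections. ScriptResults is the
    // enum generated by the Script Task template.
    public void Main()
    {
        string region = (string)Dts.Variables["User::CurrentRegion"].Value;

        // A narrow column list plus a WHERE clause keeps the cache as small as possible.
        Dts.Variables["User::LookupSql"].Value =
            "SELECT LocationId, LocationName " +
            "FROM dbo.Location " +
            "WHERE Region = N'" + region.Replace("'", "''") + "'";

        Dts.TaskResult = (int)ScriptResults.Success;
    }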
Final thoughts

If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

    public class DynamicFoo : DynamicObject
    {
        Dictionary<string, object> properties = new Dictionary<string, object>();

        public string Bar { get; set; }
        public DateTime Entered { get; set; }

        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            result = null;
            if (!properties.ContainsKey(binder.Name))
                return false;
            result = properties[binder.Name];
            return true;
        }

        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            properties[binder.Name] = value;
            return true;
        }
    }

This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember() which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.

Strong Typing and Dynamic Casting

I now can instantiate and use DynamicFoo in a couple of different ways:

Strong Typing

    DynamicFoo fooExplicit = new DynamicFoo();
    var fooVar = new DynamicFoo();

These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, and so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above as it's never used. Using either of these I can access the native properties:

    DynamicFoo fooExplicit = new DynamicFoo();

    // static typing assignments
    fooVar.Bar = "Barred!";
    fooExplicit.Entered = DateTime.Now;

    // echo back static values
    Console.WriteLine(fooVar.Bar);
    Console.WriteLine(fooExplicit.Entered);

but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic.

Dynamic

    dynamic fooDynamic = new DynamicFoo();

fooDynamic on the other hand is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:

    DynamicFoo fooExplicit = new DynamicFoo();
    dynamic fooDynamic = fooExplicit;

Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast, so there's no need to use as dynamic. Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically.
A dynamic type will look for members to access or call in two places:

- Using the strongly typed members of the object
- Using the IDynamicMetaObjectProvider interface methods to access members

So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed at runtime *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:

    dynamic fooDynamic = new DynamicFoo();

    // dynamic typing assignments
    fooDynamic.NewProperty = "Something new!";
    fooDynamic.LastAccess = DateTime.Now;

    // dynamically assigning static properties
    fooDynamic.Bar = "dynamic barred";
    fooDynamic.Entered = DateTime.Now;

    // echo back dynamic values
    Console.WriteLine(fooDynamic.NewProperty);
    Console.WriteLine(fooDynamic.LastAccess);
    Console.WriteLine(fooDynamic.Bar);
    Console.WriteLine(fooDynamic.Entered);

The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see it's pretty easy to create an extensible type this way that can dynamically add members at runtime.

The Alter Ego of IDynamicObject

The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong typing, compile time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic you lose Intellisense, which means there's no Intellisense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior which accesses native members, plus it uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:

    dynamic fooDynamic = new DynamicFoo();
    fooDynamic.NewProperty = "New Property Value";

    DynamicFoo foo = fooDynamic;
    foo.Bar = "Barred";

Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to the DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can induce the type, so I don't need to specify as dynamic or as DynamicFoo.

Moral of the Story

This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores as an object interface.
You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in CSharp .NET
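One nuance the post doesn't show explicitly: when TryGetMember returns false for a name that also isn't a native class member, the runtime binder throws. A quick sketch (assuming the DynamicFoo class above and a reference to Microsoft.CSharp):

    dynamic foo = new DynamicFoo();

    try
    {
        // "DoesNotExist" is neither a class member nor a key in the internal
        // dictionary, so TryGetMember returns false and the binder throws.
        Console.WriteLine(foo.DoesNotExist);
    }
    catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException ex)
    {
        Console.WriteLine("Missing member: " + ex.Message);
    }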

    Read the article

  • Day 2 - Game Design Documentation

    - by dapostolov
So yesterday I didn't cut any code for my game but I was able to do a tiny bit of research on the XNA Game Development Technology and the communities out there, and do you know what? I feel I'm a bit closer to my goal. The bad news is today I didn't cut code either. However, not all is lost, because I wanted to get my ideas on paper and today I did just that. Today, I began to jot down notes about the game and how I felt the visual elements would interact with each other. Unlike my workplace, my personal level of documentation is nothing more than a task list or a mind map of my ideas; it helps me streamline my solutions quite effectively and circumvent the long process of articulating each thought to the n-th degree. I truly dislike documentation (because I have an extremely hard time articulating my thoughts and solutions); however, because I tend to do a really good job with documentation I tend to get stuck writing the buggers. But as a generalist remark: 'No Developer likes documentation.' For now let's stick with my basic notes and call this post a living document. Here are my notes, fresh from after watching the first episode of Merlin's second season! Actually, a quick recommendation to anyone who is reading this (if anyone is): I truly recommend you envelope yourself in the medium or task you're trying to tackle. Be one with the moment and feel it! For instance: Are you writing a fantasy script / game? What would the music of the genre sound like? For me the Conan the Barbarian soundtrack by Basil Poledouris is frackin awesome. There are many other good CDs out there which I listen to (some even use medieval instruments), but Conan I keep returning to. It's a creative trigger for me. Ask yourself what would the imagery look like? Time to surf google for artist renditions of fantasy! What would the game feel like? Start playing some of your favorite games that inspire you; be wary though, have some self control and don't let it absorb your time. Anyhow, onto the documentation...

Screens, Scenes, and Sprites. Oh My! (groan...)

The first thing that came to mind were the screens; I thought the following would suffice:

- Menu Screen
- Character Customisation Screen
- Loading Screen?
- Battle Ground

The Menu Screen

Ok. So, the thought here is when the game loads a huge title is displayed: Wizard Wars. The player is prompted with 3 menu items: 1 Player Game, 2 Player Game, and Exit. Since I'm targeting the PC platform, as a non-networked game to start, I picture myself running my mouse over each menu option and the visual element of the menu item changes, along with a sound to indicate that I am over a current menu item. And as I move my mouse away, it changes back, and possibly an exit mouse sound plays. Maybe on the screen somewhere is a brazier alit with a magical tome open right beside it, OR, maybe the tome is the menu! I hear the menu music as mellow, not obtrusive or piercing. On a menu item select, a confirmation sound bellows to indicate the player's selection. The Esc key will always return me to the previous screen or desktop. The menu screen must feel...dark, like a really important ritual is about to happen, and thus the music should build up.
1 Player Game -> Customize Character(s)
2 Player Game -> Customize Character(s)
Exit -> Back to Windows

Notes: So the first things I pick up here are a couple of points:

- First and foremost, my artistic abilities suck crap, so I may have to hire an artist (now that I've said that, let's get techy)
- graphical objects will be positioned within a scene on each screen / window
- menu items will be represented graphically, possibly animated, and have sound / animation effects triggered by user input or a timeline
- I have an animated scene involving a brazier or fire on a stick
- IF I was to move this game to the Xbox, I'd have to track which menu item is currently selected (unless I do a mouse pointer type thing)
- a WindowObject has a scene
- a Scene has many GameObjects
- a GameObject has a position and a graphic or animation
- a MenuObject is a GameObject which has a mouse in, mouse out, and click event which either does something graphically (animation), does something with sound, or moves to another screen

Character Customisation Screen

With either the 1 or 2 player option selected, both selections will come to this screen; a wizard requires a name, powers, and vestments of course! Player one will configure his character first and then player two. I considered a split screen for PC, but to have two people fighting over a keyboard would probably suck. For Xbox, a split screen could work; maybe when I get into the networking portion (phase 2 blog?) of this game I will remove the 2 player option for PC and provide only multiplayer, and I will leave 2 player for Xbox...hmm... Anyhow...I picture the creation process as follows:

- Name: (textbox / keyboard entry) - for Xbox, this would have to be different
- Robe Color: (color box, or something)
- Stats: Speed, Oomph, and Health (as sliders), 1 as minimum and 10 as maximum
- Ok, Back, and Cancel buttons / options

Each stat has a benefit; these are listed below. The idea is the player decides if he wants his wizard to run fast, be a tank and ... hit with a purse. Regardless, the player will have a pool of 12 points to use. Ideally, a balanced wizard will have 5 in each attribute. Spells? The only spell of choice is a ball of fire, which comes without question. The music and screen should still feel like a ritual.

The Character

- Speed: Basically, how fast your character moves and casts.
- Oomph (Best Monster Truck Voice): PURE POWAH!!! The damage output of your fireball.
- Health: How much damage you can take.

Notes: I realise the game dynamics may sound uninteresting at the moment, but I think after a couple of releases we could have some other grand ideas such as: saved profiles, gold to upgrade an arsenal of spells, talents, etc...but for now...a vanilla fireball-thrower mage will suffice for this experiment. OK. So...

- a MenuObject may need to be loosely coupled to allow future items such as networking? may be a button?
- a CharacterObject has a name, speed, oomph, health and a funky robe color; a cap on the three stats (1-10); an arsenal of 1 spell (possibly could expand this)

The Loading Screen

As is.

The Battleground Screen

For now, I'm keeping the screen at max resolution for the PC. The screen isn't going to move or even be a split screen. I'm not aiming high here because I want to see what level of change is involved when new features / concepts are added to game content. I'm interested to find out if we could apply techniques such as MVC or MVVM to this type of development or is it too tightly coupled?
This reminds me of when my best friend and I were brainstorming our game idea (this is going back a while...1994, 6?) and he cringed at the thought of bringing business technology into games, especially when I suggested a database to store character information and COM / DCOM as the medium, but it seems I wasn't far off (reflecting); just like his implementation of an xml "config file" for dynamic DirectX menus back before .NET in 1999...anyhow...I digress...

The Battle

One screen, two characters lobbing balls of fire at each other...It doesn't get better than that. Every so often a scroll appears...and the fireballs bounce off walls, or the wizard has rapid fire, or even scrolls of healing! The scroll options are endless. Two bars at the top, each the color of the wizard (with their name beside the bar), indicate how much health they have. Possibly the appearance of the scrolls means the battle is taking too long? I'm thinking 1 player controls: up, down, left, right and space to fire. Or even possibly, mouse click and shift - mouse button to fire a spell in the direction they are facing. Two player controls: a, s, d, f and space AND arrows (up, down, left, right) and Del key or Ctrl. The game ends when a player has 0 health and a dialog box appears asking for a rematch / reconfigure / exit. Health goes down when a fireball (friendly or not) connects with a wizard. When a wizard connects with a scroll, a countdown clock / icon appears near the health bar and the wizard begins to glow. For the most part, a wizard can have only 1 scroll effect on him at a time.

Notes: Ok, there's a lot to cover here.

a CharacterObject is a GameObject
- it travels at a set velocity
- it travels in a direction
- it has sounds (walking, running, casting, impact, dying, laughing, whistling, other?)
- it has animations (walking, running, casting, impact, dying, laughing, idle, other?)
- it has a lifespan (determined by health)
- it is alive or dead
- it has a position

a ScrollObject is a GameObject
- it carries a transference of points "damage" (or healing, bad scroll effect?) (determined by caster)
- it carries a transference of "other"
- it is stationary
- it has a sound on impact
- it has a stationary animation
- it has an impact animation / or transfers an impact animation
- it has a fade animation?
- it has a lifespan (determined by game)
- it is alive or dead
- it has a position

a WallObject is a GameObject
- it has a sound on fireball impact?
- it is a still image / stationary
- it has an impact animation / or transfers an impact animation
- it is dead
- it has a position

a FireBall is a GameObject
- it carries a transference of points "damage" (or healing, bad scroll effect?) (determined by caster)
- it travels at a set velocity
- it travels in a direction
- it has a sound
- it has a travel animation
- it has an impact animation / or transfers an impact animation
- it has a fade animation?
- it has a lifespan (determined by caster)
- it is alive or dead
- it has a position

As I look at this, I can see some common attributes in each object that I can carry up to the GameObject (a rough sketch of that base class follows at the end of this post). I think I'm going to end the documentation here; it's taken me a bit of time to type this all out. Tomorrow, I'll load up my IDE and my paint studio to get some good old fashioned cowboy hacking going!

D.
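As flagged above, a rough sketch of what hoisting those common attributes into a GameObject base class could look like in C#/XNA (all names are inferred from the notes above; none of this is actual project code):

    using System;
    using Microsoft.Xna.Framework;

    // Common attributes pulled up from the CharacterObject / ScrollObject /
    // WallObject / FireBall notes.
    public abstract class GameObject
    {
        public Vector2 Position { get; set; }
        public bool IsAlive { get; set; }

        public virtual void Update(GameTime gameTime) { }
    }

    public class FireBall : GameObject
    {
        public Vector2 Direction { get; set; }   // it travels in a direction
        public float Velocity { get; set; }      // it travels at a set velocity
        public int Damage { get; set; }          // "transference of points", set by the caster
        public TimeSpan Lifespan { get; set; }   // determined by the caster

        public override void Update(GameTime gameTime)
        {
            // Move along the travel direction at the set velocity.
            Position += Direction * Velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        }
    }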

    Read the article

  • SQL Azure Reporting Limited CTP Arrived

    - by Shaun
It's been about 3 months since I registered for the SQL Azure Reporting CTP on Microsoft Connect after TechEd 2010 China. Today when I checked my mailbox I found that the SQL Azure team had just accepted my request and sent the activation code over to me. So let's have a look at the new SQL Azure Reporting.

Concept

SQL Azure Reporting provides cloud-based reporting as a service, built on SQL Server Reporting Services and SQL Azure technologies. Cloud-based reporting solutions such as SQL Azure Reporting provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead for report servers; and secure access, viewing, and management of reports. By using the SQL Azure Reporting service, we can:

- Embed the Visual Studio Report Viewer ASP.NET AJAX control or Windows Forms control to view reports deployed on the SQL Azure Reporting Service in our web or desktop application.
- Leverage the SQL Azure Reporting SOAP API to manage and retrieve the report content from any kind of application.
- Use the SQL Azure Reporting Service Portal to navigate and view the reports deployed on the cloud.

Since SQL Azure Reporting was built on the SQL Server 2008 R2 Reporting Service, we can use any tools we are familiar with, such as SQL Server Business Intelligence Development Studio and the Visual Studio Report Viewer. The SQL Azure Reporting Service runs as a remote SQL Server Reporting Service, just in the cloud rather than on a server beside us.

Establish a New SQL Azure Reporting

Let's move to the Windows Azure developer portal and click the Reporting item in the left side navigation bar. If you don't have the activation code you can click the Sign Up button to send a request through Microsoft Connect. Since I had already received the code by mail, I clicked the Provision button. Then, after agreeing to the terms of the service, I selected the subscription in which my SQL Azure Reporting CTP should be provisioned. In this case I selected my free Windows Azure Pass subscription. Then the final step: paste the activation code and enter the password of our SQL Azure Reporting Service. The user name of the SQL Azure Reporting will be generated by SQL Azure automatically. After a while the new SQL Azure Reporting Server will be shown on our developer portal. The Reporting Service URL and the user name will be shown as well. We can reset the password from the toolbar button.

Deploy Report to SQL Azure Reporting

If you are familiar with SQL Server Reporting Service you will find this part very similar to what you know and what you did before. First we open SQL Server Business Intelligence Development Studio and create a new Report Server Project. Then we create a shared data source from which the report data will be retrieved. This data source can be SQL Azure, but we can use a local SQL Server or another database if it opens the port up. In this case we use a SQL Azure database located in the same data center as our reporting service. In the Credentials tab page we enter the user name and password for this SQL Azure database. The SQL Azure Reporting CTP is only available at the North US Data Center now, so it is better to place the related SQL Server and hosted service in the same data center to avoid external data transfer fees. Then we create a very simple report: just retrieve all records from a table named Members and list them in a table.
In the data source selection step we choose the shared data source we created before, then enter the T-SQL to select all records from the Member table, then put all fields into the table columns. The report will look like the following. In order to deploy the report onto the SQL Azure Reporting Service we need to update the project properties. Right click the project node in the solution explorer and select the property item. In the Target Server URL item we will specify the reporting server URL of our SQL Azure Reporting. We can go back to the developer portal and select the reporting node from the left side, then copy the Web Service URL and paste it here. But notice that we need to append "/reportserver" after pasting. Then just click the Deploy menu item in the context menu of the project; Visual Studio will compile the report and then upload it to the reporting service accordingly. In this step we will be prompted to input the user name and password of our SQL Azure Reporting Service. We can get the user name from the developer portal, just next to the Web Service URL in the SQL Azure Reporting page. And the password is the one we specified when we created the reporting service. After about one minute the report will be deployed successfully.

View the Report in Browser

SQL Azure Reporting allows us to view the reports deployed on the cloud from a standard browser. We copy the Web Service URL from the reporting service main page and append "/reportserver", using the HTTPS protocol, and we get the SQL Azure Reporting Service login page. After entering the user name and password of the SQL Azure Reporting Service we can see the directories and reports listed. Clicking the report will launch the Report Viewer to render the report.

View Report in a Web Role with the Report Viewer

The ASP.NET and Windows Forms Report Viewers work well with the SQL Azure Reporting Service too. We can create an ASP.NET Web Role and add the Report Viewer control to the default page. What we need to change on the report viewer are:

- Change the Processing Mode to Remote.
- Specify the Report Server URL under the Server Remote category to the SQL Azure Reporting Web Service URL with "/reportserver" appended.
- Specify the Report Path to the report which we want to display. The report name should NOT include the extension name. For example, my report was in the SqlAzureReportingTest project and named MemberList.rdl, so the report path should be /SqlAzureReportingTest/MemberList.

And the next step is to specify the SQL Azure Reporting credentials. We can use the following class to wrap the report server credentials:

    private class ReportServerCredentials : IReportServerCredentials
    {
        private string _userName;
        private string _password;
        private string _domain;

        public ReportServerCredentials(string userName, string password, string domain)
        {
            _userName = userName;
            _password = password;
            _domain = domain;
        }

        public WindowsIdentity ImpersonationUser
        {
            get { return null; }
        }

        public ICredentials NetworkCredentials
        {
            get { return null; }
        }

        public bool GetFormsCredentials(out Cookie authCookie, out string user, out string password, out string authority)
        {
            authCookie = null;
            user = _userName;
            password = _password;
            authority = _domain;
            return true;
        }
    }

And then in the Page_Load method, pass it to the report viewer:
    protected void Page_Load(object sender, EventArgs e)
    {
        ReportViewer1.ServerReport.ReportServerCredentials = new ReportServerCredentials(
            "<user name>",
            "<password>",
            "<sql azure reporting web service url>");
    }

Finally, deploy it to Windows Azure and enjoy the report.

Summary

In this post I introduced the SQL Azure Reporting CTP which has just become available. Like other features in Windows Azure, SQL Azure Reporting is very similar to SQL Server Reporting. As you can see in this post, we can use the existing and familiar tools to build and deploy the reports and display them on a website. But SQL Azure Reporting is just in the CTP stage, which means:

- It is free.
- There's no support for it.
- It is only available at the North US Data Center.

You can get more information about the SQL Azure Reporting CTP from the following links:

- SQL Azure Reporting Limited CTP at MSDN
- SQL Azure Reporting Samples at TechNet Wiki

You can download the solutions and the projects used in this post here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
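As a footnote to the SOAP API option mentioned in the Concept section, listing the deployed reports programmatically might look roughly like this - a sketch only, assuming a proxy class generated from the service's ReportService2010.asmx endpoint; the URL and credentials are placeholders:

    // Assumes a web service proxy (ReportingService2010) generated from
    // https://<server>.reporting.windows.net/reportserver/ReportService2010.asmx
    var rs = new ReportingService2010
    {
        Url = "https://<server>.reporting.windows.net/reportserver/ReportService2010.asmx"
    };

    // SQL Azure Reporting uses forms authentication, so log on first.
    rs.LogonUser("<user name>", "<password>", null);

    // List everything deployed under the root folder, recursively.
    foreach (CatalogItem item in rs.ListChildren("/", true))
    {
        Console.WriteLine("{0} ({1})", item.Path, item.TypeName);
    }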

    Read the article

  • top tweets WebLogic Partner Community – November 2011

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity glassfish GlassFish Marek’s JAX-RS 2.0 content from Devoxx 2011 – bit.ly/sp2NJO chriscmuir chriscmuir New blog post: ADF bug: missing af:column borders in af:table for IE7 – t.co/81np2jug chriscmuir chriscmuir Reading: Oracle’s ADF Rich Client User Interface (RCUI) Guidelines – oracle.com/webfolder/ux/m… netbeans NetBeans Team Bottlenecks be gone! #Java Performance Tuning workshop in Munich w Kirk Pepperdine, Nov 29-Dec 2: ow.ly/7Akh5 OracleBlogs OracleBlogs Creating ADF Faces Comamnd Button at Runtime ow.ly/1fM9dE alexismp Alexis MP blogged "GlassFish Back from Devoxx 2011, Mature Java EE 6 and EE 7 well on its way" – bit.ly/rP8LV0 JDeveloper JDeveloper & ADF Usage of jQuery in ADF dlvr.it/x3t84 20 hours ago Favorite Retweet Reply OTNArchBeat OTNArchBeat Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive – Dec 1 – 11am PT / 2pm ET bit.ly/t61W4G oraclepartners ORCL PartnerNetwork Brand new Oracle WebLogic 12c will launch on December 1, 10AM PT with a global Webcast highlighting salient… t.co/aflQQ3IX OracleBlogs OracleBlogs JDeveloper and ADF at UKOUG t.co/2CQTiB9n fnimphiu Frank Nimphius Attending UKOUG? All ADF sessions at a glance: t.co/TcMNTMXp 21 Nov Favorite Retweet Reply JDeveloper JDeveloper & ADF Free Webinar ‘ADF Task Flows for Beginners’, information and registration t.co/66jXnGgo via javafx4you javafx4you Java Developer Workshop #2 – Dec 1, 2011 @ Oracle Aoyama center in Tokyo t.co/8p9q3W2B AMIS_Services AMIS Services #vacature #Oracle #ADF ontwikkelaars. bit.ly/AMISADF Gun jezelf een nieuwe uitdaging? Meer op: dld.bz/azZ5N OracleBlogs OracleBlogs Launch Invitation: Introducing Oracle WebLogic Server 12c t.co/bRxCKwAk fnimphiu Frank Nimphius The brand new WebLogic 12c will be released on December 1st 2011 !!! Register for online launch event t.co/pPScg4Xh glassfish GlassFish Announcing Oracle WebLogic 12c – t.co/qh8TdFEl AdamBien Adam Bien Sun Coding Conventions–The Only Standard (Stop Inventing): Code written according to the Sun Coding Conventions… t.co/qaUWp5Mz wlscommunity WebLogic Community Launch Invitation: Introducing Oracle WebLogic Server 12c wp.me/p1LMIb-4y andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Custom Exception Registration for ADF BC EO Attribute fb.me/1m6nXQD52 MNEMONIC01 Michel Schildmeijer Blog by Michel Schildmeijer: "Oracle WebLogic 12c has been announced" bit.ly/vk6WQL glassfish GlassFish Tab Sweep – Coherence, SBT for GlassFish, OSGi in question, Java EE plugins, … t.co/tVIL95lj OracleBlogs OracleBlogs JavaFX 2.0 at Devoxx 2011 ow.ly/1fJ5iT JDeveloper JDeveloper & ADF Experimenting with ADF BC Application Module Pool Tuning dlvr.it/wjLC1 OracleWebLogic Oracle WebLogic Brand New #WebLogic 12c Launch Event, Dec 1 10am PT. Hasan Rizvi, SVP Fusion Middleware. Developer session. bit.ly/weblogic12clau… JDeveloper JDeveloper & ADF PopUp and Esc/Cancel operations. ADF 11g dlvr.it/whrmC JDeveloper JDeveloper & ADF BPM Workspace: issue loading ADF task flows t.co/vk1gKPx5 OpenJDK OpenJDK Kelly O’Hair — OpenJDK B24 Available : t.co/1bFws6Nw JDeveloper JDeveloper & ADF Oracle ADF setting Task flow to use same page definition file of caller page t.co/9k6UIoYZ JDeveloper JDeveloper & ADF Master Detail Data presentation and CRUD Operations. Detail records in an Editable Popup. 
ADF 11g t.co/H8uudR0Y JDeveloper JDeveloper & ADF Entity Attribute Validation Rule (Business Rule) based on Master View Object Attribute Example ADF 11g t.co/1agxEQcZ oracletechnet Justin Kestelyn Webcast: Oracle WebLogic Server 12c Launch/Developer Deep-Dive (Dec. 1) t.co/OVBdGKzC JDeveloper JDeveloper & ADF How to render different node icons for different tree levels dlvr.it/wY2jL JDeveloper JDeveloper & ADF Query Component with ‘dynamic’ view criteria dlvr.it/wXlF1 JDeveloper JDeveloper & ADF How to play Flash .swf file in Oracle ADF application t.co/zaSONWAH Devoxx Devoxx Duke at the #Devoxx 2011 Noxx Party! pic.twitter.com/bVJWyu1Z brhubart Bob Rhubart Adam Leftik: JavaEE adoption continues to increase, reaching 40+ million downloads this year. #qconsf11 JDeveloper JDeveloper & ADF Free #ODTUG Seminar – #ADF Task Flows for Beginners – sign up today. www3.gotomeeting.com/register/13372… java Java New Project: OpenJFX j.mp/tI4k3s #javafx #openjdk #devoxx << JavaFX is open source! /via frankmunz Frank Munz WebLogic 12c launch event Dec 1st. t.co/jQKinBqN brhubart Bob Rhubart Spring to Java EE Migration | David Heffelfinger feedly.com/k/td8ccG odtug ODTUG Mark your calendars and register for our upcoming webinars: bit.ly/dWKG1C ADF Task Flows & Measuring Scalability & Performance w/TCP myfear Markus Eisele Anybody willing to take this question? Using #JavaMail with #Weblogic Server bit.ly/stJOET AMIS_Services AMIS Services 20-22 december #training #Oracle JHeadstart #11g, productief ontwikkelen met ADF. Schrijf je in op: amis.nl/trainingen/ora… AdamBien Adam Bien Stress Testing Java EE 6 Applications – Free Article In Free Java Magazine: In the November / December 2011 issu… bit.ly/vmzKkc java Java New Tech Article: Spring to #JavaEE Migration t.co/0EvdHNxb OracleBlogs OracleBlogs WebLogic Java record SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 t.co/Eu1b6ZE0 OracleBlogs OracleBlogs What Is JavaFX? ow.ly/1frb6I OTNArchBeat OTNArchBeat The openJDK Windows Binary Download | Adam Bien ow.ly/7fRiG wlscommunity WebLogic Community WebLogic – Java record – SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 glassfish GlassFish "youtube.com/java" blogs.oracle.com/theaquarium/en… OTNArchBeat OTNArchBeat Beta Testing Concludes: 1Z1-102 – "Oracle WebLogic Server 11g: System Administration I" (Oracle Certification) ow.ly/7fJCl wlscommunity WebLogic Community A deep dive in Oracle WebLogic! @ Contribute – November 29th, 2011 Kontich Belgium wp.me/p1LMIb-4u glassfish GlassFish Gartner’s Latest Enterprise Application Server Magic Quadrant – Oracle’s leadership t.co/aYDqipD8 OpenJDK OpenJDK Terrence Barr – Open sourcing of JavaFX: OpenJFX Project proposed – bit.ly/uKVnEl OpenJDK OpenJDK Maurizio Cimadamore – Testing overload resolution: bit.ly/vgXAbQ java Java Java User Groups Roundup, November 2011 : t.co/hea6vVnk /via @robilad << in German JavaSpotlight The Java Spotlight Java Spotlight Episode 54: Stuart Marks on the Coinification of JDK7 goo.gl/fb/3UXoM OTNArchBeat OTNArchBeat Article Series: Migrating Spring to Java EE 6 | Arun Gupta bit.ly/twUJtz glassfish GlassFish New Java EE 6 Hands-On lab, Devoxx-approved! bit.ly/vup5uE java Java Brian Goetz’s enthusiasm for Java is palpable! #devoxx interview adf_emg ADF EMG "ADF testing with a mock framework" – what is a mock framework? Visit the forum and see: groups.google.com/forum/#!topic/… java Java Taping a bunch of interviews today with Java experts at #devoxx. View on Parleys.com tomorrow. 
glassfish GlassFish New screencast to configure and run a cross-machine cluster using GlassFish 3.1.1 in < 7 mins faissalb.blogspot.com/2011/11/glassf… (via @bfaissal) glassfish GlassFish Oracle Contributor Agreements – New Home! bit.ly/tD2eLo OTNArchBeat OTNArchBeat Java Magazine – by and for the Java Community- inaugural issue bit.ly/tTv8UD OTNArchBeat OTNArchBeat The Heroes of Java: Michael Hüttermann | @MyFear bit.ly/rYYOFe javafx4you javafx4you Development with #JavaFX on #Linux j.mp/uOpe69 #not_for_the_faint_of_heart java Java Contribute Technical Questions for Java Experts at #devoxx bit.ly/up2cN0 netbeans NetBeans Team A simple REST service using #NetBeans 7, #Java Servlet, and #JAXB: t.co/pKkufsD8 AdamBien Adam Bien The most beautiful, and portable slide of the whole #jaxcon for "Die Hard Java EE 6"session checked-in: kenai.com/projects/javae… jaxlondon JAX London Mark Little’s (@nmcl) excellent keynote from #jaxlondon ‘Middleware Everywhere…’ is available in full – t.co/8vBmtDJ1 AdamBien Adam Bien Calculator sample from "Die Hard Java EE 6" #jaxcon session checked-in: t.co/0UqaULfg OTNArchBeat OTNArchBeat ADF Faces – a logic bomb in the order of bean instantiations | @ChrisCMuir bit.ly/vjqRaZ OracleBlogs OracleBlogs ODI 11g y JMS Queue de Weblogic ow.ly/1fzfQJ frankmunz Frank Munz Which WebLogic book do you recommend? Review of S. Alapati’s WebLogic 11g Administration Handbook. bit.ly/rP0RtW JDeveloper JDeveloper & ADF PageFlowScope with Unbounded Task Flows: the magic sauce for multi-browser-tab support in JDeveloper ADF applications dlvr.it/vNFgn OracleBlogs OracleBlogs 3 New ADF Insider Essential training videos published. ow.ly/1fz94q OracleBlogs OracleBlogs Weblogic Server 11gR1 PS2: Administration Essentials book and eBook t.co/ykzwIaqs OracleBlogs OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 wlscommunity WebLogic Community Oracle Weblogic Server 11gR1 PS2: Administration Essentials book and eBook andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Intern… andrejusb.blogspot.com/2011/11/stress… OracleBlogs OracleBlogs Frank Nimphius presenting a full day of Oracle ADF in Switzerland ow.ly/1fxU78 java Java #JavaEE and #GlassFish: #JavaOne11 Slides, Demos, Replays, Hands-on Labs t.co/tLM0ehrD OracleBlogs OracleBlogs weblogic.security.SecurityInitializationException: Authentication for user weblogic denied ow.ly/1fxmiu glassfish GlassFish The Last Migration – GlassFish Wiki : t.co/Dc5FT1SJ OTNArchBeat OTNArchBeat A Successful Year of @MiddlewareMagic t.co/amcGGTTk OracleWebLogic Oracle WebLogic Unbeatable Performance for your Cloud Applications with Exalogic, #OracleCoherence and #WebLogic. ow.ly/7lYKm OTNArchBeat OTNArchBeat Stress Testing Oracle ADF BC Applications – Passivation and Activation | @AndrejusB bit.ly/sASssL OTNArchBeat OTNArchBeat Review: "Oracle Weblogic Server 11gR1 PS2: Administration Essentials" by Michel Schildmeijer | @MyFear t.co/ll6ra0J9 OTNArchBeat OTNArchBeat GlassFish 3.1.2 themes and features | The Aquarium bit.ly/vVqr9r Andre_van_Dalen Andre van Dalen Masterclass: Advanced Oracle ADF 11g lnkd.in/M_45Pi AdamBien Adam Bien The "lunch" edition of RentACar is pushed into: kenai.com/projects/javae… #wjax AdamBien Adam Bien In munich, room munich at #wjax. Welcome to #javaee workshop. Gather your questions. 
15 minutes to go lucasjellema Lucas Jellema Review by Markus of Michel’s book: t.co/41U9wvOb In short: valuable for novice WLS users, maybe not so much for die-hard WLS admin. biemond Edwin Biemond “@myfear: [blog] #Review: "#Weblogic Server 11gR1 PS2: Administration Essentials" t.co/LsODcb3e” got the same conclusion on amazon glassfish GlassFish Practical advice for deploying Lift apps to GlassFish: bit.ly/t3KUml glassfish GlassFish The unbearable lightness of GlassFish t.co/v9307SEJ javafx4you javafx4you Building Java EE applications in JavaFX: JavaFX 2.0, FXML and Spring j.mp/tiMDUh andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Passiv… andrejusb.blogspot.com/2011/11/stress… wlscommunity WebLogic Community “@AMIS_Services: Follow @amis_services To Win a copy of SOA Suite 11g Handbook by @lucasjellema dld.bz/axD22 pls RT” excellent book! glassfish GlassFish GlassFish 3.1.2 themes and features bit.ly/uEc6uZ biemond Edwin Biemond Weblogic pre-sales exam was hard, you really need to know the versions, upgrade path and have a score above 80% monkchips James Governor The Rise and Fall and Rise of Java. JAX 2011 london keynote. how big data and the web are floating the boat. slidesha.re/u3Kzlo glassfish GlassFish Tab Sweep – Jersey, Hudson, GlassFish Hosting, GC’s compared, Spring to JavaEE, Modularity, … bit.ly/u9Cc30 oracletechnet Justin Kestelyn Oracle Tuxedo: A renewed acquaintance t.co/gp0mmf20 OTNArchBeat OTNArchBeat Oracle Enterprise Pack for Eclipse, OEPE 11.1.1.8 bit.ly/tC3eKp OracleBlogs OracleBlogs NetBeans HTML Editor and Groovy Editor in a Multiview Component (Part 2) ow.ly/1ftCeI myfear Markus Eisele [blog] #Oracle 2008 – 2011 in Gartners Magic Quadrant for Enterprise Application Servers t.co/2Bs1vgMZ myfear Markus Eisele [blog] #EclipseCon Europe – Java 7 in the Enterprise goo.gl/fb/r80df #ece2011 #java7 javafx4you javafx4you JavaFX 2.0 for Mac build b07 (developer preview) is available for download j.mp/vSwmBP Enjoy! #JavaFX #Mac OracleBlogs OracleBlogs A deep dive in Oracle WebLogic! @ Contribute November 29th, 2011 Kontich Belgium ow.ly/1fsEZs arungupta Arun Gupta #JavaEE7 slides from #jaxlondon and #jfall11 now available: slidesha.re/sh4iFq AdamBien Adam Bien Just checked-in the results of the #jaxlondon community night (somehow beer related): kenai.com/projects/javae… glassfish GlassFish GlassFish Podcast Episode #080 – User Stories, Part 3: Adam Bien and Sean Comerford (ESPN) blogs.oracle.com/glassfishpodca… glassfish GlassFish Story: t.co/jQPqihJb using GlassFish blogs.oracle.com/stories/entry/… "3000+ requests/sec" and more enterprisejava Java EE Mentions New blog post WebLogic deployment status checks for CI wp.me/pOOSs-F #weblogic #continuousintegration /vi… bit.ly/uZz0fk To become a member of the WebLogic Partner Community please first login at http://partner.oracle.com and then visit: http://www.oracle.com/partners/goto/wls-emea Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,OPN,Oracle,Jürgen Kress

    Read the article

  • CodePlex Daily Summary for Monday, March 08, 2010

CodePlex Daily Summary for Monday, March 08, 2010

New Projects

- 38fj4ncg2: 38fj4ncg2
- Ac#or: An actor framework written in Mono (C#). Makes it easy to write multithreaded programs with the actor model.
- Aerial Phone Book: An ASP app that allows more than one user to see the contacts in a phone book and add new contacts. This way a group of users can maintain a common phon...
- AmiBroker Plug-Ins with C#: Plug-ins for AmiBroker built with Microsoft .NET Framework and C#.
- AxUnit: AxUnit is a unit testing framework for Microsoft Dynamics Ax (X++). It's an extension to the SysTest framework provided with DAX4.0 and newer versi...
- Botola PHP Class: A PHP class that gives you information about the teams of the Moroccan football championship.
- Code examples, utilities and misc from Lars Wilhelmsen [MVP]: Misc. stuff from Lars Wilhelmsen.
- Codename T: Codename T is in the very basic stages of development. It should be ready for beta testing by the start of April.
- ComBrowser: combrowser
- Compact Unity: Compact Unity is a lightweight dependency injection container with support for constructor and property call injection, written in .NET Compact ...
- FAST for Sharepoint MOSS 2010 Query Tool: Tool to query FAST for Sharepoint and Sharepoint 2010 Enterprise Search. It utilizes the search web services to run your queries so you can test y...
- Icarus Scene Engine: Icarus Scene Engine is a cross-platform 3D eLearning, games and simulation engine, integrating open source APIs into a cohesive cross-platform solu...
- jQuery.cssLess: jQuery plugin that interprets and loads LESS css files. (http://lesscss.org).
- Katara Dental Phase II: Second phase of Kdpl.
- Lunar Phase Silverlight Gadget: Meet the moon phase, percent of illumination and corresponding zodiac sign from your desktop.
- Reflection Studio: Reflection Studio is a development tool that encapsulates all my work around reflection, performance and WPF. It allows injecting performance traces...
- RSNetty: RSNetty is a RuneScape private server programmed in the Java programming language.
- Simple WMV/ASF files muxer/demuxer: Simple WMV files muxer/demuxer implemented in C#/C++. It has a simple WPF-based UI and allows copy/replace operations on video, audio and script stre...
- sm: manager
- TFS Proxy Monitor: TFS Proxy Monitor. A WinForms application that allows an administrator to monitor TFS Server Proxy statistics remotely.
- umbracoSamplePackageCreator (beta): This is an early version of a simple package creator for Umbraco as a Visual Studio project. Currently with an Xslt extension and a user control. O...
- WatchersNET.TagCloud: 3D Flash TagCloud module for DotNetNuke.
- Writerous: A Plug-in For Windows Live Writer: This plug-in for Live Writer allows the user to create their post in Live Writer and then publish to Posterous.com.

New Releases

- .NET Extensions - Extension Methods Library: Release 2010.05: Added a common set of extension methods for IDataReader, DataRow and DataRowView to access field values in a type safe manner using type dedicated ...
- AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.1: This is just a demo plug-in which shows how you can write plug-ins for AmiBroker with fully managed code.
- AxUnit: Version 1: AxUnit lets you write unit test assertions in Dynamics Ax like this: assert.that(2, is.equalTo(2)); Installation instructions (Microsoft Dynamics ...
- BattLineSvc: V2: Fixed bug where sometimes the line would not show up, even with the 90 second boot-up delay. This was due to the window being created too early ...
- Botola PHP Class: Botola API: the PHP class.
- BugTracker.NET: BugTracker.NET 3.4.0: In the screen capture app, "Go to website" now goes to the bug you just created. In the screen capture app, fixed where the crosshairs weren't always to...
- Bulk Project Delete: Version 1.1.1: A minor fix to 1.1: fixes a problem that indicated some projects were not found on the server when they were in fact found. This problem only exist...
- C# Linear Hash Table: Linear Hash Table b3: Remove functionality added. Now IDictionary compliant, but most functions not yet tested.
- Code examples, utilities and misc from Lars Wilhelmsen [MVP]: LarsW.MexEdmxFixer 1.0: A quick hack to fix the Edmx files output by mex.exe (a tool in the SQL Modeling suite - November 2009 CTP) so that they can be opened in the desig...
- Code Snippet With Syntaxhighlighter Support for Windows Live Writer: Version 5.0.2: Minor update. Added brushes for F#, PowerShell and Erlang. Now a Windows Presentation Framework (WPF) application. ComponentFactory.Krypton.Toolki...
- Compact Unity: Compact Unity 1.0: Release.
- Compact Unity: CompactUnity 1.0: Release.
- FAST for Sharepoint MOSS 2010 Query Tool: Version 0.9: The tool is fully functioning. All of the cases for exceptions may not have been caught yet. I wanted to release a version to allow people to use...
- Fluent Ribbon Control Suite: Fluent Ribbon Control Suite RC (for .NET 4.0 RC): Build for .NET 4.0 RC. Includes Fluent.dll (with .pdb and .xml) and test application compiled with .NET 4.0 RC. BEWARE! Fluent for .NET 4.0 RC is...
- FluentNHibernate.Search: 0.2 Beta: Fixed: #7275 - Field Mapping without specifying "Name". Fixed: #7271 - StackOverflow exception while configuring embedded mappings. Fixed: ...
- InfoService: InfoService v1.5 Beta 9: InfoService Beta Release. Please note this is a BETA. It should be stable, but I can't guarantee that! So use it at your own risk. Please read Plug...
- jQuery.cssLess: jQuery.cssLess 0.2: Version supports variables, mixins and nested rules. TODO: lower scope variables and mixins should not delete higher scope variables and mixins ...
- Lunar Phase Silverlight Gadget: Lunar Phase: First public beta for Lunar Phase Silverlight Gadget. It's a stable release but it hasn't an auto update state. That will come with the final release ...
- MapWindow GIS: MapWindow 6.0 msi (March 7): This is an update that fixes a number of problems with the multi-point features, the M and Z features, as well as enabling multi-part creation using...
- Mews: Mews.Application V0.7: Installation instructions. New features: 15390, 15085. Fixed issues: 16173, 16552. This happens when the database maintenance process kicks in during sta...
- sELedit: sELedit v1.0a: Added: basic exception handlers (load/save/export). Added: List 57 support (no search and replace). Added: MYEN 1.3.1 Client -> CN 1.3.6 Server export...
- Sem.Sync: 2010-03-07 - End user client for Xing to Outlook: This client does include the binaries for syncing Xing contacts to Microsoft Outlook. It does contain only the binaries to sync from Xing to Outloo...
- Sem.Sync: 2010-03-07 - Synchronization Manager: This client does provide a more advanced (and more complex) GUI that allows you to select from two included templates (you can add your own, too) a...
- SharePoint Outlook Connector: Source Code for Version 1.2.3.2: Source Code for Version 1.2.3.2.
- SharePoint Video Player Web Part & SharePoint Video Library: Version 2.0.0: Release Notes: New: The new SharePoint Video Player release includes a SharePoint video template to create your own video library. Changes: The Shar...
- SilverSprite: SilverSprite 3.0 Alpha 2: These are the latest binaries for SilverSprite. The major changes for this release are that we are now using the XNA namespaces (no more #Iif SILVE...
- Simple WMV/ASF files muxer/demuxer: Initial release: Initial release.
- Starter Master Pages for SharePoint 2010: Starter Master Pages for SP2010 - RC: Release Candidate release of Starter Master Pages for SharePoint 2010 by Randy Drisgill http://blog.drisgill.com _starter.master - Starter Master ...
- Text Designer Outline Text Library: 11th minor release: New feature: Reflection!!
- ToolSuite.ValidationExpression: 01.00.01.002: Second release of the validation class; the assembly file is ready to use, the documentation is complete.
- Truecrafting: Truecrafting 0.51: Overhauled Truecrafting code: combined all engines into 1 mage engine, made the engine and artificial intelligence support any spec, and achieved a...
- WatchersNET.TagCloud: WatchersNET.TagCloud 01.00.00: First release.
- WCF Contrib: WCF Contrib v2.1 Mar07: This release is the final version of v2.1 Beta that was published on February 10th. Below you will find the changes that were made: Changes from v...
- WillStrohl.LightboxGallery Module for DotNetNuke: WillStrohl.LightboxGallery v1.02.00: This version of the Lightbox Gallery Module adds the following features: New Lightbox provider: Fancybox. Thumbnails generated keeping their aspec...
- Writerous: A Plug-in For Windows Live Writer: Writerous v1.0: This is the first release of Writerous.
- WSDLGenerator: WSDLGenerator 0.0.0.5: Use updated CommandLineParser.dll. Code uses 'ServiceDescriptionReflector' instead of custom code. Added option to support SharePoint 2007 com...
- Xpress - ASP.NET MVC 个人博客程序: xpress2.1.0.beta.bin: An upgraded version of the original DsJian 1.0, renamed to xpress; this is the official release.
- YSCommander: Version 1.0.1.0: Fixed bug: first start with a non-existing data file.

Most Popular Projects

- MetaSharp
- WBFS Manager
- Rawr
- AJAX Control Toolkit
- Microsoft SQL Server Product Samples: Database
- Silverlight Toolkit
- Windows Presentation Foundation (WPF)
- ASP.NET
- Microsoft SQL Server Community & Samples
- Image Resizer Powertoy Clone for Windows

Most Active Projects

- Umbraco CMS
- Rawr
- SDS: Scientific DataSet library and tools
- BlogEngine.NET
- jQuery Library for SharePoint Web Services
- patterns & practices – Enterprise Library
- Ionics Isapi Rewrite Filter
- Farseer Physics Engine
- Fluent Assertions
- Fasterflect - A Fast and Simple Reflection API

    Read the article

  • Data Profiling without SSIS

    Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I've encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle?

With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, in any case most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage: they failed to install or just did not run. Others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website:

DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL.

As quoted above it claims to support profiling, validating and comparing data, but I didn't really get past the profiling functions, so I won't comment on the other two. The profiling, whilst not perfect, certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, I found it overall quite easy to use, and performance was good. I could see it struggling at times, but for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seem a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn't do it for me, but copy and paste with a bit of Excel magic was sufficient.

One initial point for me personally is that I have had limited exposure to things of the Java persuasion, and whilst I normally get by fine, sometimes the simplest things can throw me. For example, installing a JDBC driver: why do I have to copy files to make it all work? Has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I've written a quick start guide that details every step required. Steps 1-5 are the key ones; the rest is really an excuse for some screenshots to show you the tool.

Quick Start Guide

Step 1 – Download DataCleaner. Choose the Microsoft Windows zipped exe option; I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location.

Step 2 – Download Java. If you try and run datacleaner.exe without Java it will warn you, and then open your default browser and take you to the Java download site. Follow the installation instructions from there; normally you just click Download Java a couple of times and you're done.

Step 3 – Download the Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won't have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer (we are in the Java world here), but run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards.

Step 4 – If you wish to use Windows Authentication to connect to your SQL Server then first we need to copy a file so that DataCleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the DataCleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the DataCleaner extract folder as detailed in this DataCleaner wiki topic, but I find copying the file simpler.

Step 5 – Now let's run DataCleaner: just run datacleaner.exe from the extract folder you chose in Step 1.

Step 6 – Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings.

Step 7 – In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option.

Step 8 – Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar.

Step 9 – Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button.

Step 10 – You should be back at the Settings dialog, with the list of drivers now including SQL Server. Just click Save Settings to persist all your hard work.

Step 11 – Now we can start to profile some data. In the main DataCleaner window click New Task, and then Profile from the task window.

Step 12 – In the Profile window click Open Database.

Step 13 – Now choose the SQL Server connection string option. Selecting a connection string gives us a template like

jdbc:sqlserver://<hostname>:1433;databaseName=<database>

but obviously it requires some details to be entered, for example

jdbc:sqlserver://localhost:1433;databaseName=SQLBits

This will connect to the database called SQLBits on my local machine. The port may also have to be changed, such as when you have multiple instances of SQL Server running. If using SQL Server Authentication, enter a username and password as required and then click Connect to database. You can use Windows Authentication instead; just add integratedSecurity=true to the end of your connection string, e.g.

jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true

If you didn't complete Step 4 above you will need to do so now and restart DataCleaner before it will work.

Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screenshot, the bottom of the dialog includes partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details. You'll see a sample connection in the file already; just add yours following the same pattern, e.g.

<!-- Darren's Named Connections -->
<bean class="dk.eobjects.datacleaner.gui.model.NamedConnection">
    <property name="name" value="SQLBits Local Connection" />
    <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
    <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" />
    <property name="tableTypes">
        <list>
            <value>TABLE</value>
            <value>VIEW</value>
        </list>
    </property>
</bean>

Step 14 – Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile, choose some profiling options, and finally click Run profiling. You can see below a sample output for three of the most common profiles.

I hope this has given you a taster for DataCleaner, and that it helps you get up and running pretty quickly.
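If you only need a handful of metrics and don't want another tool at all, the "basic SQL queries" alternative mentioned above can be as simple as the following sketch (my own addition, not from the original post; dbo.Customers and the Country column are hypothetical names, and the syntax shown is T-SQL):

-- Minimal single-column profile: row count, cardinality, nulls, value range
SELECT
    COUNT(*)                                         AS TotalRows,
    COUNT(DISTINCT Country)                          AS DistinctValues,
    SUM(CASE WHEN Country IS NULL THEN 1 ELSE 0 END) AS NullValues,
    MIN(Country)                                     AS MinValue,
    MAX(Country)                                     AS MaxValue
FROM dbo.Customers;

Repeating this per column is tedious, which is exactly the gap a tool like DataCleaner fills, but it works against SQL Server 2000/2005 and (with trivial syntax tweaks) Oracle alike.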

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
    jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are transparent to existing application usage of jQuery. After spending some time last week with a few of my projects, going through them with a specific eye for jQuery failures, I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (which are supposed to be in sync, but with 1.10 removing support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions has caused me quite a bit of update trouble: the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live() and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement.

jQuery.live()

jQuery.live() was introduced a long time ago to simplify handling events on matched elements that currently exist in the document and those that are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using metadata attached to a parent-level element to check and see if the original target event element matches the selected elements (for more info see Elijah Manor's comment below).

An Example

Assume a list of items like the following in HTML, and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list more items are loaded dynamically and added to the list.

<div id="PostItemContainer" class="scrollbox">
    <div class="postitem" data-id="4z6qhomm">
        <div class="post-icon"></div>
        <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div>
        <div class="postitemprice rightalign">$ 3,500 O.B.O.</div>
        <div class="smalltext leftalign">Jun. 07 @ 1:06am</div>
        <div class="post-byline">- Vehicles - Automobiles</div>
    </div>
    <div class="postitem" data-id="2jtvuu17">
        <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div>
        <div class="postitemprice rightalign">$950</div>
        <div class="smalltext leftalign">Jun. 07 @ 12:29am</div>
        <div class="post-byline">- Vehicles - Automobiles</div>
    </div>
    …
</div>

With the jQuery.live() function you could easily select elements and hook up a click handler like this:

$(".postitem").live("click", function() {...});

Simple and perfectly readable. The behavior of the .live() handler was generally the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods.

Re-writing with jQuery.on()

With .live() removed in 1.9 and later, we have to re-write the .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high-level event functions like .click(), .change() etc. that jQuery exposes.

Using jQuery.on() however is not a one-to-one replacement for the .live() function. While .on() can handle events directly and use the same syntax as .live() did, you'll find that if you simply switch out .live() with .on(), events on not-yet-existing elements will not fire. IOW, the key feature of .live() is not working. You can use .on() to get the desired effect, but you have to change the syntax to explicitly handle the event you're interested in on the container, and then provide a filter selector to specify which child elements you are actually interested in for handling the event. Sounds more complicated than it is, and it's easier to see with an example. For the list above, hooking .postitem clicks using jQuery.on() looks like this:

$("#PostItemContainer").on("click", ".postitem", function() {...});

You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do, including document, but I tend to use the container closest to the elements I actually want to handle the events on, to minimize the event bubbling needed to capture the event. With this code I get the same behavior as with .live(): as new .postitem elements are added, the click events are always available. Sweet. Here's the full event signature for the .on() function:

.on( events [, selector ] [, data ], handler(eventObject) )

Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector is required if you want life-like - uh, .live()-like - behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit about what your parent container for trapping events is.

.on() is good Practice even for ordinary static Element Lists

As a side note, it's good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and match it to the originating element. In the early days of jQuery I used to manually build handlers that did this, drilling from the event object into the originalTarget to determine if it was a matching element. With later versions of jQuery, the various event functions essentially provide this functionality out of the box with functions like .on() and .delegate().

All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic.
And maybe along the way I've helped one or two of you as well to clean up your .live() code…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in jQuery
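For quick reference, here is a minimal before/after sketch of the migration (my own addition, reusing the post's markup; the handler bodies are hypothetical):

// jQuery 1.8 and earlier: also fires for .postitem elements added later
$(".postitem").live("click", function () {
    console.log("clicked item", $(this).data("id"));
});

// jQuery 1.9+: delegate from a stable ancestor; the selector argument
// is what restores the .live()-like behavior for future elements
$("#PostItemContainer").on("click", ".postitem", function () {
    console.log("clicked item", $(this).data("id"));
});

// Without the selector argument, .on() binds directly (like .click()),
// so elements added after this line will NOT trigger the handler
$(".postitem").on("click", function () { /* direct, non-delegated */ });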

    Read the article

  • A Nondeterministic Engine written in VB.NET 2010

    - by neil chen
    While reading SICP (Structure and Interpretation of Computer Programs) recently, I became very interested in the concept of a "nondeterministic algorithm". According to Wikipedia:

In computer science, a nondeterministic algorithm is an algorithm with one or more choice points where multiple different continuations are possible, without any specification of which one will be taken.

For example, here is a puzzle from SICP:

Baker, Cooper, Fletcher, Miller, and Smith live on different floors of an apartment house that contains only five floors. Baker does not live on the top floor. Cooper does not live on the bottom floor. Fletcher does not live on either the top or the bottom floor. Miller lives on a higher floor than does Cooper. Smith does not live on a floor adjacent to Fletcher's. Fletcher does not live on a floor adjacent to Cooper's. Where does everyone live?

After reading this I decided to build a simple nondeterministic calculation engine with .NET. The rough idea is that we use an iterator to track each set of possible values for a parameter, implement some logic inside the engine to automate the state machine - try one combination of the values, test it, then move on to the next - and use a backtracking algorithm to go back when we run out of choices at some point. Following is the core code of the engine itself:

Public Class NonDeterministicEngine
    Private _paramDict As New List(Of Tuple(Of String, IEnumerator))
    'Private _predicateDict As New List(Of Tuple(Of Func(Of Object, Boolean), IEnumerable(Of String)))
    Private _predicateDict As New List(Of Tuple(Of Object, IList(Of String)))

    Public Sub AddParam(ByVal name As String, ByVal values As IEnumerable)
        _paramDict.Add(New Tuple(Of String, IEnumerator)(name, values.GetEnumerator()))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(1, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(2, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(3, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(4, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(5, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(6, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(7, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String))
        CheckParamCount(8, paramNames)
        _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames))
    End Sub

    Sub CheckParamCount(ByVal count As Integer, ByVal paramNames As IList(Of String))
        If paramNames.Count <> count Then
            Throw New Exception("Parameter count does not match.")
        End If
    End Sub

    Public Property IterationOver As Boolean

    Private _firstTime As Boolean = True

    ' The current combination of parameter values, keyed by parameter name.
    Public ReadOnly Property Current As Dictionary(Of String, Object)
        Get
            If IterationOver Then
                Return Nothing
            Else
                Dim _nextResult = New Dictionary(Of String, Object)
                For Each item In _paramDict
                    Dim iter = item.Item2
                    _nextResult.Add(item.Item1, iter.Current)
                Next
                Return _nextResult
            End If
        End Get
    End Property

    ' Advance to the next combination; when an iterator is exhausted,
    ' backtrack to the one before it and reset everything after it.
    Function MoveNext() As Boolean
        If IterationOver Then
            Return False
        End If
        If _firstTime Then
            For Each item In _paramDict
                Dim iter = item.Item2
                iter.MoveNext()
            Next
            _firstTime = False
            Return True
        Else
            Dim canMoveNext = False
            Dim iterIndex = _paramDict.Count - 1
            canMoveNext = _paramDict(iterIndex).Item2.MoveNext()
            If canMoveNext Then
                Return True
            End If
            Do While Not canMoveNext
                iterIndex = iterIndex - 1
                If iterIndex = -1 Then
                    IterationOver = True
                    Return False
                End If
                canMoveNext = _paramDict(iterIndex).Item2.MoveNext()
                If canMoveNext Then
                    For i = iterIndex + 1 To _paramDict.Count - 1
                        Dim iter = _paramDict(i).Item2
                        iter.Reset()
                        iter.MoveNext()
                    Next
                    Return True
                End If
            Loop
        End If
    End Function

    ' Keep advancing until a combination satisfies every registered predicate.
    Function GetNextResult() As Dictionary(Of String, Object)
        While MoveNext()
            Dim result = Current
            If Satisfy(result) Then
                Return result
            End If
        End While
        Return Nothing
    End Function

    Function Satisfy(ByVal result As Dictionary(Of String, Object)) As Boolean
        For Each item In _predicateDict
            Dim pred = item.Item1
            Select Case item.Item2.Count
                Case 1
                    Dim p1 = DirectCast(pred, Func(Of Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    If Not p1(v1) Then
                        Return False
                    End If
                Case 2
                    Dim p2 = DirectCast(pred, Func(Of Object, Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    Dim v2 = result(item.Item2(1))
                    If Not p2(v1, v2) Then
                        Return False
                    End If
                Case 3
                    Dim p3 = DirectCast(pred, Func(Of Object, Object, Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    Dim v2 = result(item.Item2(1))
                    Dim v3 = result(item.Item2(2))
                    If Not p3(v1, v2, v3) Then
                        Return False
                    End If
                Case 4
                    Dim p4 = DirectCast(pred, Func(Of Object, Object, Object, Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    Dim v2 = result(item.Item2(1))
                    Dim v3 = result(item.Item2(2))
                    Dim v4 = result(item.Item2(3))
                    If Not p4(v1, v2, v3, v4) Then
                        Return False
                    End If
                Case 5
                    Dim p5 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    Dim v2 = result(item.Item2(1))
                    Dim v3 = result(item.Item2(2))
                    Dim v4 = result(item.Item2(3))
                    Dim v5 = result(item.Item2(4))
                    If Not p5(v1, v2, v3, v4, v5) Then
                        Return False
                    End If
                Case 6
                    Dim p6 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    Dim v2 = result(item.Item2(1))
                    Dim v3 = result(item.Item2(2))
                    Dim v4 = result(item.Item2(3))
                    Dim v5 = result(item.Item2(4))
                    Dim v6 = result(item.Item2(5))
                    If Not p6(v1, v2, v3, v4, v5, v6) Then
                        Return False
                    End If
                Case 7
                    Dim p7 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Object, Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    Dim v2 = result(item.Item2(1))
                    Dim v3 = result(item.Item2(2))
                    Dim v4 = result(item.Item2(3))
                    Dim v5 = result(item.Item2(4))
                    Dim v6 = result(item.Item2(5))
                    Dim v7 = result(item.Item2(6))
                    If Not p7(v1, v2, v3, v4, v5, v6, v7) Then
                        Return False
                    End If
                Case 8
                    Dim p8 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Object, Object, Object, Boolean))
                    Dim v1 = result(item.Item2(0))
                    Dim v2 = result(item.Item2(1))
                    Dim v3 = result(item.Item2(2))
                    Dim v4 = result(item.Item2(3))
                    Dim v5 = result(item.Item2(4))
                    Dim v6 = result(item.Item2(5))
                    Dim v7 = result(item.Item2(6))
                    Dim v8 = result(item.Item2(7))
                    If Not p8(v1, v2, v3, v4, v5, v6, v7, v8) Then
                        Return False
                    End If
                Case Else
                    Throw New NotSupportedException
            End Select
        Next
        Return True
    End Function
End Class

And now we can use the engine to solve the problem we mentioned above:

Sub Test2()
    Dim engine = New NonDeterministicEngine()
    engine.AddParam("baker", {1, 2, 3, 4, 5})
    engine.AddParam("cooper", {1, 2, 3, 4, 5})
    engine.AddParam("fletcher", {1, 2, 3, 4, 5})
    engine.AddParam("miller", {1, 2, 3, 4, 5})
    engine.AddParam("smith", {1, 2, 3, 4, 5})
    engine.AddRequire(Function(baker) As Boolean
                          Return baker <> 5
                      End Function, {"baker"})
    engine.AddRequire(Function(cooper) As Boolean
                          Return cooper <> 1
                      End Function, {"cooper"})
    engine.AddRequire(Function(fletcher) As Boolean
                          Return fletcher <> 1 And fletcher <> 5
                      End Function, {"fletcher"})
    engine.AddRequire(Function(miller, cooper) As Boolean
                          'Return miller = cooper + 1
                          Return miller > cooper
                      End Function, {"miller", "cooper"})
    engine.AddRequire(Function(smith, fletcher) As Boolean
                          Return smith <> fletcher + 1 And smith <> fletcher - 1
                      End Function, {"smith", "fletcher"})
    engine.AddRequire(Function(fletcher, cooper) As Boolean
                          Return fletcher <> cooper + 1 And fletcher <> cooper - 1
                      End Function, {"fletcher", "cooper"})
    engine.AddRequire(Function(a, b, c, d, e) As Boolean
                          Return a <> b And a <> c And a <> d And a <> e And b <> c And b <> d And b <> e And c <> d And c <> e And d <> e
                      End Function, {"baker", "cooper", "fletcher", "miller", "smith"})
    Dim result = engine.GetNextResult()
    While Not result Is Nothing
        Console.WriteLine(String.Format("baker: {0}, cooper: {1}, fletcher: {2}, miller: {3}, smith: {4}",
                                        result("baker"), result("cooper"), result("fletcher"), result("miller"), result("smith")))
        result = engine.GetNextResult()
    End While
    Console.WriteLine("Calculation ended.")
End Sub

Also, this engine can solve the classic 8-queens puzzle and find all 92 solutions for me.

Sub Test3()
    ' The 8-Queens problem.
    Dim engine = New NonDeterministicEngine()
    ' Let's assume that a - h represent the queens in rows 1 to 8; then we just need to find the column number for each of them.
    engine.AddParam("a", {1, 2, 3, 4, 5, 6, 7, 8})
    engine.AddParam("b", {1, 2, 3, 4, 5, 6, 7, 8})
    engine.AddParam("c", {1, 2, 3, 4, 5, 6, 7, 8})
    engine.AddParam("d", {1, 2, 3, 4, 5, 6, 7, 8})
    engine.AddParam("e", {1, 2, 3, 4, 5, 6, 7, 8})
    engine.AddParam("f", {1, 2, 3, 4, 5, 6, 7, 8})
    engine.AddParam("g", {1, 2, 3, 4, 5, 6, 7, 8})
    engine.AddParam("h", {1, 2, 3, 4, 5, 6, 7, 8})
    Dim NotInTheSameDiagonalLine = Function(cols As IList) As Boolean
                                       For i = 0 To cols.Count - 2
                                           For j = i + 1 To cols.Count - 1
                                               If j - i = Math.Abs(cols(j) - cols(i)) Then
                                                   Return False
                                               End If
                                           Next
                                       Next
                                       Return True
                                   End Function
    engine.AddRequire(Function(a, b, c, d, e, f, g, h) As Boolean
                          Return a <> b AndAlso a <> c AndAlso a <> d AndAlso a <> e AndAlso a <> f AndAlso a <> g AndAlso a <> h AndAlso
                                 b <> c AndAlso b <> d AndAlso b <> e AndAlso b <> f AndAlso b <> g AndAlso b <> h AndAlso
                                 c <> d AndAlso c <> e AndAlso c <> f AndAlso c <> g AndAlso c <> h AndAlso
                                 d <> e AndAlso d <> f AndAlso d <> g AndAlso d <> h AndAlso
                                 e <> f AndAlso e <> g AndAlso e <> h AndAlso
                                 f <> g AndAlso f <> h AndAlso
                                 g <> h AndAlso
                                 NotInTheSameDiagonalLine({a, b, c, d, e, f, g, h})
                      End Function, {"a", "b", "c", "d", "e", "f", "g", "h"})
    Dim result = engine.GetNextResult()
    While Not result Is Nothing
        Console.WriteLine("(1,{0}), (2,{1}), (3,{2}), (4,{3}), (5,{4}), (6,{5}), (7,{6}), (8,{7})",
                          result("a"), result("b"), result("c"), result("d"),
                          result("e"), result("f"), result("g"), result("h"))
        result = engine.GetNextResult()
    End While
    Console.WriteLine("Calculation ended.")
End Sub

(Chinese version of the post: http://www.cnblogs.com/RChen/archive/2010/05/17/1737587.html)

Cheers,

    Read the article

  • Error in running script [closed]

    - by SWEngineer
    I'm trying to run heathusf_v1.1.0.tar.gz found here. I installed tcsh to make build_heathusf work, but when I run ./build_heathusf I get the following (on a Fedora Linux system, from a terminal):

$ ./build_heathusf
Compiling programs to build a library of image processing functions.
convexpolyscan.c: In function ‘cdelete’:
convexpolyscan.c:346:5: warning: incompatible implicit declaration of built-in function ‘bcopy’ [enabled by default]
myalloc.c: In function ‘mycalloc’:
myalloc.c:68:16: error: invalid storage class for function ‘store_link’
myalloc.c: In function ‘mymalloc’:
myalloc.c:101:16: error: invalid storage class for function ‘store_link’
myalloc.c: In function ‘myfree’:
myalloc.c:129:27: error: invalid storage class for function ‘find_link’
myalloc.c:131:12: warning: assignment makes pointer from integer without a cast [enabled by default]
myalloc.c: At top level:
myalloc.c:150:13: warning: conflicting types for ‘store_link’ [enabled by default]
myalloc.c:150:13: error: static declaration of ‘store_link’ follows non-static declaration
myalloc.c:91:4: note: previous implicit declaration of ‘store_link’ was here
myalloc.c:164:24: error: conflicting types for ‘find_link’
myalloc.c:131:14: note: previous implicit declaration of ‘find_link’ was here
Building the mammogram resizing program.
gcc -O2 -I. -I../common mkimage.o -o mkimage -L../common -lmammo -lm
../common/libmammo.a(aggregate.o): In function `aggregate': aggregate.c:(.text+0x7fa): undefined reference to `mycalloc' aggregate.c:(.text+0x81c): undefined reference to `mycalloc' aggregate.c:(.text+0x868): undefined reference to `mycalloc' ../common/libmammo.a(aggregate.o): In function `aggregate_median': aggregate.c:(.text+0xbc5): undefined reference to `mymalloc' aggregate.c:(.text+0xbfb): undefined reference to `mycalloc' aggregate.c:(.text+0xc3c): undefined reference to `mycalloc' ../common/libmammo.a(aggregate.o): In function `aggregate': aggregate.c:(.text+0x9b5): undefined reference to `myfree' ../common/libmammo.a(aggregate.o): In function `aggregate_median': aggregate.c:(.text+0xd85): undefined reference to `myfree' ../common/libmammo.a(optical_density.o): In function `linear_optical_density': optical_density.c:(.text+0x29e): undefined reference to `mymalloc' optical_density.c:(.text+0x342): undefined reference to `mycalloc' optical_density.c:(.text+0x383): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `log10_optical_density': optical_density.c:(.text+0x693): undefined reference to `mymalloc' optical_density.c:(.text+0x74f): undefined reference to `mycalloc' optical_density.c:(.text+0x790): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut': optical_density.c:(.text+0xb2e): undefined reference to `mymalloc' optical_density.c:(.text+0xb87): undefined reference to `mycalloc' optical_density.c:(.text+0xbc6): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `linear_optical_density': optical_density.c:(.text+0x4d9): undefined reference to `myfree' ../common/libmammo.a(optical_density.o): In function `log10_optical_density': optical_density.c:(.text+0x8f1): undefined reference to `myfree' ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut': optical_density.c:(.text+0xd0d): undefined reference to `myfree' ../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image': virtual_image.c:(.text+0x3dc6): undefined reference to `myfree'
virtual_image.c:(.text+0x3dd7): undefined reference to `myfree' ../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow ../common/libmammo.a(virtual_image.o): In function `allocate_cached_image': virtual_image.c:(.text+0x4233): undefined reference to `mycalloc' virtual_image.c:(.text+0x4253): undefined reference to `mymalloc' virtual_image.c:(.text+0x4275): undefined reference to `mycalloc' virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc' virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc' virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc' virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc' virtual_image.c:(.text+0x4af4): undefined reference to `myfree' collect2: error: ld returned 1 exit status make: *** [mkimage] Error 1 Building the breast segmentation program. gcc -O2 -I. -I../common breastsegment.o segment.o -o breastsegment -L../common -lmammo -lm breastsegment.o: In function `render_segmentation_sketch': breastsegment.c:(.text+0x43): undefined reference to `mycalloc' breastsegment.c:(.text+0x58): undefined reference to `mycalloc' breastsegment.c:(.text+0x12f): undefined reference to `mycalloc' breastsegment.c:(.text+0x1b9): undefined reference to `myfree' breastsegment.c:(.text+0x1c6): undefined reference to `myfree' breastsegment.c:(.text+0x1e1): undefined reference to `myfree' segment.o: In function `find_center': segment.c:(.text+0x53): undefined reference to `mycalloc' segment.c:(.text+0x71): undefined reference to `mycalloc' segment.c:(.text+0x387): undefined reference to `myfree' segment.o: In function `bordercode': segment.c:(.text+0x4ac): undefined reference to `mycalloc' segment.c:(.text+0x546): undefined reference to `mycalloc' segment.c:(.text+0x651): undefined reference to `mycalloc' segment.c:(.text+0x691): undefined reference to `myfree' segment.o: In function `estimate_tissue_image': segment.c:(.text+0x10d4): undefined reference to `mycalloc' segment.c:(.text+0x14da): undefined reference to `mycalloc' segment.c:(.text+0x1698): undefined reference to `mycalloc' segment.c:(.text+0x1834): undefined reference to `mycalloc' segment.c:(.text+0x1850): undefined reference to `mycalloc' segment.o:segment.c:(.text+0x186a): more undefined references to `mycalloc' follow segment.o: In function `estimate_tissue_image': segment.c:(.text+0x1bbc): undefined reference to `myfree' segment.c:(.text+0x1c4a): undefined reference to `mycalloc' segment.c:(.text+0x1c7c): undefined reference to `mycalloc' segment.c:(.text+0x1d8e): undefined reference to `myfree' segment.c:(.text+0x1d9b): undefined reference to `myfree' segment.c:(.text+0x1da8): undefined reference to `myfree' segment.c:(.text+0x1dba): undefined reference to `myfree' segment.c:(.text+0x1dc9): undefined reference to `myfree' segment.o:segment.c:(.text+0x1dd8): more undefined references to `myfree' follow segment.o: In function `estimate_tissue_image': segment.c:(.text+0x20bf): undefined reference to `mycalloc' segment.o: In function `segment_breast': segment.c:(.text+0x24cd): undefined reference to `mycalloc' segment.o: In function `find_center': segment.c:(.text+0x3a4): undefined reference to `myfree' segment.o: In function `bordercode': segment.c:(.text+0x6ac): undefined reference to `myfree' ../common/libmammo.a(aggregate.o): In function `aggregate': aggregate.c:(.text+0x7fa): undefined reference to `mycalloc' aggregate.c:(.text+0x81c): undefined reference to `mycalloc' aggregate.c:(.text+0x868): undefined 
reference to `mycalloc' ../common/libmammo.a(aggregate.o): In function `aggregate_median': aggregate.c:(.text+0xbc5): undefined reference to `mymalloc' aggregate.c:(.text+0xbfb): undefined reference to `mycalloc' aggregate.c:(.text+0xc3c): undefined reference to `mycalloc' ../common/libmammo.a(aggregate.o): In function `aggregate': aggregate.c:(.text+0x9b5): undefined reference to `myfree' ../common/libmammo.a(aggregate.o): In function `aggregate_median': aggregate.c:(.text+0xd85): undefined reference to `myfree' ../common/libmammo.a(cc_label.o): In function `cc_label': cc_label.c:(.text+0x20c): undefined reference to `mycalloc' cc_label.c:(.text+0x6c2): undefined reference to `mycalloc' cc_label.c:(.text+0xbaa): undefined reference to `myfree' ../common/libmammo.a(cc_label.o): In function `cc_label_0bkgd': cc_label.c:(.text+0xe17): undefined reference to `mycalloc' cc_label.c:(.text+0x12d7): undefined reference to `mycalloc' cc_label.c:(.text+0x17e7): undefined reference to `myfree' ../common/libmammo.a(cc_label.o): In function `cc_relabel_by_intensity': cc_label.c:(.text+0x18c5): undefined reference to `mycalloc' ../common/libmammo.a(cc_label.o): In function `cc_label_4connect': cc_label.c:(.text+0x1cf0): undefined reference to `mycalloc' cc_label.c:(.text+0x2195): undefined reference to `mycalloc' cc_label.c:(.text+0x26a4): undefined reference to `myfree' ../common/libmammo.a(cc_label.o): In function `cc_relabel_by_intensity': cc_label.c:(.text+0x1b06): undefined reference to `myfree' ../common/libmammo.a(convexpolyscan.o): In function `polyscan_coords': convexpolyscan.c:(.text+0x6f0): undefined reference to `mycalloc' convexpolyscan.c:(.text+0x75f): undefined reference to `mycalloc' convexpolyscan.c:(.text+0x7ab): undefined reference to `myfree' convexpolyscan.c:(.text+0x7b8): undefined reference to `myfree' ../common/libmammo.a(convexpolyscan.o): In function `polyscan_poly_cacheim': convexpolyscan.c:(.text+0x805): undefined reference to `mycalloc' convexpolyscan.c:(.text+0x894): undefined reference to `myfree' ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file': mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc' mikesfileio.c:(.text+0x205): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `linear_optical_density': optical_density.c:(.text+0x29e): undefined reference to `mymalloc' optical_density.c:(.text+0x342): undefined reference to `mycalloc' optical_density.c:(.text+0x383): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `log10_optical_density': optical_density.c:(.text+0x693): undefined reference to `mymalloc' optical_density.c:(.text+0x74f): undefined reference to `mycalloc' optical_density.c:(.text+0x790): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut': optical_density.c:(.text+0xb2e): undefined reference to `mymalloc' optical_density.c:(.text+0xb87): undefined reference to `mycalloc' optical_density.c:(.text+0xbc6): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `linear_optical_density': optical_density.c:(.text+0x4d9): undefined reference to `myfree' ../common/libmammo.a(optical_density.o): In function `log10_optical_density': optical_density.c:(.text+0x8f1): undefined reference to `myfree' ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut': optical_density.c:(.text+0xd0d): undefined reference to `myfree' ../common/libmammo.a(virtual_image.o): 
In function `deallocate_cached_image': virtual_image.c:(.text+0x3dc6): undefined reference to `myfree' virtual_image.c:(.text+0x3dd7): undefined reference to `myfree' ../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow ../common/libmammo.a(virtual_image.o): In function `allocate_cached_image': virtual_image.c:(.text+0x4233): undefined reference to `mycalloc' virtual_image.c:(.text+0x4253): undefined reference to `mymalloc' virtual_image.c:(.text+0x4275): undefined reference to `mycalloc' virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc' virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc' virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc' virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc' virtual_image.c:(.text+0x4af4): undefined reference to `myfree' collect2: error: ld returned 1 exit status make: *** [breastsegment] Error 1 Building the mass feature generation program. gcc -O2 -I. -I../common afumfeature.o -o afumfeature -L../common -lmammo -lm afumfeature.o: In function `afum_process': afumfeature.c:(.text+0xd80): undefined reference to `mycalloc' afumfeature.c:(.text+0xd9c): undefined reference to `mycalloc' afumfeature.c:(.text+0xe80): undefined reference to `mycalloc' afumfeature.c:(.text+0x11f8): undefined reference to `myfree' afumfeature.c:(.text+0x1207): undefined reference to `myfree' afumfeature.c:(.text+0x1214): undefined reference to `myfree' ../common/libmammo.a(aggregate.o): In function `aggregate': aggregate.c:(.text+0x7fa): undefined reference to `mycalloc' aggregate.c:(.text+0x81c): undefined reference to `mycalloc' aggregate.c:(.text+0x868): undefined reference to `mycalloc' ../common/libmammo.a(aggregate.o): In function `aggregate_median': aggregate.c:(.text+0xbc5): undefined reference to `mymalloc' aggregate.c:(.text+0xbfb): undefined reference to `mycalloc' aggregate.c:(.text+0xc3c): undefined reference to `mycalloc' ../common/libmammo.a(aggregate.o): In function `aggregate': aggregate.c:(.text+0x9b5): undefined reference to `myfree' ../common/libmammo.a(aggregate.o): In function `aggregate_median': aggregate.c:(.text+0xd85): undefined reference to `myfree' ../common/libmammo.a(convexpolyscan.o): In function `polyscan_coords': convexpolyscan.c:(.text+0x6f0): undefined reference to `mycalloc' convexpolyscan.c:(.text+0x75f): undefined reference to `mycalloc' convexpolyscan.c:(.text+0x7ab): undefined reference to `myfree' convexpolyscan.c:(.text+0x7b8): undefined reference to `myfree' ../common/libmammo.a(convexpolyscan.o): In function `polyscan_poly_cacheim': convexpolyscan.c:(.text+0x805): undefined reference to `mycalloc' convexpolyscan.c:(.text+0x894): undefined reference to `myfree' ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file': mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc' mikesfileio.c:(.text+0x205): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `linear_optical_density': optical_density.c:(.text+0x29e): undefined reference to `mymalloc' optical_density.c:(.text+0x342): undefined reference to `mycalloc' optical_density.c:(.text+0x383): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `log10_optical_density': optical_density.c:(.text+0x693): undefined reference to `mymalloc' optical_density.c:(.text+0x74f): undefined reference to `mycalloc' optical_density.c:(.text+0x790): undefined reference to `mycalloc' 
../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut': optical_density.c:(.text+0xb2e): undefined reference to `mymalloc' optical_density.c:(.text+0xb87): undefined reference to `mycalloc' optical_density.c:(.text+0xbc6): undefined reference to `mycalloc' ../common/libmammo.a(optical_density.o): In function `linear_optical_density': optical_density.c:(.text+0x4d9): undefined reference to `myfree' ../common/libmammo.a(optical_density.o): In function `log10_optical_density': optical_density.c:(.text+0x8f1): undefined reference to `myfree' ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut': optical_density.c:(.text+0xd0d): undefined reference to `myfree' ../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image': virtual_image.c:(.text+0x3dc6): undefined reference to `myfree' virtual_image.c:(.text+0x3dd7): undefined reference to `myfree' ../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow ../common/libmammo.a(virtual_image.o): In function `allocate_cached_image': virtual_image.c:(.text+0x4233): undefined reference to `mycalloc' virtual_image.c:(.text+0x4253): undefined reference to `mymalloc' virtual_image.c:(.text+0x4275): undefined reference to `mycalloc' virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc' virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc' virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc' virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc' virtual_image.c:(.text+0x4af4): undefined reference to `myfree' collect2: error: ld returned 1 exit status make: *** [afumfeature] Error 1 Building the mass detection program. make: Nothing to be done for `all'. Building the performance evaluation program. gcc -O2 -I. -I../common DDSMeval.o polyscan.o -o DDSMeval -L../common -lmammo -lm ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file': mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc' mikesfileio.c:(.text+0x205): undefined reference to `mycalloc' collect2: error: ld returned 1 exit status make: *** [DDSMeval] Error 1 Building the template creation program. gcc -O2 -I. -I../common mktemplate.o polyscan.o -o mktemplate -L../common -lmammo -lm Building the drawimage program. gcc -O2 -I. -I../common drawimage.o -o drawimage -L../common -lmammo -lm ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file': mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc' mikesfileio.c:(.text+0x205): undefined reference to `mycalloc' collect2: error: ld returned 1 exit status make: *** [drawimage] Error 1 Building the compression/decompression program jpeg. 
gcc -O2 -DSYSV -DNOTRUNCATE -c lexer.c lexer.c:41:1: error: initializer element is not constant lexer.c:41:1: error: (near initialization for ‘yyin’) lexer.c:41:1: error: initializer element is not constant lexer.c:41:1: error: (near initialization for ‘yyout’) lexer.c: In function ‘initparser’: lexer.c:387:21: warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default] lexer.c: In function ‘MakeLink’: lexer.c:443:16: warning: incompatible implicit declaration of built-in function ‘malloc’ [enabled by default] lexer.c:447:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:452:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:455:34: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default] lexer.c:458:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:460:3: warning: incompatible implicit declaration of built-in function ‘strcpy’ [enabled by default] lexer.c: In function ‘getstr’: lexer.c:548:26: warning: incompatible implicit declaration of built-in function ‘malloc’ [enabled by default] lexer.c:552:4: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:557:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default] lexer.c:557:28: warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default] lexer.c:561:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c: In function ‘parser’: lexer.c:794:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default] lexer.c:798:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:1074:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default] lexer.c:1078:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:1116:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default] lexer.c:1120:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:1154:25: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default] lexer.c:1158:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:1190:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:1247:25: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default] lexer.c:1251:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c:1283:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] lexer.c: In function ‘yylook’: lexer.c:1867:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] lexer.c:1867:20: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] lexer.c:1877:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] lexer.c:1877:23: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] make: *** [lexer.o] Error 1
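This looks like an old-C portability problem rather than anything specific to the setup: the fatal errors all come from myalloc.c, and "invalid storage class for function" is what GCC reports when a function is declared static at block scope (inside another function), which C does not allow. The avalanche of undefined references to mycalloc/mymalloc/myfree then follows naturally, because myalloc.o never made it into libmammo.a. A minimal reconstruction and fix sketch in C (my own; the real helper signatures in heathusf may differ):

/* Reconstruction of the failing pattern in myalloc.c (hypothetical):
 *
 *   void *mycalloc(unsigned n, unsigned size)
 *   {
 *       static void store_link(void *p);   // error: invalid storage class
 *       ...
 *   }
 *
 * Fix: hoist the helper's declaration to file scope, before first use.
 */
#include <stdlib.h>

static void store_link(void *p);    /* file-scope prototype */

void *mycalloc(unsigned n, unsigned size)
{
    void *p = calloc(n, size);
    if (p != NULL)
        store_link(p);              /* no implicit declaration now */
    return p;
}

static void store_link(void *p)
{
    (void)p;                        /* allocation bookkeeping elided */
}

The "static declaration of 'store_link' follows non-static declaration" error is the same issue from the other direction: the function was called before any prototype was visible, so GCC implicitly declared it extern, and the later static definition then conflicts. Declaring the helpers at file scope, above their first use, should clear both errors and let libmammo.a link.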

    Read the article

< Previous Page | 645 646 647 648 649 650 651 652 653 654 655 656  | Next Page >