Search Results

Search found 332 results on 14 pages for 'austin anderson'.


  • Unleash the Power of Cryptography on SPARC T4

    - by B.Koch
    by Rob Ludeman Oracle’s SPARC T4 systems are architected to deliver enhanced value for customers via the inclusion of many integrated features.  One of the best examples of this approach is demonstrated in the on-chip cryptographic support that delivers wire-speed encryption capabilities without any impact to application performance.  The Evolution of SPARC Encryption SPARC T-Series systems have a long history of providing this capability, dating back to the release of the first T2000 systems that featured support for on-chip RSA encryption directly in the UltraSPARC T1 processor.  Successive generations have built on this approach by adding support for additional encryption ciphers that are tightly coupled with the Oracle Solaris 10 and Solaris 11 encryption framework.  While earlier versions of this technology were implemented using co-processors, the SPARC T4 was redesigned with new crypto instructions to eliminate some of the performance overhead associated with the former approach, resulting in much higher performance for encrypted workloads. The Superiority of the SPARC T4 Approach to Crypto As companies continue to engage in more and more e-commerce, the need to provide greater degrees of security for these transactions is more critical than ever before.  Traditional methods that applications use to secure data in transit have a number of drawbacks that are addressed by the SPARC T4 cryptographic approach. 1. Performance degradation – cryptography is highly compute-intensive and, therefore, there is a significant cost when using other architectures without embedded crypto functionality.  This performance penalty impacts the entire system, slowing down the performance of web servers (SSL), for example, and potentially bogging down the speed of other business applications.  The SPARC T4 processor enables customers to deliver high levels of security to internal and external customers without incurring an impact to overall SLAs in their IT environment. 2. Added cost – one of the methods to avoid performance degradation is the addition of add-in cryptographic accelerator cards or external offload engines in other systems.  While these solutions provide a brute-force mechanism to avoid the problem of slower system performance, they usually come at an added cost.  Customers looking to encrypt datacenter traffic without the overhead and expenditure of extra hardware can rely on SPARC T4 systems to deliver the performance necessary without the need to purchase other hardware or add-on cards. 3. Higher complexity – the addition of cryptographic cards or leveraging load balancers to perform encryption tasks results in added complexity from a management standpoint.  With SPARC T4, the encryption keys and framework built into Solaris 10 and 11 mean that administrators generally don’t need to spend extra cycles determining how to perform cryptographic functions.  In fact, many of the instructions are built in and require no user intervention to be utilized.  For example, for OpenSSL on Solaris 11, SPARC T4 crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "t4 engine."  For a deeper technical dive into the new instructions included in SPARC T4, consult Dan Anderson’s blog. Conclusion In summary, SPARC T4 systems offer customers much more value for applications than just increased performance.
The integration of key virtualization technologies, embedded encryption, and a true Enterprise Operating System, Oracle Solaris, provides direct business benefits that supersede the commodity approach to data center computing.  SPARC T4 removes the roadblocks to secure computing by offering integrated crypto accelerators that can save IT organizations operating costs while delivering higher levels of performance and meeting objectives around compliance. For more on the SPARC T4 family of products, go here.

    Read the article

  • Java EE @ No Fluff Just Stuff Tour

    - by reza_rahman
    If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disfavor. NFJS is by far the cheapest and most effective way to stay up to date through some world class speakers and talks. This is most certainly true for US enterprise Java developers in particular. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. Many now famous world class technology speakers can trace their humble roots to NFJS. Via NFJS you basically get to have amazing training without paying for an expensive venue, lodging or travel. The events are usually on the weekends so you don't need to even skip work if you want (a great feature for consultants on tight budgets and deadlines). I am proud to share with you that I recently joined the NFJS troupe. My hope is that this will help solve the lingering problem of effectively spreading the Java EE message here in the US. For NFJS I hope my joining will help beef up perhaps much desired Java content. In any case, simply being accepted into this legendary program is an honor I could have perhaps only dreamed of a few years ago. I am very grateful to Jay Zimmerman for seeing the value in me and the Java EE content. The current speaker line-up consists of the likes of Neal Ford, Venkat Subramaniam, Nathaniel Schutta, Tim Berglund and many other great speakers. I actually had my tour debut on April 4-5 with the NFJS New York Software Symposium - basically a short train commute away from my home office. The show is traditionally one of the smaller ones and it was not that bad for a start. I look forward to doing a few more in the coming months (more on that a bit later). I had four talks back to back (really my most favorite four at the moment). The first one was a talk on JMS 2 - some of you might already know JMS is one of my most favored Java EE APIs. The slides for the talk are posted below: What’s New in Java Message Service 2 from Reza Rahman The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project. Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman The third talk I delivered was our flagship Java EE 7/8 talk. As you may know, currently the talk is basically about Java EE 7. I'll probably slowly evolve this talk to gradually transform it into a Java EE 8 talk as we move forward (I'll blog about that separately shortly). The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman My last talk for the show was my JavaScript+Java EE 7 talk. This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman Unsurprisingly this talk was well-attended. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it but the instructions should be fairly self-explanatory. My next NFJS show is the Central Ohio Software Symposium in Columbus on June 6-8 (sorry for the late notice - it's been a really crazy few weeks). Here's my tour schedule so far, I'll keep you up-to-date as the tour goes forward: June 6 - 8, Columbus Ohio. 
June 24 - 27, Denver, Colorado (UberConf) - my most extensive agenda on the tour so far. July 18 - 20, Austin, Texas. I hope you'll take this opportunity to get some updates on Java EE, as well as the other awesome content on the tour.

    Read the article

  • Isis Finally Rolls Out

    - by David Dorf
    Google has rolled their wallet out for several chains; I see the NFC readers in Walgreen's when I'm sent there for milk.  But Isis has been relatively quiet until now.  As of last week they have finally launched in their two test cities: Austin and Salt Lake City.  Below are the supported carriers and phones as of now, but more phones will be added later. AT&T supports: HTC One™ X, LG Escape™, Samsung Galaxy Exhilarate™, Samsung Galaxy S® III, Samsung Galaxy Rugby Pro™. T-Mobile supports: Samsung Galaxy S® II, Samsung Galaxy S® III, Samsung Galaxy S® Relay 4G. Verizon supports: Droid Incredible 4G LTE. Of course iPhone owners have no wallet since Apple didn't include an NFC chip. To start using Isis, you have to take your NFC-capable phone to your carrier's store to get the SIM replaced with a more sophisticated one that has a secure element configured for Isis.  The "secure element" is the cryptographic logic that secures mobile payments.  Carriers like the secure element in the SIM while non-carriers (like Google) prefer the secure element in the phone's electronics. (I'm not entirely sure if you could support both Isis and Google Wallet on the same phone.  Anybody know?) Then you can download the Isis app from Google Play and load your cards.  Most credit cards are supported, and there's a process to verify the credit cards are valid.  Then you can select from the list of participating retailers to "follow."  Selecting a retailer allows that retailer to give you offers via the app. The app is well done and easy to use.  You can select a default payment type and also switch between them easily.  When the phone is tapped on the reader, there are two exchanges of information.  The payment information is transferred, and then the Isis "SmartTap" information, which includes an optional loyalty number and digital coupons.  Of course the value of mobile wallets comes from the ease of handling all three data types (i.e. payment, loyalty, offers). There are several advertisements for Isis running now, and my favorite is below.

    Read the article

  • Comparing Isis, Google, and Paypal

    - by David Dorf
    Back in 2010 I was sure NFC would make great strides, but here we are two years later and NFC doesn't seem to be sticking. The obvious reason is the chicken-and-egg problem.  Retailers don't want to install the terminals until the phones support NFC, and vice-versa. So consumers continue to sit on the sidelines waiting for either side to blink and make the necessary investment.  In the meantime, EMV is looking for a way to sneak into the US with the help of the card brands. There are currently three major solutions that are battling in the marketplace.  All three know that replacing mag-stripe alone is not sufficient to move consumers.  Long-term it's the offers and loyalty programs combined with tendering that make NFC attractive. NFC solutions cross lots of barriers, so a strong partner system is required.  The solutions need to include the carriers, card brands, banks, handset manufacturers, POS terminals, and most of all lots of merchants.  Lots of coordination is necessary to make the solution seamless to the consumer. Google Wallet Google's problem has always been that only the Nexus phone has an NFC chip that supports their wallet.  There are a couple of additional phones out there now, but adoption is still slow.  They acquired Zavers a while back to incorporate digital coupons, but the bulk of their users continue to be non-NFC.  They have taken an open approach by not specifying particular payment brands.  Google is piloting in San Francisco and New York, supporting both MasterCard PayPass and stored value. I suppose the other card brands may eventually follow.  There's no cost for consumers or merchants -- Google will make money via targeted ads. Isis Not long after Google announced its wallet, AT&T, Verizon, and T-Mobile announced a joint venture called Isis.  They are in the unique position of owning the SIM in the phones they issue.  At first it seemed Isis was a vehicle for the carriers to compete with the existing card brands, but Isis later switched to a generic wallet that supports the major card brands.  Isis reportedly charges issuers a $5 fee per customer per year.  Isis will pilot this summer in Salt Lake City and Austin. PayPal PayPal, the clear winner in the online payment space beyond traditional credit cards, is trying to move into physical stores.  After negotiations with Google to provide a wallet broke off, PayPal decided to avoid NFC altogether, at least for now, and focus on payments without any physical card or phone.  By avoiding NFC, consumers don't need an NFC-enabled phone and merchants don't need a new reader.  Consumers must enter their phone number and PIN in the merchant's existing device, or they can enter their PIN in the PayPal inStore app running on their phone, then show the merchant a unique barcode which authorizes payment. PayPal is free for consumers and charges a fee for merchants.  It's not clear, at least to me, how PayPal handles fraudulent transactions and whether the consumer is protected. The wildcard is, of course, Apple.  Their mobile technologies set the standard, so incorporating NFC chips would certainly accelerate adoption of many payment solutions.  Their announcement today of the iOS Passbook is a step in the right direction, but stops short of handling payments. For those retailers that have invested in modern terminals, it seems the best strategy is to support all the emerging solutions and let the consumers choose the winner.

    Read the article

  • Remove duplicates from a list of nested dictionaries

    - by user2924306
    I'm writing my first Python program to manage users in Atlassian On Demand using their RESTful API. I call the users/search?username= API to retrieve lists of users, which returns JSON. The result is a list of complex dictionary types that look something like this: [ { "self": "http://www.example.com/jira/rest/api/2/user?username=fred", "name": "fred", "avatarUrls": { "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=fred", "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=fred", "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=fred", "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=fred" }, "displayName": "Fred F. User", "active": false }, { "self": "http://www.example.com/jira/rest/api/2/user?username=andrew", "name": "andrew", "avatarUrls": { "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=andrew", "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=andrew", "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=andrew", "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=andrew" }, "displayName": "Andrew Anderson", "active": false } ] I'm calling this multiple times and thus getting duplicate people in my results. I have been searching and reading but cannot figure out how to deduplicate this list. I figured out how to sort this list using a lambda function. I realize I could sort the list, then iterate and delete duplicates. I'm thinking there must be a more elegant solution. Thank you!
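
    One straightforward way to handle this is to key each entry on a field that should be unique and keep only the first occurrence. The sketch below is just that, a sketch: it assumes the "name" field (the JIRA username) uniquely identifies each user, and dedupe_users is an illustrative helper name; the "self" URL would work equally well as the key.

        def dedupe_users(users):
            """Return the users with duplicates removed, preserving first-seen order."""
            seen = set()
            unique = []
            for user in users:
                key = user["name"]  # assumption: "name" is unique per user; use user["self"] if preferred
                if key not in seen:
                    seen.add(key)
                    unique.append(user)
            return unique

        # Hypothetical usage: merge the results of several users/search calls, then deduplicate.
        # all_users = []
        # for result in search_results:
        #     all_users.extend(result)
        # unique_users = dedupe_users(all_users)

    If ordering does not matter, a dict comprehension keyed on the same field, list({u["name"]: u for u in users}.values()), collapses the duplicates in one line (keeping the last occurrence of each key).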

    Read the article

  • What is wrong with the JavaScript event handling in this example? (Using click() and hover() jQuery

    - by Bungle
    I'm working on a sort of proof-of-concept for a project that approximates Firebug's inspector tool. For more details, please see this related question. Here is the example page. I've only tested it in Firefox: http://troy.onespot.com/static/highlight.html The idea is that, when you're mousing over any element that can contain text, it should "highlight" with a light gray background to indicate the boundaries of that element. When you then click on the element, it should alert() a CSS selector that matches it. This is somewhat working in the example linked above. However, there's one fundamental problem. When mousing over from the top of the page to the bottom, it will pick up the paragraphs, <h1> element, etc. But, it doesn't get the <div>s that encompass those paragraphs. However, for example, if you "sneak up" on the <div> that contains the two paragraphs "The area was settled..." and "Austin was selected..." from the left - tracing down the left edge of the page and entering the <div> just between the two paragraphs (see this screenshot) - then it is picked up. I assume this has something to do with the fact that I haven't attached an event handler to the <body> element (where you're entering the <div> from if you enter from the left), but I have attached handlers to the <p>s (where you're entering from if you come from the top or bottom). There are also other issues with mousing in and out elements - background colors that "stick" and the like - that I think are also related. As indicated in the related question posted above, I suspect there is something about event bubbling that I don't understand that is causing unexpected behavior. Can anyone spot what's wrong with my code? Thanks in advance for any help!

    Read the article

  • GPO Software Uninstall Not Taking Place

    - by burmat
    I am having some trouble with my software GPOs and can't seem to find any answers using Google. I successfully deployed software using my policy, but when I delete another, the uninstallation of the software does not take place. What I did: Deployed software using a GPO, used gpupdate /force on the workstation to update, reboot, and install the software. Deleted another software installation by: Right-Click > All Tasks > Remove > 'Immediately uninstall the software from users and computers'. From there, I did another gpupdate /force to try and get the GPO to refresh and uninstall the software on the workstation. This did not work. I then forced replication between my domain controllers and ran another gpupdate /force on the workstation, and this did not uninstall the software. There are no error logs or indications that the uninstall is being triggered when I go into the event viewer, and I know for a fact that the policy is working in other aspects. So my question is: where do I look next to find the answer as to why GPO software deployments are working but uninstallations are not, based on what I have already tried? Thank you in advance. UPDATE: After using gpresult /z, there is no indication of a pending uninstallation or removal of software. Under the section entitled "Software Installations", the software I am trying to uninstall is not listed. There is no other indication that the software I am trying to uninstall even exists. I also turned on RSoP logging and did (yet another) gpupdate /force, which yielded no blatant results. There is no indication that an uninstall event was even triggered, let alone any indication of failure. Although I am sure I marked it to uninstall in the case of both events (falling out of the scope of management, as well as the removal of the entry), I am beginning to think the entry just never triggered something that should have been triggered. UPDATE #2: After troubleshooting this (frustrating) application assignment, I have chalked it up as a fluke. I have tested with other software to make sure that the uninstall of other application assignments is actually working, so I am assuming it is something related to the package directly. There is the possibility that my problem resides in something related to what @joeqwerty linked in a comment below, but because I can't go back in time, I don't think I will be able to prove it. I will probably be running a script via another GPO to guarantee the uninstallation of left-over package installs. For now, Evan Anderson is getting the answer because of the debugging information I was able to put to good use. Thank you to everyone that helped contribute so far!

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true: copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this; let’s see if we can’t identify all of them. Write your query to only include the data you want. Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let’s look at a few more practical ways to do this. Hide the unwanted columns. Mouse right-click on a column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste. WYSIWYG Grids, Hide Columns and Filter Rows. Mouse-select the columns. Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data. Use the Export Wizard. This used to be called ‘Unload’ – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied. Trim the columns you don't want copied. Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And Paste! "FIRST_NAME" "LAST_NAME" "Donald" "OConnell" "Douglas" "Grant" "Jennifer" "Whalen" "Pat" "Fay" "Susan" "Mavris" "William" "Gietz" "Alexander" "Hunold" "Bruce" "Ernst" "David" "Austin" "Valli" "Pataballa" "Diana" "Lorentz" "Daniel" "Faviet" "John" "Chen" "Ismael" "Sciarra" "Jose Manuel" "Urman" "Luis" "Popp" "Alexander" "Khoo" "Shelli" "Baida" "Sigal" "Tobias" "Guy" "Himuro" "Karen" "Colmenares" "Matthew" "Weiss" "Adam" "Fripp" "Payam" "Kaufling" "Shanta" "Vollman" "Kevin" "Mourgos" "Julia" "Nayer" "Irene" "Mikkilineni" ... There are probably at least 2 or 3 more ways, but try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset in the copy to clipboard, and it’s now on our to-do list.

    Read the article

  • NET Math Libraries

    - by JoshReuben
    NET Mathematical Libraries   .NET Builder for Matlab The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/ MATLAB Builder NE generates MATLAB based .NET and COM components royalty-free deployment creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them. .NET/Link for Mathematica www.wolfram.com a product that 2-way integrates Mathematica and Microsoft's .NET platform call .NET from Mathematica - use arbitrary .NET types directly from the Mathematica language. use and control the Mathematica kernel from a .NET program. turns Mathematica into a scripting shell to leverage the computational services of Mathematica. write custom front ends for Mathematica or use Mathematica as a computational engine for another program comes with full source code. Leverages MathLink - a Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs. .NET/Link abstracts the low-level details of the MathLink C API. Extreme Optimization http://www.extremeoptimization.com/ a collection of general-purpose mathematical and statistical classes built for the.NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package. download the trial of version 4.0 to try it out. Multi-core ready - Full support for Task Parallel Library features including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including: linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation), equation solvers. Mathematics leverages parallelism using .NET 4.0's Task Parallel Library. Basic math: Complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation. Solving equations: Solve equations in one variable, or solve systems of linear or nonlinear equations. Curve fitting: Linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials. Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming. Numerical integration: Compute integrals over finite or infinite intervals, over 2D and higher dimensional regions. Integrate systems of ordinary differential equations (ODE's). Fast Fourier Transforms: 1D and 2D FFT's using managed or fast native code (32 and 64 bit) BigInteger, BigRational, and BigFloat: Perform operations with arbitrary precision. Vector and Matrix Library Real and complex vectors and matrices. Single and double precision for elements. Structured matrix types: including triangular, symmetrical and band matrices. Sparse matrices. Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition. Portability and performance: Calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit). Statistics Data manipulation: Sort and filter data, process missing values, remove outliers, etc. Supports .NET data binding. Statistical Models: Simple, multiple, nonlinear, logistic, Poisson regression. Generalized Linear Models. One and two-way ANOVA. Hypothesis Tests: 12 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests, such as the Anderson-Darling test for normality, one and two-sample Kolmogorov-Smirnov test, and Levene's test for homogeneity of variances. 
Multivariate Statistics: K-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions. Statistical Distributions: 25 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions. Random numbers: Random variates from any distribution, 4 high-quality random number generators, low discrepancy sequences, shufflers. New in version 4.0 (November, 2010) Support for .NET Framework Version 4.0 and Visual Studio 2010 TPL Parallellized – multicore ready sparse linear program solver - can solve problems with more than 1 million variables. Mixed integer linear programming using a branch and bound algorithm. special functions: hypergeometric, Riemann zeta, elliptic integrals, Frensel functions, Dawson's integral. Full set of window functions for FFT's. Product  Price Update subscription Single Developer License $999  $399  Team License (3 developers) $1999  $799  Department License (8 developers) $3999  $1599  Site License (Unlimited developers in one physical location) $7999  $3199    NMath http://www.centerspace.net .NET math and statistics libraries matrix and vector classes random number generators Fast Fourier Transforms (FFTs) numerical integration linear programming linear regression curve and surface fitting optimization hypothesis tests analysis of variance (ANOVA) probability distributions principal component analysis cluster analysis built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage). Product  Price Update subscription Single Developer License $1295 $388 Team License (5 developers) $5180 $1554   DotNumerics http://www.dotnumerics.com/NumericalLibraries/Default.aspx free DotNumerics is a website dedicated to numerical computing for .NET that includes a C# Numerical Library for .NET containing algorithms for Linear Algebra, Differential Equations and Optimization problems. The Linear Algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively. Linear Algebra (CSLapack, CSBlas and CSEispack). Systems of linear equations, eigenvalue problems, least-squares solutions of linear systems and singular value problems. Differential Equations. Initial-value problem for nonstiff and stiff ordinary differential equations ODEs (explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton). Optimization. Unconstrained and bounded constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).   Math.NET Numerics http://numerics.mathdotnet.com/ free an open source numerical library - includes special functions, linear algebra, probability models, random numbers, interpolation, integral transforms. A merger of dnAnalytics with Math.NET Iridium in addition to a purely managed implementation will also support native hardware optimization. constants & special functions complex type support real and complex, dense and sparse linear algebra (with LU, QR, eigenvalues, ... 
decompositions) non-uniform probability distributions, multivariate distributions, sample generation alternative uniform random number generators descriptive statistics, including order statistics various interpolation methods, including barycentric approaches and splines numerical function integration (quadrature) routines integral transforms, like fourier transform (FFT) with arbitrary lengths support, and hartley spectral-space aware sequence manipulation (signal processing) combinatorics, polynomials, quaternions, basic number theory. parallelized where appropriate, to leverage multi-core and multi-processor systems fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW) provides a native facade for F# developers

    Read the article

  • Upcoming EMEA, APAC & US Events with MySQL in 2014

    - by Lenka Kasparova
    As an update to the previous announcement from Mar 25, 2014 please find below the updated list of events where MySQL Community team is attending and/or supporting. This time you can find not only EMEA & APAC ones but also conferences & events we are covering in the US & Canada. You are invited to meet our engineers at the events below.   EMEA  NEW!! BGOUG, Sandanski, Bulgaria, June 13, 2014  Georgi Kodinov will attend and speak at this local Oracle User Group event. Feel free to come. PHP Tour Lyon, Lyon, France, June 23-24, 2014 MySQL team is going to be part of this show as well, we are not going to have a booth here but very active networking by our french MySQL team around the event. Come to meet us and talk to us! NEW!! Converge Conference, Glasgow, Scotland, August 15-16, 2014  MySQL Community Manager, David Stokes attends with MySQL talk. NEW!! CakeFest, Madrid, Spain, August 21-24, 2014  A talk on "Scaling Your MySQL instances AND keeping your Sanity" will be given by the MySQL Community Manager, David Stokes. Froscon 2014, St.Augustin, Germany, August 23-24, 2014 Please visit our booth as well as watch the Froscon website for the schedule updates. NEW!! SymfonyLive, UK, London, September 25-26, 2014 MySQL Community Magers, David Stokes & Morgan Tocker submitted MySQL talks for this show. Schedule will be announced later on. DrupalCon Amsterdam, The Netherlands, September 29-Oct 3, 2014 Meet us at our booth at DrupalCon Amsterdam. For the schedule please watch the DrupalCon website. All Your Base, Oxford UK, October 17, 2014  Come to visit our MySQL booth and talk to our MySQL experts. NEW!! WebTechCon / IPC, Munich Germany, October 26-29, 2014 NEW!! DOAG, Nuremberg, Germany, November 18-20, 2014 There will be a full day of MySQL talks and one full day of MySQL workshop & sessions with live demo. This event is simply hard to miss! NEW!! Forum PHP Paris, France, November 21-22, 2014 More details: TBD NEW!! UK OUG, Liverpool, UK, December 8-10, 2014 MySQL will be part of the Oracle booth and we hope to get more space for MySQL talks.  USA NEW!! Texas Linux Fest, Austin, Texas, US, June 13-14, 2014 NEW!! SouthEast Linux Fest, Charlotte, US, June 20-22, 2014 NEW!! Debian Conference 2014, Portland, OR, US, August 23-31, 2014 NEW!! FossetCon, Orlando, US, September 11-13, 2014 NEW!! Oracle Open World, San Francisco, US, September 29-October 3, 2014 NEW!! MySQL Central @ Open/World, San Francisco, US, September 29-October 3, 2014 NEW!! PyTexas 2014, Dallas, TX, US, October 3-5, 2014 NEW!! All Things Open (replacing POSSCON), Raleigh, NC, October 23-24, 2014 NEW!! Ohio LinuxFest 2014, Columbus, Ohio, US, October 24-25, 2014 NEW!! ZendCon PHP, Santa Clara, US, October 27-30, 2014 NEW!! Kuali Days 2014, Indianapolis, US, November 10-13, 2014 NEW!! Live 360, Orlando, FL, US, November 17-20, 2014 APAC OpenSourceConference Japan, Hokkaido, June 13-14, 2014 MySQL is represented by Ryusuke Kajiyama with the talk on "MySQL Technology Updates". NEW!! db tech showcase, Osaka Japan, June 18-20, 2014 Three MySQL talks are scheduled for this show, "MySQL for Oracle DBA" & "MySQL Technology Updates" by Ryusuke Kajiyama. The last talk will be on MySQL Fabric by Yoshiaki Yamasaki. NEW!! PyCon Singapore, Singapore, June 18-20, 2014 Ryusuke Kajiyama will be talking about "Sharding and scale-out using Python-based MySQL Fabric". NEW!! COSCUP, Taipei, Taiwan, July 19-20, 2014 We are going to run a technical session on MySQL Workbench & one talk on how to make MySQL better MySQL. NEW!! 
PyCon New Zealand, Wellington, New Zealand, September 13-14, 2014 MySQL talks were submitted as well as one talk by Solaris Modernization team on Python & Solaris, watch the website for schedule updates. NEW!! PyCon Japan, Tokyo Japan, September 13-15, 2014 MySQL will be a MySQL session speaker, no schedule is announced yet. Ruby Kaigi, Tokyo, Japan, September 18-20, 2014 Another event MySQL supports and attends in APAC region. Ruby Kaigi is the international Ruby Conference in Japan, Tokyo. Ruby started in Japan, so Ruby Kaigi has excellent speakers and developers! MySQL team is going to be present at this conference with MySQL talks and active networking around the venue. NEW!! PyCon India, Bangalore, India, September 26-28, 2014 A MySQL talk on "MySQL Utilities scaling MySQL with Python" has been submitted, please watch the PyCon website for the schedule updates. NEW!! OpenSourceConference Japan, Tokyo, October 18-19, 2014 NEW!! OpenSource India, Bengaluru, India, November 7-8, 2014 NEW!! OpenSourceConference Japan, Fukuoka, November 14-15, 2014 You can check the MySQL wikis for updates on the conferences we are attending. Next time I hope to have more details for each event above (especially for the US ones).

    Read the article

  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about “Einstein’s Razor” yesterday, which reminded me of my favorite software development “law”, the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter’s Law – see below), but along the way I compiled the following list: Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. Brook’s Law: Adding manpower to a late software project makes it later. Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic. Law of Demeter: Each unit should only talk to its friends; don't talk to strangers. Einstein’s Razor: “Make things as simple as possible, but not simpler” is the popular paraphrase, but what he actually said was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”, an overly complicated quote which is an obvious violation of Einstein’s Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.) Finagle's Law of Dynamic Negatives: Anything that can go wrong, will—at the worst possible moment. - O'Toole's Corollary: The perversity of the Universe tends towards a maximum. Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris’s Corollary: “…including Common Lisp”) Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. Issawi’s Omelet Analogy: One cannot make an omelet without breaking eggs - but it is amazing how many eggs one can break without making a decent omelet. Jackson’s Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet. Kaner’s Caveat: A program which perfectly meets a lousy specification is a lousy program. Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it Mason’s Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do. Nils-Peter Nelson’s Nil I/O Rule: The fastest I/O is no I/O.    Occam's Razor: The simplest explanation is usually the correct one. Parkinson’s Law: Work expands so as to fill the time available for its completion. Quentin Tarantino’s Pie Principle: “…you want to go home have a drink and go and eat pie and talk about it.” (OK, he was talking about movies, not software, but I couldn’t find a “Q” quote about software. And wouldn’t it be cool to write a program so great that the users want to eat pie and talk about it?) Raymond’s Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.  Sowa's Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X. Turing’s Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.  
Udi Dahan’s Race Condition Rule: If you think you have a race condition, you don’t understand the domain well enough. These rules didn’t exist in the age of paper, there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out actual rules. Van Vleck’s Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer -- and keep pouring our sewage into our water supply. Wheeler’s Law: All problems in computer science can be solved by another level of indirection... Except for the problem of too many layers of indirection. Wheeler also said “Compatibility means deliberately repeating other people's mistakes.”. The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you've gone, turn back. Yourdon’s Rule of Two Feet: If you think your management doesn't know what it's doing or that your organisation turns out low-quality software crap that embarrasses you, then leave. Zawinski's Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for “Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems.” He once commented about X Windows widget toolkits: “Using these toolkits is like trying to make a bookshelf out of mashed potatoes.”

    Read the article

  • OTN Architect Day Headed to Reston, VA - May 16

    - by Bob Rhubart
    In 2011 OTN Architect Day made stops in Chicago, Denver, Phoenix, Redwood Shores, and Toronto. The 2012 series begins with OTN Architect Day in Reston, VA on Wednesday May 16. Registration is now open for this free event, but don't get caught napping -- seating is limited, and the event is just 5 weeks away. The information below reflects the most recent updates to the event agenda, including the addition of Oracle ACE Director Kai Yu as the guest keynote speaker. Kai is Senior System Engineer / Architect at Dell, Inc., and has been very busy of late as a speaker at various industry and Oracle User Group events. I'm very happy Kai has agreed to make the trek from his hometown in Austin, TX to share his insight at the Architect Day event in Reston.  If you're in the area, put this one on your calendar. You won't be sorry.   Venue Sheraton Reston Hotel 11810 Sunrise Valley Drive Reston, VA 20191 Event Agenda 8:30 am - 9:00 am Registration and Continental Breakfast 9:00 am - 9:15 am Welcome and Opening Comments 9:15 am - 10:00 am Engineered Systems: Oracle's Vision for the Future | Ralf Dossman Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance with up to a 30x increase over existing legacy systems, with the lowest cost of ownership over a 3 or 5 year basis than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO. 10:00 am - 10:30 am High Availability Infrastructure for Cloud Computing | Kai Yu Infrastructure high availability is extremely critical to Cloud Computing. In a Cloud system that hosts a large number of databases and applications with different SLAs, any unplanned outage can be devastating, and even a small planned downtime may be unacceptable. This presentation will discuss various technology solutions and the related best practices that system architects should consider in cloud infrastructure design to ensure high availability. 10:30 am - 10:45 am Break 10:45 am - 11:30 am Breakout Sessions: (pick one) Innovations in Grid Computing with Oracle Coherence | Bjorn Boe Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers had implemented and how Oracle is integrating Coherence into its Enterprise Java stack. Cloud Computing - Making IT Simple | Scott Mattoon The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds. 11:30 pm - 12:15 pm Breakout Sessions: (pick one) Oracle Enterprise Manager | Joe Diemer Oracle Enterprise Manager (EM) provides complete lifecycle management for the cloud - from automated cloud setup to self-service delivery to cloud operations. In this session you'll learn how to take control of your cloud infrastructure with EM features including Consolidation Planning and Self-Service provisioning with Metering and Chargeback. 
Come hear how Oracle is expanding its management capabilities into the cloud! Rationalization and Defense in Depth - Two Steps Closer to the Clouds | Dave Chappelle Security represents one of the biggest concerns about cloud computing. In this session we'll get past the FUD with a real-world look at some key issues. We'll discuss the infrastructure necessary to support rationalization and security services, explore architecture for defense -in-depth, and deal frankly with the good, the bad, and the ugly in Cloud security. 12:15 pm - 1:15 pm Lunch 1:40 pm - 2:00 pm Panel Discussion - Q&A 2:00 pm - 2:45 pm Breakout Sessions: (pick one) 21st Century SOA | Peter Belknap Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's Unified technology stack. Track B: Oracle Cloud Reference Architecture | Anbu Krishnaswamy Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation gives an overview of Oracle's Cloud Reference Architecture, which is part of the Cloud Enterprise Technology Strategy (ETS). Concepts covered include common management layer capabilities, service models, resource pools, and use cases. 2:45 pm - 3:00 pm Break 3:00 pm - 4:00 pm Roundtable Discussions 4:00 pm - 4:15 pm Closing Comments & Readouts from Roundtable 4:15 pm - 5:00 pm Cocktail Reception / Networking Session schedule and content subject to change.

    Read the article

  • Oracle RightNow CX for Good Customer Experiences

    - by Andreea Vaduva
    Oracle RightNow CX is all about the customer experience, it’s about understanding what drives a good interaction and it’s about delivering a solution which works for our customers and by extension, their customers. One of the early guiding principles of Oracle RightNow was an 8-point strategy to providing good customer experiences. Establish a knowledge foundation Empowering the customer Empower employees Offer multi-channel choice Listen to the customer Design seamless experiences Engage proactively Measure and improve continuously The application suite provides all of the tools necessary to deliver a rewarding, repeatable and measurable relationship between business and customer. The Knowledge Authoring tool provides gap analysis, WYSIWIG editing (and includes HTML rich content for non-developers), multi-level categorisation, permission based publishing and Web self-service publishing. Oracle RightNow Customer Portal, is a complete web application framework that enables businesses to control their own end-user page branding experience, which in turn will allow customers to self-serve. The Contact Centre Experience Designer builds a combination of workspaces, agent scripting and guided assistances into a Desktop Workflow. These present an agent with the tools they need, at the time they need them, providing even the newest and least experienced advisors with consistently accurate and efficient information, whilst guiding them through the complexities of internal business processes. Oracle RightNow provides access points for customers to feedback about specific knowledge articles or about the support site in general. The system will generate ‘incidents’ based on the scoring of the comments submitted. This makes it easy to view and respond to customer feedback. It is vital, more now than ever, not to under-estimate the power of the social web – Facebook, Twitter, YouTube – they have the ability to cause untold amounts of damage to businesses with a single post – witness musician Dave Carroll and his protest song on YouTube, posted in response to poor customer services from an American airline. The first day saw 150,000 views and is currently at 12,011,375. The Times reported that within 4 days of the post, the airline’s stock price fell by 10 percent, which represented a cost to shareholders of $180 million dollars. It is a universally acknowledged fact, that when customers are unhappy, they will not come back, and, generally speaking, it only takes one bad experience to lose a customer. The idea that customer loyalty can be regained by using social media channels was the subject of a 2011 Survey commissioned by RightNow and conducted by Harris Interactive. The survey discovered that 68% of customers who posted a negative review about a holiday on a social networking site received a response from the business. It further found that 33% subsequently posted a positive review and 34% removed the original negative review. Cloud Monitor provides the perfect mechanism for seeing what is being said about a business on public Facebook pages, Twitter or YouTube posts; it allows agents to respond proactively – either by creating an Oracle RightNow incident or by using the same channel as the original post. This leaves step 8 – Measuring and Improving: How does a business know whether it’s doing the right thing? How does it know if its customers are happy? How does it know if its staff are being productive? How does it know if its staff are being effective? 
Cue Oracle RightNow Analytics – fully integrated across the entire platform – Service, Marketing and Sales – there are in excess of 800 standard reports. If this were not enough, a large proportion of the database has been made available via the administration console, allowing users without any prior database experience to write their own reports, format them and schedule them for e-mail delivery to a distribution list. It handles the complexities of table joins, and allows for the manipulation of data with ease. Oracle RightNow believes strongly in the customer owning their solution, and to provide the best foundation for success, Oracle University can give you the RightNow knowledge and skills you need. This is a selection of the courses offered: RightNow Customer Service Administration Rel 12.02 (3 days) Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand) This course familiarises users with the tasks and concepts needed to configure and maintain their system. RightNow Customer Portal Designer and Contact Center Experience Designer Administration Rel 12.02 (2 days) Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand) This course introduces basic CP structure and how to make changes to the look, feel and behaviour of their self-service pages. RightNow Analytics Rel 12.02 (2 days) Available as In Class, Live Virtual Class and Training On Demand (Release 11.11 is available as In Class and Live Virtual Class) This course equips users with the skills necessary to understand data supplied by standard reports and to create custom reports. RightNow Integration and Customization For Developers Rel 12.02 (5 days) Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand) This course is for experienced web developers and offers an introduction to Add-In development using the Desktop Add-In Framework and introduces the core knowledge that developers need to begin integrating Oracle RightNow CX with other systems. A full list of courses offered can be found on the Oracle University website. For more information and course dates please get in contact with your local Oracle University team. On top of the Service components, the suite also provides marketing tools, complex survey creation and tracking, and sales functionality. I’m a fan of the application, and I think I’ve made that clear: it’s completely geared up to providing customers with support at point of need. It can be configured to meet even the most stringent of business requirements. Oracle RightNow is passionate about, and committed to, providing the best customer experience possible. Oracle RightNow CX is the application that makes it possible. About the Author: Sarah Anderson worked for RightNow for 4 years in both a consulting and a training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses: RightNow Customer Service Administration, RightNow Analytics, RightNow Customer Portal Designer and Contact Center Experience Designer Administration, and RightNow Marketing and Feedback.

    Read the article

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I‘m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market. If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser.  Try it and see.  Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good. But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks. What’s the answer?  
I think it’s a bit like the 1980’s New York City “don’t block the box” gridlock rules: don’t enter an intersection unless you see a clear path through it.  If the light turns red and you’re blocking the perpendicular traffic, that’s your fault in judgment.  You get fined and get points on your license and you don’t get to shrug it off as beyond your control.  Accountability is key.  The same goes for Microsoft.  If it decides to enter a market, it should see a reasonable path through success in that market. Switching analogies, Microsoft shouldn’t make investments haphazardly, and it certainly shouldn’t ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns.  People won’t continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments.  The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches.  It’s true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too.  When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility.  That doesn’t mean all risk is bad, but it does mean no product team’s risk should be taken lightly. For mutual fund companies, it’s the CEO’s job to give his fund managers autonomy, but to make sure they’re conforming to a standard of rational risk management.  Because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

    Read the article

  • Exchange 2010 OWA - a few questions about using multiple mailboxes

    - by Alexey Smolik
    We have an Exchange 2010 SP2 deployment and we need our users to be able to access multiple mailboxes in OWA. The problem is that a user (e.g. John Smith) needs to access not just somebody else's mailboxes (e.g. Tom Anderson's), but his OWN mailboxes, e.g. in different domains: [email protected], [email protected], [email protected], etc. Of course it is preferable for the user to work with all of his mailboxes from a single window. Such mailboxes can be added as multiple Exchange accounts in Outlook, and that works almost fine. But in OWA, there are problems: 1) In the left pane - as I've learned - we can open only Inbox folders from other mailboxes. Is there no way to view all folders as in Outlook? 2) With Send-As permissions set, when trying to send a message from another address, that message is saved in the Sent Items folder of the mailbox that is opened in OWA, and not in the mailbox the message is sent from. The same thing happens with the trash can. Is there a way to fix that? Also, this problem exists in desktop Outlook when mailboxes are added automatically via the Auto Mapping feature, so we need to turn it off and add the accounts manually. Is there a simpler workaround? 3) Okay, suppose we only open Inbox folders in the left pane. The problem is that the mailbox names shown there are formed from the Display Name attributes. But those names are all identical! All the mailboxes are owned by John Smith, so they should all be named John Smith - so that the message recipient sees "John Smith" in the "from" field, no matter which mailbox it is sent from. Also, the user knows his own name - no need to tell him. He wants to know which mailbox he is working with. So we need a way to either: a) customize OWA to show the mailbox email address instead of the user Display Name, or b) make Exchange use another attribute in the "from" field when sending messages. 4) Okay, we can switch between mailboxes using "Open Other Mailbox" in the upper-right corner menu. But: a) To select a mailbox we need to enter its name (or its first letters). Is there a way to show a list of links to the mailboxes the user has full access to? E.g. in the page header... b) If we start entering the first letters, we see a popup list of possible mailboxes to open. But it shows all mailboxes (apparently from the GAL), not only the mailboxes the user has permission to open! How can we filter that popup list? c) The same problem as in (3) with mailbox naming. We can see the opened mailbox's email address ONLY in the page URL, which is insufficient for many users. In the left pane we see "John Smith", which is useless. 5) Each mailbox is tied to a separate user in AD. If one person has several mailboxes, we need to have additional dummy AD accounts, create additional OUs to store them, etc. That's not very nice; is there any standardized, optimal way to build such a structure? We would really appreciate any answers or additional info on any of these questions. Thank you in advance.

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson’s wonderful SPServices project for. If you haven’t already seen or used SPServices, please do. It’s a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you’re not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let’s say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that’s easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You’ll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page to be displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work. The DataView Web Part doesn’t look *exactly* like what the out of the box display form page does. Not a huge problem and can be overcome with some CSS style overrides but still, more work. A DVWP showing a single record doesn’t have the same toolbar that you would using the DispForm.aspx. Not a show-stopper and you can rebuild the toolbar but it’s going to potentially require code and then there’s the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called “ID” which then displays whatever that item ID number is in the list. Trouble is, when you’re coming in from an external app via a link, you don’t know what that internal ID is (and frankly shouldn’t). I don’t like exposing internal SharePoint IDs to the outside world for the same reason I don’t do it with database IDs. They’re internal and while it’s find to use on the site itself you don’t want external links using it. It’s volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That’s great if you have that ability but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it’s Java, but really I’m a lazy geek). However if you’re doing this and have access to call a web service that would be an option. 
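As a side note on the web-service idea just mentioned: the same CAML lookup is very compact if you do have server-side code available. Below is a minimal sketch using the SharePoint server object model rather than the Lists web service, so it only illustrates the lookup-by-field idea and is not the approach this post takes; the Customers list and CustomerNumber field match the example in this post, while the helper class, method name, and site URL parameter are placeholders of my own.

using Microsoft.SharePoint;

public static class CustomerUrlResolver
{
    // Server-side sketch: resolve the internal SharePoint item ID for a given
    // customer number, then build the DispForm.aspx URL from it.
    public static string GetDispFormUrl(string siteUrl, string customerNumber)
    {
        using (SPSite site = new SPSite(siteUrl))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Customers"];

            SPQuery query = new SPQuery();
            query.Query = "<Where><Eq><FieldRef Name='CustomerNumber'/>" +
                          "<Value Type='Text'>" + customerNumber + "</Value></Eq></Where>";
            query.RowLimit = 1;

            SPListItemCollection items = list.GetItems(query);
            if (items.Count == 0)
            {
                return null; // no matching customer
            }

            // Internal ID recovered server-side; expose only the friendly URL.
            return web.Url + "/Lists/Customers/DispForm.aspx?ID=" + items[0].ID;
        }
    }
}

The jQuery/SPServices route described next does the same thing from the browser, which is exactly what made it attractive here - nothing new to deploy to the server.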
Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That’s where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knives of Web Parts, onto the DispForm.aspx page and added these lines: <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"> </script> Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that’s just a PITA), it provides me with version control of sorts, and it’s quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices Library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one). $(document).ready(function(){ var queryStringValues = $().SPServices.SPGetQueryString(); var id = queryStringValues["ID"]; if(id == "0") { var customer = queryStringValues["CustomerNumber"]; var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>"; var url = window.location; $().SPServices({ operation: "GetListItems", listName: "Customers", async: false, CAMLQuery: query, completefunc: function (xData, Status) { $(xData.responseXML).find("[nodeName=z:row]").each(function(){ id = $(this).attr("ows_ID"); url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id; window.location = url; }); } }); } }); What’s happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library as I had 15 lines of code to do this which is now gone). Line 4: Extract the ID value from the query string Line 6: If we pass in “0” it means we’re looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but lookup our values when invoked. Why ID at all? DispForm.aspx doesn’t work unless you pass in something and “0” is a *magic* number that will invoke the page but not lookup a value in the database. Line 8-15: Extract the CustomerNumber query string value, build a CAML query to find it then call the GetListitems method using SPServices Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item Line 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page As you can see, it dynamically creates a CAML query for the call to the web service using the passed in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want. 
For example you could bring up the correct item on the DispForm.aspx page based on customer name with something like this: http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony Use your imagination. Some people would opt for building a custom page with a DVWP but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don’t want to rely on internal SharePoint IDs.

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure: Getting Started/Overview A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, and Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts; I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage. (Automatic) Installation and the Image Packaging System (IPS) The brand new Image Packaging System (IPS) and the Automatic Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you wanted to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first. Networking One of the first things to do after installing Solaris 11 (or any operating system, for that matter) is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko!
And Milek points out a long-awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact many customers have looked forward to: How to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you in DTracing TCP Congestion Control how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts! DTrace And there we are: DTrace, the king of all observability tools. Long time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself! Security Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud, is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one less major entry point to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: First by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: The Solaris X86 AESNI OpenSSL Engine will make sure you use your Intel CPU's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: It comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more... Developers Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring! Desktop and Graphics Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released! Performance Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof?
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11! Send Me More Solaris 11 Launch Articles! These are just a few of the more interesting blog articles that came out around the Solaris 11 launch, I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far and share your enthusiasm for Solaris 11! *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    Interesting articles and blogs on the SPARC T4 processor. I have consolidated all the interesting information I could get on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful. 1. Advantages of the SPARC T4 processor. The most important points in this T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The data sheet has more details: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512" I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system. It shows the new instructions as expected: $ isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc 32-bit sparc applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32  2. Dan Anderson's blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library." 3. Dan's blog on Where's the Crypto Libraries? Although this was written in 2009, it is still very useful: "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_\*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the "kCF" module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra. libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms." 4. Difference between T3 and T4: The diagram in this blog is good and self-explanatory.
Jeff's blog also highlights the differences  "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is "just built in" so administrators no longer have to assign crypto accelerator units to domains - it "just happens". Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. .... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camelia, CRC32c, and more SHA-x." 5. About performance counters In this blog, performance counters are explained : "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp, there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations.  You can see these using cpustat: cpustat -c pic0=Instr_FGU_crypto 5 You can check the general crypto support of the hardware and OS with the command "isainfo -v". Since T4 crypto's implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm.  " For more details refer Martin's blog as well. 6. How to turn off  SPARC T4 or Intel AES-NI crypto acceleration  I found this interesting blog from Darren about how to turn off  SPARC T4 or Intel AES-NI crypto acceleration. "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine.   The alternate to this is having the application coded to call getisax(2) system call and make the choice itself.  We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so) The Solaris linker/loader allows control of a lot of its functionality via environment variables, we can use that to control the version of the cryptographic functions we run.  To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present.  This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly.  For SPARC T4 : export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul" .. For Intel systems with AES-NI support: export LD_HWCAP="-aes"" Note that LD_HWCAP is explained in  http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 -  Identifies an alternative hardware capabilities value... A “-” prefix results in the capabilities that follow being removed from the alternative capabilities." 7. Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing This Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing explains more details.  It has DTrace scripts which may come in handy : "To ensure the hardware-assisted cryptographic acceleration is configured to use and working with the security scenarios, it is recommended to use the following Solaris DTrace script. 
#!/usr/sbin/dtrace -s pid$1:libsoftcrypto:yf*:entry, pid$target:libsoftcrypto:rsa*:entry, pid$1:libmd:yf*:entry { @[probefunc] = count(); } tick-1sec { printa(@ops); trunc(@ops); }" Note that I have slightly modified the D Script to have RSA "libsoftcrypto:rsa*:entry" as well as per recommendations from Chi-Chang Lin. 8. References http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4 http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html   https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained  https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf

    Read the article

  • What is SharePoint Out of the Box?

    - by Bil Simser
    It’s always fun in the blog-o-sphere and SharePoint bloggers always keep the pot boiling. Bjorn Furuknap recently posted a blog entry titled Why Out-of-the-Box Makes No Sense in SharePoint, quickly followed up by a rebuttal by Marc Anderson on his blog. Okay, now that we have all the players and the stage what’s the big deal? Bjorn started his post saying that you don’t use “out-of-the-box” (OOTB) SharePoint because it makes no sense. I have to disagree with his premise because what he calls OOTB is basically installing SharePoint and admiring it, but not using it. In his post he lays claim that modifying say the OOTB contacts list by removing (or I suppose adding) a column, now puts you in a situation where you’re no longer using the OOTB functionality. Really? Side note. Dear Internet, please stop comparing building software to building houses. Or comparing software architecture to building architecture. Or comparing web sites to making dinner. Are you trying to dumb down something so the general masses understand it? Comparing a technical skill to a construction operation isn’t the way to do this. Last time I checked, most people don’t know how to build houses and last time I checked people reading technical SharePoint blogs are generally technical people that understand the terms you use. Putting metaphors around software development to make it easy to understand is detrimental to the goal. </rant> Okay, where were we? Right, adding columns to lists means you are no longer using the OOTB functionality. Yeah, I still don’t get it. Another statement Bjorn makes is that using the OOTB functionality kills the flexibility SharePoint has in creating exactly what you want. IMHO this really flies in the absolute face of *where* SharePoint *really* shines. For the past year or so I’ve been leaning more and more towards OOTB solutions over custom development for the simple reason that its expensive to maintain systems and code and assets. SharePoint has enabled me to do this simply by providing the tools where I can give users what they need without cracking open up Visual Studio. This might be the fact that my day job is with a regulated company and there’s more scrutiny with spending money on anything new, but frankly that should be the position of any responsible developer, architect, manager, or PM. Do you really want to throw money away because some developer tells you that you need a custom web part when perhaps with some creative thinking or expectation setting with customers you can meet the need with what you already have. The way I read Bjorn’s terminology of “out-of-the-box” is install the software and tell people to go to a website and admire the OOTB system, but don’t change it! For those that know things like WordPress, DotNetNuke, SubText, Drupal or any of those content management/blogging systems, its akin to installing the software and setting up the “Hello World” blog post or page, then staring at it like it’s useful. “Yes, we are using WordPress!”. Then not adding a new post, creating a new category, or adding an About page. Perhaps I’m wrong in my interpretation. This leads us to what is OOTB SharePoint? To many people I’ve talked to the last few hours on twitter, email, etc. it is *not* just installing software but actually using it as it was fit for purpose. What’s the purpose of SharePoint then? 
It has many purposes, but using the OOTB templates Microsoft has given you the ability to collaborate on projects, author/share/publish documents, create pages, track items/contacts/tasks/etc. in a multi-user web based interface, and so on. Microsoft has pretty clear definitions of these different levels of SharePoint we’re talking about and I think it’s important for everyone to know what they are and what they mean. Personalization and Administration To me, this is the OOTB experience. You install the product and then are able to do things like create new lists, sites, edit and personalize pages, create new views, etc. Basically use the platform services available to you with Windows SharePoint Services (or SharePoint Foundation in 2010) to your full advantage. No code, no special tools needed, and very little user training required. Could you take someone who has never done anything in a website or piece of software and unleash them onto a site? Probably not. However I would argue that anyone who’s configured the Outlook reading layout or applied styles to a Word document probably won’t have too much difficulty in using SharePoint OUT OF THE BOX. Customization Here’s where things might get a bit murky but to me this is where you start looking at HTML/ASPX page code through SharePoint Designer, using jQuery scripts and plugging them into Web Part Pages via a Content Editor Web Part, and generally enhancing the site. The JavaScript debate might kick in here claiming it’s no different than C#, and frankly you can totally screw a site up with jQuery on a CEWP just as easily as you can with a C# delegate control deployed to the server file system. However (again, my blog, my opinion) the customization label comes in when I need to access the server (for example creating a custom theme) or have some kind of net-new element I add to the system that wasn’t there OOTB. It’s not content (like a new list or site), it’s code and does something functional. Development Here’s were the propeller hats come on and we’re talking algorithms and unit tests and compilers oh my. Software is deployed to the server, people are writing solutions after some kind of training (perhaps), there might be some specialized tools they use to craft and deploy the solutions, there’s the possibility of exceptions being thrown, etc. There are a lot of definitions here and just like customization it might get murky (do you let non-developers build solutions using development, i.e. jQuery/C#?). In my experience, it’s much more cost effective keeping solutions under the first two umbrellas than leaping into the third one. Arguably you could say that you can’t build useful solutions without *some* kind of code (even just some simple jQuery). I think you can get a *lot* of value just from using the OOTB experience and I don’t think you’re constraining your users that much. I’m not saying Marc or Bjorn are wrong. Like Obi-Wan stated, they’re both correct “from a certain point of view”. To me, SharePoint Out of the Box makes total sense and should not be dismissed. I just don’t agree with the premise that Bjorn is basing his statements on but that’s just my opinion and his is different and never the twain shall meet.

    Read the article

  • CodePlex Daily Summary for Sunday, May 23, 2010

    CodePlex Daily Summary for Sunday, May 23, 2010New ProjectsA2Command: Apple 2 port of CBM-Command (http://cbmcommand.codeplex.com)AgUnit: AgUnit is a plugin for Jetbrains ReSharper (R#) that allows you to run and debug Silverlight unit tests from within Visual Studio.BSonPosh Powershell Module: A collection of useful Powershell functions I have written and collected over the years. It is a Powershell v2 Module composed of mostly scripts.DB Restriker: Simple tool for lookup, parsing, searching some standard databases using wildcards and pattern recognition.Entity Framework Repository & Unit of Work Template: T4 Template for Entity Framework 4 for creating a data access layer using the repository and unit of work patterns. Designed to work well with dep...Fiction Catalog: A catalog project designed to store information about fictional literature.Giving a Presentation: Useful for people doing presentations, this application hides desktop icons, disables screensaver, closes chosen programs when presentation starts,...glueless: Glueless is a local message bus which allows architect to design highly decoupled systems and applications. Glueless is a step beyond dependency i...HtmlCodeIt: Take any code and format it so that it can be viewed properly on a web browser, blog post or website.just testproject :): just have a test!KanbanTaskboard: The aim of the project is to design and implement a functional prototype for visualizing and operating a multi-platform virtual "Kanban Taskboard”Life System: Life SystemOaSys Project: Project Oasys is a project that aims to help solve desertification. Scoring of pingPong Game: Scoring of pingPong GameSilverlight Web Comic: The Silverlight Web Comic makes easier for the people create your own comic with your own pictures o drawings, and add the globes of text like the ...TickSharp: C# Wrapper for http://TickSpot.com RESTful API.Traductor: El Traductor es una aplicación de escritorio para traducción de frases entre distintos idiomas basada en la plataforma Silverlight Out Of Browser y...WatchersNET.SkinObjects.ModulActionsMenu: Displays the Module Actions Menu as a Unsorted CSS Menu.xxfd1r4w96: testingNew ReleasesAgUnit: AgUnit 0.1: Initial release of AgUnit. Copy the extracted files from AgUnit-0.1.zip into the "Bin\Plugins\" folder of your ReSharper installation (default C:...ASP.NET MVC | SCAFFOLD: ASP.NET MVC SCAFFOLD - Beta 1.0: Release versão betaBizTalk Server 2006 Documenter: Documenter_v3.4.0.0: This is the new release of the documenter which has the following highlights Support for 64 bit systems Support for SxS scenarios (so now the sys...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 Beta 2- VS 2008 Replacement: The CassiniDev Visual Studio build is a fully compatibly Visual Studio 2008/2010 Development server drop-in replacement with all CassiniDev enhance...CBM-Command: 2010-05-22 Beta: Release Notes - 2010-05-22 BetaNew Features Simple text file viewer. 
Now when you use SHIFT-RETURN to open a file, it will ask if you want to view...Easy Validation: Documentation: Documentation for easyVal was created and presented at University of Texas at Austin in May of 2010.Entity Framework Repository & Unit of Work Template: 1.0: Initial ReleaseFrotz.NET: FrotzNet 1.0 beta: Many, many changes, including: - Got Adaptive Palette working for graphics - Got undo working - Implemented all zcodes - Added scripting as well as...Giving a Presentation: CTP: This release includes basic extensibility infrastructure and three extensions: hides desktop icons, disables screensaver, closes chosen programs wh...Gov 2.0 Kit: SharePoint 2010 MyPeeps Mysite Accelerators: SharePoint 2010 MyPeeps Mysite Accelerators. Attached are the installation and documentations files.HKGolden Express: HKGoldenExpress (Build 201005221900): New features: (None) Bug fix: Hong Kong special characters now can be posted without encoding problem. Improvements: (None) Other changes: (None) K...Intellibox - A WPF auto complete textbox search control: Beta 2: Updated the namespace of the Intellibox control from "System.Windows.Controls" to "FeserWard.Controls". Empty binding Path properties now work on...MDownloader: MDownloader-0.15.14.59111: Fixed DepositFile provider. Fixed FileFactory provider. Added simple fakeness detector (can check if .rar, .zip, .7z files have valid signature...Mute4: V1: Initial version of Mute4NLog - Advanced .NET Logging: Nightly Build 2010.05.22.003: Changes since the last build:No changes. Unit test results:Passed 191/191 (100%) Passed 191/191 (100%) Passed 214/214 (100%) Passed 216/216 (100%)...NSIS Autorun: NSIS Autorun 0.1.9: This release includes source code, executable binaries and example materials.Silverlight Gantt Chart: Silverlight Gantt Chart 1.3 (SL4): The latest release mainly makes the Gantt Chart useful in Silverlight 4 applications.SqlServerExtensions: V 0.2 beta: V 0.2 Beta release: New features available TrimStart - trim leading characters TrimEnd - trim trailing characters Remove - remove characters f...Traductor: Version 3.1: Nuevo en esta versión: El Traductor ahora permite escoger entre los motores de Microsoft y Google. El Text to Speech is es ahora habilitado por...VCC: Latest build, v2.1.30522.0: Automatic drop of latest buildVDialer Add-In for Outlook 2007 & 2010 - Dial your Vonage phone from Outlook: VDialer Add-In 1.0.3: This release adds new features related to Journal and use of Vonage API Changes in version 1.0.3 Added configurable option to automatically open J...WatchersNET.SkinObjects.ModulActionsMenu: ModulActionsMenu 01.00.00: First Release For Informations How To Install, the Skin Object Read the DocumentationMost Popular ProjectsCodeComment.NETRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrpatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlightpatterns & practices: Windows Azure Security GuidanceCassiniDev - Cassini 3.5/4.0 Developers EditionGMap.NET - Great Maps for Windows Forms & PresentationNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSQL Server PowerShell ExtensionsBlogEngine.NETCodeReview

    Read the article

  • Fluent NHibernate Many to one mapping

    - by Jit
    I am new to the Hibernate world. It may be a silly question, but I am not able to solve it. I am testing a many-to-one relationship between tables and trying to insert a record. I have a Department table and an Employee table. Employee and Dept have a many-to-one relationship here. I am using Fluent NHibernate to add records. All the code is below. Please help. SQL Code create table Dept ( Id int primary key identity, DeptName varchar(20), DeptLocation varchar(20)) create table Employee ( Id int primary key identity, EmpName varchar(20), EmpAge int, DeptId int references Dept(Id)) Class Files public partial class Dept { public virtual System.String DeptLocation { get; set; } public virtual System.String DeptName { get; set; } public virtual System.Int32 Id { get; private set; } public virtual IList<Employee> Employees { get; set; } } public partial class Employee { public virtual System.Int32 DeptId { get; set; } public virtual System.Int32 EmpAge { get; set; } public virtual System.String EmpName { get; set; } public virtual System.Int32 Id { get; private set; } public virtual Project.Model.Dept Dept { get; set; } } Mapping Files public class DeptMapping : ClassMap<Dept> { public DeptMapping() { Id(x => x.Id); Map(x => x.DeptName); Map(x => x.DeptLocation); HasMany(x => x.Employees) .Inverse() .Cascade.All(); } } public class EmployeeMapping : ClassMap<Employee> { public EmployeeMapping() { Id(x => x.Id); Map(x => x.EmpName); Map(x => x.EmpAge); Map(x => x.DeptId); References(x => x.Dept) .Cascade.None(); } } My Code to add try { Dept dept = new Dept(); dept.DeptLocation = "Austin"; dept.DeptName = "Store"; Employee emp = new Employee(); emp.EmpName = "Ron"; emp.EmpAge = 30; IList<Employee> empList = new List<Employee>(); empList.Add(emp); dept.Employees = empList; emp.Dept = dept; IRepository<Dept> rDept = new Repository<Dept>(); rDept.SaveOrUpdate(dept); } catch (Exception ex) { Console.WriteLine(ex.Message); } Here I am getting the error InnerException = {"Invalid column name 'Dept_id'."} Message = "could not insert: [Project.Model.Employee][SQL: INSERT INTO [Employee] (EmpName, EmpAge, DeptId, Dept_id) VALUES (?, ?, ?, ?); select SCOPE_IDENTITY()]"
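A likely cause of the "Invalid column name 'Dept_id'" error, based on the mapping above: References(x => x.Dept) defaults to a foreign-key column named Dept_id, which does not exist in the Employee table, and the mapping also writes the DeptId property as a plain column, so the INSERT ends up listing both DeptId and Dept_id. A minimal sketch of a corrected mapping follows - it is not from the original question, and it assumes the schema shown above; the changes are pointing the reference at the existing DeptId column and dropping the separate DeptId property mapping so the column is only written once.

public class EmployeeMapping : ClassMap<Employee>
{
    public EmployeeMapping()
    {
        Id(x => x.Id);
        Map(x => x.EmpName);
        Map(x => x.EmpAge);
        // Map the association to the real FK column instead of the default "Dept_id",
        // and do not map DeptId as a separate property, so it is written only once.
        References(x => x.Dept)
            .Column("DeptId")
            .Cascade.None();
    }
}

With that change, saving the Dept (with its Employees collection cascading and each Employee's Dept reference set) should generate an INSERT that uses only the DeptId column.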

    Read the article

  • Adding an Admin user to an ASP.NET MVC 4 application using a single drop-in file

    - by Jon Galloway
    I'm working on an ASP.NET MVC 4 tutorial and wanted to set it up so just dropping a file in App_Start would create a user named "Owner" and assign them to the "Administrator" role (more explanation at the end if you're interested). There are reasons why this wouldn't fit into most application scenarios: It's not efficient, as it checks for (and creates, if necessary) the user every time the app starts up The username, password, and role name are hardcoded in the app (although they could be pulled from config) Automatically creating an administrative account in code (without user interaction) could lead to obvious security issues if the user isn't informed However, with some modifications it might be more broadly useful - e.g. creating a test user with limited privileges, ensuring a required account isn't accidentally deleted, or - as in my case - setting up an account for demonstration or tutorial purposes. Challenge #1: Running on startup without requiring the user to install or configure anything I wanted to see if this could be done just by having the user drop a file into the App_Start folder and go. No copying code into Global.asax.cs, no installing additional NuGet packages, etc. That may not be the best approach - perhaps a NuGet package with a dependency on WebActivator would be better - but I wanted to see if this was possible and see if it offered the best experience. Fortunately ASP.NET 4 and later provide a PreApplicationStartMethod attribute which allows you to register a method which will run when the application starts up. You drop this attribute in your application and give it two parameters: a method name and the type that contains it. I created a static class named PreApplicationTasks with a static method named Initializer, then dropped this attribute in it: [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] That's it. One small gotcha: the namespace can be a problem with assembly attributes. I decided my class didn't need a namespace. Challenge #2: Only one PreApplicationStartMethod per assembly In .NET 4, the PreApplicationStartMethod is marked as AllowMultiple=false, so you can only have one PreApplicationStartMethod per assembly. This was fixed in .NET 4.5, as noted by Jon Skeet, so you can have as many PreApplicationStartMethods as you want (allowing you to keep your users waiting for the application to start indefinitely!). The WebActivator NuGet package solves the multiple instance problem if you're in .NET 4 - it registers as a PreApplicationStartMethod, then calls any methods you've indicated using [assembly: WebActivator.PreApplicationStartMethod(type, method)]. David Ebbo blogged about that here: Light up your NuGets with startup code and WebActivator. In my scenario (bootstrapping a beginner level tutorial) I decided not to worry about this and stick with PreApplicationStartMethod. Challenge #3: PreApplicationStartMethod kicks in before configuration has been read This is by design, as Phil explains. It allows you to make changes that need to happen very early in the pipeline, well before Application_Start. That's fine in some cases, but it caused me problems when trying to add users, since the Membership Provider configuration hadn't yet been read - I got an exception stating that "Default Membership Provider could not be found." The solution here is to run code that requires configuration in a PostApplicationStart method. But how to do that?
Challenge #4: Getting PostApplicationStartMethod without requiring WebActivator The WebActivator NuGet package, among other things, provides a PostApplicationStartMethod attribute. That's generally how I'd recommend running code that needs to happen after Application_Start: [assembly: WebActivator.PostApplicationStartMethod(typeof(TestLibrary.MyStartupCode), "CallMeAfterAppStart")] This works well, but I wanted to see if this would be possible without WebActivator. Hmm. Well, wait a minute - WebActivator works in .NET 4, so clearly it's registering and calling PostApplicationStartup tasks somehow. Off to the source code! Sure enough, there's even a handy comment in ActivationManager.cs which shows where PostApplicationStartup tasks are being registered: public static void Run() { if (!_hasInited) { RunPreStartMethods(); // Register our module to handle any Post Start methods. But outside of ASP.NET, just run them now if (HostingEnvironment.IsHosted) { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(typeof(StartMethodCallingModule)); } else { RunPostStartMethods(); } _hasInited = true; } } Excellent. Hey, that DynamicModuleUtility seems familiar... Sure enough, K. Scott Allen mentioned it on his blog last year. This is really slick - a PreApplicationStartMethod can register a new HttpModule in code. Modules are run right after application startup, so that's a perfect time to do any startup stuff that requires configuration to be read. As K. Scott says, it's this easy: using System; using System.Web; using Microsoft.Web.Infrastructure.DynamicModuleHelper; [assembly:PreApplicationStartMethod(typeof(MyAppStart), "Start")] public class CoolModule : IHttpModule { // implementation not important // imagine something cool here } public static class MyAppStart { public static void Start() { DynamicModuleUtility.RegisterModule(typeof(CoolModule)); } } Challenge #5: Cooperating with SimpleMembership The ASP.NET MVC Internet template includes SimpleMembership. SimpleMembership is a big improvement over traditional ASP.NET Membership. For one thing, rather than forcing a database schema, it can work with your database schema. In the MVC 4 Internet template case, it uses Entity Framework Code First to define the user model. SimpleMembership bootstrap includes a call to InitializeDatabaseConnection, and I want to play nice with that. There's a new [InitializeSimpleMembership] attribute on the AccountController, which calls \Filters\InitializeSimpleMembershipAttribute.cs::OnActionExecuting(). That comment in that method that says "Ensure ASP.NET Simple Membership is initialized only once per app start" which sounds like good advice. I figured the best thing would be to call that directly: new Mvc4SampleApplication.Filters.InitializeSimpleMembershipAttribute().OnActionExecuting(null); I'm not 100% happy with this - in fact, it's my least favorite part of this solution. There are two problems - first, directly calling a method on a filter, while legal, seems odd. Worse, though, the Filter lives in the application's namespace, which means that this code no longer works well as a generic drop-in. The simplest workaround would be to duplicate the relevant SimpleMembership initialization code into my startup code, but I'd rather not. I'm interested in your suggestions here. 
Challenge #6: Module Init methods are called more than once When debugging, I noticed (and remembered) that the Init method may be called more than once per page request - it's run once per instance in the app pool, and an individual page request can cause multiple resource requests to the server. While SimpleMembership does have internal checks to prevent duplicate user or role entries, I'd rather not cause or handle those exceptions. So here's the standard single-use lock in the Module's init method: void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { //Do stuff } initialized = true; } } Putting it all together With all of that out of the way, here's the code I came up with: using Mvc4SampleApplication.Filters; using System.Web; using System.Web.Security; using WebMatrix.WebData; [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] public static class PreApplicationTasks { public static void Initializer() { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility .RegisterModule(typeof(UserInitializationModule)); } } public class UserInitializationModule : IHttpModule { private static bool initialized; private static object lockObject = new object(); private const string _username = "Owner"; private const string _password = "p@ssword123"; private const string _role = "Administrator"; void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { new InitializeSimpleMembershipAttribute().OnActionExecuting(null); if (!WebSecurity.UserExists(_username)) WebSecurity.CreateUserAndAccount(_username, _password); if (!Roles.RoleExists(_role)) Roles.CreateRole(_role); if (!Roles.IsUserInRole(_username, _role)) Roles.AddUserToRole(_username, _role); } initialized = true; } } void IHttpModule.Dispose() { } } The Verdict: Is this a good thing? Maybe. I think you'll agree that the journey was undoubtedly worthwhile, as it took us through some of the finer points of hooking into application startup, integrating with membership, and understanding why the WebActivator NuGet package is so useful Will I use this in the tutorial? I'm leaning towards no - I think a NuGet package with a dependency on WebActivator might work better: It's a little more clear what's going on Installing a NuGet package might be a little less error prone than copying a file A novice user could uninstall the package when complete It's a good introduction to NuGet, which is a good thing for beginners to see This code either requires either duplicating a little code from that filter or modifying the file to use the namespace Honestly I'm undecided at this point, but I'm glad that I can weigh the options. If you're interested: Why are you doing this? I'm updating the MVC Music Store tutorial to ASP.NET MVC 4, taking advantage of a lot of new ASP.NET MVC 4 features and trying to simplify areas that are giving people trouble. One change that addresses both needs us using the new OAuth support for membership as much as possible - it's a great new feature from an application perspective, and we get a fair amount of beginners struggling with setting up membership on a variety of database and development setups, which is a distraction from the focus of the tutorial - learning ASP.NET MVC. Side note: Thanks to some great help from Rick Anderson, we had a draft of the tutorial that was looking pretty good earlier this summer, but there were enough changes in ASP.NET MVC 4 all the way up to RTM that there's still some work to be done. 
It's high priority and should be out very soon. The one issue I ran into with OAuth is that we still need an Administrative user who can edit the store's inventory. I thought about a number of solutions for that - making the first user to register the admin, or assigning whoever first registers with the username "Administrator" to the Administrator role - but they both ended up requiring extra code; also, I worried that people would use that code without understanding it or thinking about whether it was a good fit.

    Read the article

  • So&hellip; What is a SharePoint Developer?

    - by Mark Rackley
    A few days ago Stacy Draper and I were chatting about what it means to be a SharePoint Developer. That actually turns about to be a conversation with lots of shades of grey. Stacy thought it would make a good blog post… well, I can’t promise this to be a GOOD blog post… So, anyway, I decided to let off a little bomb this morning by posting the following tweet on Twitter: @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? Now, I knew this is a debate that has been going on since the first SharePoint Designer User put SharePoint Developer on their resume. There are probably several blogs out there on the subject, but with the wildfire that is jQuery and a few other new features out there I believe it is an important subject to tackle again. I got a lot of great feedback as well on Twitter. The entire twitter conversation is at the end of this blog posting. Thanks everyone for their opinions. Who cares? Why does it matter? Can’t we all just get along? Yes it matters… everything must be labeled and put in it’s proper place. Pigeon holing is the only way to go!  Just kidding.. I’m not near that anal, but yes! It is important to be able to properly identify the skill set of those people on your team and correctly identify the role you are wanting to hire. Saying you are a “SharePoint Developer” is just too vague and just barely begins to answer the question. Also, knowing who’s on your team and what they can do will ensure you give your clients the best people for the job. A Developer writes code right? So, a Developer uses Visual Studio! Whoa, hold on there Sparky. Even if I concede that to be a developer you have to write code then you still can’t say a SharePoint Developer has to use Visual Studio.  So, you can spell C#, how well can you write XSLT? How’s your jQuery? Sorry bud, that’s code whether you like it or not. There are many ways to write code in SharePoint that have nothing to do with cracking open Visual Studio. So, what are the different ways to develop in SharePoint then? How many different ways can you “develop” in SharePoint?? A lot… Out of the box features In SharePoint you can create a site, create a custom list on that site, do basic calculations in a calculated column, set up alerts, and add all sorts of web parts to a page. Let’s face it.. that IS development! javaScript/jQuery Perhaps you’ve heard by now about this thing called jQuery? It’s all over the place and the answer to a lot of people’s prayers. However be careful, with great power comes great responsibility. Remember, javaScript is executed on the client side and if you abuse it your performance could be affected. Also, Marc Anderson (@sympmarc) wrote a pretty awesome javaScript library called SPServices.  This allows you to access SharePoint’s Web Services using jQuery. How freakin cool is that? With these tools at your disposal the number of things you CAN’T do without Visual Studio grows smaller and smaller. This is definitely development no matter what anyone else says and there is no Visual Studio involved. SharePoint Designer Ahhh.. The cause of and the answer to all of your SharePoint development problems. With SharePoint Designer you can use DataView Web Parts, develop (there’s that word again) your branding, and even connect to external datasources.  There’s a lot you can do in SharePoint Designer. It’s got it’s shortcomings, but it is an invaluable tool in the SharePoint developers toolbox. 
InfoPath So, can InfoPath development really be considered SharePoint development? I would say yes. You can connect to SharePoint lists, populate fields in a SharePoint list, and even write code in InfoPath. Sounds like SharePoint development to me. Visual Studio – Web Services/WCF So, get this. You can write code for SharePoint and not have a clue what the 12 hive is, what “site actions” means, or know how to do ANYTHING in SharePoint? Poppycock! You say? SharePoint Web Services I say… With SharePoint Web Services you can totally interact with SharePoint without knowing anything about SharePoint. I don’t recommend it of course, but it’s possible. What can you write using SharePoint Web Services? How about a little application called SharePoint Designer? Visual Studio – Object Model And here we are finally:  the SharePoint Object Model.  When you hear “SharePoint Developer” most people think of someone opening Visual Studio and creating a custom web part, workflow, event receiver, etc.. etc.. but I hope that by now I have made the point that this is NOT the only form of SharePoint Development! Again… Who cares? Just crack open Visual Studio for everything! Problem solved! Let’s ponder for a moment, shall we? The business comes to you with a requirement that involves some pretty fancy business calculations, and a complicated view that they do NOT want to look like SharePoint. “No Problem” you proclaim you mighty SharePoint Developer. You go back to your cube, chuckle at the latest Dilbert comic, and crack open Visual Studio. Then you build your custom web part… fight with all the deployment, migration, and UAT that you must go through and proclaim victory two weeks later!!!! Well done my good sir/ma’am! Oh wait… it turns out Sally who is not a “developer” did the exact same thing with a Dataview web part and some jQuery and it’s been in production for two weeks? #CockinessFail I know there are many ASP.NET developers out there that can create a custom control and wrap it to be a SharePoint Web Part.  That does NOT mean they are SharePoint Developers though as far as I’m concerned and I personally would much rather have someone on my team that can manipulate the heck (yes, I said ‘heck’) out of SharePoint using Dataview Web Parts, jQuery, and a roll of duct tape. Just because you know how to write code in Visual Studio does not mean you are a SharePoint Developer. What’s the conclusion here? How do we define ‘it’ and what ‘it’ is called? Fortunately, this is MY blog. I don’t have to give answers, I can stir the pot, laugh and leave you to ponder what it means! There is obviously no right or wrong answer here (unless you disagree with me,then you are flat out wrong). Anyway, there are many opinions.  Here’s mine.  If you put SharePoint Developer on your resume make sure to clearly specify HOW you develop in SharePoint and what tools you use. If we must label these gurus of jQuery and SPD, how about “SharePoint Client Developer” or “SharePoint Front End Developer”? Just throwing out an idea. Whatever we call them, to say they are not developers is short-sighted, arrogant, and unfair. Of course, then we need to figure out what to call all those other SharePoint development types.  Twitter Conversation @next_connect: RT @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? | I say no.... @mikegil:  @mrackley re: yr Developer question: SPD expert <> SP Developer. Can be "sous-developer," though. 
@WonderLaura: Rt @mrackley Can someone be considered a SharePoint Dev if all they know how to do is work in SPD? -- My opinion is that devs write code.
@exnav29: Rt @mrackley Can someone be considered a SharePoint Dev if all they know how to do is work in SPD? => I think devs would use VS as well
@ssKevin: @WonderLaura @mrackley does that mean strictly vb and c# when it comes to #SharePoint ?
@jimmywim: @exnav29 @mrackley nah, I'd say they were a power user. Devs know their way around the 12 hive ;)
@sympmarc: RT @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? -> Fighting words.
@sympmarc: @next_connect @mrackley Besides, we prefer to be called "hacks". ;+)
@next_connect: @sympmarc The important thing is that you don't have to develop code to solve problems and create solutions. @mrackley
@mrackley: @sympmarc @next_connect not tryin to pick fight.. just try and find consensus on definition
@usher: @mrackley I'd still argue that you have a DevLite title that's out there for the collaboration engineers (@sympmarc @next_connect)
@next_connect: @usher I agree. I've called it Light Dev/ Configuration before. @sympmarc @mrackley
@usher: @next_connect I like DevLite, low calorie but still same great taste :) @mrackley @sympmarc
@mrackley: @next_connect @usher @sympmarc I don't think there's any "lite" to someone who can bend jQuery and XSLT to their will.
@usher: @mrackley okay, so would you refer to someone that writes user controls and assemblies something different (@next_connect @sympmarc)
@usher: @mrackley when looking for a developer that can write .net code, it's a bit different than an XSLT/jQuery designer. @sympmarc @next_connect
@jimmywim: @mrackley @sympmarc @next_connect I reckon a "dev" does managed code and works in the 12 hive
@sympmarc: @jimmywim @mrackley @next_connect We had a similar debate a few days ago @toddbleeker et al
@sympmarc: @sympmarc @jimmywim @mrackley @next_connect @toddbleeker @stevenmfowler More abt my Middle Tier term, but still connected. Meet bus need.
@toddbleeker: @sympmarc @jimmywim @mrackley @next_connect I used "No Assembly Required" in the past. I also suggested "Supplimenting the SharePoint DOM"
@toddbleeker: @sympmarc @jimmywim @mrackley @next_connect Others suggested Information Worker Solutions/Enhancements
@toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler I also like "SharePoint Scripting Solutions". All the technologies are script.
@jimmywim: @toddbleeker @sympmarc @mrackley @next_connect I like the IW solutions one...
@toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler This is like the debate that never ends: it is definitely not called Middle Tier.
@jimmywim: @toddbleeker @sympmarc @mrackley @next_connect @stevenmfowler "Scripting" these days makes me think PowerShell...
@sympmarc: @toddbleeker @jimmywim @mrackley @next_connect @stevenmfowler If it forces a debate on h2 best solve bus probs, I'll keep sayin Middle Tier.
@usher: @sympmarc so we know what we're looking for, we just can't define a name? @toddbleeker @jimmywim @mrackley @next_connect @stevemfowler
@sympmarc: @usher @sympmarc @toddbleeker @jimmywim @mrackley @next_connect @stevemfowler The naming seems to matter more than the substance. :-(
@jimmywim: @sympmarc @usher @toddbleeker @mrackley @next_connect @stevemfowler work brkdn defines tasks, defines tools needed, can then b grp'd by user
@WonderLaura: @mrackley @toddbleeker @jimmywim @sympmarc @usher @next_connect Funny you're asking. @johnrossjr and I spent hours this week on the subject.
@stevenmfowler: RT @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler it is definitely not called Middle Tier. < I'm with Todd

    Read the article

  • How to tell if SPARC T4 crypto is being used?

    - by danx
A question that often comes up when running applications on SPARC T4 systems is "How can I tell if hardware crypto acceleration is being used?" To review, the SPARC T4 processor includes a crypto unit that supports several crypto instructions. For hardware crypto these include 11 AES instructions, 4 xmul* instructions (for AES GCM carryless multiply), mont for Montgomery multiply (which optimizes RSA and DSA), and 5 des_* instructions (for DES3). For hardware hash algorithm optimization, the T4 has the md5, sha1, sha256, and sha512 instructions (the last two are also used for SHA-224 and SHA-384).

First off, it's easy to tell whether the processor supports the T4 crypto instructions: use the isainfo -v command and look for "sparcv9" and "aes" (and the other hash and crypto algorithms) in the output:

$ isainfo -v
64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc

These instructions are non-privileged, so they are available for direct use in user-level applications and libraries (such as OpenSSL). Here is the "openssl speed -evp" command shown with the built-in t4 engine and with the pkcs11 engine. Both run the T4 AES instructions, but the t4 engine is faster than the pkcs11 engine because it has less overhead (especially for smaller packet sizes):

t-4 $ /usr/bin/openssl version
OpenSSL 1.0.0j 10 May 2012
t-4 $ /usr/bin/openssl engine
(t4) SPARC T4 engine support
(dynamic) Dynamic engine loading support
(pkcs11) PKCS #11 engine support
t-4 $ /usr/bin/openssl speed -evp aes-128-cbc    # t4 engine used by default
. . .
The 'numbers' are in 1000s of bytes per second processed.
type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
aes-128-cbc   487777.10k   816822.21k   986012.59k   1017029.97k  1053543.08k
t-4 $ /usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc
engine "pkcs11" set.
. . .
The 'numbers' are in 1000s of bytes per second processed.
type          16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
aes-128-cbc   31703.58k    116636.39k   350672.81k   696170.50k   993599.49k

Note: The "-evp" flag indicates use of the OpenSSL "EnVeloPe" API, which gives more accurate results. That's because it tells OpenSSL to use the same API that external programs use when calling OpenSSL libcrypto functions, evp(3openssl).

DTrace Shows if T4 Crypto Functions Are Used
OK, good enough: the isainfo(1) command shows the instructions are present, but how does one know if they are actually being used? Chi-Chang Lin, who works on Oracle Solaris performance, wrote a DTrace script to show whether the T4 instructions are being executed. To check, run the following DTrace script and look for functions named "t4" and "yf" in the output. The OpenSSL t4 engine uses functions named "t4" and the PKCS#11 engine uses functions named "yf". To demonstrate, I'll first run "openssl speed" with the built-in t4 engine and then with the pkcs11 engine. The performance numbers are not valid due to the DTrace probes slowing things down.

t-4 # dtrace -Z -n 'pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count(); }' \
      -c "/usr/bin/openssl speed -evp aes-128-cbc"
dtrace: description 'pid$target::*yf*:entry' matched 101 probes
. . . 
dtrace: pid 2029 has exited
  libcrypto.so.1.0.0    ENGINE_load_t4                            1
  libcrypto.so.1.0.0    t4_DH                                     1
  libcrypto.so.1.0.0    t4_DSA                                    1
  libcrypto.so.1.0.0    t4_RSA                                    1
  libcrypto.so.1.0.0    t4_destroy                                1
  libcrypto.so.1.0.0    t4_free_aes_ctr_NIDs                      1
  libcrypto.so.1.0.0    t4_init                                   1
  libcrypto.so.1.0.0    t4_add_NID                                3
  libcrypto.so.1.0.0    t4_aes_expand128                          5
  libcrypto.so.1.0.0    t4_cipher_init_aes                        5
  libcrypto.so.1.0.0    t4_get_all_ciphers                        6
  libcrypto.so.1.0.0    t4_get_all_digests                       59
  libcrypto.so.1.0.0    t4_digest_final_sha1                     65
  libcrypto.so.1.0.0    t4_digest_init_sha1                      65
  libcrypto.so.1.0.0    t4_sha1_multiblock                      126
  libcrypto.so.1.0.0    t4_digest_update_sha1                   261
  libcrypto.so.1.0.0    t4_aes128_cbc_encrypt               1432979
  libcrypto.so.1.0.0    t4_aes128_load_keys_for_encrypt     1432979
  libcrypto.so.1.0.0    t4_cipher_do_aes_128_cbc            1432979

t-4 # dtrace -Z -n 'pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count(); }' \
      -c "/usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc"
dtrace: description 'pid$target::*yf*:entry' matched 101 probes
engine "pkcs11" set.
. . .
dtrace: pid 2033 has exited
  libcrypto.so.1.0.0    ENGINE_load_t4                            1
  libcrypto.so.1.0.0    t4_DH                                     1
  libcrypto.so.1.0.0    t4_DSA                                    1
  libcrypto.so.1.0.0    t4_RSA                                    1
  libcrypto.so.1.0.0    t4_destroy                                1
  libcrypto.so.1.0.0    t4_free_aes_ctr_NIDs                      1
  libcrypto.so.1.0.0    t4_get_all_ciphers                        1
  libcrypto.so.1.0.0    t4_get_all_digests                        1
  libsoftcrypto.so.1    rijndael_key_setup_enc_yf                 1
  libsoftcrypto.so.1    yf_aes_expand128                          1
  libcrypto.so.1.0.0    t4_add_NID                                3
  libsoftcrypto.so.1    yf_aes128_cbc_encrypt               1542330
  libsoftcrypto.so.1    yf_aes128_load_keys_for_encrypt     1542330

So, as shown above, the OpenSSL built-in t4 engine executes t4_* functions (hand-coded assembly that executes the T4 AES instructions) and the OpenSSL pkcs11 engine executes *yf* functions.

Programmatic Use of the OpenSSL T4 Engine
The OpenSSL t4 engine is used automatically with the /usr/bin/openssl command line. Chi-Chang Lin also points out that if you're calling the OpenSSL API (libcrypto.so) from a program, you must call ENGINE_load_builtin_engines(), otherwise the built-in t4 engine will not be loaded. You do not call ENGINE_set_default(). That's because the "openssl speed -evp" test calls ENGINE_load_builtin_engines(), even though the "-engine" option wasn't specified. (A short C sketch of this appears at the end of this article.)

OpenSSL T4 Engine Availability
The OpenSSL t4 engine is available with Solaris 11 and 11.1. For Solaris 10 08/11 (U10), you need to use the OpenSSL pkcs11 engine. The OpenSSL t4 engine is distributed only with the version of OpenSSL that ships with Solaris (not with third-party or self-compiled versions of OpenSSL). For Solaris 11, released 11/2011, the OpenSSL t4 engine implements the AES cipher. For Solaris 11.1, released 11/2012, the engine adds optimization for the MD5, SHA-1, and SHA-2 hash algorithms, and for DES-3. Although the T4 processor has Camellia and Kasumi block cipher instructions, these are not implemented in the OpenSSL t4 engine. The following charts may help view the availability of optimizations. The first chart shows what's available with Solaris CLIs and APIs; the second chart shows what's available in Solaris OpenSSL.

Native Solaris Optimization for SPARC T4
This table shows Solaris native CLI and API support. As such, these algorithms are all available with the OpenSSL pkcs11 engine. 
CLIs: "openssl -engine pkcs11", encrypt(1), decrypt(1), mac(1), digest(1), md5sum(1), sha1sum(1), sha224sum(1), sha256sum(1), sha384sum(1), sha512sum(1)
APIs: PKCS#11 library libpkcs11(3LIB) (includes the OpenSSL pkcs11 engine), libmd(3LIB), and Solaris kernel modules

Algorithm                                                   Solaris 10 08/11 (U10)   Solaris 11   Solaris 11.1
AES-ECB, AES-CBC, AES-CTR, AES-CFB128                       X                        X            X
DES3-ECB, DES3-CBC, DES2-ECB, DES2-CBC, DES-ECB, DES-CBC    X                        X            X
bignum Montgomery multiply (RSA, DSA)                       X                        X            X
MD5, SHA-1, SHA-256, SHA-384, SHA-512                       X                        X            X
SHA-224                                                                                           X
ARCFOUR (RC4)                                                                                     X

Solaris OpenSSL T4 Engine Optimization
This table is for the Solaris OpenSSL built-in t4 engine. The algorithms listed above are also available through the OpenSSL pkcs11 engine.
CLI: openssl(1openssl)
APIs: openssl(5), engine(3openssl), evp(3openssl), libcrypto crypto(3openssl)

Algorithm                                                   Solaris 11   Solaris 11 SRU2   Solaris 11.1
AES-ECB, AES-CBC, AES-CTR, AES-CFB128                       X            X                 X
DES3-ECB, DES3-CBC, DES-ECB, DES-CBC                                                       X
bignum Montgomery multiply (RSA, DSA)                                                      X
MD5, SHA-1, SHA-256, SHA-384, SHA-512                                    X                 X
SHA-224                                                                                    X

Source Code Availability
Solaris: Most of the T4 assembly code that calls the new T4 crypto instructions was written by Ferenc Rákóczi of the Solaris Security group, with assistance from others. You can download the Solaris source for this and other parts of Solaris as a few zip files from the Oracle Download website. The relevant source files are generally under the directories usr/src/common/crypto/{aes,arcfour,des,md5,modes,sha1,sha2}/sun4v/ and usr/src/common/bignum/sun4v/. The Solaris 11 binary is available from the Oracle Solaris 11 download website.
OpenSSL t4 engine: The source for the OpenSSL t4 engine, which is based on the Solaris source above, is viewable through the OpenGrok source code browser in the directory src/components/openssl/openssl-1.0.0/engines/t4. You can download the source from the same website or through Mercurial source code management, hg(1).

Conclusion
Oracle Solaris with SPARC T4 provides a rich set of accelerated cryptographic and hash algorithms. Using the latest update, Solaris 11.1, provides the best set of optimized algorithms, but alternatives are often available, sometimes slightly slower, for releases back to Solaris 10 08/11 (U10).

Reference
See also these earlier blogs.
SPARC T4 OpenSSL Engine by myself, Dan Anderson (2011), discusses the OpenSSL t4 engine and reviews the SPARC T4 processor for the Solaris 11 release.
Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 by Valerie Fenwick (2011) discusses crypto algorithms that were optimized for the T4 processor with the Solaris 11 FCS (11/11) and Solaris 10 08/11 (U10) releases.
T4 Crypto Cheat Sheet by Stefan Hinker (2012) discusses how to make T4 crypto optimization available to various consumers (such as SSH, Java, OpenSSL, Apache, etc.).
High Performance Security For Oracle Database and Fusion Middleware Applications using SPARC T4 (PDF, 2012) discusses SPARC T4 and its usage to optimize application security.
Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers by Meena Vyas (2012)
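To make the programmatic-use note above concrete, here is a minimal C sketch, not from the original post, of encrypting with the libcrypto EVP API after loading the built-in engines. It uses only standard OpenSSL 1.0.x calls (ENGINE_load_builtin_engines(), OpenSSL_add_all_algorithms(), and the EVP_Encrypt* family). The file name, compile line, demo key, IV, and plaintext are illustrative placeholders, and the build assumes the OpenSSL headers and library shipped with Solaris 11. Whether the t4 engine actually handles the cipher on a given machine is exactly what the DTrace script above would show.

/*
 * Sketch only: AES-128-CBC via the OpenSSL 1.0.x EVP API with the
 * built-in engines loaded.  Illustrative compile line (Solaris 11):
 *   cc t4sketch.c -lcrypto -o t4sketch
 */
#include <stdio.h>
#include <openssl/engine.h>
#include <openssl/evp.h>

int
main(void)
{
    unsigned char key[16] = "0123456789abcde";   /* demo key, not secret */
    unsigned char iv[16]  = "fedcba987654321";   /* demo IV */
    unsigned char in[32]  = "the quick brown fox jumps over.";
    unsigned char out[48];                       /* room for CBC padding */
    int outl = 0, finl = 0;
    EVP_CIPHER_CTX ctx;

    /*
     * Per the post above: load the built-in engines (including the
     * Solaris t4 engine on SPARC T4); ENGINE_set_default() is not called.
     */
    ENGINE_load_builtin_engines();
    OpenSSL_add_all_algorithms();

    EVP_CIPHER_CTX_init(&ctx);
    /* A NULL ENGINE argument lets libcrypto pick the implementation. */
    if (!EVP_EncryptInit_ex(&ctx, EVP_aes_128_cbc(), NULL, key, iv) ||
        !EVP_EncryptUpdate(&ctx, out, &outl, in, sizeof (in)) ||
        !EVP_EncryptFinal_ex(&ctx, out + outl, &finl)) {
        (void) fprintf(stderr, "encryption failed\n");
        return (1);
    }
    EVP_CIPHER_CTX_cleanup(&ctx);

    (void) printf("ciphertext bytes: %d\n", outl + finl);
    return (0);
}

As with the post's own "openssl speed" runs, you can confirm whether a program like this ends up in t4_aes128_cbc_encrypt or in a software path by running it under the DTrace one-liner shown earlier.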

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

To get started, we'll create a demo data set and load the plyr package.

set.seed(1)
d <- data.frame(year = rep(2000:2014, each = 3),
                count = round(runif(45, 0, 20)))
dim(d)
library(plyr)

This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

# Example 1
res <- ddply(d, "year", function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(cv.count = cv)
  })

To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

D <- ore.push(d)
res <- ore.groupApply (D, D$year, function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(year=x$year[1], cv.count = cv)
  }, FUN.VALUE=data.frame(year=1, cv.count=1))

You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that, by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when the data is large.

R> class(res)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
R> head(res)
   year  cv.count
1  2000 0.3984848
2  2001 0.6062178
3  2002 0.2309401
4  2003 0.5773503
5  2004 0.3069680
6  2005 0.3431743

To make ore.groupApply execute in parallel, you can specify the argument parallel as either TRUE, to use the default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
# Example 2
ddply(d, "year", summarise, mean.count = mean(count))
res <- ore.groupApply (D, D$year, function(x) {
  mean.count <- mean(x$count)
  data.frame(year=x$year[1], mean.count = mean.count)
  }, FUN.VALUE=data.frame(year=1, mean.count=1))
R> head(res)
   year mean.count
1  2000   7.666667
2  2001  13.333333
3  2002  15.000000
4  2003   3.000000
5  2004  12.333333
6  2005  14.666667

Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

# Example 3
ddply(d, "year", transform, total.count = sum(count))
res <- ore.groupApply (D, D$year, function(x) {
  total.count <- sum(x$count)
  data.frame(year=x$year[1], count=x$count, total.count = total.count)
  }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))
> head(res)
   year count total.count
1  2000     5          23
2  2000     7          23
3  2000    11          23
4  2001    18          40
5  2001     4          40
6  2001    18          40

In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

# Example 4
ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
      cv = sigma/mu)
res <- ore.groupApply (D, D$year, function(x) {
  mu <- mean(x$count)
  sigma <- sd(x$count)
  cv <- sigma/mu
  data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
  }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))
R> head(res)
   year count        mu    sigma        cv
1  2000     5  7.666667 3.055050 0.3984848
2  2000     7  7.666667 3.055050 0.3984848
3  2000    11  7.666667 3.055050 0.3984848
4  2001    18 13.333333 8.082904 0.6062178
5  2001     4 13.333333 8.082904 0.6062178
6  2001    18 13.333333 8.082904 0.6062178

In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

# Example 5
baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
x <- ddply(baseball.dat, c("year", "team"), summarize,
           homeruns = sum(hr))

We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we explicitly sort these results and view the first 6 rows.

BB.DAT <- ore.push(baseball)
BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
BB.DAT2 <- subset(BB.DAT, year > 2000)
X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {
  data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
  }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)
res <- ore.sort(X, by=c("year","team"))
R> head(res)
   year team homeruns
1  2001  ANA        4
2  2001  ARI      155
3  2001  ATL       63
4  2001  BAL       58
5  2001  BOS       77
6  2001  CHA       63

Our next example is derived from the ggplot function documentation. It illustrates the use of ddply with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph will be returned to the client window, as depicted below. 
# Example 6 with ggplot2
library(ggplot2)
df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                 y = rnorm(30))
# Compute sample mean and standard deviation in each group
library(plyr)
ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
# Set up a skeleton ggplot object and add layers:
ggplot() +
  geom_point(data = df, aes(x = gp, y = y)) +
  geom_point(data = ds, aes(x = gp, y = mean),
             colour = 'red', size = 3) +
  geom_errorbar(data = ds, aes(x = gp, y = mean,
                               ymin = mean - sd, ymax = mean + sd),
                colour = 'red', width = 0.4)

DF <- ore.push(df)
ore.tableApply(DF, function(df) {
  library(ggplot2)
  library(plyr)
  ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
  ggplot() +
    geom_point(data = df, aes(x = gp, y = y)) +
    geom_point(data = ds, aes(x = gp, y = mean),
               colour = 'red', size = 3) +
    geom_errorbar(data = ds, aes(x = gp, y = mean,
                                 ymin = mean - sd, ymax = mean + sd),
                  colour = 'red', width = 0.4)
})

But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the corresponding list element.

df2 <- rbind(df, df, df)
df2$y <- df2$y + rnorm(nrow(df2))
df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))
DF2 <- ore.push(df2)
res <- ore.groupApply(DF2, DF2$index, function(df) {
  df <- df[, 1:2]
  library(ggplot2)
  library(plyr)
  ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
  ggplot() +
    geom_point(data = df, aes(x = gp, y = y)) +
    geom_point(data = ds, aes(x = gp, y = mean),
               colour = 'red', size = 3) +
    geom_errorbar(data = ds, aes(x = gp, y = mean,
                                 ymin = mean - sd, ymax = mean + sd),
                  colour = 'red', width = 0.4)
  }, parallel=TRUE)
res[[1]]
res[[2]]
res[[3]]

To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply itself can be used within an ore.groupApply call.

    Read the article

< Previous Page | 9 10 11 12 13 14  | Next Page >