Search Results

Search found 2757 results on 111 pages for 'webkit transform'.


  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
    Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team

    My wedding photos are stored in several empty shoeboxes. Yes...I got married before digital photography was mainstream...which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector...and I hope you never do!). Neither I nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world...with no physical versions in sight (other than a few framed on our walls).

    Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork be available for any number of years. It's a real challenge...especially trying to find archived documents quickly, often with no backup.

    Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes. Oracle's WebCenter Content Imaging solution is a combination of the WebCenter suite, which offers a robust set of content and document management features, and the Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your 'shoeboxes' of files into digitally managed assets that comply with strict industry regulations.

    The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite.

    WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution.

    SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices.

    WebCenter Enterprise Capture (WEC): expedites the capture process of paper documents to digital images, offering high-volume scanning and importing from email, and allows for flexible indexing options.

    WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention.

    WebCenter Content: the backend content server that provides versioning, security, and content storage.

    There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below:

    1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum metadata. The WEC activities are: the paper document is scanned (or imported from email); the scanned image is indexed using a predefined indexing profile; and the image is committed directly into the process flow.

    2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process works as follows: WEC commits the batch using the respective commit profile. A TIFF file is created, passing data through the file name by including values separated by "_" (underscores). WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created. All three files are exported to WCI's Input Agent directory. Based on previously defined "input masks", the WCI Input Agent picks up the seeding file (often the TXT file). Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application.

    Although these processes may seem complex, each Oracle component works seamlessly with the others to achieve a high-performing and scalable platform. The solution has been field tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries).

    About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-made for aspiring companies to create innovative internet-based applications and solutions. Whether you're a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.

    Drivers
    - Cost avoidance
    - Time to market
    - Scalability

    Solution
    Here's a sketch of how a basic Software as a Service solution might be built out:

    Ingredients
    - Web Role – this hosts the core web application. Each web role will host an instance of the software, and as the user base grows, additional roles can be spun up to meet demand.
    - Access Control – this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    - Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there's a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant's proprietary data to ensure isolation from other tenants.
    - Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input, and provides finer-grained control over the system's scalability and throughput (a minimal worker role sketch appears at the end of this entry).
    - Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability, without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.

    Training & Examples
    These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    - Developing Applications for the Cloud, 2nd Edition (eBook): This book demonstrates how you can create from scratch a multi-tenant Software as a Service (SaaS) application to run in the cloud, using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.
    - Fabrikam Shipping (SaaS reference application): This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.

    See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
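
    To make the Worker Role ingredient more concrete, here is a minimal sketch of a classic worker role entry point. It assumes the Microsoft.WindowsAzure.ServiceRuntime assembly from the Azure SDK; the class name, sleep interval, and processing helper are placeholders of mine, not part of the recipe above:

    using System;
    using System.Net;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class BackgroundWorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Raise the default outbound connection limit for calls to external services
            ServicePointManager.DefaultConnectionLimit = 12;
            return base.OnStart();
        }

        public override void Run()
        {
            // Loop forever, draining background work so the web roles can stay
            // focused on user interaction and data input
            while (true)
            {
                ProcessPendingWork(); // e.g. data aggregation or billing (placeholder)
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }

        private void ProcessPendingWork()
        {
            // Read work items from a queue or table and process them here
        }
    }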

    Read the article

  • jQuery CSS Property Monitoring Plug-in updated

    - by Rick Strahl
    A few weeks back I had talked about the need to watch properties of an object and be able to take action when certain values changed. The need for this arose out of wanting to build generic components that could 'attach' themselves to other objects. One example is a drop shadow - if I add a shadow behavior to an object I want the shadow to be pinned to that object, so when that object moves I also want the shadow to move with it, or when the panel is hidden the shadow should hide with it - automatically, without having to explicitly hook up monitoring code to the panel. For example, in my shadow plug-in I can now do something like this (where el is the element that has the shadow attached and sh is the shadow):

    if (!exists) // if shadow was created
        el.watch("left,top,width,height,display", function() {
            if (el.is(":visible"))
                $(this).shadow(opt); // redraw
            else
                sh.hide();
        }, 100, "_shadowMove");

    The code now monitors several properties, and if any of them change the provided function is called. So when the target object is moved, hidden, or resized, the watcher function is called and the shadow can be redrawn or hidden in the case of visibility going away. So if you run any of the following code:

    $("#box")
        .shadow()
        .draggable({ handle: ".blockheader" });

    // drag around the box - shadow should follow

    // hide the box - shadow should disappear with box
    setTimeout(function() { $("#box").hide(); }, 4000);

    // show the box - shadow should come back too
    setTimeout(function() { $("#box").show(); }, 8000);

    This can be very handy functionality when you're dealing with objects or operations that you need to track generically and there are no native events for them. For example, with a generic shadow object that attaches itself to any other element, there's no way that I know of to track whether the object has been moved or hidden, either via some UI operation (like dragging) or via code. While some UI operations like jQuery.ui.draggable would allow events to fire when the mouse is moved, nothing of the sort exists if you modify locations in code. Even tracking the object in drag mode is hardly generic behavior - a generic shadow implementation can't know when dragging is hooked up. So the watcher provides an alternative that basically gives an Observer-like pattern that notifies you when something you're interested in changes. In the watcher hookup code (in the shadow() plugin) above, a check is made: if the object is visible the shadow is redrawn, otherwise the shadow is hidden. The first parameter is a list of CSS properties to be monitored, followed by the function that is called. The function called receives this as the element that's been changed and receives two parameters: the array of watched objects with their current values, plus an index to the object that caused the change function to fire.

    How does it work?

    When I wrote about this last time I started out with a simple timer that would poll for changes at a fixed interval with setInterval(). A few folks commented that there are DOM events - DOMAttrModified in Mozilla and propertychange in IE - that allow notification whenever a property changes, which is much more efficient and smooth than the setInterval approach I used previously. On browsers that support these events (Firefox and IE, basically - WebKit has the DOMAttrModified event but it doesn't appear to work) the shadow effect is instant - no 'drag behind' of the shadow. Running on a browser that doesn't support these events still uses setInterval(), and the shadow movement is slightly delayed, which looks sloppy.

    There are a few additional changes to this code - it also supports monitoring multiple CSS properties now, so a single object can monitor a host of CSS properties rather than one object per property, which is easier to work with. For display purposes, position, bounds, and visibility will be common properties to watch. Here's what the new version looks like:

    $.fn.watch = function (props, func, interval, id) {
        /// <summary>
        /// Allows you to monitor changes in a specific
        /// CSS property of an element by polling the value.
        /// when the value changes a function is called.
        /// The function called is called in the context
        /// of the selected element (ie. this)
        /// </summary>
        /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param>
        /// <param name="func" type="Function">
        /// Function called when the value has changed.
        /// </param>
        /// <param name="interval" type="Number">
        /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
        /// Determines the interval used for setInterval calls.
        /// </param>
        /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>
        /// <returns type="jQuery" />
        if (!interval)
            interval = 200;
        if (!id)
            id = "_watcher";

        return this.each(function () {
            var _t = this;
            var el$ = $(this);
            var fnc = function () { __watcher.call(_t, id) };
            var itId = null;

            var data = {
                id: id,
                props: props.split(","),
                func: func,
                vals: [props.split(",").length],
                fnc: fnc,
                origProps: props,
                interval: interval
            };
            $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });

            el$.data(id, data);
            hookChange(el$, id, data.fnc);
        });

        function hookChange(el$, id, fnc) {
            el$.each(function () {
                var el = $(this);
                if (typeof (el.get(0).onpropertychange) == "object")
                    el.bind("propertychange." + id, fnc);
                else if ($.browser.mozilla)
                    el.bind("DOMAttrModified." + id, fnc);
                else
                    itId = setInterval(fnc, interval);
            });
        }

        function __watcher(id) {
            var el$ = $(this);
            var w = el$.data(id);
            if (!w) return;
            var _t = this;

            if (!w.func)
                return;

            // must unbind or else unwanted recursion may occur
            el$.unwatch(id);

            var changed = false;
            var i = 0;
            for (i; i < w.props.length; i++) {
                var newVal = el$.css(w.props[i]);
                if (w.vals[i] != newVal) {
                    w.vals[i] = newVal;
                    changed = true;
                    break;
                }
            }
            if (changed)
                w.func.call(_t, w, i);

            // rebind event
            hookChange(el$, id, w.fnc);
        }
    }

    $.fn.unwatch = function (id) {
        this.each(function () {
            var el = $(this);
            var fnc = el.data(id).fnc;
            try {
                if (typeof (this.onpropertychange) == "object")
                    el.unbind("propertychange." + id, fnc);
                else if ($.browser.mozilla)
                    el.unbind("DOMAttrModified." + id, fnc);
                else
                    clearInterval(id);
            }
            // ignore if element was already unbound
            catch (e) { }
        });
        return this;
    }

    There are basically two jQuery functions - watch and unwatch.

    jQuery.fn.watch(props, func, interval, id)
    Starts watching an element for changes in the properties specified.

    props - The CSS properties that are to be watched for changes. If any of the specified properties changes, the function specified in the second parameter is fired.

    func (watchData, index) - The function fired in response to a changed property. Receives this as the element changed and an object that represents the watched properties and their respective values. The first parameter is passed in this structure:

    { id: itId, props: [], func: func, vals: [] };

    The second parameter is the index of the changed property, so data.props[i] or data.vals[i] gets the property value that has changed.

    interval - The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds.

    id - An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified.

    jQuery.fn.unwatch(id)
    Unhooks watching of the element by disconnecting the event handlers.

    id - Optional watcher id that was specified in the call to watch. This value can be omitted to use the default value of _watcher.

    You can also grab the latest version of the code for this plug-in, as well as the shadow, in the full library at: http://www.west-wind.com:8080/svn/jquery/trunk/jQueryControls/Resources/ww.jquery.js

    The watcher has no other dependencies, although it lives in this larger library. The shadow plug-in depends on watcher.

    © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Implementing synchronous MediaTypeFormatters in ASP.NET Web API

    - by cibrax
    One of the main characteristics of MediaTypeFormatters in ASP.NET Web API is that they leverage the Task Parallel Library (TPL) for reading or writing a model into a stream. When you derive your class from the base class MediaTypeFormatter, you have to implement either the WriteToStreamAsync or ReadFromStreamAsync method, for writing or reading a model from a stream respectively. These two methods return a Task, which internally does all the serialization work, as illustrated below.

    public abstract class MediaTypeFormatter
    {
        public virtual Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext);
        public virtual Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger);
    }

    However, most of the time serialization is a safe operation that can be done synchronously. In fact, many of the serializer classes you will find in the .NET Framework only provide sync methods. So the question is, how can you transform that synchronous work into a Task? Creating a new task using the method Task.Factory.StartNew to do all the serialization work would probably be the typical answer. That would work, as a new task is going to be scheduled. However, it might involve some unnecessary context switches, which are out of our control and might affect performance, especially in server code.

    If you take a look at the source code of the MediaTypeFormatters shipped as part of the framework, you will notice that they are actually using another pattern, which uses a TaskCompletionSource class.

    public Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext)
    {
        var tsc = new TaskCompletionSource<AsyncVoid>();
        tsc.SetResult(default(AsyncVoid));

        // Do all the serialization work here synchronously

        return tsc.Task;
    }

    /// <summary>
    /// Used as the T in a "conversion" of a Task into a Task{T}
    /// </summary>
    private struct AsyncVoid
    {
    }

    They are basically doing all the serialization work synchronously and using a TaskCompletionSource to return a task that has already completed. To conclude this post, this is another approach you might want to consider when using serializers that are not compatible with an async model.

    Update: Henrik Nielsen from the ASP.NET team pointed out the existence of a built-in media type formatter base class for writing sync formatters: BufferedMediaTypeFormatter http://t.co/FxOfeI5x
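
    Following up on that update, here is a minimal sketch of what a synchronous formatter built on BufferedMediaTypeFormatter might look like. The plain-text formatter below is my own illustration (the class name, media type, and string-only support are assumptions, not from the post); the base class takes care of wrapping the sync overrides in a completed Task:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Formatting;
    using System.Net.Http.Headers;

    public class PlainTextFormatter : BufferedMediaTypeFormatter
    {
        public PlainTextFormatter()
        {
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
        }

        // This hypothetical formatter only handles strings
        public override bool CanWriteType(Type type) { return type == typeof(string); }
        public override bool CanReadType(Type type) { return type == typeof(string); }

        // Synchronous write: the base class wraps this in a completed Task
        public override void WriteToStream(Type type, object value, Stream writeStream, HttpContent content)
        {
            using (var writer = new StreamWriter(writeStream))
            {
                writer.Write((string)value);
            }
        }

        // Synchronous read, also wrapped by the base class
        public override object ReadFromStream(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
        {
            using (var reader = new StreamReader(readStream))
            {
                return reader.ReadToEnd();
            }
        }
    }

    With this base class you override WriteToStream/ReadFromStream instead of hand-rolling the TaskCompletionSource pattern shown above.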

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, and Oracle/non-Oracle platforms; it is not a mandatory requirement to use Exadata as the base platform.

    Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers toward a private database cloud model include constant pressure to reduce IT cost and complexity, along with the need to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment, and with Exadata already a proven best-in-class consolidation platform, we are now seeing more and more customers evolve from an Exadata-based platform into an agile, self-service-driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution to transform a typical siloed environment into an enterprise-class database cloud with self service, rapid elasticity, and pay-per-use capabilities.

    In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are drawn from a recent DBaaS implementation in a real customer engagement:

    1. Project Planning – The first step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network, and security requirements, and delivering the project plan. In a cloud project you plan around technology, business, and processes all together, so ensure you engage your actual end users and stakeholders early in the project, right from the scoping and planning stage.

    2. Setup your EM 12c Cloud Control Site – Once project plan approval and sign-off from stakeholders is achieved, refer to the EM 12c Install guide. Some important tips to follow during the site setup phase:
    - Review the new EM 12c Sizing paper before you get started with the install.
    - The Cloud, Chargeback and Trending, and Exadata plug-ins should be selected for deployment during install.
    - Refer to the EM 12c Administrator's guide for High Availability, Security, and Network/Firewall best practices and options.
    - Your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up.

    3. Setup Roles and Users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR), and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu –> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation.

    4. Configure Software Library – The Cloud Administrator logs in and configures the software library via the Enterprise menu –> Provisioning and Patching option; the storage location is an OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.

    5. Setup Self Update – Self Update is one of the most innovative and cool new features in the EM 12c framework. Self Update can be accessed via the Setup -> Extensibility option by the Super Administrator and is the unified delivery mechanism to get all new and updated entities (Agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.

    6. Deploy Agents on all compute nodes, and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets.

    7. Configure Privilege Delegation Settings – This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.

    8. Provision Grid Infrastructure with RAC Database on compute nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request them from the cloud.

    9. Customize the Create Database Deployment Procedure – The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step; make sure you have locked all the required variables (those marked as locked 'Y' in this table).

    10. Setup Self Service Portal – This step involves setting up zones, user quotas, service templates, and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have the option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator's guide.

    11. Final Checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model, and make sure SSA administrators are familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities, and the overall EM 12c monitoring framework.

    12. GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata-based database cloud. Congratulations! You just delivered a successful database cloud implementation project!

    In future posts, we will cover these additional useful topics around database cloud:
    - DBaaS implementation tips and tricks – right from setup to self service to managing the cloud lifecycle
    - 'How to' enable copies of real production databases in DBaaS with rapid provisioning in the database cloud
    - A case study of a customer who recently achieved success with their transformational journey from a traditional siloed environment to an Exadata-based database cloud using Enterprise Manager 12c

    More Information:
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Exadata Discovery Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar, where the first-of-its-kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, and EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. The group will be known as the Arab HEUG going forward, and I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular, the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, as well as one on Qatar University's use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of E-Business Suite Grants to the university's requirements.

    In a few weeks' time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future (including cloud approaches to ERP - HCM, Finance, and Student Information Systems) and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in research, and also to budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion.

    I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled "As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors." It continues to disappoint me that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. Over a decade ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let's not kid ourselves - the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system. In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it's bordering on irresponsible to continue to pursue open-source ERP. Many adopters' total costs are staggering, with little to show for the effort and expended resources.

    The second article was recently in the Chronicle of Higher Education, entitled "'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders." This one was so outrageous I almost don't want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama's former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he's a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah... right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I'll grant Mr. Reed that "Big Data" is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce far more data scientists and analysts than are currently in the pipeline, to further this discipline and apply these tools to many other problems across multiple industries.

    Read the article

  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table. For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and was often used by mainframe programmers to conserve storage space. In the case of my input file, the dates were 2 bytes long and represented the number of days that had passed since 01/01/1950. My first thought was, in the words of Scooby, Hmmmmph? But, I love a good challenge, so I dove in.

    Reading in the flat file was rather simple. The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager. In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37). Once the code page is set correctly, SSIS can understand what it is reading and will convert the output to the default code page, 1252. However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window. Column 1 is actually the packed date; columns 0 and 2 are the values in the rest of the file. We are only interested in Column 1, which is a 2-byte field representing a packed date. We know that 2 packed digits can be stored in 1 byte of character data, so we are working with 4 packed digits in 2 character bytes. If you are confused, stay tuned....this will make sense in a minute.

    Right-click on your Flat File Source shape and select "Show Advanced Editor". Here is where the magic begins. By changing the properties of the output columns, we can access the packed digits from each byte. By default, the Output Column data type is DT_STR. Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES. Next, and most important, set UseBinaryFormat to TRUE. This will write the HEX VALUES of the output string instead of writing the character values. Now we are getting somewhere!

    Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2-position byte stream to a 4-position Unicode string containing the packed data. You need the string to be 4 positions long because it will contain the 4 packed digits. Direct the output of your data flow to a test table or file to see the results. In my case, I created a test table.

    Hold on a second! That doesn't look like a date at all. No, of course not. It is a hex number which represents the days that have passed between 01/01/1950 and the date. We have to convert the hex value to a decimal value, and use the DATEADD function to get a date value.

    Luckily, I have created a function to convert hex to decimal:

    -- =============================================
    -- Author:       Jim Giercyk
    -- Create date:  March, 2012
    -- Description:  Converts a Hex string to a decimal value
    -- =============================================
    CREATE FUNCTION [dbo].[ftn_HexToDec]
    (
        @hexValue NVARCHAR(6)
    )
    RETURNS DECIMAL
    AS
    BEGIN
        -- Declare the return variable here
        DECLARE @decValue DECIMAL

        IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue,3,4)

        DECLARE @decTab TABLE
        (
            decPos1 VARCHAR(2), decPos2 VARCHAR(2), decPos3 VARCHAR(2), decPos4 VARCHAR(2)
        )

        DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue,1,1)
        DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue,2,1)
        DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue,3,1)
        DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue,4,1)

        INSERT @decTab
        VALUES
        (
            CASE WHEN @pos1 = 'A' THEN '10'
                 WHEN @pos1 = 'B' THEN '11'
                 WHEN @pos1 = 'C' THEN '12'
                 WHEN @pos1 = 'D' THEN '13'
                 WHEN @pos1 = 'E' THEN '14'
                 WHEN @pos1 = 'F' THEN '15'
                 ELSE @pos1
            END,
            CASE WHEN @pos2 = 'A' THEN '10'
                 WHEN @pos2 = 'B' THEN '11'
                 WHEN @pos2 = 'C' THEN '12'
                 WHEN @pos2 = 'D' THEN '13'
                 WHEN @pos2 = 'E' THEN '14'
                 WHEN @pos2 = 'F' THEN '15'
                 ELSE @pos2
            END,
            CASE WHEN @pos3 = 'A' THEN '10'
                 WHEN @pos3 = 'B' THEN '11'
                 WHEN @pos3 = 'C' THEN '12'
                 WHEN @pos3 = 'D' THEN '13'
                 WHEN @pos3 = 'E' THEN '14'
                 WHEN @pos3 = 'F' THEN '15'
                 ELSE @pos3
            END,
            CASE WHEN @pos4 = 'A' THEN '10'
                 WHEN @pos4 = 'B' THEN '11'
                 WHEN @pos4 = 'C' THEN '12'
                 WHEN @pos4 = 'D' THEN '13'
                 WHEN @pos4 = 'E' THEN '14'
                 WHEN @pos4 = 'F' THEN '15'
                 ELSE @pos4
            END
        )

        SET @decValue = (CONVERT(INT,(SELECT decPos4 FROM @decTab)))         +
                        (CONVERT(INT,(SELECT decPos3 FROM @decTab))*16)      +
                        (CONVERT(INT,(SELECT decPos2 FROM @decTab))*(16*16)) +
                        (CONVERT(INT,(SELECT decPos1 FROM @decTab))*(16*16*16))

        RETURN @decValue
    END
    GO

    Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950, and FINALLY arrived at my "unpacked relative date". Here is the query I used to retrieve the formatted date:

    SELECT [packedDate] AS 'Hex Value',
           dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',
           CONVERT(DATE,DATEADD(day,dbo.ftn_HexToDec([packedDate]),'01/01/1950'),101) AS 'Relative String Date'
      FROM [dbo].[Output Table]

    This technique can be used any time you need to retrieve the hex value of a character string in SSIS. The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise-level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future. Please feel free to contact me if you have any questions.
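
    As an aside, if you would rather unpack the date in a .NET Script Component instead of T-SQL, the same conversion is only a couple of lines of C#. This is a minimal sketch under the article's assumptions (two big-endian bytes counting days since 01/01/1950); the method name is mine:

    using System;

    static DateTime UnpackRelativeDate(byte[] packed)
    {
        // Combine the two bytes into a 16-bit day count (big-endian)
        int days = (packed[0] << 8) | packed[1];

        // The count is the number of days since 01/01/1950
        return new DateTime(1950, 1, 1).AddDays(days);
    }

    // Example: 0x4A2F -> 18991 days -> 12/30/2001
    // DateTime d = UnpackRelativeDate(new byte[] { 0x4A, 0x2F });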

    Read the article

  • Bypass cache for mobile user agents, VARNISH+NGINX+W3CACHE

    - by Mike McGhee
    Right now I'm running Wordpress with W3 Total Cache on nginx with a Varnish front end. I'm trying to use the WP Touch Pro plugin for Wordpress to display mobile sites, but it is not working; it still shows the desktop theme. I've put the mobile user agents in the rejected user agents box in W3 Total Cache. Here is the nginx config W3 Total Cache spit out:

    # BEGIN W3TC Page Cache cache
    location ~ /wp-content/w3tc/pgcache.*html$ {
        expires modified 3600s;
        add_header X-Powered-By "W3 Total Cache/0.9.2.4";
        add_header Vary "Accept-Encoding, Cookie";
    }
    location ~ /wp-content/w3tc/pgcache.*gzip$ {
        gzip off;
        types {}
        default_type text/html;
        expires modified 3600s;
        add_header X-Powered-By "W3 Total Cache/0.9.2.4";
        add_header Vary "Accept-Encoding, Cookie";
        add_header Content-Encoding gzip;
    }
    # END W3TC Page Cache cache

    # BEGIN W3TC Browser Cache
    gzip on;
    gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
    location ~ \.(css|js|htc)$ {
        expires 31536000s;
        add_header X-Powered-By "W3 Total Cache/0.9.2.4";
    }
    location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
        expires 3600s;
        add_header X-Powered-By "W3 Total Cache/0.9.2.4";
    }
    location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ {
        expires 31536000s;
        add_header X-Powered-By "W3 Total Cache/0.9.2.4";
    }
    # END W3TC Browser Cache

    # BEGIN W3TC Minify core
    rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
    rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
    # END W3TC Minify core

    # BEGIN W3TC Page Cache core
    rewrite ^(.*\/)?w3tc_rewrite_test$ $1?w3tc_rewrite_test=1 last;
    set $w3tc_rewrite 1;
    if ($request_method = POST) { set $w3tc_rewrite 0; }
    if ($query_string != "") { set $w3tc_rewrite 0; }
    if ($http_host != "mysite.com") { set $w3tc_rewrite 0; }
    set $w3tc_rewrite2 1;
    if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
    if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
    if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
    set $w3tc_rewrite3 1;
    if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|\/feed\/|wp-.*\.php|index\.php)") { set $w3tc_rewrite3 0; }
    if ($request_uri ~* "(wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $w3tc_rewrite3 1; }
    if ($w3tc_rewrite3 != 1) { set $w3tc_rewrite 0; }
    if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
    if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4|iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry9500|blackberry9520|blackberry9530|blackberry9550|blackberry\ 9800|blackberry\ 9780|webos|s8000|bada)") { set $w3tc_rewrite 0; }
    set $w3tc_ua "";
    if ($http_user_agent ~* "(acer\ s100|android|archos5|blackberry9500|blackberry9530|blackberry9550|blackberry\ 9800|cupcake|docomo\ ht\-03a|dream|htc\ hero|htc\ magic|htc_dream|htc_magic|incognito|ipad|iphone|ipod|kindle|lg\-gw620|liquid\ build|maemo|mot\-mb200|mot\-mb300|nexus\ one|opera\ mini|samsung\-s8000|series60.*webkit|series60/5\.0|sonyericssone10|sonyericssonu20|sonyericssonx10|t\-mobile\ mytouch\ 3g|t\-mobile\ opal|tattoo|webmate|webos)") { set $w3tc_ua _high; }
    set $w3tc_ref "";
    set $w3tc_ssl "";
    set $w3tc_enc "";
    if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
    set $w3tc_ext "";
    if (-f "$document_root/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") { set $w3tc_ext .html; }
    if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
    if ($w3tc_rewrite = 1) { rewrite .* "/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last; }
    # END W3TC Page Cache core

    And here is what I have in my Varnish VCL:

    sub vcl_recv {
        # Add a unique header containing the client address
        remove req.http.X-Forwarded-For;
        set req.http.X-Forwarded-For = client.ip;

        # Device detection
        set req.http.X-Device = "desktop";
        if ( req.http.User-Agent ~ "iP(hone|od|ad)" || req.http.User-Agent ~ "Android" ) {
            set req.http.X-Device = "smart";
        }
        elseif ( req.http.User-Agent ~ "(SymbianOS|BlackBerry|SonyEricsson|Nokia|SAMSUNG|^LG)" ) {
            set req.http.X-Device = "cell";
        }

    Any help is greatly appreciated, I've been banging my head against this for 2 days...

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. It is featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group; have a look at the book samples, especially the sample package for custom SCD handling.

    All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer.

    The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE, and DT_BYTES.

    ChecksumAlgorithm Property
    The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0).
    - Original - The original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release.
    - FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option.
    - CRC32 - Uses a standard 32-bit cyclic redundancy check (CRC), which provides a more open implementation (a stand-alone sketch of the algorithm appears at the end of this entry).

    The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? - just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.

    Downloads
    The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2), and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - Checksum Transformation for SQL Server 2005
    - Checksum Transformation for SQL Server 2008
    - Checksum Transformation for SQL Server 2012

    Version History
    SQL Server 2012
    - Version 3.0.0.27 – SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010)
    SQL Server 2008
    - Version 2.0.0.27 – Fix for the CRC-32 algorithm that inadvertently made it sort dependent. Fix for a race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010)
    - Version 2.0.0.24 – SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008)
    - Version 2.0.0.6 – SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008)
    SQL Server 2005
    - Version 1.5.0.43 – Fix for the CRC-32 algorithm that inadvertently made it sort dependent. Fix for a race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". (19 Oct 2010)
    - Version 1.5.0.16 – Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008)
    - Version 1.4.0.0 – Installer refresh only. (22 Dec 2007)
    - Version 1.4.0.0 – Refresh for minor UI enhancements. (5 Mar 2006)
    - Version 1.3.0.0 – SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed a custom UI bug with the Output column name not persisting. (10 Nov 2005)
    - Version 1.2.0.1 – SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005)
    - Version 1.0.0 – Public Release (Beta). (30 Oct 2004)
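
    For readers curious what the CRC-32 option looks like under the hood, below is a minimal stand-alone C# sketch of the standard CRC-32 algorithm (reflected polynomial 0xEDB88320). It is a generic illustration, not the component's actual source; the point is that, unlike Object.GetHashCode(), a table-driven CRC-32 yields the same value on x86 and x64.

    using System;
    using System.Text;

    static class Crc32
    {
        static readonly uint[] Table = BuildTable();

        static uint[] BuildTable()
        {
            // Precompute the 256-entry lookup table for the reflected polynomial
            var table = new uint[256];
            for (uint i = 0; i < 256; i++)
            {
                uint crc = i;
                for (int bit = 0; bit < 8; bit++)
                    crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
                table[i] = crc;
            }
            return table;
        }

        public static uint Compute(byte[] data)
        {
            uint crc = 0xFFFFFFFFu; // initial value
            foreach (byte b in data)
                crc = (crc >> 8) ^ Table[(crc ^ b) & 0xFF];
            return ~crc; // final XOR
        }
    }

    // Example: Crc32.Compute(Encoding.UTF8.GetBytes("abc")) is identical on x86 and x64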

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding Red Gate's decision to no longer offer a free version. Since then, the race has begun for a replacement from a provider that would stand by its promises to the community. Mono has an ongoing free alternative, which has been available for a long time. However, other vendors are stepping up to the plate with their own offerings.

    If Not Reflector, Then What?

    One of these vendors is Telerik. In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector. Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has. Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative).

    If you want to follow along with this demo, you'll need to have Telerik JustCode installed. If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site.

    A Tall Tale; Prove It!

    With JustCode, you can view code in the .NET Framework or any other 3rd party library (that isn't well obfuscated). This demo depends on LINQ to Twitter, which you can download from CodePlex.com and reference, or install the package online as described in my previous post on NuGet. Regardless of the method, you'll have a project with a reference to LINQ to Twitter. Use a Console project if you want to follow along with this demo.

    Note: If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4. The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter. You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting.

    Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter:

    var l2tCtx = new TwitterContext();

    If you're following along, add the code above to the Main method, which will look similar to this:

    using LinqToTwitter;

    namespace NuGetInstall
    {
        class Program
        {
            static void Main(string[] args)
            {
                var l2tCtx = new TwitterContext();
            }
        }
    }

    The code above doesn't really do anything, but it does give me something to show while demonstrating how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key. As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class.

    Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code. The scenario with TwitterContext happens because you don't have the source code to the file. Visual Studio has always done this, and you can experiment by selecting any .NET type, e.g. the String type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier.

    In the metadata file, you only see prototypes associated with the code; notice, for example, that the default constructor is empty. Again, this is normal because Visual Studio doesn't have the ability to decompile code. However, that's the purpose of this post: showing you how JustCode fills that gap.

    To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu. The shortcut keys are Ctrl+1. After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code, and the default constructor now has code, as opposed to the empty member prototype in the original metadata.

    And Why is This So Different?

    Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works. The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code.

    Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code. Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation. Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik. Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter.

    Joe


  • CodePlex Daily Summary for Monday, October 08, 2012

    CodePlex Daily Summary for Monday, October 08, 2012

    Popular Releases

    - Sofire Suite v1.6: XSqlModelGenerator.AddIn: 1、?? VS2010/2012 2、?? .NET FRAMEWORK 2.0 3、?? SOFIRE V1.6
    - Sofire XSql: XSqlFormDemo: ???? MSSQL ???????
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes, it finally happened. If you call cell.Value and it has a formula, the library will try to evaluate the formula and give you the result. For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...
    - Picturethrill: Version 2.10.7.0: Major features: Windows 8 support!!! Picturethrill is fully tested on the latest Windows 8; try it now. New About Picturethrill dialog, available from the "Advanced" window, describing version, developers and other info. Full installation information in Control Panel: Picturethrill in Control Panel has full information, including version and size. Minor bug fixes: improved logging, AppDomain unhandled exception logging, delete WallpaperDownload folder (saves disk space), "Advanced" settings "Close" button bug ...
    - VidCoder: 1.4.3 Beta: Fixed crash on switching container types. Updated subtitle selection to only allow one PGS subtitle to be selected under an MP4 container, as PGS passthrough in MP4 is not supported and the sub must be burned in.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.68: Fixes for issues: 17110 - added support for -moz-calc or -webkit-calc to the previous fix for calc support. 18652 - if being run under the Mono runtime, don't mistake UNIX-style paths for command-line switches. 18659 - added capability to specify JS settings for CSS files that have JS embedded in expressions. Did a lot of internal housekeeping: add DLL support for the -pponly switch. Add support for -js:expr, -js:evt, and -js:json to allow easy parsing of expressions, event handlers, and JSON code...
    - Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package. New feature - Added GetValue and TryGetValue with StringComparison to JObject. Change - Improved duplicate object reference id error message. Fix - Fixed error when comparing empty JObjects. Fix - Fixed SecAnnotate warnings. Fix - Fixed error when comparing a DateTime JValue with a DateTimeOffset JValue. Fix - Fixed serializer sometimes not using DateParseHandling setting. Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...
    - Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1. Tested under Ubuntu 12.10 (and KeePass 2.20). Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.
    - JSLint for Visual Studio 2010: 1.4.2: 1.4.2
    - GPdotNET - artificial intelligence tool: GPdotNET v2 BETA3: This is hopefully the last beta before the final version.
    - RiP-Ripper & PG-Ripper: PG-Ripper 1.4.02: Changes: NEW: Added support for the Big Naturals Only forum. NEW: Added setting to enable/disable "Show last download image".
    - patterns & practices: Prism: Prism for .NET 4.5: This release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies were also compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.2: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version, use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes NOTE: Namespace changes DebugConsol...
    - Snoop, the WPF Spy Utility: Snoop 2.8.0: Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless; it basically lets you automate/script the application that you are Snooping. Bailey has a couple of blog posts (one and two) on his tab already, and I am sure more are to come. Please note that if you do not have PowerShell installed, y...
    - .NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. Feature list for v4.3: Support for Visual Studio 2012 (including the Windows Desktop Express version). All v4.2 QFE features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support). Improved diagnostic information for deployment. Decreased boot time. Bug fixes: Work Item 1736 - Create link for MFDeploy under start menu. Work Item 1504 - Customizing lwIP o...
    - MCEBuddy 2.x: MCEBuddy 2.3.1: All-new remote client-server architecture; recommended download. The remote client installation is OPTIONAL; you can extract the files from the zip archive into a local folder and run MCEBuddy.GUI directly. 2.2.15 was the last standalone release. Changelog for 2.3.1 (32bit and 64bit): 1. All-remote MCEBuddy client-server architecture (GUI now runs remotely/independently from the engine). 2. Fixed bug in audio offset. 3. Added support for remote MediaInfo (right-click on a file in the queue to get ...
    - D3 Loot Tracker: 1.5: Support for session upload to website. Support for theme change through general settings. Time played counter will now also display a count for days. Tomes of secrets are no longer logged as items.
    - Team Foundation Server Word Add-in: Version 1.0.12.0622: Welcome to the Visual Studio Team Foundation Server Word Add-in. Supported environments: Microsoft Office Word 2007 and 2010 x86 (32-bit); Team Foundation Server 2010 Object Model; TFS 2010, 2012 and TFS Service supported, using TFS OM / Explorer 2010. Quality-bar details: the tool has been reviewed by Visual Studio ALM Rangers, has been through an independent technical and quality review, and all critical bugs have been resolved. Known issues / bugs: WI#43553 - The acceptance criteria is not pu...
    - DirectX Tool Kit: October 2012: October 2, 2012. Added ScreenGrab module. Added CreateGeoSphere for drawing a geodesic sphere. Put DDSTextureLoader and WICTextureLoader into the DirectX C++ namespace. Renamed project files for better naming consistency. Updated WICTextureLoader for Windows 8 96bpp floating-point formats. Win32 desktop projects updated to use Windows Vista (0x0600) rather than Windows 7 (0x0601) APIs. Tweaked SpriteBatch.cpp to work around an ARM NEON compiler codegen bug.
    - CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1002.3): What's new: Multi-language support for labels/tooltips for custom buttons and groups. Support for a base language other than English (1033). The Connect dialog will not require an organization name for ADFS / IFD connections. Automatic creation of missing labels for all provisioned languages. Minor connection issues fixed. Notes: Before saving the ribbon to the CRM server, the editor will check the Ribbon XML for any missing <Title> elements inside existing <LocLabel> elements...

    New Projects

    - amplifi: This project is still under construction. We will add more information here as soon as it is available.
    - autoclubpigue: Project for the Pigue auto club.
    - coiffeurprj: coiffeurprj
    - erutufym: erutufym is gnol eht gge.
    - Express AOP: A .NET AOP tool.
    - File Slurpee: The purpose of this application is to help security professionals during their penetration tests. Given a configuration, this application will scour the target...
    - Guardian - Google Authenticator: Guardian is a Windows client application for Google Authenticator - a software-based two-factor authentication token developed by Google.
    - ManagedCuda Galaxy Simulator: This project is a test of ManagedCuda and graphics interop to OpenTK to simulate a simple galaxy on the GPU.
    - mostafanote: ????? ???? ??????? ?? ???? ?? ?? ?? ? ?? ??????? ?? ??????? ??? ?? ??? ?? ?? ???????? ??????? ??? ??? ?? ???? ??????
    - NXT Controller: A simple controller for the LEGO Mindstorms NXT.
    - PowerShell Security: PoshSec is short for PowerShell security, a module provided to allow PowerShell users testing, analysis and reporting on Windows security.
    - Revenant Raiders - Sunstrider: Tool to provide information about Revenant Raiders - Sunstrider.
    - Run: animal run
    - SharedDeploy: Simple ASP.NET MVC app that uses 2 types of view: mobile and desktop. At its core: WebApi, jQuery, jQuery.mobile.
    - SkyCoffee: Cafeteria application.
    - SOSA Analysis Module: Analysis program for data from SOSA psychological experiment software.
    - SQL Connector: SQL Connector is a .DLL file that makes it easy to communicate with SQL Server. Works in: Windows desktop applications, ASP.NET web (MVC - web services), Windows c...
    - SQL Job Scripter: SQL Job Scripter is a command line utility that produces scripts of SQL Agent jobs. It will script either to a single file or to one file per job.
    - SQL Server Watcher: A tool for collecting and monitoring information about SQL Server.
    - Team Build Inspector: An add-in for Visual Studio Team Explorer that monitors the status of build definitions by their latest build, latest good build & underlying source code status.
    - TED Talk Download Manager: "TED Talk Download Manager" provides a central point in which to download multiple TED talks, while keeping track of any talk previously downloaded.
    - Temp project mycollection: My collection temporary project.
    - Unreal Unit Converter: A simple UDK <-> 3DS Max <-> Meters unit converter - with some other stuff.
    - Watermelon Site: This site is the site of Watermelon Inc. On this site: downloads, news about Watermelon, and more...
    - WiX Extended Bootstrapper Application: An extended WiX bootstrapper application based on the WiX standard bootstrapper application.
    - Wrox NHibernate with ASP.NET Problem-Design-Solution - C# version: Sample code of Wrox NHibernate with ASP.NET Problem-Design-Solution (http://nhibernateasp.codeplex.com/) in C#, done as a practice exercise.


  • Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    As more and more business is being conducted via online channels, engaging users and making them more productive and efficient through these online channels is becoming critical. These users could be customers, partners or employees, and while the respective channels through which they interact might be different, these users do increasingly interact with your business through the Web, mobile devices, and now various social mediums. Businesses need a user engagement strategy and solution that allows them to deliver targeted and personalized content and applications to users through the various online mediums and touch points.

    The customer experience today is made up of an ongoing set of interactions with organizations across many channels, online and offline. The direct channel (including sales reps, email and mail) is an important point of contact, as is the contact center. Contact centers rely on the phone as a means of interacting with customers, and now more than ever, the Web as well. However, the online organization is often managed separately from the contact center organization within a business. In-store is an important channel for retailers, offering point-of-service for human interactions, and kiosks, which enable self-service. Kiosks are a Web-enabled touch point, but in-store kiosks are often managed by the head of retail operations rather than the online organization. And of course the online channel, including customer interactions with an organization via digital means -- on the website, mobile websites, and social networking sites -- has risen to paramount importance in recent years in the customer experience.

    Historically, all of these channels have been managed separately. The result of all of this fragmentation is that the customer touch points with an organization are siloed. Their interactions online are not known and respected in their dealings in-store. Their calls to the contact center are not taken as input into what the website offers them when they arrive. Think of how many times you've fallen victim to this. Your experience with the company call center is different than the experience in-store. Your experience with the company website on your desktop computer is different than your experience on your iPad. I think you get the point.

    But the customer isn't the only one we need to look at here, as employees and the IT organization have challenges as well when it comes to online engagement. There are many common tools and technologies that organizations have been using to try to engage users, whether customers, employees or partners: different blog and wiki technologies (some hosted, some open source, sometimes embedded in platforms), tagging, file sharing and content management, and composite applications for self-service applications and activity streams. Basically, there are so many different tools and technologies, each addressing different aspects of user engagement. One of the challenges with this is that if we look at each individual tool, implementing, for example, a file-sharing and basic collaboration solution may meet the needs of the business user for one aspect of user engagement, but it may not be the best solution for engaging with customers and partners, or it may not fit with IT standards such as integrating with their single sign-on tools or their corporate website.
    Often the scenario is that businesses have to acquire multiple pieces and parts, as well as build custom applications, to meet their needs, leaving customers and partners with a more fragmented way of interacting with the company. Every organization has some sort of enterprise balancing act between the needs of the business user and the needs and restrictions enforced by enterprise IT groups. As we've been discussing, we all know that the expectations for online engagement have changed since the days of the static, one-size-fits-all website. With these changes have come some very difficult organizational challenges as well.

    Today, as a business user, you want to engage with your customers, and your customers expect you to know who they are. They expect you to recall the details they've provided to you on your website, to your CSRs and to your sales people. They expect you to remember their purchases, their preferences and their problems. And they expect you to know who they are, equally well, across channels, including your web presence. This creates a host of challenges for today's business users. Delivering targeted, relevant content online is now essential for converting prospects into customers and for engendering long-term loyalty. Business users need the ability to leverage customer data from different sources to fuel their segmentation and targeting strategies and to easily set up, manage and optimize online campaigns. Also critical, they need the ability to accomplish these things on the fly, at the speed of the marketplace, while making iterative improvements.

    These changing expectations put a host of demands on the IT organization as well. The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance. And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer, culled from WCM data, CRM data and more.

    So then, how do you solve these challenges and meet the growing demands of your users? You need a solution that:

    - Unifies every customer interaction across all channels
    - Personalizes the products and content that interest the customer, tailored to the device
    - Delivers targeted promotions to the right customer
    - Engages employees and improves their productivity
    - Provides self-service access to applications
    - Includes embedded, in-context social capabilities

    So how then do you achieve this level of online engagement, deliver a complete customer experience, and engage your employees? The answer: Oracle WebCenter. If you want to learn how to get there, we encourage you to attend Thursday's webcast, Drive Online Engagement with Intuitive Portals and Websites, where we'll talk about how you can transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels. Register today!


  • Knowing your user is key--Part 1: Motivation

    - by erikanollwebb
    I was thinking about where the best place to start in this blog would be, and finally came back to a theme that I think is pretty critical--successful gamification in the enterprise comes down to knowing your user. Lots of folks will say that gamification is about understanding that everyone is a gamer. But at least in my org, that argument won't play for a lot of people. Pun intentional. It's not that I don't see the attraction of the idea--really, very few people play no games at all. If they don't play video games, they might play solitaire on their computer. They may play card games, or some type of sport. Mario Herger has some great facts on how much game playing is going on at his Enterprise-Gamification.com website.

    But at the end of the day, I can't sell that into my organization well. We are Oracle. We make big, serious software designed to run your whole business. We don't make Angry Birds out of your financial reporting tools. So I stick with the argument that works better: gamification techniques are really just good principles of user experience packaged a little differently. Feedback? We already know feedback is important when using software. Progress indicators? Got that too. Game mechanics may package things in a more explicit way, but they're not really "new". Knowing how to use game mechanics--and this is what a user experience team is important for--means totally understanding who our users are and what they are motivated by.

    For several years, I taught college psychology courses, including Motivation. Motivation is generally broken down into intrinsic motivation, which comes from within the individual, and extrinsic motivation, which comes from outside the individual. Intrinsic motivation is the motivation that comes from just a general sense of pleasure in the doing of something. For example, I like to cook. I like to cook a lot. The kind of cooking I think is just fun makes other people--people who don't like to cook--cringe. Like the cake I made this week--the star-spangled rhapsody from The Cake Bible: two layers of meringue, two layers of genoise flavored with a raspberry eau de vie syrup, whipped cream with berries and a mousseline buttercream, also flavored with raspberry liqueur and topped with fresh raspberries and blueberries.

    I love cooking--I ask for cooking tools for my birthday and Christmas, and I take classes like sushi making and knife skills for fun. I like reading about how you can take an emulsion of egg yolks, melted butter and lemon, cook it slowly, and transform it into sauce hollandaise (my use for all the egg yolks that didn't go into the aforementioned cake). And while it's nice when people like what I cook, I don't do it for that. I do it because I think it's fun.

    My former boss, Ultan Ó Broin, loves to fish in the sea off the coast of Ireland. Not because he gets prizes for it, or awards, but because it's fun. To quote a note he sent me today when I asked if having been recently ill kept him from the beginning of mackerel season, he told me he had already been out and said "I can fish when on a deathbed" (to read more of Ultan's work, see his blogs on User Assistance and Translation). That's not the kind of intensity you get about something you don't like to do. I'm sure you can think of something you do just because you like it.

    So how does that relate to gamification? Gamification in the enterprise space is about uncovering the game within work.
Gamification is about tapping into things people already find motivating.  But to do that, you need to know what that user is motivated by. Customer Relationship Management (CRM) is one of those areas where over-the-top gamification seems to work (not to plug a competitor in this space, but you can search on what Bunchball* has done with a company just a little north of us on 101 for the CRM crowd).  Sales people are naturally competitive and thrive on that plus recognition of their sales work.  You can use lots of game mechanics like leaderboards and challenges and scorecards with this type of user and they love it.  Show my whole org I'm leading in sales for the quarter?  Bring it on!  However, take the average accountant and show how much general ledger activity they have done in the last week and expose it to their whole org on a leaderboard and I think you'd see a lot of people looking for a new job.  Why?  Because in general, accountants aren't extraverts who thrive on competition in their work.  That doesn't mean there aren't game mechanics that would work for them, but they won't be the same game mechanics that work for sales people.  It's a different type of user and they are motivated by different things. To break this up, I'll stop here and post now.  I'll pick this thread up in the next post. Thoughts? Questions? *Disclosure: To my knowledge, Oracle has no relationship with Bunchball at this point in time.


  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will provide an update on Oracle and clouds, from private to public and hybrid models. In preparing this session, I thought it was a good start to take stock of cloud computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

    From Traditional IT to Clouds

    One size doesn't fit all, and for big companies that already have an IT in place, there will be parts eligible for the external (public) cloud and parts that will be required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the cloud computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve toward the same model that end users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some into "IT is the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward becoming a private cloud service provider.

    In this journey, there is still a big difference between most existing external clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day, very few business processes rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a private cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different cloud models: what types of users they address, what value they bring, and most importantly what needs to be done by the cloud provider, and what is left over to the user.

    IaaS, PaaS, SaaS: what's provided and what needs to be done

    First of all, the cloud provider has to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. Providing pure IaaS leaves a lot for the end user to cover, which is why the end users targeted by this cloud service are IT people. If you want to bring more value to developers, you need to provide them a ready-to-use development platform, which is what PaaS stands for: providing not only processing power, storage and OS, but also the database and middleware platform. SaaS, being the last mile of the cloud, provides an application ready to use by business users; the remaining part for the end users is configuring and specifying the application for their specific usage.
    In addition to that, there are common challenges encompassing all types of cloud services:

    - Security: covering all aspects, not only of user management but also data flows and data privacy
    - Chargeback: measuring what is used and by whom
    - Application management: providing capabilities not only to deploy, but also to upgrade - from the OS for IaaS, through the database and middleware for PaaS, to a full business application for SaaS
    - Scalability: the ability to evolve ALL the components of the cloud provider stack as needed
    - Availability: the ability to cover "always on" requirements
    - Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
    - Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
    - Management: providing monitoring, configuration and self-service up to the end users

    Oracle Strategy and Clouds

    For CIOs to succeed in their private cloud implementation means encompassing all those aspects across the life-cycle of each component they select to build their cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency. That's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient cloud services solutions for IaaS, PaaS and SaaS. We go as far as embedding software functions in hardware (storage, processor level, ...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we use to build clouds, bringing you flexibility, reversibility and a fast path to adoption.

    With Oracle Engineered Systems (Exadata, Exalogic and SPARC SuperCluster, more specifically, when talking about cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and ability to evolve by design. To give you a feel for what that brings just in terms of implementation project timeline, with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, database storage (Exadata storage cells in this case), application storage, and all network configuration. This strategy enables CIOs to very quickly build cloud services, taking out not only the complexity of integrating everything together but also the complexity and cost of automation and evolution. I invite you to discuss all those aspects in regard to your particular context face to face on November 28th.


  • Mixing Forms and Token Authentication in a single ASP.NET Application (the Details)

    - by Your DisplayName here!
    The scenario described in my last post works because of the design around HTTP modules in ASP.NET. Authentication-related modules (like Forms authentication and WIF WS-Fed/Sessions) typically subscribe to three events in the pipeline: AuthenticateRequest/PostAuthenticateRequest for pre-processing, and EndRequest for post-processing (like making redirects to a login page). In the pre-processing stage it is the modules' job to determine the identity of the client based on incoming HTTP details (like a header, cookie, or form post) and set HttpContext.User and Thread.CurrentPrincipal. The actual page (in the ExecuteHandler event) "sees" the identity that the last module has set.

    So in our case there are three modules in effect:

    - FormsAuthenticationModule (AuthenticateRequest, EndRequest)
    - WSFederationAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest, EndRequest)
    - SessionAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest)

    So let's have a look at the different scenarios we have when mixing Forms auth and WS-Federation.

    Anonymous request to an unprotected resource
    This is the easiest case. Since there is no WIF session cookie or FormsAuth cookie, these modules do nothing. The WS-Fed module creates an anonymous ClaimsPrincipal and calls the registered ClaimsAuthenticationManager (if any) to transform it. The result (by default an anonymous ClaimsPrincipal) gets set.

    Anonymous request to a FormsAuth protected resource
    This is the scenario where an anonymous user tries to access a FormsAuth protected resource for the first time. The principal is anonymous, and before the page gets rendered the Authorize attribute kicks in. The attribute determines that the user needs authentication and therefore sets a 401 status code and ends the request. Now execution jumps to the EndRequest event, where the FormsAuth module takes over. The module converts the 401 to a redirect (302) to the forms login page. If authentication is successful, the login page sets the FormsAuth cookie.

    FormsAuth authenticated request to a FormsAuth protected resource
    Now a FormsAuth cookie is present, which gets validated by the FormsAuth module. This cookie gets turned into a GenericPrincipal/FormsIdentity combination. The WS-Fed module turns the principal into a ClaimsPrincipal and calls the registered ClaimsAuthenticationManager. The outcome of that gets set on the context.

    Anonymous request to an STS protected resource
    This time the anonymous user tries to access an STS protected resource (a controller decorated with the RequireTokenAuthentication attribute). The attribute determines that the user needs STS authentication by checking the authentication type on the current principal. If this is not Federation, the redirect to the STS will be made. After successful authentication at the STS, the STS posts the token back to the application (using WS-Federation syntax).

    Postback from STS authentication
    After the postback, the WS-Fed module finds the token response and validates the contained token. If successful, the token gets transformed by the ClaimsAuthenticationManager, and the outcome is a) stored in a session cookie, and b) set on the context.

    STS authenticated request to an STS protected resource
    This time the WIF session authentication module kicks in because it can find the previously issued session cookie. The module re-hydrates the ClaimsPrincipal from the cookie and sets it.

    FormsAuth and STS authenticated request to a protected resource
    This is kind of an odd case – e.g.
    the user first authenticated using Forms and after that using the STS. This time the FormsAuth module does its work, and then afterwards the session module stomps over the context with the session principal. In other words, the STS identity wins.

    What about roles?
    A common way to set roles in ASP.NET is to use the role manager feature. There is a corresponding HTTP module for that (RoleManagerModule) that handles PostAuthenticateRequest. Does this collide with the above combinations? No it doesn't! When the WS-Fed module turns existing principals into a ClaimsPrincipal (like it did with the FormsIdentity), it also checks for RolePrincipal (which is the principal type created by role manager), and turns the roles into role claims. Nice! But as you can see in the last scenario above, this might result in unnecessary work, so I would rather recommend consolidating all role work (and other claims transformations) into the ClaimsAuthenticationManager. In there you can check for the authentication type of the incoming principal and act accordingly.

    HTH
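
    PS: as a rough sketch of what that consolidation could look like (the class name and the role lookup helper below are mine, not part of WIF; this targets the WIF 1.0 / Microsoft.IdentityModel API):

        using Microsoft.IdentityModel.Claims;

        public class MyClaimsTransformer : ClaimsAuthenticationManager
        {
            public override IClaimsPrincipal Authenticate(
                string resourceName, IClaimsPrincipal incomingPrincipal)
            {
                var identity = (IClaimsIdentity)incomingPrincipal.Identity;
                if (!identity.IsAuthenticated)
                {
                    return incomingPrincipal;
                }

                // Branch on how the user was authenticated ("Forms" vs. "Federation")
                // and do all role/claims enrichment in this one place.
                if (identity.AuthenticationType == "Forms")
                {
                    foreach (var role in LookupRoles(identity.Name))
                    {
                        identity.Claims.Add(new Claim(ClaimTypes.Role, role));
                    }
                }

                return incomingPrincipal;
            }

            // Hypothetical helper - replace with your own role store lookup.
            private static string[] LookupRoles(string userName)
            {
                return new[] { "User" };
            }
        }

    You'd then register it in web.config inside the <microsoft.identityModel>/<service> element via <claimsAuthenticationManager type="MyNamespace.MyClaimsTransformer, MyAssembly" />, so every request flows through the same transformation regardless of which module authenticated it.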


  • Top Questions and Answers for Pluging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted an online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative.

    DBaaS requires lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    You should have an OEM 12c environment for DBaaS administration, and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies having their own data center?
    It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and faster response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job?
    The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self-service.
    Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter, and their service definition will evolve less, but they still might need to add capabilities such as self-service and metering/chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal service definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their service definition.

    How does database as a service impact or help with "click to compute" or deploying a "database in cloud infrastructure"?
    DBaaS enables click to compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud.
    As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database? Is it storage/OS/DB all together 'transparently' providing service to applications? Will there be cross-database access by application/user?
    Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (in under 15 minutes); 3) the services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this, and the differing needs and patching downtime of the application databases it might be serving?
    You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patching and upgrades very efficient in that you can pre-provision a new-version/patched multitenant database in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
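
    As a rough illustration of that unplug/plug step (not from the webcast; the PDB name and file paths are invented, and in practice you'd first verify the target with DBMS_PDB.CHECK_PLUG_COMPATIBILITY), the SQL looks roughly like this:

        -- In the old CDB: close the PDB and unplug it to an XML manifest
        ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/u01/app/oracle/sales_pdb.xml';

        -- In the new, already-patched CDB: plug it in, reusing the existing datafiles
        CREATE PLUGGABLE DATABASE sales_pdb USING '/u01/app/oracle/sales_pdb.xml'
            NOCOPY TEMPFILE REUSE;
        ALTER PLUGGABLE DATABASE sales_pdb OPEN;

    Because NOCOPY leaves the datafiles in place, the move is a metadata operation, which is what makes the "in seconds" claim above plausible for even large PDBs.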


  • BYOD-The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants

    Less than three years ago, Apple introduced a new concept to the world: the tablet. It's hard to believe that in only 32 months, the iPad ushered in an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasingly "on the go" lifestyle, and their popularity isn't expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016.

    Tablets have been utilized for every function imaginable in today's world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It's no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away.

    Tablets have become a critical part of a user's personal life; but why stop there? Businesses today are taking major strides in implementing these devices, with the hopes of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets.

    Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year. Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014. It's not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data into their computer, or worse yet into a notebook, especially when the data has to be transcribed to an online system later.

    The response: the Bring Your Own Device (BYOD) phenomenon. According to Gartner, BYOD is "an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data." Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to.

    Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap.
    This method is much more enticing for sales reps than spending time logging interactions on their (what seem to be outdated) computers. Forrester and IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, InformationWeek released a list of "9 Powerful Business Uses for Tablet Computers," ranging from "enhancing the customer experience" to "improving data accuracy" to "eco-friendly motivations". Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

    Three things businesses need to do to embrace BYOD:

    - Make customer-facing websites tablet-friendly for consistent user experiences
    - Develop tablet applications to continue to enhance the customer experience
    - Embrace and use the technology that comes with tablets

    Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices, coupled with the growing number of business applications, ensures that tablets will transform the way that companies and employees perform.


  • How can I implement a database TableView like thing in C++?

    - by Industrial-antidepressant
    How can I implement a TableView-like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView-like class. I want filtering, sorting, the ability to freely add and remove items, and transformations (e.g. view as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal it when the table is changed? Is there some design pattern for this? Here is a simple table, and a simple data item:

        #include <iostream>  // added: needed for std::cout below
        #include <string>

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/random_access_index.hpp>

        using boost::multi_index_container;
        using namespace boost::multi_index;

        struct Data
        {
            Data() {}
            int id;
            std::string name;
        };

        // Index tags
        struct row{};
        struct id{};
        struct name{};

        typedef boost::multi_index_container<
            Data,
            indexed_by<
                random_access<tag<row> >,
                ordered_unique<tag<id>, member<Data, int, &Data::id> >,
                ordered_unique<tag<name>, member<Data, std::string, &Data::name> >
            >
        > TDataTable;

        class DataTable
        {
        public:
            typedef Data item_type;
            typedef TDataTable::value_type value_type;
            typedef TDataTable::const_reference const_reference;
            typedef TDataTable::index<row>::type TRowIndex;
            typedef TDataTable::index<id>::type TIdIndex;
            typedef TDataTable::index<name>::type TNameIndex;
            typedef TRowIndex::iterator iterator;

            DataTable() :
                row_index(rule_table.get<row>()),
                id_index(rule_table.get<id>()),
                name_index(rule_table.get<name>()),
                row_index_writeable(rule_table.get<row>())
            {
            }

            TDataTable::const_reference operator[](TDataTable::size_type n) const
            { return rule_table[n]; }

            std::pair<iterator,bool> push_back(const value_type& x)
            { return row_index_writeable.push_back(x); }

            iterator erase(iterator position)
            { return row_index_writeable.erase(position); }

            bool replace(iterator position, const value_type& x)
            { return row_index_writeable.replace(position, x); }

            template<typename InputIterator>
            void rearrange(InputIterator first)
            { return row_index_writeable.rearrange(first); }

            void print_table() const;

            unsigned size() const
            { return row_index.size(); }

            TDataTable rule_table;
            const TRowIndex& row_index;
            const TIdIndex& id_index;
            const TNameIndex& name_index;

        private:
            TRowIndex& row_index_writeable;
        };

        class DataTableView
        {
            DataTableView(const DataTable& source_table) {}
            // How can I implement this?
            // I want filtering, sorting, signaling the upper GUI layer, and so on...
        };

        int main()
        {
            Data data1;
            data1.id = 1;
            data1.name = "name1";

            Data data2;
            data2.id = 2;
            data2.name = "name2";

            DataTable table;
            table.push_back(data1);
            DataTable::iterator it1 = table.row_index.iterator_to(table[0]);
            table.erase(it1);

            table.push_back(data1);
            Data new_data(table[0]);
            new_data.name = "new_name";
            table.replace(table.row_index.iterator_to(table[0]), new_data);

            for (unsigned i = 0; i < table.size(); ++i)
                std::cout << table[i].name << std::endl;

        #if 0
            // Usage scenarios:
            DataTableView table_view(table);
            table_view.fill_from_source(); // synchronization with source
            table_view.remove(data_item1); // remove item from view
            table_view.add(data_item2);    // add item from source table
            table_view.filter(filterfunc); // filtering
            table_view.sort(sortfunc);     // sorting

            // When modifying the source table, how to signal the table_view?
            // FYI: the table view is attached to a GUI item
            table.erase(data);
            table.replace(data);
        #endif

            return 0;
        }
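
    No answer is captured in this digest, but as a rough sketch of one common approach (all names below are invented; assumes C++11 for std::function and lambdas), the view can hold row positions into the source table, re-derive them on every change, and notify the GUI via a callback - essentially the observer pattern:

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <vector>

        // Minimal read-only view over a table type like DataTable above.
        // It stores row positions, so it never copies row data.
        template <typename Table>
        class TableView {
        public:
            typedef typename Table::item_type Row;

            explicit TableView(const Table& t) : table_(t) { refresh(); }

            void set_filter(std::function<bool(const Row&)> f)
            { filter_ = f; refresh(); }

            void set_sort(std::function<bool(const Row&, const Row&)> cmp)
            { cmp_ = cmp; refresh(); }

            void set_on_changed(std::function<void()> cb)
            { on_changed_ = cb; }

            // The table (or a signal/slot layer around it) calls this
            // after any mutation.
            void refresh() {
                rows_.clear();
                for (unsigned i = 0; i < table_.size(); ++i)
                    if (!filter_ || filter_(table_[i]))
                        rows_.push_back(i);
                if (cmp_)
                    std::sort(rows_.begin(), rows_.end(),
                              [this](unsigned a, unsigned b)
                              { return cmp_(table_[a], table_[b]); });
                if (on_changed_) on_changed_();  // tell the GUI to repaint
            }

            std::size_t size() const { return rows_.size(); }
            const Row& operator[](std::size_t i) const { return table_[rows_[i]]; }

        private:
            const Table& table_;
            std::vector<unsigned> rows_;
            std::function<bool(const Row&)> filter_;
            std::function<bool(const Row&, const Row&)> cmp_;
            std::function<void()> on_changed_;
        };

        // Usage with the DataTable above:
        //   TableView<DataTable> view(table);
        //   view.set_filter([](const Data& d) { return d.id > 0; });
        //   view.set_sort([](const Data& a, const Data& b) { return a.name < b.name; });
        //   view.set_on_changed([] { /* repaint the GUI list */ });

    The caveat with raw positions is that they go stale whenever the table mutates, so DataTable's mutating members (push_back, erase, replace) would need to call refresh() on registered views, or emit a signal (e.g. via Boost.Signals) that the views subscribe to; for stable item identity across mutations you'd store ids from the id_index rather than row positions.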


  • window.focus() not working in Google Chrome

    - by nickyt
    Hey folks, just wondering if Google Chrome is going to support window.focus() at some point. When I say support, I mean have it work. The call to it doesn't fail; it just doesn't do anything. All other major browsers do not have this problem: Firefox, IE6-IE8 and Safari. I have a client-side class for managing browser windows. When I first create a window, the window comes into focus, but subsequent attempts to bring focus to the window do not work. From what I can tell, this appears to be a security feature to avoid annoying pop-ups, and it does not appear to be a WebKit issue, as it works in Safari. I know one idea someone brought forward was to close the window and then reopen it, but this is a horrible solution. Googling shows that I do not appear to be the only person frustrated with this. And just to be 100% clear, I mean new windows, not tabs (tabs cannot be focused, from what I've read), and all the windows being opened are in the same domain. Any ideas?

        MyCompany = { UI: {} }; // Put this here if you want to test the code. I create these namespaces elsewhere in code.

        MyCompany.UI.Window = new function() {
            // Private fields
            var that = this;
            var windowHandles = {};

            // Public members
            this.windowExists = function(windowTarget) {
                return windowTarget && windowHandles[windowTarget] && !windowHandles[windowTarget].closed;
            }

            this.open = function(url, windowTarget, windowProperties) {
                // See if we have a window handle and if it's closed or not.
                if (that.windowExists(windowTarget)) {
                    // We still have our window object, so let's check if the URL is the same as the one we're trying to load.
                    var currentLocation = windowHandles[windowTarget].location;
                    if (
                        (/^http(?:s?):/.test(url) && currentLocation.href !== url) ||
                        (
                            // This check is required because the URL might be the same, but absolute,
                            // e.g. /Default.aspx ... instead of http://localhost/Default.aspx ...
                            !/^http(?:s?):/.test(url) &&
                            (currentLocation.pathname + currentLocation.search + currentLocation.hash) !== url
                        )
                    ) {
                        // Not the same URL, so load the new one.
                        windowHandles[windowTarget].location = url;
                    }

                    // Give focus to the window. This works in IE 6/7/8, Firefox and Safari, but not Chrome.
                    // Well, in Chrome it works the first time, but subsequent focus attempts fail.
                    // I believe this is a security feature in Chrome to avoid annoying popups.
                    windowHandles[windowTarget].focus();
                }
                else {
                    // Need to do this so that tabbed browsers (pretty much all browsers except IE6) actually open
                    // a new window as opposed to a tab. By specifying at least one window property, we're
                    // guaranteed to have a new window created instead of a tab.
                    windowProperties = windowProperties || 'menubar=yes,location=yes,width=700, height=400, scrollbars=yes, resizable= yes';
                    windowTarget = windowTarget || "_blank";

                    // Create a new window.
                    var windowHandle = windowProperties ?
                        window.open(url, windowTarget, windowProperties) :
                        window.open(url, windowTarget);

                    if (null === windowHandle) {
                        alert("You have a popup blocker enabled. Please allow popups for " + location.protocol + "//" + location.host);
                    }
                    else {
                        if ("_blank" !== windowTarget) {
                            // Store the window handle for reuse if a handle was specified.
                            windowHandles[windowTarget] = windowHandle;
                            windowHandles[windowTarget].focus();
                        }
                    }
                }
            }
        }
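
    No accepted fix is captured in this digest. One workaround that was commonly suggested at the time (treat it as an experiment, not a guarantee - behavior varies by Chrome version) is to "re-open" the window by its target name from inside the user-initiated click handler; window.open with an empty URL and an existing name returns the existing window without reloading it, and in some Chrome builds that raises it where a bare focus() does not:

        if (that.windowExists(windowTarget)) {
            // Re-acquire the handle by name; the empty URL leaves the
            // window's current page untouched.
            var existing = window.open('', windowTarget);
            existing.focus();
        }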


  • Weird duplicate IE div.

    - by Kyle Sevenoaks
    Here's a weird bug I've found: IE8 is duplicating my div, but only part of it. How it looks in IE8: And here's how it's meant to look in FF: And the HTML:

        <div id="roundbigbox">
            <p id="pro">Produkter</p>
            <table id="cart">
                <div id="titles">
                    <div id="thinlinecopy"></div>
                    <div id="varekodetext"><p>Varekode</p></div>
                    <div id="produkttext"><p>Produkt</p></div>
                    <div id="pristext"><p>Pris</p></div>
                    <div id="antalltext"><p>Antall</p></div>
                    <div id="pristotaltext"><p>Pris total</p></div>
                    <div id="sletttext"><p>Slett</p></div>
                    <div id="thinline"></div>
                </div>
                ...content...
                <div class="delete">
                    <a id="slett" href="/order/delete/1329?return=" title="Slett"><!--Slett--></a>
                </div>
        </div>

    CSS for FF:

        div #roundbigbox {
            background-image:url(../../upload/EW_p_og_L.png);
            background-position:top center;
            background-repeat:no-repeat;
            padding:5px;
            padding-top:10px;
            padding-bottom:0px;
            width:760px;
            height:1%;
            border-width:1px;
            border-color:#dddddd;
            border-radius:10px;
            -moz-border-radius:10px;
            -webkit-border-radius:10px;
            z-index:1;
            position:relative;
            overflow:hidden;
            margin:0;
            margin-bottom:10px;
        }

    CSS for IE:

        div #roundbigbox {
            background-image:url(../../upload/EW_p_og_L.png);
            background-position:top center;
            background-repeat:no-repeat;
            padding:5px;
            padding-right:50px;
            padding-top:10px;
            padding-bottom:-20px;
            width:760px;
            height:1%;
            border-width:1px;
            border-color:#dddddd;
            z-index:1;
            position:relative;
            overflow:hidden;
            margin:0;
            margin-bottom:10px;
        }

    What could cause such a weird bug? It's not duplicated in the HTML. I am stumped! Thanks for any responses.
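
    The digest doesn't include an answer, but one observation worth making: the markup nests <div> elements directly inside <table id="cart"> without any <tr>/<td> around them, which is invalid HTML. Browsers recover from that by hoisting the stray content out of the table, and each does it differently, a classic source of IE-only duplication and layout oddities. A restructuring along these lines (illustrative only) keeps the table contents legal:

        <div id="roundbigbox">
            <p id="pro">Produkter</p>
            <div id="titles">
                <!-- the column-heading divs, now outside the table -->
            </div>
            <table id="cart">
                <!-- only <tr>/<td> rows in here -->
            </table>
            <div class="delete">
                <a id="slett" href="/order/delete/1329?return=" title="Slett"><!--Slett--></a>
            </div>
        </div>

    As an aside, padding-bottom:-20px in the IE stylesheet is invalid (padding cannot be negative) and will simply be ignored.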


  • IE positioning problems

    - by Kyle Sevenoaks
    In every browser but IE, on euroworker.no/order the little green arrow under the word "produkt" sits on top of my div container. Why in the world does this not work in IE? The thing is, it works on two pages out of four in IE, but on all four in other browsers. CSS for the top progress indicator:

        #checkoutProgress {
            width: auto;
            padding-top: 1em;
            height: 30px;
            overflow:hidden;
            font-family: "Helvetica";
            font-size:18px;
            float:left;
            /* margin-bottom:22px;*/
            margin-left:0px;
        }

        #checkoutProgress a {
            padding: 10px;
            /*border-width: 2px; margin-right: 20px;*/
            text-decoration:none;
            font-size: 17.26px;
            color:#dadada;
            text-transform:uppercase;
        }

        #checkoutProgress a:hover {
            padding: 10px;
            /*border-width: 2px; margin-right: 20px;*/
            text-decoration:none;
            font-size: 17.26px;
            color:#818072;
        }

        /* completed steps */
        #checkoutProgress a.completed {
            border-color: #70D66D;
        }

        /* current step */
        #checkoutProgress a.active {
            /* border-color: #ADD8E6;*/
            font-weight: bold;
            /*background-color: #fffccc; border-color: #ADD8E6;*/
            background-image:url(../../upload/urhere_arr.png);
            background-position:bottom center;
            /*padding-left:15px;*/
            color:#a3a398;
        }

    For the box:

        div #roundbigbox {
            background-image:url(../../upload/EW_p_og_L.png);
            background-position:top center;
            background-repeat:no-repeat;
            padding:5px;
            padding-top:10px;
            padding-bottom:0px;
            width:760px;
            height:1%;
            border-width:1px;
            border-color:#dddddd;
            border-radius:10px;
            -moz-border-radius:10px;
            -webkit-border-radius:10px;
            z-index:1;
            position:relative;
            overflow:hidden;
            margin:0;
            margin-bottom:10px;
        }

    Fieldset CSS:

        fieldset.container {
            border: 0;
        }

    And some HTML:

        <fieldset class="container">
            <div id="checkoutProgress" class="progressCart">
                <a href="/order" class=" active" id="progressCart"><span>Produkt</span></span></a>
                <a href="/checkout/selectAddress" class="completed " id="progressAddress"><span>kunde info</span></a>
                <a href="/checkout/shipping" class="completed " id="progressShipping"><span>Leveringsmåte</span></a>
                <a href="/checkout/pay" class="" id="progressPayment"><span>Betaling & Fullfør</span><</a>
            </div>
        </fieldset>
        </div>
        <form action="/order... >
            <input type="hidden"...>
            <div id="roundbigbox">
                <p id="pro">Produkter</p>
                More content
            </div>
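
    Again, no answer is preserved here, but the quoted markup itself contains errors that can push IE's more fragile parser off the rails: a stray extra </span> after Produkt, a stray < before the closing </a> on the last link, and an unescaped & that should be &amp;. Cleaning those up is the first thing to try; illustratively, the first and last links would become:

        <a href="/order" class="active" id="progressCart"><span>Produkt</span></a>
        <a href="/checkout/pay" class="" id="progressPayment"><span>Betaling &amp; Fullfør</span></a>

    If the arrow still clips in IE after that, the next suspects are the overflow:hidden on #checkoutProgress (which crops a background image positioned at the element's bottom edge) and the fixed height: 30px, which leaves no room for the arrow below the text.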


  • Qt 4.6 Adding objects and sub-objects to QWebView window object (C++ & Javascript)

    - by Cor
    I am working with Qt's QWebView and have been finding lots of great uses for adding to the WebKit window object. One thing I would like to do is nested objects... for instance, in JavaScript I can do:

        var api = new Object;
        api.os = new Object;
        api.os.foo = function(){}
        api.window = new Object();
        api.window.bar = function(){}

    Obviously, in most cases this would be done through a more OO js-framework. This results in a tidy structure of:

        >>>api
        -------------------------------------------------------
        - api Object {os=Object, more... }
          - os Object {}
              foo function()
          - win Object {}
              bar function()
        -------------------------------------------------------

    Right now I'm able to extend the window object with all of the Qt C++ methods and signals I need, but they all seem to have to be root children of "window". This is forcing me to write a js wrapper object to get the hierarchy that I want in the DOM:

        >>>api
        -------------------------------------------------------
        - api Object {os=function, more... }
            os_foo function()
            win_bar function()
        -------------------------------------------------------

    This is a pretty simplified example... I want objects for parameters, etc. Does anyone know of a way to pass a child object with the object that extends the WebFrame's window object? Here's some example code of how I'm adding the object:

    mainwindow.h

        #ifndef MAINWINDOW_H
        #define MAINWINDOW_H

        #include <QtGui/QMainWindow>
        #include <QWebFrame>
        #include "happyapi.h"

        class QWebView;
        class QWebFrame;
        QT_BEGIN_NAMESPACE

        class MainWindow : public QMainWindow
        {
            Q_OBJECT

        public:
            MainWindow(QWidget *parent = 0);

        private slots:
            void attachWindowObject();
            void bluesBros();

        private:
            QWebView *view;
            HappyApi *api;
            QWebFrame *frame;
        };

        #endif // MAINWINDOW_H

    mainwindow.cpp

        #include <QDebug>
        #include <QtGui>
        #include <QWebView>
        #include <QWebPage>
        #include "mainwindow.h"
        #include "happyapi.h"

        MainWindow::MainWindow(QWidget *parent)
            : QMainWindow(parent)
        {
            view = new QWebView(this);
            view->load(QUrl("file:///Q:/example.htm"));

            api = new HappyApi(this);
            QWebPage *page = view->page();
            frame = page->mainFrame();

            attachWindowObject();
            connect(frame, SIGNAL(javaScriptWindowObjectCleared()),
                    this, SLOT(attachWindowObject()));
            connect(api, SIGNAL(win_bar()), this, SLOT(bluesBros()));

            setCentralWidget(view);
        }

        void MainWindow::attachWindowObject()
        {
            frame->addToJavaScriptWindowObject(QString("api"), api);
        }

        void MainWindow::bluesBros()
        {
            qDebug() << "foo and bar are getting the band back together!";
        }

    happyapi.h

        #ifndef HAPPYAPI_H
        #define HAPPYAPI_H

        #include <QObject>

        class HappyApi : public QObject
        {
            Q_OBJECT

        public:
            HappyApi(QObject *parent);

        public slots:
            void os_foo();

        signals:
            void win_bar();
        };

        #endif // HAPPYAPI_H

    happyapi.cpp

        #include <QDebug>
        #include "happyapi.h"

        HappyApi::HappyApi(QObject *parent)
            : QObject(parent)
        {
        }

        void HappyApi::os_foo()
        {
            qDebug() << "foo called, it wants its bar back";
        }

    I'm reasonably new to C++ programming (coming from a web and Python background). Hopefully this example will serve not only to help other new users, but to be something interesting for a more experienced C++ programmer to elaborate on. Thanks for any assistance that can be provided. :)
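
    For what it's worth, one way to get the nesting without a JavaScript wrapper (a sketch, not from the original question; the OsApi class and property name are invented) relies on the QtWebKit bridge exposing QObject* properties of an exposed object as nested script objects:

        #include <QDebug>
        #include <QObject>

        // Sub-object that will appear as "api.os" in JavaScript.
        class OsApi : public QObject
        {
            Q_OBJECT
        public:
            explicit OsApi(QObject *parent = 0) : QObject(parent) {}
        public slots:
            void foo() { qDebug() << "os.foo called"; }
        };

        class HappyApi : public QObject
        {
            Q_OBJECT
            // QObject* properties are wrapped as child script objects
            // by the QtWebKit bridge.
            Q_PROPERTY(QObject* os READ os)
        public:
            explicit HappyApi(QObject *parent = 0)
                : QObject(parent), m_os(new OsApi(this)) {}
            QObject *os() const { return m_os; }
        private:
            OsApi *m_os;
        };

        // After frame->addToJavaScriptWindowObject("api", api),
        // page JavaScript can call: api.os.foo();

    Each nesting level is its own QObject with its own slots and signals, so the same pattern extends to api.window.bar and deeper hierarchies.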

    Read the article

  • CSS: Labels in table columns

    - by hello
    Hello.

    BACKGROUND: I would like to have small labels in the columns of a table. I'm using some implemented parts of HTML5/CSS3 in my project, and this section specifically is for mobile devices. While both facts are not necessarily relevant, the bottom line is that I don't have to support Internet Explorer, or even Firefox for that matter (just WebKit).

    THE PROBLEM: With my current CSS approach, the vertical padding of the cell comes from the <span> element (set to display: block with top/bottom margins), which contains the "value" of the column. As a result, there's no padding when the <span> is empty or missing (no value), and the label is not in place. The "full" columns should give you the idea of where I want the labels to be, even if there's no value and the <span> is not there. I realize that I could use a non-breaking space, but I would really like to avoid it.

    I wonder if any of you have a fix or a better way to do this? The current code is below. Thank you for any help.

        <!DOCTYPE html>
        <html lang="en">
        <head>
            <title>ah</title>
            <style>
                body { width: 320px; }

                /* TABLE */
                table {
                    width: 100%;
                    border-collapse: collapse;
                    font-family: arial;
                }
                th, td {
                    border: 1px solid #ccc;
                    border-width: 0px 0px 1px 1px;
                }
                th:last-child, td:last-child {
                    border-right-width: 1px;
                }
                tr:first-child th {
                    border-top-width: 1px;
                    background: #efefef;
                }

                /* RELEVANT STUFF */
                td { padding: 3px; }
                td sup { display: block; }
                td span {
                    display: block;
                    margin: 3px 0px;
                    text-align: center;
                }
            </style>
        </head>
        <body>
            <table>
                <tr>
                    <th colspan="3">something</th>
                </tr>
                <tr>
                    <td><sup>some label</sup><span>any content</span></td>
                    <td><sup>some label</sup><span>any content</span></td>
                    <td><sup>some label</sup><span></span></td><!-- No content, just a label -->
                </tr>
            </table>
        </body>
        </html>
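    One possible fix (a sketch, not tested against this exact layout): reserve the value row's height on the <span> itself with min-height, so an empty value no longer collapses the cell:

        /* Sketch: keep empty value cells as tall as full ones.
           1em approximates one text line at the inherited font size. */
        td span {
            display: block;
            margin: 3px 0px;
            text-align: center;
            min-height: 1em;
        }

    If the <span> can be missing entirely, the same idea applied to the <td> (a min-height covering label plus value) avoids depending on the span at all.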

    Read the article

  • How can I get Firefox to update background-color on a:hover *before* a javascript routine is run?

    - by Rob
    I'm having a Firefox-specific issue with a script I wrote to create 3D layouts. The correct behavior is that the script pulls the background-color from an element and then uses that color to draw on the canvas. When a user mouses over a link and the background-color changes to the :hover rule, the color drawn on the canvas changes as well. When the user mouses out, the color should revert to the non-hover color. This works as expected in WebKit browsers and Opera, but it seems Firefox doesn't update the CSS background-color immediately after a mouseout event occurs, so the current background-color doesn't get drawn when a mouseout isn't followed by another event that calls the draw() routine. It works just fine in Opera, Chrome, and Safari. How can I get Firefox to cooperate? I'm including the code that I believe is most relevant to my problem. Any advice on how to fix this and get a consistent effect would be very helpful.

        function drawFace(coord, mid, popColor, gs, x1, x2, side) {
            /* Gradients in our case run either up/down or left/right. We have
               two algorithms depending on whether or not it's a sideways-facing
               piece. Rather than parse the "rgb(r,g,b)" string (popColor)
               retrieved from elsewhere, it is simply offset with the gs variable
               to give the illusion that it starts at a darker color. */
            var canvas = document.getElementById('depth');

            // This is for excanvas.js
            var G_vmlCanvasManager;
            if (G_vmlCanvasManager != undefined) { // ie IE
                G_vmlCanvasManager.initElement(canvas);
            }

            // Init canvas
            if (canvas.getContext) {
                var ctx = canvas.getContext('2d');
                var lineargradient;
                if (side)
                    lineargradient = ctx.createLinearGradient(coord[x1][0] + gs, mid[1], mid[0], mid[1]);
                else
                    lineargradient = ctx.createLinearGradient(coord[0][0], coord[2][1] + gs, coord[0][0], mid[1]);
                lineargradient.addColorStop(0, popColor);
                lineargradient.addColorStop(1, 'black');
                ctx.fillStyle = lineargradient;
                ctx.beginPath();
                // Draw from one corner to the midpoint, then to the other
                // corner, and apply a stroke and a fill.
                ctx.moveTo(coord[x1][0], coord[x1][1]);
                ctx.lineTo(mid[0], mid[1]);
                ctx.lineTo(coord[x2][0], coord[x2][1]);
                ctx.stroke();
                ctx.fill();
            }
        }

        function draw(e) {
            var arr = new Array();
            var i = 0;
            var mid = new Array(2);
            $(".pop").each(function() {
                mid[0] = Math.round($(document).width() / 2);
                mid[1] = Math.round($(document).height() / 2);
                arr[arr.length++] = new getElemProperties(this, mid);
                i++;
            });
            arr.sort(sortByDistance);
            clearCanvas();
            for (var a = 0; a < i; a++) {
                /* In the following conditional statements, we're testing to see
                   which direction faces should be drawn, based on a 1-point
                   perspective drawn from the midpoint. In the first statement,
                   we're testing to see if the lower-left-hand corner coord[3]
                   is higher on the screen than the midpoint. If so, we set its
                   gradient starting position to start at a point in space
                   60 pixels higher (-60) than the actual side, and we also
                   declare which corners make up our face, in this case the
                   lower two corners, coord[3] and coord[2]. */
                if (arr[a].bottomFace) drawFace(arr[a].coord, mid, arr[a].popColor, -60, 3, 2);
                if (arr[a].topFace)    drawFace(arr[a].coord, mid, arr[a].popColor, 60, 0, 1);
                if (arr[a].leftFace)   drawFace(arr[a].coord, mid, arr[a].popColor, 60, 0, 3, true);
                if (arr[a].rightFace)  drawFace(arr[a].coord, mid, arr[a].popColor, -60, 1, 2, true);
            }
        }

        $("a.pop").bind("mouseenter mouseleave focusin focusout", draw);

    If you need to see the effect in action, or if you want the full JavaScript code, you can check it out here: http://www.robnixondesigns.com/strangematter/
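    One workaround worth trying (a sketch, not verified in the original layout): defer the redraw by one tick with setTimeout, replacing the direct binding at the bottom of the snippet, so Firefox has finished applying the post-:hover computed style before the canvas samples it:

        // Sketch: let the style system settle before redrawing, so the
        // sampled background-color reflects the non-:hover rule again.
        $("a.pop").bind("mouseenter mouseleave focusin focusout", function (e) {
            var self = this;
            setTimeout(function () {
                draw.call(self, e); // reuse the existing draw() routine
            }, 0);
        });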

    Read the article

  • Cordova - Scrolling with a fixed header and footer (ios)

    - by Samu Singh
    I'm using Cordova (PhoneGap) and Bootstrap to make a mobile application, testing on iOS for now. I'm getting an issue with a fixed header bar and footer bar around scrollable content in the middle. When tapping to scroll, the header/footer bar moves down or up with the content but then snaps back into place as soon as the scrolling completes. If I use -webkit-overflow-scrolling: touch; the bars stay put, but it makes scrolling through the content really awkward, and if you scroll past the end, it only scrolls the header or footer (with elastic overflow) until you stop for a second.

    Here's my HTML for the header/footer bars:

        <div id="headerBar">
            <div class="container-fluid" style="background-color: #1569C7">
                <div class="row">
                    <div class="col-xs-3 text-left">
                        <button id="logoutButton" type="button" class="btn btn-default">
                            Log Out
                        </button>
                        <button type="button" class="btn btn-default" id="restoreQuestionFeedButton">
                            <span class="glyphicon glyphicon-chevron-left"></span>
                        </button>
                    </div>
                    <div class="col-xs-6 text-center" style="height: 55px">
                        <strong id="usernameText"></strong>
                    </div>
                    <div class="col-xs-3 text-right">
                        <button id="oldCreatQuestionButton" type="button" class="btn btn-default">
                            <span class="glyphicon glyphicon-plus"></span>
                        </button>
                    </div>
                </div>
            </div>
        </div>

        <div id="footerBar">
            <div class="container-fluid" style="padding: 0">
                <div class="row text-center">
                    <button id="createQuestionButton" type="button" class="btn btn-default footerButton">
                        <span class="glyphicon glyphicon-plus"></span>
                        <strong>Ask a new free question!</strong>
                    </button>
                </div>
            </div>
        </div>

    And here is the related CSS:

        #headerBar {
            position: fixed;
            z-index: 100;
            top: 0;
            left: 0;
            width: 100%;
            background-color: #1569C7;
        }

        #footerBar {
            position: fixed;
            z-index: 100;
            bottom: 0;
            left: 0;
            width: 100%;
            background-color: #1569C7 !important;
        }
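    A common pattern for this in iOS WebViews (a sketch; #content is a hypothetical wrapper around the scrollable middle section, and the 55px heights are taken from or assumed from the markup above): stop the body from scrolling at all and scroll an absolutely positioned container between the bars instead. The bars then sit outside the scrolling layer, so neither the snap-back nor the elastic overflow can drag them:

        /* Sketch: the page itself never scrolls; only #content does. */
        html, body {
            height: 100%;
            overflow: hidden; /* keeps body scrolling from moving the fixed bars */
        }

        #content {
            position: absolute;
            top: 55px;    /* header height, per the markup above */
            bottom: 55px; /* assumed footer height */
            left: 0;
            right: 0;
            overflow-y: auto;
            -webkit-overflow-scrolling: touch; /* momentum scrolling, now scoped
                                                  to the content layer only */
        }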

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >