Search Results

Search found 17157 results on 687 pages for 'cloud services'.


  • Windows Azure XDrive

    - by kaleidoscope
    Windows Azure XDrive allows your Windows Azure compute applications running in the cloud to use the existing NTFS APIs to store their data on a durable drive. The drive is backed by a Windows Azure Page Blob formatted as a single NTFS volume VHD. The Page Blob can be mounted as a drive within the Windows Azure cloud, where all non-buffered/flushed NTFS writes are made durable to the drive (the Page Blob). If the application using the drive crashes, the data remains persistent in the Page Blob and can be remounted when the application instance is restarted, or mounted elsewhere for a different application instance to use. Since the drive is an NTFS-formatted Page Blob, you can also use the standard blob interfaces to upload and download your NTFS VHDs to and from the cloud. More details can be found at: http://microsoftpdc.com/Sessions/SVC14 Anish, S

  • New Whitepaper: Deploying E-Business Suite on Exadata and Exalogic

    - by Elke Phelps (Oracle Development)
    Our E-Business Suite Performance Team recently published a new whitepaper to assist you with deploying E-Business Suite on the Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine, also referred to as Exastack. If you are considering a migration to Exastack, this new whitepaper will help you understand sizing requirements, deployment standards and migration strategies: Deploying Oracle E-Business Suite on Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine (Note 1460742.1). This whitepaper covers the following topics:
    - Scalability and Sizing Examples - provides performance benchmark analysis with concurrent user counts, scaling analysis and sizing recommendations
    - Deployment Standards - includes recommendations for deploying the various components of the E-Business Suite architecture on Exastack
    - Migration Standards and Guidelines - includes an overview of methods for migrating from commodity hardware to Exastack
    References: our Maximum Availability Architecture (MAA) team has a number of whitepapers that provide additional information regarding Oracle E-Business Suite on the Oracle Exadata Database Machine. Their library of whitepapers may be found here: MAA Best Practices - Oracle Applications Unlimited
    Related Articles: Running E-Business Suite on Exadata V2; Running Oracle E-Business Suite on Exalogic Elastic Cloud

  • invite: Oracle Fusion Applications Partner Update Webcast

    - by mseika
    Oracle Fusion Applications: Thursday's Partner Updates. In order to keep you up to date with partner-specific news and information regarding Oracle Fusion Applications, we are expanding our Fusion Applications Webcast Series to include these additional Thursday sessions. All sessions will be recorded and replays will be posted to this Oracle PartnerNetwork page. Please mark your calendar for these new Fusion Partner Update sessions (click here for logistics and dial-in details for each webcast):
    - 11/29/12 - Win Cloud SFA with Fusion CRM: Sales Positioning
    - 12/6/12 - Win Cloud SFA with Fusion CRM: Fusion CRM against SFDC
    - 12/13/12 - Implementing Fusion Applications: ERP Cloud Services, Back Office Solutions that Keep You in Front
    - 12/20/12 - Understanding Fusion Supply Chain Management (SCM) Opportunities
    PLEASE NOTE: This webcast series is for Oracle Partners and Oracle Employees ONLY.

  • WebLogic Server 12c Launch Event - 1 December 2011

    - by Chuck Speaks
    Introducing Oracle WebLogic Server 12c, the #1 Application Server Across Conventional and Cloud Environments. Please join Hasan Rizvi on December 1 as he unveils the next generation of the industry's #1 application server and cornerstone of Oracle's cloud application foundation: Oracle WebLogic Server 12c. Hear, with your fellow IT managers, architects, and developers, how the new release of Oracle WebLogic Server is:
    - Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform
    - Built to drive higher value for your current infrastructure and significantly reduce development time and cost
    - Optimized to run your solutions for Java Platform, Enterprise Edition (Java EE); Oracle Fusion Middleware; and Oracle Fusion Applications
    - Enhanced with transformational platforms and technologies such as Java EE 6, Oracle's Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder
    Don't miss this online launch event. Register now.

  • Oracle Fusion Applications Partner Update Webcasts

    - by Richard Lefebvre
    Every Thursday from November 29th to December 20th! In order to keep you up to date with partner-specific news and information regarding Oracle Fusion Applications, we are expanding our Fusion Applications Webcast Series to include these additional Thursday sessions. All sessions will be recorded and replays will be posted to this Oracle PartnerNetwork page. Please mark your calendar for these new Fusion Partner Update sessions (click here for logistics and dial-in details for each webcast):
    - 11/29/12 - Win Cloud SFA with Fusion CRM: Sales Positioning
    - 12/6/12 - Win Cloud SFA with Fusion CRM: Fusion CRM against SFDC
    - 12/13/12 - Implementing Fusion Applications: ERP Cloud Services, Back Office Solutions that Keep You in Front
    - 12/20/12 - Understanding Fusion Supply Chain Management (SCM) Opportunities
    PLEASE NOTE: This webcast series is for Oracle Partners and Oracle Employees ONLY.

  • Learn about CRM and CX at Oracle Days 2012

    - by Richard Lefebvre
    Oracle Day 2012 features learning tracks and sessions tailored for accelerating your business in today's environment. Oracle simplifies IT by investing in best-of-breed technologies at every layer of the technology stack and engineering them to work together so you can focus on driving your business forward. Throughout its history, Oracle has proved it can address the most complex IT challenges and solve the business problems of our customers. Discover Oracle's strategy for powering innovation in the areas of Cloud, Social, Mobile, Business Operations, Data Center Optimization, Big Data and Analytics. Oracle Day 2012 tracks:
    - Engine for Growth: The business for optimized data center
    - Powering innovation for your enterprise applications
    - Architect your cloud: A blueprint for cloud builders
    - See more, act faster: Powering innovation with analytics
    - Business operations: Powering business innovation
    - Customer Experience: Empowering people, powering brands
    Check out the agenda at your local event for more details.

  • AWS CloudFormation, Oracle Assembly Builder, Chef and Puppet

    - by llaszews
    I blogged about the differences and similarities between AWS CloudFormation and Oracle Assembly Builder for packaging your software stack for deployment/provisioning to the cloud. However, these tools do not deal with software stack versioning and configuration management. This is where tools like Chef and Puppet come into play. Puppet and Chef points of interest:
    1. They can be used in any cloud environment (Rackspace, private cloud, etc.).
    2. There is a debate about which is better. I am not going to get into that debate other than to say Puppet is more mature.
    3. AWS CloudFormation can integrate with both Chef and Puppet.
    A good blog on AWS CloudFormation and the need for something more: AWS CloudFormation
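
    Not from the original post, but to make the provisioning side concrete: below is a minimal sketch of launching a stack programmatically, assuming the modern boto3 client, configured AWS credentials, and a hypothetical local template file; per-instance configuration would then be handed off to Chef or Puppet.

    # Minimal sketch (assumptions: boto3 installed, AWS credentials configured,
    # "webstack.json" and the ConfigServer parameter are hypothetical).
    import boto3

    def launch_stack(stack_name, template_path):
        cloudformation = boto3.client("cloudformation")
        with open(template_path) as handle:
            template_body = handle.read()
        response = cloudformation.create_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            # A parameter could pass a Chef/Puppet server endpoint into the stack.
            Parameters=[{"ParameterKey": "ConfigServer",
                         "ParameterValue": "puppet.example.com"}],
        )
        return response["StackId"]

    if __name__ == "__main__":
        print(launch_stack("demo-stack", "webstack.json"))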

  • On-Premises to Office 365: Identity

    - by Sahil Malik
    "Run your business, not your mail server." I am not sure where I read that, but it makes so much sense! Every organization is moving to the cloud; some just haven't started their journey yet. One of the fastest and most compelling online cloud-based offerings is Office 365. Available in various SKUs, it gives you SharePoint, Lync, Exchange, and Office Professional as cloud-based services. Subscriptions range from as low as $2 per user per month to somewhere around $20 per user per month. Also, with SharePoint 2013, if you buy Office 365 subscriptions for your users, you don't need to buy CALs (Client Access Licenses) for on-premises use. Read the full article here.

  • Live Webcast: Introducing Oracle Identity Management 11gR2

    - by B Shashikumar
    Please join Oracle and customer executives for the launch of Oracle Identity Management 11gR2, the breakthrough technology that dramatically expands the reach of identity management to cloud and mobile environments. Until now, businesses have been forced to piece together different kinds of technology to get comprehensive identity protection. The latest release of Oracle Identity Management 11g changes all that. Only Oracle Identity Management 11gR2 allows you to:
    - Unlock the potential of cloud, mobile, and social applications
    - Streamline regulatory compliance and reduce risk
    - Improve quality of service and end-user satisfaction
    Don't leave your identity at the office. Take it with you on your phone, in the cloud, and across the social world. Register now for the interactive launch Webcast and don't miss this chance to have your questions answered by Oracle product experts. Date: Thursday, July 19, 2012. Time: 10am Pacific / 1pm Eastern.

  • Windows Azure: a 100% online DevCamp today from 3pm to 7pm, goodies to win and a 30-day free trial of the platform

    Windows Azure: a 100% online DevCamp on July 1st. From 3pm to 7pm, goodies to win, a 30-day free trial of Azure and a €150 discount during the trial. Microsoft is organizing a 100% online event around Windows Azure on July 1st. This "DevCamp" on the cloud platform for developers will run from 3pm to 7pm (but it will of course be possible to join at any time). In total, there will be 8 sessions of 30 minutes each, covering topics as varied as cloud BI (with SharePoint), identity management in Azure and Office 365, administering a hybrid cloud infrastructure, Big Data, and backends for multi-device apps.

  • Does any CMS natively support something like aquaBrowser/Vufind

    - by nus
    What I'm looking for is to set up a CMS website with a tag cloud/search system where, when you click a tag, it becomes a search filter and you get a new tag cloud that only shows tags from the articles that have the primary tag. This should let the user easily include or exclude tags from the search. I would also like to let the user filter on publication date using a slider... Check this for an example of how AquaBrowser does it (just search for something and play with the tag cloud): http://kidscatalog.columbuslibrary.org It's not exactly what I want and I don't want to use Flash, but I like the concept a lot. If something like this does not exist and I have to code it myself, which CMS is most recommended (e.g. easy to extend, already has tag clouds and search filters, ...)?
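
    Not part of the original question, but the drill-down behaviour being described boils down to a small filtering step; here is a minimal Python sketch (the data shapes are assumptions, and any CMS would supply its own article store):

    # Selecting a tag filters the articles; the next cloud is built only from
    # the tags of the articles that survived the filter.
    from collections import Counter

    def filter_articles(articles, selected_tags):
        """Keep articles that contain every selected tag."""
        return [a for a in articles if selected_tags <= set(a["tags"])]

    def build_tag_cloud(articles, selected_tags):
        """Count the remaining tags, excluding ones already used as filters."""
        cloud = Counter()
        for article in articles:
            cloud.update(t for t in article["tags"] if t not in selected_tags)
        return cloud

    articles = [
        {"title": "A", "tags": ["python", "search", "cms"]},
        {"title": "B", "tags": ["python", "cloud"]},
        {"title": "C", "tags": ["search", "cloud", "cms"]},
    ]
    selected = {"cms"}
    print(build_tag_cloud(filter_articles(articles, selected), selected))
    # Counter({'search': 2, 'python': 1, 'cloud': 1})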

  • Microsoft Releases Windows Server 2012

    Windows Server 2012 offers expanded virtualization capabilities and works with Windows Azure, the software company's cloud platform. It can deliver more than 200 public, private and hybrid cloud services. The goal seems to be to deliver any application on any cloud. Rand Morimoto, president of Microsoft partner Convergent Computing, sees a number of major selling points for Windows Server 2012. For instance, its deduplication features can save a company 40 to 50 percent in storage space. Its automated IT management capabilities enable it to manage a large number of virtual machines. It can eve...

  • how to use version control

    - by ptamzz
    I'm developing a website in PHP on localhost and, as modules of it get completed, I upload them to the cloud so that my friends can alpha test it. As I keep developing, I have lots of files and I lose track of which files I've edited or changed. I've heard of something called 'version control' to manage all of this, but I'm not sure how it works. So, my question is: is there an easy way/service/application available to track all the edits/changes/new files and manage the files as I develop the website? As soon as I'm done with a module, I want to upload it to the cloud (I'm using Amazon's cloud services). If something happens to the new files, I might want to get back to the old version. And maybe, in a click or two, see the files which I've edited or changed since the last upload?

  • Oracle Enterprise Manager 12c (EM12c): Cloud Management Overview (Japanese)

    - by Kumiko Fujita
    This Japanese-language overview describes Oracle Enterprise Manager 12c's private cloud management capabilities, spanning Infrastructure as a Service (IaaS) and Database as a Service (DBaaS), in three areas: 1. Consolidation Planner; 2. Cloud Management (with Oracle VM 3.0); 3. Chargeback and Trending. It links to a Cloud Management overview (PDF) and session replays (WMV / MP4).

  • A customer case study on Oracle Exadata (Japanese)

    - by mamoru.kobayashi
    A Japanese-language customer story about a large-scale deployment of Oracle Exadata. The deployment is presented in session B-3 of the Oracle Cloud Computing Summit - Database & Exadata Day, June 15, 2010, 15:40-16:30.

  • ArchBeat Facebook Friday: Top 10 Shared Links - May 30 - June 5, 2014

    - by OTN ArchBeat
    The list below comprises the Top 10 most popular articles, blog posts, videos, and other content shared over the last seven days with the more than 5,100 fans of the OTN ArchBeat Facebook Page.
    - What is REST? | Maarten Smeets - "Most Middleware developers will encounter RESTful services," says Oracle SOA / BPM / Java integration specialist Maarten Smeets. "It is good to understand what they are, what they should be and how they work." His extensive post will help you achieve that understanding.
    - Integrating with Fusion Applications using SOAP web services and REST APIs | Arvind Srinivasamoorth - This article, part one of Arvind Srinivasamoorth's two-part series on integrating with Fusion Applications using SOAP web services and REST APIs, shows you how to identify the Fusion Applications SOAP web service to be invoked.
    - Oracle Technology Network | Architect Community - Have you visited the OTN Solution Architect homepage lately? I've just updated it with information about the big OTN Virtual Tech Summit on July 9, plus the latest OTN tech articles, and a fresh list of community videos and podcasts. Check it out!
    - Starting and Stopping a Java EE Environment when using Oracle WebLogic | Rene van Wijk - Oracle ACE Director and Oracle Fusion Middleware specialist Rene van Wijk explores ways to simplify the life-cycle management of a Java EE environment through the use of scripts developed with WebLogic Scripting Tool and Linux Bash.
    - Application Composer Series: Where and When to use Groovy | Richard Bingham - Richard Bingham describes his post as "more of a reference than an article." The post comprises a table that highlights where you can add your own custom logic via Groovy code and when you might use the various features.
    - Kscope 2014: HFM Metadata Diagnostics | Eric Erikson - Oracle Certified Hyperion Financial Management Specialist Eric Erikson will present three sessions at ODTUG Kscope 2014, June 22-26 in Seattle. Why should you care? Watch the video.
    - Tuning Asynchronous Web Services in Fusion Applications | Jian Liang - This article, the fourth in solution architect Jian Liang's five-part series on Fusion Applications and asynchronous Web Services, shows you how to conduct performance tuning of the asynchronous web services in relation to Fusion Applications.
    - IDM FA Integration Flows | Thiago Leoncio - Fusion Applications uses Oracle Identity Management for its identity store and policy store by default. This article by solution architect Thiago Leoncio explains how user and role flows work from different points of view, using key IDM products for each flow in detail.
    - GoldenGate and Oracle Data Integrator - A Perfect Match in 12c... Part 1: Getting Started | Michael Rainey - Michael Rainey has already written extensively about integration between Oracle Data Integrator and GoldenGate, but he's not done. "With the release of the 12c versions of ODI and GoldenGate last October, and a soon-to-be-updated reference architecture, it's time to write a few posts on the subject again," he says. Here's the first of those posts.
    - Video: Kscope 2014 Preview: Tim Tow on Essbase Java API and ODTUG Community - Oracle ACE Director and ODTUG board member Tim Tow talks about his Kscope 2014 sessions focused on the Essbase Java API in this short video interview.

  • How to move complete SharePoint Server 2007 from one box to another

    - by DipeshBhanani
    It was time for my first onsite client assignment on SharePoint. The client had a single-server production environment and wanted to upgrade the topology to a completely new SharePoint farm of three servers. So the task was to move the whole MOSS 2007 installation to the new server environment without impacting data. Those last three scary words, "... without impacting data...", were putting real pressure on me. Moreover, the SSP had to move as well, because additional information had been added for users beyond the AD import. I thought I only had to do a backup and restore; it seemed pretty easy at first. But because of those scary words, I checked the internet for guidance on this scenario and couldn't find anything except general guidance on moving servers on the Microsoft TechNet site. I promised myself I would start blogging with this post if I succeeded in this task. Well, it took me a long time to write it, but I finally made it. I hope it will be useful to anyone looking to move a SharePoint server.
    Before beginning the restore, make sure there is no difference in SharePoint versions between the source and destination servers. Also check that the state of the SharePoint installation is the same at backup time and at restore time (e.g. SharePoint-related service packs and patches, if any).
    The main tasks of the server move are as follows:
    1. Back up all the databases
    2. Install and configure SharePoint on the new environment
    3. Deploy all solutions (WSP files) globally to the destination server - this installs the features attached to the solutions
    4. Install all the custom features
    5. Deploy/copy custom pages/files which were added to the "12 Hive" folder later
    6. Restore the SSP
    7. Restore My Site
    8. Restore the other web applications
    Tasks 3 to 5 make sure the environment is configured well enough for the web applications to be restored successfully. The main and most complex task was restoring the SSP. I started restoring the SSP through Central Admin; after a while, the restore status was updated to "unsuccessful". "Damn it, what went wrong?" I thought, looking at the error detail down the page. I can't remember the error message, but I corrected it and restored again.
    In fact, once an SSP restore fails, unless you clean up everything related to it properly, the restore will keep failing again and again. I wanted to find the actual reason, so I cleaned, restored, cleaned, restored... I tried almost 5-6 times and finally succeeded. I had realized how pleasant it is to see the word "Successful" on the screen. Without wasting more of your time, here are the detailed steps for restoring the SSP:
    1. Delete the SSP with the following STSADM command:
       stsadm -o deletessp -title <SSP name> -deletedatabases -force
       e.g.: stsadm -o deletessp -title SharedServices1 -deletedatabases -force
    2. Check and delete the web application associated with the SSP if it exists, and remove its link.
    3. Check and remove the "Alternate Access Mapping" associated with the SSP if it exists.
    4. Check and delete the IIS site as well as the application pool associated with the SSP if they exist.
    5. Stop the following services:
       - Office SharePoint Server Search
       - Windows SharePoint Services Search
       - Windows SharePoint Services Help Search
    6. Delete all the databases associated with or related to the SSP from SQL Server.
    7. Reset IIS.
    8. Start the following services again:
       - Office SharePoint Server Search
       - Windows SharePoint Services Search
       - Windows SharePoint Services Help Search
    9. Restore the new SSP.
    After the SSP restore, everything else completed smoothly without any more issues. I made a few modifications to the sites for the server name change and, finally, the new environment was ready.

  • Build-time dependency resolving coming to Entity Framework. Now, how about those BI tools too?

    - by jamiet
    Three months ago I wrote a blog post entitled Some thoughts on Visual Studio database references and how they should be used for SQL Server BI, where I shared some thoughts on a feature available to database developers in Visual Studio 2010 that I would love to see added to SQL Server Integration Services (SSIS), Analysis Services (SSAS) and Reporting Services (SSRS). In there I said:
    "Over the past few weeks I have been making heavy use of the Database tools in Visual Studio 2010 and one of the features that has most impressed me has been database references. Database references allow you to have stored procedures in your database project that refer to objects (tables, views, stored procedures etc...) that exist in other database projects and hence when you build your database project it is able to resolve those references. It occurred to me that similar functionality would be incredibly useful for SQL Server Integration Services (SSIS), Analysis Services (SSAS) & Reporting Services (SSRS) projects. After all, reports, packages and data source views are rife with references to database objects - why shouldn't we be able to have design-time dependency checking in our BI projects the same way that database and .Net developers do?"
    In that blog post I shared links to three Connect submissions where I requested this feature be added to SSIS, SSAS & SSRS. In addition I also submitted a request that the feature be extended to .Net projects so that any reference to a database object in a .Net assembly can be resolved at build time. That Connect submission is at [Entity FX] Use database references to constrain the EDM, and overnight it received this comment from Microsoft: "We have been working on this feature for a while and and will be available soon". This is really good news - it improves the Microsoft developer ecosystem by ensuring invalid references to database objects get caught at build time (ideally as part of a Continuous Integration build) rather than at run time. [Hopefully it might nip this code-first nonsense in the bud too (Ooo... way to incite flame comments :) ).] If you want to see this feature in action then check out a video from TechEd Europe last month entitled SQL Server Developer Tools Code-named "Juneau" where it is demo'd by Lance Delano and Tim Laverty.
    The point of this blog post though is not just to draw attention to this forthcoming feature for .Net developers, it is to ask you to petition Microsoft to get this feature added to SSIS/SSAS/SSRS too. After all, we already know (from the video above) that the feature is coming to this new code-named Juneau development environment, plus we also know that Juneau will be the development environment for SSIS/SSAS/SSRS as well - is it really much of a stretch to expect the BI tools to have access to this great feature too? I don't think so, and if you agree with me then I urge you to vote and add a comment to the Connect submissions that are requesting this feature. They are at:
    - [SSAS] Declare Object Dependancies
    - [SSRS] Declare Object Dependancies
    - [SSIS] Declare Object Dependancies (Update: apparently someone at Microsoft has deemed it necessary to set this to private and I am not able to change it back even though I submitted it. You can still vote on the other two though.)
    Let's close that SQL Developer Gap!
    @Jamiet

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has 2 different rotations. One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against that service along with the number of requests and its average request time. This information can provide a good start in troubleshooting performance issues or tracking down a particular issue.
    >requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****
    To change the default rotation or size of output, these can be set as configuration variables for the server:
    - RequestAuditIntervalSeconds1 - Used for the shorter of the two summary intervals (default is 120 seconds)
    - RequestAuditIntervalSeconds2 - Used for the longer of the two summary intervals (default is 3600 seconds)
    - RequestAuditListDepth1 - Number of services listed for the first request audit summary interval (default is 5)
    - RequestAuditListDepth2 - Number of services listed for the second request audit summary interval (default is 20)
    If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page, and you will then get an audit entry for each and every service request:
    >requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)
    What's nice is that it reports who executed the service and how long that particular request took. In some cases, depending on the service, additional information relevant to that service is added to the tracing:
    >requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)
    You can even go into more detail and insert additional data into the tracing. You simply need to add this configuration variable with a comma-separated list of variables from the local data to insert:
    RequestAuditAdditionalVerboseFieldsList=TotalRows,path
    In this case, for any search results, the number of items the user found is traced:
    >requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...
    I also recently ran into a case where services were being called from a client through RIDC. All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server. So what we did was add a new field to the request audit list:
    RequestAuditAdditionalVerboseFieldsList=ClientToken
    Then, in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request. Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.
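
    As an aside (not from the original post), the verbose requestaudit entries are regular enough to post-process offline; a minimal sketch that aggregates per-service counts and latency from lines shaped like the examples above (the regular expression is an assumption, not an official format specification):

    import re
    from collections import defaultdict

    ENTRY = re.compile(r"IdcServer-\d+\s+(?P<service>\S+)\s+\[.*\]\s+(?P<secs>[\d.]+)\(secs\)")

    def summarize(lines):
        # Accumulate request count and total elapsed time per service name.
        totals = defaultdict(lambda: {"count": 0, "elapsed": 0.0})
        for line in lines:
            match = ENTRY.search(line)
            if not match:
                continue
            stats = totals[match.group("service")]
            stats["count"] += 1
            stats["elapsed"] += float(match.group("secs"))
        return {svc: dict(s, avg=s["elapsed"] / s["count"]) for svc, s in totals.items()}

    sample = [
        ">requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob] 0.0876(secs)",
        ">requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][StatusCode=0] 0.4696(secs)",
    ]
    print(summarize(sample))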

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc.). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:
    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache
    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
    Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly, since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
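
    The per-server budget from the first paragraph can be written out directly; the figures below are the article's own illustrative numbers, not measurements:

    # With WKA (no multicast), a cluster-wide message costs one unicast packet
    # per other member, so the ~70,000 packets/sec NIC budget assumed above is
    # divided across the cluster.
    PACKETS_PER_SEC = 70_000

    def cluster_wide_msgs_per_sec(cluster_members, members_per_machine=1):
        packets_per_message = cluster_members - 1                    # one unicast per other member
        per_member_budget = PACKETS_PER_SEC / members_per_machine    # NIC shared by co-located members
        return per_member_budget / packets_per_message

    print(cluster_wide_msgs_per_sec(500))      # ~140 msgs/sec, as in the article
    print(cluster_wide_msgs_per_sec(500, 10))  # ~14 msgs/sec with 10 members per machine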

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications all based on the Zend framework and centered on a complex business logic that all applications must have access to, even if they don't use all of it (this is easier than having multiple library folders for each application, as they are all linked together by a common core). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so that in 12 months' time when my time is up, anyone else coming in can have no problem extending on what I have produced. Example structure:
    applicationStorage\ (contains all applications and associated data)
    applicationStorage\Applications\ (contains the applications themselves)
    applicationStorage\Applications\external\ (application grouping folder; contains all external customer access applications)
    applicationStorage\Applications\external\site\ (main external customer access application)
    applicationStorage\Applications\external\site\Modules\
    applicationStorage\Applications\external\site\Config\
    applicationStorage\Applications\external\site\Layouts\
    applicationStorage\Applications\external\site\ZendExtended\ (contains extended Zend classes specific to this application; example: ZendExtended_Controller_Action extends Zend_Controller_Action)
    applicationStorage\Applications\external\mobile\ (mobile external customer access application; different workflow and limited capabilities compared to the full site version)
    applicationStorage\Applications\internal\ (application grouping folder; contains all internal company applications)
    applicationStorage\Applications\internal\site\ (main internal application)
    applicationStorage\Applications\internal\mobile\ (mobile access; has a different flow and limited abilities compared to the main site version)
    applicationStorage\Tests\ (contains PHP unit tests)
    applicationStorage\Library\
    applicationStorage\Library\Service\ (contains all business logic, services and the service locator; these are completely decoupled from the Zend framework and rely on the models' interfaces)
    applicationStorage\Library\Zend\ (Zend framework)
    applicationStorage\Library\Models\ (doesn't know about services but is linked to the Zend framework for DB operations; contains model interfaces and model data mappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)
    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.)
    The Library contains all "universal" code. The Zend framework is at the heart of all applications, but I wanted my business logic to be Zend-framework-independent. All model and mapper interfaces have no public references to Zend_Db but wrap around it privately. So my hope is that in the future I will be able to rewrite the mappers and db tables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend framework, if I want to move my business logic (services) to a non-Zend framework environment (maybe some other PHP framework).
    By using a service locator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. Example: all external applications will have a Service_Auth_External implementing Service_Auth_Interface registered; likewise, internal applications register Service_Auth_Internal implementing Service_Auth_Interface, and both are retrieved via Service_Locator::getService('Auth'). I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would greatly appreciate them. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and the different available services:
    \applicationstorage\Applications\internal\webservice
    \applicationstorage\Applications\external\webservice
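
    Purely as an illustration of the bootstrap-time registration described above (the project itself is PHP/Zend; this is a language-neutral Python sketch with hypothetical class names), each application registers its own implementation under the shared 'Auth' key and the business logic only ever asks the locator:

    # Hypothetical names; mirrors the Service_Locator::getService('Auth') idea in the post.
    class ServiceLocator:
        _services = {}

        @classmethod
        def register(cls, name, service):
            cls._services[name] = service

        @classmethod
        def get_service(cls, name):
            return cls._services[name]

    class ExternalAuth:
        def login(self, user):
            return "external login for " + user

    class InternalAuth:
        def login(self, user):
            return "internal login for " + user

    # The external application's bootstrap registers its own Auth implementation;
    # the internal application's bootstrap would register InternalAuth() instead.
    ServiceLocator.register("Auth", ExternalAuth())

    # The business logic is identical in both applications:
    print(ServiceLocator.get_service("Auth").login("alice"))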

  • Prepping a conference

    - by Laurent Bugnion
    I have had the chance to talk at many conferences these past few years, and came up with a way to prepare them which works really well for me. Most importantly, it would make it quite easy to overcome an emergency (for example if my laptop would suddenly lose data). The whole code as well as the slides and other documents are in the cloud. I also use source control for my demos, so that I always have the latest and the greatest, but also a history of changes I made to my demos. Finally I have a system of code snippets which works great, and I often had very positive remarks from the audience regarding that. Putting everything in the cloud The one thing I used to be the most scared of was a sudden crash of my laptop, and being unable to restore in time for a conference. Most conferences ask speakers to send slides a few days (or weeks…) in advance, but let's face it, we all have last minute changes to our talks and I always come in the conference with updated slides that I pass to the management team. The answer to that dilemma used to be working off memory sticks, and that worked not bad. However last year I started putting all the documents relating to a conference in a DropBox folder, and that works great too. Obviously DropBox works only if you have connectivity, so if I for instance update slides while on an international flight, I cannot save to the cloud. The obvious answer to that is to backup everything on a memory stick… but I have to admit, I have been trusting my luck and working off my laptop HD and then synching everything to the cloud after landing. Of course on some US national flights you get WiFi on board, so in that case it is even simpler :) Usually after the conference is done, I remove the files from DropBox and copy them to their "final destination". They are backed up from there to BackBlaze, the great online backup service I am using routinely (I currently have about 90GB of data in BackBlaze). Outlining the presentations I like to have a written outline of my presentations written somewhere. I keep it simple, just write the various sections of the presentation with timing. I guess it is a remnant of the time when I was a private pilot, and using checklists for flight preparation. For example: Demo about designability 15' (0:37) Switch to Blend Open MainPage.xaml Create a DataTemplate ... Here I can immediately see during the presentation if I am taking too much time for my demo (0:37 is where I need to be when I am done with this section of the presentation, and 15' is the time that this particular section takes). I keep these sections reasonable, I don't detail every step of the preparation. Typically I have one such section for every 10-15 minutes of my talks. Yes, I am timing my presentations. I keep adjusting these numbers when I rehearse, and this really helps to feel more confident during the presentations. This is especially important for presentations that are long, like my MIX11 demo which clocked at 57 minutes (I had a lot of stuff to show…). Such presentations are risky, because if anything goes wrong, you will have to cut stuff, so the answer to that is: Rehearse, rehearse and when you're done rehearsing, rehearse a little more. I also have a "Preparation" section where I outline what I need to do before a presentation. For instance: Preparation Reboot in VHD Make sure MSN and Twitter are not running. Open VS10 and load demo Open Blend and load demo Run the WP7 emulator ... 
I typically start preparing my laptop an hour before the talk, starting everything I need to start and then putting my laptop to sleep. Saving and printing the outline, Timing Printing is a real problem because it is really hard to find a printer at most conference venues, and also quite hard in hotels. To solve that, I simply write everything in OneNote (synched to the cloud, now you start to know what I like ;) and then I print it to a PDF (I use CutePDFWriter) that I save to my Kindle. During the presentation, I read the outline off the Kindle (I mostly just need a quick check to see how I am timing). For timing during the presentation, I use the free tool ChronoGPS on my Windows Phone 7, but of course any phone these days has a clock/chrono application. In some conferences, they even have timers that the presenters can see, but they tend to count down and I prefer to count up… so I just use my own :) Source control for demos For demos, I create a separate folder and use Mercurial as source control. Mercurial has the huge advantage (over SVN or TFS) to work offline too, so I can commit while on a plane, and all the history is saved. Then when I have connectivity I push everything to the cloud (I am using the fantastic Trunksapp.com for my private repositories). Here too the obvious downside is the risk of losing my last changes if my laptop crashes before I can push to the cloud, and here too the obvious answer would be to work from a memory stick… though I have to admit I didn't do that lately (except when I was writing Silverlight 4 Unleashed, where I was really paranoid…) And code snippets? I am one of these presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage, it really teaches you to touch type and be fast at it) but in the context of an audience, on a stage where it is often damn cold (an issue I had a lot in past conferences, air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know for you, but I really dislike seeing a presentation where the speaker uses the backspace key more often than others ;) To solve that, I like to have my code ready in snippets, and drag them to the screen. Then I can spend time explaining each code snippet, while highlighting portions of the code (always highlight what you talk about, the audience often doesn't even see the cursor and doesn't know where you are on the screen!) Over the years I have used various solutions for code snippets, and now I have one which works really well… if you take a few precautions! I use the Visual Studio Toolbox. Preparing the code snippets You can store code snippets in the Toolbox for anything, XAML, C# etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make it easier to find them, for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so any way to make it easier for you is going to be beneficial to the audience. To store a code snippet, I do the following: Open the final demo that you want to show to the audience in Visual Studio. In your code, select a snippet of code that you want to explain in particular. Make sure that the Visual Studio Toolbox is open (menu View, Toolbox or Ctrl-Alt-X). Drag the selected snippet from the code window to the toolbox. 
(if needed) drag the snippet to the correct location (for example between two other code snippets so that you can access it as you speak through the demo). Right click on the snippet and select Rename Item from the context menu. Select a meaningful name. For me I use the following conventions: If it is a method, I use the method's name. If it is not a whole method, I use a descriptive name. If it is the content of a method (i.e. the body only, without the method's signature), I use "-> MethodName". This reminds me during the presentation that this is only the body, and that I need to insert that into an existing signature. This is the case, for instance, when I use Visual Studio to automatically generate the members of an interface’s implementation; then I only need to insert my snippet inside the generated method body. Saving the snippets This is the most important!! It happened to me a few times that VS10 lost its settings. When that happens, the snippets are lost too! Yeah that really sucks, especially (as it happened once) when this is the case about an hour before a talk… Stress and sweat follows, not good conditions to start a talk in front of an audience believe me. Thankfully, saving snippets is really easy with the following steps: Select the menu Tools, Import and Export Settings. Select Export selected environment settings and press Next. Uncheck All Settings. Then expand General Settings and select Toolbox (only!). Press Next. Select your source control folder and save under a meaningful name (for instance Snippets.vssettings). Commit to source control and push to the cloud. By the way, this also has the advantage of applying source control to the snippets file (which is an XML file), so you get history for free on that file! Reimporting the snippets If VS loses its settings and you need to reimport the snippets, this can be done super easily and very fast. Make sure that the Toolbox is empty. When you import snippets, they are merged with existing ones, they do not replace the content of the Toolbox. Unless merging is really what you want, make sure that your Toolbox is clean before you import, it is really easier. Select the menu Tools, Import and Export Settings. Select Import selected environment settings and press Next. Select No, just import new settings and press Next. Press Browse and select the Snippets.vssettings file. Press Finish. Et voila, all your snippets appear again in the Toolbox. Whew, the worst was averted and you can start your demo without sweating! (I had to do that once literally 5 minutes before the start of a demo, while my laptop was already hooked to the projector, and it went just fine). What about special tools? When using special tools (for example beta versions of tools you have an early access to), or a special configuration of your laptop, things can get tricky because you cannot really be sure that you will get a laptop with the same tools and the same configuration at the conference. To solve that, I use the following precautions: I make my demos from a Virtual Hard Disk. The great John Papa made a very easy-to-follow web page where he explains how to create a VHD and install Win7 to it. This gives you the full power of your laptop (as fast as booting from the metal). For me, I have a basic configuration that I saved on a USB harddrive (Win7 plus drivers, basic settings for desktop, folder options, taskbar etc) and Visual Studio 2010 SP1 on it. When preparing, I start by copying this "basis VHD" to my laptop. 
I install additional tools and configurations. I save the VHD back to the USB harddrive in a different folder. This would allow me to reinstall my demo environment quite fast, for example in case of harddrive failure. Replace the harddrive, copy the VHD to it, configure the BCD and you can start. Unfortunately this only works if the laptop itself still works. In the worst case of total failure, my security is to back all the installers up: The installers I use are synched on all my laptops and backed up to BackBlaze. If the worst happens and my laptop is absolutely broken, I can download the installer from BackBlaze and install on another laptop. This of course takes some time, and if that happens 5 minutes before a presentation, well… I don't have an answer to that, except of course crossing my fingers. Still, all that gives me additional security. Conclusion Remember folks, talking to an audience, large or small, will make you nervous. Just ask Scott Hanselman :) The goal here is to create the best possible conditions for you, and to create an environment where everything is saved and easy to restore, where everything is well known and provides you with additional confidence. The cooler you feel before the presentation (and during ;)), the better your presentation will be. Here too, the goal is to provide the best user experience you can have, which in turn will make it more enjoyable for your audience! Happy presenting :) Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

  • Multiple Base Addresses and Multiple Endpoints in WCF

    - by mnhab
    I'm using two bindings, TCP and HTTP, and I want to expose mex data on both bindings. What I want is that the mexHttpBinding only exposes the HTTP services while the mexTcpBinding exposes the TCP services only. Or, is it possible to access the stats service only over the HTTP binding and the eventLogging service only over TCP? For example, for TCP I should only have:
    net.tcp://localhost:9001/ABC/mex
    net.tcp://localhost:9001/ABC/eventLogging
    And for HTTP:
    http://localhost:9002/ABC/stats
    http://localhost:9002/ABC/mex
    When I connect with either of the base addresses (using the WCF Test Client) I'm able to access all the services. For example, when I connect with net.tcp://localhost:9001/ABC I'm able to use the services which are offered on the HTTP binding. Why is that so?
    <system.serviceModel>
      <services>
        <service behaviorConfiguration="ABCServiceBehavior" name="ABC.Data.DataServiceWCF">
          <endpoint address="eventLogging" binding="netTcpBinding" contract="ABC.Campaign.IEventLoggingService" />
          <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" contract="IMetadataExchange" />
          <endpoint address="stats" binding="basicHttpBinding" contract="ABC.Data.IStatsService" />
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
          <host>
            <baseAddresses>
              <add baseAddress="net.tcp://localhost:9001/ABC" />
              <add baseAddress="http://localhost:9002/ABC" />
            </baseAddresses>
          </host>
        </service>
      </services>
      <behaviors>
        <serviceBehaviors>
          <behavior name="ABCServiceBehavior">
            <serviceMetadata httpGetEnabled="false" />
            <serviceDebug includeExceptionDetailInFaults="true" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>

  • Load Balance WCF and Share a Remote MSMQ for High Throughput

    - by BarDev
    After a ton of reading in books and on the web, I have noticed hints that WCF and MSMQ can be used to achieve high throughput. The information I have seen mentions using multiple WCF services in a farm that all read from a single MSMQ queue. The problem is that I have found paragraphs here and there that mention high throughput can be achieved this way, but I cannot seem to find a document on how to implement it. The following paragraph is an excerpt from the MSDN article Best Practices for Queued Communication (http://msdn.microsoft.com/en-us/library/ms731093.aspx): "To achieve higher throughput and availability, use a farm of WCF services that read from the queue. This requires that all of these services expose the same contract on the same endpoint. The farm approach works best for applications that have high production rates of messages because it enables a number of services to all read from the same queue." This is what I'm trying to solve. I have an intranet application where a client sends a request to a WCF service, but I want the ability to load balance the WCF services across multiple servers in a farm. I also want these WCF services in the farm to do transactional reads from a remote MSMQ queue when an item is available in the queue. If this is possible, one issue I have is that I do not understand the activation process WCF uses to retrieve messages from a remote queue. If this is possible, does anyone know of any articles or webcasts that explain it in detail? BarDev
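
    Not an answer from the original post, but the "farm reads from one queue" shape it quotes is the classic competing-consumers pattern; a language-neutral sketch follows (the real setup is WCF plus a transactional MSMQ queue, which this in-process Python version only approximates):

    # Several "servers" compete for messages on the same queue; whichever is
    # idle picks up the next item, which is what spreads the load across a farm.
    import queue
    import threading

    work = queue.Queue()

    def consumer(name):
        while True:
            item = work.get()
            if item is None:        # shutdown signal
                work.task_done()
                break
            print(name, "processed", item)
            work.task_done()

    workers = [threading.Thread(target=consumer, args=("svc-%d" % i,)) for i in range(3)]
    for w in workers:
        w.start()

    for message in range(10):
        work.put(message)
    for _ in workers:
        work.put(None)
    work.join()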
