Search Results

Search found 1553 results on 63 pages for 'tim perry'.

  • jQuery: resizing element cuts off parent's background

    - by Justine
    Hi, I've been trying to recreate an effect from this tutorial: http://jqueryfordesigners.com/jquery-look-tim-van-damme/ Unfortunately, I want a background image underneath, and because of the resizing going on in JavaScript, it gets resized and cut off as well, like so: http://dev.gentlecode.net/dotme/index-sample.html - you can view the source there to check the HTML, but the basic structure looks like this:

        <div class="page">
          <div class="container">
            div.header
            ul.nav
            div.main
          </div>
        </div>

    Here is my jQuery code:

        $('ul.nav').each(function() {
            var $links = $(this).find('a'),
                panelIds = $links.map(function() { return this.hash; }).get().join(","),
                $panels = $(panelIds),
                $panelWrapper = $panels.filter(':first').parent(),
                delay = 500;

            $panels.hide();

            $links.click(function() {
                var $link = $(this),
                    link = this;
                if ($link.is('.current')) {
                    return;
                }
                $links.removeClass('current');
                $link.addClass('current');
                $panels.animate({ opacity: 0 }, delay);
                $panelWrapper.animate({ height: 0 }, delay, function() {
                    var height = $panels.hide().filter(link.hash).show().css('opacity', 1).outerHeight();
                    $panelWrapper.animate({ height: height }, delay);
                });
            });

            var showtab = window.location.hash ? '[hash=' + window.location.hash + ']' : ':first';
            $links.filter(showtab).click();
        });

    In this example, $panelWrapper is the div.main, and it gets resized to fit the content of the tabs. The background is applied to the div.page, but because its child is getting resized, it resizes as well, cutting off the background image. It's hard to explain, so please look at the link above to see what I mean. I guess what I'm trying to ask is: is there a way to resize an element without resizing its parent? I tried setting the height and min-height of .page to 100% and 101%, but that didn't work. I tried making the background image fixed, but nada. It also happens if I add the background to the body or even html. Help?
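
    One possible workaround (a sketch, untested against the page above, and not part of the original code): since .page collapses because its auto height is derived from the animated child, you can pin a min-height on .page so the background never shrinks with the tabs. The pinPageHeight helper below is a hypothetical addition:

        // Sketch of a possible fix (untested): pin .page at the tallest height
        // seen so far, so the animated child can never pull the background up.
        var $page = $('.page');

        function pinPageHeight() {
            var current = $page.outerHeight();
            var pinned = $page.data('pinnedHeight') || 0;
            if (current > pinned) {
                // Remember and enforce the new maximum; the background image
                // on .page keeps its full area even while div.main animates.
                $page.data('pinnedHeight', current).css('min-height', current);
            }
        }

        pinPageHeight(); // once on load
        // ...and again after each tab animation completes, e.g.:
        // $panelWrapper.animate({ height: height }, delay, pinPageHeight);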

    Read the article

  • top Tweets SOA Partner Community – September 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - OracleBlogs: Oracle SOA Suite for healthcare integration Dashboard http://ow.ly/1mcJvp
    - SOA Community: Lost in Translation – Common Mistakes Interpreting Patterns – Mark Simpson, Griffiths-Waite @ SOA, Cloud & Service…
    - ServiceTechSymposium: Matthias Zieger, Accenture just added to the agenda to co-present: "Service Modeling & BPM Business Value Patterns" http://ow.ly/ddu7A
    - ServiceTechSymposium: Newly updated session title and abstract: "Big Data and its impact on SOA", by Demed L'Her, Oracle. http://ow.ly/diOq2
    - Deepak Arora: To PaaS or SaaS - the latest discussions with customers using SOA Suite - what are your thoughts #soa #soacommunity
    - SOA Community: top Tweets SOA Partner Community July 2012 - are you one of them? If yes please rt! https://soacommunity.wordpress.com/2012/08/28/top-tweets-soa-partner-community-august-2012/ … #soacommunity
    - Sandor Nieuwenhuijs: Checkout the BeNeLux Architectural Networking Event during Oracle Open World - meet your peers and the experts http://www.ddg-servicecenter.com/networkmanager/oow/architect/default.aspx …
    - SOA Community: top Tweets SOA Partner Community – August 2012 http://wp.me/p10C8u-uf
    - SOA Community: Follow SOA Community on facebook http://www.facebook.com/soacommunity #soacommunity
    - SOA Community: New Service to promote Your SOA & BPM events at http://oracle.com/events for SOA & BPM Specialized Partners Only! #soacommunity #opn #oracle
    - Jan van Zoggel: Hotel check, flight check, overview of sessions to visit check http://jvzoggel.wordpress.com/2012/08/27/soa-cloud-servicetech-symposium/ … I'm ready for SOA, Cloud & Service Technology Symposium
    - SOA Community: SOA & BPM Specialized Partners Only! New Service to Promote Your SOA & BPM Events at http://oracle.com/events http://wp.me/p10C8u-sH
    - SOA Community: Call for content for the next community newsletter. Do you want to publish your success & best practice? Send it @soacommunity #soacommunity
    - SOA Community: SOA Adoption in the Brazilian Ministry of Health - Case Study by Ricardo Puttini, University of Brasilia @ SOA, Cloud & Service…
    - Jan van Zoggel: Just registered for the 5th International SOA, Cloud & Service Technology Symposium in London. Looking forward to it. http://www.servicetechsymposium.com/
    - OTNArchBeat: Want to prepare for Oracle SOA Specialization? @t_winterberg offers a suggestion. http://pub.vitrue.com/5Hqu
    - OTNArchBeat: Oracle BPM enable BAM | @deltalounge http://pub.vitrue.com/BCwj
    - SOA Community: Presentations & Training material OFM Summer Camps & Impressions & Feedback http://wp.me/p10C8u-sF
    - Emiel Paasschens: Nice! Pdf document on how to use a #Oracle #SOA Suite Domain Value Map (DVM) in the OSB: http://bit.ly/RzyS9w #yam
    - OracleBlogs: Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL http://ow.ly/1m4lz7
    - demed: Free VIP pass for @techsymp if you are in London Sep. 24-25. Be the first one to retweet this and I'll DM you details! http://www.servicetechsymposium.com/speaker_bios.php?id=demed_lher …
    - Jan van Zoggel: blogpost: Oracle Service Bus duplicate message check using Oracle Coherence caching http://jvzoggel.wordpress.com/2012/08/20/osb-duplicate-message-with-coherence/ …
    - OTNArchBeat: Oracle Service Bus duplicate message check using Coherence | @jvzoggel http://pub.vitrue.com/ckY8
    - Oracle UPK & Tutor: Synaptis and Oracle Present: Leveraging UPK Throughout the Project Lifecycle: Leveraging UPK throughout the Proj... http://bit.ly/OS2Rbg
    - Rolando Carrasco: New entry @ oracleradio http://bit.ly/SEvwwS @soacommunity @oracleace How to identify duplicated messages on Oracle SOA SUITE?
    - SOA Community: Business Driven Development (BDD) Demo Now Available! http://wp.me/p10C8u-sf
    - OTNArchBeat: Installing Oracle SOA Suite10g on Oracle Enterprise Linux | @lonnekedikmans http://pub.vitrue.com/BEyD
    - OTNArchBeat: Best practices for Oracle real-time data integration | Frank Ohlhorst http://pub.vitrue.com/1fH1
    - ServiceTechSymposium: New OTN podcast featuring speakers Thomas Erl, Tim Hall and Demed L'Her just published. Tune into 1st 3 parts here: http://ow.ly/d1RRn
    - OTNArchBeat: SOA, Cloud, and Service Technologies - Part 4 of 4 - Best selling SOA author Thomas Erl talks about the latest title... http://ow.ly/1m0txY
    - SOA Community: Win a free conference pass for the SOA, Cloud + Service Technology Symposium – become a soacommunity facebook fan!…
    - Lonneke Dikmans: VENNSTER BLOG: Installing Oracle SOA Suite10g on Oracle Enterpris... http://blog.vennster.nl/2012/08/installing-oracle-soa-suite-10g-on.html?spref=tw …
    - PeterPaul vande Beek: published a blog on exporting Oracle #BPM metrics to a #DWH http://www.deltalounge.net/wpress/2012/08/export-oracle-bpm-metrics-to-a-data-warehouse/ … #soacommunity
    - SOA Community: Do you follow us on facebook http://www.facebook.com/soacommunity #soacommunity
    - C2B2 Consulting: Cloud-based Enterprise Architecture by Steve Millidge, C2B2 Consulting @ SOA, Cloud & Service … http://wp.me/p10C8u-sv via @soacommunity
    - Gertjan van het Hof: Storing SCA Metadata in the Oracle Metadata Services Repository http://www.oracle.com/technetwork/articles/soa/fonnegra-storing-sca-metadata-1715004.html?msgid=3-6903117805 …
    - arjankramer: Encrypted OSB Service account passwords http://dlvr.it/20hbNV
    - Richard van Tilborg: BPM the Battle http://lnkd.in/yFAJaW
    - OTNArchBeat: Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL | @RahejaRajesh http://pub.vitrue.com/YDCD
    - SOA Proactive: Webcast: Introduction to SOA Human Workflow, 8/23, 10 AM EDT. Register @ http://bit.ly/Nx77sY
    - Lucas Jellema: Programmatically admnistration of OSB using JXM & MBeans. Interesting example is given in https://blogs.oracle.com/ateamsoab2b/entry/automatic_disabling_proxy_service_when …
    - orclateamsoa: A-Team Blog #ateam: Automatically Disable Proxy Service to avoid overloading OSB http://ow.ly/1lXGKV
    - Atul_Kumar: Oracle Enterprise Gateway – OEG 11gR1 (11.1.1.*) for beginners http://goo.gl/fb/EJboE
    - Estafet Limited: Advanced SOA Boot camp @soacommunity in Munich was excellent. @wlscommunity Learnt a lot and liked the format.
    - SOA Community: Oracle Fusion Applications Design Patterns Now Available For Developers by Ultan O'Broin http://wp.me/p10C8u-sd

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: SOA Community twitter, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers

    - by Harish Gaur
    Did you miss this general session on Monday morning presented by Amit Zavery, VP of Oracle Fusion Middleware Product Management? A recording will be made available shortly; in the meantime, here is a recap.

    Amit presented 5 strategies customers can leverage today to extend their applications.

    Figure 1: 5 Oracle Fusion Middleware strategies to extend Oracle Applications & Oracle Fusion Apps

    1. Engage Everyone - Provide an intuitive and social experience for application users using Oracle WebCenter
    2. Extend Enterprise - Extend Oracle Applications to mobile devices using Oracle ADF Mobile
    3. Orchestrate Processes - Automate key organization processes across on-premise & cloud applications using Oracle BPM Suite & Oracle SOA Suite
    4. Secure the Core - Provide single sign-on and self-service provisioning across multiple apps using Oracle Identity Management
    5. Optimize Performance - Leverage the Exalogic stack to consolidate multiple instances and improve performance of Oracle Applications

    The session included 3 demonstrations to illustrate these strategies:

    1. The first demo highlighted the significance of mobile applications for unlocking existing investment in Applications such as EBS. Using a native iPhone application interacting with E-Business Suite, the demo showed how expense approval can be mobile-enabled with enhanced visibility using BI dashboards.
    2. The second demo showed how you can extend a banking process in Siebel and Oracle Policy Automation with Oracle BPM Suite. The process starts in Siebel with a customer requesting a loan, then jumps to OPA for loan recommendations and decision making, with loan processing and approvals handled in BPM Suite. Once approvals are completed, Siebel is updated to complete the process.
    3. The final demo showcased FMW components inside Fusion Applications, specifically WebCenter.

    Boeing, Underwriters Laboratories and Electronic Arts joined this session and discussed 3 different approaches to leveraging the Fusion Middleware stack to maximize their investment in Oracle Applications and/or Fusion Applications technology. Let's briefly review what these customers shared during the session:

    1. Extend Fusion Applications

    We know that Oracle Fusion Middleware is the underlying technology infrastructure for Oracle Fusion Applications. Architecturally, Oracle Fusion Apps leverages several components of Oracle Fusion Middleware, from Oracle WebCenter for a rich collaborative interface, and Oracle SOA Suite & Oracle BPM Suite for orchestrating key underlying processes, to Oracle BIEE for dashboards and analytics. Boeing talked about how they are using Oracle BPM Suite 11g, a key component of Oracle Fusion Middleware, with Oracle Fusion Apps to transform their supply chain. Tim Murnin, Director of Supply Chain, talked about Boeing's 5-year supply chain transformation journey. Boeing's Integrated and Information Management division began with automation of the critical RFQ process using Oracle BPM Suite. This first phase resulted in a 38% reduction in labor costs for RFPs. As a next step in this effort, Boeing is now creating a platform to enable electronic Order Management. Fusion Apps are playing a significant role in this phase. Boeing has gone live with Oracle Fusion Product Hub, and efforts are underway with Oracle Fusion Distributed Order Orchestration (DOO). So, where does Oracle BPM Suite 11g fit in this equation? Let me explain. Business processes within Fusion Apps are designed using 2 standards: Business Process Execution Language (BPEL) and Business Process Modeling Notation (BPMN). These processes can be easily configured using a declarative set of tools. Boeing leverages Oracle BPM Suite 11g (which supports BPMN 2.0) and Oracle SOA Suite (which supports BPEL) to "extend" these applications. Traditionally, customizations are done within an app using native technologies. Instead of making process changes within Fusion Apps, Boeing has taken the approach of building an "extensions" layer on top of the application.

    Fig 2: Boeing's use of Oracle BPM Suite to orchestrate key supply chain processes across Fusion Apps

    2. Maximize Oracle Applications investment

    Fusion Middleware appeals not only to Fusion Apps customers, but is also leveraged significantly by Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards customers. Using Oracle BPM Suite and Oracle SOA Suite is the recommended extension strategy for Oracle Fusion Apps and Oracle Applications Unlimited customers. Electronic Arts, an E-Business Suite customer, spoke about their strategy to transform their order-to-cash process using Oracle SOA Suite, Oracle Foundation Packs and Oracle BAM. Udesh Naicker, Sr. Director of IT at Electronic Arts (EA), discussed how the growth of social and digital gaming had started to put tremendous pressure on EA's existing IT infrastructure. He discussed the challenge of millions of micro-transactions coming from several sources - Microsoft Xbox, PayPal, and several service providers. EA found its order-to-cash processes stretched to their limits. They lacked visibility into these transactions across the entire value chain. EA began by consolidating their E-Business Suite R11 instances into a single E-Business Suite R12 instance. EA needed to cater to a variety of service requirements, connectivity methods, file formats, and information latencies. Their integration strategy was tactical, i.e., using file uploads, TIBCO, and SQL scripts. After consolidating E-Business Suite, EA standardized their integration approach with Oracle SOA Suite and Oracle AIA Foundation Pack. Oracle SOA Suite is the platform used to extend E-Business Suite R12 and standardize 60+ interfaces across several heterogeneous systems, including PeopleSoft, Demantra, SF.com, Workday, and managed EDI services spanning on-premise, hosted and cloud applications. EA believes the Oracle SOA Suite 11g based extension strategy has helped significantly in the following ways:

    - It helped them keep customizations out of E-Business Suite, thereby keeping EBS R12 vanilla and upgrade-safe
    - Developers are now proficient in technology which is also leveraged by Fusion Apps, which has helped them prepare for adoption of Fusion Apps in the future

    Fig 3: Using Oracle SOA Suite & Oracle E-Business Suite, Electronic Arts built a new platform for order processing

    3. Consolidate apps and improve scalability

    Exalogic is an optimal platform for customers to consolidate their application deployments and enhance performance. Underwriters Laboratories talked about their strategy to run their mission-critical applications, including E-Business Suite, on Exalogic. Christian Anschuetz, CIO of Underwriters Laboratories (UL), shared how UL is on a growth path ($1B to $2.5B in 5 years) and planning a significant business transformation from a not-for-profit to a for-profit business. To support this growth, UL is planning to simplify its IT environment and the deployment complexity associated with ERP applications and the technology they run on. Their current applications were deployed on a variety of hardware platforms and lacked a comprehensive disaster recovery architecture. UL embarked on a mission to deploy E-Business Suite on Exalogic. UL's solution is unique because it is one of the first to deploy a large number of Oracle applications and related Fusion Middleware technologies (SOA, BI, Analytical Applications, AIA Foundation Pack, and the AIA EBS-to-Siebel UCM prebuilt integration) on a combined Exalogic and Exadata environment. UL is planning to move to a virtualized architecture toward the end of 2012 to securely host external-facing applications like iStore.

    Fig 4: Underwriters Labs deployed E-Business Suite on Exalogic to achieve performance gains

    Key takeaways:

    - The Fusion Middleware platform is certified with major Oracle Applications Unlimited offerings. Fusion Middleware is the underlying technological infrastructure for Fusion Apps
    - Customers choose Oracle Fusion Middleware to extend their applications (Apps Unlimited or Fusion Apps), to keep applications upgrade-safe, and to prepare for Fusion Apps
    - Exalogic is an optimum platform to consolidate application deployments and enhance performance

    Read the article

  • Oracle Open World 2012 preview: mark your calendars

    - by Eric Bezille
    With now less than a month to go before Oracle's major event, held as every year in San Francisco at the end of September and the beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be delivered by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to give you a foretaste.

    Oracle strategy and roadmaps

    Of course, beyond the plenary sessions, which will give you a precise view of the strategy, those of you on site should not miss the deep-dive sessions taking place during the week. Here is a selection:

    - "Accelerate your Business with the Oracle Hardware Advantage" with John Fowler, Monday, October 1, 3:15pm-4:15pm
    - "Why Oracle Softwares Runs Best on Oracle Hardware" with Bradley Carlile, head of benchmarks, Monday, October 1, 12:15pm-1:15pm
    - "Engineered Systems - from Vision to Game-changing Results" with Robert Shimp, Monday, October 1, 1:45pm-2:45pm
    - "Database and Application Consolidation on SPARC Supercluster" with Hugo Rivero, manager in the hardware and software integration teams, Monday, October 1, 4:45pm-5:45pm
    - "Oracle's SPARC Server Strategy Update" with Masood Heydari, head of SPARC server development, Tuesday, October 2, 10:15am-11:15am
    - "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap" with Markus Flier, head of Solaris development, Wednesday, October 3, 10:15am-11:15am
    - "Oracle Virtualization Strategy and Roadmap" with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday, October 1, 12:15pm-1:15pm
    - "Big Data: The Big Story" with Jean-Pierre Dijcks, head of Big Data product development, Monday, October 1, 3:15pm-4:15pm
    - "Scaling with the Cloud: Strategies for Storage in Cloud Deployments" with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday, October 1, 10:45am-11:45am

    Customer experiences and testimonials

    While Oracle Open World is a chance to talk directly with Oracle's development teams, it is also a chance to exchange with customers and experts who have implemented our technologies and to benefit from their experience, for example:

    - "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the business as well as the project and IT benefits, Wednesday, October 3, 1:15pm-2:15pm
    - "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday, October 2, 11:45am-12:45pm
    - "Creating a Maximum Availability Architecture with SPARC SuperCluster", with testimony from Carte Wright, Database Engineer at CKI, Wednesday, October 3, 11:45am-12:45pm
    - "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with testimony from Stephen Schleiger, Manager Systems Engineering at Navis, Monday, October 1, 1:45pm-2:45pm
    - "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation at a major banking customer, Monday, October 1, 4:45pm-5:45pm
    - "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday, October 2, 1:15pm-2:15pm
    - "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with testimony from Alan Powers, CTO of CSC, Thursday, October 4, 12:45pm-1:45pm
    - "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimony from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, US Cellular, Tuesday, October 2, 10:15am-11:15am
    - "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday, October 2, 11:45am-12:45pm
    - "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday, October 1, 4:45pm-5:45pm
    - "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the testimony of Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday, October 2, 1:15pm-2:15pm

    Meet the user groups and the Oracle development teams

    If you plan to arrive early enough, you can also meet the user groups as early as Sunday, or the Oracle development teams every evening, on topics such as:

    - "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday, September 30, 2:30pm-3:30pm
    - "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, Rittman Mead Consulting Ltd, Sunday, September 30, 10:30am-11:30am
    - "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday, September 30, 1:15pm-2:00pm
    - "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday, September 30, 9:00am-10:00am
    - "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday, October 1, 6:15pm-7:00pm

    Test and evaluate the solutions

    And finally, you can even try out the technologies in the Oracle DemoGrounds (1133 Moscone South for Oracle systems, OS, and virtualization) and in the hands-on labs, such as:

    - "Deploying an IaaS Environment with Oracle VM", Tuesday, October 2, 10:15am-11:15am
    - "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday, October 2, 11:45am-12:45pm (it is strongly recommended to have completed the previous hands-on lab before taking this one)
    - "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday, October 3, 5:00pm-6:00pm
    - "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday, October 3, 1:15pm-2:15pm
    - "Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday, October 1, 12:15pm-1:15pm
    - "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday, October 1, 1:45pm-2:45pm
    - "Managing Storage in the Cloud", Tuesday, October 2, 5:00pm-6:00pm
    - "Learn How to Write MapReduce on Oracle's Big Data Platform", Monday, October 1, 12:15pm-1:15pm
    - "Oracle Big Data Analytics and R", Tuesday, October 2, 1:15pm-2:15pm
    - "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday, October 1, 10:45am-11:45am
    - "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday, October 1, 4:45pm-5:45pm
    - "Virtualizing Your Oracle Solaris 11 Environment", Tuesday, October 2, 1:15pm-2:15pm
    - "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday, October 3, 3:30pm-4:30pm

    In conclusion, a very rich week ahead, which will let you cover all the topics at the heart of your concerns, from strategy to implementation... It is a week you need to prepare for, tailoring your agenda from the more than 2,000 sessions, of which I have given you only an excerpt here; you can find the full catalog online.

    Read the article

  • CEN/CENELEC Lacks Perspective

    - by trond-arne.undheim
    Over the last few months, two of the European Standardization Organizations (ESOs), CEN and CENELEC, have circulated an unfortunate position statement distorting the facts around fora and consortia. For the benefit of outsiders to this debate, let's just say that this debate concerns whether and how the EU should recognize standards and specifications from certain fora and consortia, based on a process evaluating the openness and transparency of such deliverables. The topic is complex, and somewhat confusing even to insiders, but nevertheless crucial to the European economy. As far as I can judge, their positions are not based on facts. This is unfortunate. For the benefit of clarity, here are some of the observations they make:

    a) "Most consortia are in essence driven by technology companies making hardware and software solutions, by definition very few of the largest ones are European-based".
    b) "Most consortia lack a European presence, relevant Committees, even those that are often cited as having stronger links with Europe, seem to lack an overall, inclusive set of participants".
    c) "Recognising specific consortia specifications will not resolve any concrete problems of interoperability for public authorities; interoperability depends on stringing together a range of specifications (from formal global bodies or consortia alike)".
    d) "Consortia already have the option to have their specifications adopted by the international formal standards bodies and many more exercise this than the two that seem to be campaigning for European recognition. Such specifications can then also be adopted as European standards."
    e) "Consortium specifications completely lack any process to take due and balanced account of requirements at national level - this is not important for technologies but can be a critical issue when discussing cross-border issues within the EU such as eGovernment, eHealth and so on".
    f) "The proposed recognition will not lead to standstill on national or European activities, nor to the adoption of the specifications as national standards in the CEN and CENELEC members (usually in their official national languages), nor to withdrawal of conflicting national standards. A big asset of the European standardization system is its coherence and lack of fragmentation."
    g) "We always miss concrete and specific examples of where consortia referencing are supposed to be helpful."

    First of all, note that ETSI, the third ESO, did not join the position. The reason is, of course, that ETSI, beyond being an ESO, also has a global perspective and, moreover, does consider reality. Secondly, having produced arguments a) to g), CEN/CENELEC has the audacity to call a meeting on Friday 25 February entitled "ICT standardization - improving collaboration in Europe". This sounds very nice, but they have not set the stage for constructive debate. Rather, they demonstrate a striking lack of vision and perspective. I will back this up with three facts, and leave it there.

    1. Since the 1980s, global industry fora and consortia, such as IETF, W3C and OASIS, have emerged as world-leading ICT standards development organizations with excellent procedures for openness and transparency in all phases of standards development, ex post and ex ante.

    - Practically no ICT system can be built without using fora and consortia standards (FCS).
    - Without using FCS, neither the Internet, upon which the EU economy depends, nor EU institutions would operate.
    - FCS are of high relevance for achieving and promoting interoperability and driving innovation.

    2. FCS are complementary to the formally recognized standards organizations, including the ESOs.

    - No work will be taken away from the ESOs should the EU recognize certain FCS.
    - Each FCS would be evaluated on its merit and on the openness of the process that produced it. ESOs would, with other stakeholders, have a say.
    - ESOs could potentially educate and assist European stakeholders to engage more actively and constructively with FCS.
    - ETSI, also an ESO, seems to clearly recognize these facts.

    3. Europe and its Member States have a strong voice in several of the most relevant global industry fora and consortia.

    - W3C: W3C was founded in 1994 by an Englishman, Sir Tim Berners-Lee, in collaboration with CERN, the European research lab. In April 1995, INRIA (Institut National de Recherche en Informatique et Automatique) in France became the first European W3C host, and in 2003 ERCIM (European Research Consortium for Informatics and Mathematics), also based in France, took over the role of European W3C host from INRIA. Today, W3C has 326 Members, 40% of which are European. Government participation is also strong, and it could be increased - a development that is very much desired by W3C. Current members of the W3C Advisory Board include Ora Lassila (Nokia) and Charles McCathie Nevile (Opera). Nokia is a Finnish company, Opera is a Norwegian company. SAP's Claus von Riegen is an alumnus of the same Advisory Board.
    - OASIS: its membership - 30% of which is European - represents the marketplace, reflecting a balance of providers, user companies, government agencies, and non-profit organizations. In particular, about 15% of OASIS members are governments or universities. Frederick Hirsch from Nokia, Claus von Riegen from SAP AG and Charles-H. Schulz from Ars Aperta are on the Board of Directors. Nokia is a Finnish company, SAP is a German company and Ars Aperta is a French company. The Chairman of the Board is Peter Brown, an independent consultant, an Austrian citizen AND an official of the European Parliament currently on long-term leave.
    - IETF: The oversight of its activities is by the Internet Architecture Board (IAB), chaired since 2007 by Olaf Kolkman, a Dutch national who lives in Uithoorn, NL. Kolkman is director of NLnet Labs, a foundation chartered to develop open source software and open source standards for the Internet. Other IAB members include Marcelo Bagnulo, whose affiliation is the University Carlos III of Madrid, Spain, as well as Hannes Tschofenig from Nokia Siemens Networks. Nokia is a Finnish company. Siemens is a German company. Nokia Siemens is a European joint venture.
    - Member States: At least 17 European Member States have developed Interoperability Frameworks that include FCS, according to the EU-funded National Interoperability Framework Observatory (see the list and the NIFO web site on IDABC). This also means they actively procure solutions using FCS, reference FCS in their policies and even in laws. Member State reps are free to engage in FCS, and many do. It would be nice if the EU adjusted to this reality.
    - A huge number of European nationals work in the global IT industry, on European soil or elsewhere, whether in EU-registered companies or not.

    CEN/CENELEC lacks perspective and has engaged in an effort to twist facts that is quite striking coming from a publicly funded organization. I wish them all possible success with Friday's meeting, but I fear the most important stakeholders will not be at the table. Not because they do not wish to collaborate, but because they have just been insulted. If they do show up, it would be a gracious move, almost beyond comprehension. While I do not expect CEN/CENELEC to line up perfectly in favor of fora and consortia, I think it would be to their benefit to stick to more palatable observations. Actually, I would suggest an apology, straightening out the facts. This works among friends and it works in an organizational context. Then we can all move on. Standardization is important. Too important to ignore. Too important to distort. The European economy depends on it. We need CEN/CENELEC. It is an important organization. But CEN/CENELEC needs fora and consortia, too.

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. The understanding is absolutely correct, but implementing it well is not as easy as it seems. Similarly, most people think the lookup component just looks up additional information, and so they pay little attention to it - and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion of how to configure the lookup component's cache mode.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast.

    Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

    Full Cache

    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode causes the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

    The full cache setting is the most commonly used cache mode, and for good reason. It has the most practical applications and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues - memory pressure in particular - when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison, even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation). There's a relatively easy workaround in which you can use the UPPER() or LOWER() function on both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Again, neither of these presents a reason to avoid full cache mode, but they should be used to determine whether full cache mode is right for a given situation.

    Full cache mode is ideally useful when one or all of the following conditions exist:

    - The size of the reference data set is small to moderately sized
    - The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    - Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache

    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s).

    This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Using partial cache mode is ideally suited for the conditions below:

    - The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small
    - The size of the lookup data is too large to effectively store in cache
    - The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache

    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

    I would recommend considering the no cache mode only when all of the conditions below are true:

    - The reference data set is too large to reasonably be loaded into SSIS memory
    - The pipeline data set is small and is not expected to grow
    - There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)
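
    To make the trade-offs concrete, here is a rough sketch in plain JavaScript (an analogy only, not SSIS internals; queryAll and queryOne are hypothetical stand-ins for calls to the lookup source) of when each mode fetches reference rows:

        // Full cache: one bulk fetch up front; nothing flows until the whole
        // reference set is in memory, then every lookup is an in-memory hit.
        async function fullCacheLookup(pipelineRows, queryAll) {
            const cache = new Map();
            for (const row of await queryAll()) {
                cache.set(row.key, row.value);
            }
            // Note: Map keys compare exactly, mirroring the case-sensitive
            // .NET string comparison described above.
            return pipelineRows.map(r => cache.get(r.key));
        }

        // Partial cache: fetch each distinct key once, on first use, then reuse it.
        async function partialCacheLookup(pipelineRows, queryOne) {
            const cache = new Map();
            const results = [];
            for (const r of pipelineRows) {
                if (!cache.has(r.key)) {
                    cache.set(r.key, await queryOne(r.key)); // round trip per distinct key
                }
                results.push(cache.get(r.key));
            }
            return results;
        }

        // No cache: a round trip for every single pipeline row.
        async function noCacheLookup(pipelineRows, queryOne) {
            const results = [];
            for (const r of pipelineRows) {
                results.push(await queryOne(r.key));
            }
            return results;
        }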
    Conclusion

    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution. Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSIS

    Read the article

  • Key Promoter for NetBeans

    - by Geertjan
    Whenever a menu item or toolbar button is clicked, it would be handy if NetBeans were to tell you 'hey, did you know, you can actually do this via the following keyboard shortcut', if a keyboard shortcut exists for the invoked action. After all, ultimately, a lot of developers would like to do everything with the keyboard, and a key promoter feature of this kind is a helpful tool for learning the keyboard shortcuts related to the menu items and toolbar buttons you're clicking with your mouse. Above, you see the balloon message that appears for each menu item and toolbar button that you click and, below, you can see a list of all the actions that have been logged in the Notifications window. That happens automatically when an action is invoked (assuming the plugin described in this blog entry is installed), showing the display name of the action, together with the keyboard shortcut, which is presented as a hyperlink which, when clicked, re-invokes the action (which might not always be relevant, especially for context-sensitive actions, though for others it is quite useful, e.g., reopening the New Project wizard). And here's all the code. Notice that I'm hooking into the 'uigestures' functionality, which was suggested by Tim Boudreau, and I have added my own handler, which was suggested by Jaroslav Tulach. The handler gets a specific parameter from each new log entry handled by the 'org.netbeans.ui.actions' logger, makes sure that the parameter actually is an action, and then gets the relevant info from the action, if the relevant info exists:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.util.logging.LogRecord;
        import java.util.logging.Logger;
        import java.util.logging.StreamHandler;
        import javax.swing.Action;
        import javax.swing.ImageIcon;
        import javax.swing.JMenuItem;
        import org.openide.awt.Mnemonics;
        import org.openide.awt.NotificationDisplayer;
        import org.openide.windows.OnShowing;

        @OnShowing
        public class Startable implements Runnable {

            @Override
            public void run() {
                // Listen to the logger that the 'uigestures' infrastructure feeds
                // with every invoked action:
                Logger logger = Logger.getLogger("org.netbeans.ui.actions");
                logger.addHandler(new StreamHandler() {
                    @Override
                    public void publish(LogRecord record) {
                        Object[] parameters = record.getParameters();
                        // Guard against log records that don't carry an action:
                        if (parameters != null && parameters.length > 2
                                && parameters[2] instanceof Action) {
                            final Action a = (Action) parameters[2];
                            // Resolve the action's display name, honoring mnemonics:
                            JMenuItem menu = new JMenuItem();
                            Mnemonics.setLocalizedText(menu, a.getValue(Action.NAME).toString());
                            String name = menu.getText();
                            if (a.getValue(Action.ACCELERATOR_KEY) != null) {
                                String accelerator = a.getValue(Action.ACCELERATOR_KEY).toString();
                                // Show a balloon with the shortcut; clicking the
                                // notification re-invokes the action:
                                NotificationDisplayer.getDefault().notify(
                                        name,
                                        new ImageIcon("/org/nb/kp/car.png"),
                                        accelerator,
                                        new ActionListener() {
                                    @Override
                                    public void actionPerformed(ActionEvent e) {
                                        a.actionPerformed(e);
                                    }
                                });
                            }
                        }
                    }
                });
            }
        }

    Indeed, inspired by the Key Promoter in IntelliJ IDEA. Interested in trying it out? If there's interest in it, I'll put it in the NetBeans Plugin Portal.

    Read the article

  • Top tweets SOA Partner Community – October 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - Ronald Luttikhuizen: My latest upload: SOA Made Simple | Introduction to SOA on @slideshare http://www.slideshare.net/rluttikhuizen/soa-made-simple-introduction-to-soa … via @SlideShare
    - OTNArchBeat: ArchBeat Link-o-Rama for October 4, 2013 #cloud #linux #oaam #soa http://pub.vitrue.com/y4SK
    - Lucas Jellema: My blog article shows news on the new SOA Suite 12c release - as it was publicly available during #oow13 see: http://technology.amis.nl/2013/09/27/oow13-soa-suite-12c/ …
    - Yogesh Sontakke: Introducing OER's new Express Workflows - Simplified Lifecycle Management. Blog post: http://bit.ly/16JKHCf @soacommunity #soagovernance
    - SrinivasPadmanabhuni: "@OTNArchBeat: SOA and User Interfaces - by @soacommunity @HajoNormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp "
    - SOA Community: SOA and User-Interfaces http://servicetechmag.com/I76/0913-2 article published part of #industrialSOA at Service Technology Magazine #soacommunity
    - Estafet Limited: @Estafet win @UKOUG Middleware Partner of the Year 2013
    - Yogesh Sontakke: RT @VikasAatOracle: #Oracle #B2B - written by experts #soa #soacommunity #oraclesoa - time to get a copy ! @SOAScott
    - Danilo Schmiedel: Thanks a lot to Juergen @soacommunity for the super interesting and well-organized Partner Advisory Council yesterday! Such a Great Value!
    - OTNArchBeat: Case management supporting re-landscaping application portfolios | @leonsmiers http://pub.vitrue.com/MC5j
    - Samantha Searle: Apply for the #GartnerBPM 2014 Excellence Awards - find out how via this link http://ow.ly/ptaNQ #Gartner #bpm #process #entarch #cio
    - OTNArchBeat: SOA and User Interfaces - by @soacommunity @hajonormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp
    - Dain Hansen: Hybrid #cloud is on the rise, but is the IT department's culture standing in the way? http://add.vc/eJN #CloudIntegration #OracleSOA
    - OTNArchBeat: #SOASuite 11g ps6 - Download your log files directly from the Enterprise Manager | @whitehorsenl http://pub.vitrue.com/KrJ2
    - Whitehorses: Whiteblog: SOA Suite 11g ps6 - Download your log files directly from the Enterprise Manager (http://goo.gl/2Gqiax )
    - Rajesh Raheja: Cloud integration session recap #oow13 http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 …
    - Vikas Anand: @Ahmed_Aboulnaga thanks for the excellent summary and kind words. #oow13 #cloud #oraclesoa http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 …
    - Luis Augusto Weir: REST is also SOA. Check it out http://www.soa4u.co.uk/2013/09/restful-is-also-soa.html?m=1 … #soacommunity
    - Graham: "@OracleBPM & @soacommunity: 5 Ways to Modernize Applications with BPM #AppAdvantage" #oracleday http://bit.ly/15yC6e3
    - SOA Community: #ACED director asked me for BPM references in FSI - ever visited my #SOACommunity workspace? https://beehiveonline.oracle.com/teamcollab/overview/SOA_Community_Workspace … #soacommunity #bpm
    - OracleBlogs: SOA Community Newsletter September 2013 http://ow.ly/2Aj6oK
    - OTNArchBeat: OOW13: First glimpses of the new #SOASuite12c | @LucasJellema http://pub.vitrue.com/2YgX
    - sbernhardt: Just published new blog entry on OOW 2013 wrap up. http://thecattlecrew.wordpress.com/2013/09/30/oracle-open-world-2013-wrap-up/ … #oow13 @OC_WIRE @soacommunity
    - Emiel Paasschens: Home with family after an overwhelming #OOW week in San Francisco with lot of info & meetings. Special thanx to @OracleBelux & @soacommunity
    - Robert van Mölken: Had a awesome week at #OOW13 in SF. Highlights were the @soacommunity Wine tour, @OracleBelux meet-ups and @OracleSOA CAB. Thanks to all :)
    - SOA Community: The place Oracle Fusion middleware comes from - Oracle 200 - TKs office - next Oracle 100 - SOA & BPM #soacommunity pic.twitter.com/qibFOQVbRo
    - Oracle BPM: 5 Ways to Modernize Applications with BPM #AppAdvantage http://pub.vitrue.com/l2dn
    - Simon Haslam: Ha ha - how did we miss that! RT @lucasjellema: Post conference announcement of a new middleware appliance? #oow13 pic.twitter.com/3NvcjPfjXb
    - OTNArchBeat: The OTNArchBeat Daily is out! http://paper.li/OTNArchBeat/1329828521 … Top stories today via @lucasjellema @myfear @TylerJewell
    - Packt Publishing: Get 50% off ALL our DRM-free eBooks - this weekend only! Go to http://www.packtpub.com/ and use code BIG50, as often as you like! #BIG50
    - OracleBlogs: Global Perspective: ACE Director from EMEA Weighs in on AppAdvantage http://ow.ly/2Afek2
    - orclateamsoa: #orclateamsoa Blog: BPM Auditing Demystified - I've heard from a couple of customers recently asking about BPM aud... http://ow.ly/2AfbAn
    - AMIS, Oracle & Java: Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/gb4HLbUarR
    - SOA Community: Let us know what was best at #OOW @soacommunity save trip home - thanks for coming to #SF ;-) see you at #OOW2014 pic.twitter.com/xbWXjRapqh
    - Lonneke Dikmans: Nice @dschmied is talking about the different steps in his project. He starts with explaining the user interface design #oow13 #ux #acm
    - Lonneke Dikmans: Saving the best for the end: managing knowledge worker processes by @dschmied and Prasen. #oow13 #acm cool stuff: adaptive case management
    - Luis Augusto Weir: SOA Governance is more than just OER. Requires people, processes and tools. Check it out #SOA #soacommunity http://youtu.be/Ohn06smVKVw
    - Lonneke Dikmans: "@OracleSOA: #oow Join us for: Enterprise SOA Infrastructure Best Practices Thu 9/26 2:00 PM - 3:00 PM Moscone West - 2020
    - SOA Community: Business Process Management (BPM) 11g PS6 Awareness Course http://wp.me/p10C8u-1as
    - Ajay Khanna: Detect, Analyze, Act - Fast! http://wp.me/p10C8u-1ao via @soacommunity #OracleBPM
    - Simone Geib: It took a while, but I finally reached 500 followers. Thanks everybody and especially @soacommunity :)
    - SOA Community: Functional Testing Business Processes In Oracle BPM Suite 11g by Arun Pareek http://wp.me/p10C8u-1aq
    - SOA Community: Distribute the September edition of the SOA Community newsletter READ it! Didn't receive it register http://www.oracle.com/goto/emea/soa #soacommunity
    - SOA Community: Detect, Analyze, Act - Fast! by Ajay Khanna http://wp.me/p10C8u-1ao
    - Robert van Mölken: Finalised my #OOW presentation #CON8736 and live demo on wednesday 25th at 11:45am. Also giving a short version at the SOA CAB on thursday.
    - Rajesh Raheja: "The AppAdvantage of Oracle Cloud & On-premises Integration" http://bit.ly/14RYHmZ
    - SOA Community: Additional new content SOA & BPM Partner Community http://wp.me/p10C8u-1aw
    - Dain Hansen: Right now #oow13 SOA, BPM - Customer Advisory Boards. 'No tweeting' says @SOASimone. Instagram of funny cats still ok.
    - leonsmiers: Case Management with Oracle BPM Suite our presentation on #oow13 http://www.slideshare.net/leonsmiers/oracle-open-world-2013-case-management-smiers-kitson … #capgemini @nkitson72
    - Mark Simpson: Flextronics reduced cost of processing an invoice to <$1 from $7 due to BPM @OracleBPM #oow13 saving millions. Way less than industry avg.
    - Holger Mueller: #Siemens Shared Services CIO says that #Fusion #Middleware made the difference for #Oracle over #Workday. #Integration matters. #OOW13
    - oracleopenworld: Miss any #oow13 keynotes, or simply want to rewatch? Check out the live streaming site for keynotes on demand: http://pub.vitrue.com/RG4D
    - SOA Community: Analyze your m2m data and act on it! Big data Pattern matching, fast data & soa #soacommunity #oow pic.twitter.com/48Q1z4ckh7
    - SOA Community: Top tweets SOA Partner Community – September 2013 http://wp.me/p10C8u-1cR
    - Simone Geib: #oraclesoa hands on lab at #oow13 pic.twitter.com/IJJrqXIMiu
    - Danilo Schmiedel: #oow13 CON8436: Managing Knowledge Worker Processes. Come & get a free Adaptive Case Management poster @soacommunity pic.twitter.com/FRc2CSyLwb
    - John Sim: Great job again Jurgen @soacommunity helping bring Ace Community together!
    - Danilo Schmiedel: Excellent #OracleBPM Adaptive Case Management intro by @heidibuelowBPM and Prasen at the #oow13 demo ground. Last chance today @soacommunity
    - SOA Community: Thanks to all our #bpm #soa and #weblogic partners for the great middleware business #oow #soacommunity pic.twitter.com/dBwZ8DMHfH
    - Whitehorses: Thanks @soacommunity for the party tonight. Great to meet product management & see all the talented EMEA middleware specialists. #oow13
    - Danilo Schmiedel: Great tool demo from Link Consulting about managing your SOA with OER #oow13 @soacommunity
    - Torsten Winterberg: "@soacommunity: thanks to @dschmied and @OC_WIRE for making it happen to have our case management poster as printed version hier at #oow13
    - Ronald Luttikhuizen: These were the architects involved in the diagram excitement :) just after State of SOA podcast with @OTNArchBeat pic.twitter.com/5B8jIrVTA9
    - SOA Community: Tanks to AVIO for the excellent #bpmn poster and the great bpm business - visit then at #OOW & get the poster pic.twitter.com/ebTg9pFY1C
    - Dain Hansen: Kurian introducing Oracle Platform-as-a-Service developments. #oow13 #OracleCloud pic.twitter.com/evJLTU53rx
    - Bruce Tierney: API Management "multi-level pie chart" at #oow13 by Oracle's Tim Hall pic.twitter.com/q12OIRdaue
    - Dain Hansen: This is not your Daddy's BAM @soacommunity: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/EvwqXW9U5j
    - SOA Community: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/LybHxyF362
    - SOA Community: SOA governance by @Yogesh_Sontakke at demo point sr214 many good new features - key for soa projects #oow #soa pic.twitter.com/DFK0ummsK1
    - SOA Community: Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/GDKcqDGhCF
    - SOA Community: Adaptive Case Management demo point at #OOW visit @heidibuelowBPM get a demo and cmmn notation poster #soacommunity pic.twitter.com/T7yEyI7tdn
    - Lonneke Dikmans: In case you missed it: http://blog.vennster.nl/2013/09/case-management-part-1.html?spref=tw …
    - Lucas Jellema: SOA Suite news: Cloud Adapters RightNow and SalesForce plus SDK to develop custom cloud adapters (CY13); REST/JSON support in SB/SCA (12c)
    - Oracle SOA: Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/UfPB
    - Dain Hansen: Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/4QWA
    - Hajo Normann: #BigData, eventing & real time #analytics suggest timely next actions in #oracleBPM & #oracleACM; #oow13 #FastData pic.twitter.com/aFVGrTXPqu
    - Mark Simpson: OEP CQL engine now used in BAM12c for event stream summary computation with temporal and pattern match features to feed dashboards. #oow13
    - Mark Simpson: BAM12c virtually a new product. Analytics that senses ahead of time and also compares to historical trends to guide process or case #oow13
    - Andrejus Baranovskis: Enabling UI Shell 12c/11g Multitasking Behavior http://fb.me/18l9vxQfA
    - Amit Zavery: Oracle Fusion Middleware Empowers Business Users, EVP Thomas Kurian's session summary http://onforb.es/18Ta1jf #oow13 #oraclemiddle #oracle
    - Vikas Anand: #oow13 #oracleopenworld BPM on display at Middleware keynote by Thomas Kurian pic.twitter.com/PMm719S0Ui
    - SOA Community: BPM composer - business user empowerment #oow #soacommunity #bpmsuite pic.twitter.com/0Qgl6oVh0h
    - SOA Community: Model your process in BPMN - make is executable and analyze & improve them #oow #soacommunity pic.twitter.com/jkLlObDdoi
    - Bruce Tierney: @demed and Thomas Kurian talk mobile and cloud at #oow13 pic.twitter.com/bAAeqn5a2V
    - Amit Zavery: Thomas Kurian showcasing all the new features of Oracle Fusion Middleware #oraclemiddle #oow13
    - SOA Community: Demo time cloud adapters in #soasuite at Thomas Kurian keynote. Build and integrate mobile apps in minutes #oow pic.twitter.com/qTnCOJLLwS
    - SOA Community: Soa suite cloud adapters and mobile apps by @demed at Thomas Kurian keynote #oow #oracle #soacommunity pic.twitter.com/5aMLkNH4Ng
    - Danilo Schmiedel: First impressions from Oracle Open World 2013 http://wp.me/p2fG8x-77 @soacommunity @OC_WIRE
    - SOA Community: Good morning SFO let us know if you attend #OOW & #OPN keynote - #soacommunity pic.twitter.com/hzLYGDlRgE
    - Simon Haslam: Had a very useful @wlscommunity PAC meeting yesterday... & probably the best swag to date! pic.twitter.com/Lqus8ysbp7
    - Vikas Anand: Oracle SOA Suite - Team Blog http://bit.ly/18I1Zj7
    - Rajesh Raheja: Introducing new Cloud Connectivity Adapters #soa #demopod #oow13. I'll be there Sep 23 & 24 3-6pm to meetup http://bit.ly/18I1Zj7
    - leonsmiers: ..and again a very successful Oracle SOA/BPM partner council on the eve of #oow13. Thanks Jurgen! @soacommunity pic.twitter.com/aM1LMlb7Yw
    - Vikas Anand: #oow13 #soa #oep #exalogic Canon Delivers Fast Data with Oracle Event Processing (Oracle SOA Suite) http://bit.ly/1dwPeHb #soacommunity
    - Rolf Scheuch: The ACM poster is a big success. Great talks and ....
Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/LybHxyF362 SOA Community ?SOA governance by @Yogesh_Sontakke at demo point sr214 many good new features - key for soa projects #oow #soa pic.twitter.com/DFK0ummsK1 SOA Community ?Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/GDKcqDGhCF SOA Community ?Adaptive Case Management demo point at #OOW visit @heidibuelowBPM get a demo and cmmn notation poster #soacommunity pic.twitter.com/T7yEyI7tdn Lonneke Dikmans ?In case you missed it: http://blog.vennster.nl/2013/09/case-management-part-1.html?spref=tw … Lucas Jellema ?SOA Suite news: Cloud Adapters RightNow and SalesForce plus SDK to develop custom cloud adapters (CY13); REST/JSON support in SB/SCA (12c) Oracle SOA ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/UfPB Dain Hansen ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/4QWA Hajo Normann ?#BigData, eventing & real time #analytics suggest timely next actions in #oracleBPM & #oracleACM; #oow13 #FastData pic.twitter.com/aFVGrTXPqu Mark Simpson ?OEP CQL engine now used in BAM12c for event stream summary computation with temporal and pattern match features to feed dashboards. #oow13 Mark Simpson ?BAM12c virtually a new product. Analytics that senses ahead of time and also compares to historical trends to guide process or case #oow13 Andrejus Baranovskis ?Enabling UI Shell 12c/11g Multitasking Behavior http://fb.me/18l9vxQfA Amit Zavery ?Oracle Fusion Middleware Empowers Business Users, EVP Thomas Kurian's session summary http://onforb.es/18Ta1jf #oow13 #oraclemiddle #oracle Vikas Anand ?#oow13 #oracleopenworld BPM on display at Middleware keynote by Thomas Kurian pic.twitter.com/PMm719S0Ui SOA Community ?BPM composer - business user empowerment #oow #soacommunity #bpmsuite pic.twitter.com/0Qgl6oVh0h SOA Community ?Model your process in BPMN - make is executable and analyze & improve them #oow #soacommunity pic.twitter.com/jkLlObDdoi Bruce Tierney ?@demed and Thomas Kurian talk mobile and cloud at #oow13 pic.twitter.com/bAAeqn5a2V Amit Zavery ?Thomas Kurian showcasing all the new features of Oracle Fusion Middleware #oraclemiddle #oow13 SOA Community ?Demo time cloud adapters in #soasuite at Thomas Kurian keynote. Build and integrate mobile apps in minutes #oow pic.twitter.com/qTnCOJLLwS SOA Community ?Soa suite cloud adapters and mobile apps by @demed at Thomas Kurian keynote #oow #oracle #soacommunity pic.twitter.com/5aMLkNH4Ng Danilo Schmiedel ?First impressions from Oracle Open World 2013 http://wp.me/p2fG8x-77 @soacommunity @OC_WIRE SOA Community ?Good morning SFO let us know if you attend #OOW & #OPN keynote - #soacommunity pic.twitter.com/hzLYGDlRgE Simon Haslam ?Had a very useful @wlscommunity PAC meeting yesterday... & probably the best swag to date! pic.twitter.com/Lqus8ysbp7 Vikas Anand ?Oracle SOA Suite - Team Blog http://bit.ly/18I1Zj7 Rajesh Raheja ?Introducing new Cloud Connectivity Adapters #soa #demopod #oow13. I'll be there Sep 23 & 24 3-6pm to meetup http://bit.ly/18I1Zj7 leonsmiers ?..and again a very successful Oracle SOA/BPM partner council on the eve of #oow13. Thanks Jurgen! @soacommunity pic.twitter.com/aM1LMlb7Yw Vikas Anand ?#oow13 #soa #oep #exalogic Canon Delivers Fast Data with Oracle Event Processing (Oracle SOA Suite) http://bit.ly/1dwPeHb #soacommunity Rolf Scheuch ?The ACM poster is a big success. Great talks and .... 
I am soon out of posters! #bpmcon #ACM pic.twitter.com/TriaUyXRWK Oracle SOA ?British Telecom Sucess with Oracle B2B #oow #soa #b2b http://pub.vitrue.com/1RWi leonsmiers ?(Oracle) Case Management supporting re-platforming, a pre-read before our presentation at #oow13 http://leonsmiers.blogspot.com/2013/09/case-management-supporting-re.html … #capgemini #yammer
SOA & BPM Partner Community – For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Twitter,SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • top Tweets SOA Partner Community – August 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Lucas Jellema ?Published an article about organizing Fusion Middleware Administration: http://technology.amis.nl/2012/07/31/organizing-fusion-middleware-administration-in-a-smart-and-frugal-way … - many organizations are struggling with this. ServiceTechSymposium Countdown to the Early Bird Registration Discount deadline. Only 4 days left! http://ow.ly/cBCiv demed ?Good chatting w Bob Rhubart, Thomas Erl & Tim Hall on SOA & Cloud Symposium https://blogs.oracle.com/archbeat/entry/podcast_show_notes_thomas_erl … @soaschool @OTNArchBeat -- CU in London! SOA Community top Tweets SOA Partner Community July 2012 - are you one of them? If yes please rt! https://soacommunity.wordpress.com/2012/07/30/top-tweets-soa-partner-community-july-2012/ … #soacommunity SOA Community ?Are You a facebook member - do You follow http://www.facebook.com/soacommunity ? #soacommunity #soa SOA Community ?SOA 24/7 - Home Page: http://soa247.com/#.UBJsN8n3kyk.twitter … #soacommunity OracleBlogs ?Handling Large Payloads in SOA Suite 11g http://ow.ly/1lFAih OracleBlogs ?SOA Community Newsletter July 2012 http://ow.ly/1lFx6s OTNArchBeat Podcast Show Notes: Thomas Erl on SOA, Cloud, and Service Technology http://bit.ly/OOHTUJ SOA Community SOA Community Newsletter July 2012 http://wp.me/p10C8u-s7 OTNArchBeat ?OTN ArchBeat Podcast: Thomas Erl on SOA, Cloud, and Service Technology - Part 1 http://pub.vitrue.com/fMti OProcessAccel ?Just released! White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf … OTNArchBeat ?SOA, Cloud, and Service Technologies - Part 1 of 4 - A conversation with SOA, Cloud, and Service Technology Symposiu... http://ow.ly/1lDyAK OracleBlogs ?SOA Suite 11g PS5 Bundled Patch 3 (11.1.1.6.3) http://ow.ly/1lCW1S Simon Haslam My write-up of the virtues of the #ukoug App Server & Middleware SIG http://bit.ly/LMWdfY What's important to you for our next meeting? SOA Community SOA Partner Community Survey 2012 http://wp.me/p10C8u-qY Simone Geib ?RT @jswaroop: #Oracle positioned in the Leader's quadrant - Gartner Magic Quadrants for Application Infrastructure (SOA & SOA Gov)... ServiceTechSymposium New Supporting Organization, IBTI has joined the Symposium! http://www.servicetechsymposium.com/ orclateamsoa ?A-Team Blog #ateam: BPM 11g Task Form Version Considerations http://ow.ly/1lA7XS OTNArchBeat Oracle content at SOA, Cloud and Service Technology Symposium (and discount code!) http://pub.vitrue.com/FPcW OracleBlogs ?BPM 11g Task Form Version Considerations http://ow.ly/1lzOrX OTNArchBeat BPM 11g #ADF Task Form Versioning | Christopher Karl Chan #fusionmiddleware http://pub.vitrue.com/0qP2 OTNArchBeat Lightweight ADF Task Flow for BPM Human Tasks Overview | @AndrejusB #fusionmiddleware http://pub.vitrue.com/z7x9 SOA Community Oracle Fusion Middleware Summer Camps in Lisbon report by Link Consulting http://middlewarebylink.wordpress.com/2012/07/20/oracle-fusion-middleware-summer-camps-in-lisbon/ … #ofmsummercamps #soa #bpm SOA Community ?Clemens Utschig-Utschig & Manas Deb The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts… Simone Geib ?RT @oprocessaccel: Just released! 
White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf … jornica ?Report from Oracle Fusion Middleware Summer Camps in Munich: SOA Suite 11g advanced training experiences @soacommunity http://bit.ly/Mw3btE Simone Geib ?Bruce Tierney: Update - SOA & BPM Customer Insights Webcast Series: | https://blogs.oracle.com/SOA/entry/update_soa_bpm_customer_insights … OTNArchBeat Business SOA: Thinking is Dead | @mosesjones http://pub.vitrue.com/k8mw esentri ?had 3 great days in Munich at #Oracle #soacommunity Summercamp! Special thanks to Geoffroy de Lamalle from eProseed! Danilo Schmiedel ?Used my time in train to setup the ps5 soa/bpm vbox-image.Works like a dream. Setup-Readme is perfect! Saves a lot of time!!! @soacommunity 18 Jul SOA Community ?THANKS for the excellent OFM summer camps - save trip home - share your pictures at http://www.facebook.com/soacommunity #ofmsummercamps #soacommunity doors BBQ-party with Oracle @soacommunity. 5Star! #lovemunich #ofmsummercamps pic.twitter.com/ztfcGn2S leonsmiers ?New #Capgemini blog post "Continuous Improvement of Business Agility" http://bit.ly/Lr0EwG #bpm #yam Eric Elzinga ?MDS Explorer utility, http://see.sc/4qdb43 #soasuite ServiceTechSymposium ?@techsymp New speaker Demed L’Her from Oracle has been added to the symposium calendar. http://ow.ly/cjnyw SOA Community ?Last day of the Fusion Middleware summer camps - we continue at 9.00 am. send us your barbecue pictures! #ofmsummercamps #soacommunity SOA Community ?Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting http://middlewarebylink.wordpress.com/2012/06/26/delivering-soa-governance-with-eams-and-oracle-enterprise-repository/ … #soacommunity #soa #oer OracleBlogs ?Process Accelerator Kit http://ow.ly/1loaCw 15 Jul SOA Community ?Sun is back in Munich! Send your pictures Middleware summer camps! #ofmsummercamps We start tomorrow 11.00 at Oracle pic.twitter.com/6FStxomk Walter Montantes ?Gracias, Obrigado, Thank you, Danke a Lisboa y a @soacommunity @wlscommunity. From the Mexican guys!! cc @mikeintoch #ofmsummercamps Andrejus Baranovskis Tips & Tricks How to Run Oracle BPM 11g PS5 Workspace from Custom ADF 11g Application http://fb.me/1zOf3h2K8 JDeveloper & ADF ?Fusion Apps Enterprise Repository - Explained http://dlvr.it/1rpjWd Steve Walker ?Oracle #Exalogic is the logical choice for running business applications. Exalogic Software 2.0 launches 7/25. Reg at http://bit.ly/NedQ9L A. Chatziantoniou ?Landed in rainy Amsterdam after a great week in Lisbon for the #ofmsummercamps - multo obrigado for Jürgen for another fantastic event SOA Community ?Teams present #BPM11g POC results at #ofmsummercamps - great job! #soacommunity pic.twitter.com/0d4txkWF Sabine Leitner ?#DOAG SIG Middleware 29.08.2012 Köln über MW, Administration, Monitoring http://bit.ly/P47w82 @soacommunity @OracleMW @OracleFMW 12 Jul philmulhall ?Thanks @soacommunity for a great week at the #ofmsummercamps. Hard work done so time for a few cold ones in Lisboa. pic.twitter.com/LVUUuwTh peter230769 ?RT: andrea_rocco_31: RT @soacommunity: Enjoy the networking event at #ofmsummercamps want to attend next time ... pic.twitter.com/D1HRndi4 Niels Gorter ?#ofmsummercamps dinner in Lisbon. Great weather, scenery, training, people, on and on. 
Big THANKS @soacommunity JDeveloper & ADF ?Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Domain http://dlvr.it/1r0c2j Andrea Rocco ?RT @soacommunity: Jamy pastry at cafe Belem - who is the ghost there?!? http://via.me/-2x33uk6 Simon Haslam ?Sounds great - sorry I couldn't make it. RT @soacommunity: 6pm BPM advanced training hard work to build the POC #ofmsummercamps philmulhall ?A well earned rest after a hard days work @soacommunity #summercamps pic.twitter.com/LKK7VOVS philmulhall ?Some more hard working delegates @soacommunity #summercamps pic.twitter.com/gWpk1HZh SOA Community ?Error message at the BPM POC - will The #ace director understand the message and solve it? #ofmsummercamps pic.twitter.com/LFTEzNck Daniel Kleine-Albers ?posted on the #thecattlecrew blog: Assigning more memory to JDeveloper http://thecattlecrew.wordpress.com/2012/07/10/jdeveloper-quicktip-assigning-more-memory/ … OTNArchBeat ?BAM design pointers | Kavitha Srinivasan http://pub.vitrue.com/TOhP SOA Community ?Did you receive the July SOA community newsletter? read it! Want to become a member http://www.oracle.com/goto/emea/soa #soacommunity #soa #opn OracleBlogs ?Markus Zirn, Big Data with CEP and SOA @ SOA, Cloud &amp; Service Technology Symposium 2012 http://ow.ly/1lcSkb Andrejus Baranovskis Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Eric Elzinga ?Service Facade design pattern in OSB, http://bit.ly/NnOExN Eric Elzinga ?New BPEL Thread Pool in SOA 11g for Non-Blocking Invoke Activities from 11.1.1.6 (PS5), http://bit.ly/NnOc2G Gilberto Holms New Post: Siebel Connection Pool in Oracle Service Bus 11g http://wp.me/pRE8V-2z Oracle UPK & Tutor ?UPK Pre-Built Content Update: UPK pre-built content development efforts are always underway and growing. Ove... http://bit.ly/R2HeTj JDeveloper & ADF ?Troubleshooting BPMN process editor problems in 11.1.1.6 http://dlvr.it/1p0FfS orclateamsoa ?A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove... http://ow.ly/1kYqES SOA Community BPMN process editor problems in 11.1.1.6 by Mark Nelson http://redstack.wordpress.com/2012/06/27/bpmn-process-editor-problems-in-11-1-1-6 … #soacommunity #bpm OTNArchBeat ?SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G Andrejus Baranovskis ?ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) http://fb.me/1coX4r1X1 OTNArchBeat ?A Universal JMX Client for Weblogic –Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser http://pub.vitrue.com/mQVZ OTNArchBeat ?BPM – Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community twitter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Quotas - Using quotas on ZFSSA shares and projects and users

    - by Steve Tunstall
So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There are some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once. Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go to the share's General property page.

First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple.

So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.

Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.

Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than half way full.

That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a Quota on the share of 300MB.

So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the ROOT user, because you specified the Quota to be on the SHARE, not on a person. Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that.

Ok, here is where it gets FUN.... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota.
HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that. Hmmmmmm....

No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works.

Here's what it does... Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did. Go back to any share or project. You will see that you now have four new, custom properties on the bottom. Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the Root user.

After I hit Apply on the Share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set. Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one of 39.1MB. Hmmm... I did my math wrong, didn't I?

That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end. After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB.

You can customize a person at any time. Here, I took the Steve user, and specifically gave him a Quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no Quota at all. I can do this all day long.

Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur.

For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them.

I hope these tips have helped you out there. Quotas can be fun.
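By the way, if you would rather script the share-level quota than click through the BUI, later appliance firmware (2013.1 and newer) ships a RESTful API service that can do the same thing. Below is a minimal sketch in Python, assuming that REST service is enabled on your box; the host name and credentials are placeholders, and the endpoint path and "quota" property are my reading of the public ZFSSA REST docs, so verify against your own appliance (on a test system!) before trusting it.

# Minimal sketch: set the 300MB share-level quota from the example above
# via the ZFSSA RESTful API. Assumes firmware with the REST service
# enabled; endpoint path and property name are assumptions, verify first.
import requests

HOST = "https://zfssa.example.com:215"    # placeholder appliance address
AUTH = ("admin_user", "admin_password")   # placeholder credentials

# pool1/small/Share1 mirror the pool, project and share names used above
url = HOST + "/api/storage/v1/pools/pool1/projects/small/filesystems/Share1"

resp = requests.put(url, auth=AUTH, verify=False,
                    json={"quota": 300 * 1024 * 1024})   # quota in bytes
resp.raise_for_status()
print(resp.json())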

    Read the article

  • Special thanks to everyone that helped me in 2010.

    - by mbcrump
2010 has been a very good year for me and I wanted to create a list and thank everyone for what they have done for me. I also wanted to thank everyone for reading and subscribing to my blog. It is hard to believe that people actually want to read what I write. I feel like I owe a huge thanks to everyone listed below. Looking back upon 2010, I feel that I've grown as a developer and you are part of that reason. Sometimes we get caught up in day-to-day work and forget to give thanks to those that helped us along the way. The list below is mine; it includes people and companies. This list is obviously not going to include everyone that has helped, just those that have stood out in my mind. When I think back upon 2010, their names keep popping up in my head. So here goes, in no particular order.

People

Dave Campbell – For everything he has done for the Silverlight community with his Silverlight Cream blog. I can't think of a better person to get recognition at the Silverlight FireStarter event. I also wanted to thank him for spending several hours of his time helping me track down a bug in my feedburner account.
Victor Gaudioso – For his large collection of video tutorials on his blog and the passion and enthusiasm he has for Silverlight. We have talked on the phone and I've never met anyone so fired up for Silverlight.
Kunal Chowdhury – Kunal has always been available for me to bounce ideas off of. Kunal has also answered a lot of questions that stumped me. His blog and CodeProject article have been a great help to me and the Silverlight community.
Glen Gordon – I was looking frantically for a Windows Phone 7 several months before release and Glen found one for me. This allowed me to start a blog series on the Windows Phone 7 hardware and developing an application from start to finish that Scott Guthrie retweeted.
Jeff Blankenburg – For listening to my complaints in the early stages of Windows Phone 7. Jeff was always very polite and gave me his cell phone number to talk it over. He also walked me through several problems that I was having early on.
Pete Brown – For writing Silverlight 4 in Action. This book is definitely a labor of love. I followed Pete on Twitter as he was writing it and he spent a lot of late nights and weekends working on it. I felt a lot smarter after reading it the first time. The second time was even better.
John Papa – For all of his work on the Silverlight Firestarter and the Silverlight community in general. He has also helped me on a personal level with several things.
Daniel Heisler – For putting up with me the past year while we worked on many .NET projects together in 2010.
Alvin Ashcraft – For publishing a daily blog post on the best of .NET links. He has linked to my site many times and I really appreciate what he does for the community.
Chris Alcock – For publishing the Morning Brew every weekday. I remember when I first appeared on his site, I started getting hundreds of hits on my site and wondered if I was getting a DoS attack or something. It was great to find out that Chris had linked to one of my articles.
Joel Cochran – For spending a week teaching "Blend-O-Rama". This was one of my favorite sessions of this year. I learned a lot about Expression Blend from it and the best part was that it was free and during lunchtime.
Jeremy Likness – Jeremy is smart – very smart. I have learned a lot from Jeremy over the past year. He is also involved in the Silverlight community in every way possible, from forums to blog posts to screencasts to open source. It goes on and on.
The people that I met at VSLive Orlando 2010. I had a great time chatting with Walt Ritscher, Wallace McClure, Tim Huckabee and David Platt.
Also a special thanks to all of my friends on Twitter like @wilhil, @DBVaughan, @DataArtist, @wbm, @DirkStrauss and @rsringeri and many many more.

Software Companies / Events / Many of them gave me FREE stuff. =)

Microsoft (3) – I was sent a free coupon code by Microsoft to take the Silverlight 4 Beta Exam. I jumped on the offer and took the exam. It was great being selected to try out the exam before it went public, even though Microsoft eventually published a universal coupon code for everyone. I am still waiting to find out if I passed the exam. My fingers are crossed. Microsoft reaching out to me with some questions regarding the .NET community. I've never had a company contact me with such interest in the community. Having a contest where 75 people could win a $100 gift certificate and a T-shirt for submitting a Windows Phone 7 app. I submitted my app and won. All of the free launch events this year (Windows Phone 7, Visual Studio 2010, ASP.NET MVC).
Wintellect – For providing an awesome day of free technical training called T.E.N. Where else can you get free training from some of the best programmers in the world? I also won a contest from them that included a NETAdvantage Ultimate License from Infragistics.
VSLive – I attended the Orlando 2010 conference and it was the best developer's conference that I have ever attended. I got to know a lot of people at this conference and hung out with many wonderful speakers. I live-tweeted the event and while it may have annoyed some, the organizers of VSLive loved it. I won the contest on Twitter and they invited me back to the 2011 session of my choice. This is a very nice gift and I really appreciate the generosity.
BarcodeLib.com – For providing free barcode generating tools for a non-profit ASP.NET project that I was working on. Their third party controls really made this a breeze compared to my existing solution.
NDepend – It is absolutely the best tool to improve code quality. The product is extremely large and I would recommend heading over to their site to check it out.
Silverlight Spy – I was writing a blog post on Silverlight Spy and Koen Zwikstra provided a FREE license to me. If you ever wanted to peek inside of a Silverlight application then this is the tool for you. He is also working on a version that will support OOB and Windows Phone 7. I would recommend checking out his site.
Birmingham .NET Users Group / Silverlight Nights User Group – It takes a lot of time to put together a user group meeting every month yet it always seems to happen. I don't want to name names for fear of leaving someone out but both of these user groups are excellent if you live in the Birmingham, Alabama area.

Publishing Companies

Manning Publishing – For giving me early access to Silverlight 4 in Action by Pete Brown. It was really nice to be able to read this awesome book while Pete was writing it. I was also one of the first people to publish a review of the book.
Sams Publishing and DZone – For providing a copy of Silverlight 4 Unleashed by Laurent Bugnion for me to review for their site. The review is coming in January 2011.
Special Shoutout to the following 3rd Party Silverlight Controls It has been a great pleasure to work with the following companies on 3rd Party Control Giveaways every month. It always amazes me how every 3rd Party Control company is so eager to help out the community. I’ve never been turned down by any of these companies! These giveaways have sparked a lot of interest in Silverlight and hopefully I can continue giving away a new set every month. If you are a 3rd Party Control company and are interested in participating in these giveaways then please email me at mbcrump29[at]gmail[d0t].com. The companies below have already participated in my giveaways: Infragistics (December 2010) - Win a set of Infragistics Silverlight Controls with Data Visualization!  Mindscape (November 2010) - Mindscape Silverlight Controls + Free Mega Pack Contest Telerik (October 2010) - Win Telerik RadControls for Silverlight! ($799 Value) Again, I just wanted to say Thanks to everyone for helping me grow as a developer.  Subscribe to my feed

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:

SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

DECLARE @Start DATETIME
DECLARE @First DATETIME
DECLARE @Second DATETIME
DECLARE @Third DATETIME
DECLARE @Finish DATETIME
SET @Start = GETDATE()
SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
SET @First = GETDATE()
SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
SET @Second = GETDATE()
SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
SET @Third = GETDATE()
SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
SET @Finish = GETDATE()
SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter, and the duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:

SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] ... FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

Joining to sys.indexes gives us the index name as well:

SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ... FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]

These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address') , 1, NULL, NULL) AS ddips

Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the parameters, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

DECLARE @db_id SMALLINT;
DECLARE @object_id INT;
SET @db_id = DB_ID(N'AdventureWorks_2008');
SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
IF @db_id IS NULL
BEGIN PRINT N'Invalid database'; END
ELSE IF @object_id IS NULL
BEGIN PRINT N'Invalid object'; END
ELSE
BEGIN SELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL, 'LIMITED'); END;
GO

In cases where the results of querying this dmv don't have any effect on other processes (i.e. you are simply viewing the results in the SSMS results area), any inconsistency with the expected results will simply be noticed on inspection, and in the case of this blog this is the method I have used.

So, now we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we'll skip these as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

avg_fragmentation_in_percent – the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
fragment_count – the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
page_count – the total number of index or data pages in use.

OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count.

Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between four of these columns: column 1 (page_count) is the number of 8KB pages used by the index, column 2 (avg_page_space_used_in_percent) is how full each page is (how much of the 8KB has actual data written on it), column 3 (record_count) is how many records are recorded in the index and column 4 (avg_record_size_in_bytes) is the average size of each record. This approximates to: ((Col1*8) * 1024 * (Col2/100)) / Col3 = Col4*.

avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index, purely offered as a guide to the DBA to better understand the storage practices taking place.

So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and whether the columns used in the index should be analysed, to avoid new records needing to be inserted in the middle of the index and instead have them always added to the end.

* – it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly.

Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk.

Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
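To make that 'intelligent' defragmentation process a little more concrete, here is a minimal sketch (in Python with pyodbc, purely for illustration) that reads avg_fragmentation_in_percent and page_count from the dmv and suggests an action per index. The connection string is a placeholder, and the 5%/30% thresholds and 100-page floor are simply the commonly quoted Books Online guidelines, not a recommendation; the Ufford and Hallengren solutions mentioned above are far more thorough.

import pyodbc

# Placeholder connection string -- point this at a test server first
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=AdventureWorks2008;Trusted_Connection=yes")

sql = """
SELECT DB_NAME(ddips.database_id) AS database_name,
       OBJECT_NAME(ddips.object_id) AS table_name,
       i.name AS index_name,
       ddips.avg_fragmentation_in_percent,
       ddips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
INNER JOIN sys.indexes AS i
        ON i.object_id = ddips.object_id AND i.index_id = ddips.index_id
WHERE i.name IS NOT NULL   -- skip heaps
"""

for row in conn.cursor().execute(sql):
    if row.page_count < 100:                       # assumed: too small to bother with
        action = "ignore"
    elif row.avg_fragmentation_in_percent > 30:    # common Books Online guideline
        action = "REBUILD"
    elif row.avg_fragmentation_in_percent > 5:
        action = "REORGANIZE"
    else:
        action = "ok"
    print(row.database_name, row.table_name, row.index_name,
          round(row.avg_fragmentation_in_percent, 1), action)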

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 6

    - by Max
In this post, we are going to look into implementing lists in our twitter application and also at enhancing the data grid to display the status messages in a pleasing way with the profile images. Twitter lists are a really cool feature that they recently added. I love them and I have quite a few lists set up: one for .NET gurus, one for SQL Server gurus and one for a few celebrities. You can follow them here. Now let us move on to our tutorial.

1) Lists can be subscribed to in two ways: one is the user's own lists, which he has created, and the other is the lists that the user is following. For example, I've created 3 lists myself and I am following 1 other list created by another user. Both cannot be fetched in the same API call; it's a two-step process.

2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us do the first API call to get the lists that the user has created. For this the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here.

myService1.AllowReadStreamBuffering = true;
myService1.UseDefaultCredentials = false;
myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted);
myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml"));

3) Now let us look at implementing the event handler – ListsRequestCompleted – for this.

public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { StatusMessage.Text = "This application must be installed first."; parseXML(""); } else { //MessageBox.Show(e.Result.ToString()); parseXMLLists(e.Result.ToString()); } }

4) Now let us look at parseXMLLists in detail:

xdoc = XDocument.Parse(text); var answer = (from status in xdoc.Descendants("list") select status.Element("name").Value); foreach (var p in answer) { Border bord = new Border(); bord.CornerRadius = new CornerRadius(10, 10, 10, 10); Button b = new Button(); b.MinWidth = 70; b.Background = new SolidColorBrush(Colors.Black); b.Foreground = new SolidColorBrush(Colors.Black); //b.Width = 70; b.Height = 25; b.Click += new RoutedEventHandler(b_Click); b.Content = p.ToString(); bord.Child = b; TwitterListStack.Children.Add(bord); }

So what am I doing here? I am just dynamically creating a button for each of the lists, putting them within a StackPanel, and for each of these buttons I am creating an event handler, b_Click, which will be fired on button click. We will look into this method in detail soon. For now let us get the buttons displayed.

5) Now the user might have some lists to which he has subscribed. We need to create a button for these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. Reference is available here. The code will look like this below.
myService2.AllowReadStreamBuffering = true;
myService2.UseDefaultCredentials = false;
myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted);
myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml"));

6) In the event handler – ListsSubsRequestCompleted – we need to parse through the xml string and create a button for each of the lists subscribed; let us see how. I've taken only the "full_name"; you can choose what you want, refer to the documentation here. Note that the full_name will have the @<UserName>/<ListName> format – this will be useful very soon.

xdoc = XDocument.Parse(text); var answer = (from status in xdoc.Descendants("list") select status.Element("full_name").Value); foreach (var p in answer) { Border bord = new Border(); bord.CornerRadius = new CornerRadius(10, 10, 10, 10); Button b = new Button(); b.Background = new SolidColorBrush(Colors.Black); b.Foreground = new SolidColorBrush(Colors.Black); //b.Width = 70; b.MinWidth = 70; b.Height = 25; b.Click += new RoutedEventHandler(b_Click); b.Content = p.ToString(); bord.Child = b; TwitterListStack.Children.Add(bord); }

Please note, I am setting the button width to be auto based on the content and also giving it a MinWidth value. I wanted to create rounded-corner buttons, but for some reason it's not working. Also add this StackPanel – TwitterListStack – to Home.xaml:

<StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel>

After doing this, you will get a series of buttons at the top of the home page.

7) Now the button click event handler – b_Click. In this method, once the button is clicked, I call another method with the content string of the clicked button as the parameter.

Button b = (Button)e.OriginalSource;
getListStatuses(b.Content.ToString());

8) Now the getListStatuses method:

toggleProgressBar(true); WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1) { string[] arrays = null; arrays = listName.Split('/'); arrays[0] = arrays[0].Replace("@", " ").Trim(); //MessageBox.Show(arrays[0]); //MessageBox.Show(arrays[1]); string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml"; //MessageBox.Show(url); myService.DownloadStringAsync(new Uri(url)); } else myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml"));

Please note that the url to look at will be different based on the list clicked. If it's a user-created list, the url format will be http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml But if it is a subscribed list, it will be http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml The first one is pretty straightforward to implement, but if it's a subscribed list, we need to split the listName string to get the list owner and list name and use them to form the url.
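Just to make that url-building decision crystal clear, here is the same logic sketched in Python, purely as an illustration (the real code for this app is the C# above; the username, password and list name are placeholders, and these old v1 XML endpoints with basic auth are exactly the ones quoted in this post, which Twitter has long since retired):

import requests

def get_list_statuses_url(current_user, list_name):
    # Subscribed lists arrive as "@owner/listname"; our own lists as just "listname"
    if "@" in list_name and "/" in list_name:
        owner, name = list_name.split("/", 1)
        owner = owner.replace("@", "").strip()
        return "http://api.twitter.com/1/%s/lists/%s/statuses.xml" % (owner, name)
    return "http://api.twitter.com/1/%s/lists/%s/statuses.xml" % (current_user, list_name)

# Placeholder credentials and list name, for illustration only
resp = requests.get(get_list_statuses_url("max", "@someuser/dotnet-gurus"),
                    auth=("username", "password"))
print(resp.status_code)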
So that is what I've done in this method: if the listName has got "@" and "/" in it, I build the url differently.

9) Until now, we've been using only a few nodes of the status message xml string; now we will look to fetch a new field – "profile_image_url". Images in the datagrid – COOL. For that, we need to modify our Status.cs file to include two more properties, one string and one BitmapImage, with get and set.

public string profile_image_url { get; set; }
public BitmapImage profileImage { get; set; }

10) Now let us change the generic parseXML method which is used for binding to the datagrid.

public void parseXML(string text) { XDocument xdoc; xdoc = XDocument.Parse(text); statusList = new List<Status>(); statusList = (from status in xdoc.Descendants("status") select new Status { ID = status.Element("id").Value, Text = status.Element("text").Value, Source = status.Element("source").Value, UserID = status.Element("user").Element("id").Value, UserName = status.Element("user").Element("screen_name").Value, profile_image_url = status.Element("user").Element("profile_image_url").Value, profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value)) }).ToList(); DataGridStatus.ItemsSource = statusList; StatusMessage.Text = "Datagrid refreshed."; toggleProgressBar(false); }

We are here creating a new bitmap image from the image url, creating a new Status object for every status and binding them to the data grid. Refer to the Twitter API documentation here. You can choose any column you want.

11) Until now, we've been using auto-generated columns for the data grid, but if you want it to be really cool, you need to define the columns with templates, etc…

<data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400"> <data:DataGrid.Columns> <data:DataGridTemplateColumn Width="50" Header=""> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" /> <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> </data:DataGrid.Columns> </data:DataGrid>

I've used only three columns – profile image, username and status text. Now our datagrid will look super cool like this. Coincidentally, Tim Heuer, who is a Silverlight guru and works on the SL team at Microsoft, is in the screenshot. His blog is really super. Here is the zipped file for all the xaml, xaml.cs & class files.

Ok, let us stop here for now; we will look into implementing a few more features in the next few posts and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries / suggestions feel free to comment below or contact me. Cheers! Technorati Tags: Silverlight,LINQ,Twitter API,Twitter,Silverlight 4

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put EM12c R4) is the latest update to the product. As previous versions, this release provides tons of enhancements and bug fixes, attributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard) Additional Storage Options for Snap Clone (includes support for Database feature CloneDB) Improved Rapid Start Kits Extensible Metering and Chargeback Miscellaneous Enhancements 1. Comprehensive Database Service Catalog Before we get deep into implementation of a service catalog, lets first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers i recommend reading are: Service Catalogs: Defining Standardized Database Service High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA] EM12c comes with an out-of-the-box service catalog and self service portal since release 1. For the customers, it provides the following benefits: Present a collection of standardized database service definitions, Define standardized pools of hardware and software for provisioning, Role based access to cater to different class of users, Automated procedures to provision the predefined database definitions, Setup chargeback plans based on service tiers and database configuration sizes, etc Starting Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple data guard based standby databases. Some salient points of the data guard integration: Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites) The standby databases can be single instance, RAC, or RAC One Node databases Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software The standby databases can be in either mount or read only (requires active data guard option) mode All database versions 10g to 12c supported (as certified with EM 12c) All 3 protection modes can be used - Maximum availability, performance, security Log apply can be set to sync or async along with the required apply lag The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below:  Primary  Standby [1 or more]  EM 12cR4  SI  -  SI  SI  RAC -  RAC SI  RAC RAC  RON -  RON RON where RON = RAC One Node is supported via custom post-scripts in the service template A sample service catalog would look like the image below. 
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

2. Additional Storage Options for Snap Clone

In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time, two things enterprises would love to save on.

This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. This delivers the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.

In Release 4, we expand the scope of options supported by snap clone with the addition of the database feature CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB

The advantages of the new CloneDB integration with EM12c Snap Clone are:

Space and time savings
Ease of setup - no additional software is required other than the Oracle database binary
Works on all platforms
Reduced dependence on storage administrators
Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
Uses dNFS to deliver better performance, availability, and scalability than kernel NFS
Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

3. Improved Rapid Start Kits

DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. The kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

Steps to use the kit:

The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
It can be run from this default location or from any server which has the emcli client installed
For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

The database_cloud_setup.py script takes two inputs:

Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

4. Extensible Metering and Chargeback

Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

Extend chargeback to any target type managed in EM
Promote any metric in EM as a chargeback entity
Extend the list of charge items via metric or configuration extensions
Model abstract entities like no. of backup requests, job executions, support requests, etc.

A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

5. Miscellaneous Enhancements

There are other miscellaneous, yet important, enhancements that are worth a mention.
Most of these have been asked for by customers like you. They are:

Custom naming of DB Services
  Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces
  Every custom name is validated for uniqueness in EM

'Create like' of Service Templates
  Creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.

Profile viewer
  View the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template

Cleanup automation - for failed and successful requests
  A single emcli command cleans up all remnant artifacts of a failed request
  Cleanup can be performed on a per request basis or for the entire pool
  As an extension, you can also delete successful requests

Improved delete user workflow
  Allows administrators to reassign cloud resources to another user or delete all of them

Support for multiple tablespaces for Schema as a Service
  In addition to multiple schemas, users can also specify multiple tablespaces per request

I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

References:
Cloud Management Page on OTN
Cloud Administration Guide [Documentation]

-- Adeesh Fulay (@adeeshf)

    Read the article

  • BugZilla XML RPC Interface

    - by Damo
I am attempting to set up BugZilla to receive bug reports from another system using the XML-RPC interface. BugZilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser I get the error:

The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29

Following this I installed the test-taint package from the default perl repository; this installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error:

502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.

So I run checksetup.pl, which informs me that:

Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558.

Installing Test-Taint from CPAN gives the same result. I assume XML-RPC is reliant on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry, the script times out.

How can I get the XML-RPC / Test-Taint functionality working?

Current Setup:
IIS7.5 on Windows Server 2008
Bugzilla 4.2.2
Perl 5.14.2

C:\BugZilla>perl checksetup.pl
Set up gcc environment - 3.4.5 (mingw-vista special r3)
* This is Bugzilla 4.2.2 on perl 5.14.2
* Running on Win2008 Build 6002 (Service Pack 2)
Checking perl modules...
Checking for CGI.pm (v3.51) ok: found v3.59
Checking for Digest-SHA (any) ok: found v5.62
Checking for TimeDate (v2.21) ok: found v2.24
Checking for DateTime (v0.28) ok: found v0.76
Checking for DateTime-TimeZone (v0.79) ok: found v1.48
Checking for DBI (v1.614) ok: found v1.622
Checking for Template-Toolkit (v2.22) ok: found v2.24
Checking for Email-Send (v2.16) ok: found v2.198
Checking for Email-MIME (v1.904) ok: found v1.911
Checking for URI (v1.37) ok: found v1.59
Checking for List-MoreUtils (v0.22) ok: found v0.33
Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004
Checking for Win32 (v0.35) ok: found v0.44
Checking for Win32-API (v0.55) ok: found v0.64
Checking available perl DBD modules...
Checking for DBD-Pg (v1.45) ok: found v2.18.1
Checking for DBD-mysql (v4.001) ok: found v4.021
Checking for DBD-SQLite (v1.29) ok: found v1.33
Checking for DBD-Oracle (v1.19) ok: found v1.30
The following Perl modules are optional:
Checking for GD (v1.20) ok: found v2.46
Checking for Chart (v2.1) ok: found v2.4.5
Checking for Template-GD (any) ok: found v1.56
Checking for GDTextUtil (any) ok: found v0.86
Checking for GDGraph (any) ok: found v1.44
Checking for MIME-tools (v5.406) ok: found v5.503
Checking for libwww-perl (any) ok: found v6.02
Checking for XML-Twig (any) ok: found v3.41
Checking for PatchReader (v0.9.6) ok: found v0.9.6
Checking for perl-ldap (any) ok: found v0.44
Checking for Authen-SASL (any) ok: found v2.15
Checking for RadiusPerl (any) ok: found v0.20
Checking for SOAP-Lite (v0.712) ok: found v0.715
Checking for JSON-RPC (any) ok: found v0.96
Checking for JSON-XS (v2.0) ok: found v2.32
Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558.
Checking for Test-Taint (any) ok: found v1.04
Checking for HTML-Parser (v3.67) ok: found v3.68
Checking for HTML-Scrubber (any) ok: found v0.09
Checking for Encode (v2.21) ok: found v2.44
Checking for Encode-Detect (any) not found
Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316
Checking for Email-Reply (any) ok: found v1.202
Checking for TheSchwartz (any) not found
Checking for Daemon-Generic (any) not found
Checking for mod_perl (v1.999022) not found
Checking for Apache-SizeLimit (v0.96) not found
***********************************************************************
* OPTIONAL MODULES *
***********************************************************************
* Certain Perl modules are not required by Bugzilla, but by *
* installing the latest version you gain access to additional *
* features. *
* *
* The optional modules you do not have installed are listed below, *
* with the name of the feature they enable. Below that table are the *
* commands to install each module. *
***********************************************************************
* MODULE NAME * ENABLES FEATURE(S) *
***********************************************************************
* Encode-Detect * Automatic charset detection for text attachments *
* TheSchwartz * Mail Queueing *
* Daemon-Generic * Mail Queueing *
* mod_perl * mod_perl *
* Apache-SizeLimit * mod_perl *
***********************************************************************
COMMANDS TO INSTALL OPTIONAL MODULES:
Encode-Detect: ppm install Encode-Detect
TheSchwartz: ppm install TheSchwartz
Daemon-Generic: ppm install Daemon-Generic
mod_perl: ppm install mod_perl
Apache-SizeLimit: ppm install Apache-SizeLimit
Reading ./localconfig...
OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/
Checking for DBD-mysql (v4.001) ok: found v4.021
Checking for MySQL (v5.0.15) ok: found v5.5.27
WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file.
Removing existing compiled templates...
Precompiling templates...done.
checksetup.pl complete.
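To isolate whether Test::Taint itself works under taint mode, independently of Bugzilla, a one-liner along these lines can help (a diagnostic sketch; it assumes the C:\Perl binary is the one on PATH, and that Test::Taint's taint_checking() function is exported as documented):

perl -T -MTest::Taint -e "print taint_checking() ? qq(taint checking on\n) : qq(taint checking off\n)"

If this prints the same Taint.pm line 334 warning, the problem would appear to lie in the Test-Taint build itself rather than in Bugzilla's xmlrpc.cgi.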

    Read the article

  • WPF Binding Error reported when Binding appears to work fine

    - by Noldorin
I am trying to create a custom TabItem template/style in my WPF 4.0 application (using VS 2010 Pro RTM), but in spite of everything seeming to work correctly, I am noticing a binding error in the trace window. The resource dictionary XAML I use to style the TabItems of a TabControl is given in full here. (Just create a simple TabControl with several items and apply the given ResourceDictionary to test it out.) Specifically, the error is occurring due to the following line (discovered through trial-and-error testing, since Visual Studio isn't actually reporting it at design time):

<TranslateTransform X="{Binding ActualWidth, ElementName=leftSideBorderPath}"/>

The full error given in the trace (Output window) is the following:

System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=ActualWidth; DataItem=null; target element is 'TranslateTransform' (HashCode=35345840); target property is 'X' (type 'Double')

The error occurs on load and is then repeated 5 times (note that I have 3 tab items in my example). It also occurs consistently and repeatedly whenever the Window is resized, for example - filling the Output window. Perhaps every time the TabItem layout is updated? And again, though it is not reported, the error very much seems to be due to the fact that I am binding to any element at all, not specifically leftSideBorderPath or the ActualWidth property. For example, changing this line to the following fixes things:

<TranslateTransform X="25"/>

Unfortunately, hard-coding the value isn't really an option. This issue seems very strange to me in that the binding does appear to be giving the correct results. (Inspecting the X value of the TranslateTransform at runtime clearly shows the correct bound value, and the ClipGeometry when viewed is exactly what it should be.) Neither Visual Studio nor WPF seems to be giving me any more information on the cause of the error (setting PresentationTraceSources.TraceLevel to High doesn't help), yet the fact that things are working despite the error being reported inclines me to think that this is some fringe-case WPF bug.

As a side note, the Visual Studio WPF designer and XAML editor are giving me a problem with the following line:

<PathGeometry Figures="{Binding Source={StaticResource TabSideFillFigures}}"/>

Although WPF (at runtime) is perfectly happy binding Figures to the TabSideFillFigures string, with the Binding enforcing the use of the TypeConverter, the XAML editor and WPF designer are complaining. All the XAML code for the ControlTemplate is underlined and I get the following errors in the Error List:

Error 9 '{DependencyProperty.UnsetValue}' is not a valid value for the 'System.Windows.Controls.Control.Template' property on a Setter. C:\Users\Alex\Documents\Visual Studio 2010\Projects\Ircsil\devel\Ircsil\MainWindow.xaml 1 1 Ircsil
Error 10 Object reference not set to an instance of an object. C:\Users\Alex\Documents\Visual Studio 2010\Projects\Ircsil\devel\Ircsil\Skins\Default\MainSkin.xaml 58 17 Ircsil

Again, to repeat, everything works perfectly well at runtime, which is what makes this particularly odd... Could someone perhaps shed some light on these issues, in particular the first (which seems to be a potential WPF bug) and the latter (which seems to be a Visual Studio bug)? Any sort of feedback or suggestions would be much appreciated!
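One workaround worth trying (a sketch, untested against this particular template): the ElementName lookup needs a governing FrameworkElement, which a Freezable like TranslateTransform does not have, but WPF 4.0's x:Reference markup extension resolves the named element through the name scope instead:

<TranslateTransform X="{Binding ActualWidth, Source={x:Reference leftSideBorderPath}}"/>

Note that the designer and XAML editor may still flag x:Reference even when it behaves correctly at runtime.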

    Read the article

  • GIT: clone works, remote push doesn't. Remote repository over copssh.

    - by Rui
    Hi all, I've "setup-a-msysgit-server-with-copssh-on-windows", following Tim Davis' guide and I was now learning how to use the git commands, following Jason Meridth's guide, and I have managed to get everything working fine, but now I can't pass the push command. I have set the server and the client on the same machine (for now), win7-x64. Here is some info of how things are set up: CopSSH Folder : C:/SSH/ Local Home Folder : C:/Users/rvc/ Remote Home Folder: C:/SSH/home/rvc/ # aka /cygdrive/c/SSH/home/rvc/ git remote rep : C:/SSH/home/rvc/myapp.git # empty rep At '/SSH/home/rvc/.bashrc' and 'Users/rvc/.bashrc': export HOME=/cygdrive/c/SSH/home/rvc gitpath='/cygdrive/c/Program Files (x86)/Git/bin' gitcorepath='/cygdrive/c/Program Files (x86)/Git/libexec/git-core' PATH=${gitpath}:${gitcorepath}:${PATH} So, cloning works (everything bellow is done via "Git Bash here" :P): rvc@RVC-DESKTOP /c/code $ git clone ssh://[email protected]:5858/SSH/home/rvc/myapp.git Initialized empty Git repository in C:/code/myapp/.git/ warning: You appear to have cloned an empty repository. rvc@RVC-DESKTOP /c/code $ cd myapp rvc@RVC-DESKTOP /c/code/myapp (master) $ git remote -v origin ssh://[email protected]:5858/SSH/home/rvc/myapp.git (fetch) origin ssh://[email protected]:5858/SSH/home/rvc/myapp.git (push) Then I create a file: rvc@RVC-DESKTOP /c/code/myapp (master) $ touch test.file rvc@RVC-DESKTOP /c/code/myapp (master) $ ls test.file Try to push it and get this error: rvc@RVC-DESKTOP /c/code/myapp (master) $ git add test.file rvc@RVC-DESKTOP /c/code/myapp (master) $ GIT_TRACE=1 git push origin master trace: built-in: git 'push' 'origin' 'master' trace: run_command: 'C:\Users\rvc\bin\plink.exe' '-batch' '-P' '5858' '[email protected] 68.1.65' 'git-receive-pack '\''/SSH/home/rvc/myapp.git'\''' git: '/SSH/home/rvc/myapp.git' is not a git command. See 'git --help'. fatal: The remote end hung up unexpectedly "git: '/SSH/home/rvc/myapp.git' is not a git command. See 'git --help'." .. what?! So, can anybody help? If you need any aditional information that I've missed, just ask. Thanks in advanced. EDIT: RAAAGE!! I'm having the same problem again, but now with ssh: rvc@RVC-DESKTOP /c/code/myapp (master) $ GIT_TRACE=1 git push trace: built-in: git 'push' trace: run_command: 'ssh' '-p' '5885' '[email protected]' 'git-receive-pack '\''/ SSH/home/rvc/myapp.git'\''' git: '/SSH/home/rvc/myapp.git' is not a git command. See 'git --help'. fatal: The remote end hung up unexpectedly I've tried GUI push, and shows the same message. git: '/SSH/home/rvc/myapp.git' is not a git command. See 'git --help'. Pushing to ssh://[email protected]:5885/SSH/home/rvc/myapp.git fatal: The remote end hung up unexpectedly Here's the currents .bashrc: C:\Users\rvc.bashrc (I think this is used only by cygwin/git bash): export HOME=/c/SSH/home/rvc gitpath='/c/Program Files (x86)/Git/bin' gitcorepath='/c/Program Files (x86)/Git/libexec/git-core' export GIT_EXEC_PATH=${gitcorepath} PATH=${gitpath}:${gitcorepath}:${PATH} C:\SSH\home\rvc.bashrc (.. and this is used when git connects via ssh to the "remote" server): export HOME=/c/SSH/home/rvc gitpath='/cygdrive/c/Program Files (x86)/Git/bin' gitcorepath='/cygdrive/c/Program Files (x86)/Git/libexec/git-core' export GIT_EXEC_PATH=${gitcorepath} PATH=${gitpath}:${gitcorepath}:${PATH}

    Read the article

  • Why might a System.String object not cache its hash code?

    - by Dan Tao
A glance at the source code for string.GetHashCode using Reflector reveals the following (for mscorlib.dll version 4.0):

public override unsafe int GetHashCode()
{
    fixed (char* str = ((char*) this))
    {
        char* chPtr = str;
        int num = 0x15051505;
        int num2 = num;
        int* numPtr = (int*) chPtr;
        for (int i = this.Length; i > 0; i -= 4)
        {
            num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
            if (i <= 2)
            {
                break;
            }
            num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1];
            numPtr += 2;
        }
        return (num + (num2 * 0x5d588b65));
    }
}

Now, I realize that the implementation of GetHashCode is not specified and is implementation-dependent, so the question "is GetHashCode implemented in the form of X or Y?" is not really answerable. I'm just curious about a few things:

If Reflector has disassembled the DLL correctly and this is the implementation of GetHashCode (in my environment), am I correct in interpreting this code to indicate that a string object, based on this particular implementation, would not cache its hash code?

Assuming the answer is yes, why would this be? It seems to me that the memory cost would be minimal (one more 32-bit integer, a drop in the pond compared to the size of the string itself) whereas the savings would be significant, especially in cases where, e.g., strings are used as keys in a hashtable-based collection like a Dictionary<string, [...]>. And since the string class is immutable, it isn't like the value returned by GetHashCode will ever even change. What could I be missing?
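To make the memory trade-off concrete, here is a minimal sketch (purely illustrative, not the BCL's code) of what a cached-hash wrapper around an immutable value could look like; the cost is exactly the one extra 32-bit field weighed above, with 0 reserved as a "not yet computed" sentinel so that the single int write stays atomic:

public sealed class CachedKey
{
    private readonly string _value;
    private int _hash; // 0 means "not computed yet"

    public CachedKey(string value) { _value = value; }

    public override int GetHashCode()
    {
        int h = _hash;
        if (h == 0)
        {
            h = _value.GetHashCode();
            if (h == 0) h = 1; // keep 0 free for the sentinel
            _hash = h;         // int writes are atomic; worst case is a redundant recomputation
        }
        return h;
    }
}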
UPDATE: In response to Andras Zoltan's closing remark: There's also the point made in Tim's answer (+1 there). If he's right, and I think he is, then there's no guarantee that a string is actually immutable after construction, therefore to cache the result would be wrong. Whoa, whoa there! This is an interesting point to make (and yes it's very true), but I really doubt that this was taken into consideration in the implementation of GetHashCode. The statement "therefore to cache the result would be wrong" implies to me that the framework's attitude regarding strings is "Well, they're supposed to be immutable, but really if developers want to get sneaky they're mutable so we'll treat them as such." This is definitely not how the framework views strings. It fully relies on their immutability in so many ways (interning of string literals, assignment of all zero-length strings to string.Empty, etc.) that, basically, if you mutate a string, you're writing code whose behavior is entirely undefined and unpredictable. I guess my point is that for the author(s) of this implementation to worry, "What if this string instance is modified between calls, even though the class as it is publicly exposed is immutable?" would be like for someone planning a casual outdoor BBQ to think to him-/herself, "What if someone brings an atomic bomb to the party?" Look, if someone brings an atom bomb, party's over.

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, including some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

One ASP.NET

With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File->New Project with VS 2013 you'll now see a single ASP.NET Project option:

Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it. For example, you could start with a Web Forms template and add Web API support to it, or create an MVC project and also enable Web Forms pages within it:

This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span.

Richer Authentication Support

The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications - and makes it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application:

No Authentication
Individual User Accounts (Single Sign-On support with Facebook, Twitter, Google, and Microsoft ID - or Forms Auth with ASP.NET Membership)
Organizational Accounts (Single Sign-On support with Windows Azure Active Directory)
Windows Authentication (Active Directory in an intranet application)

The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds). It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories. Simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory to do this:

This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app your users can easily and securely sign in using their Active Directory credentials within it - regardless of where the application is hosted on the Internet.

For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

Responsive Project Templates with Bootstrap

The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops.
For example, in a browser window the home page created by the MVC template looks like the following:

When you resize the browser to a narrow window to see how it would look on a phone, you can notice how the contents gracefully wrap around and the horizontal top menu turns into an icon:

When you click the menu icon above, it expands into a vertical menu - which enables a good navigation experience for small screen real-estate devices:

We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices - and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

Visual Studio Web Tooling Improvements

Visual Studio 2013 includes a new, much richer, HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS:

This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications:

The screenshot below demonstrates how the HTML editor can also now inspect your page at design-time to determine all of the CSS classes that are available. In this case, the auto-completion list contains classes from Bootstrap's CSS file. No more guessing at which Bootstrap element names you need to use:

Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

Browser Link - SignalR channel between browser and Visual Studio

The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop/test against multiple browsers in parallel.

Browser Link also exposes an API to enable developers to write Browser Link extensions. By taking advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross boundaries between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote-control mobile emulators, and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

ASP.NET Scaffolding

ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms.
When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API Controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add->New Scaffold Item context menu:

Support for scaffolding async controllers uses the new async features from Entity Framework 6.

ASP.NET Identity

ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports Claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

ASP.NET Web API 2

ASP.NET Web API 2 has a bunch of great improvements including:

Attribute routing

ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers like this:
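A representative sketch of such an annotated controller (the controller, model, and repository names here are illustrative, not from the original post):

public class OrdersController : ApiController
{
    // GET customers/1/orders
    [Route("customers/{customerId}/orders")]
    public IEnumerable<Order> GetOrdersByCustomer(int customerId)
    {
        return _repository.GetOrdersForCustomer(customerId); // hypothetical data access
    }
}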
OAuth 2.0 support

The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices.

OData Improvements

ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Below are some of the specific enhancements in ASP.NET Web API 2 OData:

Support for $select, $expand, $batch, and $value
Improved extensibility
Type-less support
Reuse an existing model

OWIN Integration

ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.

More Web API Improvements

In addition to the features above there have been a host of other improvements in ASP.NET Web API, including:

CORS support
Authentication Filters
Filter Overrides
Improved Unit Testability
Portable ASP.NET Web API Client

To learn more go to http://www.asp.net/web-api/

ASP.NET SignalR 2

ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We've also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

ASP.NET MVC 5

The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features including:

Authentication filters: These filters allow you to specify authentication logic per-action, per-controller or globally for all controllers.
Attribute Routing: Attribute Routing allows you to define your routes on actions or controllers.

To learn more go to http://www.asp.net/mvc

Entity Framework 6 Improvements

Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space:

Async and Task<T> Support

EF6's new async query and save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios. This allows you to free up threads that might otherwise be blocked on data access requests, enabling them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds, the thread will be re-queued within your ASP.NET application and execution will continue. This enables you to easily write significantly more scalable server code; an ASP.NET Web API action, for example, can simply await the new EF6 async query methods.

Interception and Logging

Interception and SQL logging allow you to view - or even change - every command that is sent to the database by Entity Framework. This includes a simple, human-readable log - which is great for debugging - as well as some lower-level building blocks that give you access to the command and results. Here is an example of wiring up the simple log to Debug in the constructor of an MVC controller:
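A minimal sketch of that wiring (the context and controller names are illustrative):

public class OrdersController : Controller
{
    private readonly StoreContext db = new StoreContext(); // hypothetical EF6 DbContext

    public OrdersController()
    {
        // write every SQL command EF sends to the database to the Output window
        db.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);
    }
}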
Custom Code-First Conventions

The new Custom Code-First Conventions enable bulk configuration of a Code First model, reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First conventions. For example, the following convention configures all properties that are called 'Key' to be the primary key of the entity they belong to. This is different from the default Code First convention, which expects Id or <type name>Id.
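A sketch of that convention, registered in OnModelCreating (the context it lives in is assumed, not shown):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // treat any property named "Key" as the primary key of its entity
    modelBuilder.Properties()
                .Where(p => p.Name == "Key")
                .Configure(p => p.IsKey());
}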
Connection Resiliency

The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle - and potentially retry - failed database operations. This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible - but overridable - defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new Code-Based Configuration support:
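A minimal registration sketch (the configuration class name is illustrative):

public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        // use the built-in SQL Azure retry strategy for all SqlClient connections
        SetExecutionStrategy("System.Data.SqlClient",
                             () => new SqlAzureExecutionStrategy());
    }
}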
These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features.

Microsoft OWIN Components

Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana.

Summary

Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today!

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Getting Started Building Windows 8 Store Apps with XAML/C#

    - by dwahlin
    Technology is fun isn’t it? As soon as you think you’ve figured out where things are heading a new technology comes onto the scene, changes things up, and offers new opportunities. One of the new technologies I’ve been spending quite a bit of time with lately is Windows 8 store applications. I posted my thoughts about Windows 8 during the BUILD conference in 2011 and still feel excited about the opportunity there. Time will tell how well it ends up being accepted by consumers but I’m hopeful that it’ll take off. I currently have two Windows 8 store application concepts I’m working on with one being built in XAML/C# and another in HTML/JavaScript. I really like that Microsoft supports both options since it caters to a variety of developers and makes it easy to get started regardless if you’re a desktop developer or Web developer. Here’s a quick look at how the technologies are organized in Windows 8: In this post I’ll focus on the basics of Windows 8 store XAML/C# apps by looking at features, files, and code provided by Visual Studio projects. To get started building these types of apps you’ll definitely need to have some knowledge of XAML and C#. Let’s get started by looking at the Windows 8 store project types available in Visual Studio 2012.   Windows 8 Store XAML/C# Project Types When you open Visual Studio 2012 you’ll see a new entry under C# named Windows Store. It includes 6 different project types as shown next.   The Blank App project provides initial starter code and a single page whereas the Grid App and Split App templates provide quite a bit more code as well as multiple pages for your application. The other projects available can be be used to create a class library project that runs in Windows 8 store apps, a WinRT component such as a custom control, and a unit test library project respectively. If you’re building an application that displays data in groups using the “tile” concept then the Grid App or Split App project templates are a good place to start. An example of the initial screens generated by each project is shown next: Grid App Split View App   When a user clicks a tile in a Grid App they can view details about the tile data. With a Split View app groups/categories are shown and when the user clicks on a group they can see a list of all the different items and then drill-down into them:   For the remainder of this post I’ll focus on functionality provided by the Blank App project since it provides a simple way to get started learning the fundamentals of building Windows 8 store apps.   Blank App Project Walkthrough The Blank App project is a great place to start since it’s simple and lets you focus on the basics. In this post I’ll focus on what it provides you out of the box and cover additional details in future posts. Once you have the basics down you can move to the other project types if you need the functionality they provide. The Blank App project template does exactly what it says – you get an empty project with a few starter files added to help get you going. This is a good option if you’ll be building an app that doesn’t fit into the grid layout view that you see a lot of Windows 8 store apps following (such as on the Windows 8 start screen). I ended up starting with the Blank App project template for the app I’m currently working on since I’m not displaying data/image tiles (something the Grid App project does well) or drilling down into lists of data (functionality that the Split App project provides). 
The Blank App project provides images for the tiles and splash screen (you’ll definitely want to change these), a StandardStyles.xaml resource dictionary that includes a lot of helpful styles such as buttons for the AppBar (a special type of menu in Windows 8 store apps), an App.xaml file, and the app’s main page which is named MainPage.xaml. It also adds a Package.appxmanifest that is used to define functionality that your app requires, app information used in the store, plus more. The App.xaml, App.xaml.cs and StandardStyles.xaml Files The App.xaml file handles loading a resource dictionary named StandardStyles.xaml which has several key styles used throughout the application: <Application x:Class="BlankApp.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="using:BlankApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <!-- Styles that define common aspects of the platform look and feel Required by Visual Studio project and item templates --> <ResourceDictionary Source="Common/StandardStyles.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application>   StandardStyles.xaml has style definitions for different text styles and AppBar buttons. If you scroll down toward the middle of the file you’ll see that many AppBar button styles are included such as one for an edit icon. Button styles like this can be used to quickly and easily add icons/buttons into your application without having to be an expert in design. <Style x:Key="EditAppBarButtonStyle" TargetType="ButtonBase" BasedOn="{StaticResource AppBarButtonStyle}"> <Setter Property="AutomationProperties.AutomationId" Value="EditAppBarButton"/> <Setter Property="AutomationProperties.Name" Value="Edit"/> <Setter Property="Content" Value="&#xE104;"/> </Style> Switching over to App.xaml.cs, it includes some code to help get you started. An OnLaunched() method is added to handle creating a Frame that child pages such as MainPage.xaml can be loaded into. The Frame has the same overall purpose as the one found in WPF and Silverlight applications - it’s used to navigate between pages in an application. /// <summary> /// Invoked when the application is launched normally by the end user. Other entry points /// will be used when the application is launched to open a specific file, to display /// search results, and so forth. 
/// </summary> /// <param name="args">Details about the launch request and process.</param> protected override void OnLaunched(LaunchActivatedEventArgs args) { Frame rootFrame = Window.Current.Content as Frame; // Do not repeat app initialization when the Window already has content, // just ensure that the window is active if (rootFrame == null) { // Create a Frame to act as the navigation context and navigate to the first page rootFrame = new Frame(); if (args.PreviousExecutionState == ApplicationExecutionState.Terminated) { //TODO: Load state from previously suspended application } // Place the frame in the current Window Window.Current.Content = rootFrame; } if (rootFrame.Content == null) { // When the navigation stack isn't restored navigate to the first page, // configuring the new page by passing required information as a navigation // parameter if (!rootFrame.Navigate(typeof(MainPage), args.Arguments)) { throw new Exception("Failed to create initial page"); } } // Ensure the current window is active Window.Current.Activate(); }   Notice that in addition to creating a Frame the code also checks to see if the app was previously terminated so that you can load any state/data that the user may need when the app is launched again. If you’re new to the lifecycle of Windows 8 store apps the following image shows how an app can be running, suspended, and terminated.   If the user switches from an app they’re running the app will be suspended in memory. The app may stay suspended or may be terminated depending on how much memory the OS thinks it needs so it’s important to save state in case the application is ultimately terminated and has to be started fresh. Although I won’t cover saving application state here, additional information can be found at http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh465099.aspx. Another method in App.xaml.cs named OnSuspending() is also included in App.xaml.cs that can be used to store state as the user switches to another application:   /// <summary> /// Invoked when application execution is being suspended. Application state is saved /// without knowing whether the application will be terminated or resumed with the contents /// of memory still intact. /// </summary> /// <param name="sender">The source of the suspend request.</param> /// <param name="e">Details about the suspend request.</param> private void OnSuspending(object sender, SuspendingEventArgs e) { var deferral = e.SuspendingOperation.GetDeferral(); //TODO: Save application state and stop any background activity deferral.Complete(); } The MainPage.xaml and MainPage.xaml.cs Files The Blank App project adds a file named MainPage.xaml that acts as the initial screen for the application. It doesn’t include anything aside from an empty <Grid> XAML element in it. The code-behind class named MainPage.xaml.cs includes a constructor as well as a method named OnNavigatedTo() that is called once the page is displayed in the frame.   /// <summary> /// An empty page that can be used on its own or navigated to within a Frame. /// </summary> public sealed partial class MainPage : Page { public MainPage() { this.InitializeComponent(); } /// <summary> /// Invoked when this page is about to be displayed in a Frame. /// </summary> /// <param name="e">Event data that describes how this page was reached. 
The Parameter
/// property is typically used to configure the page.</param>
protected override void OnNavigatedTo(NavigationEventArgs e)
{
}
}

If you're experienced with XAML you can switch to Design mode and start dragging and dropping XAML controls from the ToolBox in Visual Studio. If you prefer to type XAML you can do that as well in the XAML editor or while in split mode. Many of the controls available in WPF and Silverlight are included, such as Canvas, Grid, StackPanel, and Border for layout. Standard input controls are also included, such as TextBox, CheckBox, PasswordBox, RadioButton, ComboBox, ListBox, and more. MediaElement is available for rendering video or playing audio files. Some of the "common" XAML controls included out of the box are shown next:

Although XAML/C# Windows 8 store apps don't include all of the functionality available in Silverlight 5, the core functionality required to build store apps is there, with additional functionality available in open source projects such as Callisto (started by Microsoft's Tim Heuer), Q42.WinRT, and others. Standard XAML data binding can be used to bind C# objects to controls, converters can be used to manipulate data during the data binding process, and custom styles and templates can be applied to controls to modify them. Although Visual Studio 2012 doesn't support visually creating styles or templates, Expression Blend 5 handles that very well.

To get started building the initial screen of a Windows 8 app you can start adding controls as mentioned earlier. Simply place them inside of the <Grid> element that's included. You can arrange controls in a stacked manner using the StackPanel control, add a border around controls using the Border control, arrange controls in columns and rows using the Grid control, or absolutely position controls using the Canvas control.

One of the controls that may be new to you is the AppBar. It can be used to add menu/toolbar functionality into a store app and keep the app clean and focused. You can place an AppBar at the top or bottom of the screen. A user on a touch device can swipe up to display the bottom AppBar, or right-click when using a mouse. An example of defining an AppBar that contains an Edit button is shown next. The EditAppBarButtonStyle is available in the StandardStyles.xaml file mentioned earlier.

<Page.BottomAppBar>
    <AppBar x:Name="ApplicationAppBar" Padding="10,0,10,0" AutomationProperties.Name="Bottom App Bar">
        <Grid>
            <StackPanel x:Name="RightPanel" Orientation="Horizontal" Grid.Column="1" HorizontalAlignment="Right">
                <Button x:Name="Edit" Style="{StaticResource EditAppBarButtonStyle}" Tag="Edit" />
            </StackPanel>
        </Grid>
    </AppBar>
</Page.BottomAppBar>

Like standard XAML controls, the <Button> control in the AppBar can be wired to an event handler method in the MainPage.xaml.cs file (see the sketch below), or even bound to a ViewModel object using "commanding" if your app follows the Model-View-ViewModel (MVVM) pattern (check out the MVVM Light package available through NuGet if you're using MVVM with Windows 8 store apps). The AppBar can be used to navigate to different screens, show and hide controls, display dialogs, show settings screens, and more.
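A minimal sketch of such a handler, assuming the Button above is given Click="Edit_Click" (the EditPage type is hypothetical):

private void Edit_Click(object sender, RoutedEventArgs e)
{
    // navigate the root Frame to a (hypothetical) edit screen
    this.Frame.Navigate(typeof(EditPage));
}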
The Package.appxmanifest File

The Package.appxmanifest file contains configuration details about your Windows 8 store app. By double-clicking it in Visual Studio you can define the splash screen image, small and wide logo images used for tiles on the start screen, orientation information, and more. You can also define what capabilities the app has, such as if it uses the Internet, supports geolocation functionality, requires a microphone or webcam, etc. App declarations such as background processes, file picker functionality, and sharing can also be defined. Finally, information about how the app is packaged for deployment to the store can also be defined.

Summary

If you already have some experience working with XAML technologies you'll find that getting started building Windows 8 applications is pretty straightforward. Many of the controls available in Silverlight and WPF are available, making it easy to get started without having to relearn a lot of new technologies. In the next post in this series I'll discuss additional features that can be used in your Windows 8 store apps.

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
Overview

ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

Different record types in one list that use different parsing rules
Hierarchical lists, for example customers with nested orders
Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
Complex headers such as multiple header lines or parseable information in the header
Skipping of lines
Conditional or choice fields

Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table.

The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, and the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

How to Create an nXSD file

Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
The following is a sample nXSD schema:

<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            elementFormDefault="qualified"
            xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
            targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
            attributeFormDefault="unqualified"
            nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
  <xsd:element name="Root">
    <xsd:complexType><xsd:sequence>
      <xsd:element name="Header">
        <xsd:complexType><xsd:sequence>
          <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
          <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
        </xsd:sequence></xsd:complexType>
      </xsd:element>
      <xsd:element name="Customer" maxOccurs="unbounded">
        <xsd:complexType><xsd:sequence>
          <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
          <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
          <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
        </xsd:sequence></xsd:complexType>
      </xsd:element>
    </xsd:sequence></xsd:complexType>
  </xsd:element>
</xsd:schema>

The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator chars. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more.

nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created:

The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages, and ODI for large batches.

The nXSD schema in this example describes a file with a header row containing data and 3 string fields per row delimited by commas, for example:

Redwood City Downtown Branch, 06/01/2011
Ebeneezer Scrooge, Sandy Lane, Atherton
Tiny Tim, Winton Terrace, Menlo Park

The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
The tables for this example are:

Table ROOT (1 row):
- ROOTPK: primary key for the root element
- SNPSFILENAME: name of the file
- SNPSFILEPATH: path of the file
- SNPSLOADDATE: date of load

Table HEADER (1 row):
- ROOTFK: foreign key to the ROOT record
- ROWORDER: order of the row in the native document
- BRANCH: data
- BRANCHORDER: order of Branch within the row
- LISTDATE: data
- LISTDATEORDER: order of ListDate within the row

Table CUSTOMER (2 rows):
- ROOTFK: foreign key to the ROOT record
- ROWORDER: order of the row in the native document
- NAME: data
- NAMEORDER: order of Name within the row
- STREET: data
- STREETORDER: order of Street within the row
- CITY: data
- CITYORDER: order of City within the row

Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents need these keys to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as an order field for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

How to Create a Complex File Data Server in ODI

After creating the nXSD file and a test data file, and storing them on a file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, and the optional external database, are set in the JDBC URL; an illustrative URL is shown below. The use of an external database defined by dbprops is optional, but strongly recommended for production use; ideally, the staging database should be used. Also, when using a complex file exclusively for read purposes, it is recommended to set the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file must be present at the filename path at design time; without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

How to Create Models Based on a Complex File Schema

Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) are created for each XSD element of complex type. Use of complex files as sources is straightforward; when using them as targets, make sure that all dependent tables have matching PK-FK pairs. The same applies to the XML driver.

Debugging and Error Handling

There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the data server.
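For reference, the JDBC URL being tested has the general shape below. Treat this as a sketch: the property names (f for the native file, d for the nXSD schema, re for the root element, s for the exposed relational schema, ro for read-only) follow Appendix C of the driver reference, but the paths and schema name here are invented for illustration.

jdbc:snps:complexfile?f=/data/customers.csv&d=/data/customers.xsd&re=Root&s=CUSTOMERS&ro=true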
If either the nXSD has an error or the data does not comply with the schema, an error is displayed. Sample error message:

Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

Complex File FAQ

Is the size of the native file limited by available memory?
No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

Should I always use the complex file driver instead of the file driver in ODI now?
No. Use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled by the file driver.

Are we generating XML out of flat files before we write it into a database?
No XML is materialized as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that use the nXSD schema as their type system. The nXSD schema is used because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

Is the nXSD file interchangeable with SOA Suite?
Yes. ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

Can I start the Native Format Builder from ODI Studio?
No. The Native Format Builder has to be started from a JDeveloper instance with BPEL installed. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can develop nXSD files manually using XSD editors.

When is the database data written back to the native file?
Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is used exclusively for reading, so that no unnecessary write-backs occur.

Is the nXSD metadata part of the ODI Master or Work Repository?
No. The data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed in production, either by copying them or by using a network file system.

Where can I find sample nXSD files?
The Application Server Adapter Users Guide contains nXSD samples for various use cases.
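As a rough sketch of the write-back commands mentioned in the FAQ above: the sibling ODI XML driver documents SYNCHRONIZE and CREATE FILE statements along the lines below. The exact syntax for the Complex File driver is an assumption here; the file path and schema name are made up, so confirm both against Appendix C before relying on them.

SYNCHRONIZE ALL FROM DATABASE
CREATE FILE "/data/customers_out.csv" FROM SCHEMA CUSTOMERS

The first statement forces the database image of the schema back into the native file; the second writes a new native file from the relational schema.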

    Read the article

  • The Changing Face of PASS

    - by Bill Graziano
I'm starting my sixth year on the PASS Board. I served two years as the Program Director, two years as the Vice-President of Marketing, and I'm starting my second year as the Executive Vice-President of Finance. There's a pretty good chance that if PASS has done something you don't like, or is doing something you don't like, I'm involved in one way or another.

Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input. He asserted that it hadn't. I disagree. I'm not going to try to list all the changes we make inside portfolios based on feedback from and meetings with the community. I'm going to focus on major governance issues since I was elected to the Board.

Management Company

The first big change was our management company. Our old management company had a standard approach to running a non-profit. It worked well when PASS was launched. Having a ready-made structure and process to run the organization enabled the organization to grow quickly. As time went on, we were limited in some of the things we wanted to do. The more involved you were with PASS, the more you saw these limitations. Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish. The Board at that time wanted changes that were difficult or impossible to accomplish under that structure.

This was not a simple change. Imagine a $2.5 million company letting all its employees go on a Friday and starting with a new staff on Monday. We also had a very narrow window to accomplish that so that we wouldn't affect the Summit - our only source of revenue. We spent the year after the change rebuilding processes and putting on the Summit in Denver. That's a concrete example of a huge change that PASS made to better serve its members, and it was a change that many in the community were telling us we needed to make.

Financials

We heard regularly from our members that they wanted our financials posted. Today on our web site you can find audited financials going back to 2004. We publish our budget at the start of each year. If you ask a question about the financials on the PASS site, I do my best to answer it. I'm also trying to do a better job answering financial questions posted in other locations. (And yes, I know I owe a few of you some blog posts.) That's another concrete example of a change that our members asked for and that the Board agreed was a good decision.

Minutes

When I started on the Board, the meeting minutes were very limited. The minutes from a two-day Board meeting might fit on one page. I think we did the bare minimum we were legally required to do. Today Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about. Certain topics are under NDA, but where possible we list the topic we discussed and note that the actual discussion was under NDA. We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision. It was certainly easier to have limited minutes, but I think the extra effort helps our members understand what's going on.

Board Q&A

At the 2009 Summit the Board held its first public Q&A with our members. We'd always been available individually to answer questions. There's a benefit to getting us all in one room and asking the really hard questions to watch us squirm. We learn what questions we don't have good answers for.
We get to see how many people in the crowd look interested in the various questions and answers. I don't recall the genesis of how this came about, but I'm fairly certain there was some community pressure.

Board Votes

Until last November, the Board only reported the vote totals and not how individual Board members voted. That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit. It was also the topic of the first question asked at the Board Q&A, by Kendal. Kendal expressed his opposition to anonymous votes clearly and passionately, and without trying to paint anyone into a corner. Less than 24 hours later, the PASS Board voted to make individual votes public unless the topic is under NDA. That's another area where the Board decided to change based on feedback from our members.

Summit Location

While this isn't actually a governance issue, it is one of the more public decisions we make, and it has taken some public criticism. A significant portion of our members want the Summit near them. A significant portion like the Summit in Seattle. A significant portion think it should move around the country. I was one who felt strongly that there were significant, tangible benefits to our attendees in being in Seattle every year. I'm also one who has been swayed by some very compelling arguments that we need to hold at least one Summit outside Seattle and then revisit the decision. I can't tell you how the Board will vote, but I know the opinion of our members weighs heavily on the decision.

Elections

And that brings us to the grand-daddy of all governance issues. My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback. It isn't to defend or criticize our election process; it's just to say that the process has been undergoing continuous change since I've been on the Board.

I ran for the Board in the fall of 2005. I don't know much about what happened before then. I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee. I don't recall any complaints about elections, but that doesn't mean they didn't occur. The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (for example, "Tell us about your accomplishments"). I don't even remember who I ran against or how many other people ran.

I ran for the VP of Marketing in the fall of 2007. I don't recall any significant changes the Board made in the election process for that election. I think a lot of the changes in 2007 came from us asking the management company to work on the election process. I was expecting a similar set of puff-ball questions from my previous election. Boy, was I in for a shock. The NomCom had found a much better set of questions and really made the interview portion difficult. The questions were much more behavioral in nature. I'd already written about my vision for PASS and my goals. They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people and how I responded to motivation. And many, many other things. They grilled me for over an hour. I've done a fair bit of technical sales in my time, and I feel I speak well under pressure when addressing pointed questions. This interview intentionally put me under pressure.
In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience and my supervisory experience, they wanted to see how I'd do under pressure. They wanted to see who would respond under pressure and who wouldn't. It was a bit of a shock. That was the first big change I remember in the election process. I know there were other improvements around the process, but none of them stick in my mind quite like the unexpected hour-long grilling.

The next big change I remember was after the 2009 elections. Andy Warren was unhappy with the election process and wanted to make some changes. He worked with Hannes at HQ and they came up with a better set of processes. I think Andy moved PASS in the right direction. Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process. In August of 2010 we had a choice to make. There were numerous bloggers criticizing the Board and our upcoming election. The easy change would have been to announce that we were changing the process in a way that would satisfy our critics. I believe that a knee-jerk response to criticism is seldom correct.

Instead, the Board spent August and September and October and November listening to the community. I visited two SQLSaturdays and asked questions of everyone I could. I attended chapter meetings and asked questions of as many people as they'd let me. At Summit I made it a point to introduce myself to strangers and ask them about the election. At every breakfast I'd sit down at a table full of strangers and ask about the election. I'm happy to say that I left most tables arguing about the election. Most days I managed to get 2 or 3 breakfasts in. I spent less time talking to people who had already written about the election; they were already expressing their opinion. I wanted to talk to people who hadn't spoken up. I wanted to know what the silent majority thought. The Board all attended the Q&A session where our members expressed their concerns about a variety of issues, including the election.

The PASS Board also chose to create the Election Review Committee (ERC). We wanted people from the community who had been involved with PASS to look at our election process with fresh eyes, listen to what the community had to say, and give us advice on how we could improve the process. I'm a part of this, as is Andy Warren; none of the other members are on the Board. I've sat in numerous calls and interviews with this group, and the ERC held an open meeting at the Summit where anyone who wanted to discuss the election was invited to come speak with us. There are forums on the ERC web site where we've invited people to participate, and the ERC has reached out to key people involved in recent elections. The years I haven't mentioned also saw minor improvements in the election process; off the top of my head I don't recall exactly what changes were made each year.

Specifically since the 2010 election, we've gone out of our way to seek input from the community about the process. I'm not sure what more we could have done to invite feedback from the community. I don't think it's fair, at this time, to criticize us for not having "fixed" the election process. We haven't rushed any changes through the process.
If you don't see any changes in our election process in July or August, then I think it's fair to criticize us for ignoring the community, or to ask for an explanation of what we've done.

In Summary

Andy's main point was that the PASS Board hasn't changed in response to our members' wishes. I think I've shown that time and time again the PASS Board has changed in response to what our members want. There are only two outstanding issues: Summit location and elections. The 2013 Summit location hasn't been decided yet, and our work on the elections is also in progress. At every step in the election review we've gone out of our way to listen to the community and incorporate their feedback on the process.

I also hope I'm not encouraging everyone who wants some change in the organization to organize a "blog rush" against the Board. We take public suggestions very seriously, but we also take the time to evaluate those suggestions, learn what the rest of our members think, and make a measured decision.

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
ASP.NET and Web Tools for Visual Studio 2013 Release Notes

Summary for lazy readers:
- Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads
- Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
- Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
- The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
- New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
- Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

Top links:
- Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
- ASP.NET and Web Tools for Visual Studio 2013 Release Notes
- Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
- Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog
- Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

Okay, for those of you who are still with me, let's dig in a bit.

Quick web dev notes on downloading and installing Visual Studio 2013

I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

Visual Studio 2013 development settings and color theme

When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Usually I've just gone with Web Development (Code Only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

Color theme

Sigh.
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: Ctrl+Q for Quick Launch, then type Theme and hit Enter.

Signing in

During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

Overview of shiny new things in ASP.NET land

There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

One ASP.NET

You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

Authentication

In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication.
All of that authentication stuff was built into each template, so it varied between the stacks, and you couldn't reuse it. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: this is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

Identity

There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features:
- There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
- It supports standard username / password as well as external authentication (OAuth, etc.).
- It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system.
- It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
- You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted (see the sketch at the end of this article).
- Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
- It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.

You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).

New Bootstrap based project templates

The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:
- It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu.
- The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables, etc.) look nice and modern.
- Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works: when the page is full screen, the menu and header are optimized for a wide screen display; when I shrink the page down (this is all based on page width, not useragent sniffing), the menu turns into a nice mobile-friendly dropdown. For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme.

Scaffolding

The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but it wasn't fully baked for RTM; look for it in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

Project Readme page

This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser.

ASP.NET MVC 5

The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

Attribute routing

ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
Here's a controller that includes an action whose method name is Hiding, but I've used attribute routing to configure it to /spaghetti/with-nesting/where-is-waldo:

public class SampleController : Controller
{
    [Route("spaghetti/with-nesting/where-is-waldo")]
    public string Hiding()
    {
        return "You found me!";
    }
}

I enable that in my RouteConfig.cs, and I can use it in conjunction with my other MVC routes like this:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapMvcAttributeRoutes();

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

You can read more about attribute routing in ASP.NET MVC 5 here.

Filter enhancements

There are two new additions to filters: authentication filters and filter overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. They can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. They specify a set of filter types that should not be run for a given scope (action or controller), which allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

ASP.NET Web API 2

ASP.NET Web API 2 includes a lot of new features.

Attribute routing

ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the attribute routing features in Web API in this article.

OAuth 2.0

ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

OData improvements

ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

Lots more

There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

OWIN and Katana

I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN Magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it.
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.

Visual Studio web tools

Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

Browser Link

Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have showed off some demonstrations where they enabled edit mode in the browser which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit.

New HTML editor

The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

Lots more Visual Studio web dev features

That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

Lots, lots more

Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!
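Postscript: here's the minimal sketch of the Identity profile-data point promised earlier. ApplicationUser and its IdentityUser base class come from the default VS 2013 project template and the Microsoft.AspNet.Identity.EntityFramework package; the three profile properties are made-up examples, not part of the template.

using System;
using Microsoft.AspNet.Identity.EntityFramework;

// ApplicationUser is the user type the VS 2013 templates generate.
// Properties added here are picked up by EF Code First and persisted
// automatically - no provider to re-implement, no schema scripts to write.
public class ApplicationUser : IdentityUser
{
    public string Url { get; set; }            // hypothetical profile field
    public string TwitterHandle { get; set; }  // hypothetical profile field
    public DateTime? Birthday { get; set; }    // hypothetical profile field
}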

    Read the article

  • Project Euler #15

    - by Aistina
Hey everyone,

Last night I was trying to solve challenge #15 from Project Euler:

Starting in the top left corner of a 2×2 grid, there are 6 routes (without backtracking) to the bottom right corner. How many routes are there through a 20×20 grid?

I figured this shouldn't be so hard, so I wrote a basic recursive function:

const int gridSize = 20;

// call with progress(0, 0)
static int progress(int x, int y)
{
    int i = 0;

    if (x < gridSize)
        i += progress(x + 1, y);
    if (y < gridSize)
        i += progress(x, y + 1);
    if (x == gridSize && y == gridSize)
        return 1;

    return i;
}

I verified that it worked for smaller grids such as 2×2 or 3×3, and then set it to run for a 20×20 grid. Imagine my surprise when, 5 hours later, the program was still happily crunching the numbers, and was only about 80% done (based on examining its current position/route in the grid). Clearly I'm going about this the wrong way. How would you solve this problem? I'm thinking it should be solved using an equation rather than a method like mine, but that's unfortunately not a strong side of mine.

Update: I now have a working version. Basically it caches results obtained before when an n×m block still remains to be traversed. Here is the code along with some comments:

// the size of our grid
static int gridSize = 20;

// the amount of paths available for an "NxM" block, e.g. "2x2" => 4
static Dictionary<string, long> pathsByBlock = new Dictionary<string, long>();

// calculate the surface of the block to the finish line
static long calcsurface(long x, long y)
{
    return (gridSize - x) * (gridSize - y);
}

// call using progress(0, 0)
static long progress(long x, long y)
{
    // first calculate the surface of the block remaining
    long surface = calcsurface(x, y);
    long i = 0;

    // zero surface means only 1 path remains
    // (we either go only right, or only down)
    if (surface == 0)
        return 1;

    // create a textual representation of the remaining
    // block, for use in the dictionary
    string block = (gridSize - x) + "x" + (gridSize - y);

    // if the same block has not been processed before
    if (!pathsByBlock.ContainsKey(block))
    {
        // calculate it in the right direction
        if (x < gridSize)
            i += progress(x + 1, y);

        // and in the down direction
        if (y < gridSize)
            i += progress(x, y + 1);

        // and cache the result!
        pathsByBlock[block] = i;
    }

    // self-explanatory :)
    return pathsByBlock[block];
}

Calling it 20 times, for grids with sizes 1×1 through 20×20, produces the following output:

There are 2 paths in a 1 sized grid 0,0110006 seconds
There are 6 paths in a 2 sized grid 0,0030002 seconds
There are 20 paths in a 3 sized grid 0 seconds
There are 70 paths in a 4 sized grid 0 seconds
There are 252 paths in a 5 sized grid 0 seconds
There are 924 paths in a 6 sized grid 0 seconds
There are 3432 paths in a 7 sized grid 0 seconds
There are 12870 paths in a 8 sized grid 0,001 seconds
There are 48620 paths in a 9 sized grid 0,0010001 seconds
There are 184756 paths in a 10 sized grid 0,001 seconds
There are 705432 paths in a 11 sized grid 0 seconds
There are 2704156 paths in a 12 sized grid 0 seconds
There are 10400600 paths in a 13 sized grid 0,001 seconds
There are 40116600 paths in a 14 sized grid 0 seconds
There are 155117520 paths in a 15 sized grid 0 seconds
There are 601080390 paths in a 16 sized grid 0,0010001 seconds
There are 2333606220 paths in a 17 sized grid 0,001 seconds
There are 9075135300 paths in a 18 sized grid 0,001 seconds
There are 35345263800 paths in a 19 sized grid 0,001 seconds
There are 137846528820 paths in a 20 sized grid 0,0010001 seconds
0,0390022 seconds in total

I'm accepting danben's answer, because his helped me find this solution the most. But upvotes also to Tim Goodman and Agos :)

Bonus update: After reading Eric Lippert's answer, I took another look and rewrote it somewhat. The basic idea is still the same, but the caching part has been taken out and put in a separate function, like in Eric's example. The result is some much more elegant looking code.

// the size of our grid
const int gridSize = 20;

// magic.
static Func<A1, A2, R> Memoize<A1, A2, R>(this Func<A1, A2, R> f)
{
    // Return a function which is f with caching.
    var dictionary = new Dictionary<string, R>();
    return (A1 a1, A2 a2) =>
    {
        R r;
        string key = a1 + "x" + a2;
        if (!dictionary.TryGetValue(key, out r))
        {
            // not in cache yet
            r = f(a1, a2);
            dictionary.Add(key, r);
        }
        return r;
    };
}

// calculate the surface of the block to the finish line
static long calcsurface(long x, long y)
{
    return (gridSize - x) * (gridSize - y);
}

// call using progress(0, 0)
static Func<long, long, long> progress = ((Func<long, long, long>)((long x, long y) =>
{
    // first calculate the surface of the block remaining
    long surface = calcsurface(x, y);
    long i = 0;

    // zero surface means only 1 path remains
    // (we either go only right, or only down)
    if (surface == 0)
        return 1;

    // calculate it in the right direction
    if (x < gridSize)
        i += progress(x + 1, y);

    // and in the down direction
    if (y < gridSize)
        i += progress(x, y + 1);

    // self-explanatory :)
    return i;
})).Memoize();

By the way, I couldn't think of a better way to use the two arguments as a key for the dictionary. I googled around a bit, and it seems this is a common solution. Oh well.
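For the record, here is the equation-based approach the question was reaching for: every route through an n×n grid is a sequence of 2n moves, of which exactly n go right, so the count is the central binomial coefficient C(2n, n). For n = 20 that is C(40, 20) = 137846528820, matching the last line of the output above. A minimal C# sketch (the class and method names are mine, not from the thread):

using System;
using System.Numerics;

static class LatticePaths
{
    // C(n, k) via the product formula; each partial product is itself
    // a binomial coefficient, so the integer division is always exact.
    static BigInteger Binomial(int n, int k)
    {
        BigInteger result = 1;
        for (int i = 1; i <= k; i++)
            result = result * (n - k + i) / i;
        return result;
    }

    static void Main()
    {
        Console.WriteLine(Binomial(40, 20)); // prints 137846528820
    }
}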

    Read the article
