Search Results

Search found 446 results on 18 pages for 'allen shen'.

Page 17/18 | < Previous Page | 13 14 15 16 17 18  | Next Page >

  • Goodbye FY14, Welcome FY15!

    - by Alliances & Channels Redaktion
    FY14, an exciting fiscal year, has just come to a close. That is always an occasion to take stock, so let us look back together on twelve eventful months! Looking at the events of FY14, you, our partners, naturally come first, because you make a tremendously important contribution to Oracle's success. For that, on behalf of Oracle A&C, I would like to thank you sincerely! Of all the events and highlights in the partner area, Oracle Open World was once again the most impressive in FY14, in sheer numbers alone: 60,000 visitors from 145 countries, 2,555 sessions and 3,599 speakers. The partners who made the trip came together in San Francisco for the Oracle PartnerNetwork Exchange. There they discussed current questions around Applications, Cloud, Engineered Systems, Big Data and Industry Solutions - topics that will certainly keep us busy in FY15 as well! At Oracle, FY14 was also the year of the database offensive: the new In-Memory option for databases was presented at Open World, and database tuning became the buzzword of the day. Above all, the enormous speed-up made possible by version 12.1.0.1 of Oracle Database 12c is regarded as a milestone. These and other innovations generated plenty of positive press coverage. In January 2014, partners from all over Germany came to Munich for the Oracle Partner Day and the presentation of the Oracle Excellence Awards. As always, our blog editorial team was there live. One of the highlights of the Partner Day was the keynote on Oracle's strategy by Helene Lengler, Vice President Sales Fusion Middleware & Engineered Systems. Also exciting for the partners was the look into the future with Andreas Zilch (Experton): Industrie 4.0 was one of his central topics - that is, the computerization of the traditional industries and with it, of course, the Internet of Things. I am looking forward to new challenges in FY2015 and above all to the stimulating collaboration with you! Together we will work on developing exciting projects with Big Data, Customer Experience and Cloud, among others. I wish us all a good and successful fiscal year 2015. Warm regards, Christian Werner, Senior Director Alliances & Channels Germany

    Read the article

  • JUDCon 2013 Trip Report

    - by reza_rahman
    JUDCon (JBoss Users and Developers Conference) 2013 was held in historic Boston on June 9-11 at the Hynes Convention Center. JUDCon is the largest get-together for the JBoss community; it has gone global in recent years but has its roots in Boston. The JBoss folks graciously accepted a Java EE 7 talk from me and actually referenced my talk in their own sessions. I am proud to say this is my third time speaking at JUDCon/the Red Hat Summit over the years (this was the first time on behalf of Oracle). I had great company with many of the rock stars of the JBoss ecosystem speaking, such as Lincoln Baxter, Jay Balunas, Gavin King, Mark Proctor, Andrew Lee Rubinger, Emmanuel Bernard and Pete Muir. Notably missing from JUDCon were Bill Burke, Burr Sutter, Aslak Knutsen and Dan Allen. Topics included Java EE, Forge, Arquillian, AeroGear, OpenShift, WildFly, Errai/GWT, NoSQL, Drools, jBPM, OpenJDK, Apache Camel and JBoss Tools/Eclipse. My session, titled "JavaEE.Next(): Java EE 7, 8, and Beyond", went very well and it was a full house. This is our main talk covering the changes in JMS 2, the Java API for WebSocket (JSR 356), the Java API for JSON Processing (JSON-P), JAX-RS 2, JPA 2.1, JTA 1.2, JSF 2.2, Java Batch, Bean Validation 1.1, Java EE Concurrency and the rest of the APIs in Java EE 7. I also briefly talked about the possibilities for Java EE 8. The slides for the talk are here: JavaEE.Next(): Java EE 7, 8, and Beyond from reza_rahman. Besides presenting my talk, it was great to catch up with the JBoss gang and attend a few interesting sessions. On Sunday night I went to one of my favorite hangouts in Boston - the exalted Middle East Club, as Rolling Stone refers to it (another cool spot in an otherwise pretty boring town is "the Church"). As contradictory as it might sound to the uninitiated, the Middle East Club is possibly the best place in Boston to simultaneously get great Middle Eastern (primarily Lebanese) food and great underground metal. For folks with a bit more exposure, this is probably not contradictory at all, given bands like Acrassicauda and documentaries like Heavy Metal in Baghdad. Luckily for me they were featuring a few local thrash metal bands from the greater Boston area. It wasn't too bad considering it was primarily amateur twenty-something guys (although I'm not sure I'm a qualified critic any more since I all but stopped playing at about that age). It's great Boston has the Middle East as an incubator to keep the rock, metal, folk, jazz, blues and indie scene alive. I definitely enjoyed JUDCon/Boston and hope to be part of the conference next year again.

    Read the article

  • Oracle Partner Day 2012: New Business Opportunities Await You!

    - by A&C Redaktion
    How well do you know the changes in Oracle's "One Red Stack" corporate strategy? In the future you, as our partner, will be able to sell the entire Oracle product portfolio - software, hardware and applications! Your gain: the new variety. Boost your market presence! Open up new markets and new customer groups. Multiply your future success! Maximize your Potential - that is your watchword for the fiscal year and our motto for the Oracle Partner Day on October 29, 2012 at the Commerzbank Arena in Frankfurt. Experience in our breakout sessions where the sales advantage lies for you, what your customer needs to know, and where you can win them over. All parallel breakout sessions will be repeated, so you will be able to attend in any case. Focused on success: the new Oracle Alliances & Channels concept. We deliver the decisive arguments for customers who value sustainability and investment security - for you, if you want to achieve more in the future, and for everyone who can offer partner excellence from a single source. Get the product news from Oracle Open World (OOW) in San Francisco (September 30 to October 4, 2012) first-hand. In the Expert and Partner Service Zone you will find answers on all key topics. Use the new speed-dating format there to quickly find the right contact for your sales and product questions. Take the exam - we pay the exam fee! Use the opportunity to become accredited as an OPN Implementation Specialist on the spot! Register now for the official implementation exam at the Pearson Vue test center on site at the Oracle Partner Day. Choose your specialty - Fusion Middleware, Applications, Hardware or Database - and go home an Implementation Specialist. Come to Oracle Partner Day 2012 - active partner networking, management contacts and expert knowledge included! Secure one of the coveted places and your participation now - including the exam! Participation is of course free of charge for you as an Oracle partner. You will find further information on the Oracle Partner Day, and the registration link, here. We look forward to seeing you! Yours, Christian Werner, Senior Director Alliances & Channels Germany. P.S.: The Oracle Day for end customers takes place directly after the Oracle Partner Day. As a partner you are welcome to attend this event together with your customers; places are limited. Further information on the Oracle Day can be found here.

    Read the article

  • SQL Saturday 194 - Exeter

    - by Dave Ballantyne
    Many kudos go to Jonathan and Annette Allen and the others on the team for confirming SQL Saturday 194 in Exeter on the 8th and 9th of March. The event home page is here http://www.sqlsaturday.com/194/eventhome.aspx and I'm delighted that Dave Morrison and I will be presenting a full-day pre-con on the 8th on favourite subjects "TSQL and Internals". Here is the full abstract: TSQL and internals - When faced with performance issues there are many lines of attack. Tuning the engine itself can get you so far; however, for maximum effect you need to understand how the engine works and how it translates SQL statements into performable actions. This is no simple task: dealing with a multi-table join is a massive undertaking, and the number of permutations can be immense. Backed by this knowledge, we can create better-performing TSQL, understand the impact that it has upon the engine, and recognize the pitfalls and gotchas that exist in SQL Server. Ultimately, there is no 'best way' to perform a single task, only many variations of 'it depends', but now we can pick the most appropriate option for the required data load. Over the years, many myths and misconceptions have grown up around the product; some have a basis in older versions and some are just wrong. Continuing to build on the knowledge given so far, these issues will be explored, broken down, and proved or disproved. Finally we will look to the future and explore SQL Server 2012, the new functionality that it brings, and some of the common uses that we will be able to address. After completing this day's pre-con, attendees will have a more complete knowledge of execution plans and how they relate to the physical and logical actions that SQL Server will be executing on their behalf. The attendees will also have a fuller, more rounded knowledge of TSQL and the implications of incorrectly defining a query. Dave is a fountain of knowledge on execution plans and optimizer internals and, though I may flatter myself, I'm no shrinking violet when it comes to TSQL and such matters. I hope that if you can't join us, there are other pre-cons available from other experts in their fields that may 'float your boat' too. The pre-con page is http://sqlsouthwest.co.uk/SQLSaturday_precon.htm Also, excitingly, this pre-con day is sponsored by Fusion-io, which is a great boon for the day. If you want more of this, I am offering a 2-day TSQL course starting on the 19th of March. More details on this are available here.

    Read the article

  • How to find the cause of the main file system going into read-only mode

    - by user606521
    Ubuntu 12.04. The file system goes into read-only mode frequently. First of all, I have already read the question "file system is going into read only mode frequently". But I have to know whether it's caused by something other than a dying hard drive. This is a server provided by my client; I am just running some node.js workers plus one node.js server on it, and I am using mongodb. From time to time (every 20-50h) the system suddenly remounts the filesystem read-only, the mongodb process fails (due to the read-only fs) and my node workers/server (which are started by forever) are just killed. Here is the log from dmesg - I can see some errors there and messages that the FS is going read-only, and there is also some JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt

    edit:

        smartctl -t long /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
        SMART support is: Unavailable - device lacks SMART capability.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    What am I doing wrong? The same happens for sda2. Moreover, now when I type any command that does not exist in the shell I get this:

        Sorry, command-not-found has crashed! Please file a bug report at:
        https://bugs.launchpad.net/command-not-found/+filebug
        Please include the following information with the report:

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know: Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble? Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this? Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    and the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID#  ATTRIBUTE_NAME       FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1  Raw_Read_Error_Rate  0x002f  200   200   051    Pre-fail Always  -           0
          4  Start_Stop_Count     0x0032  001   001   000    Old_age  Always  -           185875
          9  Power_On_Hours       0x0032  090   090   000    Old_age  Always  -           7820
         12  Power_Cycle_Count    0x0032  100   100   000    Old_age  Always  -           109
        193  Load_Cycle_Count     0x0032  118   118   000    Old_age  Always  -           246833
        194  Temperature_Celsius  0x0022  107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
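    One configuration detail worth a second look (a hedged guess, assuming the drive honors APM, not a confirmed diagnosis): apm = 127 permits aggressive power management, and laptop drives in particular tend to park their heads every few seconds at that level - a pattern that matches a huge Start_Stop_Count and Load_Cycle_Count next to a low Power_Cycle_Count. Values in the 128-254 range still allow power management but disallow spin-down, so a less park-happy variant of the config above would be:

        /dev/sda {
            apm = 254
            spindown_time = 120
        }

    The same setting can be tried at runtime with hdparm -B 254 /dev/sda (255 disables APM entirely on drives that support it); whether the firmware honors it varies, so re-check the counters with smartctl afterwards.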

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very active using these MERGE operations in my development. However, I have found out from my consultancy work and friends that these amazing operations are not utilized by them most of the time. Here is my attempt to bring the necessity of using the MERGE operation to the surface one more time. MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just updates it, and similarly, when the data is unmatched, it is inserted. One of the most important advantages of the MERGE statement is that the entire data set is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE or DELETE); however, by using the MERGE statement, all the update activities can be done in one pass of the database table. I have written about these MERGE operations earlier in my blog post here: SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. One of the readers asked me how we know that this operator does everything in a single pass and does not call the MERGE operator multiple times. Let us run the same example which I used earlier; I am listing it here again for convenience.

        -- Let's create StudentDetails and StudentTotalMarks and insert some records.
        USE tempdb
        GO
        CREATE TABLE StudentDetails
        (
            StudentID INTEGER PRIMARY KEY,
            StudentName VARCHAR(15)
        )
        GO
        INSERT INTO StudentDetails VALUES(1,'SMITH')
        INSERT INTO StudentDetails VALUES(2,'ALLEN')
        INSERT INTO StudentDetails VALUES(3,'JONES')
        INSERT INTO StudentDetails VALUES(4,'MARTIN')
        INSERT INTO StudentDetails VALUES(5,'JAMES')
        GO
        CREATE TABLE StudentTotalMarks
        (
            StudentID INTEGER REFERENCES StudentDetails,
            StudentMarks INTEGER
        )
        GO
        INSERT INTO StudentTotalMarks VALUES(1,230)
        INSERT INTO StudentTotalMarks VALUES(2,255)
        INSERT INTO StudentTotalMarks VALUES(3,200)
        GO
        -- Select from Table
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Merge Statement
        MERGE StudentTotalMarks AS stm
        USING (SELECT StudentID,StudentName FROM StudentDetails) AS sd
        ON stm.StudentID = sd.StudentID
        WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
        WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
        WHEN NOT MATCHED THEN
            INSERT(StudentID,StudentMarks) VALUES(sd.StudentID,25);
        GO
        -- Select from Table
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Clean up (drop the referencing table first, then the referenced table)
        DROP TABLE StudentTotalMarks
        GO
        DROP TABLE StudentDetails
        GO

    The MERGE performs very well and the above result is obtained. Let us check the execution plan for the MERGE operator, and evaluate the execution plan for the Table Merge operator only. We can clearly see that the Number of Executions property suggests a value of 1, which makes it quite clear that in a single pass the MERGE operation completes the Insert, Update and Delete operations. I strongly suggest you use this operation, if possible, in your development. I have seen this operation implemented in many data warehousing applications.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: FOR AN EXCLUSIVE ORACLE DISCOUNT, ENTER PROMO CODE: DJMXZ370. Early-Bird Registration is Now Open with Special Pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF). Big Data with CEP and SOA - September 25, 2012 - 14:15. Speakers: Markus Zirn, Oracle, and Baz Kuthi, Avocent. The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine data using Hadoop/MapReduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data", complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combine CEP, Distributed Caching, Event-driven Network, SOA Composites, Application Development Framework, as well as Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in-memory master data for event pattern matching, event-driven user interfaces, as well as distributed event processing. The focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture. Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end user learning solution. He is the author of "The BPEL Cookbook" (rated best book on Service-Oriented Architecture in 2007) as well as "Fusion Middleware Patterns". Previously, he was a management consultant with Booz Allen & Hamilton's High Tech practice in Duesseldorf as well as San Francisco, and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Masters of Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France. KEYNOTES & SPEAKERS: More than 80 international subject matter experts will be speaking at the Symposium; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page. CONFERENCE THEMES & TRACKS: Cloud Computing Architecture & Patterns; New SOA & Service-Orientation Practices & Models; Emerging Service Technology Innovation; Service Modeling & Analysis Techniques; Service Infrastructure & Virtualization; Cloud-based Enterprise Architecture; Business Planning for Cloud Computing Projects; Real World Case Studies; Semantic Web Technologies (with & without the Cloud); Governance Frameworks for SOA and/or Cloud Computing Projects; Service Engineering & Service Programming Techniques; Interactive Services & the Human Factor; New REST & Web Services Tools & Techniques. Oracle Specialized SOA & BPM Partners: Oracle Specialized partners have proven their skills through certifications and customer references.
    To find a local Specialized partner please visit http://solutions.oracle.com. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • SQLAuthority News – Community Service and Public Speaking Engagements

    - by pinaldave
    Today is the last day of the year and I was going over my memories of year 2010. Almost all of them are good, and I feel I am for sure a better person in terms of knowledge, nature and overall human being. Looking back at the year, it is very satisfying, as I was able to go out in public and help the community out in various capacities. Though most of the time my contribution was as a speaker, many times I reached out and helped organize events, working in any capacity to get the event out. I have taken part in many TechEds, PASS events, Virtual Tech Days, and various community events around the globe, and my contribution is not limited to my country only. Overall, I feel good to be part of this wonderful and supportive community.
    SQLAuthority News – A Successful Community TechDays at Ahmedabad – December 11, 2010
    SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010
    SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune
    SQLAuthority News – SQLPASS Nov 8-11, 2010 – Seattle – An Alternative Look at Experience
    SQLAuthority News – Statistics and Best Practices – Virtual Tech Days – Nov 22, 2010
    SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4-5, 2010
    SQL SERVER – Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology
    SQLAuthority News – Feedback Received for Virtual Tech Days Sessions on Spatial Database
    SQLAuthority News – Community Tech Days, Ahmedabad – July 24, 2010
    SQLAuthority News – SQL Data Camp, Chennai, July 17, 2010 – A Huge Success
    SQLAuthority News – 2 Sessions at TechInsight 2010 – June 29 – July 1, 2010
    SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch
    SQLAuthority News – Professional Development and Community
    SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Opportunity of A Lifetime
    SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion
    SQLAuthority News – Meeting with Allen Bailochan Tuladhar – An Unlimited Experience
    SQLAuthority News – Author Visit Review – TechMela Nepal – March 29-30, 2010
    SQLAuthority News – Excellent Event – TechEd Sri Lanka – Feb 8, 2010
    SQLAuthority News – Hyderabad Techies February Fever Feb 11, 2010 – Indexing for Performance
    SQLAuthority News – MUGH – Microsoft User Group Hyderabad – Feb 2, 2010 Session Review
    SQLAuthority News – Ahmedabad Community Tech Days – Jan 30, 2010 – Huge Success
    For earlier years' contributions you can check my webpage here.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third-party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted, to build a library of code to keep track of key metrics of your servers' performance. There have been several submissions already, but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of TSQL and the SQL Monitor framework*. What's it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199. How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it and submit it via sqlmonitormetrics.red-gate.com before 14th Sept 2012, and then sit back and wait while the judges review your code and your aims in writing the metric. Who are the judges and how will they judge the metrics? There are two judges for this competition, Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I'm here by good fortune). We will be looking to rate the metrics on each of 3 criteria:
    how the metric can help with performance tuning SQL Server;
    how having the metric running enables DBAs to meet best practice;
    how interesting/original the idea for the metric is.
    Our combined decision will be final etc etc** What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You'll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like. How long does it take? Honestly? Once you have the T-SQL sorted, then so long as you can type your name and your email address you are done: http://sqlmonitormetrics.red-gate.com/share-a-metric/ What can I monitor? If you really really want a Kindle or $199 (and let's face it, who doesn't?) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me to start the library off. There are some great pieces of TSQL in those metrics, gathering important stats about how SQL Server is performing.
    * - framework may not be the best word here, but I was under pressure and couldn't think of a better one. If you prefer, try 'engine' or 'application'; I don't know, pick something that makes sense to you.
    ** - for the full (legal) version of the rules check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified.
    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • 10 Best Programming Podcasts, 2010 Edition

    - by mbcrump
    This list is in no particular order - just the 10 best programming podcasts that I have found so far.
    Stack Overflow Podcast - Jeff Atwood (of codinghorror.com) and Joel Spolsky (of joelonsoftware.com) discuss the development of their new programming community, StackOverflow.com. [This podcast hasn't been updated in a while, but it's always great to hear more from Jeff Atwood]
    Hanselminutes - Hanselminutes is a weekly audio talk show with noted web developer and technologist Scott Hanselman, hosted by Carl Franklin. Scott discusses utilities and tools, gives practical how-to advice, and discusses ASP.NET or Windows issues and workarounds. [This podcast has recently started covering random topics like diabetes, plane travel and geek relationship tips. I am not sure if Scott is trying to move to a more mainstream audience or not]
    Herding Code - A weekly discussion featuring K. Scott Allen (odetocode.com), Kevin Dente, Scott Koon (lazycoder.com), and Jon Galloway. [Great all-around podcast that I would recommend to all]
    Deep Fried Bytes - Deep Fried Bytes is an audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff. The show discusses a wide range of topics including application development, operating systems and technology in general. Anything is fair game if it plugs into the wall or takes a battery. [This is one that just keeps getting better]
    Dot Net Rocks - .NET Rocks! is an Internet audio talk show for Microsoft .NET developers. [One of the first, and usually very high quality content]
    Connected Show - A podcast covering new Microsoft technology for the developer community. The show is hosted by Dmitry Lyalin and Peter Laudati. [This and Polymorphic are two of my favorite podcasts - Dmitry is a great host and I would recommend this to all]
    Polymorphic Podcast - Object-oriented development, architecture and best practices in .NET. [Craig is an ASP.NET MVP and a great presenter. His podcast is great, and it could only be better if he recorded it more often]
    ASP.NET Podcast - Wallace B. (Wally) McClure presents interviews and short technical talks on .NET technologies. [Has great information on ASP.NET of course, as well as iPhone dev]
    Ruby on Rails Podcast - News and interviews about the Ruby language and the Rails website framework. [Even though I am not a Ruby programmer, I've found this podcast very interesting]
    Software Engineering Radio - Software Engineering Radio is a podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Every ten days, a new episode is published that covers all topics of software engineering. Episodes are either tutorials on a specific topic, or an interview with a well-known character from the software engineering world. All SE Radio episodes are original content - we do not record conferences or talks given in other venues. Each episode comprises two speakers to ensure a lively listening experience. SE Radio is an independent and non-commercial organization. [Another excellent podcast - I would recommend any programmer add this to his/her drive home]
    If I have missed something, please feel free to email me and it might make the 2011 list. =)

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We're writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications, and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better - interaction with the database is critical to application performance, so we're looking for database top tips too. There's a Microsoft Surface apiece for the people who come up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you'll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] - there are examples of what we're looking for and full competition details at Application Performance: The Best of the Web. Enter the judges… As mentioned in my last blog post, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We're now ready to reveal their secret identities! Judging your ASP.NET tips will be:
    Jean-Philippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He's a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk.
    Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects.
    Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe, as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities.
    Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he's worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development.
    And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll):
    Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He's authored SQL Server Tacklebox, three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando.
    Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He's been working with SQL Server since version 6.0. Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk.
    Jonathan Allen, leader and founder of the PASS SQL South West user group. He's been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He's spoken at SQLBits and SQL in the City, as well as local user groups across the UK. He's also a moderator at ask.sqlservercentral.com.

    Read the article

  • T-SQL Tuesday #025 – CHECK Constraint Tricks

    - by Most Valuable Yak (Rob Volk)
    Allen White (blog | twitter), marathoner, SQL Server MVP and presenter, and all-around awesome author is hosting this month's T-SQL Tuesday on sharing SQL Server Tips and Tricks. And for those of you who have attended my Revenge: The SQL presentation, you know that I have 1 or 2 of them. You'll also know that I don't recommend using anything I talk about in a production system, and will continue that advice here…although you might be sorely tempted. Suffice it to say I'm not using these examples myself, but I think they're worth sharing anyway. Some of you have seen or read about SQL Server constraints and have applied them to your table designs…unless you're a vendor ;)…and may even use CHECK constraints to limit numeric values, or length of strings, allowable characters and such. CHECK constraints can, however, do more than that, and can even provide enhanced security and other restrictions. One tip or trick that I didn't cover very well in the presentation is using constraints to do unusual things; specifically, limiting or preventing inserts into tables. The idea was to use a CHECK constraint in a way that didn't depend on the actual data:

        -- create a table that cannot accept data
        CREATE TABLE dbo.JustTryIt(a BIT NOT NULL PRIMARY KEY,
            CONSTRAINT chk_no_insert CHECK (GETDATE()=GETDATE()+1))

        INSERT dbo.JustTryIt VALUES(1)

    I'll let you run that yourself, but I'm sure you'll see that this is a pretty stupid table to have, since the CHECK condition will always be false, and therefore will prevent any data from ever being inserted. I can't remember why I used this example but it was for some vague and esoteric purpose that applies to about, maybe, zero people. I come up with a lot of examples like that. However, if you realize that these CHECKs are not limited to column references, and if you explore the SQL Server function list, you could come up with a few that might be useful. I'll let the names describe what they do instead of explaining them all:

        CREATE TABLE NoSA(a int not null, CONSTRAINT CHK_No_sa CHECK (SUSER_SNAME()<>'sa'))
        CREATE TABLE NoSysAdmin(a int not null, CONSTRAINT CHK_No_sysadmin CHECK (IS_SRVROLEMEMBER('sysadmin')=0))
        CREATE TABLE NoAdHoc(a int not null, CONSTRAINT CHK_No_AdHoc CHECK (OBJECT_NAME(@@PROCID) IS NOT NULL))
        CREATE TABLE NoAdHoc2(a int not null, CONSTRAINT CHK_No_AdHoc2 CHECK (@@NESTLEVEL>0))
        CREATE TABLE NoCursors(a int not null, CONSTRAINT CHK_No_Cursors CHECK (@@CURSOR_ROWS=0))
        CREATE TABLE ANSI_PADDING_ON(a int not null, CONSTRAINT CHK_ANSI_PADDING_ON CHECK (@@OPTIONS & 16=16))
        CREATE TABLE TimeOfDay(a int not null, CONSTRAINT CHK_TimeOfDay CHECK (DATEPART(hour,GETDATE()) BETWEEN 0 AND 1))
        GO

        -- log in as sa or a sysadmin server role member, and try this:
        INSERT NoSA VALUES(1)
        INSERT NoSysAdmin VALUES(1)
        -- note the difference when using sa vs. non-sa
        -- then try it again with a non-sysadmin login

        -- see if this works:
        INSERT NoAdHoc VALUES(1)
        INSERT NoAdHoc2 VALUES(1)
        GO
        -- then try this:
        CREATE PROCEDURE NotAdHoc @val1 int, @val2 int
        AS
        SET NOCOUNT ON;
        INSERT NoAdHoc VALUES(@val1)
        INSERT NoAdHoc2 VALUES(@val2)
        GO
        EXEC NotAdHoc 2,2
        -- which values got inserted?
        SELECT * FROM NoAdHoc
        SELECT * FROM NoAdHoc2

        -- and this one just makes me happy :)
        INSERT NoCursors VALUES(1)
        DECLARE curs CURSOR FOR SELECT 1
        OPEN curs
        INSERT NoCursors VALUES(2)
        CLOSE curs
        DEALLOCATE curs
        INSERT NoCursors VALUES(3)
        SELECT * FROM NoCursors

    I'll leave the ANSI_PADDING_ON and TimeOfDay tables for you to test on your own; I think you get the idea. (Also take a look at the NoCursors example, notice anything interesting?) The real eye-opener, for me anyway, is the ability to limit bad coding practices like cursors, ad-hoc SQL, and sa use/abuse by using declarative SQL objects. I'm sure you can see how and why this would come up when discussing Revenge: The SQL. ;) And the best part IMHO is that these work on pretty much any version of SQL Server, without needing Policy Based Management, DDL/login triggers, or similar tools to enforce best practices. All seriousness aside, I highly recommend that you spend some time letting your mind go wild with the possibilities and see how far you can take things. There are no rules! (Hmmmm, what can I do with rules?) #TSQL2sDay

    Read the article

  • A more elegant way of embedding a SOAP security header in Silverlight 4

    - by Your DisplayName here!
    The current situation with Silverlight is that there is no support for the WCF federation binding. This means that all security token related interactions have to be done manually. Requesting the token from an STS is not really the bad part; sending it along with outgoing SOAP messages is what's a little annoying. So far you had to wrap all calls on the channel in an OperationContextScope wrapping an IContextChannel. This "programming model" was a little disruptive (in addition to all the async stuff that you are forced to do). It seems that starting with SL4 there is more support for traditional WCF extensibility points - especially IEndpointBehavior and IClientMessageInspector. I never read anywhere that these are new features in SL4, but I am pretty sure they did not exist in SL3. With the above mentioned interfaces at my disposal, I thought I'd have another go at embedding a security header - and yeah - I managed to make the code much prettier (and much less bizarre). Here's the code for the behavior/inspector:

        public class IssuedTokenHeaderInspector : IClientMessageInspector
        {
            RequestSecurityTokenResponse _rstr;

            public IssuedTokenHeaderInspector(RequestSecurityTokenResponse rstr)
            {
                _rstr = rstr;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            { }

            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                request.Headers.Add(new IssuedTokenHeader(_rstr));
                return null;
            }
        }

        public class IssuedTokenHeaderBehavior : IEndpointBehavior
        {
            RequestSecurityTokenResponse _rstr;

            public IssuedTokenHeaderBehavior(RequestSecurityTokenResponse rstr)
            {
                if (rstr == null)
                {
                    throw new ArgumentNullException();
                }

                _rstr = rstr;
            }

            public void ApplyClientBehavior(
                ServiceEndpoint endpoint, ClientRuntime clientRuntime)
            {
                clientRuntime.MessageInspectors.Add(new IssuedTokenHeaderInspector(_rstr));
            }

            // rest omitted
        }

    This allows setting up a proxy with an issued token header, and you don't have to worry anymore about embedding the header manually with every call:

        var client = GetWSTrustClient();

        var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Symmetric)
        {
            AppliesTo = new EndpointAddress("https://rp/")
        };

        client.IssueCompleted += (s, args) =>
        {
            _proxy = new StarterServiceContractClient();
            _proxy.Endpoint.Behaviors.Add(new IssuedTokenHeaderBehavior(args.Result));
        };

        client.IssueAsync(rst);

    Since SL4 also supports the IExtension<T> interface, you can also combine this with Nicholas Allen's AutoHeaderExtension.

    Read the article

  • Fixing color in scatter plots in matplotlib

    - by ajhall
    Hi guys, I'm going to have to come back and add some examples if you need them, which you might. But here's the skinny: I'm plotting scatter plots of lab data for my research. I need to be able to visually compare the scatter plots from one plot to the next, so I want to fix the color range on the scatter plots and add a colorbar to each plot (which will be the same in each figure). Essentially, I'm fixing all aspects of the axes and colorspace etc. so that the plots are directly comparable by eye. For the life of me, I can't seem to get my scatter() command to properly set the color limits in the colorspace (default)... i.e., I figure out my total data's min and total data's max, then apply them to vmin and vmax for the subset of data, and the color still does not come out properly in both plots. This must come up here and there; I can't be the only one that wants to compare various subsets of data amongst plots... so, how do you fix the colors so that each datum keeps its color between plots and doesn't get remapped to a different color due to the change in max/min of the subset versus the whole set? I greatly appreciate all your thoughts!!! A Mountain Dew and fiery-hot Cheetos to all! -Allen
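    A minimal sketch of the usual approach (an illustration only, not from the original question; the data and names are made up): compute vmin/vmax once from the complete data set, pass those same limits to every scatter() call, and attach a colorbar to the mappable that scatter() returns, so every figure shares one color scale.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(42)
        x = rng.random(500)
        y = rng.random(500)
        c = rng.uniform(0.0, 100.0, 500)  # lab values mapped to color

        # Global color limits, computed once from the WHOLE data set.
        vmin, vmax = c.min(), c.max()

        # Two subsets plotted in two figures, both pinned to the same scale.
        for subset in (slice(0, 250), slice(250, 500)):
            fig, ax = plt.subplots()
            sc = ax.scatter(x[subset], y[subset], c=c[subset],
                            cmap="viridis", vmin=vmin, vmax=vmax)
            fig.colorbar(sc, ax=ax, label="lab value")  # identical bar per figure
        plt.show()

    With vmin/vmax fixed this way, a point with value 60 gets the same color in every figure, regardless of what the subset's own min and max happen to be.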

    Read the article

  • Reading data from an open HTTP stream

    - by allenjones
    Hi, I am trying to use the .NET WebRequest/WebResponse classes to access the Twitter streaming API here: "http://stream.twitter.com/spritzer.json". I need to be able to open the connection and read data incrementally from the open connection. Currently, when I call the WebRequest.GetResponse method, it blocks until the entire response is downloaded. I know there is a BeginGetResponse method, but this will just do the same thing on a background thread. I need to get access to the response stream while the download is still happening. This just does not seem possible to me with these classes. There is a specific comment about this in the Twitter documentation: "Please note that some HTTP client libraries only return the response body after the connection has been closed by the server. These clients will not work for accessing the Streaming API. You must use an HTTP client that will return response data incrementally. Most robust HTTP client libraries will provide this functionality. The Apache HttpClient will handle this use case, for example." They point to the Apache HttpClient, but that doesn't help much because I need to use .NET. Any ideas whether this is possible with WebRequest/WebResponse, or do I have to go for lower level networking classes? Maybe there are other libraries that will allow me to do this? Thx, Allen
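    The question is .NET-specific, but to make the "return response data incrementally" requirement from the Twitter docs concrete, here is a minimal sketch of the pattern in Python (an illustration only; the endpoint and credentials are placeholders, and the original spritzer endpoint has long since been retired):

        import requests  # third-party HTTP library: pip install requests

        # stream=True makes the call return as soon as the response headers
        # arrive, instead of buffering the (never-ending) body in memory.
        resp = requests.get(
            "https://stream.example.com/spritzer.json",  # placeholder URL
            auth=("username", "password"),               # placeholder credentials
            stream=True,
            timeout=90,
        )
        resp.raise_for_status()

        # iter_lines() yields each newline-delimited record as it is received,
        # which is the incremental behavior a streaming API requires.
        for line in resp.iter_lines():
            if line:  # streaming APIs send blank keep-alive lines; skip them
                print(line.decode("utf-8"))

    The .NET equivalent of this pattern is reading from the response stream as data arrives, rather than waiting for the body to complete.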

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working with a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into a repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that, when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up-to-date, and it also runs locally on every developer's machine. This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask - does this also mean that I should NOT check in the mdf or ldf files that are created from Visual Studio? Thanks for any help and additional insight. Always appreciated.
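    To make the scripts-in-source-control idea concrete, here is a minimal sketch of the pattern (an illustration, not K. Scott Allen's actual tooling): the repository holds numbered change scripts, and a small runner applies any script newer than the version recorded in the database. SQLite stands in here purely to keep the sketch self-contained; the directory layout and table name are invented for the example, and the same idea applies to SQL Server.

        import sqlite3
        from pathlib import Path

        def apply_change_scripts(db_path, scripts_dir):
            """Apply numbered SQL scripts (001_tables.sql, 002_data.sql, ...) in order."""
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
            current = con.execute(
                "SELECT MAX(version) FROM schema_version").fetchone()[0] or 0

            for script in sorted(Path(scripts_dir).glob("*.sql")):
                version = int(script.name.split("_", 1)[0])  # leading digits = version
                if version > current:
                    con.executescript(script.read_text())    # build/upgrade the schema
                    con.execute("INSERT INTO schema_version VALUES (?)", (version,))
                    con.commit()
            con.close()

        # Every developer (and the build server) rebuilds or upgrades a local
        # database from the checked-in scripts, so no .mdf/.ldf file ever
        # needs to live in the repository.
        apply_change_scripts("dev.db", "scripts")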

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up a rapid pace of progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This can be seen in a sample statistic comparing XFS/Ext4+JBD2/Btrfs via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
        XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
        Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)

    What made up those changes in XFS?
    - Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen's efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e., with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead during log object recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, and to support improved quota management to free up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Heated arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:
    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files. The claim that XFS does not work well for small files has been an old story for years.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g., Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS handles transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proved fast and stable in various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support to separate metadata and data storage and reduce file access time: accessing a file requires two round trips, fetching the metadata from the MDS and then getting the data from an OSD, and small file access is limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the round trip to the OSD. For this feature, they have only run some small-scale testing, but they really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even deployed across DreamHost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently using SSD for swap.

    SWAPOUT:
    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but scanning it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: In most cases, swap IO follows an interleaved pattern, because there are multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily.
    - TLB flush: If we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN: A page fault does iodepth=1 synchronous IO, but it's a little wasteful to issue only a page-sized IO. The obvious solution is swap readahead.
SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster.

Misc: lock contention: With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap-device lock. But there is still some lock contention on very fast SSDs because of the swap cache's address_space lock.

Zproject (led by Bob Liu)

Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.

ZSWAP: Implemented on top of the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device - but this can fail when allocating the two pages.

ZRAM: Acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since more pages are needed to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there have been some discussions about merging ZRAM into ZSWAP or vice versa, but there is no agreement yet.

ZCACHE: Handles file page compression, but it was removed from staging recently.

From industry (led by Tang Jie, LSI)

An LSI engineer introduced several new products to us. The first was RAID5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; the feature is called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free-flash-space exhaustion. He said the most useful application for this is databases. Another thing worth mentioning is the NV-DRAM in NMR/Raptor, which is directly exposed to the host system: applications can access the NV-DRAM directly via a memory address, using the standard mmap() system call (a sketch of this access pattern follows below). He said this is very useful for database logging today.
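Here is a minimal sketch of that access pattern (my own illustration; "/dev/nvdram0" is a hypothetical device node, and a real NV-DRAM driver may expose a different interface or flush semantics):

    /* nvdram_log.c - sketch: map a byte-addressable NVM device and write to it.
     * "/dev/nvdram0" is hypothetical; substitute whatever node your driver exposes.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define MAP_LEN (4 * 1024 * 1024)   /* map 4MB of the device */

    int main(void)
    {
        int fd = open("/dev/nvdram0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        char *log = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (log == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* Ordinary stores land in NV-DRAM; no write(2) syscall per record. */
        const char *record = "txn 42 commit\n";
        memcpy(log, record, strlen(record));

        /* Flush the mapping so the record is durable; a real driver may
         * require CPU cache flushes instead of (or as well as) msync(). */
        msync(log, MAP_LEN, MS_SYNC);

        munmap(log, MAP_LEN);
        close(fd);
        return 0;
    }

The attraction for database logging is that the store itself is effectively the log write, with no block IO in the hot path.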
NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially filesystems; for example, journaling may need to be redesigned to fully utilize this kind of nonvolatile memory.

OCFS2 (led by Canquan Shen)

Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years: they have posted 46 upstream patches, of which 39 have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is support for ATS (atomic test and set), which can work together with DLM. This idea looks inspired by VMware's VMFS locking; see http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

CLK - 18th October 2013

Improving Linux Development with Better Tools (Andi Kleen)

This talk focused on how to find and fix bugs as Linux grows in complexity. Generally, we can do this with the following kinds of tools:

- Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things - simple mistakes as well as complex problems - but the challenges are: some are very slow, there are false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker I have found can follow indirect calls ("OO in C", common in the kernel):

    struct foo_ops {
        int (*do_foo)(struct foo *obj);
    };

    foo->do_foo(foo);

- Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
- Fuzzers/test suites. E.g. Trinity is a great tool that finds many bugs, but it needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
- Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
- Tools to read and understand source, e.g. grep/cscope, which work great in many cases but do not understand indirect pointers (the OO-in-C model used in the kernel). "Give me all do_foo instances":

    struct foo_ops {
        int (*do_foo)(struct foo *obj);
    } ops = { .do_foo = my_foo };

    foo->do_foo(foo);

  It would be great to have a cscope-like tool that understands this based on types/initializers.

XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS, Btrfs and Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only three people raised their hands - but they were Chris Mason, Ric Wheeler, and one other attendee. The audience questions were mainly focused on stability and comparisons with other filesystems.

Linux Containers (Feng Gao)

The speaker introduced the purpose of the various kinds of namespaces - mount/UTS/IPC/Network/PID/User - as well as the system API/ABI; a minimal example of the underlying API is sketched below.
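For readers unfamiliar with the kernel side, here is a minimal sketch of the API such tools build on (my own illustration, not from the talk): clone(2) with a CLONE_NEW* flag starts a child in a fresh namespace - here a UTS namespace with its own hostname.

    /* uts_ns_demo.c - create a child in a new UTS namespace.
     * Minimal sketch; run as root (or with CAP_SYS_ADMIN).
     * Build: cc -o uts_ns_demo uts_ns_demo.c
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)

    static int child(void *arg)
    {
        /* Only affects the new UTS namespace, not the host. */
        sethostname("container", strlen("container"));

        char name[64];
        gethostname(name, sizeof(name));
        printf("child hostname: %s\n", name);
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (!stack) {
            perror("malloc");
            return 1;
        }

        /* CLONE_NEWUTS gives the child its own hostname/domainname.
         * The stack grows down, so pass the top of the allocation. */
        pid_t pid = clone(child, stack + STACK_SIZE,
                          CLONE_NEWUTS | SIGCHLD, NULL);
        if (pid == -1) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);

        char name[64];
        gethostname(name, sizeof(name));
        printf("parent hostname: %s\n", name);   /* unchanged on the host */

        free(stack);
        return 0;
    }

The same pattern extends to the other namespaces by OR-ing in flags such as CLONE_NEWPID or CLONE_NEWNET.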
For the userspace tools, he mainly focused on Libvirt LXC rather than the plain LXC tools. Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more namespaces that may appear in the future. The first is an audit namespace, though it is not yet clear whether it should be tied to the user namespace. The other is a syslog namespace, where the question is: do we really need it?

In-memory Compression (Bob Liu)

The same nice introduction as at CLSF, which I have already covered above.

Misc

There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, and so on, but my head is not big enough to record all of those things.

-- Jeff Liu

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn’t because I didn’t believe in the value of TDD; it was more a matter of not completely understanding how to incorporate “test first” into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren’t exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD, and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.  But then I got lucky – Jonathan McCracken contacted me and asked if I’d review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC, as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application, and then dedicates the better part of his book to testing first – first by explaining TDD and then by coding a full-featured Getting Organized application inspired by David Allen’s popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken’s style – it’s fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken’s use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard.  McCracken’s emphasis on real-world, pragmatic development is clearly demonstrated in every tool choice, straightforward code block and developer tip. Whether one is already familiar with the tools and tips or not, McCracken’s thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight? Well, no. I don’t think any book can make that claim. If that were possible, I think book list prices would skyrocket! That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC. Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC, it’s just what you need to get started. Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development? Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach. I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now, and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen. If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • System locking up with suspicious messages about hard disk

    - by Chris Conway
    My system has started behaving strangely, intermittently locking up. I see messages like the following in syslog:

Nov 18 22:22:00 claypool kernel: [ 3428.078156] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Nov 18 22:22:00 claypool kernel: [ 3428.078163] ata3.00: irq_stat 0x40000000
Nov 18 22:22:00 claypool kernel: [ 3428.078167] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00
Nov 18 22:22:00 claypool kernel: [ 3428.078182] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0
Nov 18 22:22:00 claypool kernel: [ 3428.078184] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
Nov 18 22:22:00 claypool kernel: [ 3428.078188] ata3.00: status: { DRDY }
Nov 18 22:22:00 claypool kernel: [ 3428.080887] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Nov 18 22:22:00 claypool kernel: [ 3428.080890] ata3.00: irq_stat 0x40000000
Nov 18 22:22:00 claypool kernel: [ 3428.080893] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00
Nov 18 22:22:00 claypool kernel: [ 3428.080905] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0
Nov 18 22:22:00 claypool kernel: [ 3428.080906] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
Nov 18 22:22:00 claypool kernel: [ 3428.080910] ata3.00: status: { DRDY }

And then this:

Nov 18 23:13:56 claypool kernel: [ 6544.000798] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Nov 18 23:13:56 claypool kernel: [ 6544.000804] ata1.00: failed command: FLUSH CACHE EXT
Nov 18 23:13:56 claypool kernel: [ 6544.000814] ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
Nov 18 23:13:56 claypool kernel: [ 6544.000815] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout)
Nov 18 23:13:56 claypool kernel: [ 6544.000819] ata1.00: status: { DRDY }
Nov 18 23:13:56 claypool kernel: [ 6544.000825] ata1: hard resetting link
Nov 18 23:14:01 claypool kernel: [ 6549.360324] ata1: link is slow to respond, please be patient (ready=0)
Nov 18 23:14:06 claypool kernel: [ 6554.008091] ata1: COMRESET failed (errno=-16)
Nov 18 23:14:06 claypool kernel: [ 6554.008103] ata1: hard resetting link
Nov 18 23:14:11 claypool kernel: [ 6559.372246] ata1: link is slow to respond, please be patient (ready=0)
Nov 18 23:14:16 claypool kernel: [ 6564.020228] ata1: COMRESET failed (errno=-16)
Nov 18 23:14:16 claypool kernel: [ 6564.020235] ata1: hard resetting link
Nov 18 23:14:21 claypool kernel: [ 6569.380109] ata1: link is slow to respond, please be patient (ready=0)
Nov 18 23:14:31 claypool kernel: [ 6579.460243] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Nov 18 23:14:31 claypool kernel: [ 6579.486595] ata1.00: configured for UDMA/133
Nov 18 23:14:31 claypool kernel: [ 6579.486601] ata1.00: retrying FLUSH 0xea Emask 0x4
Nov 18 23:14:31 claypool kernel: [ 6579.486939] ata1.00: device reported invalid CHS sector 0
Nov 18 23:14:31 claypool kernel: [ 6579.486952] ata1: EH complete
Nov 18 23:17:01 claypool CRON[3910]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 18 23:17:01 claypool CRON[3908]: (CRON) error (grandchild #3910 failed with exit status 1)
Nov 18 23:17:01 claypool postfix/sendmail[3925]: fatal: open /etc/postfix/main.cf: No such file or directory
Nov 18 23:17:01 claypool CRON[3908]: (root) MAIL (mailed 1 byte of output; but got status 0x004b, #012)
Nov 18 23:39:01 claypool CRON[4200]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm)

There are no messages marked after 23:39. When I next tried to use the machine, it would not return from the screensaver (blank screen), nor switch to another terminal, and I had to hard reboot it.

[UPDATE] The output of smartctl is here. I had trouble getting this, because / is being mounted read-only (?!), which prevents most applications from running. Also, it may not be related, but I have the following worrying messages in dmesg:

[ 10.084596] k8temp 0000:00:18.3: Temperature readouts might be wrong - check erratum #141
[ 10.098477] i2c i2c-0: nForce2 SMBus adapter at 0x600
[ 10.098483] ACPI: resource nForce2_smbus [io 0x0700-0x073f] conflicts with ACPI region SM00 [??? 0x00000700-0x0000073f flags 0x30]
[ 10.098486] ACPI: This conflict may cause random problems and system instability
[ 10.098487] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 10.098509] i2c i2c-1: nForce2 SMBus adapter at 0x700
[ 10.112570] Linux agpgart interface v0.103
[ 10.155329] atk: Resources not safely usable due to acpi_enforce_resources kernel parameter
[ 10.161506] it87: Found IT8712F chip at 0x290, revision 8
[ 10.161517] it87: VID is disabled (pins used for GPIO)
[ 10.161527] it87: in3 is VCC (+5V)
[ 10.161528] it87: in7 is VCCH (+5V Stand-By)
[ 10.161560] ACPI: resource it87 [io 0x0295-0x0296] conflicts with ACPI region ECRE [??? 0x00000290-0x000002af flags 0x45]
[ 10.161562] ACPI: This conflict may cause random problems and system instability
[ 10.161564] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver

[UPDATE 2] I swapped in a new SATA cable, per Phil's suggestion. The current output of smartctl is here, if it helps.

[UPDATE 3] I don't think the cable fixed it. The system hasn't locked up yet, but my media player crashed a few minutes ago and I have the following in the syslog:

Nov 20 16:07:17 claypool kernel: [ 2294.400033] ata1: link is slow to respond, please be patient (ready=0)
Nov 20 16:07:47 claypool kernel: [ 2324.084581] ata1: COMRESET failed (errno=-16)
Nov 20 16:07:47 claypool kernel: [ 2324.084588] ata1: limiting SATA link speed to 1.5 Gbps
Nov 20 16:07:47 claypool kernel: [ 2324.084592] ata1: hard resetting link

I get the following response from smartctl:

$ sudo smartctl -a /dev/sda
[sudo] password for chris:
sudo: Can't open /var/lib/sudo/chris/0: Read-only file system
smartctl 5.40 2010-03-16 r3077 [i686-pc-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
Device: /0:0:0:0 Version: scsi
ModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
>> Terminate command early due to bad response to IEC mode page
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    Read the article

  • Reverse proxy for a REST web service using ADFS/AD and WebApi

    - by Kai Friis
    I need to implement a reverse proxy for a REST web service behind a firewall. The reverse proxy should authenticate against an enterprise ADFS 2.0 server, preferably using claims in .NET 4.5. The clients will be a mix of iOS, Android and web. I’m completely new to this topic; I’ve tried to implement the service as a REST service using WebApi, WIF and the new Identity and Access control in VS 2012, with no luck. I have also tried to look into Brock Allen’s Thinktecture.IdentityModel.45, but then my head was spinning so much that I couldn't see the difference between it and Windows Identity Foundation with the Identity and Access control. So I think I need to step back and get some advice on how to do this. As far as I understand it, there are several ways to do this:
1. In hardware. Set up our Citrix NetScaler as a reverse proxy. I have no idea how to do that, but if it’s a good solution I can always hire someone who knows…
2. Do it in the web server, like IIS. I haven’t tried it; I do not know if it will work.
3. Create a web service to do it.
3.1 Implement it as a SOAP service using WCF. As I understand it, ADFS does not support REST, so I would have to use SOAP. The problem is that mobile devices do not like SOAP, and neither do I… However, if it’s the best way, I'll do it.
3.2 Use Azure Access Control Service. It might work, but the timing is not good. Our enterprise is considering several cloud options, and us jumping on the Azure wagon on our own might not be the smartest thing to do right now. However, if it is the only option, we can do it; I just prefer not to use it right now.
Right now I feel there are too many options, and I do not know which one will work. If someone can point me in the right direction, and tell me which path to pursue, I would be very grateful.

    Read the article

  • :contains for multiple words

    - by Emin
    I am using the following jQuery:

var etag = 'kate';
if (etag.length > 0) {
    $('div').each(function () {
        $(this).find('ul:not(:contains(' + etag + '))').hide();
        $(this).find('ul:contains(' + etag + ')').show();
    });
}

against the following HTML:

<div id="2">
    <ul>
        <li>john</li>
        <li>jack</li>
    </ul>
    <ul>
        <li>kate</li>
        <li>clair</li>
    </ul>
    <ul>
        <li>hugo</li>
        <li>desmond</li>
    </ul>
    <ul>
        <li>said</li>
        <li>jacob</li>
    </ul>
</div>
<div id="3">
    <ul>
        <li>jacob</li>
        <li>me</li>
    </ul>
    <ul>
        <li>desmond</li>
        <li>george</li>
    </ul>
    <ul>
        <li>allen</li>
        <li>kate</li>
    </ul>
    <ul>
        <li>salkldf</li>
        <li>3kl44</li>
    </ul>
</div>

Basically, as long as etag is a single word, the code works perfectly and hides the elements that do not contain etag. My problem is that when etag is multiple words (and I don't have control over it; it's coming from a database and could be a combination of multiple words separated by space characters), the code does not work. Is there any way to achieve this?

    Read the article

  • How to use Crtl in a Delphi unit in a C++Builder project? (or link to C++Builder C runtime library)

    - by Craig Peterson
    I have a Delphi unit that statically links a C .obj file using the {$L xxx} directive. The C file is compiled with C++Builder's command line compiler. To satisfy the C file's runtime library dependencies (_assert, memmove, etc.), I'm including the crtl unit Allen Bauer mentioned here.

unit FooWrapper;

interface

implementation

uses
  Crtl;  // Part of the Delphi RTL

{$L FooLib.obj}  // Compiled with "bcc32 -q -c foolib.c"

procedure Foo; cdecl; external;

end.

If I compile that unit in a Delphi project (.dproj), everything works correctly. If I compile that unit in a C++Builder project (.cbproj), it fails with the error:

[ILINK32 Error] Fatal: Unable to open file 'CRTL.OBJ'

And indeed, there isn't a crtl.obj file in the RAD Studio install folder. There is a .dcu, but no .pas. Trying to add crtdbg to the uses clause (the C header where _assert is defined) gives an error that it can't find crtdbg.dcu. If I remove the uses clause, it instead fails with errors that __assert and _memmove aren't found. So, in a Delphi unit in a C++Builder project, how can I export functions from the C runtime library so they're available for linking? I'm already aware of Rudy Velthuis's article. I'd like to avoid manually writing Delphi wrappers if possible, since I don't need them in Delphi, and C++Builder must already include the necessary functions.

Edit: For anyone who wants to play along at home, the code is available in Abbrevia's Subversion repository at https://tpabbrevia.svn.sourceforge.net/svnroot/tpabbrevia/trunk. I've taken David Heffernan's advice and added an "AbCrtl.pas" unit that mimics crtl.dcu when compiled in C++Builder. That got the PPMd support working, but the Lzma and WavPack libraries both fail with link errors:

[ILINK32 Error] Error: Unresolved external '_beginthreadex' referenced from ABLZMA.OBJ
[ILINK32 Error] Error: Unresolved external 'sprintf' referenced from ABWAVPACK.OBJ
[ILINK32 Error] Error: Unresolved external 'strncmp' referenced from ABWAVPACK.OBJ
[ILINK32 Error] Error: Unresolved external '_ftol' referenced from ABWAVPACK.OBJ

AFAICT, all of them are declared correctly, and the _beginthreadex one is actually declared in AbLzma.pas, so it's used by the pure Delphi compile as well. To see it yourself, just download the trunk (or just the "source" and "packages" directories), disable the {$IFDEF BCB} block at the bottom of AbDefine.inc, and try to compile the C++Builder "Abbrevia.cbproj" project.

    Read the article
