Search Results

Search found 14713 results on 589 pages for 'release upgrade'.


  • Fitting it together, database, reporting, applications in C#

    - by alvonellos
    Introduction

    Preamble

    I was hesitant to post this, since it's an application whose intricate details are defined elsewhere, and answers may not be helpful to others. Within the past few weeks (I was actually going to write a blog post about this after I finished) I've discovered that the barrier I'm encountering is one that's actually quite common for newer developers. This question is not so much about a specific thing as it is about piecing those things together. I've searched the internet far and wide, and found many tutorials on how to create applications that are kind of similar to what I'm looking for. I've also looked at hiring another, more experienced, developer to help me along, but all I've gotten are unqualified candidates that don't have the experience necessary and won't take care of the client or project like I will. I'd rather have the project never transpire than release a solution that is half-baked. I've asked professors at my school, but they've not turned up answers to my question. I'm an experienced developer, and I've written many applications that are -- very abstractly -- close to what I'm doing, but my experience from those applications isn't giving me enough leverage to solve this particular problem. I just hope that posting this isn't a mistake.

    Project Description

    I have a project I'm working on for a client that is a rewrite of an application, originally written in FoxPro 2.6 by someone before me, that performs some analysis (which, sadly, I'm not allowed to disclose as per my employment contract) on financial data. One day, after a long talk between the client and me -- where he intimately described his frustrations with all the bugs I've been hacking out of this code for 6 months now -- he told me to just rewrite it, and gave me a month to write a good 1/8 of this 65k-LOC FoxPro monstrosity. It'll take me a good 3-6 months to rewrite this software (I know things the original programmer did not, like inheritance) going as I am right now, but I'm quickly discovering that I'm going to need to use databases. Prior to this contract I didn't even know about FoxPro, so I've had to learn it on the fly, write procedures, and make modifications to the database. I've actually come to like it, and this project would be rewritten in FoxPro if it were still a supported language, because over the past few months I've come to like the features of FoxPro that make it so easy to develop data-driven applications. I once performed an experiment comparing C# to FoxPro: what took me 45 minutes in C# took me two in FoxPro, and I knew C# prior to FoxPro. I was hoping to leverage the power of C#, but it intimidates me that in FoxPro you can have one line of code and be using a database. Prior to this, I have never written any serious database development from scratch. All the applications that I've written are in a different league. They are either completely data-naive, or data-naive enough that I can get away with not using a database, through serialization or by designing algorithms that work with the data in a stateless manner, so there is no need to worry about databases. I've come to realize, very quickly, that serialization and my efficacy with data structures have been my crutch all these years, one that's prevented me from venturing into databases and has consequently hindered my success in real-world programming.
    Sure, I've written some database stuff in Perl and Python, I've done forms and worked with relational databases and tables, and I'm a wizard in Access and Excel (seriously) who can do just about anything, but it just feels unnatural writing SQL code inside another language. I don't mind writing SQL; it's that bridge between the database and the program code that drives me absolutely bonkers. I hope I'm not the only one to think this, but it bothers me that I have to create statements like the following:

    string sSql = "SELECT * from tablename";

    when there's really no reason for that kind of unchecked binding between two languages and two APIs. Don't get me wrong, SQL is great, but I don't like the idea that, when executing commands on a SQL database, one must intermix database and application software, and that there's no database independence, which means that different versions of different databases can break code. This isn't very nice. The nicest thing about FoxPro is the cohesiveness between programming language and database. It's so easy, and FoxPro makes it easy, because the tool just fits the task. I can see why so many developers have built a career with this language, because it lowered the barrier to entry for the data-driven applications that so many businesses need. It was wonderful. For my purposes today, though, with the demands and need for community support, extensibility, and language features, FoxPro isn't a solution that I feel would be the right tool for the job. I'm also worried about working too heavily with the database, because I've seen data-driven .NET applications have issues with database caches, running out of memory, and objects in the database not being collected (memory leaks). And OH the queries. Which one, how, and why? There is a plethora of different ways that a database can be set up; I think I counted 5 or 6 different kinds of database projects alone that I can choose from. That is a great mountain for me to climb when I don't even know where to begin when it comes to writing data-driven applications. The problem isn't that I don't know SQL or that I don't know C#. I know both and have worked with both extensively. It's making them work together that's the problem, and it's something I've never done in C# before.

    Reports

    The client likes paper. The data needs to be printed out in a format that is extensible, layered, and easy to use. I have never done reporting before, and so this is a bit of a problem. From the data source comes Crystal Reports, and so there's a dependency on the database, from what I understand.

    Code reuse

    A large part of the design decisions I've gone through so far is to break the task of writing a piece of this software into routines, modular DLLs, and so forth, such that much of the code can be reused. For example, when I set up this database, I want to be able to reuse the same database code over and over again. I also want to make sure that when the day comes that another developer is here, he or she will be able to pick up just where I left off. The quicker I develop these applications, the better off I am.

    Tasks & Goals

    In my project, I need to write routines that apply algorithms and look for predefined patterns in financial data. Additionally, I need to simulate trading based on predefined algorithms and data. Then I need to prepare reports on that data.
    Additionally, I need a way to change the code base for this application quickly and effectively, without hacking together some band-aid solution for a problem that really needs a trauma ward.

    Special Considerations

    The solution must be fast, run quickly on existing hardware, and not be too much of a pain to maintain and write. I understand that anything I write I'm married to -- I'm responsible for the things that I write because my reputation and livelihood are dependent on it. Do I really need a database? What about performance? Performance was such a big issue that I hand-wrote a data structure that is capable of performing 2 billion operations, using a total of 4 gigs of memory, in under 1/4 of a second on a standard Core 2 Duo processor. I could not find a similar, pre-written data structure in C# to perform this task. What setup do I use in terms of database? What about reporting? I'd prefer to have PDFs generated, but I'd like to be able to visually sketch those reports and then just have a ReportFactory of some sort that, when I pass some variables in, produces the report from that data.

    About Me

    I'm a lone developer for a small business in this area. This is the first time I've done this, and I've never had the breadth and depth of my knowledge tested. I'm incredibly frustrated with this project because I feel overwhelmed by the task at hand. I'm looking for that entry point where I can draw a line and say "this is what I need to do".

    Conclusion

    I may not have been clear enough in my post. I'm still new to this whole thing, and I've been doing my best to contribute back to the community that I've leeched so much knowledge from. I'd be glad to edit my post and add more information if possible. I'm looking for a big-picture solution or design process that helps me get off the ground in this world of data-driven applications, because I have a feeling that it's going to be central to my entire career as a programmer for some time. Specifically, if you didn't get it from the rest of the post (I may not have been clear enough), I really need some guidance as to where to go in terms of the design decisions for this project. One thing that would be useful is a pro/con list for the different kinds of database projects available in VS2010; I've tried, but generating that list has been as hard as solving the problem itself. If you could walk a developer through writing a data-driven application in C# for the first time, how would you do it? Where would you point them?
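    For illustration, here is a minimal sketch of the kind of bridge I keep describing: a parameterized query wrapped in a small reusable helper instead of a raw concatenated SQL string. The table and column names are hypothetical, and it assumes SQL Server via ADO.NET:

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public static class PriceRepository
    {
        // In a real project the connection string would come from app.config.
        private const string ConnectionString =
            @"Server=.\SQLEXPRESS;Database=Analysis;Integrated Security=true";

        // Fetches closing prices for one ticker. The parameter is typed and
        // escaped by the provider, so the query text never mixes with data.
        public static List<decimal> GetClosingPrices(string ticker)
        {
            var prices = new List<decimal>();
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "SELECT ClosePrice FROM DailyPrice WHERE Ticker = @ticker", connection))
            {
                command.Parameters.Add("@ticker", SqlDbType.NVarChar, 10).Value = ticker;
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        prices.Add(reader.GetDecimal(0));
                }
            }
            return prices;
        }
    }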
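    And on the reporting side, the ReportFactory shape I'm imagining looks roughly like the sketch below. The names are purely hypothetical; this is the seam I want, not a working Crystal Reports binding:

    using System;
    using System.Collections.Generic;

    // A report accepts a bag of parameters and renders itself to a PDF file.
    public interface IReport
    {
        void RenderToPdf(string outputPath, IDictionary<string, object> parameters);
    }

    // The factory hides which reporting engine is used behind a name lookup,
    // so the engine can be swapped without touching the calling code.
    public static class ReportFactory
    {
        private static readonly Dictionary<string, Func<IReport>> Registry =
            new Dictionary<string, Func<IReport>>(StringComparer.OrdinalIgnoreCase);

        public static void Register(string name, Func<IReport> creator)
        {
            Registry[name] = creator;
        }

        public static IReport Create(string name)
        {
            Func<IReport> creator;
            if (!Registry.TryGetValue(name, out creator))
                throw new ArgumentException("Unknown report: " + name);
            return creator();
        }
    }

    The application would then call something like ReportFactory.Create("TradeSummary").RenderToPdf(...) and never reference the reporting engine directly.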


  • Top tweets SOA Partner Community – October 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    Ronald Luttikhuizen: My latest upload: SOA Made Simple | Introduction to SOA on @slideshare http://www.slideshare.net/rluttikhuizen/soa-made-simple-introduction-to-soa … via @SlideShare
    OTNArchBeat: ArchBeat Link-o-Rama for October 4, 2013 #cloud #linux #oaam #soa http://pub.vitrue.com/y4SK
    Lucas Jellema: My blog article shows news on the new SOA Suite 12c release - as it was publicly available during #oow13 see: http://technology.amis.nl/2013/09/27/oow13-soa-suite-12c/ …
    Yogesh Sontakke: Introducing OER's new Express Workflows - Simplified Lifecycle Management. Blog post: http://bit.ly/16JKHCf @soacommunity #soagovernance
    SrinivasPadmanabhuni: "@OTNArchBeat: SOA and User Interfaces - by @soacommunity @HajoNormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp "
    SOA Community: SOA and User-Interfaces http://servicetechmag.com/I76/0913-2 article published as part of #industrialSOA at Service Technology Magazine #soacommunity
    Estafet Limited: @Estafet win @UKOUG Middleware Partner of the Year 2013
    Yogesh Sontakke: RT @VikasAatOracle: #Oracle #B2B - written by experts #soa #soacommunity #oraclesoa - time to get a copy ! @SOAScott
    Danilo Schmiedel: Thanks a lot to Juergen @soacommunity for the super interesting and well-organized Partner Advisory Council yesterday! Such a Great Value!
    OTNArchBeat: Case management supporting re-landscaping application portfolios | @leonsmiers http://pub.vitrue.com/MC5j
    Samantha Searle: Apply for the #GartnerBPM 2014 Excellence Awards - find out how via this link http://ow.ly/ptaNQ #Gartner #bpm #process #entarch #cio
    OTNArchBeat: SOA and User Interfaces - by @soacommunity @hajonormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp
    Dain Hansen: Hybrid #cloud is on the rise, but is the IT department's culture standing in the way? http://add.vc/eJN #CloudIntegration #OracleSOA
    OTNArchBeat: #SOASuite 11g ps6 - Download your log files directly from the Enterprise Manager | @whitehorsenl http://pub.vitrue.com/KrJ2
    Whitehorses: Whiteblog: SOA Suite 11g ps6 - Download your log files directly from the Enterprise Manager (http://goo.gl/2Gqiax )
    Rajesh Raheja: Cloud integration session recap #oow13 http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 …
    Vikas Anand: @Ahmed_Aboulnaga thanks for the excellent summary and kind words. #oow13 #cloud #oraclesoa http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 …
    Luis Augusto Weir: REST is also SOA. Check it out http://www.soa4u.co.uk/2013/09/restful-is-also-soa.html?m=1 … #soacommunity
    Graham: “@OracleBPM & @soacommunity: 5 Ways to Modernize Applications with BPM #AppAdvantage" #oracleday http://bit.ly/15yC6e3
    SOA Community: #ACED director asked me for BPM references in FSI - ever visited my #SOACommunity workspace? https://beehiveonline.oracle.com/teamcollab/overview/SOA_Community_Workspace … #soacommunity #bpm
    OracleBlogs: SOA Community Newsletter September 2013 http://ow.ly/2Aj6oK
    OTNArchBeat: OOW13: First glimpses of the new #SOASuite12c | @LucasJellema http://pub.vitrue.com/2YgX
    sbernhardt: Just published new blog entry on OOW 2013 wrap up. http://thecattlecrew.wordpress.com/2013/09/30/oracle-open-world-2013-wrap-up/ … #oow13 @OC_WIRE @soacommunity
    Emiel Paasschens: Home with family after an overwhelming #OOW week in San Francisco with lot of info & meetings. Special thanx to @OracleBelux & @soacommunity
    Robert van Mölken: Had a awesome week at #OOW13 in SF. Highlights were the @soacommunity Wine tour, @OracleBelux meet-ups and @OracleSOA CAB. Thanks to all :)
    SOA Community: The place Oracle Fusion middleware comes from - Oracle 200 - TKs office - next Oracle 100 - SOA & BPM #soacommunity pic.twitter.com/qibFOQVbRo
    Oracle BPM: 5 Ways to Modernize Applications with BPM #AppAdvantage http://pub.vitrue.com/l2dn
    Simon Haslam: Ha ha - how did we miss that! RT @lucasjellema: Post conference announcement of a new middleware appliance? #oow13 pic.twitter.com/3NvcjPfjXb
    OTNArchBeat: The OTNArchBeat Daily is out! http://paper.li/OTNArchBeat/1329828521 … Top stories today via @lucasjellema @myfear @TylerJewell
    Packt Publishing: Get 50% off ALL our DRM-free eBooks - this weekend only! Go to http://www.packtpub.com/ and use code BIG50, as often as you like! #BIG50
    OracleBlogs: Global Perspective: ACE Director from EMEA Weighs in on AppAdvantage http://ow.ly/2Afek2
    orclateamsoa: #orclateamsoa Blog: BPM Auditing Demystified - I've heard from a couple of customers recently asking about BPM aud... http://ow.ly/2AfbAn
    AMIS, Oracle & Java: Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/gb4HLbUarR
    SOA Community: Let us know what was best at #OOW @soacommunity save trip home - thanks for coming to #SF ;-) see you at #OOW2014 pic.twitter.com/xbWXjRapqh
    Lonneke Dikmans: Nice @dschmied is talking about the different steps in his project. He starts with explaining the user interface design #oow13 #ux #acm
    Lonneke Dikmans: Saving the best for the end: managing knowledge worker processes by @dschmied and Prasen.#oow13 #acm cool stuff: adaptive case management
    Luis Augusto Weir: SOA Governance is more than just OER. Requires people, processes and tools. Check it out #SOA #soacommunity http://youtu.be/Ohn06smVKVw
    Lonneke Dikmans: “@OracleSOA: #oow Join us for:Enterprise SOA Infrastructure Best Practices Thu 9/26 2:00 PM - 3:00 PM Moscone West - 2020
    SOA Community: Business Process Management (BPM) 11g PS6 Awareness Course http://wp.me/p10C8u-1as
    Ajay Khanna: Detect, Analyze, Act - Fast! http://wp.me/p10C8u-1ao via @soacommunity #OracleBPM
    Simone Geib: It took a while, but I finally reached 500 followers. Thanks everybody and especially @soacommunity :)
    SOA Community: Functional Testing Business Processes In Oracle BPM Suite 11g by Arun Pareek http://wp.me/p10C8u-1aq
    SOA Community: Distribute the September edition of the SOA Community newsletter - READ it! Didn't receive it? Register: http://www.oracle.com/goto/emea/soa #soacommunity
    SOA Community: Detect, Analyze, Act - Fast! by Ajay Khanna http://wp.me/p10C8u-1ao
    Robert van Mölken: Finalised my #OOW presentation #CON8736 and live demo on wednesday 25th at 11:45am. Also giving a short version at the SOA CAB on thursday.
    Rajesh Raheja: "The AppAdvantage of Oracle Cloud & On-premises Integration" http://bit.ly/14RYHmZ
    SOA Community: Additional new content SOA & BPM Partner Community http://wp.me/p10C8u-1aw
    Dain Hansen: Right now #oow13 SOA, BPM - Customer Advisory Boards. 'No tweeting' says @SOASimone. Instagram of funny cats still ok.
    leonsmiers: Case Management with Oracle BPM Suite our presentation on #oow13 http://www.slideshare.net/leonsmiers/oracle-open-world-2013-case-management-smiers-kitson … #capgemini @nkitson72
    Mark Simpson: Flextronics reduced cost of processing an invoice to <$1 from $7 due to BPM @OracleBPM #oow13 saving millions. Way less than industry avg.
    Holger Mueller: #Siemens Shared Services CIO says that #Fusion #Middleware made the difference for #Oracle over #Workday. #Integration matters. #OOW13
    oracleopenworld: Miss any #oow13 keynotes, or simply want to rewatch? Check out the live streaming site for keynotes on demand: http://pub.vitrue.com/RG4D
    SOA Community: Analyze your m2m data and act on it! Big data Pattern matching, fast data & soa #soacommunity #oow pic.twitter.com/48Q1z4ckh7
    SOA Community: Top tweets SOA Partner Community – September 2013 http://wp.me/p10C8u-1cR
    Simone Geib: #oraclesoa hands on lab at #oow13 pic.twitter.com/IJJrqXIMiu
    Danilo Schmiedel: #oow13 CON8436: Managing Knowledge Worker Processes. Come & get a free Adaptive Case Management poster @soacommunity pic.twitter.com/FRc2CSyLwb
    John Sim: Great job again Jurgen @soacommunity helping bring Ace Community together!
    Danilo Schmiedel: Excellent #OracleBPM Adaptive Case Management intro by @heidibuelowBPM and Prasen at the #oow13 demo ground.Last chance today @soacommunity
    SOA Community: Thanks to all our #bpm #soa and #weblogic partners for the great middleware business #oow #soacommunity pic.twitter.com/dBwZ8DMHfH
    Whitehorses: Thanks @soacommunity for the party tonight. Great to meet product management & see all the talented EMEA middleware specialists. #oow13
    Danilo Schmiedel: Great tool demo from Link Consulting about managing your SOA with OER #oow13 @soacommunity
    Torsten Winterberg: “@soacommunity: thanks to @dschmied and @OC_WIRE for making it happen to have our case management poster as printed version here at #oow13
    Ronald Luttikhuizen: These were the architects involved in the diagram excitement :) just after State of SOA podcast with @OTNArchBeat pic.twitter.com/5B8jIrVTA9
    SOA Community: Tanks to AVIO for the excellent #bpmn poster and the great bpm business - visit then at #OOW & get the poster pic.twitter.com/ebTg9pFY1C
    Dain Hansen: Kurian introducing Oracle Platform-as-a-Service developments. #oow13 #OracleCloud pic.twitter.com/evJLTU53rx
    Bruce Tierney: API Management "multi-level pie chart" at #oow13 by Oracle's Tim Hall pic.twitter.com/q12OIRdaue
    Dain Hansen: This is not your Daddy's BAM @soacommunity: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/EvwqXW9U5j
    SOA Community: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/LybHxyF362
    SOA Community: SOA governance by @Yogesh_Sontakke at demo point sr214 many good new features - key for soa projects #oow #soa pic.twitter.com/DFK0ummsK1
    SOA Community: Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/GDKcqDGhCF
    SOA Community: Adaptive Case Management demo point at #OOW visit @heidibuelowBPM get a demo and cmmn notation poster #soacommunity pic.twitter.com/T7yEyI7tdn
    Lonneke Dikmans: In case you missed it: http://blog.vennster.nl/2013/09/case-management-part-1.html?spref=tw …
    Lucas Jellema: SOA Suite news: Cloud Adapters RightNow and SalesForce plus SDK to develop custom cloud adapters (CY13); REST/JSON support in SB/SCA (12c)
    Oracle SOA: Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/UfPB
    Dain Hansen: Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/4QWA
    Hajo Normann: #BigData, eventing & real time #analytics suggest timely next actions in #oracleBPM & #oracleACM; #oow13 #FastData pic.twitter.com/aFVGrTXPqu
    Mark Simpson: OEP CQL engine now used in BAM12c for event stream summary computation with temporal and pattern match features to feed dashboards. #oow13
    Mark Simpson: BAM12c virtually a new product. Analytics that senses ahead of time and also compares to historical trends to guide process or case #oow13
    Andrejus Baranovskis: Enabling UI Shell 12c/11g Multitasking Behavior http://fb.me/18l9vxQfA
    Amit Zavery: Oracle Fusion Middleware Empowers Business Users, EVP Thomas Kurian's session summary http://onforb.es/18Ta1jf #oow13 #oraclemiddle #oracle
    Vikas Anand: #oow13 #oracleopenworld BPM on display at Middleware keynote by Thomas Kurian pic.twitter.com/PMm719S0Ui
    SOA Community: BPM composer - business user empowerment #oow #soacommunity #bpmsuite pic.twitter.com/0Qgl6oVh0h
    SOA Community: Model your process in BPMN - make is executable and analyze & improve them #oow #soacommunity pic.twitter.com/jkLlObDdoi
    Bruce Tierney: @demed and Thomas Kurian talk mobile and cloud at #oow13 pic.twitter.com/bAAeqn5a2V
    Amit Zavery: Thomas Kurian showcasing all the new features of Oracle Fusion Middleware #oraclemiddle #oow13
    SOA Community: Demo time cloud adapters in #soasuite at Thomas Kurian keynote. Build and integrate mobile apps in minutes #oow pic.twitter.com/qTnCOJLLwS
    SOA Community: Soa suite cloud adapters and mobile apps by @demed at Thomas Kurian keynote #oow #oracle #soacommunity pic.twitter.com/5aMLkNH4Ng
    Danilo Schmiedel: First impressions from Oracle Open World 2013 http://wp.me/p2fG8x-77 @soacommunity @OC_WIRE
    SOA Community: Good morning SFO let us know if you attend #OOW & #OPN keynote - #soacommunity pic.twitter.com/hzLYGDlRgE
    Simon Haslam: Had a very useful @wlscommunity PAC meeting yesterday... & probably the best swag to date! pic.twitter.com/Lqus8ysbp7
    Vikas Anand: Oracle SOA Suite - Team Blog http://bit.ly/18I1Zj7
    Rajesh Raheja: Introducing new Cloud Connectivity Adapters #soa #demopod #oow13. I'll be there Sep 23 & 24 3-6pm to meetup http://bit.ly/18I1Zj7
    leonsmiers: ..and again a very successful Oracle SOA/BPM partner council on the eve of #oow13. Thanks Jurgen! @soacommunity pic.twitter.com/aM1LMlb7Yw
    Vikas Anand: #oow13 #soa #oep #exalogic Canon Delivers Fast Data with Oracle Event Processing (Oracle SOA Suite) http://bit.ly/1dwPeHb #soacommunity
    Rolf Scheuch: The ACM poster is a big success. Great talks and .... I am soon out of posters! #bpmcon #ACM pic.twitter.com/TriaUyXRWK
    Oracle SOA: British Telecom Sucess with Oracle B2B #oow #soa #b2b http://pub.vitrue.com/1RWi
    leonsmiers: (Oracle) Case Management supporting re-platforming, a pre-read before our presentation at #oow13 http://leonsmiers.blogspot.com/2013/09/case-management-supporting-re.html … #capgemini #yammer

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: Twitter,SOA,Oracle BPM,Community,OPN,Jürgen Kress


  • 12.04 LTS: no network or Internet

    - by dgermann
    Friends --
    I cannot connect reliably to the local ethernet, and not at all to the Internet.

    Symptoms: About 2 weeks ago I did an upgrade. Since then I have not been able to connect to the ethernet or the Internet. Today, for example, I booted up this System76 laptop and there was no network connection. I did sudo mount -a and got some internal network connectivity:

    doug@ubuntu:/sam$ ping earth
    PING earth (192.168.0.201) 56(84) bytes of data.
    64 bytes from earth (192.168.0.201): icmp_req=1 ttl=64 time=0.160 ms
    64 bytes from earth (192.168.0.201): icmp_req=2 ttl=64 time=0.177 ms
    64 bytes from earth (192.168.0.201): icmp_req=3 ttl=64 time=0.159 ms
    ^C
    --- earth ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1998ms
    rtt min/avg/max/mdev = 0.159/0.165/0.177/0.013 ms

    doug@ubuntu:/sam$ ping doug2
    PING doug (192.168.0.4) 56(84) bytes of data.
    ^C
    --- doug ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 1999ms

    doug@ubuntu:/sam$ ping sharon
    PING sharon (192.168.0.111) 56(84) bytes of data.
    64 bytes from sharon (192.168.0.111): icmp_req=1 ttl=128 time=0.276 ms
    ^C
    --- sharon ping statistics ---
    6 packets transmitted, 1 received, 83% packet loss, time 5031ms
    rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms

    doug@ubuntu:/sam$ ping 192.168.0.1
    PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
    ^C
    --- 192.168.0.1 ping statistics ---
    6 packets transmitted, 0 received, 100% packet loss, time 4999ms

    doug@ubuntu:/sam$ ping earth
    PING earth (192.168.0.201) 56(84) bytes of data.
    ^C
    --- earth ping statistics ---
    5 packets transmitted, 0 received, 100% packet loss, time 4032ms

    doug@ubuntu:/sam$ ping yahoo.com
    ping: unknown host yahoo.com

    doug@ubuntu:/sam$ ping ubuntu.com
    ping: unknown host ubuntu.com

    doug@ubuntu:/sam$ ping 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
    ^C
    --- 8.8.8.8 ping statistics ---
    14 packets transmitted, 0 received, 100% packet loss, time 13103ms

    Note that earth is the cifs server; one time pinging it worked, and later it failed.
Clues: doug@ubuntu:/sam$ grep -i eth /var/log/syslog |tail Aug 23 15:32:46 ubuntu kernel: [ 5328.070401] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:32:48 ubuntu kernel: [ 5330.651139] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=19090 PROTO=2 Aug 23 15:34:51 ubuntu kernel: [ 5453.072279] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:34:55 ubuntu kernel: [ 5457.085433] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16137 PROTO=2 Aug 23 15:36:56 ubuntu kernel: [ 5578.074492] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:37:00 ubuntu kernel: [ 5582.359006] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16150 PROTO=2 Aug 23 15:39:01 ubuntu kernel: [ 5703.074410] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:39:03 ubuntu kernel: [ 5705.070122] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16163 PROTO=2 Aug 23 15:41:06 ubuntu kernel: [ 5828.074387] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:41:13 ubuntu kernel: [ 5835.319941] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=23298 PROTO=2 doug@ubuntu:/sam$ ifconfig -a eth0 Link encap:Ethernet HWaddr [BLANKED] inet addr:192.168.0.7 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::21b:fcff:fe29:9dfc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3961 errors:0 dropped:0 overruns:0 frame:0 TX packets:2007 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:991204 (991.2 KB) TX bytes:252908 (252.9 KB) Interrupt:16 Base address:0xec00 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:2190 errors:0 dropped:0 overruns:0 frame:0 TX packets:2190 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:168052 (168.0 KB) TX bytes:168052 (168.0 KB) wlan0 Link encap:Ethernet HWaddr 00:19:d2:72:5a:0c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) doug@ubuntu:/sam$ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off eth0 no wireless extensions. 
doug@ubuntu:/sam$ lsmod Module Size Used by des_generic 21191 0 md4 12523 0 nls_iso8859_1 12617 1 nls_cp437 12751 1 vfat 17308 1 fat 55605 1 vfat usb_storage 39646 1 dm_crypt 22528 1 joydev 17393 0 snd_hda_codec_analog 75395 1 snd_hda_intel 32719 2 pcmcia 39826 0 snd_hda_codec 109562 2 snd_hda_codec_analog,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec ip6t_LOG 16846 4 xt_hl 12465 6 ip6t_rt 12473 3 snd_pcm 80916 2 snd_hda_intel,snd_hda_codec nf_conntrack_ipv6 13581 7 nf_defrag_ipv6 13175 1 nf_conntrack_ipv6 ipt_REJECT 12512 1 ipt_LOG 12783 5 xt_limit 12541 12 xt_tcpudp 12531 21 xt_addrtype 12596 4 snd_seq_midi 13132 0 xt_state 12514 14 ip6table_filter 12711 1 ip6_tables 22528 3 ip6t_LOG,ip6t_rt,ip6table_filter nf_conntrack_netbios_ns 12585 0 nf_conntrack_broadcast 12541 1 nf_conntrack_netbios_ns nf_nat_ftp 12595 0 nf_nat 24959 1 nf_nat_ftp nf_conntrack_ipv4 19084 9 nf_nat nf_defrag_ipv4 12649 1 nf_conntrack_ipv4 nf_conntrack_ftp 13183 1 nf_nat_ftp nf_conntrack 73847 8 nf_conntrack_ipv6,xt_state,nf_conntrack_netbios_ns,nf_conntrack_broadcast,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp iptable_filter 12706 1 ip_tables 18106 1 iptable_filter snd_rawmidi 25424 1 snd_seq_midi psmouse 86982 0 x_tables 22011 13 ip6t_LOG,xt_hl,ip6t_rt,ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,xt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables arc4 12473 2 r592 17808 0 snd_seq_midi_event 14475 1 snd_seq_midi memstick 15857 1 r592 yenta_socket 27465 0 serio_raw 13027 0 pcmcia_rsrc 18367 1 yenta_socket iwl3945 73186 0 pcmcia_core 21511 3 pcmcia,yenta_socket,pcmcia_rsrc iwl_legacy 71334 1 iwl3945 snd_seq 51592 2 snd_seq_midi,snd_seq_midi_event mac80211 436493 2 iwl3945,iwl_legacy snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq rfcomm 38139 0 bnep 17830 2 parport_pc 32114 0 bluetooth 158447 10 rfcomm,bnep ppdev 12849 0 cfg80211 178877 3 iwl3945,iwl_legacy,mac80211 asus_laptop 23693 0 sparse_keymap 13658 1 asus_laptop input_polldev 13648 1 asus_laptop nls_utf8 12493 6 cifs 258037 10 snd 62218 13 snd_hda_codec_analog,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 14635 1 snd mac_hid 13077 0 snd_page_alloc 14108 2 snd_hda_intel,snd_pcm lp 17455 0 parport 40930 3 parport_pc,ppdev,lp i915 428418 3 firewire_ohci 40172 0 sdhci_pci 18324 0 sdhci 28241 1 sdhci_pci firewire_core 56940 1 firewire_ohci crc_itu_t 12627 1 firewire_core r8169 56396 0 drm_kms_helper 45466 1 i915 drm 197641 4 i915,drm_kms_helper i2c_algo_bit 13199 1 i915 video 19115 1 i915 doug@ubuntu:/sam$ dmesg |grep eth [ 0.116936] i2c-core: driver [aat2870] using legacy suspend method [ 0.116939] i2c-core: driver [aat2870] using legacy resume method [ 1.453811] r8169 0000:03:07.0: eth0: RTL8169sb/8110sb at 0xf840ec00, [BLANKED], XID 10000000 IRQ 16 [ 1.453815] r8169 0000:03:07.0: eth0: jumbo features [frames: 7152 bytes, tx checksumming: ok] [ 25.681231] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 154.037318] r8169 0000:03:07.0: eth0: link down [ 154.037329] r8169 0000:03:07.0: eth0: link down [ 154.037596] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 155.583162] r8169 0000:03:07.0: eth0: link up [ 155.583366] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 156.637048] r8169 0000:03:07.0: eth0: link down [ 156.637066] r8169 0000:03:07.0: eth0: link down [ 156.637339] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 156.773699] r8169 0000:03:07.0: eth0: link down [ 156.773983] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 158.456181] 
r8169 0000:03:07.0: eth0: link up [ 158.456378] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 159.364468] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 162.384496] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=38877 PROTO=2 [ 166.272457] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 166.422333] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=40695 PROTO=2 [ 168.736049] eth0: no IPv6 routers present [ 183.572472] r8169 0000:03:07.0: eth0: link down [ 183.572490] r8169 0000:03:07.0: eth0: link down [ 183.572934] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 185.204801] r8169 0000:03:07.0: eth0: link up [ 185.205005] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 3620.680451] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3621.068431] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3624.912973] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9118 PROTO=2 [ 3631.088069] eth0: no IPv6 routers present [ 3703.062980] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3703.465330] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9210 PROTO=2 [ 3828.062951] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3833.617772] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9749 PROTO=2 [ 3953.062920] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3955.675129] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15983 PROTO=2 [ 4078.062922] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4078.386319] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15997 PROTO=2 [ 4203.062899] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4203.559241] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16011 PROTO=2 [ 4328.062833] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4328.930922] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16027 PROTO=2 [ 4453.062811] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4453.950224] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16039 PROTO=2 [ 4578.062742] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4580.626432] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 
TOS=0x00 PREC=0x00 TTL=1 ID=13738 PROTO=2 [ 4703.062704] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4706.310170] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15942 PROTO=2 [ 4828.062707] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4832.174324] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16505 PROTO=2 [ 4953.062628] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4961.469282] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16090 PROTO=2 [ 5078.062552] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5080.776462] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=17239 PROTO=2 [ 5203.070394] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5205.358134] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=17665 PROTO=2 [ 5328.070401] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5330.651139] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=19090 PROTO=2 [ 5453.072279] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5457.085433] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16137 PROTO=2 [ 5578.074492] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5582.359006] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16150 PROTO=2 [ 5703.074410] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5705.070122] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16163 PROTO=2 [ 5828.074387] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5835.319941] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=23298 PROTO=2 [ 5953.074429] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5961.925481] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=24261 PROTO=2 doug@ubuntu:/sam$ lspci -nnk |grep -iA2 eth 03:07.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller [10ec:8169] (rev 10) Subsystem: ASUSTeK Computer Inc. 
Device [1043:11e5] Kernel driver in use: r8169 doug@ubuntu:/sam$ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 doug@ubuntu:/sam$ nm-tool NetworkManager Tool State: connected (global) - Device: eth0 [Ifupdown (eth0)] ---------------------------------------------- Type: Wired Driver: r8169 State: connected Default: yes HW Address: [BLANKED] Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on IPv4 Settings: Address: 192.168.0.7 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1 - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: iwl3945 State: disconnected Default: no HW Address: 00:19:D2:72:5A:0C Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points ATT592: Infra, 30:60:23:76:FE:60, Freq 2437 MHz, Rate 54 Mb/s, Strength 24 WPA WPA2 doug@ubuntu:/sam$ nslookup ubuntu.com ;; connection timed out; no servers could be reached doug@ubuntu:/sam$ dig ubuntuforums.org ; <<>> DiG 9.8.1-P1 <<>> ubuntuforums.org ;; global options: +cmd ;; connection timed out; no servers could be reached doug@ubuntu:/sam$ sudo ifconfig eth0 up doug@ubuntu:/sam$ dhcpcd eth0 The program 'dhcpcd' can be found in the following packages: * dhcpcd * dhcpcd5 Try: sudo apt-get install <selected package> doug@ubuntu:/sam$ lspci -k 00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1252 Kernel driver in use: i915 Kernel modules: intelfb, i915 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1252 00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 2 (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02) Subsystem: ASUSTeK Computer Inc. 
    Device 1297 Kernel driver in use: ehci_hcd
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
    00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel modules: leds-ss4200, iTCO_wdt, intel-rng
    00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: ata_piix
    00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel modules: i2c-i801
    02:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection (rev 02) Subsystem: Intel Corporation PRO/Wireless 3945ABG Network Connection Kernel driver in use: iwl3945 Kernel modules: iwl3945
    03:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev b3) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: yenta_cardbus Kernel modules: yenta_socket
    03:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C552 IEEE 1394 Controller (rev 08) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: firewire_ohci Kernel modules: firewire-ohci
    03:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 17) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: sdhci-pci Kernel modules: sdhci-pci
    03:01.3 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 08) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: r592 Kernel modules: r592
    03:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller (rev 10) Subsystem: ASUSTeK Computer Inc. Device 11e5 Kernel driver in use: r8169 Kernel modules: r8169
    doug@ubuntu:/sam$

    Things I have tried:
    - sudo start network-manager: no help
    - gksudo gedit /etc/network/interfaces, changed the line to iface eth0 inet dhcp: no help
    - gksudo gedit /etc/NetworkManager/NetworkManager.conf, changed managed=false to managed=true, then sudo service network-manager restart: no help (network is unreachable)
    - sudo pkill -9 NetworkManager: no help
    - gksudo gedit /etc/resolv.conf, added the line nameserver 8.8.8.8: no help

    I know very little about networking; to date this has simply worked. Thanks for your help! - Doug.


  • Messing with the Team

    - by Robert May
    Good Product Owners will help the team be the best that it can be. Bad Product Owners will mess with the team and won't care about it. If you're a Product Owner, seek to do good and avoid bad behavior at all costs. Remember, this is for YOUR benefit, and you have been given much power. Use that power wisely.

    Scope Creep

    The Product Owner has several tools at his disposal to inject scope into an iteration. First, the Product Owner can use defects to inject scope. To do this, they'll tell the team what functionality they want to see in a feature. Then, after the feature is developed, the Product Owner will decide that they don't really like how the functionality behaves. To change it, rather than creating a new story, they'll add a defect. The functionality is correct as designed, but the Product Owner doesn't like it. By creating the defect, the Product Owner destroys the trust that the team has in the Product Owner. The team may not be able to count the story, because the Product Owner changed the story in the iteration, and the team then ends up looking like it has low velocity for something over which it has no control. This is bad. One way to deal with this is to add "Product Owner Time" to the iteration. This will slow the velocity, but then the ScrumMaster can tell stakeholders that this time is strictly in place to deal with bad behavior of the Product Owner.

    Another mechanism often used to inject scope is directed development. Outside of planning, stand-ups, or any other meeting, the Product Owner will take a developer aside and ask them to complete a task for them. This is bad! The team should be allocating all of its time to development. If the Product Owner asks for a favor, then time that would normally be used for development will be spent on a pet project of the Product Owner, and the team will not get credit for this work. Selfish Product Owners do this, and I typically see people who were "managers" exhibit this behavior. Authoritarian, command-and-control development environments also see this happen. The best thing that can happen is for the team member to report the issue to the ScrumMaster, and for the ScrumMaster to get very aggressive with management and the Product Owner to try to stop the behavior. This may result in the ScrumMaster being fired, but if the behavior continues, Scrum is doomed. This problem is especially bad in cases where the team member's direct supervisor is the Product Owner. I don't recommend that the Product Owner or ScrumMaster have a direct-report relationship with team members, since team members need the ability to say no. To work around this issue, team members need to say no. If that fails, team members need to add extra time to the iteration to deal with the scope creep injection and accept the lower velocity.

    As discussed above, another mechanism for injecting scope is changing acceptance tests after the work is complete. This is similar to adding defects to change scope, and it is bad. To get around it, add time for Product Owner uncertainty to the iteration and make sure that stakeholders are aware of the need to add this time because of the Product Owner.

    Refusing to Prioritize

    Refusing to prioritize causes chaos for the team. From the team's perspective, things that are not important will be worked on while things that the team knows are vital will be ignored. A poor Product Owner will often pick the stories for the iteration on a whim.
    This leads to the team working on many different aspects of the product and results in a lower velocity, since each iteration the team must switch context to the new area of development. The team will also experience confusion about priorities. In one iteration, Feature X was the highest priority and had to be done. Then, the following iteration, even though parts of Feature X still need to be completed, no stories to address them will be in the iteration. However, three iterations later, Feature X will again become high priority. This will cause the team to distrust the Product Owner, and eventually they'll stop caring about the features they implement. They won't know what is important, so to insulate themselves from the ever-changing chaos, they'll become apathetic to all features. Team members are some of the most creative people in a company. By losing their engagement, the company is going to have a substandard product, because the passion for the product won't be in the team.

    Other signs that the Product Owner refuses to prioritize are that no one besides the Product Owner is consulted on priorities, and that the product, release, and iteration backlogs are weak or non-existent.

    Dealing with this issue is not easy. This really isn't something the team can fix, short of taking over the role of Product Owner themselves. An appeal to the stakeholders might work, but only if the Product Owner isn't a "manager" themselves. The ScrumMaster needs to protect the team and do what they can to either get the Product Owner to prioritize or have the Product Owner replaced.

    Managing the Team

    A Product Owner who is also the "boss" of team members makes for a Scrum team that is waiting to fail. If your boss tells you to do something, failing to do that something can get you fired. The team needs the ability to tell the Product Owner NO. If the Product Owner introduces scope creep, the team has a responsibility to tell the Product Owner no. If the Product Owner tries to get the team to commit to more than it can accomplish in an iteration, the team needs the ability to tell the Product Owner no. If the Product Owner is your boss and determines your pay increases, you're probably not ever going to tell them no, and Scrum will likely fail. The team can't do much in this situation.

    Another aspect of "managing the team" that often happens is the Product Owner trying to tell the team how to develop the stories that are in the iteration. This is one reason why I recommend that Product Owners NOT be technical people. That way, the team can come up with the tasks needed to accomplish the stories, and the Product Owner won't know better. If the Product Owner is technical, the ScrumMaster will need to take great care to protect the team from the Product Owner changing how the team thinks it needs to implement the stories.

    Product Owners can also try to manage the team through body language. If the team says a task is going to take 6 hours to complete and the Product Owner disagrees, they will use some kind of sour body language to signal this disagreement. In weak teams, this may cause the team to revise its estimate down, which will result in the work taking longer than estimated and may result in the team missing the iteration. The ScrumMaster will need to make sure that the Product Owner doesn't send such messages, and that the team ignores them and estimates what it REALLY thinks it will take to complete the tasks.
    Forcing the team to deal with such items in the retrospective can be helpful.

    Absenteeism

    The team is completely dependent upon the Product Owner to develop features for the customer. The Product Owner IS the voice of the customer, and without them the team will lack direction. Being the Product Owner is a full-time job! If the Product Owner cannot dedicate daily time to the team, a different Product Owner should be found. The Product Owner needs to attend every stand-up, planning meeting, showcase, and retrospective that the team has. The team also must be able to have instant communication with the Product Owner; they must not be required to schedule meetings to speak with their Product Owner. The team must be the highest-priority task that the Product Owner has.

    The best way to work around an absent Product Owner is to appoint a delegate Product Owner within the team. This person will be responsible for making the decisions that the Product Owner should be making and for acting as the liaison to the absent Product Owner. If the delegate Product Owner doesn't have the authority to make decisions for the team, Scrum will fail. If the Product Owner is absent, the ScrumMaster should seek to have that Product Owner replaced by someone who has the time and ability to be a real Product Owner.

    Making it Personal

    Too often Product Owners become convinced that their ideas are the ones that matter and that anyone who disagrees is making a personal attack on them. Remember that Product Owners will inherently be at odds with many people, simply because they have the need to prioritize. Others will frequently question prioritization because they only see part of the picture that Product Owners face. Product Owners must have thick skins and thin egos. If they don't, they tend to make things personal, which causes them to become emotional and to take actions that can destroy the trust that team members have in the Product Owner. If a Product Owner is making things personal, the best thing team members can do is reassure them that it's not personal, but be firm about doing what is best for the company and for the users. The ScrumMaster should also spend significant time coaching the Product Owner on how not to react emotionally and how to accept criticism without becoming defensive.

    Conclusion

    I'm sure there are other ways that a Product Owner can mess with the team, but these are the most common that I've seen. I would encourage all Product Owners to seek to be good Product Owners. If you find yourself behaving in any of these bad Product Owner ways, change your behavior today! Your team will thank you. Remember, being a Product Owner is very difficult! Product Owner is one of the most difficult roles in Scrum. However, it can also be one of the most rewarding roles in Scrum, since Product Owners literally see their ideas brought to life on the computer screen. Product Owners need to be very patient, even in the face of criticism, and need to be willing to make tough decisions on priority, but then not become offended when others disagree with those decisions. Companies should spend the time needed to find the right Product Owners for their teams. Doing so will only help the company to write better software.

    Technorati Tags: Scrum,Product Owner


  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them a little bit: remote desktop access to an Azure virtual machine. Now I would like to talk about another cool feature, Windows Azure Connect.

    What's Windows Azure Connect

    I would like to quote the definition of Windows Azure Connect from MSDN:

    With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services.

    There's an image on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect, Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, the roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to a local database and access local shared resources such as shared files, folders and printers, etc.

    Difference between Windows Azure Connect and AppFabric

    It may seem that Windows Azure Connect duplicates Windows Azure AppFabric. Both aim to solve the problem of communication between resources in the cloud and resources inside the local network. The list below summarizes the differences as I understand them.

    Purpose
    - Windows Azure Connect: an IP-sec connection between the local machines and Azure roles.
    - Windows Azure AppFabric: an application service running in the cloud.

    Connectivity
    - Windows Azure Connect: IP-sec, domain-join.
    - Windows Azure AppFabric: Net.TCP, HTTP, HTTPS.

    Components
    - Windows Azure Connect: the Windows Azure Connect driver.
    - Windows Azure AppFabric: Service Bus, Access Control, Caching.

    Usage
    - Windows Azure Connect: Azure roles connect to a local database server; Azure roles use local shared files, folders, printers, etc.; Azure roles join the local AD.
    - Windows Azure AppFabric: expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.

    And here are some scenarios showing which of them should be used.

    - I have a service deployed in the intranet and I want people to be able to use it from the Internet: AppFabric.
    - I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet: Connect.
    - I have a service deployed in the intranet that uses AD authorization, and a website deployed on Azure which needs to use this service: Connect.
    - I have a service deployed in the intranet that some people on the Internet can use, but they need to be authorized and authenticated: AppFabric.
    - I have a service in the intranet and a website deployed on Azure. The service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality: both.

    How to Enable Windows Azure Connect

    OK, we've talked through what Windows Azure Connect is and how it differs from Windows Azure AppFabric. Now let's see how to enable and use it. First of all, since this feature is in the CTP stage, we must apply for it before we can use it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page.
You can apply to join the Beta Program on this page. After a few days Microsoft will send an email to you (to the email address of your Live ID) when it's available. In my case we can see that Windows Azure Connect has been activated by Microsoft, so we can click the Connect button on top, or click the Virtual Network item in the left navigation bar.

The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.

Add a Local Machine to Azure Connect
As explained above, Windows Azure Connect makes an IPsec connection between local machines and Azure role instances, so first we add a local machine to our Azure Connect. To do this we click the Install Local Endpoint button on top, and the portal will give us a URL. Copy this URL to the machine we want to add and it will download the endpoint software. This software is installed on each local machine that we want to join the Connect. After installation a tray icon appears to indicate that this machine has joined our Connect. The local agent refreshes its status with the Windows Azure platform every 5 minutes, but we can click the Refresh button to retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see the machine in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.

Add a Windows Azure Role to Azure Connect
Let's create a very simple Azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect we open the role's properties from the solution explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token; click it and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to Azure. After the deployment completes we can see the role instance listed in the Windows Azure Portal – Virtual Network section.

Establish the Connect Group
The final task is to create a connect group that contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create a group for them; a machine or instance can be used in one or more groups. In the Virtual Network section click the Groups and Roles node in the left navigation bar and click the Create Group button on top. This brings up a dialog. All we need to do is specify a group name and description, and then select the local computers and Azure role instances for this group. After the Azure fabric updates the group settings we can see the groups and endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status turned to Connected. Windows Azure Connect updates the group information every 5 minutes. If the status is still Disconnected, right-click the tray icon and select the Refresh menu item to retrieve the latest group policy and make it connect.
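Before we test, it is worth seeing what this buys us in code. The sketch below is illustrative only: it assumes a hypothetical on-premises machine named MYDEVBOX that is in the same connect group as the role, a Northwind database on it, and a SQL login created for the role; none of these names come from the original post.

    using System;
    using System.Data.SqlClient;

    public class LocalDbProbe
    {
        // Query an on-premises SQL Server from an Azure role over Windows
        // Azure Connect. The host is addressed by machine name, as if the
        // role instance were sitting on the local network.
        public static string GetFirstCustomerName()
        {
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "MYDEVBOX",        // hypothetical local machine name
                InitialCatalog = "Northwind",   // hypothetical database
                UserID = "webrole_login",       // hypothetical SQL login; Windows
                Password = "********"           // accounts would require domain-join
            };

            using (var connection = new SqlConnection(builder.ConnectionString))
            using (var command = new SqlCommand(
                "SELECT TOP 1 ContactName FROM Customers", connection))
            {
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }
    }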
Test the Azure Connect between the Local Machine and the Azure Role Instance
Now our local machine and Azure role instance are connected, which means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our Azure role can connect to it using the machine name or the IP address, as sketched above. Windows Azure Connect uses IPv6 to connect the local machines and role instances. You can get the IP address from the Windows Azure Portal Virtual Network section when you select an endpoint. I don't want to give a full example of how to use the Connect, but I would like to run two very simple tests. The first one is PING.

When a local machine and a role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the firewall settings. To do this we need to run a command before the test. Open a command window on the local machine and on the role instance, and execute the following:

netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting; for the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the Azure role instance; please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my Azure instance. We can use the IPv6 address to PING each other as well; in the following image I PING my role instance from my local machine through the IPv6 address.

Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and if we log on to the role instance we can see the folder's content in the file explorer window.

Summary
In this blog post I introduced another new feature – Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our Azure applications can use local resources such as database servers, printers, etc. without exposing them to the Internet.

Hope this helps, Shaun
All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured I would be able to instantiate the various DomainService classes from within my controller's action methods, because after all, the code for those services didn't look very complicated.  WRONG!  I didn't realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 web application, and another by Brad Abrams.  Unfortunately, Brad's solution was for an earlier preview release of RIA Services and no longer works with the version I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure whether any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn't have to write a non-RIA version that basically did the same thing.  The classes I came up with work for the scenarios I have encountered so far, and I wanted to post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches!

1. Querying
When I first tried to use a DomainService class to perform a query inside one of my controller's action methods, I got an error stating that "This DomainService has not been initialized."  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service's Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!

    public static class ServiceExtensions
    {
        /// <summary>
        /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>
        /// and calling the base DomainService.Initialize(DomainServiceContext) method.
        /// </summary>
        /// <typeparam name="TService">The type of the service.</typeparam>
        /// <param name="service">The service.</param>
        /// <returns></returns>
        public static TService Initialize<TService>(this TService service)
            where TService : DomainService
        {
            var context = CreateDomainServiceContext();
            service.Initialize(context);
            return service;
        }

        private static DomainServiceContext CreateDomainServiceContext()
        {
            var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));
            return new DomainServiceContext(provider, DomainOperationType.Query);
        }

        private static HttpContext GetHttpContext()
        {
            var context = HttpContext.Current;

#if DEBUG
            // create a mock HttpContext to use during unit testing...
            if (context == null)
            {
                var writer = new StringWriter();
                var request = new SimpleWorkerRequest("/", "/",
                    String.Empty, String.Empty, writer);

                context = new HttpContext(request)
                {
                    User = new GenericPrincipal(new GenericIdentity("debug"), null)
                };
            }
#endif

            return context;
        }
    }

With that in place, I can use it almost as in my first attempt, except with a call to Initialize():

    public ActionResult Index()
    {
        var service = new NorthwindService().Initialize();
        var customers = service.GetCustomers();

        return View(customers);
    }

2. Insert / Update / Delete
Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn't getting any kind of error, which made it a little difficult to track down.  But once I realized that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.

    [EnableClientAccess()]
    public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>
    {
        public IQueryable<Customer> GetCustomers()
        {
            return this.DataContext.Customers;
        }

        public void InsertCustomer(Customer customer)
        {
            this.DataContext.Customers.InsertOnSubmit(customer);
        }

        public void UpdateCustomer(Customer currentCustomer)
        {
            this.DataContext.Customers.TryAttach(currentCustomer,
                this.ChangeSet.GetOriginal(currentCustomer));
        }

        public void DeleteCustomer(Customer customer)
        {
            this.DataContext.Customers.TryAttach(customer);
            this.DataContext.Customers.DeleteOnSubmit(customer);
        }
    }

Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that check whether the supplied entity is already attached before attempting to attach it, which would cause an error!  (A sketch of what these might look like follows.)
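The TryAttach helpers themselves are referenced above but not shown in this excerpt. Here is a minimal sketch of what such extension methods could look like, assuming LINQ to SQL's standard Table<TEntity>.Attach overloads and using GetOriginalEntityState() to detect an already-attached entity; treat it as an illustration rather than the author's exact code:

    using System.Data.Linq;

    public static class TableExtensions
    {
        // Attach the entity only if this DataContext is not already tracking it;
        // attaching an already-tracked entity would throw.
        public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity)
            where TEntity : class
        {
            if (table.GetOriginalEntityState(entity) == null)
            {
                table.Attach(entity);
            }
        }

        // Attach with an original version so LINQ to SQL can compute the update;
        // fall back to "all fields modified" when no original is available.
        public static void TryAttach<TEntity>(this Table<TEntity> table,
            TEntity entity, TEntity original)
            where TEntity : class
        {
            if (table.GetOriginalEntityState(entity) != null)
            {
                return;
            }
            if (original == null)
            {
                table.Attach(entity, true);
            }
            else
            {
                table.Attach(entity, original);
            }
        }
    }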
3. LinqToSqlRepository<T>
Below is the code for the LinqToSqlRepository class.  The comments are pretty self-explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensure that they are ignored by the code generator and not made available in the Silverlight client application.

    /// <summary>
    /// Provides generic repository methods on top of the standard
    /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.
    /// </summary>
    /// <typeparam name="TContext">The type of the context.</typeparam>
    public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>
        where TContext : System.Data.Linq.DataContext, new()
    {
        /// <summary>
        /// Retrieves an instance of an entity using its unique identifier.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="keyValues">The key values.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class
        {
            var table = this.DataContext.GetTable<TEntity>();
            var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));

            var keys = mapping.RowType.IdentityMembers
                .Select((m, i) => m.Name + " = @" + i)
                .ToArray();

            return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();
        }

        /// <summary>
        /// Creates a new query that can be executed to retrieve a collection
        /// of entities from the <see cref="DataContext"/>.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class
        {
            return this.DataContext.GetTable<TEntity>();
        }

        /// <summary>
        /// Inserts the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class
        {
            //var table = this.DataContext.GetTable<TEntity>();
            //table.InsertOnSubmit(entity);

            return this.Submit(entity, null, DomainOperation.Insert);
        }

        /// <summary>
        /// Updates the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Update<TEntity>(TEntity entity) where TEntity : class
        {
            return this.Update(entity, null);
        }

        /// <summary>
        /// Updates the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <param name="original">The original.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Update<TEntity>(TEntity entity, TEntity original)
            where TEntity : class
        {
            if (original == null)
            {
                original = GetOriginal(entity);
            }

            var table = this.DataContext.GetTable<TEntity>();
            table.TryAttach(entity, original);

            return this.Submit(entity, original, DomainOperation.Update);
        }
        /// <summary>
        /// Deletes the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class
        {
            //var table = this.DataContext.GetTable<TEntity>();
            //table.TryAttach(entity);
            //table.DeleteOnSubmit(entity);

            return this.Submit(entity, null, DomainOperation.Delete);
        }

        protected virtual bool Submit(Object entity, Object original, DomainOperation operation)
        {
            var entry = new ChangeSetEntry(0, entity, original, operation);
            var changes = new ChangeSet(new ChangeSetEntry[] { entry });
            return base.Submit(changes);
        }

        private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class
        {
            var context = CreateDataContext();
            var table = context.GetTable<TEntity>();
            return table.FirstOrDefault(e => e == entity);
        }
    }

4. Conclusion
So there you have it: a fully functional repository implementation for your RIA domain services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a controller, as well as a Silverlight usage example.  As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest Scripts
Some scripts act on information returned by the server, e.g. they act on the first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way. As the load test cases should be carried out in a comparable and straightforward manner, simply cancelling a transaction when a collision occurs is clearly not an option. If you increase the number of virtual users, that approach would produce a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user), but later steps would rarely, or never, be completed successfully, depending on the application logic.
A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue is allowed to carry out the critical steps of the transaction (retrieve list of action points, assign an action point to the virtual user) at any one time. Once a virtual user has passed the critical path, it dequeues itself from the head of the queue and continues with its actions. In theory this allows virtual users to run all non-critical steps of the transaction in parallel. In practice this is rarely the case, and it does not allow adding more than N users to a transaction without causing delays due to virtual users waiting in the queue, where N is the total transaction time divided by the sum of the times of all critical steps in the transaction.
This problem can be mitigated by allowing multiple queues to act on individual segments of the list of actions, e.g. a per-country filter, an "ends with 0..9" filter, etc. This would require additional handling of these extra queues, and/or of multiple slots for the virtual users at the head of each queue, in order to maintain mutually exclusive access to the first element of the list returned by the server at any one time during the load test. Such improved handling of multiple queues and/or multiple slots is beyond the scope of this paper.

Shared Data Services Pre-Requisites

Start WebLogic Server to host Shared Data Services
You will have to make sure that your WebLogic Server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If, however, you installed the default package including OLT and OTM, you may follow the instructions below to start and verify the WebLogic installation.
To start the WebLogic Server deployed underneath Oracle Load Testing and/or Oracle Test Manager, go to your Start menu, Oracle Application Testing Suite, and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu.
To verify the service has started, run the Microsoft Management Console for Services by selecting Run from the Start menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service; once its status has changed from Starting to Started you can proceed to verify the login. Please note that this may take several minutes, I would say up to 10 minutes depending on the strength of your CPU horsepower.

Verify WebLogic Server user credentials
Next, open the Oracle WebLogic Server Administration Console at http://localhost:8088/console. It may take a while until the application is deployed and started; a placeholder page may be displayed until the Administration Console has been deployed on the fly. Afterwards you can log in using the username oats and the password that you selected at install time for Application Testing Suite administrative purposes. This brings up the Home page of your WebLogic Server; at this point you have in fact verified that you are able to log in with these credentials. If you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server to be used in the later steps. The details of the groups required for such a custom user to work exceed this quick overview and should be looked up in the WebLogic Server Administration Guide.

Shared Data Services pre-requisites for load testing
OpenScript Preferences have to be set to enable Encryption and to provide a default Shared Data Service Connection for Playback. These are the pre-requisites for load testing with Shared Data Services. Please note that the usage of the Connection Parameters (individual directives in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still seemed to be mandatory as well.

General Encryption settings
Select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.

Enable global shared data access credentials
Select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for the WebLogic Server hosting Shared Data Services. Please note that you may want to replace localhost in Address with the host's real name if you plan to run load tests with load test agents running on remote systems.

Queued Processing of Transactions

Enable Shared Data Services Module in Script Properties
The Shared Data Services Module has to be enabled for each script that wants to employ the Shared Data Service queue functionality in OpenScript. It can be enabled under the Script menu by selecting Script Properties. In the Script Properties dialog select the Modules section and check Shared Data to enable the Shared Data Service Module for your script.
Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript:

    @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;

Record your script
Record your script as usual, then add the following pieces for queue handling in the Initialize code block, before the first step and after the last step of your critical path, and in the Finalize code block. The Java code to be added at the individual locations is explained in the following sections in full detail.

Create a Shared Data Queue in Initialize
To create a Shared Data Queue, go to the Java view of your script and enter the following statements in the initialize() code block:

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);

This creates an instance of the Shared Data Queue object named queueA which is maintained for up to 120 minutes. If you want to use the code for multiple scripts, make sure to use a different queue name for each one, here and in the subsequent steps. You may even consider a dynamic queue name based on the filters of the result list being accessed concurrently.

Prepare a unique id for each Iteration
In order to keep track of individual virtual users in our queue, we need to create a unique identifier from the virtual user id and the username right after retrieving the next record from our databank file:

    getDatabank("Usernames").getNextDatabankRecord();
    getVariables().set("usernameValue1",
        "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
    String usernameValue = getVariables().get("usernameValue1");
    info("Now running virtual user " + usernameValue);

As you can see from the above code block, we set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}, a concatenation of the virtual user id and the iteration number for general uniqueness, plus the username from our databank, the timestamp and a random number to make it further unique and to ease the spotting of errors. Not all of these fields are actually required to make it truly unique, but adding the queue name may also be considered, to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script.
Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered, to remove the concurrency of multiple virtual users running with the same user id and therefore accessing the same "My Inbox" in step 6. This effectively gives each virtual user its own user id from the databank file; make sure you have enough user ids to remove this second hurdle.

Enqueue and attend Queue before Critical Path
To maintain the right order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step.
In the case of this example, that is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.

    beginStep("[0] Waiting in the Queue", 0);
    {
        info("Enqueued virtual user " + usernameValue + " at the end of queueA");
        sharedData.offerLast("queueA", usernameValue);
        info("Wait until the user is the first in queueA");
        String queueValue1 = null;
        do {
            // we wait for at least 0.7 seconds before we check the head of the
            // queue. This is the time it takes one user to move through the
            // critical path, i.e. pass steps [5] Enter country and [6] Assign
            // to me
            Thread.sleep(700);
            queueValue1 = (String) sharedData.peekFirst("queueA");
            info("The first user in queueA is currently: '" + queueValue1 + "' "
                + queueValue1.getClass() + " length " + queueValue1.length());
            info("The current user is '" + usernameValue + "' " + usernameValue.getClass()
                + " length " + usernameValue.length()
                + ": indexOf " + usernameValue.indexOf(queueValue1)
                + " equals " + usernameValue.equals(queueValue1));
        } while (queueValue1.indexOf(usernameValue) < 0);
        info("Now the user is the first in queueA");
    }
    endStep();

This enqueues the username at the tail of our queue. The loop waits for at least 700 milliseconds, the time it takes for one user to exit the critical path, and then compares the head of the queue with the user's own username. This check is repeated while the two are not equal (indexOf less than zero). Once they are equal, indexOf yields a value of zero or larger and we proceed with the critical steps.

Dequeue after Critical Path
After the virtual user has left the critical path and completed its last step, the following code block dequeues the virtual user. In the case of our example, that is right after the action has actually been assigned to the virtual user. This allows the next virtual user to retrieve the list of actions still available and in turn make its selection/assignment.

    info("Get and remove the current user from the head of queueA");
    String pollValue1 = (String) sharedData.pollFirst("queueA");

The current user is removed from the head of the queue. The next one will now be able to match its username against the head of the queue.

Clear and Destroy Queue for Finish
When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script, in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.

    info("Clear queueA");
    sharedData.clearQueue("queueA");
    info("Destroy queueA");
    sharedData.destroyQueue("queueA");

The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing, they will be caught in a loop. I found it better to maintain a separate Reset Queue script which contains only the following code in the initialize() block. I call this script to make sure the queue is cleared between multiple load test runs. It could even be added as the first script in a larger scenario, which would execute it only once at the very start of the load test and make sure the queues do not contain any stale entries.

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);
    info("Clear queueA");
    sharedData.clearQueue("queueA");

This creates a Shared Data Queue instance of queueA and clears all entries from this queue.

Monitoring Queue
While creating the scripts it was useful to monitor the contents of the queue, i.e. the current first user.
The following code block makes sure the Shared Data Queue is accessible in the initialize() block:

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);

In the run() block, the following code continuously monitors the first element of the queue and writes an informational message with the current username value to the Result window:

    info("Monitor the first users in queueA");
    String queueValue1 = null;
    do {
        queueValue1 = (String) sharedData.peekFirst("queueA");
        if (queueValue1 != null)
            info("The first user in queueA is currently: '" + queueValue1 + "' "
                + queueValue1.getClass() + " length " + queueValue1.length());
    } while (true);

This script can be run from OpenScript in parallel to a load test performed by Oracle Load Testing. However, it is not recommended to run it in a production load test, as the performance impact is unknown. Accessing the queue's head with the peekFirst() method has been reported with about 2 seconds response time by both OpenScript and OLT. It is advised to log a Service Request to see whether this could be lowered in future releases of Application Testing Suite, as pollFirst(), and even offerLast() writing to the tail of the queue, usually returned after 0.1 seconds on average.

Debugging Queue
While debugging the scripts, the following was useful to remove single entries from the head of the queue, i.e. the current first user. The following code block makes sure the Shared Data Queue is accessible in the initialize() block:

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);

In the run() block, the following code removes the first element of the queue and writes an informational message with its username value to the Result window:

    info("Get and remove the current user from the head of queueA");
    String pollValue1 = (String) sharedData.pollFirst("queueA");
    info("The first user in queueA was currently: '" + pollValue1 + "' "
        + pollValue1.getClass() + " length " + pollValue1.length());

References
Oracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05], Chapter 17, Using the Shared Data Module: http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip
Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04], Administration Console Online Help, Manage users and groups: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

  • top tweets WebLogic Partner Community – October 2012

    - by JuergenKress
    Send your tweets @wlscommunity #WebLogicCommunity and follow us at http://twitter.com/wlscommunity

WebLogic Community @wlscommunity: Real World Java EE Patterns by Adam Bien http://wp.me/p1LMIb-mp
Markus Eisele @myfear: #JavaOne Content Available for Free https://blogs.oracle.com/java/entry/javaone_content_available_for_free … /via @java
Adam Bien @AdamBien: Thought that 1h screencast is way too long to be popular. I was wrong. Lightweight Java EE is doing very well: http://www.adam-bien.com/roller/abien/entry/lightweight_java_ee_screencast …
OracleBlogs @OracleBlogs: COLLABORATE 13 Call for Papers http://ow.ly/2szPuZ
Oracle WebLogic @OracleWebLogic: New Blog Post: Data Source Security Part 1 http://ow.ly/2szFbv
Markus Eisele @myfear: My Three Days at #JavaOne 2012 http://yakovfain.com/2012/10/04/my-three-days-at-javaone-2012/ … < nice writeup ;)
Adam Bien @AdamBien: JavaOne 2012 Announcements And Surprises: NetBeans 7.3+ comes with HTML 5, JavaScript, CSS 3 support. JavaScript... http://bit.ly/Uy14eD
Andrejus Baranovskis @andrejusb: OOW'12: Oracle ADF Implementations Around the Globe: Best Practices http://fb.me/1IVg6gzU0
gschmutz @gschmutz: Just published a blog with a wrap-up of my presentations at OOW 2012. https://guidoschmutz.wordpress.com/2012/10/07/my-presentations-at-oracle-open-world-2012/ … #oow2012 #trivadis
Andrejus Baranovskis @andrejusb: OOW'12: Oracle Business Process Management/Oracle ADF Integration Best Practices http://fb.me/1GY3nz1lb
WebLogic Community @wlscommunity: ExaLogic 2.01 ppt & training & Installation check-list & tips & Web tier roadmap http://wp.me/p1LMIb-mh
Adam Bien @AdamBien: JavaOne 2012, First Feedback and The Strange Thing: NetBeans day was surprisingly well attended. A big room was fu... http://bit.ly/PwWwx8
OracleSupport_WLS @weblogicsupport: Free registration for our next webcast on setting up and using a #weblogic #cluster http://pub.vitrue.com/xWV8
WebLogic Community @wlscommunity: UKOUG Application Server & Middleware SIG Meeting http://wp.me/p1LMIb-mC
Ronald Luttikhuizen @rluttikhuizen: Discussing future plans for Oracle Middleware Infrastructure Group with @simon_haslam @Jphjulstad and Rene van Wijk #oow @wlscommunity
JAX London @jaxlondon: Be part of #JAXLondon - only 11 days to go! Still need a ticket? http://buff.ly/TUPKmL
WebLogic Community @wlscommunity: ExaLogic X3-2 launched at OOW 2012 http://wp.me/p1LMIb-mM
WebLogic Community @wlscommunity: @OracleEvents Dear Oracle Team, thanks for promoting the WebLogic bootcamp; new schedules are online https://blogs.oracle.com/emeapartnerweblogic/resource/weblogic12c.htm … #weblogiccommunity
OracleBlogs @OracleBlogs: Partner Webcast Introducing Oracle Business Activity Monitoring - 18 October 2012 http://ow.ly/2svzyz
AMIS, Oracle & Java @AMIS_Services: Grant posted a nice little video on youtube about the #ADF EMG activities during Oracle Open World. http://youtu.be/qZhtBqnK-Zc
GlassFish @glassfish: ADF Essentials - Available for free and certified on GlassFish!: If you are an Oracle customer, you are probably... http://bit.ly/UCtVwY
OracleBlogs @OracleBlogs: WebLogic 12 hands-on bootcamps for partners - new dates & locations http://ow.ly/2smOfs
Pieter Kranenburg @pskranenburg: I'm EXA and I know IT! How about you? Go to http://bit.ly/OnSlDd and find out! (you might win an #iphone5 ;-) #OOW please RT
Andrejus Baranovskis @andrejusb: Enabling WebLogic Administrator Group Inside Custom ADF Application http://fb.me/2d5SCeJ2g
Michel Schildmeijer @MNEMONIC01: I'm EXA and I know IT! How about you? Go to http://bit.ly/OnSlDd (you might win an #iphone5 ;-) #oow
OracleSupport_WLS @weblogicsupport: Step-by-step instructions on how to configure mail Alerts in #OEM 11g for #WebLogic Servers up/down status http://pub.vitrue.com/KpZq
Jeff West @jeffreyawest: Answer: Deliver JMS message to a single node in a Weblogic Cluster with a Distributed Topic http://stackoverflow.com/a/12396492/697114?stw=2 …
Java @java: Bucharest Java User Group: Launched and Growing! #JUG http://ow.ly/dDnbN
OracleSupport_WLS @weblogicsupport: Don't shoot the messenger! #Java source code analyzer @ http://pub.vitrue.com/Cy2J
JAX London @jaxlondon: .@BrianGoetz gives in depth session on the details of how #Lambda expressions are implemented in the #Java language at #JAXLondon
ADF Community DE @ADFCommunityDE: Webcast ADFNewsSession: ADF as a basis of Fusion Apps - the biggest ADF project ever. Sep 14, 8:30 AM CET. Dial in https://blogs.oracle.com/jdevotnharvest/entry/adf_partner_community_news_session …
OracleBlogs @OracleBlogs: WebLogic & Coherence & Cloud presentations for customer meetings http://ow.ly/1mqwrC
Pieter Kranenburg @pskranenburg: Seminar: Oracle WebLogic 12c at Qualogy. You are invited! http://bit.ly/Ps9LDF
Oracle WebLogic @OracleWebLogic: New Blog Post: Oracle OpenWorld Update -- General Session: Oracle Fusion Middleware Strategies Driving Business Inno... http://ow.ly/2stylf
Oracle Cloud Zone @OracleCloudZone: New partner programs for Oracle Cloud Solutions http://bit.ly/PrVq5O #cloud #oow
Lucas Jellema @lucasjellema: The strategy on Java - JEE, SE, ME, FX: http://technology.amis.nl/2012/10/02/javaone-2012-strategy-and-technical-keynote/ … #javaone #oow_amis
WebLogic Community @wlscommunity: Send your #WebLogicCommunity #oow pictures and blog posts @wlscommunity or http://www.facebook.com/weblogiccommunity … Enjoy OOW ;-)
WebLogic Community @wlscommunity: Become a WebLogic 12c expert, attend our partner bootcamps https://blogs.oracle.com/emeapartnerweblogic/resource/weblogic12c.htm … #WebLogicCommunity #opn
AMIS, Oracle & Java @AMIS_Services: The next #oracle #ADF training at @AMIS_Services runs from 12 to 16 November. More info or registration? http://www.amis.nl/Trainingen/oracle-adf-11g-applicatieontwikkeling/ …
Devoxx @Devoxx: ALL the Devoxx 2011 talks are now freely available on Parleys @ http://www.parleys.com/#st=4&id=102998 Pls RT!
Adam Bien @AdamBien: Use the coupon code "PLUMA" and you will get 20% off for "Real World Java EE Patterns": http://realworldpatterns.com
Lucas Jellema @lucasjellema: Very good summary of the #JavaOne Technical Keynote last night: http://java.dzone.com/articles/javaone-2012-javaone-technical …
Arun Gupta @arungupta: Blogged: JavaOne 2012 Keynote and GlassFish Party Pictures: Some pictures from the keynote ... And som... http://bit.ly/ViH0ue
Lucas Jellema @lucasjellema: Most recent promoted build for GlassFish 4.0 (EE7) has WebSocket support: to play with: http://dlc.sun.com.edgesuite.net/glassfish/4.0/promoted/ … #javaone
michael palmeter @michaelpalmeter: If you haven't seen the 5-minute Exalogic demo, you need to (do it now!) - http://lnkd.in/GRqy3x
Lonneke Dikmans @lonnekedikmans: VENNSTER BLOG: Running EclipseLink DBWS 2.4.0 on GlassFish 3.1.2 http://blog.vennster.nl/2012/09/running-eclipselink-dbws-240-on.html?spref=tw …
WebLogic Community @wlscommunity: WebLogic Partner Community Newsletter September 2012 http://wp.me/p1LMIb-mf
WebLogic Community @wlscommunity: again again again… it is Oracle Open World 2012 http://wp.me/p1LMIb-m6
Markus Eisele @myfear: #WebLogic and #JavaEE Roadmap and Strategy Session at OOW http://ow.ly/2slZEY /via @OracleWebLogic
Adam Bien @AdamBien: An Article About Java EE Connector Architectures 1.6 (JCA 1.6): The free Java Magazine article: Java EE Connect... http://bit.ly/St6sxq
Lucas Jellema @lucasjellema: ADF Essentials - free to develop and to deploy (I said: free!) - http://www.oracle.com/technetwork/developer-tools/adf/overview/adfessentials-1719844.html …
AMIS, Oracle & Java @AMIS_Services: Blog by Lucas Jellema: "Develop and Deploy ADF applications – free of charge using the new ADF Essentials" http://bit.ly/StAhxY
Andrejus Baranovskis @andrejusb: ADF Essentials - Quick Technical Review http://fb.me/2hKCXyF43
OracleBlogs @OracleBlogs: GlassFish Extension for Oracle JDeveloper http://ow.ly/2slIO8 (retweeted by WebLogic Community)
Oracle Eclipse @OEPE: New Tutorial: Using ADF Faces and ADF Controller with Oracle Enterprise Pack for Eclipse. #OEPE http://pub.vitrue.com/QoUg
Simon Haslam @simon_haslam: As of the last day or two there's a new Java Products Media Pack on http://edelivery.oracle.com (rather than it being in the FMW pack)
WebLogic Community @wlscommunity: top tweets WebLogic Partner Community – September 2012 http://wp.me/p1LMIb-m2
Adam Bien @AdamBien: I was interviewed by OTN: http://www.oracle.com/technetwork/articles/java/jaxawards-1843595.html … See you at JavaOne!
Oracle WebLogic @OracleWebLogic: DevOps Basics for #WebLogic: Track Down High CPU Thread with ps, top and the new JDK7 jcmd tool. Great blog @frankmuz. http://ow.ly/dOBM4
Simon Haslam @simon_haslam: Looking for "oak style"(!) advanced content but you're a middleware specialist? See #ukoug2012 #middlewaresunday http://2012.ukoug.org/default.asp?p=9355 …
Julien Ponge @jponge: Just finished Java EE 6 + AngularJS samples for my upcoming middleware lectures. Code at https://github.com/jponge/todoapp-javaee6-angularjs … and https://github.com/jponge/todoapp-bosswatch …
Markus Eisele @myfear: #Oracle #WebLogic is now totally #FREE for #Developer - more than just OTN license to develop the 1st prototype! http://bit.ly/SWltsR
Markus Eisele @myfear: #WebSockets on #WebLogic Server http://ow.ly/1mv4QP by @wlsteve < need to give this a testdrive ;)
OracleEnterpriseMgr @oracle_em: EM Blog: Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now! #em12c http://pub.vitrue.com/mk7o
OracleBlogs @OracleBlogs: ADF training material now on the iPad http://ow.ly/1mqz1Q
GlassFish @glassfish: GlassFish grows by 50% in Software Stack Market Share Report for August 2012 by @Jelastic http://awe.sm/o4ZAp

WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year's Oracle OpenWorld conference is nearly here, and we're all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the "Product Lifecycle Management and Product Value Chain" sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.

Monday, October 1
There are great networking activities on Sunday, September 30, but PLM-specific sessions start after the general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions.

This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled "Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions," featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both:
a) Our General Session will be held in the InterContinental Hotel Ballroom C, covering our complete enterprise PLM strategy, product updates, and roadmaps. It's our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU.
b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components.

After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session in the InterContinental's Ballroom C, and an Agile PLM for Process session back in the InterContinental's Telegraph Room. Both sessions will have strong content around each product line's latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D'Agostino of The Coca-Cola Company in the PLM for Process session.

Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it's an insightful discussion on leveraging Agile PLM as the foundation for enterprise quality management, and it's sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.

On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the five-day Oracle OpenWorld Music Festival!

Tuesday, October 2
Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You'll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com.

After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a hands-on lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions.

Back in the InterContinental Hotel's Telegraph room, the session on "Ideation and Requirements Management: Capturing the Voice of the Customer" runs 11:45 a.m. – 12:45 p.m. This may be the session for you if you're struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or an inability to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to your targeted markets and fulfill customer desires.

Next, from 1:15 – 2:15 p.m., join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the hands-on lab, have more questions, or simply want to be inspired by the product's forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application.

After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you're interested - here's the map. There's always lots to see and do around the exhibit area.

But don't forget the last session of the day, 5:00 p.m. – 6:00 p.m. in Telegraph Hill, on Managing Product Innovation and Compliance in Life Science Companies, a "must-see" if you're in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don't perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they've leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment.

On Tuesday evening, unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.

Wednesday, October 3
We have another full line-up on Wednesday, so be ready for an action-packed day. We start at 10:15 – 11:15 a.m. in the Telegraph Room with "PLM for Consumer Products: Building an Engine for Quality and Innovation," with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.

If you're not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: "Eco-Enterprise Innovation Awards and the Business Case for Sustainability," featuring Jeff Henley, Oracle's Chairman of the Board, and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle's Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role.

Next, at 11:45 a.m. – 12:45 p.m. in Telegraph Hill, attend our session devoted to exploring Product Lifecycle Management's role in Software Lifecycle Management. This is a thought-leadership session with Oracle experts in the field on the importance of change management, and we'll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn't apply to your business, or you'd rather engage in some lively one-on-one discussions, we also have a "Supply Chain Meet the Experts" session in Moscone West Room 2001A. Product experts, thought leaders, and executives will be on hand to discuss your questions and topics, so come prepared; this session tends to fill up fast, so try to get in early.

At 1:15 – 2:15 p.m., join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the product development master schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they've leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization.

Again in Telegraph Hill, from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey, a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products.

Lastly, we'll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. with a session on "Ensuring New Product Success by Achieving Excellence in New Product Introduction." This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we're very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, "Product failures are not necessarily a result of bad ideas… they are a result of suboptimal decisions." We'll show you how to wire your business processes to enhance decision-making and maximize product potential.

Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what's in store here.

Thursday, October 4
PLM has a light schedule on Thursday this year with just one session, but it is again one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it's a customer- and partner-driven session with Sonoco Products and Deloitte telling their story of how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance.

Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we're glad you're considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • Additional options in MDL

    - by Jane Zhang
    The Metadata Loader (MDL) enables you to populate a new repository as well as transfer, update, or restore a backup of existing repository metadata. It consists of two utilities: metadata export and metadata import. The export utility extracts metadata objects from a repository and writes the information into a file. The import utility reads the metadata information from an exported file and inserts the metadata objects into a repository.
    While the Design Client provides an intuitive UI that helps you perform the most commonly used export and import tasks, OMBPlus scripting enables you to specify some additional options and to manage a control file that supports more specialized export and import tasks. Is it possible to utilize these options from the Design Client? This article will tell you how.
    A property file named mdl.properties is used to configure the additional options. It stores options as name/value pairs. This file can be created and placed under the directory <owb installation path>/owb/bin/admin/. Below we introduce the options that can be specified in the mdl.properties file.

1. DEFAULTDIRECTORY
When we open a Metadata Export/Import dialog in the Design Client, a default directory is provided for the MDL file and log file. For MDL Export, the default directory is <owb installation path>/owb/bin/; for MDL Import, it is <owb installation path>/owb/mdl/. This may not be the default you want to use. You can specify the DEFAULTDIRECTORY option in the mdl.properties file to set your own default directory for MDL Export/Import, for example:
DEFAULTDIRECTORY=/tmp/
In this example, the default directory is set to /tmp/. Be sure the value ends with a file separator, since it represents a directory. In Windows, the file separator is "\"; in Linux, it is "/".

2. MDLTRACEFILE
Sometimes we would like to trace the whole process of an MDL Export/Import and get detailed information about its operations to help developers or support engineers troubleshoot. To turn on MDL trace, set the MDLTRACEFILE option in the mdl.properties file:
MDLTRACEFILE=/tmp/mdl.trc
The right side of the equals sign specifies the file to which MDL trace information will be written. If no path is specified, the file will be placed under the directory <owb installation path>/owb/bin/admin/. Note that the trace file may be large if the MDL file contains a large number of metadata objects, so please use this option sparingly.

3. CONTROLFILE
We can use a control file to specify how objects are imported or exported. We can set the CONTROLFILE option in the mdl.properties file so that the control file is also utilized from the Design Client, for example:
CONTROLFILE=/tmp/mdl_control_file.ctl
The control file stores options as name/value pairs. When using a control file, be sure the file exists; otherwise an exception java.lang.Exception: CNV0002-0031(ERROR): Cannot find specified file will be thrown during MDL Export/Import.
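Putting the pieces together, here is a minimal sketch of the two files, using the control-file options introduced in the next section; the paths and values mirror the worked example below and are illustrative rather than required settings:

    mdl.properties (under <owb installation path>/owb/bin/admin/):
    DEFAULTDIRECTORY=/tmp/
    MDLTRACEFILE=/tmp/mdl.trc
    CONTROLFILE=/tmp/mdl_control_file.ctl

    mdl_control_file.ctl (under /tmp/):
    ZIPFILEFORMAT=N
    LOGMESSAGELEVEL=WARNING
    UPDATEPROJECTATTRIBUTES=N
    UPDATEMODULEATTRIBUTES=Y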
Another choice is to combine these two files into a single, unzipped text format file when exporting. In the OMBPlus commands related to MDL, there is an option called FILE_FORMAT which specifies the format of the exported file; its acceptable values are ZIP or TEXT. When TEXT is selected, the exported file is in text format, for example: OMBEXPORT MDL_FILE '/tmp/options_file_format_test.mdl' FILE_FORMAT TEXT FROM PROJECT 'MY_PROJECT'
How do we achieve this via the Design Client when exporting? Here we have another option, ZIPFILEFORMAT, which has the same function as FILE_FORMAT. The difference is that the acceptable values for ZIPFILEFORMAT are Y or N. When the value is set to N, the exported file is in text format; otherwise it is in zip file format.
LOGMESSAGELEVEL
Whenever you export or import repository metadata, MDL writes diagnostic and statistical information to a log file. There are 3 types of status messages: Informational, Warning and Error. By default, the log file includes all types of messages. Sometimes users may only care about one type of message; for example, they may want only error messages written to the log file. To achieve this, we can set the option LOGMESSAGELEVEL in the control file. The acceptable values for LOGMESSAGELEVEL are ALL, WARNING and ERROR.
ALL: all types of messages (Informational, Warning and Error) are written to the log file.
WARNING: only warning messages are written to the log file.
ERROR: only error messages are written to the log file.
UPDATEPROJECTATTRIBUTES, UPDATEMODULEATTRIBUTES
These two options decide whether the attributes of projects/modules are updated. They take effect when the projects/modules being imported already exist in the repository and update metadata mode or replace metadata mode is used for the MDL import. The acceptable values for these two options are Y or N. If the value is set to Y, the attributes of the projects/modules are updated; otherwise they are not.
Next, let's walk through an example to see how these options take effect in MDL.
1. First of all, create the property file mdl.properties under the directory <owb installation path>/owb/bin/admin/.
2. Specify the options in the mdl.properties file (a sample sketch appears at the end of this article).
3. Create the control file mdl_control_file.ctl under the directory /tmp/ and set the options in it (also sketched below).
4. Log into the OWB Design Client.
5. Create an Oracle module named ORA_MOD_1 under the project MY_PROJECT, then export the project MY_PROJECT into the file my_project.mdl.
6. Check the trace file mdl.trc under the directory /tmp/. In this file we can see very detailed information about the above export task.
7. Check the exported MDL file. The file my_project.mdl is in text format; opening the file, you can see its content directly. It concatenates the files my_project.mdx and mdlcatalog.xml.
8. Modify the project MY_PROJECT and the Oracle module ORA_MOD_1, adding descriptions for each separately. Delete the location created in step 5.
9. Import the MDL file my_project.mdl. From the Metadata Import dialog, we can see the default directory for the MDL file and log file has changed to /tmp/. Here we use update metadata mode, matching by names, to do the import.
10. After importing, check the description of the project MY_PROJECT; we can see the description is still there.
But the description of the Oracle module ORA_MOD_1 is gone. That is because we set the option UPDATEPROJECTATTRIBUTES to N and the option UPDATEMODULEATTRIBUTES to Y.
11. Check the log file; it contains only warning messages, because the log message level was set to WARNING.
For more details about the 3 types of status messages, see the Oracle® Warehouse Builder Installation and Administration Guide 11g Release 2.
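For reference, here is a sketch of what the two files used in the example might contain, reconstructed from the options described above. The original post shows these only in screenshots, so the exact values are assumptions, and the # comment syntax is assumed to be accepted in both files:

# <owb installation path>/owb/bin/admin/mdl.properties
# Default directory offered by the Metadata Export/Import dialogs (note the trailing separator)
DEFAULTDIRECTORY=/tmp/
# Write a detailed trace of each export/import to this file
MDLTRACEFILE=/tmp/mdl.trc
# Apply the options in this control file to Design Client export/import
CONTROLFILE=/tmp/mdl_control_file.ctl

# /tmp/mdl_control_file.ctl
# Export to a plain text file instead of a zip archive
ZIPFILEFORMAT=N
# Write only warning messages to the log
LOGMESSAGELEVEL=WARNING
# On import, leave project attributes untouched but update module attributes
UPDATEPROJECTATTRIBUTES=N
UPDATEMODULEATTRIBUTES=Y

With files like these in place, the export in step 5 produces a text-format my_project.mdl and the trace in /tmp/mdl.trc, and the import in step 9 behaves as described in steps 10 and 11.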

    Read the article

  • Part 2: Career development as a Software Developer without becoming a manager.

    - by albertpascual
Seems like my previous post, inspired by the work of Michael “Doc” Norton, was a great success, judging by the amount of email I have received. I am amazed how many people didn’t want to discuss their questions in the comments section. I would encourage people to be more public; still, I would like to reply to all of you in this public medium, and I still welcome those emails. What I found out is that many people feel like me: they want to be developers and still be compensated for their experience without having to take a job as a manager. Their perfect day is a full day of coding and learning. Many believe their companies will never pay a manager’s salary to a developer no matter what. Most of you ask how to get the ball rolling, and it is the latter question that I’m addressing here; the former group will never try.
Which companies understand a developer’s value, and where can I find them?
This is a very difficult question to answer. I don’t have a list of those companies or departments, but I have seen signs of companies bending over backwards to compensate, in more ways than monetary, a developer who is a good resource to them. Allowing a person to move out of state and still work for the company from home is a sign that the company goes by individual cases. Allowing them to go to conferences that will not directly benefit the company is another big sign, as are simple things like flexible hours and letting some people work from home. To see those signs you need to have been working in that company for a while: look at the departments where the manager is taking care of employees on an individual basis, and look for the department where people quietly get extra perks, where some people in the department work from home or remotely. In my experience, though it is not always true, medium to big companies are prone to recognize good developers. Then again, some companies just don’t get it, and that is when you see many technical people managing developers. For all the people who emailed me stating that developers can also be very good managers: I do not disagree. I just think that a good developer loves writing code, and when you remove that part, the better salary isn’t enough to keep a developer happy. Burned out developers appreciate being promoted to managers.
How do I know I work in a bad company?
In my experience as a consultant I have seen many companies. A few signs of companies that will not recognize good developers: When turnover is pretty high and developers are leaving at a high rate, no rocket scientist needs to tap you on the shoulder. When the company is always looking to outsource its development resources. When the product is not that interesting, and the company doesn't care much about the final result and support. Code sweatshops: you’ll know when you start working in one of those. Run for the hills!
Where do I start?
Disclaimer: I have based this post only on Michael “Doc” Norton's work; this is just my interpretation and my ideas. The first thing any developer should do is look at Michael “Doc” Norton's presentation Take Control of Your Development Career, http://docondev.blogspot.com/, and follow it like it was a pattern. I would personally recommend finding some language or pattern you are interested in and learning it; learn something that will make you happy. Second, join a user group and get involved in the community. There are hundreds of user groups, and I’m sure you’ll find one in your city or near your town. Code Camps and Developer Meet Ups are also good resources.
Third, I would join an open source project you are interested in or, better yet, create a new open source project with the new technology that you have learned, and get coding. Fourth, create a Twitter account and follow the people who talk about the technology you are interested in. If you follow these 4 steps I think you’ll be on your way; after they are complete, when you release your open source project, you can say that you accomplished the first steps. Now, do not expect anything to change in your career life: you are changing, and you should not expect anything in return, besides borrowing some time from sleep and your family. Creating a good schedule may help you; I find wasted time in many places that I now use. Flying for work is actually one of those, as it allows me to do some of my best work on an airplane without needing to borrow time from anywhere else. Making sure you always have a light, charged laptop is very important.
Next steps, following the Michael “Doc” Norton pattern, or my interpretation of it:
First, help run a user group or, better yet, start a new user group. I’ll add, as well: go to one conference a year and to free development events around your city; Code Camps, Geek Dinners, etc. There are many free events sponsored by different companies for developers to get to know their products, and I highly recommend those as the way to get connected. Second, choose a mentor. This is a very hard thing to do, I have found: find an expert in the technology you are learning who has the time for you. It is difficult; I wish you the best of luck. Third, learn another technology or pattern and open your horizons a little bit more. Why not? If you had fun previously, keep doing it. Fourth, get involved in forums to answer and ask questions; getting noticed in public forums is rewarding for your ego after such a long journey.
Final steps, following the Michael “Doc” Norton pattern:
Teach what you know, become humble about your knowledge, find as many opportunities as you can to teach and to get involved with the community, and bring all that to your day job. Mr. Norton talks about getting naked: expose to others what you know and what you do not know. You are never too important for small opportunities, yet don’t be afraid to take on anything big and learn from the experience. Any time you have the opportunity to talk to somebody who has reached the point where the community knows his or her name, it means you should learn from it. Take opportunities that won’t make you money yet will make you happy. Sometimes you need to spend money and time. Register to give talks at Code Camps and Dev Meet Ups, those are free, and also go to conferences, development summits and Geek Dinners, for example. One day, people will pay you to attend.
When will all this pay off? I don’t know; I’m still on the path. During your journey you may get little acknowledgements that you are on the correct path. In my case I think those are the little signs that tell me about my journey: I was awarded the Microsoft Most Valuable Professional for ASP.NET in 2007, 2008, 2009 and 2010, and I was selected to speak at DevConnections in Las Vegas in 2010 and Orlando in 2011. I do believe that I have a long way to go, yet what I do makes me happy and I hope I can keep doing it for years to come. Every year I can see an improvement in my code, and more frameworks and languages are under my belt; I have learned to embrace them all, and in my daily job I have been able to work on a few projects beyond my department.
I’m a learner and believer in the Michael “Doc” Norton pattern, and I look forward to learning more about it so I can apply it better. In my short journey I now see my mistakes. I did a few things right: I have been listening to intelligent people and have not been afraid to move along with technology changes. In my professional life, I have tried to avoid being placed in only one technology and product. I have always shared my code and never confused anybody who wanted to take over any of my projects; I didn’t think of anything I created as my own, nor did I care too much when politics didn’t see my vision. I stayed flexible, ready and visible, yet humble. I keep my head just below the clouds, and I avoid managers’ meetings. I credit my manager for my successes, and I publicly faulted only myself for the failures. Hope this helps. Cheers, Al. Follow me on Twitter. Read my previous post.

    Read the article

  • Special thanks to everyone that helped me in 2010.

    - by mbcrump
2010 has been a very good year for me and I wanted to create a list and thank everyone for what they have done for me. I also wanted to thank everyone for reading and subscribing to my blog. It is hard to believe that people actually want to read what I write. I feel like I owe a huge thanks to everyone listed below. Looking back upon 2010, I feel that I’ve grown as a developer, and you are part of the reason. Sometimes we get caught up in day-to-day work and forget to give thanks to those that helped us along the way. The list below is mine; it includes people and companies. This list is obviously not going to include everyone that has helped, just those that have stood out in my mind. When I think back upon 2010, their names keep popping up in my head. So here goes, in no particular order.
People
Dave Campbell – For everything he has done for the Silverlight community with his Silverlight Cream blog. I can’t think of a better person to get recognition at the Silverlight FireStarter event. I also wanted to thank him for spending several hours of his time helping me track down a bug in my FeedBurner account.
Victor Gaudioso – For his large collection of video tutorials on his blog and the passion and enthusiasm he has for Silverlight. We have talked on the phone and I’ve never met anyone so fired up for Silverlight.
Kunal Chowdhury – Kunal has always been available for me to bounce ideas off of. Kunal has also answered a lot of questions that stumped me. His blog and CodeProject articles have been a great help to me and the Silverlight community.
Glen Gordon – I was looking frantically for a Windows Phone 7 device several months before release and Glen found one for me. This allowed me to start a blog series on the Windows Phone 7 hardware and on developing an application from start to finish, which Scott Guthrie retweeted.
Jeff Blankenburg – For listening to my complaints in the early stages of Windows Phone 7. Jeff was always very polite and gave me his cell phone number to talk it over. He also walked me through several problems that I was having early on.
Pete Brown – For writing Silverlight 4 in Action. This book is definitely a labor of love. I followed Pete on Twitter as he was writing it, and he spent a lot of late nights and weekends working on it. I felt a lot smarter after reading it the first time. The second time was even better.
John Papa – For all of his work on the Silverlight Firestarter and the Silverlight community in general. He has also helped me on a personal level with several things.
Daniel Heisler – For putting up with me the past year while we worked on many .NET projects together in 2010.
Alvin Ashcraft – For publishing a daily blog post with the best of .NET links. He has linked to my site many times and I really appreciate what he does for the community.
Chris Alcock – For publishing the Morning Brew every weekday. I remember when I first appeared on his site: I started getting hundreds of hits and wondered if I was under a DoS attack or something. It was great to find out that Chris had linked to one of my articles.
Joel Cochran – For spending a week teaching “Blend-O-Rama”. This was one of my favorite sessions of the year. I learned a lot about Expression Blend from it, and the best part was that it was free and during lunchtime.
Jeremy Likness – Jeremy is smart – very smart. I have learned a lot from Jeremy over the past year.
He is also involved in the Silverlight community in every way possible, from forums to blog posts to screencasts to open source. It goes on and on.
The people that I met at VSLive Orlando 2010 – I had a great time chatting with Walt Ritscher, Wallace McClure, Tim Huckabee and David Platt.
Also a special thanks to all of my friends on Twitter, like @wilhil, @DBVaughan, @DataArtist, @wbm, @DirkStrauss and @rsringeri, and many, many more.
Software Companies / Events (many of which gave me FREE stuff =))
Microsoft (3) – I was sent a free coupon code by Microsoft to take the Silverlight 4 beta exam. I jumped on the offer and took the exam. It was great being selected to try out the exam before it went public, even though Microsoft eventually published a universal coupon code for everyone. I am still waiting to find out if I passed the exam. My fingers are crossed.
Microsoft reaching out to me with some questions regarding the .NET community. I’ve never had a company contact me with such interest in the community.
Microsoft holding a contest where 75 people could win a $100 gift certificate and a T-shirt for submitting a Windows Phone 7 app. I submitted my app and won.
All of the free launch events this year (Windows Phone 7, Visual Studio 2010, ASP.NET MVC).
Wintellect – For providing an awesome day of free technical training called T.E.N. Where else can you get free training from some of the best programmers in the world? I also won a contest from them that included a NetAdvantage Ultimate license from Infragistics.
VSLive – I attended the Orlando 2010 conference and it was the best developer's conference that I have ever attended. I got to know a lot of people at this conference and hung out with many wonderful speakers. I live-tweeted the event, and while it may have annoyed some, the organizers of VSLive loved it. I won the contest on Twitter and they invited me back to the 2011 session of my choice. This is a very nice gift and I really appreciate the generosity.
BarcodeLib.com – For providing free barcode generating tools for a non-profit ASP.NET project that I was working on. Their third-party controls really made this a breeze compared to my existing solution.
NDepend – It is absolutely the best tool for improving code quality. The product is extremely large and I would recommend heading over to their site to check it out.
Silverlight Spy – I was writing a blog post on Silverlight Spy and Koen Zwikstra provided a FREE license to me. If you ever wanted to peek inside a Silverlight application, then this is the tool for you. He is also working on a version that will support OOB and Windows Phone 7. I would recommend checking out his site.
Birmingham .NET Users Group / Silverlight Nights User Group – It takes a lot of time to put together a user group meeting every month, yet it always seems to happen. I don’t want to name names for fear of leaving someone out, but both of these user groups are excellent if you live in the Birmingham, Alabama area.
Publishing Companies
Manning Publications – For giving me early access to Silverlight 4 in Action by Pete Brown. It was really nice to be able to read this awesome book while Pete was writing it. I was also one of the first people to publish a review of the book.
Sams Publishing and DZone – For providing a copy of Silverlight 4 Unleashed by Laurent Bugnion for me to review for their site. The review is coming in January 2011.
Special Shoutout to the following 3rd Party Silverlight Control companies
It has been a great pleasure to work with the following companies on 3rd-party control giveaways every month. It always amazes me how eager every 3rd-party control company is to help out the community. I’ve never been turned down by any of these companies! These giveaways have sparked a lot of interest in Silverlight, and hopefully I can continue giving away a new set every month. If you are a 3rd-party control company and are interested in participating in these giveaways, then please email me at mbcrump29[at]gmail[d0t].com. The companies below have already participated in my giveaways:
Infragistics (December 2010) - Win a set of Infragistics Silverlight Controls with Data Visualization!
Mindscape (November 2010) - Mindscape Silverlight Controls + Free Mega Pack Contest
Telerik (October 2010) - Win Telerik RadControls for Silverlight! ($799 Value)
Again, I just wanted to say thanks to everyone for helping me grow as a developer. Subscribe to my feed

    Read the article

  • Udev webcam rule read, but not respected?

    - by user89305
I have two USB webcams on the machine, but at boot they sometimes switch /dev/video numbers. The solution to this problem seems to be a new udev rule. I have added these rules in /etc/udev/rules.d/jj-video.rules:
# Fix webcam 1
KERNEL=="video1", SUBSYSTEM=="video4linux", SUBSYSTEMS=="usb", ATTRS{idVendor}=="1d6b", ATTRS{idProduct}=="0001", SYMLINK+="webcam1"
# Fix webcam 2
KERNEL=="video2", SUBSYSTEM=="video4linux", ATTR{name}=="Logitech QuickCam Pro 3000", KERNELS=="0000:00:1d.0", SUBSYSTEMS=="pci", DRIVERS=="uhci_hcd", ATTRS{vendor}=="0x8086", ATTRS{device}=="0x2658", SYMLINK+="webcam2"
but the symlinks are not created. I have tried many different combinations in this file; the present ones are just my latest attempts. I found the parameters in:
jjk@eee-old:~$ udevadm info -a -p $(udevadm info -q path -p /class/video4linux/video1)
Udevadm info starts with the device specified by the devpath and then walks up the chain of parent devices. It prints for every device found, all possible attributes in the udev rules key format. A rule to match, can be composed by the attributes of the device and the attributes from one single parent device.
looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1':
KERNEL=="video1" SUBSYSTEM=="video4linux" DRIVER=="" ATTR{name}=="Logitech QuickCam Pro 3000" ATTR{index}=="0" ATTR{button}=="0"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0':
KERNELS=="2-2:1.0" SUBSYSTEMS=="usb" DRIVERS=="Philips webcam" ATTRS{bInterfaceNumber}=="00" ATTRS{bAlternateSetting}==" 9" ATTRS{bNumEndpoints}=="02" ATTRS{bInterfaceClass}=="0a" ATTRS{bInterfaceSubClass}=="ff" ATTRS{bInterfaceProtocol}=="00" ATTRS{supports_autosuspend}=="0"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2':
KERNELS=="2-2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 3" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="a0" ATTRS{bMaxPower}=="500mA" ATTRS{urbnum}=="371076" ATTRS{idVendor}=="046d" ATTRS{idProduct}=="08b0" ATTRS{bcdDevice}=="0002" ATTRS{bDeviceClass}=="00" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="8" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="2" ATTRS{devpath}=="2" ATTRS{version}==" 1.10" ATTRS{maxchild}=="0" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{serial}=="01402100A5000000"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2':
KERNELS=="usb2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 1" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="e0" ATTRS{bMaxPower}==" 0mA" ATTRS{urbnum}=="34" ATTRS{idVendor}=="1d6b" ATTRS{idProduct}=="0001" ATTRS{bcdDevice}=="0302" ATTRS{bDeviceClass}=="09" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="64" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="1" ATTRS{devpath}=="0" ATTRS{version}==" 1.10" ATTRS{maxchild}=="2" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{manufacturer}=="Linux 3.2.0-29-generic uhci_hcd" ATTRS{product}=="UHCI Host Controller" ATTRS{serial}=="0000:00:1d.0" ATTRS{authorized_default}=="1"
looking at parent device '/devices/pci0000:00/0000:00:1d.0':
KERNELS=="0000:00:1d.0" SUBSYSTEMS=="pci" DRIVERS=="uhci_hcd" ATTRS{vendor}=="0x8086" ATTRS{device}=="0x2658" ATTRS{subsystem_vendor}=="0x1043" ATTRS{subsystem_device}=="0x82d8" ATTRS{class}=="0x0c0300"
ATTRS{irq}=="23" ATTRS{local_cpus}=="ff" ATTRS{local_cpulist}=="0-7" ATTRS{dma_mask_bits}=="32" ATTRS{consistent_dma_mask_bits}=="32" ATTRS{broken_parity_status}=="0" ATTRS{msi_bus}=="" looking at parent device '/devices/pci0000:00': KERNELS=="pci0000:00" SUBSYSTEMS=="" DRIVERS=="" jjk@eee-old:~$ And tested the setup: sudo udevadm --debug test /sys/class/video4linux/video1 main: runtime dir '/run/udev' run_command: calling: test adm_test: version 175 This program is for debugging only, it does not run any program, specified by a RUN key. It may show incorrect results, because some values may be different, or not available at a simulation run. parse_file: reading '/lib/udev/rules.d/40-crda.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-fuse.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-gnupg.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-hplip.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ia64.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-inputattach.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libgphoto2-2.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libsane.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ppc.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-usb_modeswitch.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-xserver-xorg-video-intel.rules' as rules file parse_file: reading '/lib/udev/rules.d/42-qemu-usb.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-firmware.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-udev-default.rules' as rules file parse_file: reading '/lib/udev/rules.d/55-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/56-hpmud_support.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-cdrom_id.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-pcmcia.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-alsa.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-input.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-serial.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-tape.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-v4l.rules' as rules file parse_file: reading '/lib/udev/rules.d/61-accelerometer.rules' as rules file parse_file: reading '/lib/udev/rules.d/64-xorg-xkb.rules' as rules file parse_file: reading '/lib/udev/rules.d/66-xorg-synaptics-quirks.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-cd-sensors.rules' as rules file add_rule: IMPORT found builtin 'usb_id', replacing /lib/udev/rules.d/69-cd-sensors.rules:76 parse_file: reading '/lib/udev/rules.d/69-libmtp.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xorg-vmmouse.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xserver-xorg-input-wacom.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-cd.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-net.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-printers.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-udev-acl.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-cd-aliases-generator.rules' as rules file parse_file: 
reading '/lib/udev/rules.d/75-net-description.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-persistent-net-generator.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-probe_mtd.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-tty-description.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-ericsson-mbm.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-longcheer-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-nokia-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-pcmcia-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-platform-serial-whitelist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-qdl-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-simtech-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-usb-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-x22x-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-zte-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-nm-olpc-mesh.rules' as rules file parse_file: reading '/lib/udev/rules.d/78-graphics-card.rules' as rules file parse_file: reading '/lib/udev/rules.d/78-sound-card.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-drivers.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-mm-candidate.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-udisks.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-brltty.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-hdparm.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-hplj10xx.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-keyboard-configuration.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-regulatory.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-usbmuxd.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-alsa-restore.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-alsa-ucm.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-libgpod.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-pulseaudio.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-cd-devices.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-keyboard-force-release.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-keymap.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-udev-late.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-dell.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-fujitsu.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-gateway.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-ibm.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-lenovo.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-toshiba.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-csr.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-hid.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-wup.rules' as rules file parse_file: reading '/lib/udev/rules.d/97-bluetooth-hid2hci.rules' as rules file parse_file: reading '/etc/udev/rules.d/jj-video.rules' as 
rules file udev_rules_new: rules use 259284 bytes tokens (21607 * 12 bytes), 37913 bytes buffer udev_rules_new: temporary index used 67520 bytes (3376 * 20 bytes) udev_device_new_from_syspath: device 0x215103e0 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' udev_device_new_from_syspath: device 0x21510758 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' udev_device_read_db: device 0x21510758 filled with db file data udev_device_new_from_syspath: device 0x21510e10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0' udev_device_new_from_syspath: device 0x21511b10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2' udev_device_new_from_syspath: device 0x215132f8 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2' udev_device_new_from_syspath: device 0x21513650 has devpath '/devices/pci0000:00/0000:00:1d.0' udev_device_new_from_syspath: device 0x21513980 has devpath '/devices/pci0000:00' udev_rules_apply_to_event: GROUP 44 /lib/udev/rules.d/50-udev-default.rules:29 udev_rules_apply_to_event: IMPORT 'v4l_id /dev/video1' /lib/udev/rules.d/60-persistent-v4l.rules:7 udev_event_spawn: starting 'v4l_id /dev/video1' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_VERSION=2' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_PRODUCT=Logitech QuickCam Pro 3000' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_CAPABILITIES=:capture:' spawn_wait: 'v4l_id /dev/video1' [2609] exit with return code 0 udev_rules_apply_to_event: IMPORT builtin 'usb_id' /lib/udev/rules.d/60-persistent-v4l.rules:9 builtin_usb_id: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0: if_class 10 protocol 0 udev_builtin_add_property: ID_VENDOR=046d udev_builtin_add_property: ID_VENDOR_ENC=046d udev_builtin_add_property: ID_VENDOR_ID=046d udev_builtin_add_property: ID_MODEL=08b0 udev_builtin_add_property: ID_MODEL_ENC=08b0 udev_builtin_add_property: ID_MODEL_ID=08b0 udev_builtin_add_property: ID_REVISION=0002 udev_builtin_add_property: ID_SERIAL=046d_08b0_01402100A5000000 udev_builtin_add_property: ID_SERIAL_SHORT=01402100A5000000 udev_builtin_add_property: ID_TYPE=generic udev_builtin_add_property: ID_BUS=usb udev_builtin_add_property: ID_USB_INTERFACES=:0aff00:010100:010200: udev_builtin_add_property: ID_USB_INTERFACE_NUM=00 udev_builtin_add_property: ID_USB_DRIVER=Philips webcam udev_rules_apply_to_event: LINK 'v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:10 udev_rules_apply_to_event: IMPORT builtin 'path_id' /lib/udev/rules.d/60-persistent-v4l.rules:16 udev_builtin_add_property: ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0 udev_builtin_add_property: ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0 udev_rules_apply_to_event: LINK 'v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:17 udev_rules_apply_to_event: RUN 'udev-acl --action=$env{ACTION} --device=$env{DEVNAME}' /lib/udev/rules.d/70-udev-acl.rules:74 udev_rules_apply_to_event: LINK 'webcam1' /etc/udev/rules.d/jj-video.rules:2 udev_event_execute_rules: no node name set, will use kernel supplied name 'video1' udev_node_add: creating device node '/dev/video1', devnum=81:1, mode=0660, uid=0, gid=44 udev_node_mknod: preserve file '/dev/video1', because it has correct dev_t udev_node_mknod: preserve permissions /dev/video1, 020660, uid=0, gid=44 node_symlink: preserve already existing symlink '/dev/char/81:1' to '../video1' link_find_prioritized: found 'c81:2' claiming 
'/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0' udev_device_new_from_syspath: device 0x21516748 has devpath '/devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/video4linux/video2' udev_device_read_db: device 0x21516748 filled with db file data link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0' link_update: creating link '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' to '/dev/video1' node_symlink: atomically replace '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-path\x2fpci-0000:00:1d.0-usb-0:2:1.0-video-index0' link_update: creating link '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '/dev/video1' node_symlink: preserve already existing symlink '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '../../video1' link_find_prioritized: found 'c81:1' claiming '/run/udev/links/webcam1' link_update: creating link '/dev/webcam1' to '/dev/video1' node_symlink: preserve already existing symlink '/dev/webcam1' to 'video1' udev_device_update_db: created db file '/run/udev/data/c81:1' for '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' ACTION=add COLORD_DEVICE=1 COLORD_KIND=camera DEVLINKS=/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0 /dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0 /dev/webcam1 DEVNAME=/dev/video1 DEVPATH=/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1 ID_BUS=usb ID_MODEL=08b0 ID_MODEL_ENC=08b0 ID_MODEL_ID=08b0 ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0 ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0 ID_REVISION=0002 ID_SERIAL=046d_08b0_01402100A5000000 ID_SERIAL_SHORT=01402100A5000000 ID_TYPE=generic ID_USB_DRIVER=Philips webcam ID_USB_INTERFACES=:0aff00:010100:010200: ID_USB_INTERFACE_NUM=00 ID_V4L_CAPABILITIES=:capture: ID_V4L_PRODUCT=Logitech QuickCam Pro 3000 ID_V4L_VERSION=2 ID_VENDOR=046d ID_VENDOR_ENC=046d ID_VENDOR_ID=046d MAJOR=81 MINOR=1 SUBSYSTEM=video4linux TAGS=:udev-acl: UDEV_LOG=6 USEC_INITIALIZED=18213768 run: 'udev-acl --action=add --device=/dev/video1' jjk@eee-old:~$ (and correspondingly for video2) It looks to me like my rules are read, but not respected. What am I doing wrong?
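For reference, the kind of rule often suggested for this situation matches the camera by its own idVendor/idProduct and serial (all taken from the udevadm output above, from the single '2-2' parent device), so the symlink does not depend on which /dev/videoN the kernel assigned first. This is untested on this machine and is shown only as an illustration of the matching syntax, not as a verified fix:

# Pin this particular camera (vendor 046d, product 08b0, serial 01402100A5000000)
SUBSYSTEM=="video4linux", SUBSYSTEMS=="usb", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="08b0", ATTRS{serial}=="01402100A5000000", SYMLINK+="webcam1"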

    Read the article

  • Does HTML 5 Make “Rich vs. Reach” a False Choice?

    - by andrewbrust
The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for developing line-of-business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post.
Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note: although the instructions for each game tell you to press the A key to start, press the Z key instead.) Here's the link: http://www.kesiev.com/akihabara
As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms
Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, I can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development.
So what’s new in HTML 5, specifically, that makes sites like this possible? The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order. Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5:
2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script. As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.
They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate.
Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically.
Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, and limiting valid input on a field to dates, email addresses or a list of values. There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly.
Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available.
User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with JavaScript code. This avoids having to embed that data in a separate file, or within script code.
Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games.
What do players in the media world think about this? From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think. Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with it, at least not yet. YouTube, on the other hand, already has an experimental HTML 5-based version of its site. TechCrunch has reported that Netflix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices. And the New York Times’ Web site now embeds some video clips without resorting to Flash. They have to; otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser.
What do media-focused developers think about all this? I talked to several to get their opinions.
Michael Pinto is CEO and Founder of Very Memorable Design, whose primary focus has been to help marketing directors get traction online. The firm’s client roster includes the likes of Time, Inc., Scholastic and PBS. Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.”
A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector. When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all! In particular, not for media and advertising customers! These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.”
Ironically, Pinto’s firm is a heavy user of Flash for its projects while Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack. For whatever reason, each firm seems to see the other’s toolset as a more viable choice.
But both agree that the developer tool story around HTML 5 is deficient. Pinto explains: “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.” Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash. If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript. It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.”
Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees. And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at the risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development. In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs.
Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others. Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.”
Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate. In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.”
I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time. And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms.
HTML 5 looks to be disruptive, especially to the media world. And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world. If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers. Can this development in the digital arena save the titans of the print world? I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.
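To make the feature list above concrete, here is a minimal illustrative page, not taken from any of the sites discussed (clip.mp4 is a placeholder file name), showing native video, canvas drawing, the new form input types and local storage, all without a plug-in:

<!DOCTYPE html>
<html>
<body>
  <!-- Native video playback: no Flash or Silverlight required -->
  <video src="clip.mp4" width="320" height="240" controls></video>

  <!-- A scriptable 2D drawing surface -->
  <canvas id="scene" width="320" height="240"></canvas>

  <!-- Declarative validation: required fields, typed input, hints -->
  <form>
    <input type="email" placeholder="you@example.com" required>
    <input type="date">
  </form>

  <script>
    // Draw a rectangle on the canvas
    var ctx = document.getElementById('scene').getContext('2d');
    ctx.fillStyle = '#336699';
    ctx.fillRect(20, 20, 120, 80);

    // Persist a value across visits, even offline
    localStorage.setItem('lastVisit', new Date().toString());
  </script>
</body>
</html>

Each of these elements fails or degrades in older browsers, which is exactly why full HTML 5 support in the next round of browser releases matters so much to this debate.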

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should consider when determining which practices are most appropriate in a given context. I wanted to write up my thoughts on this subject in a little more detail, so here we go:
If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, and proper adherence to the SOLID principles are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).
I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, IoC containers, an abstracted persistence layer, etc., is complete overkill when a one-time-use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times).
My thoughts are that we should be making a conscious decision, around the start of each project, approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:
1. Importance – How important to the business is this application? What is the impact if the application were to suddenly stop working?
2. Complexity – How complex is the application functionality?
3. Life-Expectancy – How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic, expected to be in use for many years to come?
Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere in the middle, and different projects should adopt different levels of software engineering practices and maturity based on the needs of that project.
To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics.
The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:
Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice, since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).
Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.
Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).
Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).
At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.
In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow, phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.
The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.
Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).
Developer Skills
Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer's skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
We all realize that the learning never stops, but eventually you'll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that this translates to are constantly changing, but that's not the point here). A critical skill that I'd love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project's code further right on the SEMS (based on changing needs) and to do so in an incremental manner without having to start from scratch. An exercise that I'm planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and, for those that don't match up (i.e. most of them), come up with a plan to improve the situation.

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and Solaris kernel module sha1. The optimized code is used automatically on systems with a x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include Nehalem, Westmere, Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm—NIST recommends using at least SHA-256, SHA-1 is still widely used and will be with us for awhile more. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily though in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits is used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically for round i it XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round is a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 32-bit result of the final round, W[79], is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the results of round i-3, W[i-3], one can vectorize 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds could be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture, although the microprocessor has to support vectorization to use it, and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. 
Optimization: SSSE3

Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. The four 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], and W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article and converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

    movdqa  W_minus_04, W_TMP
    pxor    W_minus_28, W         // W equals W[i-32:i-29] before XOR
                                  // W = W[i-32:i-29] ^ W[i-28:i-25]
    palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from
                                  // the W[i-4:i-1] and W[i-8:i-5] vectors
    pxor    W_minus_16, W         // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
    pxor    W_TMP, W              // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
    movdqa  W, W_TMP              // 4 dwords in W are rotated left by 2
    psrld   $30, W                // rotate left by 2: W = (W >> 30) | (W << 2)
    pslld   $2, W_TMP
    por     W, W_TMP
    movdqa  W_TMP, W              // four new W values W[i:i+3] are now calculated
    paddd   (K_XMM), W_TMP        // add the current round's four K values
    movdqa  W_TMP, (WK(i))        // store for downstream GPR instructions to read

A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, the rounds are computed like this (each "R" represents 1 round of SHA-1 computation):

RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

With vectorization, 4 rounds can be computed in parallel:

RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR

Optimization: AVX

The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

    pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
    pxor W_TMP, W      // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

With AVX they can be combined into one instruction:

    vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15 registers). Can %ymm registers be used to parallelize the code even more?

Optimization: Solaris-specific

In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:

- Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
- Optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time (illustrated in the sketch below).
- Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (the mdb "$x" command). Previously they displayed only the first 8, which are the ones available in 32-bit mode. Can't optimize if you can't debug :-).
- Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (using 64-bit lengths).
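As a side note, the encode-function change is easy to picture in a higher-level language. Here is a small C# sketch (my illustration on modern .NET, not the Solaris C code) contrasting byte-at-a-time big-endian output with one byte-swapped 32-bit store per word:

    using System;
    using System.Buffers.Binary;

    class EncodeDemo
    {
        // One byte at a time: four shifts and four stores per word.
        static void EncodeByteAtATime(uint[] words, byte[] output)
        {
            for (int i = 0; i < words.Length; i++)
            {
                output[i * 4]     = (byte)(words[i] >> 24);
                output[i * 4 + 1] = (byte)(words[i] >> 16);
                output[i * 4 + 2] = (byte)(words[i] >> 8);
                output[i * 4 + 3] = (byte)words[i];
            }
        }

        // Four bytes at a time: one byte-swap and one 32-bit store per word.
        static void EncodeWordAtATime(uint[] words, byte[] output)
        {
            for (int i = 0; i < words.Length; i++)
                BinaryPrimitives.WriteUInt32BigEndian(output.AsSpan(i * 4, 4), words[i]);
        }

        static void Main()
        {
            var words = new uint[] { 0x01234567, 0x89ABCDEF };
            var a = new byte[8];
            var b = new byte[8];
            EncodeByteAtATime(words, a);
            EncodeWordAtATime(words, b);
            Console.WriteLine(BitConverter.ToString(a)); // 01-23-45-67-89-AB-CD-EF
            Console.WriteLine(BitConverter.ToString(b)); // identical output
        }
    }

Both produce the same bytes; the second form simply gives the compiler (or, in the Solaris case, hand-written C) the chance to emit a single byte-swap operation per word.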
Performance

I measured performance on a Sun Ultra 27 (which has a Nehalem-class Intel Xeon W3570 microprocessor @ 3.2GHz). Turbo mode was disabled for consistent performance measurement. Graphs are better than words and numbers, so here they are:

The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp), and execution time decreased from 1.35 seconds to 0.98 seconds.

The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations. The results show throughput increased from 320,000 to 416,000 operations per second.

Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64-KByte buffer with 100 iterations. The results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second; for 8 kernel threads, from 1540 to 1940 MBytes/second.

Availability

This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine whether SSSE3 is available is with the isainfo(1) command. For example:

nehalem $ isainfo -v
64-bit amd64 applications
    sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
    sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

If the output also shows "avx", Solaris executes the even more optimized 3-operand AVX instructions for SHA-1 mentioned above:

sandybridge $ isainfo -v
64-bit amd64 applications
    avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
    avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly tuned code for that microprocessor (a rough .NET analogue of this run-time dispatch appears after the references below).

Summary

The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.

References

- "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010): the source for the SHA-1 optimizations used in Solaris
- "SHA-1", Wikipedia: a good overview of SHA-1
- FIPS 180-1: the SHA-1 standard (FIPS, 1995)
- NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)
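For readers on .NET, a roughly analogous runtime feature check can be written with hardware intrinsics. This is only a sketch of the general dispatch pattern (it is not how Solaris implements its selection, and it requires .NET Core 3.0 or later):

    using System;
    using System.Runtime.Intrinsics.X86;

    class CpuFeatureDemo
    {
        static void Main()
        {
            // Pick the best available implementation at run time,
            // conceptually similar to the dispatch Solaris performs.
            if (Avx.IsSupported)
                Console.WriteLine("Using the AVX-optimized path");
            else if (Ssse3.IsSupported)
                Console.WriteLine("Using the SSSE3-optimized path");
            else
                Console.WriteLine("Using the generic path");
        }
    }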

    Read the article

  • Click Once Deployment Process and Issue Resolution

    - by Geordie
Introduction

We are adopting ClickOnce as a deployment standard for thick .NET application clients. The latest version of this tool has matured to a point where it can be used in an enterprise environment. This guide will identify how to use ClickOnce deployment and promote code through the dev, test and production environments.

Why Use ClickOnce over SCCM

If we already use SCCM, why add ClickOnce to the deployment options? The advantage of ClickOnce is the ability to update the code in a single location and have the update flow automatically down to the user community. There have been challenges in the past with getting configuration updates to download, but these can now be overcome. With SCCM you can do the same thing, but it then needs to be packaged and pushed out to users. Each time a new user is added to an application, time needs to be spent by an administrator to push out any required application packages. With ClickOnce, the user simply goes to a web link and the application and prerequisites are installed automatically.

New Deployment Steps Overview

Deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production. To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools. Although this is the easiest path, it can introduce untested code into production and result in unexpected behavior.

1. Deploy the client application to a development web server using the Visual Studio 2008 ClickOnce deployment tools. Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio. (For details see 'Deploying ClickOnce Code from Visual Studio'.)
2. xCopy the code to the test server. Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see 'Migrating ClickOnce Code to a new Server without using Visual Studio'.)
3. xCopy the code to the production server. Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines. Finally, make sure the setup.exe contains the production install URL. If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xCopy the setup.exe file from dev to production. (For details see 'Migrating ClickOnce Code to a new Server without using Visual Studio'.)

Detailed Deployment Steps

Deploying ClickOnce Code From Visual Studio

Open Visual Studio and create a new WinForms or WPF project. In the solution explorer, right-click on the project and select 'Publish' from the context menu. The 'Publish Wizard' will start. Enter the development deployment path; this could be a local directory or web site. When first publishing the solution, set this to a development web site, and Visual Studio will create a site with an install.htm page. Click Next. Select whether the application will be available both online and offline, then click Finish. Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published. This time the Publish Wizard contains an additional option.
The setup.exe file that is created has the install URL hardcoded in it, and it is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application.

Visual Studio will publish the application to the desired location, and in the process it will create an anonymous 'pfx' certificate to sign the deployment configuration files. A production certificate should be acquired in preparation for deployment to production.

Directory structure created by Visual Studio
Application files created by Visual Studio
Development web site (install.htm) created by Visual Studio

Migrating ClickOnce Code to a new Server without using Visual Studio

To migrate the ClickOnce application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files. The MageUI tool is usually located in the 'C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin' folder, or it can be downloaded from the web. When deploying to a new environment, copy all files in the project folder to the new server; in this case, the 'ClickOnceSample' folder and contents. The old application versions can be deleted, in this case 'ClickOnceSample_1_0_0_0' and 'ClickOnceSample_1_0_0_1'. Open IIS Manager and create a virtual directory that points to the project folder. Also make publish.htm the default web page.

Run the MageUI tool and then open the .application file in the root project folder (in this case, in the 'ClickOnceSample' folder). Click on Deployment Options in the left-hand list, update the URL to the new server URL, and save the changes. When MageUI tries to save the file it will prompt for the file to be signed. This step cannot be bypassed if you want the ClickOnce deployment to work from a web site. The easiest solution to this for testing is to use the auto-generated certificate that Visual Studio created for the project. This certificate can be found with the project source code. To save time, go to File > Preferences and configure the 'Use default signing certificate' fields. Future deployments will then only require application files to be transferred to the new server; the only difference is that in the .application file the 'Version' must be updated to match the new version and the 'Application Reference' has to be updated to point to the new .manifest file.

Updating the Configuration File of a ClickOnce Deployment Package without using Visual Studio

When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration. We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes. A new version of the application can instead be created by copying the latest version folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the Application Files directory. Rename the new directory 'ClickOnceSample_1_0_0_3'. In the new folder, open the configuration file in Notepad and make the configuration changes. Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3). Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3). Then save the file.
Open the .application file in the root folder. Again update the version to 1.0.0.3. Since the file location has not changed, the Deployment Options/Start Location URL should still be correct. The Application Reference needs to be updated to point to the new version's .manifest file. Save the file.

The next time a user runs the application, the new version of the configuration file will be downloaded. It is worth noting that there are 2 different types of configuration parameter: application and user. With ClickOnce deployment the difference is significant. When an application is downloaded, the configuration file is also brought down to the client machine. The developer may have written code to update the user parameters in the application. As a result, each time a new version of the application is downloaded, the user parameters are at risk of being overwritten. With ClickOnce deployment the system knows whether the user parameters still hold their default values. If they do, they will be overwritten with the new default values in the configuration file. If they have been updated by the user, they will not be overwritten. (A short sketch of the two setting scopes follows below.)

Settings configuration view in Visual Studio

Production Deployment

When deploying the code to production, it is prudent to disable the development and test deployment sites. This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment. If the sites are active, there is no way to know whether the application was downloaded from the production deployment or redirected to test or dev.

Troubleshooting

Clicking the install button on the install.htm page fails.

Error: URLDownloadToCacheFile failed with HRESULT '-2146697210'
Error: An error occurred trying to download <file>

This is due to the setup.exe file pointing to the wrong location. As noted above: 'The setup.exe file that is created has the install URL hardcoded in it, and it is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application.'
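To make the application- versus user-scope distinction concrete, here is a minimal sketch using the standard .NET ApplicationSettingsBase class (the class and setting names are invented for illustration; this is not code from the article):

    using System;
    using System.Configuration;

    public class SampleSettings : ApplicationSettingsBase
    {
        // User-scoped: preserved across ClickOnce updates once the user
        // has changed it from its default value.
        [UserScopedSetting]
        [DefaultSettingValue("en-US")]
        public string PreferredLanguage
        {
            get { return (string)this["PreferredLanguage"]; }
            set { this["PreferredLanguage"] = value; }
        }

        // Application-scoped: always read from the newly downloaded
        // configuration file, so it is safe to change between versions.
        [ApplicationScopedSetting]
        [DefaultSettingValue("https://example.com/api")]
        public string ServiceUrl
        {
            get { return (string)this["ServiceUrl"]; }
        }
    }

    class Program
    {
        static void Main()
        {
            var settings = new SampleSettings();
            Console.WriteLine(settings.PreferredLanguage); // "en-US" until the user changes it
            settings.PreferredLanguage = "fr-FR";          // a user override survives updates
            settings.Save();                               // persists to the per-user store
        }
    }

This mirrors the behaviour described above: user-scoped values that still hold their defaults are replaced by a new deployment's defaults, while values the user has changed are kept.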

    Read the article

  • OTN ???? ?????? ???????

    - by Yusuke.Yamamoto
[On-demand webcast catalog from OTN Japan: several hundred recorded seminars (WMV/MP4 formats, dated 2009-2012), organized by category: Database (ORACLE MASTER certification tracks, administration, performance tuning, backup/recovery with RMAN, upgrade, PL/SQL, SQL Developer, APEX, XML, Oracle Text, Data Guard, Real Application Clusters, Oracle Database Firewall, GoldenGate, Oracle Data Integrator, partitioning, Exadata, TimesTen, ASM/ACFS, Oracle Enterprise Manager 12c, Oracle Database Appliance, Oracle Data Mining), Java (Java EE 6, JRockit), WebLogic Server/application grid (Coherence, Exalogic), SOA/BPM/CEP, groupware, identity management/security, EPM/BI, OS (Solaris, ZFS, DTrace, Oracle Linux), hardware (SPARC), storage (Pillar Axiom 600, Sun Storage 7000), Windows/.NET, and MySQL (MySQL Cluster). The Japanese titles themselves are unrecoverable due to character-encoding corruption in the source.]

    Read the article

  • Windows 8 Will be Here Tomorrow; but Should Silverlight be Gone Today?

    - by andrewbrust
The software industry lives within an interesting paradox. IT in the enterprise moves slowly and cautiously, upgrading only when safe and necessary. IT interests intentionally live in the past. On the other hand, developers and Independent Software Vendors (ISVs) not only want to use the latest and greatest technologies, but this constituency prides itself on gauging tech's future and basing its present-day strategy upon it. Normally, we as an industry manage this paradox with a shrug of the shoulder and musings along the lines of "it takes all kinds." Different subcultures have different tendencies. So be it.

Microsoft, with its Windows operating system (OS), can't take such a laissez-faire view of the world though. Redmond relies on IT to deploy Windows and (at the very least) influence its procurement, but it also relies on developers to build software for Windows, especially software that has a dependency on features in new versions of the OS. It must indulge and nourish developers' fetish for an early birthing of the next generation of software, even as it acknowledges the IT reality that the next wave will arrive on schedule in Redmond and will travel very slowly to end users.

With the move to Windows 8, and the corresponding shift in application development models, this paradox is certainly in place. On the one hand, the next version of Windows is widely expected sometime in 2012, and its full-scale deployment will likely push into 2014 or even later. Meanwhile, there's a technology that runs on today's Windows 7, will continue to run in the desktop mode of Windows 8 (the next version's codename), and provides absolutely the best architectural bridge to the Windows 8 Metro-style application development stack. That technology is Silverlight. And given what we now know about Windows 8, one might think, as I do, that Microsoft ecosystem developers should be flocking to it.

But because developers are trying to get a jump on the future, and since many of them believe the impending v5.0 release of Silverlight will be the technology's last, not everyone is flocking to it; in fact some are fleeing from it. Is this sensible? Is it not unprecedented? What options does it lead to? What's the right way to think about the situation?

Is v5.0 really the last major version of the technology called Silverlight? We don't know. But Scott Guthrie, the "father" and champion of the technology, left the Developer Division of Microsoft months ago to work on the Windows Azure team, and he took his people with him. John Papa, who was a very influential Redmond-based evangelist for Silverlight (and is a Visual Studio Magazine author), left Microsoft completely. About a year ago, when initial suspicion of Silverlight's demise reached significant magnitude, Papa interviewed Guthrie on video and their discussion served to dispel developers' fears; but now they've moved on.

So read into that what you will, and let's suppose, for the sake of argument, that the speculation is correct and Silverlight's days of major revision and iteration are over. Let's assume the shine and glimmer has dimmed. Let's assume that any Silverlight application written today, and therefore any investment of financial and human resources made in Silverlight development today, is destined for rework and extra investment in a few years if the application's platform needs to stay current. Is this really so different from any technology investment we make?
Every framework, language, runtime and operating system is subject to change, to improvement, to flux and, yes, to obsolescence. What differs from project to project is how near-term that obsolescence is and how disruptive the change will be. The shift from .NET 1.1 to 2.0 was incremental. Some of the further changes were too. But the switch from Windows Forms to WPF was major, and the change from ASP.NET Web Services (asmx) to Windows Communication Foundation (WCF) was downright fundamental.

Meanwhile, the transition to the .NET development model for Windows 8 Metro-style applications is actually quite gentle. The finer points of this subject are covered nicely in Magenic's excellent white paper "Assessing the Windows 8 Development Platform." As the authors of that paper (including Rocky Lhotka) point out, Silverlight code won't just "port" to Windows 8. And, no, Silverlight user interfaces won't either; Metro always supports XAML, but that relationship is not commutative. But the concepts, the syntax, the architecture and developers' skills map from Silverlight to Windows 8 Metro and the Windows Runtime (WinRT) very nicely. That's not a coincidence. It's not an accident. This is a protected transition. It's not a slap in the face.

There are a few things that are unnerving about this transition, which make it seem markedly different from others:

- The assumed end of the road for Silverlight is something many think they can see. Instead of being ignorant of the technology's expiration date, we believe we know it. If ignorance is bliss, it would seem our situation lacks it.
- The new technology involving WinRT and Metro involves a name change from Silverlight.
- .NET, which underlies both Silverlight and the XAML approach to WinRT development, has just about reached 10 years of age. That's equivalent to 80 in human years, or so many fear.

My take is that the combination of these three factors has contributed to what for many is a psychologically compelling case that Silverlight should be abandoned today and HTML 5 (the agnostic kind, not the Windows RT variety) should be embraced in its stead. I understand the logic behind that. I appreciate the preemptive, proactive, vigilant conscientiousness involved in its calculus. But for a great many scenarios, I don't agree with it.

HTML 5 clients, no matter how impressive their interactivity and their emulation of native application interfaces may be, are still second-class clients. They are getting better, especially when hardware acceleration and fast processors are involved. But they still lag. They still feel like they're emulating something, like they're prototypes, like they're not comfortable in their own skins. They are based on compromise, and they feel compromised too.

HTML 5/JavaScript development tools are getting better, and will get better still, but they are not as productive as tools for other environments, like Flash, like Silverlight, or even the more primitive tooling for iOS or Android. HTML's roots as a document markup language, rather than an application interface, create a disconnect that impedes productivity. I do not necessarily think that problem is insurmountable, but it's here today.

If you're building line-of-business applications, you need a first-class client and you need productivity. Lack of productivity increases your costs and worsens your backlog. A second-class client will erode user satisfaction, which is never good.
Worse yet, this erosion will be inconspicuous, rather than easily identified and diagnosed, because the inferiority of an HTML 5 client compared to a native one is hard to pinpoint and, notably, doing so at this juncture in the industry is unpopular. Why would you fault a technology that everyone believes is revolutionary? Instead, user disenchantment will remain latent and yet will add to the malaise caused by slower development.

If you're an ISV and you're coveting the reach of running multi-platform, it's a different story. You've likely wanted to move to HTML 5 already, and the uncertainty around Silverlight may be the only remaining momentum or pretext you need to make the shift. You're deploying many more copies of your application than a line-of-business developer is anyway; this makes the economic hit from lower productivity less impactful, and the wider potential installed base might even make it profitable.

But no matter who you are, it's important to take stock of the situation and do it accurately. Continued but merely incremental changes in a development model lead to conservatism and a general lack of innovation in the underlying platform. Periods of stability and equilibrium are necessary, but permanence in that equilibrium leads to loss of platform relevance, market share and utility. Arguably, that's already happened to Windows. The change Windows 8 brings is necessary and overdue. The marked changes in how we use .NET to build applications for the new OS are inevitable. We will ultimately benefit from the change, and what we can reasonably hope for in the interim is a migration path for our code and skills that is navigable, logical and conceptually comfortable.

That path takes us to a place called WinRT, rather than a place called Silverlight. But considering everything that is changing for the good, the number of disruptive changes is impressively minimal. The name may be changing, and there may even be some significance to that in terms of Microsoft's internal management of products and technologies. But as the consumer, you should care about the ingredients, not the name. Turkish coffee and Greek coffee are much the same. Although you'll find plenty of interested parties who find the names significant, drinkers of the beverage should enjoy either one. It's all coffee, it's all sweet, and you can tell your fortune from the grounds that are left at the end. Back on the software side, it's all XAML, and C# or VB .NET, and you can make your fortune from the product that comes out at the end. Coffee drinkers wouldn't switch to tea. Why should XAML developers switch to HTML?

    Read the article

  • Issue 15: SVP Focus

    - by rituchhibber
SVP FOCUS -- Chris Baker, SVP Oracle Worldwide ISV-OEM-Java Sales

Chris Baker is the Global Head of ISV/OEM Sales, responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills.

RESOURCES -- Oracle PartnerNetwork (OPN) | OPN Solutions Catalog | Oracle Exastack Program | Oracle Exastack Optimized | Oracle Cloud Computing | Oracle Engineered Systems | Oracle and Java

"By taking part in marketing activities, our partners accelerate their sales cycles."

Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA?

Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straightforward: we want all of our ISV partners and OEMs to concentrate on the things that they do best, building applications to meet the unique industry and functional requirements of their customers. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience.

We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification, because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that the architecture is the same, in order to keep cost and time overheads to a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success.

How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?

We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic.
Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies, which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions, and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems.

We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized', for partners whose applications run best on the Oracle stack and who have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date, several hundred organisations have successfully worked through our Exastack Optimized programme.

How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?

One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them fulfil these roles, they have to build it themselves. This takes time, requires testing, and must be maintained.

By taking advantage of our technology, partners now know that they have a standard platform. They know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to implement in an on-premise environment. But it wasn't just the configuration of the application that took the time; it was the infrastructure: the different hardware configurations, operating systems, and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry.

What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?

My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities; for example, the Machine-to-Machine (M2M) market, or 'The Internet of Things'. Over the next few years, many millions, indeed billions, of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications.
The only way that our partners will be able to provide a single-vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data—a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle. What opportunities are immediately opened to new ISV partners joining the OPN? As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation. Finally, are there any other messages that you would like to share with the Oracle ISV community? The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "Will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud?" The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them. Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. 
With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic. High Level Steps: 1. Install and configure Visual Studio 2012 Test Controller on the target server; 2. Create Standard Environment; 3. Create Test Plan with Test Case; 4. Run Test Case; 5. Create Coded UI Test from Test Case; 6. Associate Coded UI Test with Test Case; 7. Create Build Definition using LabDefaultTemplate. 1. Install and Configure Visual Studio 2012 Test Controller on Target Server First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end, which might or might not cause a problem later. Be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio. 2. Create Standard Environment Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment. Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com), and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. 
You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine. On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings. Press Finish. This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted. 3-5. Create Test Plan, Run Test Case, Create Coded UI Test I will not cover steps 3-5 here; there is plenty of information on how you create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests. 6. Associate Coded UI Test with Test Case OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the coded UI test that corresponds to the test case. Press OK and then save the test case. For more information about associating an automated test case with a test case, see How to: Associate an Automated Test with a Test Case. 7. Create Build Definition using LabDefaultTemplate Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remotely deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual. Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build. But TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. 
Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property. First, select the environment that you created before. Select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it basically just runs through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition. On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment, which you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step. The workflow defines some variables that are useful when running the deployments. These variables are: $(BuildLocation) The full path to where your build files are located $(InternalComputerName_<VM Name>) The computer name for a virtual machine in an SCVMM environment $(ComputerName_<VM Name>) The fully qualified domain name of the virtual machine As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts. On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. In it I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests. We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this.
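    As a companion to step 6 above, here is a minimal sketch of the kind of Coded UI test that can be associated with a test case and executed by the BDT build. This is not from the original walkthrough: the class name, test method and URL are hypothetical placeholders, and it assumes a Visual Studio 2012 edition that includes the Coded UI test project type.

    using System;
    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class MyApplicationUITests   // hypothetical name
    {
        [TestMethod]
        public void VerifyHomePageLoads()
        {
            // Launch the application that the deployment scripts installed.
            // The URL is a placeholder for wherever your deployment puts the app.
            BrowserWindow browser = BrowserWindow.Launch(new Uri("http://targetserver/MyApplication"));

            // A deliberately simple smoke assertion: the deployed site responded at all.
            Assert.IsTrue(browser.Exists, "The application did not respond after deployment.");

            browser.Close();
        }
    }

    A smoke test like this makes a good first automated test case for the Automated Tests suite, since it fails fast when the deployment scripts themselves go wrong.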

    Read the article

  • LWJGL SlickUtil Texture Binding

    - by Matthew Dockerty
    I am making a 3D game using LWJGL and I have a texture class with static variables so that I only need to load textures once, even if I need to use them more than once. I am using Slick Util for this. When I bind a texture it works fine, but then when I try to render something else after I have rendered the model with the texture, the texture is still bound. How do I unbind the texture and set the render mode back to the one that was in use before any textures were bound? Some of my code is below. The problem I am having is that the player texture is being used in the box drawn around the player after the model has been rendered. Model.java public class Model { public List<Vector3f> vertices = new ArrayList<Vector3f>(); public List<Vector3f> normals = new ArrayList<Vector3f>(); public ArrayList<Vector2f> textureCoords = new ArrayList<Vector2f>(); public List<Face> faces = new ArrayList<Face>(); public static Model TREE; public static Model PLAYER; public static void loadModels() { try { TREE = OBJLoader.loadModel(new File("assets/model/tree_pine_0.obj")); PLAYER = OBJLoader.loadModel(new File("assets/model/player.obj")); } catch (Exception e) { e.printStackTrace(); } } public void render(Vector3f position, Vector3f scale, Vector3f rotation, Texture texture, float shinyness) { glPushMatrix(); { texture.bind(); glColor3f(1, 1, 1); glTranslatef(position.x, position.y, position.z); glScalef(scale.x, scale.y, scale.z); glRotatef(rotation.x, 1, 0, 0); glRotatef(rotation.y, 0, 1, 0); glRotatef(rotation.z, 0, 0, 1); glMaterialf(GL_FRONT, GL_SHININESS, shinyness); glBegin(GL_TRIANGLES); { for (Face face : faces) { Vector2f t1 = textureCoords.get((int) face.textureCoords.x - 1); glTexCoord2f(t1.x, t1.y); Vector3f n1 = normals.get((int) face.normal.x - 1); glNormal3f(n1.x, n1.y, n1.z); Vector3f v1 = vertices.get((int) face.vertex.x - 1); glVertex3f(v1.x, v1.y, v1.z); Vector2f t2 = textureCoords.get((int) face.textureCoords.y - 1); glTexCoord2f(t2.x, t2.y); Vector3f n2 = normals.get((int) face.normal.y - 1); glNormal3f(n2.x, n2.y, n2.z); Vector3f v2 = vertices.get((int) face.vertex.y - 1); glVertex3f(v2.x, v2.y, v2.z); Vector2f t3 = textureCoords.get((int) face.textureCoords.z - 1); glTexCoord2f(t3.x, t3.y); Vector3f n3 = normals.get((int) face.normal.z - 1); glNormal3f(n3.x, n3.y, n3.z); Vector3f v3 = vertices.get((int) face.vertex.z - 1); glVertex3f(v3.x, v3.y, v3.z); } texture.release(); } glEnd(); } glPopMatrix(); } } Textures.java public class Textures { public static Texture FLOOR; public static Texture PLAYER; public static Texture SKYBOX_TOP; public static Texture SKYBOX_BOTTOM; public static Texture SKYBOX_FRONT; public static Texture SKYBOX_BACK; public static Texture SKYBOX_LEFT; public static Texture SKYBOX_RIGHT; public static void loadTextures() { try { FLOOR = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/model/floor.png"))); FLOOR.setTextureFilter(GL11.GL_NEAREST); PLAYER = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/model/tree_pine_0.png"))); PLAYER.setTextureFilter(GL11.GL_NEAREST); SKYBOX_TOP = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_top.png"))); SKYBOX_TOP.setTextureFilter(GL11.GL_NEAREST); SKYBOX_BOTTOM = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_bottom.png"))); SKYBOX_BOTTOM.setTextureFilter(GL11.GL_NEAREST); SKYBOX_FRONT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_front.png"))); 
SKYBOX_FRONT.setTextureFilter(GL11.GL_NEAREST); SKYBOX_BACK = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_back.png"))); SKYBOX_BACK.setTextureFilter(GL11.GL_NEAREST); SKYBOX_LEFT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_left.png"))); SKYBOX_LEFT.setTextureFilter(GL11.GL_NEAREST); SKYBOX_RIGHT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_right.png"))); SKYBOX_RIGHT.setTextureFilter(GL11.GL_NEAREST); } catch (Exception e) { e.printStackTrace(); } } } Player.java public class Player { private Vector3f position; private float yaw; private float moveSpeed; public Player(float x, float y, float z, float yaw, float moveSpeed) { this.position = new Vector3f(x, y, z); this.yaw = yaw; this.moveSpeed = moveSpeed; } public void update() { if (Keyboard.isKeyDown(Keyboard.KEY_W)) walkForward(moveSpeed); if (Keyboard.isKeyDown(Keyboard.KEY_S)) walkBackwards(moveSpeed); if (Keyboard.isKeyDown(Keyboard.KEY_A)) strafeLeft(moveSpeed); if (Keyboard.isKeyDown(Keyboard.KEY_D)) strafeRight(moveSpeed); if (Mouse.isButtonDown(0)) yaw += Mouse.getDX(); LowPolyRPG.getInstance().getCamera().setPosition(-position.x, -position.y, -position.z); LowPolyRPG.getInstance().getCamera().setYaw(yaw); } public void walkForward(float distance) { position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw))); position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw))); } public void walkBackwards(float distance) { position.setX(position.getX() - distance * (float) Math.sin(Math.toRadians(yaw))); position.setZ(position.getZ() + distance * (float) Math.cos(Math.toRadians(yaw))); } public void strafeLeft(float distance) { position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw - 90))); position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw - 90))); } public void strafeRight(float distance) { position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw + 90))); position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw + 90))); } public void render() { Model.PLAYER.render(new Vector3f(position.x, position.y + 12, position.z), new Vector3f(3, 3, 3), new Vector3f(0, -yaw + 90, 0), Textures.PLAYER, 128); GL11.glPushMatrix(); GL11.glTranslatef(position.getX(), position.getY(), position.getZ()); GL11.glRotatef(-yaw, 0, 1, 0); GL11.glScalef(5.8f, 21, 2.2f); GL11.glDisable(GL11.GL_LIGHTING); GL11.glLineWidth(3); GL11.glBegin(GL11.GL_LINE_STRIP); GL11.glColor3f(1, 1, 1); glVertex3f(1f, 0f, -1f); glVertex3f(-1f, 0f, -1f); glVertex3f(-1f, 1f, -1f); glVertex3f(1f, 1f, -1f); glVertex3f(-1f, 0f, 1f); glVertex3f(1f, 0f, 1f); glVertex3f(1f, 1f, 1f); glVertex3f(-1f, 1f, 1f); glVertex3f(1f, 1f, -1f); glVertex3f(-1f, 1f, -1f); glVertex3f(-1f, 1f, 1f); glVertex3f(1f, 1f, 1f); glVertex3f(1f, 0f, 1f); glVertex3f(-1f, 0f, 1f); glVertex3f(-1f, 0f, -1f); glVertex3f(1f, 0f, -1f); glVertex3f(1f, 0f, 1f); glVertex3f(1f, 0f, -1f); glVertex3f(1f, 1f, -1f); glVertex3f(1f, 1f, 1f); glVertex3f(-1f, 0f, -1f); glVertex3f(-1f, 0f, 1f); glVertex3f(-1f, 1f, 1f); glVertex3f(-1f, 1f, -1f); GL11.glEnd(); GL11.glEnable(GL11.GL_LIGHTING); GL11.glPopMatrix(); } public Vector3f getPosition() { return new Vector3f(-position.x, -position.y, -position.z); } public float getX() { return position.getX(); } public float getY() { return position.getY(); } public float getZ() { return position.getZ(); } public void 
setPosition(Vector3f position) { this.position = position; } public void setPosition(float x, float y, float z) { this.position.setX(x); this.position.setY(y); this.position.setZ(z); } } Thanks for the help.
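    A possible fix, sketched here as an editorial note rather than taken from the thread: OpenGL rejects most state changes made between glBegin() and glEnd() (they raise GL_INVALID_OPERATION), so the texture.release() call placed inside the glBegin(GL_TRIANGLES) block never takes effect. Moving the unbind after glEnd() and binding the default texture (id 0) restores untextured rendering. The method below is a simplified stand-in for the render() method above, not a drop-in replacement.

    public void render(/* position, scale, rotation, texture, shinyness */) {
        GL11.glPushMatrix();
        texture.bind();                              // Slick-Util binds to GL_TEXTURE_2D
        GL11.glBegin(GL11.GL_TRIANGLES);
        // ... emit glTexCoord2f / glNormal3f / glVertex3f calls for each face ...
        GL11.glEnd();                                // close the primitive batch first
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);   // then unbind: untextured drawing works again
        GL11.glPopMatrix();
    }

    Slick-Util's TextureImpl.bindNone() should achieve the same unbinding, if it is available in the version in use.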

    Read the article

  • Do you play Sudoku?

    - by Gilles Haro
    Did you know that the 11gR2 database can solve a Sudoku puzzle with a single query, most of the time in less than a second? The following query shows you how! Simply pass a flattened Sudoku grid to it and get the result instantaneously!
col "Solution" format a9
col "Problem" format a9
with Iteration( initialSudoku, Step, EmptyPosition ) as
( select initialSudoku, InitialSudoku, instr( InitialSudoku, '-' )
    from ( select '--64----2--7-35--1--58-----27---3--4---------4--2---96-----27--7--58-6--3----18--' InitialSudoku from dual )
  union all
  select initialSudoku
       , substr( Step, 1, EmptyPosition - 1 ) || OneDigit || substr( Step, EmptyPosition + 1 )
       , instr( Step, '-', EmptyPosition + 1 )
    from Iteration
       , ( select to_char( rownum ) OneDigit from dual connect by rownum <= 9 ) OneDigit
   where EmptyPosition > 0
     and not exists
       ( select null
           from ( select rownum IsPossible from dual connect by rownum <= 9 )
          where OneDigit = substr( Step, trunc( ( EmptyPosition - 1 ) / 9 ) * 9 + IsPossible, 1 )   -- One line must contain the 1-9 digits
             or OneDigit = substr( Step, mod( EmptyPosition - 1, 9 ) - 8 + IsPossible * 9, 1 )      -- One row must contain the 1-9 digits
             or OneDigit = substr( Step, mod( trunc( ( EmptyPosition - 1 ) / 3 ), 3 ) * 3           -- One square must contain the 1-9 digits
                         + trunc( ( EmptyPosition - 1 ) / 27 ) * 27 + IsPossible
                         + trunc( ( IsPossible - 1 ) / 3 ) * 6 , 1 )
       ) )
select initialSudoku "Problem", Step "Solution"
  from Iteration
 where EmptyPosition = 0 ;
The magic behind this is called Recursive Subquery Factoring. The Oracle documentation gives the following definition: If a subquery_factoring_clause refers to its own query_name in the subquery that defines it, then the subquery_factoring_clause is said to be recursive. A recursive subquery_factoring_clause must contain two query blocks: the first is the anchor member and the second is the recursive member. The anchor member must appear before the recursive member, and it cannot reference query_name. The anchor member can be composed of one or more query blocks combined by the set operators: UNION ALL, UNION, INTERSECT or MINUS. The recursive member must follow the anchor member and must reference query_name exactly once. You must combine the recursive member with the anchor member using the UNION ALL set operator. This new feature is a replacement for the old Hierarchical Query feature that has existed in Oracle since the days of Aladdin (well, at least, release 2 of the database in 1977). Everyone remembers the old syntax:
select empno, ename, job, mgr, level
  from emp
 start with mgr is null
connect by prior empno = mgr;
that could/should be rewritten (but not as often as it should) as
with T_Emp (empno, name, job, mgr, hierlevel) as
     ( select empno, ename, job, mgr, level
         from emp
        start with mgr is null
      connect by prior empno = mgr )
select * from T_Emp;
which uses the "with" syntax, whose main advantage is to improve the readability of the query. Although very efficient, this syntax had the disadvantage of being a non-ANSI SQL syntax. The ANSI SQL version of Hierarchical Query is called Recursive Subquery Factoring. As of 11gR2, Oracle became compliant with ANSI SQL and introduced Recursive Subquery Factoring. 
It is basically an extension of the "with" clause that enables recursion. Now, the new syntax for the query would be
with T_Emp (empno, name, job, mgr, hierlevel) as
     ( select E.empno, E.ename, E.job, E.mgr, 1 from emp E where E.mgr is null
       union all
       select E.empno, E.ename, E.job, E.mgr, T.hierlevel + 1 from emp E
                                                              join T_Emp T on ( E.mgr = T.empno ) )
select * from T_Emp;
The anchor member is a replacement for the "start with". The recursive member is processed through iterations. It joins the source table (EMP) with the result of the recursive query itself (T_Emp). Each iteration works with the results of all its preceding iterations:
Iteration 1 works on the results of the first query.
Iteration 2 works on the results of Iteration 1 and the first query.
Iteration 3 works on the results of Iteration 1, Iteration 2 and the first query.
So, knowing that, the Sudoku query is self-explanatory: the anchor member contains the "Problem" - the initial Sudoku and the position of the first "hole" in the grid. The recursive member tries to replace the considered hole with any of the 9 digits that would satisfy the 3 rules of Sudoku. Recursion progresses through the grid until it is complete. Another example: Fibonacci numbers, u(n) = u(n-1) + u(n-2):
with Fib (u1, u2, depth) as
  (select 1, 1, 1 from dual
   union all
   select u1+u2, u1, depth+1 from Fib where depth<10)
select u1 from Fib;
Conclusion Oracle brings here a new feature (which, to be honest, already existed in other competing systems) and extends the power of the database to new boundaries. It's now up to developers to try and test it and find more useful applications than solving puzzles… But still, solving a Sudoku in less time than it takes to say it remains impressive… Interesting links: You might be interested in the following links, which cover different aspects of this feature: Oracle Documentation Lucas Jellema's Blog Fibonacci Numbers

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about: a joint effort with the MySQL Optimizer team that simplifies some common queries' query plans and dramatically shortens query time. I will describe the issue, our solution and the end result, with some performance numbers, to demonstrate our continuing enhancement of the Full-Text Search capability. The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. Once parsed, a query is reinterpreted as several queries against the related auxiliary tables, and the results are then merged and consolidated to produce the final result. So at the end of the query, we'll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing were initially designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples: Case 1: Query result ordered by rank with only top N results: mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1; In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary, given that InnoDB already has the answer. In a real-life case, the user could have millions of rows, so in the old scheme it would retrieve and sort millions of rows' rankings, even if our FTS had already found there were only 3 matched rows. The million-row ranking retrieval is done in vain. In the above case, it should just ask for the 3 matched rows' rankings; all other rows' rankings are 0. If it wants the top ranking, it can just get the first record from our already sorted result. Case 2: SELECT COUNT(*) on matching records: mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE); In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking was larger than 0, and a count was produced afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records. The only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big the difference can be: mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people') +----------+ | count(*) | +----------+ | 666877 | +----------+ 1 row in set (16 min 17.37 sec) So the query took almost 16 minutes. Let's see how quickly InnoDB can come up with the result. 
In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this will print out extra query info: Error log: keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10 ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10 The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So a large amount of time was wasted on unneeded row fetching. The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of the savings from doing this include: 1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort again to get the top matching results. 2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; all rows not in the result list have a ranking of 0 and do not need to be retrieved. And InnoDB has a count of total matching records on hand - no need to recount. 3) Covered index scan. InnoDB results always contain the matching records' Document IDs and their rankings. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself. 4) Narrow the search result early and reduce user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We should be able to first select the top N matching Doc IDs, and then fetch only the corresponding records with these Doc IDs. Performance Results and comparison with MyISAM The result of this change is very obvious. I include six test results performed by Alexander Rubin just to demonstrate how fast the InnoDB query now is compared with MyISAM Full-Text Search. These tests are based on English Wikipedia data of 5.4 million rows in an approximately 16 GB table. The test was performed on a machine with one dual-core CPU, an SSD drive, 8 GB of RAM, and the InnoDB buffer pool set to 8 GB. Table 1: SELECT with LIMIT CLAUSE mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;
Time for the query: InnoDB 1.63 sec; MyISAM 3 min 26.31 sec (InnoDB 127 times faster)
You can see that for this particular query (retrieving the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM. Table 2: SELECT COUNT QUERY mysql> select count(*) from si where match(si_title, si_text) against('family'); +----------+ | count(*) | +----------+ | 293955 | +----------+
Time for the query: InnoDB 1.35 sec; MyISAM 28 min 59.59 sec (InnoDB 1289 times faster)
In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour - that is about 1289 times faster! 
Table 3: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
Term                                       | InnoDB   | MyISAM    | Times Faster
family                                     | 0.5 sec  | 5.05 sec  | 10.1
family film                                | 0.95 sec | 25.39 sec | 26.7
Pizza restaurant orange county California  | 0.93 sec | 32.03 sec | 34.4
President united states of America         | 2.5 sec  | 36.98 sec | 14.8
Table 4: SELECT title and text with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
Term                                       | InnoDB   | MyISAM    | Times Faster
family                                     | 0.61 sec | 41.65 sec | 68.3
family film                                | 1.15 sec | 47.17 sec | 41.0
Pizza restaurant orange county california  | 1.03 sec | 48.2 sec  | 46.8
President united states of america         | 2.49 sec | 44.61 sec | 17.9
Table 5: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
Term                                       | InnoDB   | MyISAM    | Times Faster
family                                     | 0.5 sec  | 5.05 sec  | 10.1
family film                                | 0.95 sec | 25.39 sec | 26.7
Pizza restaurant orange county califormia  | 0.93 sec | 32.03 sec | 34.4
President united states of america         | 2.5 sec  | 36.98 sec | 14.8
Table 6: SELECT COUNT(*) mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;
Term                                       | InnoDB   | MyISAM    | Times Faster
family                                     | 0.47 sec | 82 sec    | 174.5
family film                                | 0.83 sec | 131 sec   | 157.8
Pizza restaurant orange county califormia  | 0.74 sec | 106 sec   | 143.2
President united states of america         | 1.96 sec | 220 sec   | 112.2
Again, Tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It becomes obvious that InnoDB has a great advantage over MyISAM in handling large-data search. Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result info has not been fully taken advantage of, which means we still have great room to improve. And we will continuously explore the area, and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • Enabling XML-documentation for code contracts

    - by DigiMortal
    One nice feature that code contracts offer is updating of code documentation. If you are using the source code documenting features of Visual Studio, then code contracts may automate some tasks you would otherwise have to do manually. In this posting I will show you some XML documentation files with documented contracts. I will also explain how this feature works. Enabling XML-documentation in project settings First, let's enable generation of code documentation under project settings. Open the project properties, move to the Build page and check the checkbox called "XML documentation file". Save the project settings and rebuild the project. When the project is built, go to the bin/Debug folder and open the XML file. Here is my XML. <?xml version="1.0"?> <doc>     <assembly>         <name>Eneta.Examples.CodeContracts.Testable</name>     </assembly>     <members>         <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">             <summary>             Class for generating random integers in user specified range.             </summary>         </member>         <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">             <summary>             Constructor of Randomizer. Initializes Randomizer class.             </summary>             <param name="generator">Instance of random number generator.</param>         </member>         <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">             <summary>             Returns random integer in given range.             </summary>             <param name="min">Minimum value of random integer.</param>             <param name="max">Maximum value of random integer.</param>         </member>     </members> </doc> You can see nothing about code contracts here. Enabling code contracts documentation Code contracts have their own settings and conditions for documentation. Open the project properties and move to the Code Contracts tab. From the "Contract Reference Assembly" dropdown select Build, and check the checkbox "Emit contracts into XML doc file". And again - save the project settings, build the project and move to the bin/Debug folder. Now you can see that there are two files for XML-documentation: <assembly name>.XML <assembly name>.old.XML The first file is the documentation with contracts; the second file is the original documentation without contracts. Let's now see what is inside our new XML-documentation file. <?xml version="1.0"?> <doc>   <assembly>     <name>Eneta.Examples.CodeContracts.Testable</name>   </assembly>   <members>     <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">       <summary>             Class for generating random integers in user specified range.             </summary>     </member>     <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">       <summary>             Constructor of Randomizer. Initializes Randomizer class.             </summary>       <param name="generator">Instance of random number generator.</param>     </member>     <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">       <summary>             Returns random integer in given range.             
</summary>       <param name="min">Minimum value of random integer.</param>       <param name="max">Maximum value of random integer.</param>       <requires description="Min must be less than max" exception="T:System.ArgumentOutOfRangeException">                 min &lt; max</requires>       <exception cref="T:System.ArgumentOutOfRangeException">                 min &gt;= max</exception>       <ensures description="Return value is out of range">                 Contract.Result&lt;int&gt;() &gt;= min &amp;&amp;                 Contract.Result&lt;int&gt;() &lt;= max</ensures>     </member>   </members> </doc> As you can see, code contracts are pretty well documented. The messages that I provided with the code contracts are also available in the documentation. If I wrote good, informative messages, then they are just as useful in the contracts documentation. Code contracts and Sandcastle Sandcastle knows nothing about code contracts by default. There is a separate package of files for Sandcastle that is provided by the code contracts installation. You can read from the code contracts manual: "Sandcastle (http://www.codeplex.com/Sandcastle) is a freely available tool that generates help files and web sites describing your APIs, based on the XML doc comments in your source code. The CodeContracts install contains a set of files that can be copied over a Sandcastle installation to take advantage of the additional contract information. The produced documentation adds a contract section to methods with declared requires and/or ensures. In order for Sandcastle to produce Contract sections, you need to patch a number of files in its installation. Please refer to the Sandcastle Readme.txt found under Start Menu/CodeContracts/Sandcastle for instructions. A future release of Sandcastle will hopefully support contract sections without the need for this patching step." Integrating code contracts documentation with Sandcastle will be one of my next postings about code contracts. Conclusion If you are using code documentation, then documentation about code contracts can be added very easily. All you have to do is enable XML-documentation for contracts and build your project. Later you can use the Sandcastle files provided by the code contracts installer to integrate contracts documentation into your output documentation package.
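    For reference, the kind of contracted C# source that produces the <requires> and <ensures> entries shown above might look like the sketch below. This is my reconstruction from the generated XML, not the article's original code; in particular, the IRandomGenerator.Next member is an assumption, since only the type names appear in the XML.

    using System;
    using System.Diagnostics.Contracts;

    // Reconstructed sketch; only the type and member names appear in the XML above.
    public interface IRandomGenerator
    {
        int Next(int min, int max);   // assumed member, not shown in the article
    }

    public class Randomizer
    {
        private readonly IRandomGenerator _generator;

        /// <summary>
        /// Constructor of Randomizer. Initializes Randomizer class.
        /// </summary>
        /// <param name="generator">Instance of random number generator.</param>
        public Randomizer(IRandomGenerator generator)
        {
            _generator = generator;
        }

        /// <summary>
        /// Returns random integer in given range.
        /// </summary>
        /// <param name="min">Minimum value of random integer.</param>
        /// <param name="max">Maximum value of random integer.</param>
        public int GetRandomFromRangeContracted(int min, int max)
        {
            // These two calls are what the contracts rewriter exports into the XML
            // doc file as the <requires> and <ensures> elements seen above.
            Contract.Requires<ArgumentOutOfRangeException>(min < max, "Min must be less than max");
            Contract.Ensures(Contract.Result<int>() >= min && Contract.Result<int>() <= max,
                             "Return value is out of range");

            return _generator.Next(min, max);
        }
    }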

    Read the article

< Previous Page | 577 578 579 580 581 582 583 584 585 586 587 588  | Next Page >