Search Results

Search found 3931 results on 158 pages for 'gwt places'.

  • Book Review: Oracle ADF 11gR2 Development Beginner's Guide

    - by Grant Ronald
    Packt Publishing asked me to review Oracle ADF 11gR2 Development Beginner's Guide by Vinod Krishnan, so on a couple of long flights I managed to get through the book in a couple of sittings. One point to make clear before I go into the review: having authored "The Quick Start Guide to Fusion Development: JDeveloper and Oracle ADF", I've written a book which covers the same topic/beginner level. I also think that it's worth stating up front that I applaud anyone who has gone through the effort of writing a technical book. So well done Vinod. But on to the review: The book itself is a good breakdown of topic areas. Vinod starts with a quick tour around the IDE, which is an important step given all the work you do will be through the IDE. The book then goes through the general path that I tend to always teach: a quick overview demo, ADF BC, validation, binding, UI, task flows and then the various "add on" topics like security, MDS and advanced topics. So it covers the right topics in, IMO, the right order. I also think the writing style flows nicely as well - it's a relatively easy book to read, it doesn't get too formal and the "Have a go hero" hands-on sections will be useful for many. That said, I did pick out a number of styles/themes in the writing that I found went against the idea of a beginner's guide. For example, in writing my book, I tried to carefully avoid talking about topics not yet covered or not yet relevant at that point in someone's learning. So, if I were a new ADF developer reading this book, would I really need to know about the ADFBindingFilter and the DataBindings.cpx file on page 58? I've only just learned how to do a simple drag-and-drop application, so showing me XML configuration files relevant to the JSF/ADF lifecycle is probably going to scare me off! I found this in a couple of places; for example, the security chapter starts on page 219, but by page 222 (and most of the preceding pages are hands-on steps) we're diving into the web.xml, weblogic.xml, adf-config.xml, jsp-config.xml and jazn-data.xml. Don't get me wrong, I'm not saying you shouldn't know this, but I feel you have to get people onto a strong grounding in the concepts before showing them implementation files. Having just learned what ADF Security is, is "The initialization parameter remove.anonymous.role is set to false for the JpsFilter filter as this filter is the first filter defined in the file" really going to help me? The other theme I found that I felt didn't work was that a couple of the chapters descended into a reference guide. For example, page 159 onwards basically lists UI components and their properties, and page 87 onwards lists the attributes of ADF BC in pretty much the same way as the online help or developer guide. I have a personal aversion to any sort of help that says pretty much what the attribute name is, e.g. "Precision Rule: this option is used to set a strict precision rule", or "Property Set: this is the property set that has to be applied to the attribute". Hmmm, I think I could have worked that out myself; what I would want to know in a beginner's guide is what these are for and what I might use them for... and if I don't need them to create an emp/dept example then maybe it's better to leave them out. All that said, would the book help me - yes it would. It's obvious that Vinod knows ADF and his style is relatively easy going and the book covers all that it has to, but I think the book could have done a better job on the educational side of guiding beginners.

  • Is It Worth It To Learn Experimental Languages

    - by Xander Lamkins
    I'm a young programmer who desires to work in the field someday as a programmer. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know that it is valuable to extend what I know - to learn languages that make you think differently). I took a look online to see what languages were common. Everybody knows C and C++ (even those muggles who know so little about computers in general), so I thought, maybe I should push for C. C and C++ are nice but they are old. Things like Haskell and Forth (etc. etc. etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well and is slow because it's run by the JVM and not compiled to native code. I've been a Windows developer for quite a while. I recently started using Java - but only because it was more versatile and spreadable to other places. The problem is that it doesn't look like a very usable language for these reasons: Its most common use is for web applications and cellphone apps (specifically Android). As far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE in the language the IDE is for - it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and erratic as far as computer spec support goes. Other than that it's used for servers, but heck - I don't only want to make/configure servers. The .NET languages are nice, however: People laugh if I even mention VB.NET or C# in a serious conversation. It isn't cross-platform unless you use Mono (which is still in development and has some improvements to be made). It lacks low-level stuff because, like Java with the JVM, it is run/managed by the CLR. My first thought was learning something like C and then using it to springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute. What I've Looked Into: Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish between the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of being completely compiled. D also looks nice. It seems like a very usable language and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice and focused on other things, such as Opa for web development and Go by Google. My Question: Is it worth learning these "experimental" languages? I've read other questions saying that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be-old) programming languages. I know that many people see this as something important, but would any of you ever actually consider (assuming you didn't already know it) FORTRAN? My goal is to stay current to make sure I'm successful in the future. Disclaimer: Yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING!
I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused some incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments.

  • Workflow versioning

    - by Nitra
    I believe I have a fundamental misunderstanding when it comes to workflow engines, which I would appreciate your help in sorting out. I'm not sure if my misunderstanding is specific to the workflow engine I'm using, or if it's a general misunderstanding. I happen to use Windows Workflow Foundation (WWF). TLDR version: WWF allows you to implement business processes in long-running workflows (think months or even years). Once started, the workflows can't be changed. But what business process never changes? And if a business process changes, wouldn't you want your software to reflect this change for already started 'instances' of the business process? What am I missing? Background: In WWF you define a workflow by combining a set of activities. There are different types of activities - some of them are for flow control, such as the IfElseActivity and the WhileActivity, while others allow you to perform actual tasks, such as the CodeActivity which allows you to run .NET code and the InvokeWebServiceActivity which allows you to call web services. The activities are combined into a workflow using a visual designer. You pretty much drag-and-drop activities from a toolbox to a designer area and connect the activities to each other. The workflow and activities have input parameters, output parameters and variables. We have a single workflow which sometimes runs in a matter of a few days, but it may run for 5-6 months. WWF takes care of persisting the workflow state (what activity we are currently executing, what the variable values are, and so on). So far I think WWF makes sense. Some people will prefer to implement a software representation of a business process using a visual designer over writing all of it in code. So what's the issue then? What I don't really get is the following: WWF is designed to take care of long-running workflows, but at the same time WWF has no built-in functionality which allows you to modify the running workflows. So if you model a business process using a workflow and run that for 6 months, you'd better hope that the business process does not change. Because if it does, you'll have to have multiple versions of the workflow executing at the same time. This seems like a fundamental design mistake to me, but at the same time it seems more likely that I've misunderstood something. For us, this has had some real-world effects: We release new versions every month, but some workflows may run for a year. This means that we have several versions of the workflow running in parallel, in other words several versions of the business logic. This is the same as having many different versions of your code running in production in the same system at the same time, which becomes a bit hard for users to understand. (Depending on whether they clicked a 'Start' button 9 or 10 months ago, the software will behave differently.) Our workflow refers to different types of entities, and since WWF has now persisted and serialized these, we can't really refactor the entities, because then existing workflows can't be resumed (deserialization will fail). We've received some suggestions on how to handle this: When we create a new version of the workflow, cancel all running workflows and create new ones. But in our workflows there's a lot of manual work involved, and if we start from scratch a lot of people have to redo their work. Track what has been done in the workflow, and when you create a new one, skip activities which have already been executed.
I feel that this alternative may work for simple workflows, but it becomes hairy to automatically figure out which activities to skip if there's major refactoring done to a workflow. When we create a new version of the workflow, upgrade old versions using the new WWF 4.5 functionality for upgrading workflows. But then we would have to skip using the visual designer and write code to inject activities in the right places in the workflow. According to MSDN, this upgrade functionality is only intended for minor bug fixes and not larger changes. What am I missing?
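
    One way to picture the usual answer to this - side-by-side versioning - is the hedged, engine-agnostic Java sketch below. Every name here is illustrative; the point is only that each persisted instance records the definition version it was started with and is always resumed against that same definition (WF 4.5's WorkflowIdentity is the built-in counterpart of this idea).

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    // Hedged sketch of side-by-side workflow versioning; all names are
    // illustrative and no real workflow-engine API is used.
    class WorkflowRegistry {
        // version -> workflow definition (reduced to a single step for brevity)
        private final Map<Integer, Consumer<String>> definitions = new HashMap<>();

        WorkflowRegistry() {
            definitions.put(1, id -> System.out.println("v1 business rules for " + id));
            definitions.put(2, id -> System.out.println("v2 business rules for " + id));
        }

        // Resume an instance against the definition it was started with, so a
        // year-old instance keeps the rules that were current when it began.
        void resume(String instanceId, int startedWithVersion) {
            definitions.get(startedWithVersion).accept(instanceId);
        }

        public static void main(String[] args) {
            WorkflowRegistry registry = new WorkflowRegistry();
            registry.resume("order-42", 1); // started 10 months ago
            registry.resume("order-77", 2); // started this month
        }
    }

    The cost this sketch makes visible is exactly the one described above: every released version of the business logic must stay deployed until its last instance finishes.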

  • Open source adventures with... wait for it... Microsoft

    - by Jeff
    Last week, Microsoft announced that it was going to open source the rest of the ASP.NET MVC Web stack. The core MVC framework has been open source for a long time now, but the other pieces around it are also now out in the wild. Not only that, but it's not what I call "big bang" open source, where you release the source with each version. No, they're actually committing in real time to a public repository. They're also taking contributions where it makes sense. If that weren't exciting enough, CodePlex, which used to be a part of the team I was on, has been re-org'd to a different part of the company where it is getting the love and attention (and apparently money) that it deserves. For a period of several months, I lobbied to get a PM gig with that product, but got nowhere. A year and a half later, I'm happy to see it finally treated right. In any case, I found a bug in Razor, the rendering engine, before the beta came out. I informally sent the bug info to some people, but it wasn't fixed for the beta. Now, with the project being developed in the open, I was able to submit the issue, and went back and forth with the developer who wrote the code (I met him once at a meet up in Bellevue, I think), and he committed a fix. I tried it a day later, and the bug was gone. There's a lot to learn from all of this. That open source software is surprisingly efficient and often of high quality is one part of it. For me the win is that it demonstrates how open and collaborative processes, as light as possible, lead to better software. In other words, even if this were a project being developed internally, at a bank or something, getting stakeholders involved early and giving people the ability to respond leads to awesomeness. While there is always a place for big thinking, experience has shown time and time again that trying to figure everything out up front takes too long, and rarely meets expectations. This is a lesson that probably half of Microsoft has yet to learn, including the team I was on before I split. It's the reason that team still hasn't shipped anything to general availability. But I've seen what an open and iterative development style can do for teams, at Microsoft and other places that I've worked. When you can have a conversation with people, and take ideas and turn them into code quickly, you're winning. So why don't people like winning? I think there are a lot of reasons, and they can generally be categorized into fear, skepticism and bad experiences. I can't give the Web stack teams enough credit. Not only did they dream big, but they changed a culture that often seems immovable and hopelessly stuck. This is a very public example of this culture change, but it's starting to happen at every scale in Microsoft. It's really interesting to see in a company that has been written off as dead the last decade.

  • JFall 2012

    - by Geertjan
    JFall 2012 was over far too soon! Seven tracks going on simultaneously in a great location, with many artifacts reminding me of JavaOne, and nice snacks and drinks afterwards. The day started, as such things always do, with a keynote. Thanks to @royvanrijn for the photo below, I didn't take any myself and without a picture this report might have been too dry: What you see above is Steve Chin riding into the keynote hall on his NightHacking bike. The keynote was interesting, though I can't be too complimentary about it, since I was part of it myself. Bert Ertman introduced the day and then Steve Chin took over, together with Sharat Chander, Tom Eugelink, Timon Veenstra, and myself. We had a strict choreography for the keynote, one that would ensure a lot of variation and some unexpected surprises, such as Steve being thrown off the stage a few times by Bert for mentioning JavaOne too many times, rather than the clearly much cooler JFall. Steve talked about JavaOne and the direction Java is headed in, Sharat talked about JavaME and embedded devices, Steve and Tom did a demo involving JavaFX, I did a Project Easel demo, and Timon from Ordina talked about his Duke's Choice Award winning AgroSense project. I think the Project Easel demo (which I repeated later in a screencast for Parleys arranged by Eugene Boogaart) came across well, and several people I spoke to especially liked the roundtrip/bi-directional work that can be done, from browser to IDE and back again, very simply and intuitively. (In a long conversation on the drive back home afterwards, we discussed the scenario of a designer laying out the UI in HTML and then handing the HTML to a developer for back-end work, a developer who would then find it convenient to open the HTML in a browser and quickly navigate from the browser to the resources within the IDE; this was considered to be extremely interesting and worth adopting NetBeans for, for no other reason than that.) Later I attended a session by David Delabassee on Java EE 7, Hans Dockter on Gradle, and Sander Mak on cross-build injection attacks. I was sorry to have missed Martijn Verburg's session, which sounded like it was really fantastic, as well as sessions by others, such as Gerrit Grunwald. I did a session too, entitled "Unlocking the Java EE 6 Platform", which was very well attended, pretty much a full room, and the demo went very smoothly. I talked to many people, e.g., a long time with Hans Dockter about how cool Gradle is and how great the Gradle/NetBeans plugin is turning out to be. I also had a long conversation (and did a demo) with Chris Chedgey, from Structure101, after his session, which was incredibly well attended; very interesting how popular modularity is. I met several people for the first time, as well as some colleagues from past places I've worked at. All in all, it was a great conference, unfortunately too short, which was very well attended (clearly over 1000 people), with several international speakers, as well as international attendees such as Mattias Karlsson, Sweden JUG leader. And, unsurprisingly, I came across NetBeans Platform applications again, none of which I had ever heard of before. In each case, "our fat client application" was mentioned in passing, never as the main topic, and never in a context where there are plans for the application to be migrated to the web or mobile, simply because doing so makes no business sense at all. Great times at JFall; I'm looking forward to meeting some of the people I met again soon.

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact. 1. Location In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores. 2. Computer Vision In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read, around 2008. That's when Occipital came on the scene with their RedLaser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price-compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture. 3. Augmented Reality Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog. 4. Geo-Fencing So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal. 5. Digital Wallet Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS. 6. Voice Response Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.

  • Method flags as arguments or as member variables?

    - by Martin
    I think the title "Method flags as arguments or as member variables?" may be suboptimal, but as I'm missing any better terminology at the moment, here goes: I'm currently trying to get my head around the problem of whether flags for a given (private) class method should be passed as function arguments or via a member variable, and/or whether there is some pattern or name that covers this aspect, and/or whether this hints at some other design problem. For example (the language could be C++, Java, C#, it doesn't really matter IMHO): class Thingamajig { private ResultType DoInternalStuff(FlagType calcSelect) { ResultType res; for (... some loop condition ...) { ... if (calcSelect == typeA) { ... } else if (calcSelect == typeX) { ... } else if ... } ... return res; } private void InternalStuffInvoker(FlagType calcSelect) { ... DoInternalStuff(calcSelect); ... } public void DoThisStuff() { ... some code ... InternalStuffInvoker(typeA); ... some more code ... } public ResultType DoThatStuff() { ... some code ... ResultType x = DoInternalStuff(typeX); ... some more code ... further process x ... return x; } } What we see above is that the method InternalStuffInvoker takes an argument that is not used inside this function at all but is only forwarded to the other private method DoInternalStuff. (DoInternalStuff will be used privately at other places in this class, e.g. in the DoThatStuff (public) method.) An alternative solution would be to add a member variable that carries this information: class Thingamajig { private ResultType DoInternalStuff() { ResultType res; for (... some loop condition ...) { ... if (m_calcSelect == typeA) { ... } ... } ... return res; } private void InternalStuffInvoker() { ... DoInternalStuff(); ... } public void DoThisStuff() { ... some code ... m_calcSelect = typeA; InternalStuffInvoker(); ... some more code ... } public ResultType DoThatStuff() { ... some code ... m_calcSelect = typeX; ResultType x = DoInternalStuff(); ... some more code ... further process x ... return x; } } Especially for deep call chains where the selector flag for the inner method is chosen outside, using a member variable can make the intermediate functions cleaner, as they don't need to carry a pass-through parameter. On the other hand, this member variable isn't really representing any object state (as it's neither set nor available outside), but is really a hidden additional argument for the "inner" private method. What are the pros and cons of each approach?
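
    To make the trade-off concrete, here is a minimal compilable Java rendering of the two variants from the question; all names (CalcSelect, doThisStuff, and so on) are illustrative stand-ins for the pseudocode above.

    // Variant A: the flag is threaded through the call chain as an argument.
    enum CalcSelect { TYPE_A, TYPE_X }

    class ThingamajigA {
        private int doInternalStuff(CalcSelect calcSelect) {
            // dispatch on the flag inside the inner method
            return calcSelect == CalcSelect.TYPE_A ? 1 : 2;
        }
        private int internalStuffInvoker(CalcSelect calcSelect) {
            return doInternalStuff(calcSelect); // pure pass-through parameter
        }
        public int doThisStuff() {
            return internalStuffInvoker(CalcSelect.TYPE_A);
        }
    }

    // Variant B: the flag becomes a field that must be set before the call.
    class ThingamajigB {
        private CalcSelect calcSelect; // a hidden argument, not real object state
        private int doInternalStuff() {
            return calcSelect == CalcSelect.TYPE_A ? 1 : 2;
        }
        public int doThisStuff() {
            calcSelect = CalcSelect.TYPE_A; // easy to forget, breaks re-entrancy
            return doInternalStuff();
        }
    }

    Written out this way, one cost of variant B becomes visible in the comments: the field must be assigned before every call, and concurrent or re-entrant use of the object silently reads a stale flag.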

  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to travel through Eurasia and Asia (Russia, China, Korea, Japan, South East Asia...) for a while and, although there are plenty of marvelous things to see and to do, I must keep on working :(. I am a python developer, dedicated mainly to web projects. I use django, sqlite3, browsers, and occasionally (only if I have no choice) I install postgres, mysql, apache or any other servers commonly used in the internets. I do my coding in vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... So I spend most of my time in terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (that after all is not mandatory for my day-to-day work). I currently work with a MacBook Pro (3 years old now) and so far I am very happy with the performance. But I don't want to carry it if I am going to be "in transit" for a long time; it is simply huge and heavy for what I am planning to load in my rather small backpack (while traveling, less is more, you know). So here I am reading all kinds of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose. I am going to use Linux on it; Microsoft is not my cup of tea and Mac OS is not available for them, unless I were to buy a MacBook Air, something that I won't do because if I am robbed or rain/dust/truck loaders break it I would burst into tears. I am concerned about wifi performance and connectivity, as I am going to use one of those linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not in a place with real free wifi access and I find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or expensive IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider. With that said (sorry about the length), here comes my question, raised from a deep ignorance regarding the wars between netbooks and notebooks (I assume tablet PCs are not for programming yet): If I buy a netbook, will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK? Thanks! Hector Update I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (the Trans-Siberian) and will cross Mongolia as well. I will stay in friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey, this last one doesn't sound too bad hehe!). I think some of your answers seem to focus on the geek part but lose the point of this "on the road" fact. I am very aware and agree that netbooks suck compared to notebooks, but what I am trying to do here is to find a balance and discover your experiences with netbooks, to see first hand if a netbook will be a failure in the mid-to-long term of the trip for my purposes.
So I have summarized the main concepts expressed here in this small list, in no particular order: keyboard/touchpad feel: I use vim so no need to move mouse pointers that much, unless I am browsing the web, but intensive use of the keyboard; screen real estate: again, terminal work for most of the time; battery life: I think something very important; weight/size: also very important; looks not worth stealing it, don't give a shit if it is lost/stolen/broken: this may depend on the kind of person, your economy, etc. Also, to prevent losing work, I will upload EVERYTHING to the cloud whenever I have a chance; wifi: don't want to discover my wifi is one of those that cannot deal with half the routers on this planet or has poor connectivity. Thanks again for your answers and comments!

  • It happens only at Devoxx ...

    - by arungupta
    After attending several Java conferences worldwide, this was my very first time at Devoxx. Here are some items I found that happen only at Devoxx ... Pioneers of theater-style seating - This not only provides comfortable seating for each attendee, but the screens are very clearly visible to everybody in the room. Intellectual level of attendees is very high - Read more explanation on the Java EE 6 lab blog. In short, a lab, 1/3 of the content of which was delivered at Devoxx 2011, could not be completed at other developer days in more than 1/3 the time. Snack box for lunches - Even though this suits the healthy multiple-snacks-during-the-day lifestyle well, it leaves attendees hungry sooner in the day. The longer break before the next snack in the evening does not help at all. Fortunately, Azure cupcakes and Android ice creams turned out to be handy. I finally carried my own apple :-) Wrist band instead of lanyard - The good part about this is that once it's tied to your wrist, you are less likely to forget it in your room. But OTOH you are pretty much a branded conference attendee all throughout the city. It was cost-effective, as it cost 20c as opposed to 1 euro for a lanyard. Live streaming from theater #8 (the biggest room) on parleys.com. All talks recorded and released on parleys.com over the next year - This allows attendees not to miss any session and to watch replays at their own leisure. Stephan promised to start sharing the sessions by mid-December this year. No need to pre-register for a session - This is true for most conferences, but the bigger rooms (+ an overflow room for key sessions) provide sufficient space for all those who want to attend a session. And of course all sessions are available on parleys.com anyway! Community votes on a whiteboard - Devoxx attendees get a chance to vote on topics ranging from their favorite non-Java language and operating system to love from Oracle. Captured pictures at the end of Day 2 are shown below. Movie on the last-but-one night - This year it was The Adventures of Tintin and was lots of fun. Fries with mayo - This is a typical Belgian thing. Guys going into the ladies' room to avoid the long queues ... wow! Tweet wall everywhere, and I mean literally everywhere: in rooms, hallways, at the front desk, and other places. The tweet-picking algorithm was not very clear, as I never saw my tweet appear on the wall ;-) You can also watch it at wall.devoxx.com. Cozy speaker dinner with great food and wine. List of parallel and upcoming sessions displayed on the screen - This makes the information more explicit for attendees. REST API with multiple mobile clients - This API is used by some other conferences as well. And there always is iphone.devoxx.com. Steering committee members were recognized multiple times. The committee members were clearly identifiable wearing red hoodies. The wireless SSID was intuitive ("Devoxx") but hidden to avoid some crap from Microsoft Windows. All 9000 addresses were used up most of the time, with each attendee having multiple devices. A 1 Gb fibre optic cable was stretched to Metropolis to support the required network bandwidth. Stephan is already planning to upgrade the equipment and have a better infrastructure next year. Free water, soda, and juice in a cooler. Kinect connected to TV screens so that attendees can use their hands to browse through the list of sessions. #devoxxblog, #devoxxwomen, #devoxxfrance, #devoxxgreat, #devoxxsuggestions And Devoxx attendees are called Devoxxians ... how cool is that?
:-) What other things do you think happen only at Devoxx? And now the pictures from the community whiteboard: And a more complete album (including bigger pics of the community votes) is available below:

  • What 5 things should SQL Server get rid of?

    - by BuckWoody
    I’ve been “tagged” by my friend Paul Randal. It’s a high-tech way of making someone else do what you want, but since it’s Paul, well, I guess I’m OK with that. He’s asked in his recent blog entry “What five things would you get rid of in SQL Server if you were in charge?” This is, of course, a delicate issue. After all, I work at Microsoft, so anything I say here might be taken as a criticism that would require action – but of course it really doesn’t. Interestingly, you may have more to do with what goes into SQL Server than I did, even as a Program Manager where I “owned” a feature. Unlike many places I’ve worked, Microsoft really does drive its products by what its users want – not every time, and not every user request, mind you, but overall I think we hit the mark pretty well. So, with all of that said, and of course the obligatory statement of “these are my own opinions, and have nothing to do with any official Microsoft position in any way, and do not reflect the opinions of other Microsoft employees or management”, here goes. 1. Get rid of SQL Server Management Studio Does that surprise you? After all, when I was a Program Manager, I actually owned the general architecture for SSMS. But those on my team probably would have been able to guess this one for you. I think that SSMS is a fine development tool. But I think that it does less of a good job for managing a system. It’s based on Visual Studio, probably one of the best development IDEs around. And when I develop code, I really like it. But for a monitoring/management tool, I prefer a snap-in to the Microsoft Management Console (MMC). I know, the old one (prior to 3.0) was kludgy, difficult to use and program in. But that’s changed. Of course, when I bring this up, you’ll probably immediately say “But I don’t have that in XP.” And that’s one of the reasons we didn’t go there. (But I still don’t like SSMS for management.) 2. ShrinkDB I think this discussion has been done to death, so I’ll leave it at that. 3. SQL Server Agent Does that one surprise you as well? In my mind, since we ALWAYS ride on Windows, just use the task scheduler there, along with PowerShell. You could log the results in Windows logs, files, back into SQL Server, whatever. It’s just a complexity we don’t need in SQL Server. 4. SQL Server Error Logs We have a full logging setup in Windows. The Windows logs are well done, easy to understand and ubiquitous. We should just use that. 5. Several SKUs I won’t say which, but we have a few SKUs of SQL Server that need to go. And we need to figure out how to help you understand clearly when you need to go to Enterprise or Data Center. Most folks are trying to push Standard edition to do things it isn’t designed to do, and then they think SQL Server won’t scale. I think we can do a better job of showing you where Standard Edition will hit the wall, and I think with fewer choices it would be pretty simple for you to pick the right one. Well, once again I’ve probably puzzled some folks and angered others. I think my work here is done. :) Back to you, Paul.

  • Problem with creating a deterministic finite automaton (DFA) - Mercury

    - by Jabba The hut
    I would like to have a deterministic finite automaton (DFA) simulated in Mercury. But I’m s(t)uck at several places. Formally, a DFA is described by the following characteristics: a setOfStates S, an inputAlphabet Σ, a transitionFunction δ : S × Σ → S, a startState s ∈ S, and a setOfAcceptableFinalStates F ⊆ S. A DFA always starts in the start state. Then the DFA reads all the characters on the input, one by one. Based on the current input character and the current state, a transition is made to a new state. These transitions are defined in the transition function. When the DFA is in one of its acceptable final states after reading the last character, the DFA accepts the input. If not, the input is rejected. The figure shows a DFA accepting strings in which the number of zeros is a multiple of three. State 1 is the initial state, and also the only accepting state. For each input character the corresponding arc is followed to the next state. Link to Figure What must be done A type “mystate” which represents a state. Each state has a number which is used for identification. A type “transition” that represents a possible transition between states. Each transition has a source_state, an input_character, and a final_state. A type “statemachine” that represents the entire DFA. In the solution, the DFA must have the following properties: the set of all states, the input alphabet, a transition function represented as a set of possible transitions, a set of accepting final states, and the current state of the DFA. A predicate “init_machine(statemachine::out)” which unifies its argument with the DFA as shown in the figure. The current state of the DFA is set to its initial state, namely 1. The input alphabet of the DFA is composed of the characters '0' and '1'. A user can enter text, which will be checked by the DFA. The program continues until the user types Ctrl-D, which simulates an EOF. If the user uses characters that are not in the input alphabet of the DFA, an error message is shown and the program closes (pred require). Example Enter a sentence: 0110 String is not ok! Enter a sentence: 011101 String is not ok! Enter a sentence: 110100 String is ok! Enter a sentence: 000110010 String is ok! Enter a sentence: 011102 Uncaught exception Mercury: Software Error: Character does not belong to the input alphabet! What I have so far: :- module dfa. :- interface. :- import_module io. :- pred main(io.state::di, io.state::uo) is det. :- implementation. :- import_module int,string,list,char. 1 :- type mystate ---> state(int). 2 :- type transition ---> trans(source_state::mystate, input_character::char, final_state::mystate). 3 (errors around final_state, current_state and input_character) :- type statemachine ---> dfa(list(mystate), list(char), list(transition), list(mystate), mystate). 4 missing a lot :- pred init_machine(statemachine::out) is det. % init_machine(dfa(L_Mystate, L_Char, L_Transition, L_FinalState, state(1))) :- <- probably wrong 5 not perfect main(!IO) :- io.write_string("\nEnter a sentence: ", !IO), io.read_line_as_string(Input, !IO), ( Input = ok(StringVar), S1 = string.strip(StringVar), (if S1 = "mustbeabool" then io.write_string("Sentence is ok! ", !IO) else io.write_string("Sentence is not ok!", !IO)), main(!IO) ; Input = eof ; Input = error(ErrorCode), io.format("%s\n", [s(io.error_message(ErrorCode))], !IO) ). Hope you can help me. Kind regards
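
    Language aside, the simulation logic itself is small. Here is a minimal sketch, in Java rather than Mercury, of the DFA described in the question, with the transition function as a map and state 1 as the start and only accepting state; all names are illustrative.

    import java.util.Map;

    // Sketch of the question's DFA: accepts strings whose number of '0'
    // characters is a multiple of three, over the alphabet {'0', '1'}.
    public class Dfa {
        // transition function delta : S x Sigma -> S for states 1..3
        private static final Map<Integer, Map<Character, Integer>> DELTA = Map.of(
                1, Map.of('0', 2, '1', 1),
                2, Map.of('0', 3, '1', 2),
                3, Map.of('0', 1, '1', 3));

        static boolean accepts(String input) {
            int state = 1; // start state
            for (char c : input.toCharArray()) {
                Integer next = DELTA.get(state).get(c);
                if (next == null)
                    throw new IllegalArgumentException(
                            "Character does not belong to the input alphabet!");
                state = next;
            }
            return state == 1; // state 1 is the only accepting state
        }

        public static void main(String[] args) {
            System.out.println(accepts("110100"));   // true: three zeros
            System.out.println(accepts("0110"));     // false: two zeros
        }
    }

    The Mercury version would carry the same table in the statemachine term and thread the current state through a recursive predicate instead of a loop.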

  • Strategies for invoking subclass methods on generic objects

    - by Brad Patton
    I've run into this issue in a number of places and have solved it a bunch of different ways, but I'm looking for other solutions or opinions on how to address it. The scenario is when you have a collection of objects all based off of the same superclass, but you want to perform certain actions based only on instances of some of the subclasses. One contrived example of this might be an HTML document made up of elements. You could have a superclass named HTMLElement and subclasses for Headings, Paragraphs, Images, Comments, etc. To invoke a common action across all of the objects, you declare a virtual method in the superclass and specific implementations in all of the subclasses. So to render the document you could loop over all of the different objects in the document and call a common Render() method on each instance. The issue is the case where, again using the same generic objects in the collection, I want to perform different actions for instances of a specific subclass (or set of subclasses). For example (and remember this is just an example), when iterating over the collection, elements with external content need to have it downloaded (e.g. JS, CSS, images) and some might require additional parsing (JS, CSS). What's the best way to handle those special cases? Some of the strategies I've used or seen used include: Virtual methods in the base class. So in the base class you have a virtual LoadExternalContent() method that does nothing, and then override it in the specific subclasses that need to implement it. The benefit is that in the calling code there is no object testing; you send the same message to each object and let most of them ignore it. Two downsides that I can think of: First, it can make the base class very cluttered with methods that have nothing to do with most of the hierarchy. Second, it assumes all of the work can be done in the called method and doesn't handle the case where there might be additional context-specific actions in the calling code (i.e. you want to do something in the UI and not the model). Methods on the class to uniquely identify the objects. This could include methods like ClassName(), which returns a string with the class name, or other return values like enums or booleans (IsImage()). The benefit is that the calling code can use if or switch statements to filter objects and perform class-specific actions. The downside is that for every new class you need to implement these methods, and the code can look cluttered. Also, performance could be worse than with some of the other options. Language features to identify objects. This includes reflection and language operators to identify the objects. For example, in C# there is the is operator, which returns true if the instance matches the specified class. The benefit is no additional code to implement in your object hierarchy. The only downsides seem to be the inability to use something like a switch statement, and the fact that your calling code is a little more cluttered. Are there other strategies I am missing? Thoughts on best approaches?
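
    For concreteness, here is a minimal Java sketch of the first and third strategies side by side, using the HTML element example; all class and method names are illustrative.

    // Strategy 1: a virtual method with a do-nothing default in the base class.
    abstract class HtmlElement {
        abstract void render();            // common action for all elements
        void loadExternalContent() { }     // no-op unless a subclass overrides it
    }

    class Heading extends HtmlElement {
        @Override void render() { System.out.println("<h1>...</h1>"); }
    }

    class Image extends HtmlElement {
        @Override void render() { System.out.println("<img .../>"); }
        @Override void loadExternalContent() { System.out.println("downloading image"); }
    }

    class Demo {
        public static void main(String[] args) {
            HtmlElement[] document = { new Heading(), new Image() };
            for (HtmlElement e : document) {
                e.render();                // no type test needed
                e.loadExternalContent();   // most elements silently ignore this
                // Strategy 3: a language feature (instanceof) for the cases
                // where the *calling* code needs context-specific behavior.
                if (e instanceof Image) {
                    System.out.println("UI-side handling for an image");
                }
            }
        }
    }

    The sketch also shows the tension the post describes: strategy 1 keeps the loop uniform but pushes every special case into the hierarchy, while strategy 3 keeps the hierarchy clean but scatters type tests through the calling code.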

  • How to safely copy an object?

    - by Prog
    This question is going to be a little long. Please bear with me. Something that happened in a project of mine made me think about how to safely copy objects. I'll present the situation I had and then ask a question. There was a class SomeClass: class SomeClass{ Thing[] things; public SomeClass(Thing[] things){ this.things = things; } // irrelevant stuff omitted public SomeClass copy(){ return new SomeClass(things); } } There was another class Processor that takes SomeClass objects, copies them (via someClassInstance.copy()), manipulates the copy's state, and returns the copy. Here it is: class Processor{ public SomeClass processObject(SomeClass object){ SomeClass copy = object.copy(); manipulateTheCopy(copy); return copy; } // irrelevant stuff omitted } I ran this, and it had bugs. I looked into these bugs, and it turned out that the manipulations Processor does on copy actually affect not only the copy, but also the original SomeClass object that was passed into processObject. I found out that it was because the original and the copy shared state - because the original passed its field things into the copy when creating it. This made me realize that copying objects is harder than simply instantiating them with the same fields as the original. For the two objects to be completely disconnected, without any shared state, each of the fields passed to the copy also has to be copied. And if that object contains other objects - they have to be copied too. And so on. So basically, in order to be able to actually copy an object, each class in the system must have a copy() method that also invokes copy() on all of its fields, and so on. So for example, for copy() in SomeClass to work, it needs to look like this: public SomeClass copy(){ Thing[] copyThings = new Thing[things.length]; for(int i=0; i<things.length; i++) copyThings[i] = things[i].copy(); return new SomeClass(copyThings); } And if Thing has object fields of its own, then its own copy() method must be appropriate: class Thing{ Apple apple; Pencil pencil; int number; public Thing(Apple apple, Pencil pencil, int number){ this.apple = apple; this.pencil = pencil; this.number = number; } public Thing copy(){ // 'number' is a primitive. return new Thing(apple.getCopy(), pencil.getCopy(), number); } } And so on. Of course, instead of all classes having a copy() method, the copying mechanism can happen in all of the getters and the constructors of classes (except in places where it isn't suitable, for example when the field points to an external object, not to an object that 'is part' of this object). Still, that means that in order to be able to safely copy an object - most classes would have to have copying mechanisms in their getters. My question is divided into two parts: How frequently do you need to get a copy of an object? Is this a regular issue? Is the technique described common and/or reasonable? Or is there a better way to make safe copies of objects? Or is there an easier way to safely copy objects, without them sharing any state?
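
    A small runnable sketch of the difference (with a stand-in Thing class) may make the shallow-versus-deep distinction above more tangible: copying the array alone is not enough when its elements are mutable.

    import java.util.Arrays;

    // Stand-in for the question's Thing; one mutable field is enough to
    // demonstrate the aliasing problem.
    class Thing {
        int value;
        Thing(int value) { this.value = value; }
        Thing copy() { return new Thing(value); }      // copies one element deeply
    }

    class SomeClass {
        private final Thing[] things;
        SomeClass(Thing[] things) { this.things = things; }

        SomeClass shallowCopy() {
            // Arrays.copyOf duplicates the array itself, but both arrays
            // still point at the SAME Thing objects - mutations leak through.
            return new SomeClass(Arrays.copyOf(things, things.length));
        }

        SomeClass deepCopy() {
            Thing[] copies = new Thing[things.length];
            for (int i = 0; i < things.length; i++)
                copies[i] = things[i].copy();          // copy each element too
            return new SomeClass(copies);
        }

        Thing get(int i) { return things[i]; }
    }

    class CopyDemo {
        public static void main(String[] args) {
            SomeClass a = new SomeClass(new Thing[] { new Thing(1) });
            a.shallowCopy().get(0).value = 99;         // leaks into a
            System.out.println(a.get(0).value);        // prints 99

            SomeClass b = new SomeClass(new Thing[] { new Thing(1) });
            b.deepCopy().get(0).value = 99;            // isolated from b
            System.out.println(b.get(0).value);        // prints 1
        }
    }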

  • Why Are We Here?

    - by Jonathan Mills
    Back in the early 2000s, Toyota had a vision of building the number one best-selling minivan in North America. Their current minivan, the Sienna, was small, underpowered, and badly needed help. Yuji Yokoya was given the job of re-engineering the Sienna. There was just one problem: Yuji lived in Japan. He did not know the people or places that he would be engineering for. Believe it or not, Japan is nothing like North America. So, what does a chief engineer do in a situation like that? He packed up his team and flew halfway around the world. He made a commitment to drive through every state in the US, every province in Canada, and Mexico. He met the people and drove the roads that the Sienna would be driving. And guess what, what he learned on that trip revolutionized the Sienna. The innovations he made sent the Sienna to number one. Why? Because he knew who he was building his product for. He knew why he was there. Let me ask you this: do you know why you are building what you are building? As a member of a product team, can you tell me how your product will be used in the real world? As you are writing code, building test plans, writing stories, or doing any of the other project tasks, can you picture the face of a person who will be using what you are building? All too often, the answer to those questions is no. Why is it important? Because, every day, project team members make assumptions. Over a given project, it is safe to say project team members will make thousands of assumptions about what they are doing. And all too often, those assumptions are not quite right. It's not that they are not good at their jobs, it's just that they don't really know why they are there. So, what to do? First and foremost, stop doing what you are doing. Yes, really. Schedule some time to go visit the people who will be using your product. Don't invite them to you, go to them. Watch them work. Interact with them. Ask them questions. Maybe even try it out yourself. This serves two purposes. One, it shows them that you care about them. They will be far more engaged in your project if they feel like you care. And nothing says you care more than spending some time. Second, it gives you the proper frame of reference for your work. It gives you something tangible to go back to as you are building your product. As you make the thousands of assumptions that you will make over the life of your project, it gives you something to see in your mind that makes it real to you. Ultimately, setting a proper frame of reference is critical to the overall success of a project. The funny thing is, it really does not even take that long. In most cases, a 2-3 hour session will give you most of what you need to get the right insight. For the project, it will be the best 2 hours you could spend.

  • If the model is validating the data, shouldn't it throw exceptions on bad input?

    - by Carlos Campderrós
    Reading this SO question, it seems that throwing exceptions for validating user input is frowned upon. But who should validate this data? In my applications, all validations are done in the business layer, because only the class itself really knows which values are valid for each one of its properties. If I were to copy the rules for validating a property into the controller, it is possible that the validation rules would change and there would then be two places where the modification has to be made. Is my premise that validation should be done in the business layer wrong? What I do So my code usually ends up like this: <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if (mb_strlen($n) == 0) { throw new ValidationException("Name cannot be empty"); } $this->name = $n; } public function setAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { throw new ValidationException("Age $a is not valid"); } $a = (int)$a; } if ($a < 0 || $a > 150) { throw new ValidationException("Age $a is out of bounds"); } $this->age = $a; } // other getters, setters and methods } In the controller, I just pass the input data to the model, and catch thrown exceptions to show the error(s) to the user: <?php $person = new Person(); $errors = array(); // global try for all exceptions other than ValidationException try { // validation and process (if everything ok) try { $person->setAge($_POST['age']); } catch (ValidationException $e) { $errors['age'] = $e->getMessage(); } try { $person->setName($_POST['name']); } catch (ValidationException $e) { $errors['name'] = $e->getMessage(); } ... } catch (Exception $e) { // log the error, send 500 internal server error to the client // and finish the request } if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } Is this a bad methodology? Alternate method Should I maybe create methods like isValidAge($a) that return true/false and then call them from the controller? <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if ($this->isValidName($n)) { $this->name = $n; } else { throw new Exception("Invalid name"); } } public function setAge($a) { if ($this->isValidAge($a)) { $this->age = $a; } else { throw new Exception("Invalid age"); } } public function isValidName($n) { $n = trim($n); if (mb_strlen($n) == 0) { return false; } return true; } public function isValidAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { return false; } $a = (int)$a; } if ($a < 0 || $a > 150) { return false; } return true; } // other getters, setters and methods } And the controller will be basically the same, just instead of try/catch there are now if/else: <?php $person = new Person(); $errors = array(); if ($person->isValidAge($age)) { $person->setAge($age); } else { $errors['age'] = "Invalid age"; } if ($person->isValidName($name)) { $person->setName($name); } else { $errors['name'] = "Invalid name"; } ... if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } So, what should I do? I'm pretty happy with my original method, and my colleagues to whom I have shown it have in general liked it. Despite this, should I change to the alternate method? Or am I doing this terribly wrong and I should look for another way?

  • Moving abroad - Relocation advice

    - by Tim Koekkoek
    Oracle offers graduates from different European countries the opportunity to start their career abroad. Some already have experience with living abroad, as they have done an exchange semester or internship in another country; for others it is the first time they will move abroad. Rui started in October 2011 as a Business Development Consultant in Dublin and moved from Portugal to Dublin, Ireland to start his career. For those planning to leave their home country and who desire to work abroad, he shares some tips and tricks in this article. When you're faced with an opportunity like this, there are lots of things that will come to your mind. Sometimes it can be very exciting, or even stressful. 1. First of all, try to relax. If you are certain you are moving abroad, all you need to do is some research about the country where you're going to live: get to know its culture (gastronomy, important dates and events, its economy, and effective ways to keep in touch with your family and friends - such as mobile companies and Internet services), and start to work out which locations (with good access) you could/should live in. Don't forget that initially you may be limited by transport, and therefore it is important to explore the ideal places for you. During this time, Oracle provides everything you'll need (papers, documents, etc.) to cross borders. 2. When you arrive, you understand that you are in a new country, in a new place, where all things (or most) are unknown to you. Before you panic, try to see it as a new challenge from which new opportunities will come. Sometimes it's not easy, I know, but the very best a new place has to give you is the opportunity to understand a new culture, get to know other people and other ways of working, and grow both as a person and professionally. So you have nothing to lose in this kind of experience. 3. When you arrive at Oracle, there's a fantastic team that will help you with settling in: HR, Payroll, Relocation, IT. In my case, Oracle helped me with the relocation; they supported me in arranging everything, such as helping out with all the paperwork and finding a new apartment. As you can see, they will do their best to help you to be successful! 4. Engage with your new co-workers. Going to a place where you don't know anyone can be tough sometimes, but see it as an opportunity to meet people from all over the world and share experiences. Embrace it. 5. Plan ahead, try to get as much information as possible and use it. Oracle is a multinational enterprise that will allow you to get to know a new labour market and give you the flexibility you need to shape your view of employment and occupation, giving you the very best opportunities to join different teams and working areas, so that you can work where you fit best. Good luck! If you're thinking about starting a career abroad, read the following article: http://www.overseasdigest.com/movingtips.htm - it can be very useful to you. Interested in starting your career at Oracle like Rui has? Please have a look at https://campus.oracle.com for all of our latest vacancies.

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their roots in functional programming).

    One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked.

    Unfortunately, type systems can only prevent some bugs. Take the classic problem of retrieving an index value from an array: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is unsafe. We restore safety via exception handling, but the ideal type system would prevent us from doing anything unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked.

    In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation.

    Fortunately, it is easier to write a specification that proves that a program doesn't have certain specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine".

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.
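    To make the array-bounds example concrete, here is a minimal sketch (in PHP, to match the code elsewhere on this page; the function name is illustrative) of the runtime guard described above, which is exactly the check that a type system carrying array lengths could enforce at compile time instead:

        <?php
        // The type system knows $values is an array, but not its length,
        // so a request for "the value of index 4" from a two-element array
        // type-checks fine and fails only at run time. The explicit guard
        // below restores safety via exception handling.
        function valueAt(array $values, $index)
        {
            if ($index < 0 || $index >= count($values)) {
                throw new OutOfRangeException("Index $index is out of bounds");
            }
            return $values[$index];
        }

        $pair = array(10, 20);
        echo valueAt($pair, 1); // prints 20
        echo valueAt($pair, 4); // throws OutOfRangeException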
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Profiling Startup Of VS2012 – dotTrace Profiler

    - by Alois Kraus
    JetBrains, famous for the ReSharper tool, also has a profiler in its portfolio. I downloaded dotTrace 5.2 Professional (569€ + VAT) to check how far I can profile the startup of VS2012. The most interesting startup option is “.NET Process”. With that you can profile the next started .NET process, which is very useful if you want to profile an application which is not started by you.

    I selected Tracing and Wall time to get similar options across all profilers. For some reason the attach option did not work with .NET 4.5 on my home machine, but I am sure that it did work with .NET 4.0 some time ago. Since we are profiling devenv.exe we can also select “Standalone Application” and start it from the profiler. The startup time of VS does increase by about a factor of 3, but that is ok.

    You get mainly three windows to work with. The first one shows the threads, where you can drill down thread by thread into where most time is spent. The next window is the call tree, which merges all threads together in a similar view. The last and, in my opinion, most useful view is the Plain List window, which is nearly the same as the Method Grid in Ants Profiler. But this time, when I enable the Show system functions checkbox, we get not 150 but 19,407 methods to choose from! I really tried with Ants Profiler to find out how VS works, but look how much we were missing!

    When I double-click on a method I get, in the lower pane, the called methods and their respective timings. This is really useful, and I can nicely drill down to the most important stuff. The measured time seems to be wall-clock time, which is a good thing for seeing where my time is really spent. You can also use Sampling as the profiling method, but this gives you much less information. Except for getting a first idea of where to look, this profiling mode is not very useful for understanding how your system interacts.

    The options have a good list of presets to hide many methods by default and gray them out so you can concentrate on your own code. Nothing is filtered out if you enable Show system functions: by default, methods from these assemblies are hidden or, if the checkbox is checked, grayed out.

    All in all, JetBrains has made a nice profiler which shows great detail and has nice drill-down capabilities. The only thing is that I do not trust its measured timings. I fell into the trap several times of optimizing places which were already fast, because the profiler showed high times in those methods. After measuring with Tracing I was certain that the measured times were greatly exaggerated. Especially when IO is involved, it seems to have a hard time subtracting its own overhead. What I missed most was the possibility not only to profile the next started process, but to select a process by name, and perhaps a count, to profile the next n processes of that name. Next: YourKit

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by milomir.vojvodic
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK)

    For Oracle Partners across Europe, Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialized partners, Executives, Sales, Pre-sales and Solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology

    Hot agenda items will include:
    - The Fusion Middleware Stack: Engineered to work together
    - A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
    - In-Memory Analytics for Extreme Insight
    - Latest Product Development Roadmap for Data Integration and Analytics

    Venue: Oracle's London City Moorgate offices. Places are limited; register from this link. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member Partners, but you will be responsible for your own travel and hotel expenses.

    Event Schedule: during this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.

    Nov. 12th: Day 1 Main Plenary Session, full day, starting 10.30 am; Oracle-hosted dinner in the evening.
    Nov. 13th onwards:
    - Architecture Masterclass: IM Reference Architecture – Big Data and Little Data combined (1 day)
    - BI-Apps Bootcamp (4 days)
    - Oracle GoldenGate workshop (1 day)
    - Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)

    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected] and Milomir Vojvodic at [email protected]

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations.
Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Getting Started with Cloud Computing

    - by juanlarios
    You’ve likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.

    Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.

    Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Generally, Private Cloud implementations today are found in larger organizations, but they are also viable for small and medium-sized businesses, since they allow automation of services and a reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate Private Cloud is important in order to be successful.

    So – how do you get started? The first step is to determine what makes the most sense to your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure.

    For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy, then download and install the Microsoft Private Cloud technologies, including Windows Server 2008 R2 Hyper-V and System Center 2012, in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps and more, to get you started with these technologies hands-on.

    Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something we call "The Global Relationship Study", they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think.

    Cloud Computing Resources:
    - Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products.
    - Microsoft Virtual Academy – free online training to help improve your IT skillset.
    - Office 365 Trial/Info page – get more information or try it out for yourself.
    - Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud.
    - Windows Intune Trial/Info – get more information or try it out for yourself.
    - Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online.

    Additional Resources You May Find Useful:
    - Springboard Series – your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
    - TechNet Evaluation Center – try some of the latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
    - AlignIT Manager Tech Talk Series – a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate in real time, or watch the on-demand recording.
    - Tech·Days Online – discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and memory profiler which costs 614€ and is currently on sale for 306€ as a special offer. It includes one year of free upgrades. The uneven € numbers are calculated from the 799€ price and the 50% discount.

    The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just My Code profiler. For stuff where you do not have the pdbs, or you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get, in the All Methods grid, a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code.

    But at least there is also a memory profiler included. For this I have to choose “Memory Profiler” as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I really miss the thread stack window from YourKit. How am I supposed to get a clue, when much memory is allocated and the CPU consumption is high, about which places I should look at? The Snapshot summary gives a rough overview, which is ok for a first impression.

    Next is Assemblies. This gives you a list of all loaded assemblies. Not terribly useful.

    The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft. By default, mscorlib and System are not checked. That is the reason why, the first time, my By Type window looked nearly empty. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing you.

    That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by looking only at his own code and instances. Showing them more would not help, because the sheer amount of information would overwhelm them, and you need a pretty good understanding of how the GC and the CLR work. When you have a performance issue on a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.) I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's WEB-INF directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new WAR file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appbase, but none seem to work. Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file? Here's my server.xml:

    <Service name="secure">
      <Connector port="80" connectionTimeout="20000" redirectPort="443"
                 URIEncoding="UTF-8" enableLookups="false" compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/>
      <Connector port="443" URIEncoding="UTF-8" enableLookups="false" compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"
                 scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS"
                 keystoreFile="..." keystorePass="..." keystoreType="PKCS12"
                 truststoreFile="..." truststorePass="..." truststoreType="JKS"
                 clientAuth="false"
                 ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/>
      <Engine name="secure" defaultHost="localhost">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
        <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
        </Host>
      </Engine>
    </Service>
    <Service name="mutual-secure">
      ...
    </Service>

    The content of the web.xml files I'm playing with is:

    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
             version="3.0"
             metadata-complete="true">
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>All applications</web-resource-name>
          <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
          <description>Redirect all requests to HTTPS</description>
          <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
      </security-constraint>
    </web-app>

    (For conf\web.xml, the security-constraint is added just before the end of the existing file, rather than creating a new file.) My webapps directory (currently) contains only the WAR files.
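    One avenue the question does not mention, offered here as an assumption to verify against the Tomcat 7 Context documentation rather than a confirmed answer: a per-application context file can point Tomcat at a deployment descriptor stored outside the WAR via the Context element's altDDName attribute, which would keep the vendor's WAR untouched across redeploys. A sketch with hypothetical file names and paths (the engine above is named "secure", hence the directory):

    <!-- conf/secure/localhost/vendorapp.xml : hypothetical file and paths -->
    <!-- altDDName points Tomcat at a descriptor kept outside the WAR, so a -->
    <!-- redeployed vendor WAR would not overwrite the security constraint. -->
    <!-- Caveat: the alternative descriptor replaces WEB-INF/web.xml rather -->
    <!-- than merging with it, so it must also carry the vendor's settings. -->
    <Context docBase="vendorapp.war"
             altDDName="/opt/tomcat/descriptors/vendorapp-web.xml"/>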

    Read the article

  • Introducing the Metro User Interface on Windows 2012

    - by andywe
    Although I am a big fan of using PowerShell for many of my server operations, that aspect is well covered by those far more knowledgeable than I, and there is already vast information around the web on it. The new Metro interface, though, and getting around both Windows 8 and Windows Server 2012, is relatively new, even for those who ran the previews.

    What is this? A blank Desktop! Where did the Start button go? Well, it is still there... sort of. It is hidden, and acts like an auto-hidden component that appears only when the mouse hovers over the lower left corner of the screen. Those familiar with Gnome or OSX can relate this to the "Hot Corners" functions. To get to the Start button, hover your mouse in the far left corner of the taskbar. Let it sit there a moment, and a small blue square with colored tiles in it, titled Start, will appear. Click it.

    I clicked it, and now I have all the tiles. What is this? Welcome to the Metro interface. This is a much more modern look, and although it seems weird and cumbersome at first, I have actually found that it is a bit more extensible, allowing greater organization and customization than the older Explorer desktop. If you look closely, you'll see each box represents either a program or a program group.

    First, a few basics about using the Start view. First and foremost, a right mouse click brings up a bar at the bottom, with an icon towards the right. Notice it is titled "All Apps". An even easier way in many places is to hover your mouse in the exact opposite corner, the upper right. A sidebar will open and expose what used to be a widget bar (remember Vista?), with options for Search, Start, and Settings.

    OK, great, but where is everything? It's all there... Click the All Apps icon. Look better? Notice the scroll bar at the bottom. Move it right; the view is sized to your content, so you can have a smaller or larger number of programs exposed. Each icon can be secondary-clicked (a right mouse click for most of us), and an options bar at the bottom, rather than the old small context menu, opens with some very familiar options.

    Notice that the top of the Windows Explorer window has some new features. You still have your right mouse click functions, but since the shortcuts for these items already exist, just copy them. There are many ways, but here is a long way that shows you more of the interface.

    1. Right mouse click a program icon, and select the Open File Location option.
    2. The trusty file manager opens, but if you look closely at the top edge of the window, you'll see a nifty enhancement: an orange colored box titled Shortcut Tools and another, lavender box titled Application Tools. Each of these adds options at the top of the file manager window to make selection easy. Of course, you can still secondary-click an item in the listing window too.
    3. Click Shortcut Tools, right click your app shortcut and copy it. Then simply paste it onto the desktop outside the File Explorer window.

    Also note some of the newer features: the large icons at the top, below the menu, hold many common operations, and the options change as you select each menu item. Well, that's it for this installment. I hope this helps you out.

    Read the article

< Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >