Search Results

Search found 2772 results on 111 pages for 'hour'.


  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature there are more "moving parts" to monitor and manage - from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great...

    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer.

    Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:

    * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of proactively monitoring and optimizing database performance and availability.

    * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!

    * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best-practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.

    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. The session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and online demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
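    For a flavor of what NDBINFO exposes, here is a minimal sketch (assuming a MySQL Cluster 7.1 or later installation; the ndbinfo schema is queried with ordinary SQL, and the exact column set may vary by version):

        -- Real-time memory consumption per data node
        SELECT node_id, memory_type, used, total
          FROM ndbinfo.memoryusage;

        -- Status and uptime of each data node
        SELECT node_id, uptime, status
          FROM ndbinfo.nodes;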

    Read the article

  • Don't miss the rare Venus transit across the Sun on June 5th. A once-in-a-lifetime event.

    - by Gopinath
    Space lovers, here is a rare event you don't want to miss. On June 5th or 6th of 2012, depending on which part of the globe you live in, the planet Venus will pass across the Sun, and it will not happen again until 2117. During the six-hour-long spectacular transit you can see the shadow of Venus cross the Sun. Transits of Venus occur in pairs eight years apart, with the previous one taking place in 2004; the next pair occurs 105.5 and 121.5 years later. The best place to watch the event would be a nearby planetarium with telescope facilities. Otherwise you can watch it directly, but you must protect your eyes at all times with proper solar filters.

    Where can we see the transit? The transit of Venus is going to be clearly visible in Europe, Asia, the United States and parts of Australia. Americans will be able to see the transit in the evening of Tuesday, June 5, 2012. Eurasians and Africans can see the transit in the morning of June 6, 2012.

    At what time does the event occur? The principal events occurring during a transit are conveniently characterized by contacts, analogous to the contacts of an annular solar eclipse. The transit begins with contact I, the instant the planet's disk is externally tangent to the Sun. Shortly after contact I, the planet can be seen as a small notch along the solar limb. The entire disk of the planet is first seen at contact II, when the planet is internally tangent to the Sun. Over the course of several hours, the silhouetted planet slowly traverses the solar disk. At contact III, the planet reaches the opposite limb and once again is internally tangent to the Sun. Finally, the transit ends at contact IV, when the planet's limb is externally tangent to the Sun.

        Event         Universal Time
        Contact I     22:09:38
        Contact II    22:27:34
        Greatest      01:29:36
        Contact III   04:31:39
        Contact IV    04:49:35

    Transit of Venus animation: here is a nice video animation of the transit of Venus. Map courtesy of Steven van Roode; source: NASA.
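    As a quick sanity check of the "six-hour-long" figure, the elapsed time between first and last contact can be worked out from the table above (an illustrative calculation only, written here as a T-SQL query):

        -- Contact I at 22:09:38 UT on June 5 to Contact IV at 04:49:35 UT on June 6
        SELECT DATEDIFF(SECOND,
                        '2012-06-05 22:09:38',
                        '2012-06-06 04:49:35') / 3600.0 AS transit_hours;
        -- Returns about 6.67, i.e. roughly 6 hours 40 minutes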

    Read the article

  • Perform Unit Conversions with the Windows 7 Calculator

    - by Matthew Guay
    Want to easily convert area, volume, temperature, and many other units? With the Calculator in Windows 7, it's easy to convert most any unit into another.

    The New Calculator in Windows 7

    Calculator received a visual overhaul in Windows 7, but at first glance it doesn't seem to have any new functionality. Here's Windows 7's Calculator on the left, with Vista's calculator on the right. But looks can be deceiving: Windows 7's calculator has lots of exciting new features. Let's try them out. Simply type Calculator in the Start menu search.

    To uncover the new features, click the View menu. Here you can select many different modes, including the Unit Conversion mode, which we will look at. When you select Unit Conversion mode, the Calculator expands with a form on the left side. This conversion pane has 3 drop-down menus. From the top one, select the type of unit you want to convert. In the next two menus, select which values you wish to convert to and from. For instance, here we selected Temperature in the first menu, Degrees Fahrenheit in the second menu, and Degrees Celsius in the third menu. Enter the value you wish to convert in the From box, and the conversion will automatically appear in the bottom box.

    The Calculator contains dozens of conversion values, including more uncommon ones. So if you've ever wanted to know how many US gallons are in a UK gallon, or how many knots a supersonic jet travels in an hour, this is a great tool for you!

    Conclusion

    Windows 7 is filled with little changes that give you an all-around better experience in Windows and help you work more efficiently and productively. With the new features in the Calculator, you just might feel a little smarter, too!
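    If you ever want to double-check the Calculator's output, the underlying conversions are simple arithmetic. Fahrenheit to Celsius, for example, follows C = (F - 32) x 5/9 (an illustrative check, written here as a SQL query to match the other examples in this listing):

        -- 98.6 degrees Fahrenheit in Celsius
        SELECT (98.6 - 32) * 5.0 / 9.0 AS degrees_celsius;  -- 37.0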

    Read the article

  • Cost justification for buying a 32GB superfast Alienware M18x with a price tag of around £5K ($10K)

    - by tonyrogerson
    When considering buying a laptop that's going to cost me around £5,000 I really need to justify the purchase from a business perspective. My Lenovo W700 has served me very well for the last 2 years; it's an extremely good machine and as solid as a rock (and as heavy), alas it is limited to 8GB. As SQL Server 2012 approaches, and with my interest in working in the Business Intelligence space over the next year or two, it is clear I need a powerful machine on which I can run a full infrastructure, virtualised.

    My requirements:

    For High Availability / Disaster Recovery research and demonstration:
    * One machine for a domain controller
    * Four machines in a shared disk cluster (SQL Server clustering, active-active etc.)
    * Five machines in a file share cluster (SQL Server Availability Groups)

    For Business Intelligence research and demonstration:
    * Not entirely sure how many machines I want to run here, but it would be to cover the entire BI stack in an enterprise setting: SharePoint, SQL Server etc.

    For Big Data research:
    * I have a fondness for the NoSQL approach to scalability and dealing with large volumes, so I need a number of machines to research VoltDB, Hadoop etc.

    As you can see, the requirements for a SQL Server consultant to service their clients well are considerable; will 8GB suffice? Alas no, it will no longer do. I'm a very strong believer that in order to do your job well you must expense it; short cuts only cost you time. Waiting 5 minutes instead of an hour for something to run not only saves me time but my clients' time: I can do things quicker and, more importantly, I can demonstrate concepts. My W700 with 8GB of RAM and SSDs cost me around £3.5K two years ago; to be honest I've not got the full use I wanted out of it, but the machine has had the power when I've needed it - it's served me and my clients well.

    Alienware now do a model (the M18x) with 32GB of RAM; yes, 32GB in a laptop! Dual drives so I can whack a couple of really good SSDs in there, a quad-core i7 with hyper-threading, and a decent speed. I can reduce the cost of the memory by getting it from Crucial: instead of £1.5K for 32GB it will be around £900, and I can cost-save on the SSDs as well. The beauty of the M18x is that it has USB 3.0, SATA 3 and, really importantly, eSATA. Running VMs will never be easier: I can have a removable SSD with my VMs on it and plug it into my home machine or laptop - an ideal world! The initial outlay of £5K is peanuts compared to the benefits I'll give my clients. I will be able to present real enterprise concepts, and I'll also be able to give training on those real enterprise concepts with real, albeit virtualised, machines.

    Read the article

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation - something getting itself done after the initial programming - is my understanding of the subject. The very next thought was: is it good or evil? The reality is there is no single right answer. However, let us quickly note a few things, and then I would like to request your help to complete this post. We will start with the positive parts in SQL Server where automation happens.

    The Good

    If I start thinking of SQL Server and automation, the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure any task or job, it runs fine (till something goes wrong!). Well, automation has its own advantages. We all have used SQL Agent for so many things - backups, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run on your database server?

    The Ugly

    This part is very interesting, because it can get really ugly(!). During my career I have found so many bad automated agent jobs:

    * A client had an agent job that dropped the clean buffers every hour.
    * A client used Database Mail to send regular emails instead of only the necessary alert-related emails.
    * The best one: a client used the Missing Index and Unused Index scripts in a SQL Agent job to follow their suggestions 100%. Believe me, I have never seen such a badly performing and hard-to-optimize database. (I ended up dropping all non-clustered indexes on the development server, ran the production workload on the development server again, and then configured optimal indexes.)

    Shrinking a database is a performance killer. It should never be automated. SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance. The one I hate the most is AutoShrink Database. It has given me a hard time in my career quite a few times. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server.

    Automation is necessary, but common sense is a must when creating automation.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
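    As a quick, hedged illustration of the AutoShrink point, the setting is easy to audit and correct using the standard sys.databases catalog view (the database name below is a placeholder):

        -- Find databases with the harmful AutoShrink option enabled
        SELECT name, is_auto_shrink_on
          FROM sys.databases
         WHERE is_auto_shrink_on = 1;

        -- Turn it off for an affected database
        ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;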

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx

    The easier something is to measure, the more likely it is to be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no wonder our software looks the way it looks: it has all the traits whose conformance with requirements can easily be measured, and it's lacking the traits which cannot easily be measured.

    Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity, or test coverage. But there is no threshold value signalling "evolvability too low"; also, Evolvability is hardly tangible for the customer.

    Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more: because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built.

    Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more closely. Instead we need to establish practices to keep it high (enough) at all times.

    Chefs have known this for a long time. That's why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess.

    What we need to keep Evolvability in focus and high is... to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories across the board is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner).

    For me there are several reasons for such a fixed and short cycle time for each increment:

    Clear expectations

    Absolute estimates ("This will take X days to complete.") are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are the pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like.
    But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove, I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow.

    Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, "It'll be fixed by noon," or you can say, "I can manage to implement it by tonight before I leave," or you can say, "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you'll get something done by tomorrow night (some 34h from now), you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories.

    If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

    Trust through reliability

    Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by the means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need; we just have second-hand hypotheses. That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time.

    If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt, promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batch sizes of three different sizes.

    Fast feedback

    What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end?
    Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. And effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises.

    Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues. Because as long as work started is not released (accepted), it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue.

    Flexibility

    As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important, and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost, because what got accepted is of value. It provides an incremental value to the customer/user, or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.

    Premature completion

    Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible; but I think it's nothing I can recommend to strive for. Rather I'd say: don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
    With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4-32 hour increment the Product Owner can run an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C could then be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% of the money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need?

    Pull on practices

    So far, in addition to more trust, more flexibility, and less money spent, Spinning has led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect: they exert pull power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good-enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger.

    With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices: you very quickly realize you cannot reliably deliver on your 32-hour promises without them. Spinning thus is pulling on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that, because first the Product Owner and then management will notice an increasing difficulty to deliver value within 32 hours.
    There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!"s; count the "No way!" utterances.

    In closing

    For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment; it's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work with what's there, not what might be possible in the future. But I digress...

    In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • Back home :-)

    - by Mike Dietrich
    Wrote this entry last night in the ICE from Stuttgart to Munich, but the connection broke: a 28.5-hour journey - and close to home now. Actually I would have been even closer if our TGV hadn't had brake problems as soon as we entered German territory. And you don't want a train which goes up to a speed of 200 mph having issues with its brakes, right? So we missed the connection in Stuttgart, but I caught the last train that night towards Munich. Distance: approx. 1900 km altogether. Usually it takes 2.5 hours with a direct Aer Lingus flight from Munich, or a bit more when you go through Zurich or Frankfurt. But at least this way you meet more people and see a bit more of the landscapes passing by :-)

    Except for the brake problem everything worked out well so far (I'm now there, finally!). I had 4 hours in Paris to change from Gare du Nord to Gare de l'Est, and one thing I really have to point out: the people working for SNCF, the French national railways, were so organized and helpful - simply amazing. I asked the man at the counter where I had to pick up my prepaid tickets for directions to Gare de l'Est, and after we had a chat about Marlene Dietrich he just grabbed his iPhone, started Google Earth and showed me the way to walk. I'm pretty sure it's a stupid stereotype that people in Paris or France are unfriendly to foreigners who don't speak French. In my past 3 stays in or travels through Paris over the past 2 years I have had only great experiences.

    And another thing I really enjoy when being in France: the food!!! The sandwich I had at the train station was packed with yummy goat cheese. And there's always Paul. You might ask yourself: who the heck is Paul? That's Paul - or actually their website. At Paul's they usually serve excellent fruit tarts - and this time a nice Gâteau au Chocolat. And very good Café Crème as well :-) That's actually the positive part of traveling this way: the food you'll get is much better than airline food - if your airline still serves something called food ...

    Read the article

  • Devoxx 2011 Trip Report + Pictures

    - by arungupta
    3350 attendees from 40 countries lived in "paradise" for 5 days last week. This paradise had 170+ rock star speakers delivering 200+ hours of technical content in about 150 sessions. And it truly was a paradise, with a clear differentiation from other Java conferences. There were several Oracle speakers in paradise covering the entire gamut of the Java platform. I delivered a Java EE 6 hands-on lab (new content), showcased early Java EE 7 and GlassFish 4.0 work in the keynote, and participated in a panel on Contexts and Dependency Injection.

    The demo in the keynote showed how to deploy a Java EE application in a managed environment. It showed a Conference Planner application that conference organizers can use to display session, track, and speaker information. This same application can be deployed to display data from either JavaOne 2011 or Devoxx 2011, based upon the SQL file chosen for database initialization: pick javaone-sf-2011.sql and the application shows JavaOne data; pick devoxx-2011.sql and it shows Devoxx data. And of course, clicking on Tracks, Speakers, or Sessions shows you information from the respective conference. The complete source code for the application and detailed instructions are available at glassfish.org/javaone2011. In short:

    * Download the sample app and unzip
    * Download GlassFish build b05
    * Download the platform-specific Load Balancer template
    * Run "bin/install.sh" to configure GlassFish
    * Pick javaone-sf-2011.sql or devoxx-2011.sql for database initialization

    You can also watch the application in action in this video.

    One piece of breaking news shared at the conference: Devoxx France is coming April 18-20, and 75% of the talks will be in French. Stay tuned for more details on that. I'm sure Antonio and gang will put up a great show out there!

    Just a tip for first-timers at Devoxx... A bus runs from Brussels airport to the Antwerp city center between 4am and 11pm, at the top of every hour; it takes about 45 minutes and costs 10 euros (cash only). Take tram #6 (going towards Luchtbal) from the Astrid station (next to the city center) and get off at the last station for Metropolis; it takes about 15 minutes. Purchase a day pass at the station kiosks (much cheaper), or you can buy one on the bus as well (about double the price). Either way, cash only.

    Here are a few pictures captured at the event, and the complete album here.

    Thank you Stephan for giving me the opportunity to speak at my first Devoxx. I hope to be back next year, just in time for Java EE 7 going final!

    Read the article

  • Indian government departments have more insecure websites than others

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/26/indian-government-department-have-more-unsecure-website-then-others.aspx

    One of my friends shared his college experience with me. He is not in computer science. One day he told me that Ankit Fadia came to their college and, in front of many students, showed how to hack the BSNL website with simple tricks, breaking down how the BSNL site works. I had told them that BSNL's is one of the most insecure websites in India.

    If you log in to the website, it may load in a few seconds, but sometimes it takes 58 minutes. And no, that is not a typo: 58 minutes, just short of an hour. Open the link in a tab and it can take hours to load. If you are using IE8, Chrome or Firefox you are forced to use IE7 or downgrade; I simply use IE7 mode in IE to make it work. This happens because they use something called DynaTrace.

    The site is also deeply insecure. Now guess how! Suppose my username is xyz and my password is abc. How do I reset the password? Keep in mind that the site login is different from the broadband username and password, yet the reset-password page accepts a broadband username. So if I want to reset your password, I simply need to know your broadband username: I log in with my own account, open the reset-password page, and fill in your broadband username - and it will work. (I have not actually tried this.) And broadband usernames are easy to guess, because they are all constructed the same way.

    Is this safe? Nope. There are many things on the site which make me feel it is a website from the 1900s. They still live in a popup world. These sites are nothing but crap: they don't work most of the time, and when they do work, they run too slowly.

    Read the article

  • SQL SERVER – Display Datetime in Specific Format – SQL in Sixty Seconds #033 – Video

    - by pinaldave
    A very common requirement of developers is to format datetime values to a specific need. Every geographic location has different date format needs: some countries follow the standard of mm/dd/yy and others dd/mm/yy. A developer's needs change as the geographic location changes. In SQL Server there are various functions to aid this requirement. There is the function CAST, which developers have been using for a long time, as well as the function CONVERT, which is a more enhanced version of CAST. In the latest version, SQL Server 2012, a new function, FORMAT, is introduced as well. In this SQL in Sixty Seconds video we cover two different methods to display datetime in a specific format: 1) the CONVERT function and 2) the FORMAT function. Let me know what you think of this video. Here is the script which is used in the video:

        -- http://blog.SQLAuthority.com
        -- SQL Server 2000/2005/2008/2012 onwards
        -- Datetime
        SELECT CONVERT(VARCHAR(30), GETDATE()) AS DateConvert;
        SELECT CONVERT(VARCHAR(30), GETDATE(), 10) AS DateConvert;
        SELECT CONVERT(VARCHAR(30), GETDATE(), 110) AS DateConvert;
        SELECT CONVERT(VARCHAR(30), GETDATE(), 5) AS DateConvert;
        SELECT CONVERT(VARCHAR(30), GETDATE(), 105) AS DateConvert;
        SELECT CONVERT(VARCHAR(30), GETDATE(), 113) AS DateConvert;
        SELECT CONVERT(VARCHAR(30), GETDATE(), 114) AS DateConvert;
        GO

        -- SQL Server 2012 onwards
        -- Various formats of Datetime
        SELECT CONVERT(VARCHAR(30), GETDATE(), 113) AS DateConvert;
        SELECT FORMAT(GETDATE(), 'dd mon yyyy HH:m:ss:mmm', 'en-US') AS DateConvert;
        SELECT CONVERT(VARCHAR(30), GETDATE(), 114) AS DateConvert;
        SELECT FORMAT(GETDATE(), 'HH:m:ss:mmm', 'en-US') AS DateConvert;
        GO

        -- Specific usage of the FORMAT function
        SELECT FORMAT(GETDATE(), N'"Current Time is "dddd MMMM dd, yyyy', 'en-US') AS CurrentTimeString;

    This video discusses CONVERT and FORMAT in a simple manner, but the subject is much deeper and there is a lot of information to cover along with it. I strongly suggest that you go over the related blog posts in the next section, as there is a wealth of knowledge discussed there.

    Related Tips in SQL in Sixty Seconds:
    * Get Date and Time From Current DateTime – SQL in Sixty Seconds #025
    * Retrieve – Select Only Date Part From DateTime – Best Practice
    * Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime
    * DATE and TIME in SQL Server 2008
    * Function to Round Up Time to Nearest Minutes Interval
    * Get Date Time in Any Format – UDF – User Defined Functions
    * Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2
    * Difference Between DATETIME and DATETIME2
    * Saturday Fun Puzzle with SQL Server DATETIME2 and CAST

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • Network Solutions Gold VIP Program

    - by GGBlogger
    Today I received an email advising me: "Congratulations! You have been selected for the Network Solutions Gold VIP Program." Now, I get a bunch of messages from Network Solutions because a long time ago I chose them as my registrar. I usually just save these to my Outlook Network Solutions folder and move on. This time my wife was in the room and I chuckled and said something about "Network Solutions has made me a Gold VIP member." She said, "So what does that mean?" Prompted by that, I scrolled down the page to look at my "perks". Let's see - special pricing, 1 year of free web forwarding, 1 hour of free support... etc. etc., until I hit "Enrollment in SafeRenew":

    SafeRenew* simplifies your renewal process. It protects your domain name registrations and corresponding services in the event you forget or are unable to renew on time.

    I'm thinking, "Now ain't that neat." I don't use auto-renew anyway and never have. Then I continued reading:

    Beginning June 24th, 2012 your domains* (that's 10 days away) will automatically renew. To ensure continuation of service, please be certain you have a valid credit card on file. If you do not wish for your services to be automatically renewed, please click here to log into account manager and opt out of the SafeRenew™ service.

    Whoa! Network Solutions is going to start spending my money without so much as a by-your-leave, sir!!! I don't bloody think so. And how truly magnanimous of them that I can "click here" to opt out of what I didn't opt in for. Talk about furious! I still am. Now the kicker: if my wife hadn't been curious, and if I'd had a working credit card on file, I wouldn't have known about this until one of my unwanted domain names auto-renewed. Of course I was told, "Oh - we'd have refunded your money," to which I say bull. In my view Network Solutions is guilty of a crime of some sort. They DO NOT have a right to spend my money without asking!!!!! So watch out folks.

    Read the article

  • GlassFish Back from Devoxx 2011 – Mature Java EE 6 and EE 7 well on its way

    - by alexismp
    I'm back from my 8th (!) Devoxx conference (I don't think I've missed one since 2004), and this conference keeps delivering on the promise of a Java developer paradise week. GlassFish was covered in many different ways, and I was not involved in a good number of them, which can only be a good sign!

    Several folks asked me when my Java EE 6 session with Antonio Goncalves was scheduled (we've been covering this for the past two years in university sessions, hands-on labs and regular sessions). It turns out we didn't team up this year (Antonio was crazy busy preparing for Devoxx France), and I had a regular GlassFish session instead. This year, Bert Ertman and Paul Bakker covered the 3-hour Java EE 6 University session ("Duke's Duct Tape Adventures") on the very first day (using GlassFish), with great success it seems. The Java EE 6 lab was also a hit, with a full room of folks covering a lot of technical ground in 2.5 hours (with GlassFish of course).

    GlassFish was also mentioned during Cameron Purdy's keynote (pretty natural, even if that surprised a number of folks who had not been closely following GlassFish), and in Stephan Janssen's keynote as the engine powering Parleys.com. In fact, Stephan was a speaker in the GlassFish session, describing how they went from a single-instance Tomcat setup to a clustered GlassFish + MQ environment. Also in the session was Johan Vos (of Mollom fame, among other things). Both of these customer testimonials were made possible because GlassFish has been delivering full Java EE 6 implementations for almost two years now, which is plenty of time to see serious production deployments on it.

    The Java EE Gathering (BOF) was very well attended and very lively, with many spec leads participating and discussing progress and also pain points with folks in the room. Thanks to all those attending this session; a good number of RFEs and priority points came out of it. While this wasn't a GlassFish session by any means, it's great to have the current RESTful admin and upcoming planned Java EE 7 features be a satisfactory answer to some of the requests from the attendance.

    Last but certainly not least, the GlassFish team is busy with Java EE 7 and version 4 of the product. This was discussed and shown during the Java EE keynote and in greater detail in Jerome Dochez's session. If the tweets on his demo (virtualization, provisioning, etc...) are any indication, it was very encouraging. Java EE 6 adoption is doing great, and GlassFish, being a production-quality reference implementation, is one of the first to benefit from this. And with GlassFish 4.0, we're looking at increasing product and community adoption by offering a pragmatic technical solution to Java EE PaaS deployments. Stay tuned! (The impatient among you are encouraged to grab a 4.0 build and provide feedback.)

    Read the article

  • How to bill a client for frequently-interrupted time

    - by Greg
    I find that when I'm working on hourly-billable projects (in particular, those that are research/design/architecture-oriented as opposed to straight coding) I'm easily distracted by any number of things: email, grabbing a drink (loss of focus, but nature happens), a link off the webpage I was reading, a wandering mind (easy when the job calls for a lot of thinking), etc. This results in very fragmented time - far too incremental, IMO, to accurately track with a time clock - and some time that is very gray. I frequently end up billing for only some fraction of the elapsed time in order to feel fair, but sometimes it takes a really long time to put in an 8-hour day.

    By contrast, when I've worked for salary I've not worried about whether I'm actively working at any given minute; I just get the job done. And I've never had anything but stellar reviews/feedback from past salaried employers, so I think I get the job done well. I personally believe in an 80/20 cycle: I get 80% of my work done during an inspired 20% of my time. But I have to screw around the other 80% of the time in order to get that first 20%.

    So the question: what billing/time-tracking policy can I adopt in order to be fair to my hourly customers without having to write off my own less-productive 80% that a salaried employer is willing to overlook in light of the complete package?

    Note: This question is not about how to be more productive or focused. It's about how to work around whatever salient limitations I have in a way that's fair to both me and my customers.

    Update: A little clarification (to pre-emptively stop some righteous indignation): I currently have half a dozen different project/client groups. It's not a great situation and I'm working on reducing that down to two, but it's my current reality. It's very easy to get off on a thread related to a different project than the one I'm clocking, and I'm not always conscious of it at the time. [I did not intend the question to mean that I was off playing games or making personal calls, etc., and have adjusted the wording above to be clearer. Most of the time. I am only human, and sometimes the mind does force you to take a break! :-)]

    Read the article

  • The Legend of the Filtered Index

    - by Johnm
    Once upon a time there was a big and bulky twenty-nine-million-row table. He tempestuously hoarded data like a maddened shopper amid a clearance sale. Despite his leviathan nature and eager appetite, he loved to share his treasures. Multitudes from all around would embark upon an epiphanous journey to sample the contents of his mythical purse of knowledge.

    After a long day of performing countless table scans, the table was overcome with fatigue. After a short period of unavailability, he decided that he needed to consider a new way to share his prized possessions in a more efficient manner. Thus, a non-clustered index was born. She dutifully directed the pilgrims that sought the table's data - no longer would those despicable table scans darken the doorsteps of this quaint village.

    And yet, the table's voracious appetite did not wane. Any bit or byte that wandered near him was consumed with vigor. His columns and rows continued to expand beyond the expectations of even the most liberal estimation. As his rows grew grander they became more difficult to organize and maintain. The once bright and cheerful disposition of the non-clustered index began to dim. The wait time for those who sought the table's treasures began to increase. Some of those who came to nibble upon the banquet of knowledge even timed out and never realized their aspired enlightenment.

    After a period of heart-wrenching introspection, the table decided to drop the index and attempt another solution. At the darkest hour of the table's desperation came a grand flash of light. As his eyes regained their vision, there stood several creatures who looked very similar to his former, beloved non-clustered index. They all spoke in unison as they introduced themselves: "Fear not, for we come to organize your data and direct those who seek to partake in it. We are the filtered index."

    Immediately, the filtered indexes began to scurry about. One took control of the past quarter's data. Another took control of the previous quarter's data. All of the remaining filtered indexes followed suit. As the nearly gluttonous habits of the table scaled forward, more filtered indexes appeared. Regardless of the table's size, all of the eagerly awaiting data seekers were delivered data as quickly as a Jimmy John's sandwich. The table was moved to tears. All in the land of data rejoiced and lived happily ever after, at least until the next data challenge crept from the fearsome cave of the unknown. The End.
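    Legend aside, here is a minimal sketch of the moral in T-SQL (table, column, and date values are hypothetical; filtered indexes require SQL Server 2008 or later):

        -- One filtered index per quarter keeps each index small,
        -- so queries against recent rows touch far fewer pages
        CREATE NONCLUSTERED INDEX IX_BigTable_2010Q1
            ON dbo.BigTable (TransactionDate, CustomerId)
            WHERE TransactionDate >= '20100101'
              AND TransactionDate <  '20100401';

        CREATE NONCLUSTERED INDEX IX_BigTable_2010Q2
            ON dbo.BigTable (TransactionDate, CustomerId)
            WHERE TransactionDate >= '20100401'
              AND TransactionDate <  '20100701';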

    Read the article

  • SQL SERVER – Public Training and Private Training – Differences and Similarities

    - by pinaldave
    Earlier this year, I was on the road with SQL Server seminars. I did many SQL Server performance trainings and SQL Server performance consultations throughout the year, but I feel the most rewarding exercise is always the one where the instructor learns something from the students, too. I was just talking to my wife, Nupur - she manages my logistics and administration-related activities - and she pointed out that this year I have done 62% consultations and 38% trainings. I was a bit surprised, as I thought the numbers would be reversed. Every time I review the year, I think of the training done at organizations. Well, I cannot argue with reality: I have done more consultations (some would call them projects) than training. I told my wife that I enjoy consultations more than training. She promptly asked me a question which was not directly related but made me think for a long time, and in the end it resulted in this blog post.

    Nupur asked me: what do I enjoy the most, public training or private training? I had a long conversation with her on this subject. I am not going to write a long, life-changing blog post here. This is rather a small post condensing my one-hour discussion into 200 words.

    Public training is fun because...
    * There are lots of different kinds of attendees
    * There are always vivid questions
    * Lots of questions on questions
    * Less interest in theory and more interest in demos
    * Good opportunity for future business

    Private training is fun because...
    * There is a focused interest
    * One question is discussed deeply because of existing company issues
    * More interest in "how it happened" concepts - under-the-hood operations
    * Good connection with attendees
    * This is also a good opportunity for future business

    Here I will stop my monologue, and I want to open up this question to all of you:

    Question to attendees: Which one do you enjoy the most - public training or private training?
    Question to trainers: What do you enjoy the most - public training or private training?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

    - by Brian
    Oracle's SPARC T4-2 server achieved world record performance on Oracle's PeopleSoft Enterprise Financials 9.1, executing 20 million journal lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark.

    The SPARC T4-2 server was able to process 20 million general ledger journal edit and post batch jobs in 8.92 minutes on this benchmark, which reflects a large customer environment that utilizes a back-end database of nearly 500 GB. This demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour. The SPARC T4-2 server delivered more than 146 MB/sec of I/O throughput with Oracle Database 11g running on Oracle Solaris 11.

    Performance Landscape

    Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with version 9.1 are not comparable to the previous version of the benchmark, PeopleSoft Financials Benchmark 9.0, due to a significant change in the data model; version 9.1 supports batch only.

        PeopleSoft Financials Benchmark, Version 9.1
        Solution Under Test                        Batch (min)
        SPARC T4-2 (2 x SPARC T4, 2.85 GHz)        8.92

    Results from PeopleSoft Financials Benchmark 9.0:

        PeopleSoft Financials Benchmark, Version 9.0
        Solution Under Test                                               Batch (min)   Batch with Online (min)
        SPARC Enterprise M4000 (Web/App) + SPARC Enterprise M5000 (DB)    33.09         34.72
        SPARC T3-1 (Web/App) + SPARC Enterprise M5000 (DB)                35.82         37.01

    Configuration Summary

    Hardware Configuration:
    * 1 x SPARC T4-2 server (2 x SPARC T4 processors, 2.85 GHz; 128 GB memory)

    Storage Configuration:
    * 1 x Sun Storage F5100 Flash Array (for database and redo logs)
    * 2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup)

    Software Configuration:
    * Oracle Solaris 11 11/11 SRU 7.5
    * Oracle Database 11g Release 2 (11.2.0.3)
    * PeopleSoft Financials 9.1 Feature Pack 2
    * PeopleSoft Supply Chain Management 9.1 Feature Pack 2
    * PeopleSoft PeopleTools 8.52, latest patch (8.52.03)
    * Oracle WebLogic Server 10.3.5
    * Java Platform, Standard Edition Development Kit 6 Update 32

    Benchmark Description

    The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then the journal status is changed to posted. In this benchmark, Journal Edit & Post is set up to edit and post both inter-unit and regular multi-currency journals. The benchmark processes 20 million journal lines using Application Engine for edits and COBOL for post processes.

    See Also: Oracle PeopleSoft Benchmark White Papers (oracle.com); SPARC T4-2 Server (oracle.com); OTN PeopleSoft Financial Management; OTN Oracle Solaris; OTN Oracle Database 11g Release 2 Enterprise Edition.

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.
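    As a quick check of the "100 million journal lines in less than 1 hour" statement, simple arithmetic over the published result suffices (a worked example only, not part of the official disclosure):

        -- 20,000,000 lines in 8.92 minutes is ~2.24 million lines/minute;
        -- at that rate, 100,000,000 lines take:
        SELECT 100000000 / (20000000 / 8.92) AS minutes_for_100m;  -- ~44.6 minutes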

    Read the article

  • A New Home for E-Business Suite Customer Adoption Information

    - by linda.fishman.hoyle
    Phew! I made it! A new home with my name. Let's talk about E-Business Suite. So much is going on, and more and more customers are upgrading to and implementing the latest release. In this blog entry I will highlight the press release we issued 2 weeks ago about our Applications Unlimited success. In the release we name several customers who are live on E-Business Suite Release 12.1, and we include a fabulous quote from a customer who is doing great things with our product.

    Here is a link to the press release. To make it easy for you, I am pulling out just the E-Business Suite information.

    Oracle E-Business Suite: Oracle® E-Business Suite Release 12.1 provides organizations of all sizes, across all industries and regions, with a global business foundation that helps them reduce costs and increase productivity through a portfolio of rapid value solutions, integrated business processes and industry-focused solutions. The latest version of the Oracle E-Business Suite was designed to help organizations make better decisions and be more competitive by providing a global or holistic view of their operations. Abu Dhabi Media Company, Agilysis, C3 Business Solutions, Chicago Public Schools, Datacard Group, Guidance Software, Leviton Manufacturing, McDonald's, MINOR International, Usana Health Sciences, Zamil Plastic Industries Ltd. and Zebra Technologies are just a few of the organizations that have deployed the latest release of the Oracle E-Business Suite to help them make better decisions and be more competitive, while lowering costs and increasing performance.

    Customer Speaks

    "Leviton Manufacturing makes a very diverse line of products, including electrical devices and data center products, that we sell globally. We upgraded to the latest version of the Oracle E-Business Suite Release 12.1 to support our service business with change management, purchasing, accounts payable, and our internal IT help desk," said Bob MacTaggart, CIO of Leviton Manufacturing. "We consolidated seven websites that we used to host individually onto iStore. In addition, we run a site, using Oracle E-Business Suite configurator, pricing and quoting, for our sales agents to do configuration work. This site can now generate a complete sales proposal using Oracle functionality; we actually generate CAD drawings - the actual drawings themselves - based on configuration results. It used to take six to eight weeks to generate these drawings, and now it's all done online in one to two hours by our sales agents themselves, totally self-service. It does everything they need. From our point of view that is a major business success. Not only is it a very cool, innovative application, but it also puts us about two years ahead of our competition."

    Read the article

  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation. I learned a bunch. In this series, I'll be sharing what I learned with you.

    If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably piled on planning processes (documentation, requirements, policies, meetings, committees) to the point of stalling progress altogether. In my opinion, the typical company resembles this quote from Tom DeMarco: "It isn't enough just to do things right - we also had to say in advance exactly what we intended to do and then do exactly that."

    In the 1980s, Toyota turned the tables and revolutionized the automobile industry with its approach of "Lean Manufacturing." A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that "Just-in-Time" was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as "lean production."

    Lean thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication: if the people who produce the software are respected and communicate efficiently, it is more likely that they will deliver a good product and that the final customer will be satisfied.

    In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, the principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, LensCrafters, L.L.Bean, Southwest Airlines, Digital River and eBay.

    Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. Lean was later brought to software development by Tom and Mary Poppendieck. Here's one of their books: Implementing Lean Software Development: From Concept to Cash.

    Our in-house presentations are supposed to run no more than 45 minutes. I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little - I only covered the 7 principles and a single practice.

    In the next part of the series, we'll dive into Principle #1: Eliminate Waste. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • Warm Reception By Partners at EMEA Manageability Forum

    - by Get_Specialized!
    For the EMEA partners who were able to attend the event in Istanbul, Turkey, thank you for your attendance and feedback. As you can see, the weather kept most of us inside during the event, and at times there was even some snow. And while it may have been chilly outside, there was a warm reception from partners who traveled from all over EMEA to hear from other Oracle Specialized Partners and subject matter experts about the opportunities and benefits of Oracle Enterprise Manager and Exadata Specialization.

    Here you can see David Robo, Oracle Technology Director for Manageability, kicking off the event, followed later by Patrick Rood from the Oracle Indirect Manageability business.

    A special thank you to all the partner speakers, including Ron Tolido, VP and CTO of Application Services Continental Europe at Capgemini, who delivered a very innovative keynote in which many in attendance learned that Black Swans do exist. And while on break, the interactivity among partners continued; it was great to see such innovative partners who had listed their achieved specializations on their business cards.

    Here we can see Oracle Enterprise Manager customer, Turkish Oracle User Group board member and blogger Gokhan Atil sharing his product experiences with other attendees. Additionally, Christian Trieb of Paragon Data shared with other partners what the German Oracle User Group (DOAG) is doing around manageability, along with an invitation to submit papers for its next event.

    Here we can see, at one of the breaks, one of the event organizers, Javier Puerta (left), Oracle Director of Partner Programs, joined by Sebastiaan Vingerhoed (middle), Oracle EE & CIS Manager for Manageability and speaker on managing the application lifecycle, and Julian Dontcheff (right), Global Head of Database Management at Accenture.

    Below is Julian Dontcheff delivering his partner presentation on Exadata and lifecycle management. Just after his plane landed and a one-hour Turkish taxi ride to the event location, Julian still took the time to sit down with me and provide some extra insights on his experiences of managing enterprise infrastructure with Oracle Enterprise Manager.

    Below is Mark McGill, Oracle Principal Product Manager on the Oracle Enterprise Manager product management team, presenting to partners on how you can perform chargeback and metering with Oracle Enterprise Manager 12c Cloud Control.

    Overall, it was a great event. An extra thank you to the OPN Specialized Partners who presented, to the partners who attended, and to the Oracle team members who organized the event and presented.

    Read the article

  • Introducing Glimpse – Firebug for your server

    - by Neil Davidson
    Here at Red Gate, we spend every waking hour trying to wow .NET and SQL developers with great products. Every so often, though, we find something out in the wild which knocks our socks off by taking "ingeniously simple" to a whole new level. That's what a little community led by developers Nik Molnar and Anthony van der Hoorn has done with the open source tool Glimpse.

    Glimpse describes itself as "Firebug for the server." You drop the NuGet package into your ASP.NET project, and then - like magic* - your web pages will bare every detail of their execution. Even by our high standards, it was trivial to get running: if you can use NuGet, you're already there. You get all that lovely detail without changing any code.

    Our feelings go beyond respect for the developers who designed and wrote Glimpse; we're thrilled that Nik and Anthony have come to work for Red Gate full-time. They're going to stay in control of the project and keep doing open source development work on Glimpse. In the medium term, we're hoping to make paid-for products which plug into the free open source framework, especially in areas like performance profiling where we already have some deep technology. First, though, Glimpse needs to get from beta to a v1. Given the breakneck pace of new development, this should be only a month or so away.

    Supporting an open source project is a first for Red Gate, so we're going to be working with Nik and Anthony, with the Glimpse community, and even with other vendors to figure out what "great" looks like from a user perspective. Only one thing is certain: this technology deserves a wider audience than the 40,000 people who have already downloaded it, so please have a look and tell us what you think.

    You can hear more about what the Glimpse developers think on the Glimpse blog, and there are plenty more technical facts over at our product manager's blog. If you have any questions or queries, please tweet with the #glimpse hashtag or contact the Glimpse team directly on [email protected].

    [*That's "magic" in the Arthur C. Clarke "sufficiently advanced technology" sense, of course]

    Neil Davidson
    Co-founder and Joint CEO, Red Gate Software
    http://twitter.com/neildavidson

    Read the article

  • SQL Developer Debugging, Watches, Smart Data, & Data

    - by thatjeffsmith
    After presenting the SQL Developer PL/SQL debugger for about an hour yesterday at KScope12 in San Antonio, my boss came up and asked, "Now, would you really want to know what the Smart Data panel does?" Apparently I had 'made up' my own story about that panel's intent based on my experience with it. Not good, Jeff, not good. It was a very small point of my presentation, but I probably should have read the docs:

    "The Smart Data tab displays information about variables, using your Debugger: Smart Data preferences. You can also specify these preferences by right-clicking in the Smart Data window and selecting Preferences."

    (Screenshot: Debugger Smart Data preferences, controlling the number of variables to display)

    The Smart Data panel auto-inspects the last X accessed variables. So if you have a program with 26 variables, instead of showing you all 26, it will show you just the last few variables that were referenced in your program. If you click on the 'Data' debug panel, you'll see EVERYTHING. And if you only want to see a very specific set of values, then you should use Watches.

    The Smart Data Panel
    As I step through the code, the variables being tracked change as they are referenced. Only the most recent ones display. This is controlled by the 'Maximum Locations to Remember' preference. (Screenshot: stepping through the code, seeing the latest variables accessed)

    The Data Panel
    All variables are displayed. This might be information overload on large PL/SQL programs where you have many dozens or even hundreds of variables to track. (Screenshot: shows everything, all the time)

    Watches
    Watches are added manually and only show what you ask for. (Screenshot: data on demand - add a watch to track a specific variable)

    Remember, you can interact with your data
    If you want to do more than just watch, you can right-click on a data element and change the value of the variable as the program is running. This is one of the primary benefits of debugging over using DBMS_OUTPUT to track what's happening in your program. (Screenshot: changing values while the program is running to test your 'What if?' scenarios)

    Read the article

  • More details on America's Cup use of Oracle Data Mining

    - by charlie.berger
    BMW Oracle Racing's America's Cup: A Victory for Database Technology

    BMW Oracle Racing's victory in the 33rd America's Cup yacht race in February showcased the crew's extraordinary sailing expertise. But to hear them talk, the real stars weren't actually human. "The story of this race is in the technology," says Ian Burns, design coordinator for BMW Oracle Racing.

    Gathering and Mining Sailing Data
    From the drag-resistant hull to its 23-story wing sail, the BMW Oracle USA trimaran is a technological marvel. But to learn to sail it well, the crew needed to review enormous amounts of reliable data every time they took the boat for a test run. Burns and his team collected performance data from 250 sensors throughout the trimaran at the rate of 10 times per second. An hour of sailing alone generates 90 million data points.

    BMW Oracle Racing turned to Oracle Data Mining in Oracle Database 11g to extract maximum value from the data. Burns and his team reviewed and shared raw data with crew members daily using a web application built in Oracle Application Express (Oracle APEX). "Someone would say, 'Wouldn't it be great if we could look at some new combination of numbers?' We could quickly build an Oracle Application Express application and share the information during the same meeting," says Burns.

    Analyzing Wind and Other Environmental Conditions
    Burns then streamed the data to the Oracle Austin Data Center, where a dedicated team tackled deeper analysis. Because the data was collected in an Oracle Database, the data center team could dive straight into the analytics problems without having to do any extract, transform, and load processes or data conversion. And the many advanced data mining algorithms in Oracle Data Mining allowed the analytics team to build vital performance analytics. For example, the technology team could remove masking elements such as environmental conditions to give accurate data on the best mast rotation for certain wind conditions.

    Without the data mining, Burns says, the boat wouldn't have run as fast. "The design of the boat was important, but once you've got it designed, the whole race is down to how the guys can use it," he says. "With Oracle database technology we could compare the incremental improvements in our performance from the first day of sailing to the very last day. With data mining we could check data against the things we saw, and we could find things that weren't otherwise easily observable and findable."
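    To make the collection side concrete, here is a minimal, hypothetical sketch of what a 10-Hz feed from 250 sensors into an Oracle table might look like over JDBC. The table name, column layout, and connection details are illustrative assumptions, not details of the BMW Oracle Racing system:

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class SensorFeed {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection; requires the Oracle JDBC driver (ojdbc) on the classpath.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "race", "secret")) {
                conn.setAutoCommit(false);
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO sensor_readings (sensor_id, sampled_at, value) VALUES (?, ?, ?)")) {
                    // One second of data: 250 sensors sampled 10 times each = 2,500 rows.
                    for (int tick = 0; tick < 10; tick++) {
                        for (int sensor = 1; sensor <= 250; sensor++) {
                            ps.setInt(1, sensor);
                            ps.setTimestamp(2, new Timestamp(System.currentTimeMillis()));
                            ps.setDouble(3, Math.random()); // stand-in for a real reading
                            ps.addBatch();
                        }
                    }
                    ps.executeBatch(); // batched inserts keep a high-frequency feed cheap
                }
                conn.commit();
            }
        }
    }
    ```

    Landing the raw feed directly in the database is what let the analytics team skip the extract, transform, and load step described above.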

    Read the article

  • Customer Support Spotlight: Clemson University

    - by cwarticki
    I've begun a Customer Support Spotlight series that highlights our wonderful customers and Oracle loyalists. A week ago I visited Clemson University. As I travel to visit and educate our customers, I provide many useful tips and tricks and support best practices (as found on my blog and Twitter). Most of all, I always discover an Oracle gem who deserves recognition for their hard work and advocacy.

    Meet George Manley. George is a storage engineer who has worked in Clemson's data center all through college, partially in the Hardware Architecture group and partially in the Storage group. George and the rest of the storage team work with nearly all of the storage technologies in use at Clemson. This includes a wide array of different vendors' disk arrays, with most of them being Oracle/Sun 2540s. He also works with SAM/QFS, ACSLS, and the SL8500 tape libraries (all three Oracle/Sun products). (Pictured L to R: Matt Schoger (Oracle), Mark Flores (Oracle) and George Manley.)

    George was kind enough to take us on a data center tour. It was amazing. I rarely get to see the inside of data centers, and this one was massive.

    Clemson Computing and Information Technology's physical resources include the main data center located in the Information Technology Center at the Innovation Campus and Technology Park. The core of Clemson's computing infrastructure, the data center has 21,000 sq ft of raised floor and is powered by a 14 MW substation; the ITC power capacity is 4.5 MW. The data center is home to both enterprise and HPC systems, and is staffed by CCIT staff on a 24-hour basis from a state-of-the-art network operations center within the ITC. A smaller business continuance data center is located on the main campus. The data center serves a wide variety of purposes, including HPC (supercomputing) resources which are shared with other universities throughout the state, the state's Medicaid processing system, and nearly all other needs of Clemson University. Yes, that's no typo (14,256 cores and 37 TB of memory!).

    Thanks for the tour, George, and thank you very much for your time. The tour was fantastic. I enjoyed getting to know your team and I look forward to many successes from Clemson using Oracle products.

    -Chris Warticki, Global Customer Management

    Read the article

  • ATG Live Webcast: Planning Your Oracle E-Business Suite Upgrade from 11i to 12.1 and Beyond

    - by BillSawyer
    I am pleased to announce the next ATG Live Webcast event on December 1, 2011:

    Planning Your Oracle E-Business Suite Upgrade from 11i to 12.1 and Beyond

    Are you still on 11i and wondering about your next steps in the E-Business Suite lifecycle? Are you wondering what the upgrade considerations are going to be for 12.2? Do you want to know the best practices for upgrading E-Business Suite, regardless of your version? If so, this is the webcast for you. Join Anne Carlson, Senior Director, Oracle E-Business Suite Product Strategy, for this one-hour webcast with Q&A. This session will give you a framework to make informed upgrade decisions for your E-Business Suite environment. This event is targeted at functional managers, EBS project planners, and implementers.

    The agenda for Planning Your Oracle E-Business Suite Upgrade from 11i to 12.1 and Beyond includes the following topics:
      Business Value of the Upgrade
      Starting Your Upgrade Project
      Planning Your Upgrade Approach
      Preparing for Your Upgrade Execution
      Extended Support for 11i
      Additional Resources

    Date: Thursday, December 1, 2011
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Anne Carlson, Senior Director, Oracle E-Business Suite Product Strategy
    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
      Domestic Participant Dial-In Number: 877-697-8128
      International Participant Dial-In Number: 706-634-9568
      Additional International Dial-In Numbers Link
      Dial-In Passcode: 98515

    To see the presentation, the direct access web conference details are:
      Website URL: https://ouweb.webex.com
      Meeting Number: 271378459

    If you miss the webcast, or you have missed any webcast, don't worry - we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training at http://blogs.oracle.com/stevenChan/entry/e_business_suite_technology_learning

    If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Partner Webcast – Oracle Coherence Applications on WebLogic 12c Grid - 21st Nov 2013

    - by Thanos Terentes Printzios
    Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the "internet of things", social, mobile, cloud and always-connected devices, so does the need to handle more data in real time, offload over-burdened shared data services, and provide availability guarantees.

    The latest release, Oracle Coherence 12c, brings great improvements in ease of use, integration, and RASP (Reliability, Availability, Scalability, and Performance). In addition, it features an innovative approach to building and deploying a Coherence application as an integral part of a typical JEE enterprise application. Coherence GAR archives and Coherence managed servers are now first-class citizens of JEE applications and Oracle WebLogic domains, respectively. That enables even easier development, deployment and management of complex multi-tier enterprise applications powered by rich data grid features. Oracle Coherence 12c makes your solution ready for the future of big data and the always-online world.

    This webcast focuses on demonstrating:
      How to create a Coherence application using Oracle Enterprise Pack for Eclipse 12.1.2.1.1 (Kepler release)
      How to package the application as a GAR archive inside a deployable EAR application
      How to deploy the application to multi-tier WebLogic clusters
      How to define and configure the WebLogic domain for the tiered clusters hosting both the data grid and client JEE applications

    Finally, we will expose the data in the grid to external systems using REST services and create a simple web interface to the underlying data using Oracle ADF Faces components. Join us on this technology webcast to find out more about how Oracle Cloud Application Frameworks brings together the key industry-leading technologies of Oracle Coherence and WebLogic 12c, delivering next-generation applications.

    Agenda:
      Introduction to Oracle Coherence
      What's new in the 12c release
        POF annotations
        Live Events
        Elastic Data (flash storage support)
        Managed Coherence Servers for Oracle WebLogic
        Coherence Applications (Grid Archive)
      Live Demonstration
        Creating and configuring Coherence servers forming the data tier cluster
        Creating a simple Coherence grid application in Eclipse
        Adding REST support and creating a simple ADF Faces client application
        Deploying the grid and client applications to separate tiers in the WebLogic topology
        HA capabilities of the data tier
      Summary - Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend.

    Duration: 1 hour

    REGISTER NOW

    For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com
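    As a small taste of the POF annotations topic on the agenda, here is a minimal, hypothetical sketch of an annotated domain class and a client that reads and writes it through a named cache. The class, the cache name, and the POF registration are illustrative assumptions, not material from the webcast; the annotated class would still need to be registered in the application's POF configuration:

    ```java
    import com.tangosol.io.pof.annotation.Portable;
    import com.tangosol.io.pof.annotation.PortableProperty;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Hypothetical domain class; @Portable enables annotation-driven POF
    // serialization without hand-written serialization code.
    @Portable
    public class Trade {
        @PortableProperty(0)
        private String symbol;

        @PortableProperty(1)
        private int quantity;

        public Trade() {} // POF deserialization needs a default constructor

        public Trade(String symbol, int quantity) {
            this.symbol = symbol;
            this.quantity = quantity;
        }

        @Override
        public String toString() {
            return symbol + " x " + quantity;
        }

        public static void main(String[] args) {
            // "trades" is an assumed cache name, defined in the cache
            // configuration packaged inside the GAR for WebLogic deployment.
            NamedCache cache = CacheFactory.getCache("trades");
            cache.put("T1", new Trade("ORCL", 100));
            Trade t = (Trade) cache.get("T1");
            System.out.println(t); // ORCL x 100
        }
    }
    ```

    Packaged in a GAR and deployed to a managed Coherence server tier, the same cache becomes available to the client JEE applications running in the other WebLogic cluster, which is the deployment pattern the session walks through.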

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >