Search Results

Search found 1068 results on 43 pages for 'eye of the storm'.


  • ArchBeat Link-o-Rama for November 8, 2012

    - by Bob Rhubart
    Webcast: Meeting Customer Expectations in the New Age of Retail
    Keep your eye on this live webcast as Sanjeev Sharma (Principal Product Director, Oracle Exalogic), Kelly Goetsch (Senior Principal Product Manager, Oracle Commerce), and Dan Conway (Senior Product Manager, Oracle Retail) offer real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems. Live, Thursday Nov 8, 10am PT / 1pm ET.

    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
    "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger

    SOA Still Not Dead: Ratification of Governance Standard Highlights SOA's Continued Relevance
    So just about the time I dig into Google Trends to learn that the conversation about governance peaked in 2004, along comes this InfoQ article by Richard Seroter. And of course you've already listened to the OTN ArchBeat Podcast about governance, right? Right?

    Implications of Java 6 End of Public Updates for Oracle E-Business Suite Users | Steven Chan
    The short version is: "Nothing will change for EBS users after February 2013." According to Steven Chan, "EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6." You'll find additional information on Steven's blog.

    ADF Mobile Custom Javascript – iFrame Injection | John Brunswick
    The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.

    How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers
    "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.

    Thought for the Day
    "(When) asking skilled architects…what they do when confronted with highly complex problems…(they) would most likely answer, 'Just use Common Sense.' (A) better expression than 'common sense' is 'contextual sense' — a knowledge of what is reasonable within a given context. Practicing architects through education, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006)
    Source: SoftwareQuotes.com

    Read the article

  • Fonts look squashed or stretched in the browser on Ubuntu

    - by Arjun Menon
    Fonts in the browser on Ubuntu look squashed/stretched compared to Windows/OSX. This image shows exactly what I mean: http://i.stack.imgur.com/suUXX.png

    I installed msttcorefonts and configured both Chrome & FF to use Microsoft fonts (Arial, Times New Roman) instead of the default ones. While the MS fonts made web pages appear a bit different, regardless of which font it was, the squashed/stretched look remained. FreeSans looks a little different from Arial, but it too is rendered squashed/stretched like Arial on both FF & Chrome. Opera renders the Wikipedia page differently from FF & Chrome, but the fonts look squashed/stretched on it as well.

    I used to run Kubuntu prior to switching to Ubuntu, and at some point I managed to get the fonts in Chrome (only Chrome) to look exactly like in the image on the left. I have no idea how I did it, though. Firefox and Rekonq retained the squashed/stretched look. I had been using Rekonq for a while, then switched to FF. While using both browsers I had done various things to get the fonts to look better on them, with no success - like installing the MS fonts & configuring both browsers to use them. Then, after some time, I installed Chrome and the fonts magically looked perfect on it - just like on the right-hand side of the image. In fact, the font smoothing looked better (to my eye) than on Windows and OSX. All 3 OSes use subtly different font smoothing strategies and the differences stand out.

    Later, I formatted & installed Ubuntu 12.04. The first thing I did was install msttcorefonts & then Chrome. To my dismay, the fonts in Chrome looked just as squashed/stretched as they did in Firefox. There's no browser (except Wine Internet Explorer) that renders fonts properly on my Ubuntu setup right now. Fixing this is definitely possible, since I was able to do it on Kubuntu, but apparently it requires some mysterious tweaking. Would anyone be willing to help me out?
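    For reference, rendering differences like this usually come down to fontconfig hinting settings rather than the fonts themselves. Below is a minimal per-user sketch of the kind of tweaking involved; the file path and the specific values are illustrative assumptions, not a confirmed fix for this particular setup:

        <?xml version="1.0"?>
        <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
        <!-- Hypothetical ~/.config/fontconfig/fonts.conf (the path varies by release;
             older Ubuntus use ~/.fonts.conf). Softens hinting so glyphs are not
             snapped as aggressively to the pixel grid. -->
        <fontconfig>
          <match target="font">
            <edit name="antialias" mode="assign"><bool>true</bool></edit>
            <edit name="hinting" mode="assign"><bool>true</bool></edit>
            <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
            <edit name="rgba" mode="assign"><const>rgb</const></edit>
          </match>
        </fontconfig>

    Restarting the browser (or logging out and back in) picks up the change; switching hintstyle between hintslight, hintmedium and hintfull is the usual way to compare results.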

    Read the article

  • PMI South Florida Job Fair 2010

    - by Sam Abraham
    The South Florida Chapter of the Project Management Institute is planning a Job Fair slated for September 2010. This year has seen a significant improvement in the job market, with many surveyed companies indicating their intention to add temporary or permanent staff to their workforce in the near future.

    The Job Fair initiative fits well within the chapter's message and goal for this year: "Exercising Social Responsibility" - our responsibility as PMI volunteers at all levels towards our members and the surrounding community.

    Our free-to-members annual Job Fair will play an important role in connecting recruiters, exhibitors and job seekers, thereby helping hiring companies gain access to a large talent pool at an affordable cost (totally free in certain cases - details to be revealed once finalized) while giving job seekers centralized access to many reputable hiring companies in the South Florida area.

    My involvement in the 2010 Job Fair started with a good conversation I had with Bernie Saenz, President and CEO of the South Florida PMI Chapter, at a networking event a few months ago. I had approached him with a few ideas in line with his goal to serve the community and our members given today's difficult economic climate. Bernie indicated that the Project Manager for the 2010 Job Fair had just been appointed and invited me to participate in this important initiative as a member of her team. I simply couldn't resist and gladly accepted the invitation.

    I chose an initial role as Recruiter Relations Lead, which entails developing documentation and timelines for our project plan with regard to recruiter engagement, as well as reaching out to recruiting companies to meet target representation at the Job Fair.

    Being heavily involved in the local technical community has afforded me the privilege of coming in contact with many reputable technology recruiting companies. (As a matter of fact, I already have two very reputable IT recruiting firms interested in joining us at the fair.)

    The excitement for me, however, will be finding and reaching out to recruiters in areas of project management and leadership that I might not have been exposed to before, including finance, healthcare and marketing, to name a few.

    Keep an eye out in the upcoming few weeks for official announcements on the PMI South Florida Job Fair 2010.

    Environment.Exit(0);

    -Sam Abraham
    Site Director - West Palm Beach .Net User Group
    Recruiter Relations Lead - PMI South Florida Job Fair 2010
    Project Lead - Mentoring Programs - PMI South Florida

    Read the article

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting in regard to the direction most of the technical world is going. We have all been inundated with reasons to utilize cloud computing - price, availability, even scalability - but what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications.

    The overall point of the article was pretty simple. It all boiled down to the summation that hosting "things" in the cloud (email, documents, etc.) is interpreted differently under the law regarding constitutional search and seizure than, say, a document or item that is kept in physical form at a business or home. If you have something physically stored, someone would have to get a warrant to search for it or seize it; but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology.

    So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. In one example, this upcoming version gracefully lends itself to multi-tenancy, so that online or "cloud" hosting would be possible by service providers. Another aspect of the upcoming version is that it has updated its ability to store content outside of the database and in a cheaper, commoditized storage facility. This is called Remote Blob Storage (or RBS), which is the next evolution of External Blob Storage (or EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to laws that do not require a warrant to search or seize their information stored in the cloud.

    It will be interesting to see how this all plays out in the next few months. Usually the laws change slowly in comparison to technology, so it might be a while until we see whether it is actually constitutional to treat someone's content in the cloud differently than content in their possession. However, until there is some type of parity, or more concrete laws regarding the differences, be very careful about what you put in the cloud.

    Michael

    Read the article

  • Educause Top-Ten IT Issues - the most change in a decade or more

    - by user739873
    The Education IT Issue Panel has released the 2012 top-ten issues facing higher education IT leadership, and instead of the customary reshuffling of the same deck, the issues reflect much of the tumult and dynamism facing higher education generally. I find it interesting (and encouraging) that at the top of this year's list is "Updating IT Professionals' Skills and Roles to Accommodate Emerging Technologies and Changing IT Management and Service Delivery Models." This reflects, in my view, the realization that higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia.

    The remaining nine top issues all speak, in some form or fashion, to the need for dramatic change - not just in the area of "funding IT" (code for cost containment or reduction), but rather in the need to increase the effectiveness and efficiency of the institution through the use of technology: leveraging the wave of BYOD (Bring Your Own Device) to the institution's advantage, rather than viewing it as a threat and a problem to be contained. Although it's #10 of 10, IT governance (and the establishment and implementation of the governance model throughout the institution) is key to effectively acting upon many of the preceding issues in this year's list. In the majority of cases, the technology exists to meet the needs and requirements of many of the challenges outlined in the top-ten issues list. Which brings me to my next point.

    Although I try not to sound too much like an Oracle commercial in these (all too infrequent) blog posts, I can't help but point out how much confluence there is between several of the top issues this year and what my colleagues and I have been evangelizing for some time. Starting from the bottom of the list up:

    1) I'm gratified that research, and the IT challenges it presents, has made the cut. Big Data (or Large Data, as it's phrased in the report) is rapidly going to overwhelm much of what exists today, even at our most prepared and well-equipped research universities. Combine large data with the significantly more stringent requirements around data preservation, archiving, sharing, curation, etc. coming from granting agencies like NSF, and you have a brewing storm that could result in a lot of "one-off" solutions to a problem that could very well be addressed collectively and "at scale."

    2) Transformative effects of IT - while I see more and more examples of this, there is still much more that can be achieved. My experience tells me that culture (as the report indicates, or at least poses the question) gets in the way more than technology not being up to the task. We spend too much time on "context" and not "core," and get lost in the weeds on the journey to truly transforming the institution with technology.

    3) Analytics as a key element in improving various institutional outcomes. In our work around student success, we see predictive "academic" analytics as essential to getting in front of the student success issue, regardless of how an institution or collection of institutions defines success. Analytics must be part of the fabric of the key academic enterprise applications, not a bolt-on. We will spend a significant amount of time on this topic during our semi-annual Education Industry Strategy Council meeting in Washington, D.C. later this month.

    4) Cloud strategy for the broad range of applications in the academic enterprise. Some of the recent work by Casey Green at the Campus Computing Survey would seem to indicate that there is movement in this area, but mostly in what has been termed "below the campus" application areas such as collaboration tools, recruiting, and alumni relations. It's time to get serious about sourcing elements of mature applications like student information systems, HR, finance, etc. using a model other than the traditional on-campus custom deployment.

    I've only selected a few areas of the list to highlight, but the unifying theme here (and this is where I run the risk of sounding like an Oracle commercial) is that these lofty goals cry out for partners that can bring economies of scale to bear on the problems, married with a deep understanding of the nuances unique to higher education. In a recent piece in Educause Review on student information systems, the author points out that "best of breed is back". Unfortunately, I am compelled to point out that best of breed is a large part of the reason we have made as little progress as we have as an industry in advancing some of the causes outlined above. Don't confuse "integrated" and "full stack" with vendor lock-in. The best-of-breed market forces that Ron points to ensure that solutions have to be "integratable" or they don't survive in the marketplace. However, leveraging the efficiencies afforded by adopting solutions that are pre-integrated (and possibly metered out as a service) allows us to shed unnecessary costs - as difficult as these decisions are to make and to drive throughout the organization.

    Cole

    Read the article

  • NetBeans, JSF, and MySQL Primary Keys using AUTO_INCREMENT

    - by MarkH
    I recently had the opportunity to spin up a small web application using JSF and MySQL. Having developed JSF apps with Oracle Database back-ends before and possessing some small familiarity with MySQL (sans JSF), I thought this would be a cakewalk. Things did go pretty smoothly... but there was one little "gotcha" that took more time than the few seconds it really warranted.

    The Problem

    Every DBMS has its own way of automatically generating primary keys, and each has its pros and cons. For the Oracle Database, you use a sequence and point your Java classes to it using annotations that look something like this:

        @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="POC_ID_SEQ")
        @SequenceGenerator(name="POC_ID_SEQ", sequenceName="POC_ID_SEQ", allocationSize=1)

    Between creating the actual sequence in the database and making sure you have your annotations right (watch those typos!), it seems a bit cumbersome. But it typically "just works", without fuss.

    Enter MySQL. Designating an integer-based field as PRIMARY KEY and using the keyword AUTO_INCREMENT makes the same task seem much simpler. And it is, mostly. But while NetBeans cranks out a superb "first cut" for a basic JSF CRUD app, there are a couple of small things you'll need to bring to the mix in order to be able to actually (C)reate records. The (RUD) performs fine out of the gate.

    The Solution

    Omitting all design considerations and activity (!), here is the basic sequence of events I followed to create, then resolve, the JSF/MySQL "Primary Key Perfect Storm":

    1. Fire up NetBeans.
    2. Create the JSF project.
    3. Create Entity Classes from Database.
    4. Create JSF Pages from Entity Classes.
    5. Test run.
    6. Try to create a record and hit an error.

    It's a simple fix, but one that was fun to find in its completeness. :-) Even though you've told it what to do for a primary key, a MySQL table requires a gentle nudge to actually generate that new key value. Two things are needed to make the magic happen. First, you need to ensure the following annotation is in place in your Java entity classes:

        @GeneratedValue(strategy = GenerationType.IDENTITY)

    All well and good, but the real key is this: in your controller class(es), you'll have a create() function that looks something like the code below, minus the comment line and the setId() call (which are what you need to add):

        public String create() {
            try {
                // Assign 0 to ID for MySQL to properly auto_increment the primary key.
                current.setId(0);
                getFacade().create(current);
                JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("CategoryCreated"));
                return prepareCreate();
            } catch (Exception e) {
                JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
                return null;
            }
        }

    Setting the current object's primary key attribute to zero (0) prior to saving it tells MySQL to get the next available value and assign it to that record's key field. Short and simple… but not inherently obvious if you've never used that particular combination of NetBeans/JSF/MySQL before. Hope this helps!

    All the best,
    Mark
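    For context, a minimal sketch of an entity class wired up this way might look like the following. The class and field names (Category, id, name) are illustrative assumptions, not code from the original application:

        import java.io.Serializable;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;

        @Entity
        public class Category implements Serializable {

            // IDENTITY delegates key generation to MySQL's AUTO_INCREMENT column.
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Integer id;

            private String name;

            public Integer getId() { return id; }

            // Called with 0 before persisting so MySQL assigns the next key value.
            public void setId(Integer id) { this.id = id; }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }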

    Read the article

  • What is the rationale behind snazzy Window Managers/Composers?

    - by Emanuele
    This is more of a generic question, based on trying out window managers like Awesome, Mate and others. To me it looks like other window managers such as Gnome3 and/or Unity are heavy and pointless. I do understand that having all the composited UIs is more pleasant to the eye, but apart from that, what are the other major benefits?

    To give an example, when I run the game Heroes of Newerth (using nVidia drivers) under:

    - Unity: the FPS drops sharply
    - Gnome3: FPS is OK, but X and other processes use 15~20% of CPU and quite some additional memory
    - Awesome: FPS is OK, and other processes use very little memory and CPU

    Below are some numbers regarding what I'm saying (please note my system is 64-bit: AMD Phenom II X4, 8 GB RAM, and an nVidia 470 GTX, SSD disk). All data is sorted by memory usage:

        watch -d -n 10 "ps -e -o pcpu,pmem,pid,user,cmd --sort=-pmem | head -20"

    Again, note that the CPU time of ./hon-x86_64 might differ because I can't take the snapshots of the system at exactly the same time.

    Awesome:

        %CPU %MEM  PID  USER CMD
        91.8 21.6  3579 ema  ./hon-x86_64
         2.4  0.9  3223 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.6  0.4  2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinp
         0.3  0.2  3602 ema  gnome-terminal
         0.0  0.2  2698 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service

    Gnome3:

        %CPU %MEM  PID  USER CMD
        82.7 21.0  5528 ema  ./hon-x86_64
        17.7  1.7  5315 ema  /usr/bin/gnome-shell
         5.8  1.2  5062 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.0  0.4  5657 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.7  0.3  5331 ema  nautilus -n
         1.6  0.3  2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -
         0.9  0.2  5451 ema  gnome-terminal
         0.1  0.2  5400 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service

    Unity 3D:

        %CPU %MEM  PID  USER CMD
        87.2 21.1  6554 ema  ./hon-x86_64
        10.7  2.6  6105 ema  compiz
        17.8  1.1  5842 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.3  0.9  6672 root /usr/bin/python /usr/sbin/aptd
         0.4  0.4  6606 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.5  0.3  6115 ema  nautilus -n
         1.5  0.3  2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinput -sasl errl
         0.3  0.2  6180 ema  /usr/lib/unity/unity-panel-service

    So my point is: what's the rationale behind going towards such heavy WMs/composers?

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have, as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
         CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
            [Period] ASC,
            [PlacementID] ASC,
            [CreativeID] ASC,
            [PublisherID] ASC,
            [RequestedZoneID] ASC,
            [AboveFold] ASC,
            [CountryCode] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]

    And now some assumptions and additional information:

    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard (e.g. US), and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables.

    Here's the current information on the database table's size: (table size screenshot in the original post)

    Design Goals

    This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor

    There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
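    Not the author's follow-up answer, but to make the challenge concrete, here is one plausible direction as a hedged sketch. It assumes the existing integer country table can be referenced, that the unused IDENTITY column can be dropped (nothing reads from this table directly), and that narrower types are acceptable given the stated value ranges:

        -- Hypothetical refactoring sketch (not the author's official answer).
        CREATE TABLE [dbo].[lq_ActivityLog](
            [PlacementID]     [int] NOT NULL,       -- must support >= 50,000
            [CreativeID]      [int] NOT NULL,
            [PublisherID]     [int] NOT NULL,
            [CountryID]       [smallint] NOT NULL,  -- FK to the existing country table (< 300 rows), replacing nvarchar(10)
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold]       [smallint] NOT NULL,  -- only -1, 0, or 1; smallint keeps the sign and saves 2 bytes vs. int
            [Period]          [date] NOT NULL,      -- date only: 3 bytes vs. 8 for datetime (requires SQL Server 2008+)
            [Clicks]          [smallint] NOT NULL,  -- 0..5000 fits comfortably
            [Impressions]     [int] NOT NULL,       -- 0..5,000,000 needs int
         CONSTRAINT [PK_lq_ActivityLog] PRIMARY KEY CLUSTERED (
            [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
            [RequestedZoneID] ASC, [AboveFold] ASC, [CountryID] ASC)
        ) ON [PRIMARY]

    Dropping the unused bigint IDENTITY column alone saves 8 bytes per row, which at 200 million rows is roughly 1.5 GB before index overhead; narrowing the key columns also shrinks every index that includes them.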

    Read the article

  • Our Favorite Highlights from OpenWorld 2012

    - by Kathy.Miedema
    By Kathy Miedema and Misha Vaughan, Oracle Applications User Experience

    The Oracle Applications User Experience (UX) team's activities around OpenWorld expand every year, but this year we certainly raised the bar. Members of our team helped deliver three separate all-day training events in the week prior to OpenWorld. Our Fusion User Experience Advocates (FXA) and Applications UX Sales Ambassadors (SAMBA) have all-new material around the Oracle user experience to deliver at conferences in the coming year - Fusion Applications design patterns, mobile design patterns, and the new face of Fusion. We also delivered a hands-on workshop sharing user experience tools for our customers that is designed to answer this question: "If I have no UX staff, what do I do?"

    We also spent the weeks just before OpenWorld preparing to talk about the new face of Fusion Applications, a greatly simplified entry experience into Fusion Applications for self-service users, CRM users, and IT managers who want to change the look and feel quickly. Special thanks to Oracle ACE Director Floyd Teter for the first mention of our project.

    [Photo: Jeremy Ashley, VP, Oracle Applications User Experience]

    Customers may have seen one of the many OpenWorld session demos of the new face of Fusion, which will be available with Fusion Applications soon. It was shown in sessions by Oracle's Chris Leone, Anthony Lye, and our own Vice President, Jeremy Ashley, among others. Leone reinforced the importance of user experience as one of three main design principles for Fusion Applications, emphasizing that Fusion was designed from the beginning to be intelligent, social, and mobile. User experience highlights of the new face of Fusion, he said, included the need for "zero training," and he called the experience "easy to use." He added that deploying it for HCM self-service would be effortless.

    [Photo: Customers take part in a usability lab tour during OpenWorld 2012]

    Customers also may have seen the new face of Fusion on the demogrounds or during one of our team's chartered lab tours at the end of the week. We tested other new designs at our on-site lab in the Intercontinental Hotel, next to Moscone West.

    [Photo: Applications User Experience team members show eye-tracking and mobile demos at OOW]

    We were also excited to kick off new branches of the Oracle Usability Advisory Board, which now has groups in Latin America and the Middle East, in addition to North America and EMEA. And we were pleasantly surprised by the interest in one of our latest research projects, Oracle Voice, which is designed to enable faster data input for on-the-go users. We offer a big thank-you to the Nuance demopod for sharing the demo with OpenWorld attendees.

    For more information on our program and products like the new face of Fusion, please comment below.

    Read the article

  • Oracle Partner Architects Training

    - by mseika
    Dear Oracle Partner,

    There is a lot more to Oracle technology than meets the eye. Sure, you already belong to a small circle of our most experienced and committed partners. But are you making the best use possible of our technology solutions? Put it to the test. Join the "Oracle Partner Architects Training". It is aimed at providing your experts, architects and consultants with in-depth architectural knowledge about Oracle technology. Here is your chance to learn from the best. Seasoned speakers, exclusive content and no product marketing. Oracle technology beyond the obvious.

    Choose from any of the 40 recorded training sessions. Topics include:

    • Security
    • Service integration
    • Database and options
    • Data integration
    • BI and applications
    • Applications and infrastructure
    • Hardware and software combinations

    The market and Oracle value specialized partners

    More information about specialization can be found on opn.oracle.com. Click through to OPN Program/Specialize.

    "What's in it for us?"

    Quite simply: the opportunity to gain the differentiation and competitive edge you need to stand out in the marketplace.

    • Differentiate your company through expertise in leading Oracle IT solutions;
    • Get your experts, architects and consultants up to speed on specialized services and solutions;
    • Make our customers' shortlists. They are looking for value-added solutions for their business.

    Recordings

    All sessions are recorded. After registering for a session in oraevents, you will receive the info to access the webex recording. Your timing, your tempo.

    Registration and more information

    Visit architects.oraevents.eu to sign up for the recorded sessions.

    NOTE: Looking to get your consultants Oracle certified? One more reason to join the Oracle Partner Architects Training. It is the fast track to getting their expertise validated with an Oracle certificate.

    Training schedule

    Choose from any of the 40 recorded training sessions:

    SECURITY - THE PRACTICAL APPROACH
    • Identity governance
    • Access management
    • Data privacy and protection
    • End-to-end security, layers of exposures
    • Identity & access management: why and where to start?
    • Data security: how?

    SERVICE INTEGRATION - A NEW ROAD TO ENTERPRISE-WIDE SERVICE INTEGRATION
    • Oracle RUEI: maximize business value by insight into real end-user experiences
    • Governance challenges in the services landscape
    • Creating an agile enterprise (by Jeff Davies)
    • Oracle's approach to SOA (by Jeff Davies) - guiding and accelerating SOA success
    • Technical case study - the SOA challenge
    • Oracle's unified business process management suite 11g (incl. demo)

    DATABASE - DATABASE AND OPTIONS, GOING WIDE
    • Understanding service level agreements for databases
    • Database lifecycle management
    • Data centric information lifecycle management

    DATA INTEGRATION - DIS FOR ARCHITECTS
    • Data integration solutions: an overview
    • ODI and GoldenGate
    • Data quality

    Read the article

  • The Birth of SSAS Compare

    - by Red Gate Software BI Tools Team
    Noemi Moreno, Red Gate Business Intelligence Specialist

    Software vendors - even Microsoft - tend to forget about the needs of business intelligence developers. We are a rare and rather invisible species. For example, BIDS remained in VS 2008 until SQL Server 2012. It took until this release before we got something as simple as an "undo" function.

    Before I joined Red Gate as a BI specialist, I worked on SQL Development. I'll never forget the time I discovered Red Gate's SQL Compare tool and how it reduced the task of preparing a database release from a couple of days to ten minutes. When I moved to SSAS, MDX and cubes, I became frustrated with the deployment process because I couldn't find a tool that made cube releases as easy as they are with SQL Compare. This became my quest.

    I pitched the idea to a few people in Red Gate's regular Down Tools Week, when everyone puts down their day-to-day tasks and works on their own projects. My task was to reason with a roomful of cynical developers, hardened to the blandishments of project managers, for help to develop a tool that would compare two different SSAS databases and create the script to process only the objects that needed processing, thereby reducing release time to only a few minutes.

    I walked to the podium and gave them the full story of the distressed BI specialists, doomed to spend tedious hours preparing deployment scripts. A few developers recovered from their torpor to cast a languid eye at my presentation. It wasn't enough. In a sudden impulse, I blurted out a promise to perform a flamenco dance for the team if the tool was able to successfully compare two SSAS databases and generate a script by the end of the week.

    I was lucky enough that some of them believed me and jumped in: David Pond (Dev), Matt Burton (Dev), Tilman Bregler (Dev), Shobana Sekar (Test), Ruchija Raj (Test), Nick Sutherland (Product Manager) and Irma Tanovic (BI). They didn't know that Irma and I would be away at a conference in Amsterdam and would leave them without our support. But to my surprise, they had a working tool by the time we came back - basic, and with a few bugs, but a working tool nonetheless!

    Seeing it compare a very basic SSAS database, detect the changes and generate the scripts was amazing! Something that normally takes half a day was done in under a minute. Since then, a few months have passed and a BI Tools team has been created at Red Gate to work full time on BI tools for BI developers, starting with SSAS Compare. How cool is that?

    So download the free beta and give us your feedback. And the flamenco? I still need to deliver that. Tilman reminds me every day! I need to get the full flamenco costume.

    Read the article

  • C++ and SDL resource management for 2D game

    - by KuruptedMagi
    My first question is about state managers. I do not use the singleton pattern (I've read many posts with various reasons not to use it); I have a gameStateManager which calls cCurrentGameState->render(), etc., through a pointer. I want to make a transitioning game. This engine should ideally cover both a platformer and a bird's-eye RPG (with some recoding - I just mean the base engine), both of which will load different levels and events, such as world map, dungeon, shops, etc. So I then thought that, rather than having to store all this data within all the states, I would break the engine into gameStates and playStates... when gameState reaches gameStatePlay(), gameStatePlay simply runs the usual handleInput, logic, and render for the playStates, just as the low-level gameStateManager does. This lets me store all the player data within the base playState class without storing useless data in the gameStates. Now I have added a separate mapEditor, which uses editorStates from gameStateEditor. Is this too much usage of the gameState concept? It seems to work pretty well for me, so I was wondering if I am too far off a common implementation of this.

    My second question is about image resources. I have my sprite class with nothing but static members - mainly loadImage, applySurface, and my screen pointer. I also have a map pairing imageName enums with actual SDL_Surface pointers, and one pairing clipNumber enums with a wrapper class for a vector of clips, so that each reference in the map can have different numbers of clips with different sizes. I thought it would be better to store all these images and the screen within one static body, since 20 different goblins all use the same sprite sheet and all need to print to the same screen - and of course, this way I do not need to pass my screen reference to every little entity. The imageMap seems to work very well; I can even search through the map when creating an entity type to see if a particular image already exists, creating it if it doesn't, and destroying the image when the last entity that needs it is destroyed. The vectored clip map, however, seems to take too long to initialize, so if I run past the state that initializes them too fast, the game crashes. Plus, the clip map call is half of this line =P

        SPRITE::applySurface( cEditorMap.cTiles[x][y].iX, cEditorMap.cTiles[x][y].iY, SPRITE::mImages[ IMAGE_TILEMAP ], SPRITE::screen, SPRITE::mImageClips[IMAGE_TILEMAP]->clips.at( cEditorMap.cTiles[x][y].iTileType ) );

    Again, do I have the right idea? I like the imageMap, but am I better off with each entity storing its own clips?

    My last question is about collision detection. I only grasp the basics - I will look at per-pixel and circular collision soon - but how can I determine which side a collision comes from with just basic square collision detection? I tried breaking each entity into 4 collision zones, but that just gave me problems with walking through walls and the like. Also, is per-pixel color collision a good way to decide what collision just occurred, or is checking multiple colors for multiple entities too taxing each cycle?
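    On the side-of-collision question, one common approach with plain axis-aligned rectangle checks is to compare the overlap depth on each axis and resolve along the shallower one. A minimal sketch follows; plain structs stand in for SDL_Rect, and all names here are illustrative, not from the asker's engine:

        #include <cmath>

        struct Box { float x, y, w, h; };   // axis-aligned bounding box

        enum class Side { None, Left, Right, Top, Bottom };

        // Which side of 'a' did 'b' strike? Judged by minimum penetration depth.
        Side collisionSide(const Box& a, const Box& b) {
            float dx = (a.x + a.w / 2) - (b.x + b.w / 2);   // center-to-center X
            float dy = (a.y + a.h / 2) - (b.y + b.h / 2);   // center-to-center Y
            float overlapX = (a.w + b.w) / 2 - std::fabs(dx);
            float overlapY = (a.h + b.h) / 2 - std::fabs(dy);
            if (overlapX <= 0 || overlapY <= 0) return Side::None;  // no collision
            if (overlapX < overlapY)            // shallower axis is the contact axis
                return dx > 0 ? Side::Left : Side::Right;
            return dy > 0 ? Side::Top : Side::Bottom;
        }

    Pushing the mover back out along that minimum-penetration axis before the next frame is also what typically prevents the walking-through-walls problem that the four-zone approach runs into.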

    Read the article

  • S11 launched

    - by unixman
    Now that Oracle Solaris 11 is out, it's time to do two things: 1) see what's in it, what's new and why it's important, and then assess why it might make sense to begin evaluating it for your needs; and 2) acknowledge, give thanks to and congratulate all the R&D personnel, architects, engineers, designers and testers who've put so much effort and energy into helping make Solaris 11 (and SunOS 5.11) what it has become - starting way back circa 2004 and, more importantly, culminating in the recent years and months - staying focused on the execution, unwavering in the face of various challenges.

    For #1 above, here are a few good things to get going with:

    - Watch the product launch replay
    - Visit the Solaris 11 Spotlight section on oracle.com
    - Get comfortable through introductory videos, detailed "how-to" guides (e.g., how to create and publish IPS packages) and white papers on the new default root file system, ZFS, and reap the benefits brought on by the fundamental shift in easing the administration experience
    - Look at the next level of software lifecycle management that is enabled by technologies such as the Automated Installer and the Image Packaging System, which dramatically address patch-management-related challenges
    - Understand how we continue to innovate in areas of service intelligence, reliability and availability
    - Start to evaluate enhancements in virtualization capabilities - whether influenced by the need to consolidate or motivated by the need for increased service mobility across physical systems, leveraging hardware-level abstractions
    - Gain more control over your network-centric services through enhancements in network resource management, observability and I/O performance
    - Look beyond your existing infrastructure with confidence that you can re-host and transition to newer systems with the use of Solaris 10 zones running on top of Solaris 11
    - Relish in the fact that you can do all this, get your data secure and encrypted and more, on both SPARC and x86-based systems
    - Stay informed by keeping an eye on relevant blogs, which we've begun turning up recently
    - Go through a hands-on lab
    - Sign up to take a class, or just opt to watch various videos to begin to raise your comfort level with these technologies

    For #2 above - there are many ways to do that. One way is to just say "thanks" with an email, a post, or a simple card, similar to this one seen at a Barnes and Noble store recently. The front of the card is followed by what's inside... and as the saying goes, now more than ever, "it's what's inside that counts."

    [Photo: the front of the card]

    And here's the inside of the card:

    [Photo: the inside of the card]

    So, what are you waiting for? Go download and try it out, and please let us know what you think of it!

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .NET Framework.

    My goal is to develop a 2D, RTS-style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood, the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in Second Life. If their attempt were a game, the gameplay would be notably horrible, to say the least!

    The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning it to objects that relate 1-1 with their real-world counterparts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself.

    I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games; to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system.

    Given my background and the nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years and who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • Are they asking too much of me?

    - by Tesserex
    Or am I just whining?

    Background: I work for a "startup," which I put in air quotes because the company has been around for 4 years. We have about 40 employees in three offices - 9 here, plus some part-time. We have a good amount of investment and bring in about 75% of what we spend (so not profitable just yet). The standard work week is supposed to be about 60 hours, which they justify because we have to be online when our international (Taiwan and Vietnam) offices are awake.

    When I started the job 6 months ago, I spent about a month prototyping an iPhone app and did really well on my own. They also found out about my Facebook applications and how many users they got. Putting 2 and 2 together (and winding up at -7), they realized 1) I'm independent and innovative (because I was able to use Stack Overflow to answer my iOS questions instead of bugging my superiors) and 2) I must have an eye for marketing (since my FB apps grew totally organically, without me doing any advertising), and assigned me to a project optimizing AdWords campaigns.

    Today I got reviewed, and then chewed out, by our CEO for not totally rocking this project. Now, I thought I was doing OK, but the CEO said the project is stagnant and they're expecting more from me. But since it's a startup, they play loose with job roles, and I've had plenty of other things to do in the past three months. Every time I ask what's most important, I get conflicting responses depending on who I ask, and the end result is that almost everything has equal priority - high.

    I could go on about how I don't think AdWords is worthwhile for us since our profit margin is so slim, and how we should be trying to improve our website first, but that's not the point. I have also explained to the office director (who originally assigned me the project, not the CEO) that I don't actually know anything about marketing - I'm just a decent programmer - but they think my general smarts will prove capable of tackling this challenge. The CEO also clarified that he wants a more technical and algorithmic approach to the problem.

    So is there something I can do to address this? Combined with my existing and confusing workload, should I be raising an issue? Or should I do the grown-up thing and give it my all, asking for help when I need it and hoping for the best? Sorry if this is very rant-ish.

    Read the article

  • I didn't mean to become a database developer, but now I am. Should I stop or try to get better?

    - by pretlow majette
    20 years ago I couldn't afford even a cheap POS program when I opened my first surf shop in the Virgin Islands. I bought a copy of Paradox (remember that?) in 1990 and spent months in a back room scratching out a POS application. Over many iterations, including a switch to Access (2000)/SQL Server (2003), I built a POS and back-office solution that runs four stores with multiple cash registers, a warehouse and an office.

    Until recently, all my stores were connected to the same LAN (in a small shopping center) and performance wasn't an issue. Now that we've opened a location in the States, that has changed. Connecting to my local server via the internet has slowed that location's application to a crawl. This is partly due to the slow and crappy DSL service we have in the Virgin Islands, and partly due to my less-than-professional code and SQL. With other far-away stores in the works, I need a better solution.

    I like my application. My staff knows it well, and I'm not inclined to take on the expense of a proper commercial solution. So where does that leave me? I should probably host my SQL online to sidestep the slow DSL here. I think I can handle cleaning up my SQL queries to speed things up a bit.

    What about Access? My version seems so old, but I don't like the newer versions with the 'ribbon'. There are so many options... Should I be learning Visual Studio with an eye on moving completely to the web? Will my VBA skills help me at all there? I don't have the luxury of a year at the keyboard to figure it out anymore. What about DotNetNuke, SharePoint, or LightSwitch? They all seem like possibilities, but even understanding their capabilities is daunting.

    I'm pretty deep into it, but maybe I should bail and hire a consultant or programmer. That sounds expensive though, and there's no guarantee there either... Any advice would be greatly appreciated. Or, if anybody is interested in buying a small chain of surf shops...

    Read the article

  • New Features Of WordPress 3.3 You Must Know

    - by Gopinath
    After months of beta testing, WordPress 3.3 is going to be released at the end of this month. There are several new features packed into the new version, and a few of them are going to excite WordPress admins. In this post we are going to discuss the exciting new features.

    1. Drag and Drop Media Uploads

    One of the biggest improvements in this version of WordPress is its all-new media uploader. Now you can upload multiple files by just dragging & dropping, instantly resize images, and filter files by their type. The media uploader sports a brand new look. WordPress adopted the Plupload plugin to power its media uploader component; it's written by the same team that created the popular TinyMCE editor.

    2. Improved Admin Bar (Toolbar)

    The admin bar, or newly named toolbar, has got a handful of makeovers. Less-used items like the search box and other elements are removed to make sure that the bar is not clumsy. The user menu and its related options are moved to the right, like in Google's user bar. There are also a few changes to the colour of the bar to make it more eye-friendly.

    3. Fly-out Admin Menus

    All the left sidebar menus of the WordPress admin now sport a fly-out menu style to save a click. In previous versions, if you wanted to access a submenu on the left sidebar, you needed to first click on the category and then choose the menu item from the expanded list. Now, on just a mouse-over, you will see a fly-out of menu items.

    4. Adaptive Admin - Layout Auto-Adjusts To Fit Various Devices

    If you own an iPad or any other so-called tablet, then you are going to love this feature. The admin site of WordPress has become a lot more friendly with tablets and smartphones: WordPress now auto-adjusts the layout to fit the device through which you are accessing the admin site. Accessing the admin dashboard on your tablet is going to be more fun.

    5. Other Features

    Now that we have covered the most useful 4 features, here is a small list of other features that may interest you:

    - Nice tooltips are displayed wherever possible to help newbies understand the usage of the admin site
    - Responsive layouts
    - jQuery 1.7 and jQuery UI 1.8.16 are the workhorses of WordPress
    - Performance improvements

    Read the article

  • How do you conquer the challenge of designing for large screen real-estate?

    - by Berin Loritsch
    This question is a bit subjective, but I'm hoping to get some new perspective. I'm so used to designing for a certain screen size (typically 1024x768) that I find that size to not be a problem. Expanding the size to 1280x1024 doesn't buy you enough screen real estate to make an appreciable difference, but it gives me a little more breathing room. Basically, I just expand my "grid size" and the same basic design for the slightly smaller screen still works.

    However, in the last couple of projects my clients were all using 1080p (1920x1080) screens, and they wanted solutions that use as much of that real estate as possible. 1920 pixels across provides just under twice the width I am used to, and the wide format makes some of my old go-to design approaches not work as well. The problem I'm running into is that when presented with so much space, I'm confronted with some major problems.

    How many columns should I use? The wide format lends itself to a three-column split with a 2:1:1 ratio (i.e. the content column bigger than the other two). However, if I go with three columns, what do I do with that extra column?

    How do I make efficient use of the screen real estate? There's a temptation to put everything on the screen at once, but too much information actually makes the application harder to use. White space is important to help make sense of complex information, but too much makes related concepts look too separate. I'm usually working with web applications that have complex data, and visualization and presentation are key to making sense of the raw data.

    When your user also has a large screen (at least 24"), some information is out of eyesight and you need to move the pointer a long distance. How do you make sure everything that's needed stays within the visual hot points?

    Simple sites like blogs actually do better when the width is constrained, which results in a lot of wasted real estate. I kind of wonder if having the text box and the text preview side by side would be a big benefit for the admin side of that type of site (a 1:1 two-column split)?

    For your answers, I know almost everything in design is "it depends". What I'm looking for is:

    - General principles you use
    - How your approach to design has changed

    I'm finding that I have to retrain myself in how to work with this different format. Every bump in resolution I've worked through to date has been about 25%: 640 to 800 (25% increase), 800 to 1024 (28% increase), and 1024 to 1280 (25% increase). However, the jump from 1280 to 1920 is a good 50% increase in width - the equivalent of jumping from 640 straight to 1024 - and there was no commonly used middle size to help learn the lessons more gradually.

    Read the article

  • Rebuilding a Mac Mini (early 2009)

    - by Kelly Jones
    This weekend I decided to rebuild the family's Mac Mini. It's the early 2009 model, and I hadn't rebuilt it since we got it in March of 2009. Even worse, I had done the import-data step (or whatever Apple calls it), which brought over all of the data files and apps from our previous Mac. AND that install goes back to before 2005, as far as I can remember. SO, to say that "cruft" had built up in the operating system is probably a bit of an understatement.

    The rebuild went pretty smoothly, especially since I had a couple of spare hard drives. I hooked up a spare USB drive and formatted it for use with the Mac. I then used Carbon Copy Cloner to clone the internal hard drive onto the USB drive. (Carbon Copy Cloner is a great little app that I used several years ago, and I was happy to see it was not only still around, but updated as well.)

    Once I had my backup, I shut down the Mac and replaced the internal hard drive. I had purchased the hard drive last fall to use with my work laptop, but I got a new work laptop (with awesome dual SSDs), so I wasn't using it anymore. The replacement drive (Seagate Momentus 7200.4 ST9500420AS 500GB 7200 RPM 2.5" SATA 3.0Gb/s Internal Notebook Hard Drive) has more than double the original's capacity and is also faster. I'll have to keep an eye on the temperature, since that 7200 RPM drive will run hotter.

    Opening the Mac Mini is not for the easily intimidated! That cool little case is quite the pain to open. Luckily, OWC put a video together here. After replacing the drive, I installed a clean copy of OS X 10.5 using the DVDs that came with the Mac. After the OS, it was time to reinstall the apps. I downloaded some of the freeware, just to make sure I had the latest versions. For the rest, I just copied from the backup cloned drive to the new drive. (I love the way most Mac apps are written - with almost everything contained within a "package" that I can just copy from one drive to another. MUCH better than the Windows way of using shared DLLs and the registry to store critical pieces that the app needs in order to run!)

    The whole process took longer than I would have preferred, but it was long overdue. It definitely "feels" faster, especially at boot time and for application launches.

    Read the article

  • Let's keep informed with "Data Explorer"

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced: a Microsoft SQL Azure Lab codenamed Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless, we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data

    To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" addressed to the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment made on his post, I had a positive confirmation of my first impression: "…we were originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging."

    The typical operations of the ETL phase (processing and organization of data in different formats) can be carried out thanks to Data Explorer Mashup. This is an image of the tool:

    [Screenshot: Data Explorer Mashup, in the original post]

    The flexibility in the manipulation of information is given by the Data Explorer Formula Language. This is a formula-based, Excel-style specific language:

    [Screenshot: Data Explorer Formula Language, in the original post]

    Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx

    In light of this new project, there is no doubt about Microsoft's intention to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?
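    Since the published feeds are plain OData, any HTTP client can consume them. A minimal sketch follows, assuming a hypothetical published mashup URL; the endpoint shape and the v2-style "d" JSON envelope are assumptions for illustration, not documented Data Explorer specifics:

        import json
        import urllib.request

        # Hypothetical Data Explorer mashup endpoint; the real URL would come
        # from the "publish" step of your own mashup.
        url = "https://example.dataexplorer.net/MyMashup/Results?$top=10&$format=json"

        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)

        # OData v2-style JSON wraps the returned rows in a "d" envelope.
        for row in payload["d"]["results"]:
            print(row)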

    Read the article

  • SQL Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have, as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC,
                [PlacementID] ASC,
                [CreativeID] ASC,
                [PublisherID] ASC,
                [RequestedZoneID] ASC,
                [AboveFold] ASC,
                [CountryCode] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:
    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5,000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5,000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard code (e.g. US), and there is already a country table with an integer ID. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5,000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. (The original post includes a screenshot of the table's current size statistics here.)

    Design Goals
    This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasional timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor
    There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
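    As a starting point for commenters, here is one possible refactoring. This is a hedged sketch of my own, not the author's follow-up answer, and it assumes SQL Server 2008 or later (for the date type) plus a willingness to join to the existing country table:

        -- A sketch only: type choices follow the stated value ranges above.
        -- Dropping the unused IDENTITY column is a judgment call, justified
        -- here because only summary batch jobs ever read from the table.
        CREATE TABLE [dbo].[lq_ActivityLog_v2](
            [PlacementID]     [int]      NOT NULL,  -- must support 50,000+, so int stays
            [CreativeID]      [int]      NOT NULL,
            [PublisherID]     [int]      NOT NULL,
            [CountryID]       [smallint] NOT NULL,  -- FK to the country table; replaces
                                                    -- nvarchar(10) (< 300 rows, 2 bytes)
            [RequestedZoneID] [int]      NOT NULL,  -- also needs to reach 50,000+
            [AboveFold]       [smallint] NOT NULL,  -- -1/0/1; tinyint can't hold -1
            [Period]          [date]     NOT NULL,  -- 3 bytes vs. 8 for datetime
            [Clicks]          [smallint] NOT NULL,  -- max 5,000 fits in 2 bytes
            [Impressions]     [int]      NOT NULL,  -- max 5,000,000 still needs int
            CONSTRAINT [PK_lq_ActivityLog_v2] PRIMARY KEY CLUSTERED (
                [Period] ASC, [PlacementID] ASC, [CreativeID] ASC,
                [PublisherID] ASC, [RequestedZoneID] ASC,
                [AboveFold] ASC, [CountryID] ASC)
        ) ON [PRIMARY]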

    Read the article

  • Down to the Wire - Yet More Solaris Things to See at OpenWorld (and JavaOne!)

    - by Larry Wake
    San Francisco is bracing for the annual invasion. The airport's jammed, the tweets are flying, and the numbers are crazy: more than 50,000 attendees and 2,500+ sessions taking over Moscone Convention Center, two streets, Union Square, and seemingly every hotel in town (98,000 hotel room nights). So yeah, it's busy. And it's not just OpenWorld: we've also got JavaOne, MySQL Connect, and four other sub-events going on as well. Speaking of JavaOne, you can find Solaris-related activity there, too; I've highlighted one hands-on lab below. Here's a last pre-event roundup of activities for consideration; enjoy the show(s)! (Remember, Schedule Builder is your friend; use it with the session numbers below to register.)

    Monday, October 1st:
    3:15 PM - General Session: Accelerate Your Business with the Oracle Hardware Advantage (GEN9691, Moscone North Hall D)
    John Fowler, head of Oracle's Systems organization, will talk about Oracle hardware technology and how it's co-engineered with other key technologies, including Oracle Solaris.

    Tuesday, October 2nd:
    10:15 AM - Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC (CON4431, Moscone South 270)
    Get the bird's-eye lowdown (whatever that means) on how U.S. Cellular built its Infrastructure as a Service (IaaS) cloud delivery platform with Oracle's SPARC T4 servers, Oracle Solaris 11, Oracle Solaris Cluster 4, and Oracle VM Server for SPARC. The session covers the high-level design, the business case made, implementation details, and lessons learned.

    11:45 AM - Oracle Solaris 11 Panel: Insights and Directions from Oracle Solaris Core Engineering (CON8790, Moscone South 252)
    This has been one of the livelier Solaris-related sessions in years past (and I'm not saying that just because I get to moderate it this year). A panel of core engineers responsible for a wide range of key Solaris technologies will talk about some of the interesting work they've been doing, but mostly we keep time open for the panel to take questions from attendees, because that's the fun part.

    Wednesday, October 3rd:
    10:00 AM - Tracing Your Java Application Tuning on Oracle Solaris with DTrace (HOL10214, Hilton San Francisco, Franciscan A/B/C/D)
    This JavaOne hands-on lab will show how to use the DTrace framework to dynamically trace your Java applications on Oracle Solaris and uncover new tuning opportunities.

    Thursday, October 4th:
    12:45 PM - Oracle Solaris 11: Optimized for Oracle Database, Oracle WebLogic Server, and Java (CON8800, Moscone South 252)
    Explore how Oracle Solaris 11 has been built to be the best platform for the cloud and enterprise applications, with built-in optimizations that improve performance and deliver unique functionality with Oracle Database, Oracle WebLogic Server, and Java.

    Read the article

  • Qt vs .NET - plz no n00bs who don't know wtf they're talking about [closed]

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of the people don't know what they're talking about. I'm trying to get a real comparison chart going before we embark on a major project. (And yes, I'm drunk, and yes, I use cocaine.)

    Event Handling
    In Qt's event handling system, you just emit signals when something interesting happens and then catch them in slots, for instance emitting valueChanged(int percent, bool ok) and catching it with void MyCatcherObj::valueChanged(int p, bool ok) {}, blocking them and disconnecting them when needed, even across threads. (See the sketch at the end of this post.) Once you get the hang of it, it just seems a lot more natural and intuitive than the way .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate stuff is the bomb. I'm also not just talking about reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down, IMO. Basically, the footprints make more sense and you can visualize the project more easily without a clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a pain to do in C++.

    Database Ease of Use
    Also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.

    Threading/Concurrency
    What do you think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads, it's nice to get notified about the progress of things.

    Memory Usage
    Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector quick and on the ball compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.
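    Here is that sketch: a minimal Qt 4-style signal/slot pair matching the emit/catch example above. Class names, the slot body, and the main() wiring are illustrative only, not from any real project, and Q_OBJECT classes need to be run through moc as usual:

        // counter.h -- illustrative sketch of Qt signal/slot wiring
        #include <QObject>
        #include <QDebug>

        class Counter : public QObject {
            Q_OBJECT
        public:
            void setValue(int percent) {
                // fire the signal; any connected slots run, even across threads
                emit valueChanged(percent, percent <= 100);
            }
        signals:
            void valueChanged(int percent, bool ok);
        };

        class Catcher : public QObject {
            Q_OBJECT
        public slots:
            void valueChanged(int p, bool ok) {
                qDebug() << "got" << p << ok;   // react to the emitted signal
            }
        };

        // wiring, e.g. in main():
        //   Counter c; Catcher k;
        //   QObject::connect(&c, SIGNAL(valueChanged(int,bool)),
        //                    &k, SLOT(valueChanged(int,bool)));
        //   c.setValue(42);   // Catcher::valueChanged prints "got 42 true"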

    Read the article

  • Experiences with BIRD for BGP?

    - by Shtééf
    We're currently using Quagga on Debian Linux to run a full-table BGP router. The set-up has been dead simple up to now, but we've come to a point where I have to reconfigure the router quite a bit and want to tighten things up. I've never really understood Quagga, and have always found its documentation lacking. It appears to mimic Cisco, of which I have only a basic understanding. BIRD has caught my eye recently. The couple of articles and presentations I found promote it as lightweight and more responsive under stress than Quagga, and it actually seems to have very decent documentation. So I'd like to know:
    - Who's running BIRD right now, and in what kind of set-up?
    - How is it stability-wise? I've read about it running in production at a couple of sites.
    - Let's say I don't care at all for a Cisco feel to the configuration. How are configuration, maintenance, monitoring, etc. of BIRD in general?
    - Any other notable experiences you may have with it.
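    For readers who, like the asker, haven't seen BIRD's configuration style: a minimal BIRD 1.x BGP config looks roughly like the sketch below. The router ID, AS numbers, and neighbor address are placeholders, and the policies are deliberately trivial; this is a feel-for-the-syntax sketch, not a recommended production configuration.

        # bird.conf -- minimal illustrative sketch (BIRD 1.x syntax)
        router id 192.0.2.1;

        protocol device {            # track interface state
            scan time 10;
        }

        protocol kernel {            # push learned routes into the kernel table
            export all;
        }

        protocol bgp upstream {      # one eBGP session to a hypothetical upstream
            local as 64512;
            neighbor 198.51.100.1 as 64511;
            import all;              # accept the full table
            export none;             # announce nothing in this sketch
        }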

    Read the article

  • Don't let the mouse wake up displays from standby

    - by progo
    I like to put my displays into power-save/standby mode when I leave the computer for a while. It would work fine if it weren't for an oversensitive mouse: sometimes the driver registers movement that's not visible to the naked eye (of the cursor, that is), and that breaks the power save. The displays then wait another 10 minutes before going back to standby. My workaround is the following command bound to C-S-q:

        xlock -startCmd 'xset dpms 2 2 2' -endCmd 'xset dpms 600 1200 1300' -mode blank -echokeys -timeelapsed +usefirst

    Using xset, I set the DPMS timeouts to 2 seconds each before going to standby. It's not nice, though. Sometimes there are cool fortunes that I want to read before typing in the password. I could keep the cursor moving, but that's kludgy. (By the way, xlock's mousemotion option doesn't help; it just hides the cursor, and the displays fire up nevertheless.)

    So the question: is there a way to make the displays go to standby and stay there until a keyboard key is pressed? I'm running Gentoo and a recent Xorg, but I hope the answer doesn't have to be distro-specific. Basically, the answer could be as simple as: how do I enable/disable the mouse from the command line? I think that would do the job if DPMS can't do it by itself.
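    On that last point, one possible answer is xinput's "Device Enabled" property, which can switch a pointer off and back on from a script. A hedged sketch (the device name is a placeholder, this assumes an xinput new enough to support set-prop, and it hasn't been tested against the asker's setup):

        #!/bin/sh
        # Find your pointer's name or id first with:  xinput list
        MOUSE="USB Optical Mouse"                    # placeholder device name

        xinput set-prop "$MOUSE" "Device Enabled" 0  # ignore the mouse entirely
        xlock -mode blank                            # blocks until unlocked, so the
                                                     # displays can sleep undisturbed
        xinput set-prop "$MOUSE" "Device Enabled" 1  # restore the mouse after unlock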

    Read the article
