Search Results

Search found 2529 results on 102 pages for 'term'.


  • Why is Rhythmbox becoming the default (again)?

    - by Christoph
    So, it seems with 12.04, they're switching back to Rhythmbox, after switching away from Rhythmbox a year ago. I don't get why. They say that it's because of a blocking bug related to GTK3 support in Gtk# (if I understand that correctly), but that's just one bug, and in the same breath they say RB is not well maintained. It seems the Ubuntu guys were dissatisfied with Banshee in some way, but apparently the Banshee guys were never notified of any problems. Also, it can't be to save disc space by dropping Mono, because on the same day it was announced that the install disc will be enlarged by 50MB. Also, isn't it a bit shortsighted to push Banshee for default inclusion, and then drop it again a year later? How is that a sustainable use of dev resources, or consistent? Apparently there was quite some heavy effort by the Banshee devs - David Nielsen used the term "bending over backwards for Ubuntu", iirc. In summary: can anyone shed more light on this?
    Related question: Why is Banshee becoming the default?
    Sources:
    http://www.omgubuntu.co.uk/2011/11/banshee-tomboy-and-mono-dropped-from-ubuntu-12-04-cd/
    http://www.omgubuntu.co.uk/2011/11/rhythmbox-to-return-as-ubuntu-12-04-default-music-app/
    http://www.omgubuntu.co.uk/2011/11/ubuntu-12-04-disc-size-to-be-750mb/
    http://summit.ubuntu.com/uds-p/meeting/19442/desktop-p-default-apps/
    http://banshee-media-player.2283330.n4.nabble.com/banshee-being-dropped-from-ubuntu-because-of-GTK3-support-td3985298.html

    Read the article

  • Windows Azure and Server App Fabric – kinsmen or distant relatives?

    - by kaleidoscope
    If you are into Windows Azure then it would be rather demeaning to ask if you are aware of Windows Azure App Fabric. Just in case you are not - Windows Azure App Fabric provides a secure connectivity service by means of which developers can build distributed applications as well as services that work across network and organizational boundaries in the cloud. But some of you may have heard of another similar term floating around forums and blog posts - Windows Server App Fabric. The momentary déjà vu that you might have felt upon encountering it is not unheard of in Cloud Computing circles:
    http://social.msdn.microsoft.com/Forums/en/netservices/thread/5ad4bf92-6afb-4ede-b4a8-6c2bcf8f2f3f
    http://forums.virtualizationtimes.com/session-state-management-using-windows-server-app-fabric
    Many have fallen prey to this ambiguous nomenclature, but it's not without purpose. First announced at PDC 2009, Windows Server AppFabric is a set of application services focused on improving the speed, scale, and management of Web, Composite, and Enterprise applications. Initially codenamed "Dublin", the app fabric (oops... Windows Server App Fabric) provides add-ons like Monitoring, Tracking and Persistence for your hosted Workflows and Services, without the developer having to worry about these functionalities. Along with this, it also provides distributed in-memory caching features from Velocity caching. In short, it is a healthy equivalent of Windows Azure App Fabric minus the cloud part. So why bring this up while talking about Windows Azure? Well, apart from their similar last names, these powers are soon to be combined if Microsoft's roadmap is to be believed - "Together, Windows Server AppFabric and Windows Azure platform AppFabric provide a comprehensive set of services that help developers rapidly develop new applications spanning Windows Azure and Windows Server, and which also interoperate with other industry platforms such as Java, Ruby, and PHP." One of the most powerful features of Windows Server App Fabric is its distributed caching mechanism, which, if appropriately leveraged with the Windows Azure App Fabric, could very well mean a revolution in session management techniques for the Azure platform. Well Microsoft, we do have our fingers crossed...
    Read on: http://blogs.technet.com/windowsserver/archive/2010/03/01/windows-server-appfabric-beta-2-available.aspx

    Read the article

  • Blogging from Office RT

    - by Dennis Vroegop
    During the last Build conference all attendees were given a brand new, sparkling, exciting Surface RT device (I love that machine despite its name, but that's beside the point). On it came a version of Office 2013 RT, or better: the preview version. Now, I translated that term "Preview" to "Beta", which is OK, since I've been using a lot of beta products from Microsoft and they all were great. And then I wanted to post a blog entry from Word. I knew I could; I have been doing this for a long time (I prefer Live Writer, but that isn't available on Windows RT). So I wrote the entry and hit "Publish". Instead of my blog site I got a nice non-descriptive error telling me I couldn't post. So I fired up my other (Intel based) Win8 tablet, opened the Word RT Preview, let it load my blog post (you've got to love the automatic synchronization through SkyDrive) and tried from that machine. Same error. So, I installed Live Writer (remember, the other machine is Intel based) and posted from there. That worked like a charm. Apparently, there was something wrong with Word. I gave up and didn't think about it anymore. Yet... what you're reading now is written in Word 2013 RT on my Surface RT. So what did I do? Simple: I updated from the Preview version to the final version. That's all there was to it. So... if you're still on the Preview, I urge you to upgrade. You need to go to the "classic desktop update" window instead of going through the Windows Store app style update, since Office is a desktop system, but once you do that you'll have the full version as well. Happy blogging!

    Read the article

  • Oracle Endeca Information Discovery 3.1 is Now Available

    - by p.anda
    Oracle Endeca Information Discovery (OEID) 3.1 is a major release that incorporates significant new self-service discovery capabilities for business users. These include agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI.
    This release is available for download from:
    Oracle Delivery Cloud
    Oracle Technology Network
    Some of the what's-new highlights:
    Self-service data mashup... enables access to a wider variety of personal and trusted enterprise data sources. Blend multiple data sets in a single app.
    Agile discovery dashboards... allow users to easily create, configure, and securely share discovery dashboards with intelligent defaults, intuitive wizards and drag-and-drop configuration.
    Deeper unstructured analysis... enables users to enrich text using term extraction and whitelist tagging while the data is live.
    Enhanced integration with OBI... provides easier wizards for data selection and enables the OBI Server as a self-service data source.
    Enterprise-class data discovery... offers faster performance, a trusted data connection library, improved auditing and increased data connectivity for Hadoop, web content and Oracle Data Integrator.
    Find out more... visit the OEID Overview page to download the What's New and related Data Sheet PDF documents.
    Have questions or want to share details about Oracle Endeca Information Discovery? The MOS Communities is a great first stop, and you can stop by the MOS OEID Community.

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs work on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"
    This process of managing changes to the database - which all of us who have worked in application/database development have had to deal with in one form or another - is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live - I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it - I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task - how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state - more specifically 4TB of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4TB of data, then you're "looking for a new role" pretty quickly after the failed release.
    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:
    Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes!
    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting.
    There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are far less advanced in their database change management than they are in application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.
    But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here
    Level D1 – Baseline
    Work directly on live databases
    Sometimes work directly in production
    Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    Any tests that we might have are run manually
    Level D2 – Beginner
    Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    An attempt is made to keep production in sync with development environments
    There is some documentation and planning of manual deployments
    Some basic automated DB testing is in process
    Level D3 – Intermediate
    The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    Database environments are managed
    The production environment schema is reproducible from the source control system
    There are some automated tests
    Have looked at using migration scripts for difficult database refactoring cases
    Level D4 – Advanced
    Using continuous integration for database changes
    Build, testing and deployment of DB changes carried out through a proper database release process
    Fully automated tests
    The production system is monitored for fast feedback to developers
    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey!
    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Splitting big request in multiple small ajax requests

    - by Ionut
    I am unsure regarding the scalability of the following model. I have no experience at all with large systems and big numbers of requests, but I'm trying to build features with scalability in mind first. In my scenario there is a user page which contains data for:
    User's details (name, location, workplace...)
    User's activity (blog posts, comments...)
    User statistics (rating, number of friends...)
    In order to show all this on the same page, a single request would need at least 3 different database queries on the back-end. In some cases, I imagine that those queries will be running quite a while, so the user experience may suffer while waiting. This is why I decided to run only step 1 (user's details) as a normal request. After the response is received, two ajax requests are sent for steps 2 and 3. When those responses are received, I simply place the data in its designated wrappers. For me at least this makes more sense. However, there are 3 requests instead of one for every user page view. Will this affect the system in the long term? I'm assuming that this kind of approach requires more resources, but is this trade of resources for UX a good deal, or should I stick to one plain big request?

    Read the article

  • Using unordered_multimap as entity and component storage

    - by natebot13
    The Setup
    I've made a few games (more like animations) using the Object Oriented method, with base classes for objects that extend them, and objects that extend those, and found I couldn't wrap my head around expanding that system to larger game ideas. So I did some research and discovered the Entity-Component system of designing games. I really like the idea, and thoroughly understood the usefulness of it after reading Byte54's perfect answer here: Role of systems in entity systems architecture.
    With that said, I have decided to create my current game idea using the described Entity-Component system. Having basic knowledge of C++ and SFML, I would like to implement the backbone of this entity component system using an unordered_multimap, without classes for the entities themselves. Here's the idea: an unordered_multimap stores entity IDs as the lookup term, while the value is an inherited Component object. Example:
        ____________________________
        |ID    |Component   |
        ----------------------------
        |0     |Movable     |
        |0     |Accelable   |
        |0     |Renderable  |
        |1     |Movable     |
        |1     |Renderable  |
        |2     |Renderable  |
        ----------------------------
    So, according to this map of objects, the entity with ID 0 has three components: Movable, Accelable, and Renderable. These component objects store the entity-specific data, such as the location, the acceleration, and render flags. The entity is simply an ID, with the components attached to that ID describing its attributes.
    Problem
    I want to store the component objects within the map, allowing the map to have full ownership of the components. The problem I'm having is that I don't quite understand enough about pointers, shared pointers, and references to get that set up. How can I go about initializing these components, with their various member variables, within the unordered_multimap? Can the base component class take on the member variables of its child classes when defining the map as unordered_multimap<int, Component>?
    Requirements
    I need to be able to grab an entity, with all of its attached components, and access members from the components in order to do the necessary calculations and reassignments for position, velocity, etc.
    Need a clarification? Post a comment with your concerns and I will gladly edit or comment back! Thanks in advance! natebot13
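
    For what it's worth, here is a minimal sketch (class and member names are invented for illustration) of one way this could look. Note that an unordered_multimap<int, Component> storing components by value would slice each child object down to the base class, so the base class cannot "take on" its children's members; storing smart pointers avoids slicing while still giving the map sole ownership:

        #include <iostream>
        #include <memory>
        #include <string>
        #include <unordered_map>

        // The base class must be polymorphic (a virtual destructor suffices) so we
        // can safely delete and dynamic_cast through Component pointers.
        struct Component {
            virtual ~Component() = default;
        };

        struct Movable : Component {
            float x = 0, y = 0, vx = 0, vy = 0;
        };

        struct Renderable : Component {
            std::string sprite;
        };

        using EntityId = int;
        // unique_ptr (not Component by value) prevents object slicing and gives
        // the map full ownership of each component, as the question asks.
        using ComponentMap = std::unordered_multimap<EntityId, std::unique_ptr<Component>>;

        // Fetch the first component of type T attached to an entity, or nullptr.
        template <typename T>
        T* getComponent(ComponentMap& components, EntityId id) {
            auto range = components.equal_range(id);
            for (auto it = range.first; it != range.second; ++it)
                if (T* c = dynamic_cast<T*>(it->second.get()))
                    return c;
            return nullptr;
        }

        int main() {
            ComponentMap components;
            components.emplace(0, std::make_unique<Movable>());
            components.emplace(0, std::make_unique<Renderable>());
            components.emplace(1, std::make_unique<Renderable>());

            if (Movable* m = getComponent<Movable>(components, 0)) {
                m->vx = 2.5f;  // mutate entity 0's movement data in place
                std::cout << "entity 0 vx = " << m->vx << '\n';
            }
        }

    A dynamic_cast per lookup is not fast; real ECS implementations usually keep one container per component type instead, but this sketch keeps the single-map shape asked about.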

    Read the article

  • How To: Modernize IBM AIX/Power To Oracle Solaris/SPARC

    - by Cinzia Mascanzoni
    Learn how to leverage the Modernizing IBM AIX/Power to Oracle Solaris/SPARC Program to assist you in migrating IBM AIX/Power customers to the Oracle Solaris/SPARC platform. Customers will find Oracle Solaris/SPARC solutions an ideal long-term platform, providing significantly reduced capital and operational costs and greatly improved performance and productivity.

    Read the article

  • Cannot authenticate to SBS 2003

    - by Lerp
    I am trying to connect my machine to my work's entirely Windows network and I am having a few issues:
    Whenever I try to access the server, the authentication dialog just keeps popping back up.
    I cannot connect to the printers (it says connecting to device failed).
    I have tried setting up samba, winbind, kerberos and likewise-open, all to no avail. I have a feeling I am just setting them up wrong. My nautilus shows this when I go to Network > Windows Network > MASTERMAGNETS
    I can ping both MASTERMAGNETS.LOCAL and 192.168.0.2 after modifying my /etc/hosts:
        james@jamesmaddison:~$ cat /etc/hosts
        127.0.0.1       localhost jamesmaddison
        192.168.0.2     MASTERMAGNETS.LOCAL
        192.168.0.50    Sharp-Printer
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
    I believe that's the correct domain (not sure if that's the correct term), as when I do nslookup MASTERMAGNETS.LOCAL I get the following:
        james@jamesmaddison:~$ nslookup MASTERMAGNETS.LOCAL
        Server:         192.168.0.2
        Address:        192.168.0.2#53
        Name:   MASTERMAGNETS.LOCAL
        Address: 192.168.0.3
        Name:   MASTERMAGNETS.LOCAL
        Address: 192.168.0.2
    It all worked fine before I reinstalled Ubuntu, and now I just cannot get access to the server. All help is appreciated - I need to get this working or I fear I will be forced to develop in a Windows environment :(

    Read the article

  • Can I assume interface oriented programming as a good object oriented programming?

    - by david
    I have been programming for decades but I was not used to object oriented programming. In recent years, though, I had a great opportunity to learn OOP, its principles, and a lot of patterns that are great. Since I learned OOP, I have applied it to a couple of projects and found those projects successful. Unfortunately I didn't follow extreme programming, which suggests writing tests first, mainly because the time frames were tight. What I did for those projects was:
    Identify all necessary classes and create them with proper properties and methods
    Whenever there is a dependency between classes, write an interface between them (see the sketch below)
    See if there are any patterns for certain relationships between classes to apply
    By successful, I mean that development was quick, the classes can be reused better, and the result is flexible enough that another programmer does not have to change something else to fix another part. But I wonder if this is a good practice. Of course, I know I need to put writing unit tests first in my work process. But other than that, is there any problem with this approach - creating lots of interfaces - in the long term?
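
    To illustrate the "write an interface between dependent classes" step, here is a minimal sketch in C++ (all names are invented for the example): the consuming class depends only on an abstract interface, so implementations can be swapped or mocked without touching it.

        #include <iostream>
        #include <memory>
        #include <string>

        // The dependency is expressed as an interface (abstract class)...
        class Notifier {
        public:
            virtual ~Notifier() = default;
            virtual void send(const std::string& msg) = 0;
        };

        // ...so OrderService depends on the abstraction, not a concrete mailer.
        class OrderService {
            std::shared_ptr<Notifier> notifier_;
        public:
            explicit OrderService(std::shared_ptr<Notifier> n) : notifier_(std::move(n)) {}
            void placeOrder() { notifier_->send("order placed"); }
        };

        // Swapping implementations (or mocking in tests) needs no change above.
        class ConsoleNotifier : public Notifier {
        public:
            void send(const std::string& msg) override { std::cout << msg << '\n'; }
        };

        int main() {
            OrderService svc(std::make_shared<ConsoleNotifier>());
            svc.placeOrder();
        }

    This is essentially dependency inversion; whether every dependency deserves its own interface is exactly the long-term trade-off the question raises.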

    Read the article

  • MATLAB: Best fitness vs mean fitness, initial range

    - by Sa Ta
    Based on the example of Rastrigin's function: in the plot function, if I choose 'best fitness', 'mean fitness' will also be plotted on the same graph. I understand 'best fitness' well: it plots the best function value in each generation versus the iteration number, and it reaches zero after some time. I don't understand the 'mean fitness' in the plotted graph. What do those 'mean fitness' values mean? How does the 'mean fitness' graph help in understanding Rastrigin's function?
    What is the meaning of the terms initial population, initial score and initial range? I wish to have a better understanding of these terms. The default value for initial range is [0,1]. Does it mean that 0 is the lower bound (lb) and 1 is the upper bound (ub)? Do these values interfere with the lb and ub values I set in the constraints?
    I am trying to better understand lb and ub. If my lb is 0 and ub is 5, does it mean that my final point values will be within 0 and 5? If I know the lb and ub for my problem are 0 and 5, do I just set the initial range as [0,5] at all times? May I assume that this is the best option for the initial range, and that I need not try any other values?

    Read the article

  • ubuntu box just redisplaying login screen after update

    - by David M. Karr
    My Ubuntu 12.04 box has been working fine. A recent update may have messed something up. I normally run remote windows on it, and I noticed that my windows were failing to start up. I then tried logging into it directly from the GUI console, and I'm seeing that after I press Enter on the (valid) password, the page just redisplays. It's not a password error, as that would give me an inline error. I see some messages appear and disappear quickly between the login screen going away and then redisplaying, but they go away too quickly to read. I was able to run the non-GUI login, and I did an update and upgrade and then rebooted, but it's doing the same thing. I have a Samba connection from my Windows box, and that's still working. If it matters, here's my uname output (somewhat elided):
        Linux ... 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    What can I do to troubleshoot this? Note that when I select "Guest Session", it lets me log in and displays the window manager. This seems significant to me. Does this mean that something specific to my login is causing it to fail?
    Note: if it matters, here's the output from /var/log/dmesg. The line about gdm seems interesting:
        [    9.815883] Bluetooth: RFCOMM TTY layer initialized
        [    9.815887] Bluetooth: RFCOMM socket layer initialized
        [    9.815888] Bluetooth: RFCOMM ver 1.11
        [    9.879088] [PCSPP,TRISTATE]
        [    9.879092] parport0: irq 7 detected
        [    9.883935] type=1400 audit(1341871177.871:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=845 comm="apparmor_parser"
        [    9.884365] type=1400 audit(1341871177.871:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/ntpd" pid=851 comm="apparmor_parser"
        [    9.950397] e1000e 0000:00:19.0: irq 42 for MSI/MSI-X
        [    9.961160] init: gdm main process (907) killed by TERM signal
        [    9.966358] lp0: using parport0 (polling).

    Read the article

  • Game editor integration with the engine?

    - by Daniel
    What I am trying to figure out is the most effective way to integrate the editor (level, effects, model, etc...) with the engine(*). The first thing I thought of was to make the game engine extremely modular. Take the example of game states: you could have multiple game states that each have their own update() and draw() methods, among others, and each game state class would inherit from a base GameState class. This allows for a more modular approach, and a useful one at that. Now, would the most efficient approach be to implement the editor along with the modular engine, or to create two different designs for the game and the editor? I thought of taking the game state example and extending it to window states, and it could well be used for a lot more systems.
    Is there a better implementation of this design (game states) for use in the other systems of the engine?
    *: Now I know the term game engine is sorta irrelevant, and misused in many situations. What I am referring to as the "game engine" is, in short, the combination of the systems that the game must interact with. Also, this is more of a theory/design question than an implementation one. Even though both mix, I'd rather have a general idea of how the editor can be built in an efficient way while still using the same engine code as the game uses.
    Thanks, Daniel
    P.S. If you need more clarification or extra bits, just leave a comment.
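
    As one point of reference, here is a minimal C++ sketch (names invented for the example) of the shared-state idea described above: the editor is just another state driven by the same engine loop, so both builds reuse the same systems.

        #include <memory>
        #include <vector>

        // Each screen/tool implements the same interface; the engine loop neither
        // knows nor cares whether it is running the game or the editor.
        class GameState {
        public:
            virtual ~GameState() = default;
            virtual void update(float dt) = 0;
            virtual void draw() = 0;
        };

        class PlayState : public GameState {
        public:
            void update(float dt) override { /* simulate the world */ }
            void draw() override { /* render the scene */ }
        };

        // The editor is just another state on top of the same engine systems,
        // so it reuses the renderer, asset loaders, etc. with no duplicated code.
        class EditorState : public GameState {
        public:
            void update(float dt) override { /* handle gizmos, selection, undo */ }
            void draw() override { /* render the scene plus editor overlays */ }
        };

        class StateStack {
            std::vector<std::unique_ptr<GameState>> states_;
        public:
            void push(std::unique_ptr<GameState> s) { states_.push_back(std::move(s)); }
            void update(float dt) { if (!states_.empty()) states_.back()->update(dt); }
            void draw()           { if (!states_.empty()) states_.back()->draw(); }
        };

        int main() {
            StateStack stack;
            stack.push(std::make_unique<EditorState>());  // or PlayState for the game
            for (int frame = 0; frame < 3; ++frame) {     // stand-in for the real loop
                stack.update(1.0f / 60.0f);
                stack.draw();
            }
        }

    Under this layout, shipping a game-only build is just a matter of never pushing the EditorState (or compiling it out), while the engine code stays identical.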

    Read the article

  • Oracle Flashback Technology - Webcast 9th June 2010

    - by Alex Blyth
    Hi All
    Here are the details for the webcast on Oracle Flashback Technologies on Wednesday (9th June 2010), beginning at 1.30pm (Sydney, Australia time). The Oracle Database architecture leverages unique technological advances in the area of database recovery from human errors. Oracle Flashback Technology provides a set of features to view and rewind data back and forth in time. The Flashback features offer the capability to query historical data, perform change analysis, and perform self-service repair to recover from logical corruptions while the database is online. With Oracle Flashback Technology, you can indeed undo the past!
    Oracle9i introduced Flashback Query to provide a simple, powerful and completely non-disruptive mechanism for recovering from human errors. It allows users to view the state of data at a point in time in the past without requiring any structural changes to the database.
    Oracle Database 10g extended the Flashback Technology to provide fast and easy recovery at the database, table, row, and transaction level. Flashback Technology revolutionizes recovery by operating on just the changed data: the time it takes to recover from an error is now equal to the amount of time it took to make the mistake. Oracle 10g Flashback Technologies include Flashback Database, Flashback Table, Flashback Drop, Flashback Versions Query, and Flashback Transaction Query. Flashback technology can just as easily be utilized for non-repair purposes, such as historical auditing with Flashback Query and undoing test changes with Flashback Database.
    Oracle Database 11g introduces an innovative method to manage and query long-term historical data with Flashback Data Archive. This release also provides an easy, one-step transaction backout operation with the new Flashback Transaction capability.
    Webcast is at http://strtc.oracle.com (IE6, 7 & 8 supported only)
    Conference ID for the webcast is 6690835
    Conference Key: flashback
    Enrollment is required. Please click here to enroll.
    Please use your real name in the name field (it just makes it easier for us to help you out if we can't answer your questions on the call).
    Audio details:
    NZ Toll Free - 0800 888 157 or
    AU Toll Free - 1800 420 354 (or +61 2 8064 0613)
    Meeting ID: 7914841
    Meeting Passcode: 09062010
    Talk to you all Wednesday 9th June
    Alex

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers
    Oracle Exadata Database Machine vs. IBM Power Systems
    How to Weigh a Purchase Decision
    October 2012
    Download the full report here
    In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording a more specific data point for comparison.
    This research found that Oracle Exadata:
    Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    Delivers dramatically higher performance, typically up to 12x improvement, as described by customers, over their prior solution.
    Requires 40% fewer systems-administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    Supplies a highly available, highly scalable and robust solution with reserve capacity that makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from "living on the edge". And it's pre-engineered for long-term growth.
    Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • Personalisation of the Ubuntu interface

    - by Ben
    It's quite hard to phrase this question, as the answer is very subjective and I don't know the right terminology to ask for what I want, but I will try my best. I love Linux and would love to use it full-time as my main OS, but the one thing I have a problem with is the look of it. In my opinion it looks like it was designed for a child, and I like my computer to look stylish rather than dated (this is opinion, obviously). I like the look of OS X, but there are certain things that I don't like, so no, I am not asking the age-old question of "how do I make Ubuntu look like OS X" - most of the attempts I have seen of this have been pretty poor when put up against the real thing, so I just want to take certain things from it.
    Things I'd like to take from OS X:
    Spotlight (I don't like the Unity dashboard-esque thingy)
    Exposé
    Spaces
    Dock (at the bottom)
    Icons (apart from the Apple one)
    Look of the file manager - it's more pleasant to navigate around the file system
    Closing an application window doesn't actually quit the program, so when you next launch it, it is instantaneous
    Global menu (at the top)
    What are the latest Ubuntu alternatives to these? When it comes to actually changing the look of Ubuntu, what should I be looking at? I know the following exist:
    Shell theme
    Icons
    Fonts
    ...but is there anything else I need to look into to actually change the look? I hear the term "Window Manager" thrown around, but I don't actually know what that is. What are good sources for reviews/links to the latest and greatest customisation techniques? Ubuntu now comes with Unity, which I don't like very much. What are my alternatives? Should I look into GNOME 3, or switch to the classic desktop, which is GNOME 2 if I recall correctly?
    I hope I haven't put too much in one question and that it makes sense. Thanks.

    Read the article

  • When not to use Spring to instantiate a bean?

    - by Rishabh
    I am trying to understand what the correct usage of Spring would be - not syntactically, but in terms of its purpose. If one is using Spring, should Spring code replace all bean instantiation code? When should you use Spring, and when not, to instantiate a bean? Maybe the following code sample will help you understand my dilemma:
        List<ClassA> caList = new ArrayList<ClassA>();
        for (String name : nameList) {
            ClassA ca = new ClassA();
            ca.setName(name);
            caList.add(ca);
        }
    If I configure Spring it becomes something like:
        List<ClassA> caList = new ArrayList<ClassA>();
        for (String name : nameList) {
            ClassA ca = (ClassA) SomeContext.getBean(BeanLookupConstants.CLASS_A);
            ca.setName(name);
            caList.add(ca);
        }
    I personally think using Spring here is an unnecessary overhead, because:
    The code is simpler to read/understand without it.
    It isn't really a good place for Dependency Injection, as I am not expecting there to be multiple/varied implementations of ClassA that I would like the freedom to replace via Spring configuration at a later point in time.
    Am I thinking correctly? If not, where am I going wrong?

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite, I think, would be the best term.
    The article talks about fractal perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading type stuff, and have one bit of noise generate a region. But this would result in just huge amounts of islands. On the other extreme, I don't think I can really generate a supermassive sheet of perlin noise. And it would just be one big island, I think.
    I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map is really nice looking. And you could replace the ascii with tiles and get something very nice looking. Anyone have any ideas? Thanks. :D
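
    One standard answer, sketched below with some simplifications (this is simple value noise rather than true gradient Perlin noise, and the constants are arbitrary): because the noise is a pure function of world coordinates and a seed, you can generate any chunk on demand and its edges will match its neighbours', so there is neither a per-region "island" seam nor any need for one supermassive sheet.

        #include <cmath>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Deterministic per-coordinate hash: the same (x, y, seed) always yields
        // the same value, so chunks generated independently agree at their borders.
        static float lattice(int32_t x, int32_t y, uint32_t seed) {
            uint32_t h = seed;
            h ^= static_cast<uint32_t>(x) * 0x9E3779B9u;
            h ^= static_cast<uint32_t>(y) * 0x85EBCA6Bu;
            h ^= h >> 13; h *= 0xC2B2AE35u; h ^= h >> 16;
            return (h & 0xFFFFFF) / float(0xFFFFFF);   // in [0, 1]
        }

        static float smooth(float t) { return t * t * (3.0f - 2.0f * t); }

        // Value noise sampled at *global* coordinates -- the crucial point: never
        // reset the noise per chunk, always feed it world-space positions.
        float valueNoise(float x, float y, uint32_t seed) {
            int32_t x0 = static_cast<int32_t>(std::floor(x));
            int32_t y0 = static_cast<int32_t>(std::floor(y));
            float fx = smooth(x - x0), fy = smooth(y - y0);
            float a = lattice(x0, y0, seed),     b = lattice(x0 + 1, y0, seed);
            float c = lattice(x0, y0 + 1, seed), d = lattice(x0 + 1, y0 + 1, seed);
            return (a + (b - a) * fx) * (1 - fy) + (c + (d - c) * fx) * fy;
        }

        // Several octaves summed -- the "fractal" noise the article describes.
        float fractalNoise(float x, float y, uint32_t seed, int octaves = 4) {
            float sum = 0, amp = 1, freq = 1.0f / 64.0f, norm = 0;
            for (int i = 0; i < octaves; ++i) {
                sum += amp * valueNoise(x * freq, y * freq, seed + i);
                norm += amp;
                amp *= 0.5f; freq *= 2.0f;
            }
            return sum / norm;
        }

        // Generate one CHUNK x CHUNK region on demand; (cx, cy) can be any
        // integers, so the world is effectively unbounded without ever being
        // held in memory all at once.
        constexpr int CHUNK = 64;
        std::vector<float> generateChunk(int cx, int cy, uint32_t seed) {
            std::vector<float> tiles(CHUNK * CHUNK);
            for (int y = 0; y < CHUNK; ++y)
                for (int x = 0; x < CHUNK; ++x)
                    tiles[y * CHUNK + x] =
                        fractalNoise(float(cx * CHUNK + x), float(cy * CHUNK + y), seed);
            return tiles;
        }

        int main() {
            auto chunk = generateChunk(-3, 7, 12345u);  // negative chunk coords work too
            std::printf("tile[0] = %f\n", chunk[0]);
        }

    The key line is the call with float(cx * CHUNK + x): chunks only tile seamlessly because sampling always happens in world space, never in chunk-local space.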

    Read the article

  • Fujitsu Raku-Raku SmartPhone: Japanese Digital Seniors UX Insight from @debralilley

    - by ultan o'broin
    Super blog posting on the super-important subject of digital inclusion by Oracle partner Fujitsu appstech maven and Oracle Applications User Experience FXA-er and ACE Director Debra Lilley (@debralilley). Debra tells us how Fujitsu is enabling digital inclusion for older mobile users in Japan with their Raku-Raku smartphone: Fujitsu Raku-Raku - My UX Homework (Raku-Raku means easy or comfortable in Japanese). There are UX mobile, social media, and methodology takeaways in Debra's blog.
    Fujitsu Raku-Raku Smartphone Demo
    I encourage you to read Debra's blog. In it, she makes reference to a tailored social media experience for those digital seniors, as they'd be called in Japan (the UK and Ireland use the term silver surfers). You can find that online experience here.
    Online Community site for Fujitsu Raku-Raku Smartphone Digital Seniors (English translation via Google Translate)
    It's an important reminder that UX is global, sure, but also that worldwide accessibility and digital inclusion are priorities for UX too. It's vital that we understand such aspects of technology adoption, and how the requirements of different categories of technology users can be met. Oracle is committed to providing the best possible user experience for enterprise users of all ages and abilities. That means talking with all sorts of people worldwide and understanding how and why they want to use our technology and what their context of use is. You can read more about Oracle's accessibility program on our corporate website.
    Proud to say I prompted a few questions in Japan all the way from Ireland. So, UX is not only global, but you can drive UX research globally too without ever leaving home. Brilliant job, Debra. Here's to more such joint research creativity and UX collaborations worldwide between us. Wondering where we might go next? And what a fun way to do things too!

    Read the article

  • Difference between bug, defect and flaw

    - by Hossein
    I was reading "Software Security: Building Security In" and in the first chapter I faced with 3 terms: bug, defect and flaw. The author gave a definition for each of them but I couldn't completely understand these. Can someone give me some examples for each term? What is a defect and what is a flaw? I think I know what bug is, a bug is a malfunction of a part of system which produces undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more and correct me if I am wrong in this? UPDATE To be more precise in the book I mentioned above, they (the words) are presented in a way to make a distinction, that's why I am asking to know more. In that book there are some examples denoting which sample belongs to what and which category. For example: Buffer overflow is said to be a bug and issues in method overriding (subclassing issues) is being related to flaw category. Again race condition handling issues are considered bugs and Error-handling problems (fails open) are told to be flaws! I want more elaboration on these regards.

    Read the article

  • I used a 301 Permanent Redirect to a 3rd party site by mistake! Can I stop the redirection?

    - by Dees
    Oh Noes! I've been parking a domain name for a friend/client of mine on my hosting provider (Dreamhost, FWIW) for a while, and they eventually asked me to redirect their domain to a 3rd party website which is currently featuring some relevant promotional content. Once this period ends, we will probably go ahead and set up a proper website for the domain on my hosting account. I used Dreamhost's "redirect" hosting option in their domain configuration panel, not realizing that it would implement a 301 Permanent redirect, or what the implications were. Now it seems that for any client that has visited the site anytime recently, the 301 redirect is still cached/in effect, although I have changed the domain settings back to regular Dreamhost full site hosting. It seems that the only thing that can be done is to wait out the TTL/cache expiration for the redirect. I have no idea how long that might be, so I'm wondering if there is any good way to cache-bust the redirect or otherwise undo its long-term effects. I put a simple html meta refresh in the domain folder to replace the 301 to keep the intended functionality in place, but I'm still not able to access the domain's other content normally, even via FTP, etc. Isn't there anything I can do? Otherwise, how long does it take for a cached redirect to expire? It's gonna be a bummer if it's really permanent.

    Read the article

  • What's in your wallet, er…Inbox?

    - by johndoucette
    Since my first UUCP operation in UNIX to deliver and receive email, I have always been challenged to find the ultimate email organizer. About a year ago, I switched to a very simple process of managing email and have found the ultimate in organization. On the craziest of days, with 250+ emails, I keep my inbox empty. Here is how I do it.
    First, start with the following folders in your mailbox:
    Inbox
    Archive
    FollowUp
    Hold
    Of course, all inbound emails will start in the Inbox. As you work throughout the day, follow these steps to keep your inbox empty:
    Read the email. Are you responsible for any action? If you are and can do it immediately, then do it. If you need to do it later, move the email to the "FollowUp" folder.
    If you are not responsible for any action, move it to the Archive folder. Use Outlook's search to find it when you need it.
    If you will need to reference the email later in the week or for a short term (a week or two), then move the email to the "Hold" folder.
    As your day progresses, frequently review the FollowUp folder and accomplish the tasks.
    Notes:
    If I am waiting for someone to do something for me, I keep it in the FollowUp folder. As I review the folder, I am constantly reminded that there is something I am waiting on - and can send a simple reminder by forwarding the original email.
    I sometimes send myself a "todo" email and park it in the FollowUp folder.
    I like to know how many emails are in the folders, so I set the "Show total number of items" property on each folder to show the number of emails.

    Read the article

  • Cannot see boot options after editing grub background

    - by cipricus
    After solving this problem, I managed to get myself into trouble again out of nothing by trying to change the display of the dual-boot option page in Boot Customizer. I changed the background, the font size (I increased it) and the font style (I chose UnDotum). But Boot Customizer gave me an error (I mean a message that the application was closed unexpectedly, or something like that). I restarted Boot Customizer and the settings were there. When I rebooted, instead of the normal boot options list there was just the background image that I had selected, and nothing else.
    I used Boot Repair to repair grub; it says it did so successfully, but I still get only the background image when I try to boot. Any ideas? (Could it be that I chose the UnDotum font style? That was installed in Lubuntu - but how could it be accessible when displaying boot options?)
    The contents of /etc/default/grub are:
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
    I have tried to modify /etc/default/grub as follows:
    GRUB_HIDDEN_TIMEOUT=0 to 10
    GRUB_HIDDEN_TIMEOUT_QUIET=true to false
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to ""
    but it doesn't help. Also, using Shift doesn't make the list visible. I am looking for something like a command that would reset grub options to default.
    [When trying to reinstall grub I get to this window in term:

    Read the article

  • Vernon's book Implementing DDD and modeling of underlying concepts

    - by EdvRusj
    Following questions all refer to examples presented in Implementing DDD In article we can see from Figure 6 that both BankingAccount and PayeeAccount represent the same underlying concept of Banking Account BA 1. On page 64 author gives an example of a publishing organization, where the life-cycle of a book goes through several stages ( proposing a book, editorial process, translation of the book ... ) and at each of those stages this book has a different definition. Each stage of the book is defined in a different Bounded Context, but do all these different definitions still represent the same underlying concept of a Book, just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA? 2. a) I understand why User shouldn't exist in Collaboration Context ( CC ), but instead should be defined within Identity and Access Context IAC ( page 65 ). But still, do User ( IAC ), Moderator ( CC ), Author ( CC ),Owner ( CC ) and Participant ( CC ) all represent different aspects of the same underlying concept? b) If yes, then this means that CC contains several model elements ( Moderator, Author, Owner and Participant ), each representing different aspect of the same underlying concept ( just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA ). But isn't this considered a duplication of concepts ( Evan's book, page 339 ), since several model elements in CC represent the same underlying concept? c) If Moderator, Author ... don't represent the same underlying concept, then what underlying concept does each represent? 3. In an e-commerce system, the term Customer has multiple meanings ( page 49 ): When user is browsing the Catalog, Customer has different meaning than when user is placing an Order. But do these two different definitions of a Customer represent the same underlying concept, just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA? thanks

    Read the article

  • bluetooth daemon not running at startup

    - by ffaxer
    I'm trying to connect a bluetooth mouse to my Xubuntu system using Blueman (v. 1.21). The problem seems to be bluetoothd not running at startup, so blueman refuses to start; only a dialog appears: "Bluez daemon is not running, blueman-manager cannot continue." On my system, bluetoothd will run only as root (sudo), so my current workaround is simply to run sudo bluetoothd manually, which works fine, but I'd like to have it run at startup so that my mouse just works without any interaction from me, if possible. If I try to start bluetoothd as non-root, it reports:
        Bluetooth daemon 4.91
        Unable to get on D-Bus
    In the startup scripts I found the same bluetoothd script in all runlevels and init.d:
        DAEMON=/usr/sbin/bluetoothd
        test -f /usr/sbin/bluetoothd || exit 0
        # bluetoothd normally starts up by udev rules. it needs dbus to function,
        log_progress_msg "bluetoothd"
        pkill -TERM bluetoothd || true
        log_progress_msg "bluetoothd"
    I looked in /etc/udev/rules.d/ but found no reference to bluetoothd. Further, I have already tried the following, with no luck:
    Editing /etc/dbus-1/system.d/bluetooth.conf to include my user (essentially copying the part that was for root). I tried it both while keeping the root policy and without it - still no luck!
    Editing /etc/pam.d/common-session and /etc/pam.d/gdm to include the line: session optional pam_ck_connector.so. In the case of common-session it was already there but with a "nox11", which I tried removing. No luck either.
    Btw, I'm confused as to which session manager I'm using, since I have both xfce4-session and gdm-session-worker running. Anyway, I hope someone is savvy enough to figure it out or bring some hints; otherwise I sincerely apologize for wasting your time! I'll sign off with uname -a:
        Linux [mycompname] 3.0.0-9-lowlatency #12ppa1~natty1-Ubuntu SMP PREEMPT Mon Aug 22 06:52:15 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
    Peace B)

    Read the article
