Search Results

Search found 16704 results on 669 pages for 'ews managed api'.


  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle’s Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization’s data management act together.

    The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization.

    Profile data sources: Profiling data sources involves taking an inventory of all data sources across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.

    Define a data strategy: A data strategy requires an understanding of data usage. Given that usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies.

    Define a data consolidation plan: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed, and synchronization policies also need to be developed.

    Data maintenance: The desired standardization needs to be defined, including what constitutes a ‘match’ once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.

    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects of full no-risk data utilization.

    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve its ultimate goals.

    Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, siloed structures with limited integration; and gaps in automation. Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well. Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation. Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and MDM is leveraged in business process orchestration.

    Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity, with all the business benefits that accrue to organizations that have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It’s free, so go ahead and download it and use it as you see fit.

    Read the article

  • Making the most of next week’s SharePoint 2010 developer training

    - by Eric Nelson
    [You can still register if you are free on the afternoons of the 9th to 11th – UK time.] We have 50+ registrations with more coming in – which is fantastic. Please read on to make the most of the training.

    Background: We have structured the training to make sure that you can still learn lots during the three days even if you do not have SharePoint 2010 installed. Additionally, the course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. This means that if you have zero time between now and next Wednesday, you are still good to go. But if you can do some pre-work you will likely get even more out of the three days.

    Step 1: Check out the topics and resources available on-demand. Take a lap around the SharePoint 2010 Training Course on Channel 9, and download the SharePoint Developer Training Kit.

    Step 2: Use a pre-configured Virtual Machine which you can download (best start today – it is large!). Consider using the VM we created if you don't have access to SharePoint 2010. You will need a 64-bit host OS and a bare minimum of 4GB of RAM (8GB recommended). Virtual PC cannot be used with this VM – Virtual PC only supports 32-bit guests. The 2010-7a Information Worker VM gives you everything you need to develop for SharePoint 2010. Watch the video on how to use this VM, then download the VM. Remember you only need to download the “parts” for the 2010-7a VM. There are three subtly different ways of using this VM:
    1. Easiest is to follow the advice of the video: get yourself a host OS of Windows Server 2008 R2 with Hyper-V and simply use the VM.
    2. Alternatively, you can take the VHD and create a “Boot to VHD” if you have Windows 7 Ultimate or Enterprise Edition. This works really well – especially if you are already familiar with “Boot to VHD” (this post I did will help you get started).
    3. Or you can take the VHD and use an alternative VM tool such as VirtualBox if you have a different host OS. NB: this tends to involve some work to get everything running fine. Check out parts 1 to 3 from Rolly, and if you go with VirtualBox use an IDE controller, not SATA – SATA will blue screen. Note: I also converted the VHD to a VMDK. I used the free StarWind Converter to do this whilst I was fighting blue screens – I’m not sure it’s necessary, as VirtualBox does now work with VHDs.

    Or Step 3: Install SharePoint 2010 on a 64-bit Windows 7 or Vista host. I haven’t tried this but it is now supported. Check out MSDN.

    Final notes: I am in the process of securing a number of hosted VMs for ISVs directly managed by my team. Your Architect Evangelist will have details once I have them! Otherwise we can sort it out on the Wednesday. Regrettably I am unable to give folks 1:1 support on any issues around Boot to VHD, 3rd-party VM products, etc.

    Related Links: Check you are fully plugged into the work of my team – have you done these simple steps, including joining our new LinkedIn group?

    Read the article

  • June is going to be a busy month!

    - by Monica Kumar
    Who says things slow down in summer? Well, maybe for school kids, but certainly not for Oracle's Virtualization team! June is turning out to be one of the busiest months for us. We are going to be participating in a number of industry events. If you happen to be at any of these, please stop by the Oracle booth and our session(s). Let's go through a rundown of these events.

    1. 13th Annual Call Center Week, June 4-8, Caesars Palace, Las Vegas (event website). You may now be wondering why we are at a call center show. It's really simple: Oracle's Desktop Virtualization solutions offer the best way for call centers to reliably and securely access enterprise apps using a variety of endpoint devices, such as an iPad or a Sun Ray Client. Provisioning new employees becomes a breeze. We'll be jointly showcasing our solution with Oracle's CRM team. Come check us out.

    2. Gartner Infrastructure & Operations Management Summit, June 5-7, Orlando, FL (event website). Oracle is a Premier sponsor of the Gartner IOM Summit this June 5-7, 2012 in Orlando, FL. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions.

    3. Cloud Expo East. Check out our website for details of our participation. Stop by booth 511 to talk to our Cloud, Virtualization and Big Data experts. In addition, we're delivering a number of sessions at Cloud Expo. The one I want to highlight is the following. Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder. Abstract: As virtualization adoption progresses beyond server consolidation, it is also transforming how enterprise applications are deployed and managed in an agile environment. The traditional method of business-critical application deployment, where administrators have to contend with an array of unrelated tools and custom scripts to deploy and manage applications, OS and VM instances, can no longer scale effectively in a fast-changing cloud computing environment to achieve the desired response time and efficiency. Oracle VM and Oracle Virtual Assembly Builder allow applications, associated components, deployment metadata, management policies and best practices to be encapsulated into ready-to-run VMs for rapid, repeatable deployment and ease of management. Join us in this Cloud Expo session to see how Oracle VM and Oracle Virtual Assembly Builder allow you to deploy complex multi-tier applications in minutes and enable you to easily onboard existing applications to cloud environments. Get your free Cloud Expo pass now! We're offering complimentary VIP Gold Passes. Go to https://www.blueskyz.com/v3/Login.aspx?ClientID=19&EventID=56&sg=177, click “Continue” if you are a New User or log in if you have already created an account. Once there, you can view the Agenda or Register for Cloud Expo. To register, fill out the basic business card questions and then enter oracleVIPgold in the Priority Code field to change the price from $2,000 to $0.

    4. CiscoLive 2012, June 10-14, San Diego, CA (event website). Our Oracle VM and Oracle Linux experts will talk about joint collaboration with Cisco on UCS. We'll also highlight customer use cases.

    5. Gartner Infrastructure & Operations Management Summit EMEA, June 11-12, Frankfurt, Germany (event website). Meet experts from our Virtualization and Linux team in EMEA. Stop by our booth and find out what's new in Oracle VM Server for x86 and Oracle Linux.

    June is going to be busy.

    Read the article

  • Maven Integrated View for NetBeans IDE

    - by Geertjan
    Started working on an oft-heard request from Kirk Pepperdine for an integrated view for multimodule builds for Maven projects in NetBeans IDE, as explained here. I suddenly had some kind of brainwave and solved all the remaining problems I had, by delegating to the LogicalViewProvider's node instead of the project's node, which means I inherit all the icons, actions, package nodes, and anything else that was originally defined within the original project, in this case for the open source JAnnocessor project. With this in place, the Maven submodules can either be edited in-line, i.e., within the parent project, or separately, by opening them in the traditional NetBeans way.

    Get the module here: http://plugins.netbeans.org/plugin/45180/?show=true

    Some people out there might be interested in how this is achieved. First, hide the original ModulesNodeFactory in the layer. Then create the following class, which creates the integrated view described above:

        import java.util.ArrayList;
        import java.util.List;
        import javax.swing.event.ChangeListener;
        import org.netbeans.api.project.Project;
        import org.netbeans.spi.project.SubprojectProvider;
        import org.netbeans.spi.project.ui.LogicalViewProvider;
        import org.netbeans.spi.project.ui.support.NodeFactory;
        import org.netbeans.spi.project.ui.support.NodeList;
        import org.openide.nodes.FilterNode;
        import org.openide.nodes.Node;

        @NodeFactory.Registration(projectType = "org-netbeans-modules-maven", position = 400)
        public class ModulesNodeFactory2 implements NodeFactory {

            @Override
            public NodeList<?> createNodes(Project prjct) {
                return new MavenModulesNodeList(prjct);
            }

            private class MavenModulesNodeList implements NodeList<Project> {

                private final Project project;

                public MavenModulesNodeList(Project prjct) {
                    this.project = prjct;
                }

                @Override
                public List<Project> keys() {
                    return new ArrayList<Project>(
                            project.getLookup()
                                   .lookup(SubprojectProvider.class)
                                   .getSubprojects());
                }

                @Override
                public Node node(final Project project) {
                    // Delegate to the subproject's own logical view node,
                    // inheriting its icons, actions and package nodes.
                    Node node = project.getLookup()
                                       .lookup(LogicalViewProvider.class)
                                       .createLogicalView();
                    return new FilterNode(node, new FilterNode.Children(node));
                }

                @Override
                public void addChangeListener(ChangeListener cl) {
                }

                @Override
                public void removeChangeListener(ChangeListener cl) {
                }

                @Override
                public void addNotify() {
                }

                @Override
                public void removeNotify() {
                }
            }
        }

    Considering that there are only about five actual statements above, it's pretty amazing how much can be achieved with so little code. The NetBeans APIs really are very cool. Hope you like it, Kirk!

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We could also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors that the List interface does not specify it should throw. My question is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on; see below.

    Footnote: Use Cases. In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration. Another, more adventurous, use case is as a slot-in replacement for libraries that work with basic Lists. For example, if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing its whole queue in the event of a crash, and we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).
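    One common way to surface such circumstantial failures without changing any List method signatures is to wrap the checked persistence exception in an unchecked one, much as later JDKs wrap IOException in java.io.UncheckedIOException. Below is a minimal sketch of that idea; note that Db, DbException and rowCount() are hypothetical stand-ins extending the skeleton above, not a real API:

        import java.util.AbstractList;

        /** Unchecked wrapper, analogous in spirit to java.io.UncheckedIOException. */
        class DbAccessException extends RuntimeException {
            DbAccessException(Throwable cause) {
                super(cause);
            }
        }

        class DbBackedList extends AbstractList<String> {
            @Override
            public String get(int index) {
                try {
                    // Db and DbException stand in for the hypothetical persistence API.
                    return Db.getTable().getRow(index).asString();
                } catch (DbException e) {
                    // Circumstantial failure: rethrown unchecked, so the List
                    // contract's signatures stay untouched, mirroring the
                    // OutOfMemoryError analogy in the question.
                    throw new DbAccessException(e);
                }
            }

            @Override
            public int size() {
                return Db.getTable().rowCount(); // hypothetical
            }
        }

    Callers that know about the persistence layer can catch DbAccessException (and perhaps retry the connection/read); everyone else fails fast, which is exactly the trade-off the question weighs.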

    Read the article

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND: We are building a Java (SE) trading application which will be monitoring market data and sending trade messages based on the market data, and also on user-defined configuration parameters. We are planning to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user). The client has about 6 different operations it needs to perform with the server, for example: CRUD with configuration parameters; querying a subset of the data; receiving updates of current positions from the server. It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure.

    To connect the client with the server, we have been considering using: a SOAP Web Service; a RESTful service; or building a custom TCP/IP-based API (text or XML) (least preferred, but we use this approach with other applications we have). As best as I understand them, the pros and cons of the different web service flavors are:

    SOAP. Pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just running a refresh on the Web Service reference to regenerate the classes. Con: more overhead in the communication layer (sending more text, etc.). We're not using a J2EE container, so maybe it doesn't work so well with J2SE.

    REST. Pro: lighter weight, less data. Has good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.) Con: the client will not be automatically aware if there are any new operations or properties added (?), so the communication layer needs to be updated by the developer if the server interface changes. Con (both approaches): the server cannot really push updates to the client at regular intervals (?). (However, we won't mind if the client polls the server to get updates.) See the sketch of the REST option below.

    QUESTION: What are your opinions on the above options, or suggestions for other ways to connect the 2 parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
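    For a sense of how light the REST option can be on the Java SE side, here is a minimal sketch of the "receive current positions" operation as a JAX-RS resource. This is an illustration only: the resource path, the Position type and the TradingEngine call are hypothetical, and the choice of JAX-RS (run under an embedded HTTP server such as Jersey's, with a JSON provider on the classpath) is an assumption rather than part of the original design.

        import java.util.List;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        // Hypothetical resource the WPF client would poll every few seconds.
        @Path("/positions")
        public class PositionsResource {

            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public List<Position> currentPositions() {
                // Hypothetical call into the trading engine for a snapshot.
                return TradingEngine.snapshotPositions();
            }
        }

    On the .NET side, the client then needs little more than an HTTP GET and a JSON deserializer, which keeps the communication layer close to off-the-shelf, at the cost of the hand-maintained contract noted in the REST con above.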

    Read the article

  • SQL Constraints – CHECK and NOCHECK

    - by David Turner
    One performance issue I faced at a recent project was with the way that our constraints were being managed. We were using SubSonic as our ORM, and it has a useful tool for generating your ORM code called SubStage – once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want to do this. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too. The problem came when we decided to use the generate scripts feature to migrate the database onto a test database instance – it turns out that the DDL scripts that it generates include the WITH NOCHECK option, so when we executed them on the test instance and performed some testing, we found that performance wasn’t as expected.

    A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? Well, this refers to the SQL Server Query Optimizer, and whether it trusts that the constraint is valid. If it trusts the constraint then it doesn’t check it is valid when executing a query, so the query can be executed much faster.

    Here is an example based on this article on TechNet. Here we create two tables with a Foreign Key constraint between them, add a single row to each, and then query the tables:

        DROP TABLE t2
        DROP TABLE t1
        GO

        CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
        CREATE TABLE t2(col1 int NOT NULL)

        ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
        REFERENCES t1(col1)

        INSERT INTO t1 VALUES(1)
        INSERT INTO t2 VALUES(1)
        GO

        SELECT COUNT(*) FROM t2
        WHERE EXISTS
            (SELECT *
             FROM t1
             WHERE t1.col1 = t2.col1)

    This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by executing the following SQL to query the ‘is_disabled’ and ‘is_not_trusted’ properties:

        select name, is_disabled, is_not_trusted from sys.foreign_keys

    This shows is_disabled = 0 and is_not_trusted = 0 for fk_t2_t1. We can disable the constraint using this SQL:

        alter table t2 NOCHECK CONSTRAINT fk_t2_t1

    And when we query the constraints again, we see that the constraint is now disabled and not trusted. So the constraint won’t be enforced, and we could insert data into table t2 that doesn’t match the data in t1 – but we don’t want to do this, so we can enable the constraint again using this SQL:

        alter table t2 CHECK CONSTRAINT fk_t2_t1

    But when we query the constraints again, we see that the constraint is enabled, yet it is still not trusted. This means that the optimizer will check the constraint each time a query is executed over it, which will impact the performance of the query, and this is definitely not what we want, so we need to make the constraint trusted by the optimizer again.

    First we should check that our constraints haven’t been violated, which we can do by running DBCC:

        DBCC CHECKCONSTRAINTS (t2)

    Hopefully DBCC completes without reporting any violations of your constraint. Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL:

        alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1

    At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax, and when we query the constraint's properties, we find that it is now trusted again. To fix our specific problem, we created a script that checked all constraints on our tables, using the following syntax:

        ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL

    Read the article

  • ALSA samples capture: cannot open device

    - by Randagio
    I'm quite new to Linux (Lubuntu 12.04, for the sake of precision) and to ALSA programming in general. I'm trying to write a C program to capture audio from the internal PC microphone in order to process it. So as a first step I googled a bit and found this article on capturing audio samples, "A tutorial on using the ALSA Audio API", but when I compile it and execute it with:

        ./capture "default"

    or

        ./capture "hw:0,0"

    and all the possible variants on the theme, it always raises the error: cannot open device hw:0,0 (no such file or directory). So the issue is: what is the name of the mic audio device to pass as a parameter to record the audio from the mic? The mic is working OK, because the Sound Recorder program records sounds perfectly and I can play them back. The output of aplay -l is the following:

        **** List of PLAYBACK Hardware Devices ****
        card 0: I82801DBICH4 [Intel 82801DB-ICH4], device 0: Intel ICH [Intel 82801DB-ICH4]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: I82801DBICH4 [Intel 82801DB-ICH4], device 4: Intel ICH - IEC958 [Intel 82801DB-ICH4 - IEC958]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    and this is the amixer output (cut):

        Simple mixer control 'Master',0
          Capabilities: pvolume pswitch penum
          Playback channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Front Left: Playback 31 [100%] [0.00dB] [on]
          Front Right: Playback 31 [100%] [0.00dB] [on]
        Simple mixer control 'Master Mono',0
          Capabilities: pvolume pvolume-joined pswitch pswitch-joined penum
          Playback channels: Mono
          Limits: Playback 0 - 31
          Mono: Playback 4 [13%] [-40.50dB] [on]
        Simple mixer control 'PCM',0
          Capabilities: pvolume pswitch penum
          Playback channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Front Left: Playback 31 [100%] [12.00dB] [on]
          Front Right: Playback 31 [100%] [12.00dB] [on]
        Simple mixer control 'CD',0
          Capabilities: pvolume pswitch cswitch cswitch-exclusive penum
          Capture exclusive group: 0
          Playback channels: Front Left - Front Right
          Capture channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Front Left: Playback 0 [0%] [-34.50dB] [off] Capture [off]
          Front Right: Playback 0 [0%] [-34.50dB] [off] Capture [off]
        Simple mixer control 'Mic',0
          Capabilities: pvolume pvolume-joined pswitch pswitch-joined cswitch cswitch-exclusive penum
          Capture exclusive group: 0
          Playback channels: Mono
          Capture channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Mono: Playback 22 [71%] [-1.50dB] [on]
          Front Left: Capture [on]
          Front Right: Capture [on]
        Simple mixer control 'Mic Boost (+20dB)',0
          Capabilities: pswitch pswitch-joined penum
          Playback channels: Mono
          Mono: Playback [off]
        Simple mixer control 'Mic Select',0
          Capabilities: enum
          Items: 'Mic1' 'Mic2'
          Item0: 'Mic1'
        Simple mixer control 'Stereo Mic',0
          Capabilities: pswitch pswitch-joined penum
          Playback channels: Mono
          Mono: Playback [off]

    So according to aplay it seems I have no recording device, but according to amixer I've got the mic, a mic boost and a stereo mic, with all those gorgeous things in their place!! If so, how could the Sound Recorder record the audio without any problem at all?!?! For sure I'm giving the wrong device name to the command line for capturing audio, but I'm losing hope of finding the correct one! Please help... before I tear my hair out!!!

    Read the article

  • Create a Remote Git Repository from an Existing Xcode Repository

    - by codeWithoutFear
    Introduction. Distributed version control systems (VCSs), like Git, provide a rich set of features for managing source code. Many development tools, including Xcode, provide built-in support for various VCSs. These tools provide simple configuration with limited customization to get you up and running quickly, while still providing the safety net of basic version control.

    I hate losing (and re-doing) work. I have OCD when it comes to saving and versioning source code. Save early, save often, and commit to the VCS often. I also hate merging code. Smaller and more frequent commits enable me to minimize merge time and effort as well. The workflow I prefer, even for personal exploratory projects, is:

    1. Make small local changes to the codebase to create an incrementally improved (and working) system.
    2. Commit these changes to the local repository. Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system. After all, I can easily recover to a recent working state.
    3. Repeat 1 & 2 until the codebase contains “significant” functionality and I have connectivity to the remote repository.
    4. Push the accumulated changes to the remote repository. The smaller the change set, the less likely extensive merging will be required. Smaller is better, IMHO.

    The remote repository typically has a greater degree of fault tolerance and active management dedicated to it. This can be as simple as a network share that is backed up nightly, or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. Xcode’s out-of-the-box Git integration enables steps 1 and 2 above. Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.

    Creating a Remote Repository. These are the steps I use to enable the full workflow identified above. For simplicity, the “remote” repository is created on the local file system. This location could easily be on a mounted network volume.

    Create a Test Project. My project is called HelloGit and is located at /Users/Don/Dev/HelloGit. Be sure to commit all outstanding changes. Xcode always leaves a single changed file for me after the project is created and the initial commit is submitted.

    Clone the Local Repository. We want to clone the Xcode-created Git repository to the location where the remote repository will reside. In this case it will be /Users/Don/Dev/RemoteHelloGit. Open the Terminal application, then clone the local repository to the remote repository location:

        git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

    Convert the Remote Repository to a Bare Repository. The remote repository only needs to contain the Git database. It does not need a checked-out branch or local files. Go to the remote repository folder:

        cd /Users/Don/Dev/RemoteHelloGit

    Indicate the repository is “bare”:

        git config --bool core.bare true

    Remove the files, leaving the .git folder:

        rm -R *

    Remove the “origin” remote:

        git remote rm origin

    Configure the Local Repository. The local repository should reference the remote repository. The remote name “origin” is used by convention to indicate the originating repository. This is set automatically when a repository is cloned. We will use the “origin” name here to reflect that relationship.

    Go to the local repository folder:

        cd /Users/Don/Dev/HelloGit

    Add the remote:

        git remote add origin /Users/Don/Dev/RemoteHelloGit

    Test Connectivity. Any changes made to the local Git repository can be pushed to the remote repository, subject to the merging rules Git enforces.

    Create a new local file:

        date > date.txt

    Add the new file to the local index:

        git add date.txt

    Commit the change to the local repository:

        git commit -m "New file: date.txt"

    Push the change to the remote repository:

        git push origin master

    Now you can save, commit, and push/pull to your OCD heart’s content! Code without fear! --Don

    Read the article

  • Book Review: Oracle ADF 11gR2 Development Beginner's Guide

    - by Grant Ronald
    Packt Publishing asked me to review Oracle ADF 11gR2 Development Beginner's Guide by Vinod Krishnan, so on a couple of long flights I managed to get through the book in a couple of sittings. One point to make clear before I go into the review: having authored "The Quick Start Guide to Fusion Development: JDeveloper and Oracle ADF", I've written a book which covers the same topic/beginner level. I also think it's worth stating up front that I applaud anyone who has gone through the effort of writing a technical book. So well done, Vinod. But on to the review.

    The book itself is a good breakdown of topic areas. Vinod starts with a quick tour around the IDE, which is an important step given that all the work you do will be through the IDE. The book then goes through the general path that I tend to always teach: a quick overview demo, ADF BC, validation, binding, UI, task flows and then the various "add on" topics like security, MDS and advanced topics. So it covers the right topics in, IMO, the right order. I also think the writing style flows nicely: it's a relatively easy book to read, it doesn't get too formal, and the "Have a go hero" hands-on sections will be useful for many.

    That said, I did pick out a number of styles/themes in the writing that I found went against the idea of a beginner's guide. For example, in writing my book, I tried to carefully avoid talking about topics not yet covered or not yet relevant at that point in someone's learning. So, as a new ADF developer reading this book, would I really need to know about the ADFBindingFilter and the DataBindings.cpx file on page 58? I've only just learned how to do a simple drag-and-drop application, so showing me XML configuration files relevant to the JSF/ADF lifecycle is probably going to scare me off! I found this in a couple of places. For example, the security chapter starts on page 219, but by page 222 (and most of the preceding pages are hands-on steps) we're diving into web.xml, weblogic.xml, adf-config.xml, jsp-config.xml and jazn-data.xml. Don't get me wrong, I'm not saying you shouldn't know this, but I feel you have to give people a strong grounding in the concepts before showing them implementation files. Having just learned what ADF Security is, will "The initialization parameter remove.anonymous.role is set to false for the JpsFilter filter as this filter is the first filter defined in the file" really help me?

    The other theme I found that I felt didn't work was that a couple of the chapters descended into a reference guide. For example, page 159 onwards basically lists UI components and their properties, and page 87 onwards lists the attributes of ADF BC in pretty much the same way as the online help or developer guide. I have a personal aversion to any sort of help that says little more than the attribute name itself, e.g. "Precision Rule: this option is used to set a strict precision rule", or "Property Set: this is the property set that has to be applied to the attribute". Hmmm, I think I could have worked that out myself. What I would want to know in a beginner's guide is what these are for and what I might use them for... and if I don't need them to create an emp/dept example, then maybe it's better to leave them out.

    All that said, would the book help me? Yes, it would. It's obvious that Vinod knows ADF, his style is relatively easy going, and the book covers all that it has to, but I think the book could have done a better job on the educational side of guiding beginners.

    Read the article

  • Is It Worth It To Learn Experimental Languages

    - by Xander Lamkins
    I'm a young programmer who desires to work in the field someday as a programmer. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know that it is valuable to extend what I know – to learn languages that make you think differently). I took a look online to see what languages were common. Everybody knows C and C++ (even those muggles who know so little about computers in general), so I thought, maybe I should push for C.

    C and C++ are nice, but they are old. Things like Haskell and Forth (etc. etc. etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well, and is slow because it's run by the JVM and not compiled to native code.

    I've been a Windows developer for quite a while. I recently started using Java – but only because it was more versatile and spreadable to other places. The problem is that it doesn't look like a very usable language, for these reasons: its most-used purposes are web applications and cellphone apps (specifically Android); as far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE in the language the IDE is for – it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and bipolar as far as computer spec support goes. Other than that it's used for servers, but heck – I don't only want to make/configure servers.

    The .NET languages are nice; however: people laugh if I even mention VB.NET or C# in a serious conversation; it isn't cross-platform unless you use Mono (which is still in development and has some improvements to be made); and it lacks low-level stuff because, like Java with the JVM, it is run/managed by the CLR. My first thought was learning something like C and then using it as a springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute.

    What I've Looked Into. Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish between the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of being completely compiled. D also looks nice. It seems like a very usable language, and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success, because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice and focused on other things, such as Opa for web development and Go by Google.

    My Question. Is it worth learning these "experimental" languages? I've read other questions that say that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this, and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be old) programming languages. I know that many people see this as important, but would any of you ever actually consider learning (assuming you didn't already know it) FORTRAN? My goal is to stay current to make sure I'm successful in the future.

    Disclaimer. Yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING! I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments.

    Read the article


  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking?

    Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases have always been taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies, and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory.

    Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with the BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give the BACPAC the bum's rush.

    Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task.

    Cheers, Tony.

    Read the article

  • NetworkManager Applet shows no networks

    - by Kkelk
    I am "the friend" referred to in the questions here and here. I decided to come and ask a question myself, as I can still not connect to the wireless network. I downloaded Keryx, as suggested here, and managed to download the necessary package and its dependencies. When I attempted to install the packages on Ubuntu using Keryx, Keryx just closed. Following this, I installed the packages manually using dpkg, and as far as I can tell, this was successful: kieran@ubuntu:~$ cd /host/wifi/Keryx/keryx/projects/Kieran/packages kieran@ubuntu:/host/wifi/Keryx/keryx/projects/Kieran/packages$ sudo dpkg -i *.deb [sudo] password for kieran: Selecting previously deselected package bcmwl-kernel-source. (Reading database ... 118296 files and directories currently installed.) Unpacking bcmwl-kernel-source (from bcmwl-kernel-source_5.60.48.36+bdcom-0ubuntu5_i386.deb) ... Selecting previously deselected package dkms. Unpacking dkms (from dkms_2.1.1.2-3ubuntu1.1_all.deb) ... Selecting previously deselected package fakeroot. Unpacking fakeroot (from fakeroot_1.14.4-1ubuntu1_i386.deb) ... Selecting previously deselected package linux-image. Unpacking linux-image (from linux-image_2.6.35.22.23_i386.deb) ... Selecting previously deselected package menu. Unpacking menu (from menu_2.1.44ubuntu1_i386.deb) ... Selecting previously deselected package patch. Unpacking patch (from patch_2.6-2ubuntu1_i386.deb) ... Setting up fakeroot (1.14.4-1ubuntu1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up linux-image (2.6.35.22.23) ... Setting up menu (2.1.44ubuntu1) ... Setting up patch (2.6-2ubuntu1) ... Processing triggers for man-db ... Setting up dkms (2.1.1.2-3ubuntu1.1) ... Setting up bcmwl-kernel-source (5.60.48.36+bdcom-0ubuntu5) ... Loading new bcmwl-5.60.48.36+bdcom DKMS files... First Installation: checking all kernels... Building only for 2.6.35-22-generic Building for architecture i686 Building initial module for 2.6.35-22-generic Done. wl.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/2.6.35-22-generic/updates/dkms/ depmod..... DKMS: install Completed. update-initramfs: deferring update (trigger activated) Processing triggers for install-info ... Processing triggers for doc-base ... Processing 31 changed 1 added doc-base file(s)... Registering documents with scrollkeeper... Processing triggers for menu ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.35-22-generic Warning: No support for locale: en_GB.utf8 After rebooting, however, there were still no wireless networks in the NetworkManager Applet list. I opened the file /var/lib/NetworkManager/NetworkManager.state, and both NetworkEnabled and WirelessEnabled were set to True. While i'm very concious I may be asking a stupid question here, both my friend and I have nothing left to suggest, and as such - I would be very grateful for any answers as to how to get wireless working.

    Read the article

  • Developer Day @ OOP 2011 with SOA Specialized Partners

    - by Jürgen Kress
    Oracle SOA Specialized Partners like Opitz Consulting participate in our key marketing events. Therefore make sure that you start your journey to SOA Specialization!

    Oracle Developer Day at OOP: Discover the application possibilities and the power of Java technology! Including live hacking with special guest Java guru Adam Bien!

    Enterprise applications made easy! Accelerate your development with Java. Come to Oracle's free full-day workshop at OOP and get to know the power of Java. Learn more about the Java strategy and the product roadmap, the application areas opened up by Java SE for Embedded, and how an SOA and BPM solution can be realized on the basis of Java. The many improvements in Java EE 6 make developers' lives considerably easier. Do you already know the potential of Java EE 6? Adam Bien's live hacking will sweep you off your feet. Torsten Winterberg, Oracle Fusion Middleware ACE Director, and Danilo Schmiedel will present how Java developers can integrate the Oracle SOA and BPM solutions. In the afternoon you can then try out the Java Persistence API, Java Beans, CDI and further technologies in a hands-on session on your own laptop. In this free Oracle workshop you can exchange ideas with like-minded people, see the latest technology demonstrated by Oracle experts, and take part in practical programming exercises.

    This event is the right place for you if you want to know more about the current status of the Java roadmap, learn more about Java technologies and solutions (Java SE, ME, etc.), try out the Java EE platform, understand the advantages of Java EE 6 for your work, scale up to an enterprise landscape, build front ends with JavaServer Faces, or are planning or just embarking on new development projects. Register now!

    ICM – Internationales Congress Center München, Am Messesee, Trudering-Riem, 81829 München. 27 January 2011, 9:00 – 16:30.

    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: OOP,Adam Bien,Torsten Winterberg,Opitz Consulting,Oracle,SOA,SOA Specialization,OPN

    Read the article

  • Managing game state / 'what to update' within an XNA game 'screen'

    - by codinghands
    Note - having read through other GameDev questions suggested when writing this question, I'm confident this isn't a dupe. Of course, it's 3am and I'm likely wrong, so please mod as such if so.

    I'm trying to figure out how best to manage state within my game screens - please bear with me! At the moment I'm using a heavily modified version of the fantastic game state management example on the XNA site, available here. This is working perfectly for my 'Screens': 'IntroScreen' with some shiny logos, 'TitleScreen' and a 'MenuScreen' stacked on top for the title and menu, 'PlayScreen' for the actual gameplay, etc. Each screen has a bunch of sprites, and an 'Update' and 'Draw', managed by a 'ScreenManager'. In addition to the above, and as suggested as an answer to my other question here, most screens have a 'GameProcessQueue' class full of 'GameProcess'es which lets me do just about anything (animations, youbetcha!), in any order, in sequence or parallel.

    Why mention all this? When I talk about managing game state I'm thinking more of complex scenarios within a 'Screen'. 'TitleScreen', 'MenuScreen' and the like are all relatively simple; 'PlayScreen' less so. How do people manage the different 'states' within the screen (or whatever you call it) that 'does' gameplay? (For me, the 'PlayScreen'.) I've thought about the following (a sketch of option 2 follows after this list):

    1. An enum of different states in the Screen, with an 'activeState' enum-typed variable, switching on the enum in the Screen Update() loop to determine which Screen Update 'sub'-function is called. I can see this getting hairy pretty fast though, as screens get more complex, with the 'PlayScreen' becoming a behemoth mega-class.

    2. A 'State' class with an Update loop: a Screen can have any number of 'States', 1+ of which are 'active'. The Screen update loop calls update on all active states. States themselves know which screen they belong to, and may even belong to a 'StateManager' which handles transitioning from one state to the next. Once a state is over it's removed from the Screen's state list. The Screen doesn't need a bunch of GameProcessQueues; each State has its own.

    3. Abstract Screen further to be more flexible. I can see the similarities between what I've got (game 'Screens' handled by a ScreenManager) and what I want (states within a screen, and a mechanism to manage them). However, at the moment I see 'Screens' as high level and very distinct ('PlayScreen' with baddies != 'MenuScreen' with 4 words and event handlers), whereas my proposed 'States' are more intrinsically tied to a specific screen with complex requirements. I think.

    This is for a turn-based board game, so it's easier to define things as a discrete series of steps (IntroAnimation - P1Turn - P2Turn - P1Turn ... - GameOver - ....). Obviously with an open-world RPG things are very different, but any advice in this scenario is appreciated. If I'm just going OOP-crazy please say so. Similarly, I'm conscious there's a huge amount on this site re: state management. But as my first 'serious' game after a couple of false starts I'd like to get this right, and would rather be harassed and modded down than never ask :)
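    To make option 2 concrete, here is a small sketch of a 'State' with its own update loop and a 'StateManager' that ticks the active states and drops finished ones. The game itself is XNA/C#, but the idea is language-agnostic; the sketch below is written in Java purely for illustration, and all the names in it (GameState, StateManager, isFinished) are hypothetical rather than taken from the game state management sample:

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        // One self-contained chunk of in-screen behaviour, e.g. "P1Turn".
        interface GameState {
            void update(double elapsedSeconds);
            boolean isFinished(); // the manager removes the state once this is true
        }

        // Owned by a Screen; its update is called from the Screen's Update loop.
        class StateManager {
            private final List<GameState> active = new ArrayList<GameState>();

            void push(GameState state) {
                active.add(state);
            }

            void update(double elapsedSeconds) {
                // Tick every active state; 1+ states may be active in parallel.
                for (Iterator<GameState> it = active.iterator(); it.hasNext(); ) {
                    GameState state = it.next();
                    state.update(elapsedSeconds);
                    if (state.isFinished()) {
                        it.remove(); // e.g. P1Turn ends; the next state gets pushed
                    }
                }
            }
        }

    With something like this, the 'PlayScreen' Update simply delegates to its StateManager, and the per-phase logic (IntroAnimation, P1Turn, GameOver, ...) lives in the states themselves, which keeps the screen class from becoming the behemoth feared in option 1.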

    Read the article

  • Exit Infragistics, Enter Telerik

    - by Anthony Trudeau
    Today I made the purchase of the Premium Collection of components from Telerik. This follows an evaluation I've been doing to replace the Infragistics components we currently use for Windows Forms, ASP.NET MVC, and WPF. It was not a formal evaluation; I had already decided to move the company away from Infragistics. That decision was mostly born out of frustration with support over using the Infragistics components in my first production MVC application.

    One such issue was a simple scenario where you have a model with a scalar property that can be one value out of a list. The built-in combobox does this, but I was told by Infragistics support that they didn't support it – and it took them several emails and days of waiting between responses to determine that. I implemented this in Telerik in a minute, not including the several minutes it took me to get a rudimentary understanding of the component and its API.

    Here's the code using the built-in combobox:

        @Html.DropDownListFor(x => x.VendorId, new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId), "Select Id")

    Here's the code using the Telerik combobox:

        @(Html.Telerik().ComboBoxFor(model => model.VendorId)
            .AutoFill(true)
            .BindTo(new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId))
        )

    I chose Telerik over other competitors based on the professional appearance of their website and how easy it was to find information. I'd like to say I had time to evaluate other Infragistics competitors, but due to time constraints I had to make an initial decision based on superficial, but still important, things. I picked Telerik with the plan to only look further at other companies if my evaluation didn't meet my expectations. Luckily it did, because I didn't relish the thought of carving out more time to evaluate another set of components.

    Overall my experience with Telerik has been superior to Infragistics in every way. The installation was easy using their control panel installer application. Getting up to speed has been easy. And the communication from Telerik has met my expectations. And we'll continue to be on good terms as long as I don't start getting email messages from a sales rep saying that they want to talk to me about training and consulting – I'm looking at you, Infragistics.

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Oracle Cloud Application Foundation 12c Helps Customers Deliver Next-Generation Applications on a Mission-Critical Cloud Platform: In a recent online event, Oracle and industry speakers introduced Oracle Cloud Application Foundation 12c, including Oracle WebLogic 12.1.2 and Oracle Coherence 12.1.2. Read more.

    Team Spotlight: Mike Lehmann, Vice President of Product Management: Meet the team behind Oracle Fusion Middleware. In this edition, we speak to Mike Lehmann, Oracle’s vice president of product management for Oracle Cloud Application Foundation, Oracle WebLogic Server, Oracle Coherence, Java Cloud Services, and Java Platform, Enterprise Edition. Read more.

    New and Free: Learn Oracle Application Development Framework Mobile Online at Your Convenience: Are you ready to go mobile? Check out this new tutorial from Oracle’s ADF Academy, Developing Applications with Oracle Application Development Framework Mobile.

    New: Oracle JDeveloper 12c and Oracle Application Development Framework 12c: Announcing Oracle JDeveloper 12c and Oracle Application Development Framework 12c. New capabilities include HTML5, better Maven support, Git support, new Oracle ADF Faces components, improved REST support, Enterprise JavaBeans/Java Persistence API, and the latest support for Oracle WebLogic Server 12.1.2. Get more details and download.

    New: Oracle Enterprise Pack for Eclipse 12c: The best Eclipse-based tools for Oracle WebLogic and Oracle Coherence continue to get better. Check out the latest Oracle WebLogic and Oracle Coherence support, improved Oracle Application Development Framework support, Maven, and more.

    Register: Oracle WebLogic Devcast Series: Join us for the upcoming Oracle WebLogic Devcast webcast.

    Also: Oracle GlassFish Server 3.1.2 and 2.1.1 updates, an overview of JSON-P, and a comprehensive free Java EE 6 video tutorial!

    Java ME Embedded 3.3 and Java ME Software Development Kit (SDK) 3.3 Now Available: Optimized for microcontrollers and other resource-constrained devices, this release reduces the "core plumbing" needed for an app, and includes more information about the memory and network usage that is critical for low-power apps.

    JDK 8 Early Access Releases now available.

    JDK 8 Early Access Developer Documentation: Get the latest documentation changes to the Java Developer Guides and the Java Tutorials. Blog.

    NetBeans IDE 7.4 Beta: This release extends HTML5 features to Java EE and PHP application development, introduces new support for hybrid HTML5 development on Android and iOS platforms, and previews support for JDK 8.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Webcast Q&A: Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the third webcast in our WebCenter in Action webcast series, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter", where customer Sean Mattson from HDS and Rob Vandenberg from Oracle Partner Lingotek shared how Oracle WebCenter is powering Hitachi Data Systems' externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

    Sean Mattson, Hitachi Data Systems

    Q: Did you run into any issues in the deployment of the platform?
    A: There were some challenges. We were one of the first enterprise 'on premise' installations for Lingotek, and our WebCenter platform also has a lot of custom features. There were a lot of iterations and back and forth working with Lingotek at first. We both helped each other, learned a lot, and in the end managed to resolve all issues and roll out a very compelling solution for HDS.

    Q: What has been the biggest benefit your end users have seen?
    A: Being able to manage and govern the content lifecycle globally and centrally, while at the same time enabling the field to update, review, and publish incremental content changes without a lot of touchpoints, has helped us streamline and simplify the entire publishing process.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: I wouldn't say resistance as much as skepticism that we could actually deploy an automated, self-publishing solution. Even if a solution is great, adoption of a new process can be a challenge, and we are still pursuing our adoption targets. One of the most important aspects is to include lots of training and support materials and offer as much helpdesk-type support as needed to get the field self-sufficient and confident in the capabilities of the system.

    Rob Vandenberg, Lingotek

    Q: Are there any limitations regarding supported languages, such as support for French Canadian and Indian languages?
    A: Lingotek supports all language pairs, including right-to-left languages and double-byte languages such as Chinese, Japanese, and Korean.

    Q: Is the Lingotek solution integrated with the new 11g release of WebCenter Sites?
    A: Yes! In fact, Lingotek is the first OVI partner for Oracle WebCenter Sites.

    Q: Can translation memories help to improve the accuracy of machine translation?
    A: One of the greatest long-term strategic benefits of using Lingotek is the accumulation of translation memories, or past human translations. These TMs can be used to "train" statistical machine translation engines to achieve higher and higher quality. This virtuous cycle is ongoing and will consistently improve both machine and human translations.

    Q: We have existing translation memories from previous work with our translation service provider. Can they be easily imported into the Lingotek solution for re-use?
    A: Yes, Lingotek is standards-compliant. We support TM import in both the TMX and XLIFF formats.

    Q: If we use Lingotek as a service to do our professional translation and also use the Lingotek software solution, do we get the translation memories to give us a means of translating future additions and changes ourselves?
    A: Yes, all the data is yours, always. Lingotek can provide both the integrated translation software and the professional translation services. All the content and translation memories are yours.

    Q: Can you give us an example of where community translation has proved to be successful?
    A: The key word here is community. If you have a community that cares about you, your content, and the rest of the community, then community translation can work for you. We've seen effective use cases in product user group content, support communities, and other types of user-generated content, like wikis and blogs.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!

    Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand. I'm currently building a Music Notation and Tablature Editor (in JavaScript), and I've come to a point where the core parts of the program are more or less there. All functionality I plan to add at this point will really build off the foundation I've created, so I want to refactor to solidify my code. I'm using an API called VexFlow to render notation: basically, I pass the parts of the editor's state to VexFlow to build the graphical representation of the score. Here is a rough and stripped-down UML diagram showing the outline of my program. In essence, a Part has many Measures, each Measure has many Notes, and each Note has many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional. There are a few problems with my design, because my Measure class contains the majority of the entire application's view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and the order they are supposed to be created in, for both the VexFlowStaff and the VexFlowNotes. I'm not looking for a specific answer, as you'd need a much deeper understanding of my code – just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extraneous logic for their creation. But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything, so that ScoreView.render() would cascade down into each PartView and each PartView would cascade down into each MeasureView, etc.? (See the sketch below.) Again, I just have no idea what direction to go in. The more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.
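    One concrete way to picture that cascading-view direction: give each model class a parallel view class that owns its graphical (VexFlow) objects and delegates rendering downward, so Measure stops carrying the view logic. The sketch below is written in Scala purely for concision – the structure translates directly to JavaScript – and every type in it (the stand-in model classes, the stripped-down VexFlowFactory) is an invented placeholder for the question's richer classes, not a drop-in implementation.

        // Stand-in model types -- the question's real classes are richer.
        case class NoteItem(pitch: String)
        case class Note(items: Seq[NoteItem])
        case class Measure(notes: Seq[Note])
        case class Part(measures: Seq[Measure])

        // Stand-in for the question's VexFlowFactory: creation stays here,
        // while the views below only direct ordering and ownership.
        class VexFlowFactory {
          def createStaff(m: Measure): String = s"Staff(${m.notes.size} notes)"
          def createNote(staff: String, n: Note): Unit =
            println(s"$staff <- ${n.items.map(_.pitch).mkString("+")}")
        }

        trait View { def render(): Unit }

        // Each view owns the graphical objects for exactly one model node.
        class MeasureView(measure: Measure, factory: VexFlowFactory) extends View {
          def render(): Unit = {
            val staff = factory.createStaff(measure) // graphical staff lives here now
            measure.notes.foreach(factory.createNote(staff, _))
          }
        }

        class PartView(part: Part, factory: VexFlowFactory) extends View {
          private val children = part.measures.map(new MeasureView(_, factory))
          def render(): Unit = children.foreach(_.render())
        }

        // ScoreView.render() cascades down the parallel view tree.
        class ScoreView(parts: Seq[Part], factory: VexFlowFactory) extends View {
          private val children = parts.map(new PartView(_, factory))
          def render(): Unit = children.foreach(_.render())
        }

        object Demo extends App {
          val score = Seq(Part(Seq(Measure(Seq(Note(Seq(NoteItem("C4"), NoteItem("E4"))))))))
          new ScoreView(score, new VexFlowFactory).render()
        }

    The design point is that the view tree mirrors the model tree one-to-one, so "where do the views live?" answers itself: each view is contained by the view of its parent model node, and only ScoreView is held by the application.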

    Read the article

  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I'd share my favorites, and would love to hear yours too!

    Interleaving on drum memory

    Back in ye olde days before I'd been born (we're talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

    The Hashlife algorithm

    Conway's Game of Life has attracted numerous implementations over the years, but Bill Gosper's Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it's used in a clever way, so it makes the list.

    Elite's procedural generation

    Ok, so this isn't exactly a memory optimization – more a storage optimization – but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K of memory was something of a limitation (for comparison, that's about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of name fragments. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did all that in assembly language. Other games of the time used similar techniques too – The Sentinel's landscape generation algorithm being another example.

    Modern Garbage Collectors

    Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you), but they're much, much rarer.

    A cautionary note

    In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there's actually a problem is doing it wrong. So what have I missed? 
Tell me about the ingenious (or stupid) tricks you’ve seen people use. Ben
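    To make the Elite trick concrete, here's a toy sketch of table-driven procedural name generation, written in Scala. The syllable table and the mixing function below are invented for illustration – Elite's real generator was a custom seed-twisting routine in 6502 assembly – but the memory story is the same: thousands of names cost one small lookup table plus an integer seed, because nothing is ever stored.

        object PlanetNames {
          // Tiny syllable table; the entire "universe" of names derives from it.
          private val syllables = Vector("la", "ve", "ti", "en", "qu", "or", "ar", "za")

          // Cheap deterministic mixer (a linear congruential step). Any
          // reproducible scramble of the seed works for this trick.
          private def next(seed: Long): Long =
            seed * 6364136223846793005L + 1442695040888963407L

          // The same planet index always yields the same name -- recomputed
          // on demand, never stored.
          def name(planetIndex: Int): String = {
            var s = next(planetIndex.toLong + 1)
            val parts = (1 to 3).map { _ =>
              s = next(s)
              syllables(((s >>> 33) % syllables.size).toInt)
            }
            parts.mkString.capitalize
          }

          def main(args: Array[String]): Unit =
            (0 until 5).foreach(i => println(s"planet $i: ${name(i)}"))
        }

    Run it twice and you get the same five names both times: the world is an index into a deterministic function, which is exactly what made 2048 planets fit alongside a whole game in 22K.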

    Read the article

  • Incomplete upgrade 12.04 to 12.10

    - by David
    Everything was running smoothly. Everything had been downloaded from the Internet, packages had been installed, and a prompt asked for some obsolete programs/files to be removed or kept. After that the computer crashed and I had to manually force a shutdown. I turned it on again and, surprise, I was on 12.10! Still, the upgrade was not finished! How can I properly finish that upgrade? Here's the output I got in the command line after following the posted instructions:

    i astrill - Astrill VPN client software i dayjournal - Simple, minimal, digital journal. i gambas2-gb-form - A gambas native form component i gambas2-gb-gtk - The Gambas gtk component i gambas2-gb-gtk-ext - The Gambas extended gtk GUI component i gambas2-gb-gui - The graphical toolkit selector component i gambas2-gb-qt - The Gambas Qt GUI component i gambas2-gb-settings - Gambas utilities class i A gambas2-runtime - The Gambas runtime i google-chrome-stable - The web browser from Google i google-talkplugin - Google Talk Plugin i indicator-keylock - Indicator for Lock Keys i indicator-ubuntuone - Indicator for Ubuntu One synchronization s i A language-pack-kde-zh-hans - KDE translation updates for language Simpl i language-pack-kde-zh-hans-base - KDE translations for language Simplified C i libapt-inst1.4 - deb package format runtime library idA libattica0.3 - a Qt library that implements the Open Coll idA libbabl-0.0-0 - Dynamic, any to any, pixel format conversi idA libboost-filesystem1.46.1 - filesystem operations (portable paths, ite idA libboost-program-options1.46.1 - program options library for C++ idA libboost-python1.46.1 - Boost.Python Library idA libboost-regex1.46.1 - regular expression library for C++ i libboost-serialization1.46.1 - serialization library for C++ idA libboost-signals1.46.1 - managed signals and slots library for C++ idA libboost-system1.46.1 - Operating system (e.g.
diagnostics support idA libboost-thread1.46.1 - portable C++ multi-threading i libcamel-1.2-29 - Evolution MIME message handling library i libcmis-0.2-0 - CMIS protocol client library i libcupsdriver1 - Common UNIX Printing System(tm) - Driver l i libdconf0 - simple configuration storage system - runt i libdvdcss2 - Simple foundation for reading DVDs - runti i libebackend-1.2-1 - Utility library for evolution data servers i libecal-1.2-10 - Client library for evolution calendars i libedata-cal-1.2-13 - Backend library for evolution calendars i libedataserver-1.2-15 - Utility library for evolution data servers i libexiv2-11 - EXIF/IPTC metadata manipulation library i libgdu-gtk0 - GTK+ standard dialog library for libgdu i libgdu0 - GObject based Disk Utility Library idA libgegl-0.0-0 - Generic Graphics Library idA libglew1.5 - The OpenGL Extension Wrangler - runtime en i libglew1.6 - OpenGL Extension Wrangler - runtime enviro i libglewmx1.6 - OpenGL Extension Wrangler - runtime enviro i libgnome-bluetooth8 - GNOME Bluetooth tools - support library i libgnomekbd7 - GNOME library to manage keyboard configura idA libgsoap1 - Runtime libraries for gSOAP i libgweather-3-0 - GWeather shared library i libimobiledevice2 - Library for communicating with the iPhone i libkdcraw20 - RAW picture decoding library i libkexiv2-10 - Qt like interface for the libexiv2 library i libkipi8 - library for apps that want to use kipi-plu i libkpathsea5 - TeX Live: path search library for TeX (run i libmagickcore4 - low-level image manipulation library i libmagickwand4 - image manipulation library i libmarblewidget13 - Marble globe widget library idA libmusicbrainz4-3 - Library to access the MusicBrainz.org data i libnepomukdatamanagement4 - Basic Nepomuk data manipulation interface i libnux-2.0-0 - Visual rendering toolkit for real-time app i libnux-2.0-common - Visual rendering toolkit for real-time app i libpoppler19 - PDF rendering library i libqt3-mt - Qt GUI Library (Threaded runtime version), i librhythmbox-core5 - support library for the rhythmbox music pl i libusbmuxd1 - USB multiplexor daemon for iPhone and iPod i libutouch-evemu1 - KernelInput Event Device Emulation Library i libutouch-frame1 - Touch Frame Library i libutouch-geis1 - Gesture engine interface support i libutouch-grail1 - Gesture Recognition And Instantiation Libr idA libx264-120 - x264 video coding library i libyajl1 - Yet Another JSON Library i linux-headers-3.2.0-29 - Header files related to Linux kernel versi i linux-headers-3.2.0-29-generic - Linux kernel headers for version 3.2.0 on i linux-image-3.2.0-29-generic - Linux kernel image for version 3.2.0 on 64 i mplayerthumbs - video thumbnail generator using mplayer i myunity - Unity configurator i A openoffice.org-calc - office productivity suite -- spreadsheet i A openoffice.org-writer - office productivity suite -- word processo i python-brlapi - Python bindings for BrlAPI i python-louis - Python bindings for liblouis i rts-bpp-dkms - rts-bpp driver in DKMS format. i system76-driver - Universal driver for System76 computers. 
i systemconfigurator - Unified Configuration API for Linux Instal i systemimager-client - Utilities for creating an image and upgrad i systemimager-common - Utilities and libraries common to both the i systemimager-initrd-template-am - SystemImager initrd template for amd64 cli i touchpad-indicator - An indicator for the touchpad i ubuntu-tweak - Ubuntu Tweak i A unity-lens-utilities - Unity Utilities lens i A unity-scope-calculator - Calculator engine i unity-scope-cities - Cities engine i unity-scope-rottentomatoes - Unity Scope Rottentomatoes

    Read the article

  • How to recover after embarrassing yourself and your company?

    - by gaearon
    I work in an outsourcing company in Russia, and one of our clients is a financial company located in the USA. For the last six months I have been working on several projects for this particular company, and as I was being assigned a larger project, I was invited to work onsite in the USA in order to understand and learn the new system. Things didn't work out as well as I had hoped, because the environment was messy after the original developers, and I had to spend quite some time understanding its quirks. However, we managed to do the release several days ago, and it looks like everything's going pretty smoothly. From a technical perspective, my client seems to be happy with me. My solutions seem to work, and I always try to add some spark of creativity to what I do. However, I'm very disorganized in a certain sense, as I believe many of you fellas are.

    Let me note that my current job is my first job ever, and I was lucky enough to get a job with a flexible schedule, meaning I can come in and out of the office whenever I want as long as I have 40 hours a week filled. Sometimes I want to hang out with friends in the evening, and on the days after that I like to have a good sleep in the morning – this is why a flexible schedule (or the lack of one) is an ideal fit for me. [I just realized this paragraph looks too serious, I should've decorated it with some UNICORNS!]

    Of course, after coming to the USA, things changed. This is not some software company with special treatment for the nerdy ones. Here you have to get up at 7:30 AM to get to the office by 9 AM and then sit through till 5 PM. Personally, I hate waking up in the morning, not to mention that my productivity doesn't begin to climb until 5 o'clock, i.e. I'm very slow until it's time to go, which is ironic. Sometimes I even stay for more than 8 hours just to finish my current stuff without interruptions. Anyway, I could deal with that. After all, they are paying for my trip – who am I to complain? They need me there during their working hours to be able to discuss stuff. It makes perfect sense that a fixed schedule doesn't make any sense for me. But it does make sense that it makes sense for my client. And I am here for the client, therefore sense is transferred. Awww, you got it.

    I was asked several times to come in exactly at 9 AM, but out of laziness and arrogance I didn't take these requests seriously enough. This paid off in the end – on my last day I woke up 10 minutes before the final status meeting with the business owner, having overslept the previous day as well. Of course this made several people mad, including my client, as I had ignored his direct request to come in on time two days in a row, including my final day. Of course I didn't do it deliberately, but certainly I could've ensured that I had at least two alarms to wake me up, et cetera... I didn't do that. He also emailed my boss, calling my behavior ridiculous and embarrassing for my company and saying "he's not happy with my professionalism at all". My boss told me that "the system must work both in and out" and suggested I stay till late that night working in berserker mode, fixing as many issues as possible and sending a status email to my client. So I did, but I haven't received a response yet.

    These are my questions to the great programmer community: Did you have situations where your ignorance and personal non-technical faults created problems for your company? Were you able to make up for your fault and stay in a good relationship with your client or boss? How? How would you act if you were in my situation?

    Read the article

  • TechCast Live: "Java and Oracle, One Year Later" Replay Now Available

    - by Justin Kestelyn
    Earlier this week I had the opportunity to chat with Ajay Patel, Oracle's VP leading the Java Evangelist team, about "the state of the union" with respect to Oracle and Java. Take a look. Here are some choice quotes, some paraphrased, as helpfully transcribed by Java evangelist Terrence Barr:

    "One key thing we have learned ... Java is not just a platform, it is also an ecosystem, and you can't have an ecosystem without a community."

    "The objectives, strategically [for Java at Oracle] have been pretty clear: How do we drive adoption, how do we build a larger, stronger developer community, how do we really make the platform much more competitive."

    "It's about transparency, involvement. IBM, RedHat, Apple have all agreed to working with us to make OpenJDK the best platform for open source development ... it is a sign that the community has been waiting to move the Java platform forward."

    "It's not just about Oracle anymore, it's about Java, the technology, the community, the developer base, and how we work with them to move the innovation forward."

    "Java is strategic to Oracle, and the community is strategic for Java to be successful ... it is critical to our business."

    On JavaFX 2.0: "... is coming to beta soon, with a release planned in the second half [of 2011] ... will give you a new, high-performance graphics engine, the new API for JavaFX ... you will see a very strong, relevant platform for leveraging rich media platforms."

    On the JDK and SE: "... aggressively moving forward, JDK 7 is now code complete ... looking good for getting JDK 7 out by summer as we promised. Started work on JDK 8, Jigsaw and Lambda are moving along nicely, on track for JDK 8 release next year ... good progress."

    On Java EE and GlassFish: "... Very excited to have GlassFish 3.1 released, with clustering and management capabilities ... working with the JCP to shortly submit a number of JSRs for Java EE 7 ... You'll see Java EE 7 becoming the platform for cloud-based development."

    "You will see Oracle continue to step up to this role of Java steward, making sure that the language, the technology, the platform ... is competitive, relevant, and widely adopted."

    Making progress!

    Read the article

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java and titled "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much-lauded new book, Scala for the Impatient. None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe website, something Horstmann graciously assented to. Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers simply to stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, "you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable." Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples, not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++. When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice." Learn more about Scala by going to the interview here.
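    To make those two claims concrete – less class boilerplate and collection constructs in place of loops – here is a small Scala sketch. The Account example is mine, not from the interview or the book, but it shows exactly the kind of code Horstmann is describing.

        // One line of Scala: fields, constructor, accessors, equals/hashCode
        // and toString that a comparable Java class would spell out by hand.
        case class Account(id: Int, balance: Double)

        object ScalaTaste {
          def main(args: Array[String]): Unit = {
            val accounts = List(Account(1, 250.0), Account(2, -40.0), Account(3, 1200.0))

            // Instead of a for-loop with accumulators: filter and sum directly.
            val overdrawn = accounts.filter(_.balance < 0)
            val totalHeld = accounts.filter(_.balance >= 0).map(_.balance).sum

            println(s"overdrawn: ${overdrawn.map(_.id)}; total held: $totalHeld")
          }
        }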

    Read the article
