Search Results

Search found 6619 results on 265 pages for 'digital logic'.


  • iTorque for a simple arcade game

    - by Herfus
    I have a basic understanding of programming, but I am no programmer. I've had a couple of semesters of Java programming, so we're talking pretty basic here. I have some scripting experience with game editors where I've created a few (simple) encounters, boss AI, abilities, events and so on. I've mostly done level design with UDK, Source and several other toolsets for a few years now, but I'd like to divert some of the focus to iPhone development. I've participated in a few development projects (source, udk, daot) where I've had a variety of roles (yet never beyond simple scripting).

    I have just finished prototyping an iPhone game (using Game Maker) and begun a bit more precise planning on what I'll have to do for the real version. The game is fairly simple; perhaps the best comparison in scope and complexity would be Doodle Jump for iPhone. The reason I created the prototype was not just to answer a few questions about the gameplay, but to get some insight into what kind of problems I might face when trying to develop the real thing.

    I've been looking for engines that I can use for this. iTorque looks, so far, like the best option, with a scripting language and a WYSIWYG editor. However, the price is fairly steep and I'd like to prepare myself as much as possible before jumping into this, which is why I'm going to ask a few questions here.

    What kind of difficulties do you think I might run into, considering what you've read so far? Not just with Torque, but with development in general. I'm asking this mostly to have someone reality-check me. I usually manage to do what I'm trying to do with scripting, but something tells me there's a very big difference between scripting an AI or an event and creating game logic. Will it be too much of a leap? Just how simple is it to use the Torque scripting language? Obviously I don't expect to be prepared; I expect it to be a learning process. However, I'd still like to be at least a bit confident about the time I'll have to dedicate to this first.


  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes: The patch number is 9791839. This patchset includes 28 new bug fixes since the last patchset release on March 31. This is a cumulative update that includes all the fixes and enhancements from previous updates, so the patch will supersede the other two updates. Install instructions are in the readme inside the patch.

    There is also a new BIP client patch available, 9821068. No new template building features to my knowledge, but there is an update to the template viewer to allow you to test and debug your shiny new Excel templates.

    Server
    8529759 XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
    8566455 BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
    9295667 RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
    9542413 UNABLE TO CREATE A NEW TEMPLATE FROM UI
    9546137 EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
    9556338 SIEBEL - BIP PARAMETERS SORT ORDER
    9560562 BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
    9646599 USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
    9664768 ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
    9665075 BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
    9669973 ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
    9704401 ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
    9711899 SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
    9753736 SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
    9771354 MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
    9772982 "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY

    Core
    8599646 ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
    9377593 SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
    9487030 NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER
    9509432 PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
    9534424 PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
    9553360 FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
    9554959 TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
    9569417 AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
    9571670 ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENSIONS
    9589809 XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
    9605920 BOOKMARK TESTCASE FAILED DUE TO ER9283933
    9689634 PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES

    You might have noticed some fixes and enhancements to the Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP Mashup coming ... just need another 4 hours in the day to squeeze it in.


  • Regular Expression Transformation

    The regular expression transformation exposes the power of regular expression matching within the pipeline. One or more columns can be selected, and for each column an individual expression can be applied. The way multiple columns are handled can be set on the options page. The AND option means all columns must match, whilst the OR option means only one column has to match. If rows pass their tests they are passed down the successful match output; rows that fail are directed down the alternate output. This transformation is ideal for validating data through the use of regular expressions.

    You can enter any expression you like, or select a pre-configured expression within the editor. You can expand the list of pre-configured expressions yourself. These are stored in an XML file, %ProgramFiles%\Microsoft SQL Server\nnn\DTS\PipelineComponents\RegExTransform.xml, where nnn represents the folder version: 90 for 2005, 100 for 2008 and 110 for 2012. If you want to use regular expressions to manipulate data, rather than just validate it, try the RegexClean Transformation.

    The component is provided as an MSI file; however, for 2005/2008 you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry How do I install a task or transform component? - just select Regular Expression Transformation in the Choose Toolbox Items window.

    Downloads
    The Regular Expression Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    Regular Expression Transformation for SQL Server 2005
    Regular Expression Transformation for SQL Server 2008
    Regular Expression Transformation for SQL Server 2012

    Version History
    SQL Server 2012
    Version 2.0.0.87 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    Version 2.0.0.87 - Release for SQL Server 2008 Integration Services. (10 Oct 2008)
    SQL Server 2005
    Version 1.1.0.93 - Added option for you to choose AND or OR logic when multiple columns have been selected. Previously behaviour was OR only. (31 Jul 2008)
    Version 1.0.0.76 - Installer update and improved exception handling. (28 Jan 2008)
    Version 1.0.0.41 - Update for user interface stability fixes. (2 Aug 2006)
    Version 1.0.0.24 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    Version 1.0.0.9 - Public Release for SQL Server 2005 IDW 15 June CTP (29 Aug 2005)
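    The AND/OR column semantics are easy to picture in code. Below is a minimal C# sketch of the matching logic only, not the component's actual source; the column names, patterns, and row representation are illustrative assumptions:

        // Hedged sketch of per-column regex matching with AND/OR combination.
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        class RegexMatchSketch
        {
            // Each selected column gets its own expression (hypothetical examples).
            static readonly Dictionary<string, Regex> ColumnPatterns = new Dictionary<string, Regex>
            {
                { "Email",    new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$") },
                { "PostCode", new Regex(@"^\d{5}$") }
            };

            // AND: every selected column must match; OR: one match is enough.
            static bool Matches(IDictionary<string, string> row, bool useAndLogic)
            {
                var results = ColumnPatterns.Select(p => p.Value.IsMatch(row[p.Key]));
                return useAndLogic ? results.All(m => m) : results.Any(m => m);
            }

            static void Main()
            {
                var row = new Dictionary<string, string>
                {
                    { "Email", "someone@example.com" },
                    { "PostCode", "12345" }
                };
                // True under AND logic: both columns pass their expressions,
                // so the row would go down the successful match output.
                System.Console.WriteLine(Matches(row, useAndLogic: true));
            }
        }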


  • The Power to Control Power

    - by speakjava
    I'm currently working on a number of projects using embedded Java on the Raspberry Pi and Beagle Board.  These are nice and small, so don't take up much room on my desk as you can see in this picture. As you can also see I have power and network connections emerging from under my desk.  One of the (admittedly very minor) drawbacks of these systems is that they have no on/off switch.  Instead you insert or remove the power connector (USB for the RasPi, a barrel connector for the Beagle).  For the Beagle Board this can potentially be an issue; with the micro-SD card located right next to the connector it has been known for people to eject the card when trying to power off the board, which can be quite serious for the hardware. The alternative is obviously to leave the boards plugged in and then disconnect the power from the outlet.  Simple enough, but a picture of underneath my desk shows that this is not the ideal situation either.

    This made me think that it would be great if I could have some way of controlling a mains voltage outlet using a remote switch or, even better, from software via a USB connector.  A search revealed not much that fit my requirements, and anything that was close seemed very expensive.  Obviously the only way to solve this was to build my own.

    Here's my solution.  I decided my system would support both control mechanisms (remote physical switch and USB computer control) and be modular in its design for optimum flexibility.  I did a bit of searching and found a company in Hong Kong that was offering solid state relays for 99p plus shipping (£2.99, which still made the total price very reasonable).  These would handle up to 380V AC on the output side, so more than capable of coping with the UK 240V supply.  The other great thing was that being solid state, the input would work with a range of 3-32V and required a very low current of 7.5mA at 12V.  For the USB control an Arduino board seemed the obvious low-cost and simple choice.  Given the current requirements of the relay, the Arduino would not require the additional power supply and could be powered just from the USB.

    Having secured the relays I popped down to Homebase for a couple of 13A sockets, RS for a box and an Arduino, and Maplin for a toggle switch.  The circuit is pretty straightforward, as shown in the diagram (only one output is shown to make it as simple as possible).  Originally I used a 2 pole toggle switch to select the remote switch or USB control by switching the negative connections of the low voltage side.  Unfortunately, the resistance between the digital pins of the Arduino board was not high enough, so when using one of the remote switches it would turn on both of the outlets.  I changed to a 4 pole switch and isolated both positive and negative connections.

    IMPORTANT NOTE: If you want to follow my design, please be aware that it requires working with mains voltages.  If you are at all concerned with your ability to do this please consult a qualified electrician to help you.

    It was a tight fit, especially getting the Arduino in, but in the end it all worked.  The completed box is shown in the photos. The remote switch was pretty simple, just requiring the squeezing of two rocker switches and a 9V battery into the small RS supplied box.  I repurposed a standard stereo cable with phono plugs to connect the switch box to the mains outlets.  I chopped off one set of plugs and wired it to the rocker switches.
The photo shows the RasPi and the Beagle board now controllable from the switch box on the desk. I've tested the Arduino side of things and this works fine.  Next I need to write some software to provide an interface for control of the outlets.  I'm thinking a JavaFX GUI would be in keeping with the total overkill style of this project.
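    For the control software, the Arduino side can stay very small. The sketch below is speculative - the pin assignments and the one-character serial protocol are my assumptions, not the design described above:

        // Speculative Arduino sketch: toggle two solid-state relay inputs over USB serial.
        // Pins 2/3 and the '1'/'2' command protocol are illustrative assumptions.
        const int RELAY_PINS[] = {2, 3};
        bool relayState[] = {false, false};

        void setup() {
          Serial.begin(9600);
          for (int i = 0; i < 2; i++) {
            pinMode(RELAY_PINS[i], OUTPUT);
            digitalWrite(RELAY_PINS[i], LOW);   // both outlets off at power-up
          }
        }

        void loop() {
          if (Serial.available() > 0) {
            int c = Serial.read();
            if (c == '1' || c == '2') {
              int i = c - '1';
              relayState[i] = !relayState[i];   // toggle the requested outlet
              digitalWrite(RELAY_PINS[i], relayState[i] ? HIGH : LOW);
            }
          }
        }

    A desktop GUI (JavaFX or otherwise) would then only need to open the serial port and write '1' or '2'.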


  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse.  In an Oracle OpenWorld presentation "Best Practices for Developing Performant Applications" my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction up to 600% and response times up to 22% faster when using CRC.

    Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed.  This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy.

    PHP OCI8, as a "client" of the database, can use CRC.  The cache is per-process, so plan carefully before caching large data sets.  Tables that are candidates for caching are look-up tables where the network transfer cost dominates.

    CRC is really easy in 11.2 - I'll get to that in a moment.  It was also pretty easy in Oracle 11.1, but it needed some tiny application changes.  In PHP it was used like:

        $s = oci_parse($c, "select /*+ result_cache */ * from employees");
        oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3
        oci_fetch_all($s, $res);

    I blogged about this in the past.  The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT.  The no-commit flag rule didn't seem reasonable to me because most people wouldn't be specific about the commit state for a query.

    In Oracle 11.2, DBAs can now nominate tables for caching, either with CREATE TABLE or ALTER TABLE.  That means you don't need the query hint anymore.  As well, the no-commit flag requirement has been lifted.  Your code can now look like:

        $s = oci_parse($c, "select * from employees");
        oci_execute($s);
        oci_fetch_all($s, $res);

    Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables.  Voila.

    Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling. There is some fine print about what is and isn't cached; check the Oracle manuals for details.  If you're using 11.1 or non-DRCP "dedicated servers" then make sure you use oci_pconnect() persistent connections.  Also in PHP don't bind strings in the query, although binding as SQLT_INT is OK.
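    To make the 11.2 workflow concrete, the table nomination is a one-off step, normally a DBA task. A minimal sketch, assuming the standard HR demo schema and placeholder credentials:

        <?php
        // One-off step: nominate a look-up table for the result cache.
        // Connection details are placeholders; normally a DBA would run the ALTER TABLE.
        $c = oci_pconnect("hr", "welcome", "localhost/orcl");
        $s = oci_parse($c, "ALTER TABLE employees RESULT_CACHE (MODE FORCE)");
        oci_execute($s);
        ?>

    After that, the unhinted query shown above becomes a caching candidate with no application change.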


  • Comparing Isis, Google, and PayPal

    - by David Dorf
    Back in 2010 I was sure NFC would make great strides, but here we are two years later and NFC doesn't seem to be sticking. The obvious reason is the chicken-and-egg problem.  Retailers don't want to install the terminals until the phones support NFC, and vice-versa. So consumers continue to sit on the sidelines waiting for either side to blink and make the necessary investment.  In the meantime, EMV is looking for a way to sneak into the US with the help of the card brands.

    There are currently three major solutions battling in the marketplace.  All three know that replacing the mag-stripe alone is not sufficient to move consumers.  Long-term, it's the offers and loyalty programs combined with tendering that make NFC attractive. NFC solutions cross lots of barriers, so a strong partner system is required.  The solutions need to include the carriers, card brands, banks, handset manufacturers, POS terminals, and most of all lots of merchants.  Lots of coordination is necessary to make the solution seamless to the consumer.

    Google Wallet
    Google's problem has always been that only the Nexus phone has an NFC chip that supports their wallet.  There are a couple of additional phones out there now, but adoption is still slow.  They acquired Zavers a while back to incorporate digital coupons, but the bulk of their users continue to be non-NFC.  They have taken an open approach by not specifying particular payment brands.  Google is piloting in San Francisco and New York, supporting both MasterCard PayPass and stored value. I suppose the other card brands may eventually follow.  There's no cost for consumers or merchants -- Google will make money via targeted ads.

    Isis
    Not long after Google announced its wallet, AT&T, Verizon, and T-Mobile announced a joint venture called Isis.  They are in the unique position of owning the SIM in the phones they issue.  At first it seemed Isis was a vehicle for the carriers to compete with the existing card brands, but Isis later switched to a generic wallet that supports the major card brands.  Isis reportedly charges issuers a $5 fee per customer per year.  Isis will pilot this summer in Salt Lake City and Austin.

    PayPal
    PayPal, the clear winner in the online payment space beyond traditional credit cards, is trying to move into physical stores.  After negotiations with Google to provide a wallet broke off, PayPal decided to avoid NFC altogether, at least for now, and focus on payments without any physical card or phone.  By avoiding NFC, consumers don't need an NFC-enabled phone and merchants don't need a new reader.  Consumers must enter their phone number and PIN in the merchant's existing device, or they can enter their PIN in the PayPal inStore app running on their phone, then show the merchant a unique barcode which authorizes payment. PayPal is free for consumers and charges a fee for merchants.  It's not clear, at least to me, how PayPal handles fraudulent transactions and whether the consumer is protected.

    The wildcard is, of course, Apple.  Their mobile technologies set the standard, so incorporating NFC chips would certainly accelerate adoption of many payment solutions.  Their announcement today of the iOS Passbook is a step in the right direction, but stops short of handling payments. For those retailers that have invested in modern terminals, it seems the best strategy is to support all the emerging solutions and let the consumers choose the winner.


  • Using ASP.NET Membership Provider with an ACL

    - by geekrutherford
    Up until recently one of my applications has used the membership provider within ASP.NET exclusively. However, it has been proposed that while the currently defined roles are beneficial, security needs to be more granular to restrict both access to certain pages and functionality present within a given page.

    Unfortunately, the role-based security ASP.NET gives you out of the box falls down in this area. This is not due to a lack of foresight by Microsoft, but rather it was simply not designed for implementing both role-based security and any inherent ACL you may define within these roles. Mind you, some would say an ACL is independent of the role to which a user belongs and is assigned to the user directly.

    The application mentioned here has its own User object (which encapsulates the membership provider user object as a property) and SQL Server table to store extended information not present in the aspnet_users table. While I could have modified the aspnet membership schema to suit the application's needs, it seemed smarter to simply create a separate table with a foreign key back to the aspnet_users table.

    Since I have a separate object to store extended user information, I simply created an ACL object and expose it as a property of my user object.

    This is all well and good, but it does not help in regards to the SiteMapProvider and restricting access at the page level based on the user's ACL.

    The straightforward answer would be to develop some code within the databound event for the menu that checks the page title and has hardcoded logic dictating that a user must have certain permissions turned on. The problem with this approach is that it's HARDCODED!!! If you need to change access to a page you'd need to do a build and go through your normal deployment process....ugh!!!

    An alternative method, albeit not perfect, is to utilize the resourceKey property on the SiteMapNodes in the SiteMap file with the name of the required permission to view the page. Within the databound event for your menu you iterate the SiteMapNodes in the menu's SiteMapProvider looking for a match at the page level based on title. When a match is detected, you have a switch/case on the SiteMapNode's resourceKey (the name of the ACL permission required). The case for the resourceKey ensures the user's ACL permission is turned on and voila!!!

    This is notably not perfect in that it is using the resourceKey in a manner other than intended. Since the application is not localized, using it in the manner described is not an issue.

    Below is a sample SiteMap file with the resourceKey used as the ACL permission identifier, followed by the ItemDataBound event. This application uses the Telerik Menu control.
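    A sketch of what those two pieces might look like; the node titles, permission names, and handler details below are illustrative assumptions rather than the original samples:

        <?xml version="1.0" encoding="utf-8" ?>
        <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
          <siteMapNode url="~/Default.aspx" title="Home">
            <!-- resourceKey repurposed as the name of the required ACL permission -->
            <siteMapNode url="~/Reports.aspx" title="Reports" resourceKey="CanViewReports" />
            <siteMapNode url="~/Admin.aspx" title="Administration" resourceKey="CanAdminister" />
          </siteMapNode>
        </siteMap>

    And a databound handler in the spirit described (CurrentUser and its ACL flags stand in for the application's own types):

        protected void RadMenu1_ItemDataBound(object sender, Telerik.Web.UI.RadMenuEventArgs e)
        {
            var node = e.Item.DataItem as SiteMapNode;
            if (node == null || string.IsNullOrEmpty(node.ResourceKey))
                return;

            // Hide the menu item unless the user's ACL grants the named permission.
            switch (node.ResourceKey)
            {
                case "CanViewReports":
                    e.Item.Visible = CurrentUser.Acl.CanViewReports;
                    break;
                case "CanAdminister":
                    e.Item.Visible = CurrentUser.Acl.CanAdminister;
                    break;
            }
        }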


  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies, including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.

    Orchestration or Integration?
    A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it.

    Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation. A simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN
    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line.

    Design for performance: Read the complete article here.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interacted with the internet. Nearly impossible to remember numeric addresses became easy to remember names.

    Without DNS you'd be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That's assuming, of course, that a numbers-based web ever gained enough traction to spawn a search giant like Google. How did this shift occur, and what did we have before DNS? From Wikipedia:

        The practice of using a name as a simpler, more memorable abstraction of a host's numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to "localhost". Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods.

        The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols.

    Over the years it has been refined, but the core of the system is essentially the same. When you type "google.com" into your web browser, a DNS server is used to resolve that host name to the IP address of 209.85.148.105, making the web human-friendly in the process.

    Domain Name System History [Wikipedia via Wired]
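    The pre-DNS model is easy to picture: a flat file of address-to-name lines, the same shape as the hosts file modern systems still consult first. A sketch (the Google line just echoes the address above):

        # /etc/hosts -- static name-to-address mappings, checked before DNS
        127.0.0.1         localhost
        209.85.148.105    google.com    # pre-DNS style static mapping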


  • Android Swipe In Unity 3D World with AR

    - by Christian
    I am working on an AR application using Unity3D and the Vuforia SDK for Android. The way the application works is that when the designated image (a frame marker in our case) is recognized by the camera, a 3D island is rendered at that spot. Currently I am able to detect when/which objects are touched on the model by raycasting. I also am able to successfully detect a swipe using this code:

        if (Input.touchCount > 0) {
            Touch touch = Input.touches[0];
            switch (touch.phase) {
                case TouchPhase.Began:
                    couldBeSwipe = true;
                    startPos = touch.position;
                    startTime = Time.time;
                    break;
                case TouchPhase.Moved:
                    if (Mathf.Abs(touch.position.y - startPos.y) > comfortZoneY) {
                        couldBeSwipe = false;
                    }
                    // track points here for raycast if it is swipe
                    break;
                case TouchPhase.Stationary:
                    couldBeSwipe = false;
                    break;
                case TouchPhase.Ended:
                    float swipeTime = Time.time - startTime;
                    float swipeDist = (touch.position - startPos).magnitude;
                    if (couldBeSwipe && (swipeTime < maxSwipeTime) && (swipeDist > minSwipeDist)) {
                        // It's a swiiiiiiiiiiiipe!
                        float swipeDirection = Mathf.Sign(touch.position.y - startPos.y);
                        // Do something here in reaction to the swipe.
                        swipeCounter.IncrementCounter();
                    }
                    break;
            }
            touchInfo.SetTouchInfo(Time.time - startTime, (touch.position - startPos).magnitude, Mathf.Abs(touch.position.y - startPos.y));
        }

    Thanks to andeeeee for the logic. But I want to have some interaction in the 3D world based on the swipe on the screen, i.e. if the user swipes over unoccluded enemies, they die. My first thought was to track all the points in the Moved TouchPhase, and then if it is a swipe, raycast into all those points and kill any enemy that is hit. Is there a better way to do this? What is the best approach? Thanks for the help!
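    For what it's worth, the "track points in Moved, raycast them on Ended" idea can be sketched like this. Enemy, Die(), and the sampling details are my assumptions; the sketch is meant to live alongside the swipe code above:

        using System.Collections.Generic;
        using UnityEngine;

        // Hedged sketch: record the swipe path, then sweep it with raycasts.
        public class SwipePathKiller : MonoBehaviour
        {
            private readonly List<Vector2> swipePoints = new List<Vector2>();

            // Call from TouchPhase.Moved while couldBeSwipe is still true.
            public void RecordPoint(Vector2 screenPoint)
            {
                swipePoints.Add(screenPoint);
            }

            // Call from TouchPhase.Ended once the gesture qualifies as a swipe.
            public void KillEnemiesAlongPath()
            {
                foreach (Vector2 screenPoint in swipePoints)
                {
                    Ray ray = Camera.main.ScreenPointToRay(screenPoint);
                    RaycastHit hit;
                    // The nearest collider wins, so occluded enemies stay protected
                    // by whatever sits in front of them.
                    if (Physics.Raycast(ray, out hit))
                    {
                        Enemy enemy = hit.collider.GetComponent<Enemy>();
                        if (enemy != null)
                        {
                            enemy.Die();  // hypothetical method on your enemy behaviour
                        }
                    }
                }
                swipePoints.Clear();
            }
        }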


  • Mixing Forms and Token Authentication in a single ASP.NET Application

    - by Your DisplayName here!
    I recently had the task to find out how to mix ASP.NET Forms Authentication with WIF's WS-Federation. The FormsAuth app did already exist, and a new sub-directory of this application should use ADFS for authentication. Minimum changes to the existing application code would be a plus ;) Since the application is using ASP.NET MVC this was quite easy to accomplish - WebForms would be a little harder, but still doable. I will discuss the MVC solution here.

    To solve this problem, I made the following changes to the standard MVC internet application template:
    - Added WIF's WSFederationAuthenticationModule and SessionAuthenticationModule to the modules section.
    - Added a WIF configuration section to configure the trust with ADFS.
    - Added a new authorization attribute. This attribute will go on controllers that demand ADFS (or STS in general) authentication.

    The attribute logic is quite simple - it checks for authenticated users, and additionally that the authentication type is set to Federation. If that's the case all is good; if not, the redirect to the STS will be triggered.

        public class RequireTokenAuthenticationAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (httpContext.User.Identity.IsAuthenticated &&
                    httpContext.User.Identity.AuthenticationType.Equals(
                        WIF.AuthenticationTypes.Federation, StringComparison.OrdinalIgnoreCase))
                {
                    return true;
                }
                return false;
            }

            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                // do the redirect to the STS
                var message = FederatedAuthentication.WSFederationAuthenticationModule.CreateSignInRequest(
                    "passive", filterContext.HttpContext.Request.RawUrl, false);
                filterContext.Result = new RedirectResult(message.RequestUrl);
            }
        }

    That's it ;) If you want to know why this works (and a possible gotcha) - read my next post.


  • Social Search: Looking for Love

    - by Mike Stiles
    For marketers and enterprise executives who have placed a higher priority on and allocated bigger budgets to search over social, it might be time to notice yet another shift that's well underway. Social is search.

    Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it's a good thing when people are able to find you. Google was the new Yellow Pages. Only with Google, you could get your listing first without naming yourself "AAAA Plumbing." There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities.

    But what if the consumer isn't using a search engine to find what they're looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise's numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search.

    You'll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are including Facebook and Google+ in their engines. Meanwhile, Facebook and Twitter have done some integration of global web search into their platforms.

    So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, "Anybody know any good catering companies?" will usually yield a link (and an endorsement) from a friend such as "Yeah, check out Just-Cheese-Balls Catering." There's no such human-driven force/influence behind the big search engines. Facebook's Mark Zuckerberg and others call it "Friend Mining." It is, in essence, searching for answers from friends' experiences as opposed to faceless code. And Facebook has all of those friends' experiences already stored as data.

    eMarketer says search is an $18 billion business, and investors are really into it. So it's no shock Facebook is ready to leverage its social graph into relevant search.

    What do you do about all this as a brand? For one thing, it's going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services... and you'll wind up with the visibility you deserve in social search results.


  • Subterranean IL: Exception handling 1

    - by Simon Cooper
    Today, I'll be starting a look at the Structured Exception Handling mechanism within the CLR. Exception handling is quite a complicated business and, as a result, the rules governing exception handling clauses in IL are quite strict; you need to be careful when writing exception clauses in IL.

    Exception handlers
    Exception handlers are specified using a .try clause within a method definition:

        .try <TryStartLabel> to <TryEndLabel> <HandlerType> handler <HandlerStartLabel> to <HandlerEndLabel>

    As an example, a basic try/catch block would be specified like so:

        TryBlockStart:
            // ...
            leave.s CatchBlockEnd
        TryBlockEnd:
        CatchBlockStart:
            // at the start of a catch block, the exception thrown is on the stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s CatchBlockEnd
        CatchBlockEnd:
        // method code continues...

        .try TryBlockStart to TryBlockEnd catch [mscorlib]System.Exception handler CatchBlockStart to CatchBlockEnd

    There are four different types of handler that can be specified:

    catch <TypeToken>: This is the standard exception catch clause; you specify the object type that you want to catch (for example, [mscorlib]System.ArgumentException). Any object can be thrown as an exception, although Microsoft recommends that only classes derived from System.Exception are thrown as exceptions.

    filter <FilterLabel>: A filter block allows you to provide custom logic to determine if a handler block should be run. This functionality is exposed in VB, but not in C#.

    finally: A finally block executes when the try block exits, regardless of whether an exception was thrown or not.

    fault: This is similar to a finally block, but a fault block executes only if an exception was thrown. This is not exposed in VB or C#.

    You can specify multiple catch or filter handling blocks in each .try, but fault and finally handlers must have their own .try clause. We'll look into why this is in later posts.

    Scoped exception handlers
    The .try syntax is quite tricky to use; it requires multiple labels, and you've got to be careful to keep the different exception handling sections separate. However, starting from .NET 2, IL allows you to use scope blocks to specify exception handlers instead. Using this syntax, the example above can be written like so:

        .try
        {
            // ...
            leave.s EndSEH
        }
        catch [mscorlib]System.Exception
        {
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s EndSEH
        }
        EndSEH:
        // method code continues...

    As you can see, this is much easier to write (and read!) than a stand-alone .try clause. Next time, I'll be looking at some of the restrictions imposed by SEH on control flow, and how the C# compiler generates exception handling clauses.


  • Why We Do What We Do. (Part 3 of 5 Part Series on JDE 5G Postponed)

    - by Kem Butller-Oracle
    By Lyle Ekdahl - Oracle JD Edwards Sr. VP & General Manager

    In the closing of part two of this 5 part blog series, I stated that in the next installment I would explore the expected results of the digital overdrive era and the impact it will have on our economy. While I have full intentions of writing on that topic, I am inspired today to write about something that is top of mind. It's top of mind because it has come up several times recently in conversations with my Oracle JD Edwards team members, with customers and with our partners, plus I feel passionately about why I do what I do...

    It is not what we do but why we do that thing that we do

    Do you know what you do? For the most part, I bet you could tell me what you do even if your work has changed over the years. My real question is, "Do you get excited about what you do, and are you fulfilled? Does your work deliver a sense of purpose, a cause to work for, and something to believe in?" Alright, I guess that was not a single question. So let me just ask, "Why?" Why are you here, right now? Why do you get up in the morning? Why do you go to work? Of course, I can't answer those questions for you, but I can share with you my POV.

    For starters, there are several things that drive me. As many of you know by now, I have a somewhat competitive nature, but it is not solely the thrill of winning that actually fuels me. Now don't get me wrong, I do like winning occasionally. However, winning is only a potential result of competing and is clearly not guaranteed. So why compete? Why compete in business, and particularly why in this enterprise software business? Here's why! I am fascinated by creative and building processes. It is about making or producing things, causing something to come into existence. With the right skill, imagination and determination, whether it's art or invention, the result can deliver value and inspire. In both avocation and vocation I always gravitate towards the create/build processes.

    I believe one of the skills necessary for the create/build process is not just the aptitude but also, and especially, the desire and attitude that drives one to gain a deeper understanding. The more I learn about our customers, the more I seek to understand what makes them successful and what difficult issues cause them to struggle. I like to look for the complex, non-commodity process problems where streamlined design and modern technology can provide an easy and simple solution. It is especially gratifying to see our customers use our software to increase their own ability to deliver value to the market. What an incredible network effect!

    I know many of you share this customer obsession as well as the create/build addiction focused on simple and elegant design. This is what I believe is at the root of our common culture. Are JD Edwards customers on the whole different than other ERP solutions' customers? I would argue that for the most part, yes, they are. They selected our software, and our software is different. Why? Because I believe that the create/build process will generally result in solutions that reflect who built it and their culture. And a culture of people focused on why they create/build will attract different customers than one that is based on what is built or how the solution is delivered. In the past I have referred to this idea as the character of the customer, and it transcends industry, size and run rate. Now some would say that JD Edwards has some customers who are characters. But that is for a different post.
As I have told you before, the JD Edwards culture is unique, and its resulting economy is valuable and deserving of our best efforts. 


  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified):

        There are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services.

    The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how do you qualify whether something is new knowledge or existing knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its:

        nearly immediate profit or immediate improvement.

    It's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to:

    - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones or ones which will be written in the future) which access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - Design a new communication protocol to allow faster replication of data between two data centers of the company?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
    - Enhance the existing application by adding gestures on tactile screens, after doing studies and testing that show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - Find a way to strongly enhance the Power Usage Effectiveness (PUE) of a data center?
    - Create a Domain-Specific Language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?


  • Configuring keyboard input to eliminate unused diacritics

    - by David Cesarino
    I'd like to change the way diacritics work under Xubuntu.

    My problem
    My native language is pt-BR and my notebook has an American keyboard. Thus, I use ' and " followed by keys like u and c to achieve things like ú, ç and ü. It all works well. However, in the case of apostrophes and quotes, that creates a problem when I use ' followed by:

    1. letters that won't accept the acute accent (´, ACUTE ACCENT -- 0x00B4) at all, like t; and
    2. letters that won't accept the acute in pt-BR, like r.

    In 1, ' with t does absolutely nothing. In 2, ' with r creates ŕ (LATIN SMALL LETTER R WITH ACUTE -- 0x0155), which is used, afaik, for some Eastern European languages like Slovak. It isn't used in Portuguese, and neither are other accented consonants (Portuguese does not take diacritics on consonants).

    Question
    That said, is there a way to better support the Brazilian Portuguese language using an American keyboard? It is very common here -- I actually prefer the American keyboard to our own, known as "ABNT".

    Desired solution
    I'd like to deactivate unused diacritics, so case 2 would behave just like case 1. Additionally, if possible, I'd like case 1 to behave like it does under Windows. As an example, typing ' followed by t should write 't (apostrophe followed by t) instead of doing nothing. About 2, in my humble opinion, doing nothing is counterproductive. I realize the behavior is reasonable according to logic ("there isn't a t-acute, so please tell the computer to typeset the apostrophe -- ', SPACE -- instead of the acute"). But from a human, practical point of view, I think it makes more sense the other way (to me, at least).

    Additional comments
    I believe this also applies to Spanish, French, Italian and other Western European Latin languages. On the console (Ctrl+Alt+F?), case 1 is not a problem: I don't need to press space, as apostrophes are automatically added. However, there I'm unable to access the cedilla (ç). Two completely different behaviors. If it's just a matter of customizing text config files (possibly creating a custom layout or whatever), I'd be glad to share my efforts; I just need guidance on the "howto" part. Somehow Google only points me to the "enough" people (those who cope with the situation and think that it works "well enough"). And since I have definitely migrated to Linux/Xubuntu after years, I'd like to have it just as I like it (and I'm sure others would as well).
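    One avenue worth exploring is a per-user compose override. This is a speculative sketch, not a tested recipe: X applications honour ~/.XCompose, though GTK programs may need GTK_IM_MODULE=xim set before they read it.

        # ~/.XCompose -- speculative per-user overrides for the dead acute key
        include "%L"                      # keep the system table (ú, ç, ü keep working)
        <dead_acute> <t> : "'t"           # case 1: emit apostrophe + t instead of nothing
        <dead_acute> <r> : "'r"           # case 2: emit apostrophe + r instead of ŕ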


  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well: several main models, corresponding views, main logic and data structures were modeled "as they should be" and fully separated from rendering; the algorithmic part was also implemented apart from the main models and had a minor number of intersection points. I would call that design scalable, customizable, easy to implement, interacting mostly based on "black box interaction" and, well, very nice.

    Now, what was done:

    - I started some implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts.
    - I had the document describing coding style and examples of that coding style's usage (my own written code).
    - I enforced the usage of more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers) and so on.
    - I documented the purpose of concrete interface implementations and how they should be used.
    - Unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions.

    I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team):

    - 3 different coding styles all over the project (I guess two of them agreed to use the same style :); the same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures.
    - Absolute lack of documentation for the recently implemented parts.
    - What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on.
    - Half of the used interfaces now contain member variables (sic!).
    - Raw pointer usage almost everywhere.
    - Unit tests disabled, because "(Rev. 57) They are unnecessary for this project".
    - ... (that's probably not everything).

    Commit history shows that my design was interpreted as overkill, and people started combining it with personal reinventions of the wheel and then had problems integrating code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume some memory leaks.

    Is there anything possible to do in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Did someone have a similar situation? Basically I thought that a good start for the project (well, I did everything that I could) would probably lead to something nice; however, I understand that I'm wrong. Any advice would be appreciated, sorry for my bad English.


  • The Latest Major Release of AutoVue is Now Available!

    - by Pam Petropoulos
    Click here to read the full press release. To learn more about AutoVue 20.2, check out the What's New in AutoVue 20.2 Datasheet.

    AutoVue 20.2 continues to set the standard for enterprise level visualization with Augmented Business Visualization, a new paradigm which reconciles information and business data from multiple sources into a single view, providing rich and actionable visual decision-making environments. The release also includes: capabilities that enhance end-to-end approval workflow; solutions to visually enable the mobile workforce; and support for the latest manufacturing and high tech formats.

    New capabilities in release 20.2 include:
    - Enhancements to the Augmented Business Visualization framework:
      - Creation of 2D hotspots has been extended in 2D drawings, PDF and image files and can now be defined as regional boxes, rather than just text strings
      - New 3D hotspot links in models and drawings. Parts or components of 3D models can be selected to create hotspot links.
    - Enhanced end-to-end approval workflows with digital stamping and batch stamping improvements
    - Solutions that visually enable the mobile workforce and extend enterprise visualization to mobile devices, including iPads, through OVDI (Oracle Virtual Desktop Infrastructure)
    - Enhancements to AutoVue enterprise readiness: reliability and performance improvements, as well as security enhancements which adhere to Oracle's Software Security Assurance standards
    - Timely support for new MCAD, ECAD, and Office formats
    - New 20.2 versions of AutoVue Document Print Services and Integration SDK (iSDK)
    - New Dutch language availability

    The press release also contains terrific supporting quotes from AutoVue customers and partners.

    "AutoVue's stamping enhancements will greatly benefit our building permit management processes," said Ties Kremer, Information Manager, Noordenveld Municipality, Netherlands. "The ability to batch stamp documents will speed up our approval processes, enable us to save time and money, and help us meet our regulatory compliance obligations."

    "AutoVue provides our non-technical teams in marketing and sales with access to customer order requirements and supporting CAD documents and drawings," said James Lim, Regional Technical Systems Manager at Molex Incorporated. "AutoVue 20.2 has enabled us to refine our quotation process, and reduce order errors."

    "We are excited about our use of AutoVue's Augmented Business Visualization framework, which will offer Meridian users enhanced access to related technical documentation," said Edwin van Dijk, Director of Product Management, BlueCielo. "By including AutoVue's new regional hotspot capabilities within BlueCielo Meridian Enterprise, the context of engineering information is carried over into the visual representation of complex assets, thereby helping us to improve productivity and operational excellence."


  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as the primary business communication tool. In case you missed it, here's a recap of the Q&A.

    Joe Lichtefeld, ResCare

    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed while trying to limit the impact on the contributing members.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors more quickly than in our old process of all requests being updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training when implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.

    Wayne Boerger & Doug Thompson, TEAM Informatics

    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors to many repositories, such as SharePoint, databases, file systems, LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has and, really, for each need within a customer environment.

    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about the filtering on the search results: you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.

    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WC Content central repository, and then the connector can/will manage it.

    Q: Using the Connector, are there any limitations around where in Sites that synced content can be used?
    A: There are no limitations on where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!


  • Who should control navigation in an MVVM application?

    - by SonOfPirate
    Example #1: I have a view displayed in my MVVM application (let's use Silverlight for the purposes of the discussion) and I click on a button that should take me to a new page.
    Example #2: That same view has another button that, when clicked, should open up a details view in a child window (dialog).

    We know that there will be Command objects exposed by our ViewModel bound to the buttons with methods that respond to the user's click. But, what then? How do we complete the action? Even if we use a so-called NavigationService, what are we telling it?

    To be more specific, in a traditional View-first model (like URL-based navigation schemes such as on the web or the SL built-in navigation framework) the Command objects would have to know what View to display next. That seems to cross the line when it comes to the separation of concerns promoted by the pattern. On the other hand, if the button wasn't wired to a Command object and behaved like a hyperlink, the navigation rules could be defined in the markup. But do we want the Views to control application flow, and isn't navigation just another type of business logic? (I can say yes in some cases and no in others.)

    To me, the utopian implementation of the MVVM pattern (and I've heard others profess this) would be to have the ViewModel wired in such a way that the application can run headless (i.e. no Views). This provides the most surface area for code-based testing and makes the Views a true skin on the application. And my ViewModel shouldn't care if it is displayed in the main window, a floating panel or a child window, should it? According to this approach, it is up to some other mechanism at runtime to 'bind' what View should be displayed for each ViewModel. But what if we want to share a View with multiple ViewModels or vice versa?

    So given the need to manage the View-ViewModel relationship so we know what to display when, along with the need to navigate between views, including displaying child windows / dialogs, how do we truly accomplish this in the MVVM pattern?
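    One common way out of this tension is ViewModel-first navigation: commands ask an injected service for another ViewModel, and the ViewModel-to-View mapping lives in one replaceable runtime component. A hedged C# sketch; the names are illustrative, not from any particular framework:

        // ViewModels only ever name other ViewModel types, so they still run headless.
        public interface INavigationService
        {
            void NavigateTo<TViewModel>() where TViewModel : class;
            void ShowDialog<TViewModel>() where TViewModel : class;
        }

        public class OrderDetailsViewModel { /* ... */ }

        public class OrderListViewModel
        {
            private readonly INavigationService navigation;

            public OrderListViewModel(INavigationService navigation)
            {
                this.navigation = navigation;
            }

            // Bound to the button's Command; no View type appears here.
            public void ShowDetails()
            {
                navigation.ShowDialog<OrderDetailsViewModel>();
            }
        }

    Whether the runtime mapping resolves a page, a panel, or a child window then becomes a presentation decision the ViewModel never sees.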

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I got offered a job to build a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, the high cost of development charged by a service company, and to have someone young he can trust onboard to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer, thinking I could build anything. I was calling the shots.

    After some research I settled on PHP, and started with plain PHP: no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner and more organized. It looked very good.

    But again, the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place, something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practice. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code.

    This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code; fixing a bug is a real pain; all the logic is in the controllers; and there is little object-oriented design. I keep having the persistent thought that I have to rewrite the whole project. However, I can't do it... They keep asking when it is all going to be done. I cannot imagine this code deployed on a server. Plus, I still know nothing about code efficiency or the web application's performance.

    On one hand, the company is waiting for the product and cannot wait any longer. On the other hand, I can't see myself going any further with the actual code. I could finish up, wrap it up and deploy, but God only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • How to execute a Ruby file in Java, capable of calling functions from the Java program and receiving primitive-type results?

    - by Omega
    I do not fully understand what I am asking (lol!), in the sense that I'm not sure it is even possible. If it isn't, sorry. Suppose I have a Java program. It has a Main and a JavaCalculator class. JavaCalculator has some basic functions like:

    public int sum(int a, int b) {
        return a + b;
    }

    Now suppose I have a Ruby file, called MyProgram.rb. MyProgram.rb may contain anything you could expect from a Ruby program. Let us assume it contains the following:

    class RubyMain
      def initialize
        print "The sum of 5 with 3 is #{sum(5,3)}"
      end

      def sum(a, b)
        # <---------- Something will happen here
      end
    end

    rubyMain = RubyMain.new

    Good. Now then, you might already suspect what I want to do:

    I want to run my Java program.
    I want it to execute the Ruby file MyProgram.rb.
    When the Ruby program executes, it will create an instance of JavaCalculator, execute the sum function it has, get the value, and then print it.
    The Ruby file has been executed successfully. The Java program closes.

    Note: the "create an instance of JavaCalculator" part is not entirely necessary. I would be satisfied with just running a sum function from, say, the Main class.

    My question: is this possible? Can I run a Java program which internally executes a Ruby file which is capable of commanding the Java program to do certain things and get results? In the above example, the Ruby file asks the Java program to do a sum for it and give the result. This may sound ridiculous. I am new to this kind of thing (if it is possible, that is).

    WHY AM I ASKING THIS? I have a Java program, which is some kind of game engine. However, my target audience is a bunch of Ruby coders. I don't want to make them learn Java at all. So I figured that perhaps the Java program could simply offer the functionality (capacity to create windows, display sprites, play sounds...) and then my audience can simply code the logic in Ruby, which basically just asks my Java engine to do things like displaying sprites or playing sounds. That's when I thought about asking this.
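    One answer to this, offered as a sketch rather than the only option, is JRuby: Ruby running on the JVM, where Ruby code can call Java objects directly. The embed API (org.jruby.embed) is real; the $calculator variable name and file layout below are my own assumptions for the example:

        import org.jruby.embed.PathType;
        import org.jruby.embed.ScriptingContainer;

        public class Main {
            public static void main(String[] args) {
                ScriptingContainer container = new ScriptingContainer();
                // Expose a live Java object to the Ruby script as a global variable.
                container.put("$calculator", new JavaCalculator());
                // Run the Ruby file; inside it, $calculator.sum(5, 3) calls straight
                // into the Java method below.
                container.runScriptlet(PathType.RELATIVE, "MyProgram.rb");
            }
        }

        class JavaCalculator {
            public int sum(int a, int b) {
                return a + b;
            }
        }

    MyProgram.rb would then contain something like: print "The sum of 5 with 3 is #{$calculator.sum(5, 3)}". JRuby converts the Java int to an ordinary Ruby Integer, so the same pattern would let Ruby coders drive the game engine's windows, sprites and sounds without learning Java.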

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    Why is enterprise software often so complicated? | Rajesh Raheja (rraheja.wordpress.com)
    Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that make up the complex enterprise software."

    Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark (blogs.oracle.com)
    Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia."

    Oracle VM RAC template - what it took | Wim Coekaerts (blogs.oracle.com)
    Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM.

    Oracle Cloud and Oracle Platinum Services Announcements (oracle.com)
    Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012. 1:00 p.m. PT – 2:30 p.m. PT.

    Creating an Oracle Endeca Information Discovery 2.3 Application Part 1: Scoping and Design | Mark Rittman (www.rittmanmead.com)
    Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release."

    Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema (technology.amis.nl)
    Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors."

    Application integration: reorganise, recycle, repurpose | Andrew Clarke (radiofreetooting.blogspot.com)
    "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it."

    Using XA Transactions in Coherence-based Applications | Jonathan Purdy (blogs.oracle.com)
    Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager."

    Thought for the Day
    "The difficulty lies, not in the new ideas, but in escaping from the old ones..." — John Maynard Keynes (June 5, 1883 - April 4, 1946)
    Source: Quotations Page

    Read the article

  • Cursor running wild, then crashes on an Asus G73sw

    - by Yarchmon
    The cursor sometimes goes wild: I get random clicks, windows resize themselves, and the cursor disappears. In the worst case, clicks and the keyboard are disabled. I've tried the solution given on doc.ubuntu-fr.org and added the following to GRUB_CMDLINE_LINUX_DEFAULT in grub:

    i8042.nomux=1 i8042.reset=1

    But it didn't work. What can I do?

    Graphics card: GeForce GTX 460M. Ubuntu: 11.10 (64-bit). Laptop: Asus G73sw. Interface: Unity (since 11.10); I didn't get this problem with GNOME before.

    Additional detail: when a window is resizing, it gets drag boxes at every corner, at the center of each side, and at the center of the window. It looks like my touchpad is sending random input, or like a "ghost" touchscreen.

    lspci result:

    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
    00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
    00:1a.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
    00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
    00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
    00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
    00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5)
    00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
    00:1d.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
    00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
    00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
    00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
    01:00.0 VGA compatible controller: nVidia Corporation GF106 [GeForce GTX 460M] (rev a1)
    01:00.1 Audio device: nVidia Corporation GF106 High Definition Audio Controller (rev a1)
    03:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
    04:00.0 USB Controller: Fresco Logic FL1000G USB 3.0 Host Controller (rev 04)
    05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

    Edit 01-09-12: Tried Ubuntu 2D; the behavior is different: it's like I'm randomly clicking on the workspace switcher icon. In the worst case, it can happen several times in a minute.

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue-eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in mature markets is anything to go by, it is in a state of coma and badly requires resuscitation.

    Studies conducted over the past year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group, Strategies for PFM Success, shows that 72% of users hadn't used PFM and, worse, 58% of them were not excited about using it. Of the rest who had used it, only half did so on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expenses, they are simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and cash flow are important, but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards, help me understand what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with the relative importance of each budget category, so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget overruns and spend analysis are post facto, and I am informed of my sins only when I return to online banking; that too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual, so that I can make insight-based decisions. So what can be done to resuscitate PFM?

    Amalgamation with banking activities – In most cases, PFM tools are integrated into online banking pages, and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to spendable balances. Budget- and goal-related insights should be integrated with transaction sessions to drive pre-event financial decisions.

    Personal financial guidance – Banks need to think at ground level and see if their PFM offering is really helping customers achieve self-actualisation. Banks need to recognise that most customers out there are not proficient at making the best use of their money. Customers return when they know that they are being guided rather than just informed about their finances. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending, thereby encouraging sound savings habits.

    Mobile PFM – Most banks have left all those numbers in online banking. With access mostly having moved to devices, and given the success of apps, moving PFM onto devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device to push real-time notifications that shape pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals in real time and push them pre-event onto the device. So next time, I should know that I have overrun my eating-out budget before walking into that burger joint, not after.

    Increase participation and collaboration – Peer group experiences and comments are valued above those offered by the bank. Integrating social media into PFM engagement will let customers share and solicit financial management experiences with their peer group. Peer comparisons help benchmark one's savings and spending habits against those of the peer group and increase stickiness.

    While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into the same trap. Best practice lies in profiling and segmenting customers, being where they are, and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article
