Search Results

Search found 1458 results on 59 pages for 'cinnamon challenge'.


  • Where does jQuery fit in with frameworks like JavaScriptMVC, BackboneJS, SproutCore and Knockout?

    - by Prisoner ZERO
    I have been happily using jQuery for the last 2 years and have been quite successful creating some really cool functionality with it... so I am very comfortable with it. I also believe the future of the web will continue on the current client-side path. However...

    The next challenge seems to be coming in the form of various controller frameworks: KnockoutJS, BackboneJS, SproutCore, JavaScriptMVC (the list goes on). Additionally, there are some great AMD loader tools available, like RequireJS or LABjs. However, jQuery now has define and then capabilities baked in. It's getting harder and harder to keep track of it all... And now, my task seems to be to evaluate/decide on a strategic direction for using some form of either an MVC or MVVM framework client-side... but I have so many questions:

    - Where does jQuery fit in with the various controller frameworks mentioned above? Is jQuery used alongside each, or do some of them have their own 'jQuery-styled version' baked in?
    - Are tools like RequireJS still needed if you implement one of the various controller frameworks mentioned above?
    - Does the define and then capability now baked into jQuery supersede the AMD loaders mentioned above?
    - Which one seems most modular? (see notes below)

    NOTES: One thing I don't want in any future framework is the requirement of having to take in vast amounts of functionality that I don't use. Meaning, I would rather use a framework that is truly modular. For example, to use jQuery UI you have to take in a lot of other core libraries that you might not actually use. I will be experimenting with each one, but some REAL feedback would be great. I've seen some 'similar' questions, but none have really answered the above skew. Thanks in advance!
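
    To make the loader question concrete, here is a minimal RequireJS sketch of the kind of setup I am evaluating (file paths are illustrative). jQuery 1.7+ registers itself as the named AMD module 'jquery', so a loader can hand it to you as a module parameter instead of a global:

    // main.js - a minimal RequireJS setup (paths are illustrative)
    require.config({
        paths: {
            // jQuery 1.7+ calls define('jquery', ...) internally,
            // so it loads like any other AMD module
            jquery: 'libs/jquery-1.7.2.min'
        }
    });

    require(['jquery'], function ($) {
        // $ is the module export; no reliance on the global namespace
        $(function () {
            $('body').append('<p>jQuery loaded via an AMD loader</p>');
        });
    });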

    Read the article

  • Do we have enough time to build an electric car future?

    - by julien.groues
    A recent article from Greenbang has posed the question 'Do we have enough time to build an electric car future?'. The writer discusses that, although the future of transport might lie with electric cars, there is concern over whether we'll be able to build the market and infrastructure required to support them before carbon and oil constraints create difficulties in powering the vehicles.

    Of course, the increasing use of electric vehicles (EVs) is going to put excessive pressure on energy grids, as large volumes of electricity will need to be directed to charging points, which in turn must handle fluctuating demand at peak times. EVs are increasing in popularity as a sustainable method of transport to reduce carbon emissions, and electric utilities will have the opportunity, and the challenge, to quickly determine the best methods to fuel these vehicles and accommodate the associated increases in demand for energy. Critically, efficient software is required to provide diagnostic and predictive capabilities related to EV refuelling - for example, anticipated electricity flow will need to be addressed as the number of EVs on the road increases, and electricity will need to be directed to specific areas on demand as vehicles attempt to recharge en masse.

    But a smart grid infrastructure can meet these demands, intelligently. The implementation of a smart grid is not in the distant future; it is an achievable reality for utilities via simple installation of new software and technologies, which can be done incrementally for those facing existing legacy systems or concerned with upfront costs. The smart grid is integral to the monitoring and control of energy use as well as the future-proofing of the energy grid. A smart grid will be critical to meeting the electricity requirements of new EVs and will ensure their successful deployment by providing a reliable foundation for the data handling required to record and manage electricity distribution - from recording and assessing energy usage, to analysing data and sharing information with consumers via green billing.

    http://www.greenbang.com/do-we-have-enough-time-to-build-an-electric-car-future_14248.html

    Read the article

  • Oracle Tutor: Document Audit and Maintenance

    - by Emily Chorba
    Perhaps the most critical phase in the process of documenting policies and procedures -- and the greatest challenge to owners -- is the maintenance of published documents. Documents must reflect current practice and they must be accurate. The most effective way to ensure this is through the regular audit of documents. In the Tutor environment, a Document Owner must audit each of his/her documents once every 6 to 12 months to verify that the document reflects actual practice. If it does not, the document is updated or employees are retrained (depending on the nature of the discrepancy).

    If a document update is required, the Tutor system enables the owner to modify and redistribute the document within one work day. This is possible because:

    - Documents contain a minimum of detail, thereby reducing the edits.
    - Document format and structure are simple, so changes are easy to identify.
    - The Tutor Author software tool enables the Document Owner or the Document Administrator to update the file quickly.

    The Document Administrator verifies the document format and integration, publishes the document, and distributes it to all affected employees, thereby freeing the Document Owner of the more tedious tasks.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & UPK

    Read the article

  • Come see us at the Oracle Big Data Forum on April 5!

    - by Kinoa
    Big Data is coming to the forefront more and more often, and you would like to learn more? Generated from social networks, digital sensors and other mobile devices, Big Data - in other words, enormous volumes of data - is a goldmine of valuable information about your business and your customers' behavior. Your challenge today is to manage the acquisition, organization and understanding of these volumes of unstructured data, and to integrate them into your information system. You have questions? It all seems complex? Then the Oracle Big Data Forum, organized by Oracle and Intel, is made for you!

    We will cover several topics:

    - Accelerating Big Data deployment through an integrated hardware and software approach
    - Providing all the tools needed for the complete process, from data acquisition to presentation
    - Integrating Big Data into your information system to deliver the essence of the information to users

    We have put together a most enticing program for April 5:

    9:00 Welcome and badge distribution
    9:30 Big Data: The Industry View. Are you ready? - Johan Hendrickx, Core Technology Director, Oracle EMEA
    Keynote: Big Data - Are you ready? - George Lumpkin, Vice President of DW Product Management, Oracle Corporation
    Data acquisition into your Big Data with Hadoop and Oracle NoSQL
    Break
    Organize and structure the information within your Big Data with Big Data Connectors and Oracle Data Integrator
    Leverage data analysis of your Big Data with Oracle Endeca and Oracle Business Intelligence
    13:00 Cocktail lunch

    Seats are limited, so remember to register now.

    Venue: Maison de la Chimie, 28 B, rue Saint Dominique, 75007 Paris

    Read the article

  • Problem connecting to ISP server using xl2tpd as client. Ubuntu Server 13.04

    - by Deon Pretorius
    I have followed guides found on Google and Ubuntu support pages and can get the xl2tpd connection up, but only under the following conditions:

    1 - the ADSL modem must be configured and connected to the ISP, or
    2 - with the ADSL modem in bridge mode, I must have an existing PPPoE connection established.

    If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, and thus the tunnel fails to connect to the ISP's L2TP server. Am I doing something wrong?

    /etc/ppp/options.l2tpd.axxess:

    ipcp-accept-local
    ipcp-accept-remote
    refuse-eap
    refuse-chap
    require-pap
    noccp
    noauth
    idle 1800
    mtu 1200
    mru 1200
    defaultroute
    usepeerdns
    debug
    lock
    connect-delay 5000
    name (name used for ppp connection)

    /etc/ppp/pap-secrets:

    # *  password
    (name used for ppp connection as above)  *  (ppp password supplied by isp)

    /etc/xl2tpd/xl2tpd.conf:

    [global]
    ; Global parameters:
    auth file = /etc/xl2tpd/l2tp-secrets   ; * Where our challenge secrets are
    access control = yes                   ; * Refuse connections without IP match
    debug tunnel = yes

    [lac axxess]
    lns = 196.30.121.50                    ; * Who is our LNS?
    redial = yes                           ; * Redial if disconnected?
    redial timeout = 5                     ; * Wait n seconds between redials
    max redials = 5                        ; * Give up after n consecutive failures
    hidden bit = yes                       ; * Use hidden AVP's?
    length bit = yes                       ; * Use length bit in payload?
    require pap = yes                      ; * Require PAP auth. by peer
    require chap = no                      ; * Require CHAP auth. by peer
    refuse chap = yes                      ; * Refuse CHAP authentication
    require authentication = yes           ; * Require peer to authenticate
    name = BLA85003@axxess                 ; * Report this as our hostname
    ppp debug = yes                        ; * Turn on PPP debugging
    pppoptfile = /etc/ppp/options.l2tpd.axxess  ; * ppp options file for this lac

    /etc/xl2tpd/l2tp-secrets:

    # Secrets for authenticating l2tp tunnels
    # us      them      secret
    # *       marko     blah2
    # zeus    marko     blah
    # *       *         interop
    *  vzb_l2tp  (*** secret supplied by isp)
       ^ isp server host name

    Any help will be greatly appreciated.

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job (I'm fresh out of my Masters in software engineering). The company mainly consists of ERP consultants, but I was hired into their fairly small web department (6 developers); our main task is ERP/ecommerce integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team).

    Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which - I suppose - is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges:

    - general website testing (JavaScript, C#, ASP.NET and CMS integration tests)
    - (live) ERP integration testing (customers rarely want to pay for test environments)
    - adopting a method in the team

    I like the responsibility, but I am afraid that I'm in a little bit over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me to get smarter and to improve the current method of my team. Thanks!!
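
    To make the workshop concrete, here is the kind of first automated check I have in mind for our client-side code - a minimal QUnit-style sketch (it assumes QUnit is loaded, and calcOrderTotal is a hypothetical stand-in for real shop logic):

    // A hypothetical piece of shop logic under test
    function calcOrderTotal(lines) {
        return lines.reduce(function (sum, line) {
            return sum + line.price * line.qty;
        }, 0);
    }

    // The test: small, fast, and it runs without clicking around in the shop
    QUnit.test('order total sums price times quantity', function (assert) {
        var lines = [{ price: 10, qty: 2 }, { price: 5, qty: 1 }];
        assert.equal(calcOrderTotal(lines), 25, '2x10 + 1x5 = 25');
    });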

    Read the article

  • Antenna Aligner part 2: Finding the right direction

    - by Chris George
    Last time I managed to get "my first app(tm)" built, published and running on my iPhone. This was really cool, a piece of my code running on my very own device. OK, so I'm easily pleased! The next challenge was actually trying to determine what it was I wanted this app to do, and how to do it. Reverting back to good old paper and pen, I started sketching out designs for the app. I knew I wanted it to get a list of transmitters, and that clicking on a transmitter would display a compass-type view, with an arrow pointing the right way.

    I figured there would not be much point in continuing until I knew I could do the graphical part of the project, i.e. the rotating compass, so armed with that reasoning (plus the fact I just wanted to get on and code!), I once again dived into Visual Studio. Using my friend (Google) I found some example code for getting the compass data from the phone using the PhoneGap framework:

    // onSuccess: Get the current heading
    function onSuccess(heading) {
        alert('Heading: ' + heading);
    }

    navigator.compass.getCurrentHeading(onSuccess, onError);

    Using the Ripple mobile emulator this showed that it was successfully getting the compass heading. But it didn't work when uploaded to my phone. It turns out that the examples I had been looking at were for PhoneGap 1.0, and Nomad uses PhoneGap 1.4.1. In 1.4.1, getCurrentHeading provides a compass object to onSuccess, not just a numeric value, so the code now looks like:

    // onSuccess: Get the current magnetic heading
    function onSuccess(heading) {
        alert('Heading: ' + heading.magneticHeading);
    }

    navigator.compass.getCurrentHeading(onSuccess, onError);

    So the lesson learnt from this... read the documentation for the version you are actually using! This does, however, lead to compatibility problems with Ripple, as it only supports 1.0, which is a real pain. I hope that the Ripple system is updated sometime soon.

    Read the article

  • Is there such a thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and in general technologies in use in organizations today. We, programmers, have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know if anyone has done any serious thinking about how systems are integrated?

    Let me give an example: hypothetically, let's say I own a company that makes specialized suits à la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these newfangled LEED Platinum buildings, and it has a number of different computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc.

    What I want to know is if anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card, and to have my phone follow me throughout the building. This problem is also more than a technology challenge. Every technology implementation enables certain human behaviours, so the human must also be considered as a part of the system. Has anyone done any work on how to effectively weave these components together?

    FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project, how they develop their requirements, how they studied the human behaviors, etc.

    Read the article

  • What technology or skillset should I learn today in order to be able to charge $250+ / hr in 2-3 years? [closed]

    - by Ryan Waggoner
    I've been doing PHP freelance development for the last 4-5 years and I'm starting to max out my hourly rate. So in 2010 I decided to transition to a new language. I played with Python and Ruby, but ended up settling on iOS, for three reasons:

    - I'm enjoying the challenge of working on a completely different type of development, instead of another flavor of web development
    - The demand seems higher right now than for Ruby or Python
    - I see iOS developers charging $150 - 250 / hr

    Whether these reasons are right or wrong, I've been learning iOS for the last year and I'm starting to get more work in that field. I feel confident that in six months (barring any major shifts in the ecosystem), I can be billing iOS work at $150 / hr or more. However, I'm feeling that I should have done this earlier, that I've missed the boat, and that iOS development is going to dry up or get much more commoditized. Whether this is true or not isn't really my question (though feel free to comment).

    What I want to know is: what should I start learning right now so that I can be ahead of the curve in a couple of years when the demand is far outstripping supply? What technologies or skillsets are going to be so heavily in demand in 2-3 years that you'll be able to charge $250 / hr or more and stay busy? These don't have to be new technologies either... the answer could be iOS or COBOL or whatever.

    Read the article

  • KISS principle applied to programming language design?

    - by Giorgio
    KISS ("keep it simple stupid", see e.g. here) is an important principle in software development, even though it apparently originated in engineering. Citing from the Wikipedia article:

    The principle is best exemplified by the story of Johnson handing a team of design engineers a handful of tools, with the challenge that the jet aircraft they were designing must be repairable by an average mechanic in the field under combat conditions with only these tools. Hence, the 'stupid' refers to the relationship between the way things break and the sophistication available to fix them.

    If I wanted to apply this to the field of software development I would replace "jet aircraft" with "piece of software", "average mechanic" with "average developer" and "under combat conditions" with "under the expected software development / maintenance conditions" (deadlines, time constraints, meetings / interruptions, available tools, and so on). So it is a commonly accepted idea that one should try to keep a piece of software simple stupid so that it is easy to work on later.

    But can the KISS principle be applied also to programming language design? Do you know of any programming languages that have been designed specifically with this principle in mind, i.e. to "allow an average programmer under average working conditions to write and maintain as much code as possible with the least cognitive effort"? If you cite any specific language it would be great if you could add a link to some document in which this intent is clearly expressed by the language designers. In any case, I would be interested to learn about the designers' (documented) intentions rather than your personal opinion about a particular programming language.

    Read the article

  • Does an inexperienced programmer need an IDE?

    - by Torben Gundtofte-Bruun
    Reading this other question makes me wonder if I (as an absolute beginner PHP programmer) should stick with WAMP and Notepad++ or switch to some IDE like Eclipse. It's understandable that skilled developers will benefit from a big shiny IDE. But why should an absolute beginner use an IDE? Do the benefits outweigh the extra challenge of learning the IDE on top of learning to develop?

    Update for clarification: My goal is to get some basic programming experience. By choosing PHP and WAMP (and FogBugz and Kiln) I hope to avoid having to navigate the tricky / messy OS specifics and compiling etc. and just focus on basic functionality like an online user registration form. I've got lots of theoretical understanding from university a decade ago but no practical experience. I want to remedy that with a hobby project that would be similar to a real-world sellable web app. There are so many questions to ask. So many pitfalls I probably have to blunder into. This question is just one piece (my first!) of that puzzle.

    Read the article

  • Please help me decide if I should change jobs [closed]

    - by KindaNewbie
    About me: I am very entrepreneurial and believe I would do well working solo as a consultant and possibly hiring help. I do want to do that at some point. I love to learn and a good challenge. Please help me make this decision!

    Current job (I have been there for about 4 years):

    Pros:
    - secure job
    - good pay (I guess I am 80th percentile for my level/geographical area)
    - large corporation - main business is not software
    - excellent health insurance for low cost to me, pension, 401k matching, 6 weeks paid time off per year
    - small dev team
    - use of latest technologies (mostly WPF/Silverlight)
    - low supervision (I can do personal things all the time)
    - I get to do a lot of moonlighting, and my goal was to go solo full-time in a year or so

    Cons:
    - small team of non-professional devs
    - 50% of my time I do things I don't enjoy
    - projects are not meaningful to the organization
    - if I left, it wouldn't be too hard for them - business would resume as usual
    - nobody besides my small team of 3 has any idea about software development whatsoever

    Prospective job:

    Pros:
    - small/agile software company
    - same salary as current job
    - same size dev team, but all are very sharp (I would probably be the weakest of the team in the beginning)
    - technology used is outside my comfort zone (latest cool web technologies such as HTML5/jQuery/...) - I am not a web dev and they know that
    - ton of learning opportunity
    - start-up - possibility of stock options/partial ownership of some sort

    Cons:
    - small office space - not as able to do personal things (may be a pro)
    - no room for moonlighting
    - fewer benefits (but salary can compensate for that)

    Read the article

  • Reference Data Management

    - by rahulkamath
    Reference Data Management

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data?

    Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets - including master data - and give them contextual value. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards. An illustrative list of examples by type:

    Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values

    Taxonomies: Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss

    Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 → ICD-10

    Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?

    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings - either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws - governed at the state and local level - that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.
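
    To make the cross-walk idea from the healthcare example concrete, here is a toy forward-mapping lookup sketched in JavaScript. The codes and the one-to-many shape are illustrative only, not a clinical mapping:

    // Toy ICD-9 → ICD-10 forward cross-walk (codes are illustrative only)
    var icd9ToIcd10 = {
        '250.00': ['E11.9'],          // one-to-one mapping
        '733.00': ['M81.0', 'M81.8']  // one-to-many: human judgment needed
    };

    function crossWalk(icd9Code) {
        // Returns candidate ICD-10 codes; an empty array means no forward
        // mapping exists and a reviewer must decide
        return icd9ToIcd10[icd9Code] || [];
    }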

    Read the article

  • What Should I Do? [closed]

    - by Laxmidi
    What is a reasonable goal in terms of traffic for my Flex 3 site, www.brainpinata.com? Since I began a couple of months ago, I've gotten roughly 5500 ad views and 280 ad clicks, and the ad revenue is a whopping $4.80 (I don't use Google AdSense). I advertise my site using Google AdWords to try to build traffic. My budget is $10/day. What should I do?

    a) Push the marketing. Add a blog. Try to get backlinks, contact blogs, start a Facebook page, tweet, etc.

    b) Google is only indexing the static content in the SWF. The questions/answers are pulled from a MySQL database, so Google doesn't index 99% of the content. Should I redo the site in HTML/JavaScript and hard-code the questions for each puzzle? (This would be a challenge, as I don't know JavaScript worth squat.) Or should I hard-code the questions in XML and put them in the Flex app? If I put the questions in an XML file it's roughly 500 KB. Other ideas?

    c) Should I switch ad networks? (I currently get about 100 visitors a day.) My ad network pays so little that if I were to make even $500/month, I would need 550,000 ad views/month, which seems impossible. If I go ahead and switch ad networks, I need to find one that allows iframes, as I've got a Flex website. Which ad networks permit their ads to be shown in iframes?

    d) Should I cut and run? I put a lot of work into this project and it would really stink to get nothing out of it.

    I'm looking for some good advice. Looking forward to your suggestions. Thank you. -Laxmidi

    Read the article

  • Reusing Web Forms across BPM Roles

    - by Mona Rakibe
    Recently Varsha (another BPM Product Manager) approached me with a requirement where she wanted to reuse the same Web Form for different task activities. We both knew this is easily achievable: the human task outcomes can differ to distinguish the submission based on roles. Her requirement was slightly more than this; she wanted to hide some data based on the logged-in user. If you have worked with Web Form rules, dynamically showing and hiding data is a common requirement and easily achievable using form rules. In this case the challenge was accessing the BPM role inside the Web Form. Although we will be addressing this requirement in a future release, she wanted an immediate solution (aha, after all, customers are not the only ones who cannot wait). Thankfully we managed to come up with a solution, and I hope this will be helpful to a larger audience.

    The solution has 3 steps:

    Step 1: We added a hidden attribute in our form (Role). The purpose of this attribute is just to store the current logged-in user's role, and we pass the value during data association.

    Step 2: In your data association step, pass the role value based on the swimlane.

    Step 3: Now use this hidden attribute value in your Web Form rule for dynamic behavior.

    Detailed steps and a sample can be downloaded from Java.net.
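
    To give a flavor of Step 3, here is a sketch of what such a rule could look like. Web Form rules use JavaScript-style syntax, and the control names here (Role, Salary) are purely illustrative:

    // Hide sensitive controls based on the hidden Role attribute
    // populated during data association (names are illustrative)
    if (Role.value === 'Reviewer') {
        Salary.visible = false;
    } else {
        Salary.visible = true;
    }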

    Read the article

  • What .NET objects should I use to create a cookie based session in MVC?

    - by makerofthings7
    I'm writing a custom password reset application that uses a validation technique that doesn't fit cleanly with the ASP.NET Membership Provider's challenge questions. Namely, I need to invoke a workflow and collect information from the end user (backup phone number, email address) after the user logs in using a custom form. The only way I know to create a cookie-based session (without too much "innovation" on my part) is to use WIF.

    What other standard objects can I use with ASP.NET MVC to create an authenticated session that works with non-Windows user stores? Ideally I can store "role" or claim information in the session object, such as "admin", "departmentXadmin", "normalUser", or "restrictedUser".

    The workflow would look like this:

    1. User logs in with username and password
    2. If the username and password are correct, a (stateless) cookie-based session is created
    3. The user gets redirected to an HTML form that allows them to enter their backup phone number (for SMS dual factor), or validate it if already set
    4. The user can then change their password using the form provided

    The "forgot password" workflow would look like this:

    1. User requests an OTP code to be sent to the phone
    2. User logs in using username and OTP
    3. If the OTP is valid and not expired, then create a cookie-based session and redirect to a form that allows password reset
    4. Show the password reset form, and process the results

    Read the article

  • How do client-server cooperation based games like Diablo 3 work?

    - by edgar
    Diablo 3 communicates with Blizzard servers even during single-player games. In fact, Blizzard has had problems with the game "melting their servers." I would like to ask:

    - How do the client and the server communicate?
    - What details does the client leave to the server, and vice versa?
    - What details are redundant - known to both the client and the server - and how often do they disagree?

    The previous paragraph contains the important questions, but I have a few more for which I must explain my motivation. I am interested in the programming of botting. Ethical botting - I don't plan on actually abusing the automation to run 24/7. I just find it to be a great programming challenge to glean information from a game, and then make decisions from that information. I am stuck in the starting gate. The unofficial questions from this post would be:

    - How can I make a bot (language, tools, libraries)?
    - Can I get information through the communication between client and server, rather than the brute-force pixel detection easily used in more static games?

    There probably is a trust issue, and to that all I can say is that I promise not to abuse the answers. But please feel free to answer any of the questions you feel comfortable with. Thank you!

    Read the article

  • Hurry! See the uncensored OOW videos before they get edited!

    - by rickramsey
    Uploaded so far:

    Which Oracle Solaris 11 Technologies Have Sysadmins Been Using Most? (Director's Cut - Uncensored) - Markus Flierl, VP Solaris Core Engineering, describes how Oracle Solaris 11 customers are taking advantage of the Image Packaging System and the snapshot capability of ZFS to run more frequent updates of not only the OS, but also the applications (agile development, anyone?), and how they're using the network virtualization capabilities in Oracle Solaris 11 to isolate applications and manage workloads on the cloud.

    Watch How Hybrid Columnar Compression Saves Storage Space (Director's Cut - Uncensored) - Art Licht shows how hybrid columnar compression (HCC) compresses data 30x without slowing down other queries that the database is performing. First he shows what happens when he runs database queries without HCC, then he shows what happens when he runs the queries with HCC.

    Security Capabilities and Design in Oracle Solaris 11 (Director's Cut - Uncensored) - Compliance reporting. Extended policy. Immutable zones. Three of the best minds in Oracle Solaris security explain what they are, what customers are doing with them, and how they were engineered. Filmed at Oracle OpenWorld 2012.

    Why DTrace and Ksplice Have Made Oracle Linux 6 Popular with Sysadmins - Use the DTrace scripts you wrote for Oracle Solaris on Oracle Linux without modification. Wim Coekaerts, VP of Engineering for Oracle Linux, explains how this capability of DTrace, the zero-downtime updates enabled by Ksplice, and other performance and stability enhancements have made Oracle Linux 6 popular with sysadmins.

    Why Solaris 11 Is Being Adopted Faster Than Solaris 10 (Sneak Preview - Uncut Version) - Lynn Rohrer, Director of Oracle Solaris Product Management, explains why customers are adopting Oracle Solaris 11 at a faster rate than Oracle Solaris 10, and proves why you should never challenge a Montana woman to a test of strength.

    What Forsythe Corp Is Helping Its Customers Do With Oracle Solaris 11 (Director's Cut - Unedited) - Lee Diamante, Solutions Architect for Forsythe Corp, an Oracle Solaris Partner, explains why Forsythe has been recommending Oracle Solaris to its customers, and what those customers have been doing with it.

    Lots more to come...

    - Rick

    Read the article

  • Podcast Show Notes: Architect Meet-Up

    - by Bob Rhubart
    What happens when you get a bunch of architects together and just let them talk? The latest ArchBeat Podcast features just such a conversation. The four participants in this conversation responded to a general invitation to my list of some three dozen Usual Suspects to join me on Skype for what I call a virtual meet-up. That conversation took place on March 20, 2012.

    The Participants

    - Basheer Khan: Oracle ACE Director; Founder, President & CEO at Innowave Technology
    - Lucas Jellema: Oracle ACE Director; CTO of AMIS Services
    - Eric Stephens: a director of Enterprise Architecture at Oracle
    - Derek Sharpe: director of Oracle's Fusion Middleware Architecture Team

    The Conversation

    Listen to Part 1: Meeting the Mobile Challenge - The conversation focuses on Oracle ADF Mobile and the challenges of defining a mobile strategy for the enterprise.

    Listen to Part 2: Mobile Security, Availability, and Usability (April 4) - The conversation turns to the security, availability, and usability challenges in the evolution of the mobile enterprise.

    Listen to Part 3: Evolving Software Development Roles (April 11) - The panel closes out the discussion with a look at the interplay between developers and architects, and the evolving nature of both roles.

    Read the article

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications going against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration, because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss.

    I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but I would like to stay away from pessimistic locking at the database level, mostly for performance reasons. We use Hibernate for ORM.

    Does it make sense to stand up a single (perhaps synchronous) message queue application into which this method could be moved, to ensure synchronized access, as opposed to each cluster node using its own, which is a clear race-condition hazard? I am mentioning this approach even though I am not confident in it, because both the top-level table row and its children could also be updated from other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application and not the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.

    Read the article

  • Developing wheel reinventing tendencies into a skill as opposed to reluctantly learning wheel-finding skills? [duplicate]

    - by Korey Hinton
    I am more of a high-level wheel reinventor. I definitely prefer to make use of existing API features built into a language, and popular third-party frameworks that I know can solve the problem; however, when I have a particular problem that I feel capable of solving within a reasonable time, I am very reluctant to find someone else's solution. Here are a few reasons why I reinvent:

    - It takes time to learn a new API
    - API restrictions might exist that I don't know about
    - Avoiding rework of unfamiliar code

    I am conflicted between doing what I know and shifting to a new technique I don't feel comfortable with. On one hand I feel like following my instincts and getting really good at solving problems, especially ones that I would never challenge myself with if all I did was try to find answers. And on the other hand I feel like I might be missing out on important skills, like saving time by finding the right framework and expanding my knowledge by learning how to use a new framework.

    I guess my question comes down to this: my current attitude is to stick to the built-in API and APIs I know well* and to not spend my time searching GitHub for a solution to a problem I know I can solve myself within a reasonable amount of time. Is that a reasonable balance for a successful programmer?

    *Obviously I will still look around for new frameworks that save time and solve/simplify difficult problems.

    Read the article

  • Routing tables don't show ppp0 after 12.04 kernel upgrade to 3.5.0: Haier CE682 modem configuration

    - by ubunsteve
    I'm trying to get my Haier CE682 EVDO modem, model number 201e:1022, to work in Ubuntu 12.04, kernel 3.5.0-030500-generic #201207211835. I had it working in a previous 12.04 kernel, using compat-wireless and these instructions: http://zulkhamsyahmh.blogspot.com/2012/05/install-smartfren-haier-ce682-on-ubuntu.html. To get it working I had to edit the routing tables so that there was a ppp0 showing up, as suggested at http://www.linuxquestions.org/questions/slackware-14/wvdial-is-connecting-but-im-unable-to-do-anything-714861/

    Network Manager doesn't work with this modem, so I use either wvdial or gpppon to connect to it, both of which work (after I run the command sudo modprobe usbserial vendor=0x201e product=0x1022).

    This is the output when I connect to the modem with gpppon:

    Using interface ppp0
    Connect: ppp0 <--> /dev/ttyUSB0
    sent [LCP ConfReq id=0x1 ]
    rcvd [LCP ConfAck id=0x1 ]
    rcvd [LCP ConfReq id=0x2 ]
    sent [LCP ConfAck id=0x2 ]
    sent [LCP EchoReq id=0x0 magic=0x819c86db]
    rcvd [CHAP Challenge id=0x1 <1ac8f12799e953967a3cc222c9254690, name = ""]
    sent [CHAP Response id=0x1 <6f12a903dc40915ca2761c17b87f8fbd, name = "smart"]
    rcvd [LCP EchoRep id=0x0 magic=0x0]
    rcvd [CHAP Success id=0x1 ""]
    CHAP authentication succeeded
    CHAP authentication succeeded
    sent [CCP ConfReq id=0x1 ]
    sent [IPCP ConfReq id=0x1 ]
    rcvd [IPCP ConfReq id=0x1 ]
    sent [IPCP ConfAck id=0x1 ]
    rcvd [CCP ConfReq id=0x1]
    sent [CCP ConfAck id=0x1]
    rcvd [CCP ConfRej id=0x1 ]
    sent [CCP ConfReq id=0x2]
    rcvd [IPCP ConfRej id=0x1 ]
    sent [IPCP ConfReq id=0x2 ]
    rcvd [CCP ConfAck id=0x2]
    rcvd [IPCP ConfNak id=0x2 ]
    sent [IPCP ConfReq id=0x3 ]
    rcvd [IPCP ConfAck id=0x3 ]
    not replacing existing default route via 192.168.3.1
    local IP address 10.191.248.154
    remote IP address 10.17.95.25
    primary DNS address 10.17.3.244
    secondary DNS address 10.17.3.245

    As you can see, there is a problem: "not replacing existing default route via 192.168.3.1".

    This is the output of route:

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    default         192.168.3.1     0.0.0.0         UG    0      0        0 wlan0
    link-local      *               255.255.0.0     U     1000   0        0 wlan0
    192.168.3.0     *               255.255.255.0   U     2      0        0 wlan0

    I had tried these commands, which had previously worked in the earlier kernel:

    route del default
    route add default ppp0

    but that broke my wireless internet connection. I then added the default route as shown above with:

    sudo route add default gw 192.168.3.1 wlan0

    So it seems I need to add or change the routing to show a ppp0 connection, but I don't know how to do that.

    Read the article

  • Validate that a checkbox is checked using JavaScript

    - by H(at)Ni
    I was facing a challenge yesterday: I was creating a Visual Web Part and I wanted to validate that the submit button works only if the user has checked an "I agree to terms" checkbox. Something weird was that I tested my code on a normal ASP.NET website and it worked perfectly, while it had a different behaviour inside the web part: whenever I check the checkbox, the button is enabled but it will not fire the ASP.NET validators client-side. It posts back the page, and the validators appear only after that. So I changed my way of thinking and reached a different solution, which is to call a JavaScript function whenever the button is clicked and then check whether the checkbox is checked or not. To illustrate, here is an example of what I'm saying:

    1. Button in the aspx page (note the if/return wrapper: returning false is what actually cancels the postback, while still letting the appended ASP.NET validation script run when the function returns true):

    <asp:Button OnClientClick="if (!CheckForCondition()) return false;" ValidationGroup="CompaniesSection" ID="btnCompaniesSubmit" runat="server" Text="Submit" />

    2. The CheckForCondition() function ($jq is a jQuery noConflict alias):

    <script language="javascript" type="text/javascript">
        function CheckForCondition() {
            if ($jq('#<%= ChkCompanyCheck.ClientID %>:checked').val() == undefined) {
                $jq('#lblCheckBox').show();
                return false;
            }
            else {
                $jq('#lblCheckBox').hide();
                return true;
            }
        }
    </script>

    3. lblCheckBox is simply a label that shows a red asterisk beside the checkbox to indicate that it's a required field:

    <label id="lblCheckBox" style="color:Red;display:none">*</label>

    Read the article

  • JRuby and JVM Languages at JavaOne!

    - by Yolande Poirier
    "My goal with my talks at JavaOne is to teach what is happening at the JVM level and below so people understand better where we are going," explains Charles Nutter, JRuby project lead. In this interview, Charles shared the JRuby features he presented at the JVM Language Summit. They include the foreign function interface (FFI), the IO layer, character transcoding, regular expressions, compilers, coroutines, and more. At JavaOne, he will be presenting:

    Going Native: Bringing FFI to the JVM - The Java Native Runtime (JNR) is a high-speed foreign function interface (FFI) for calling native code from Java without ever writing a line of C. Based on the success of JNR, JDK Enhancement Proposal (JEP) 191 will bring FFI to OpenJDK as an internal API.

    The Emerging Languages Bowl: The Big League Challenge - In this panel discussion, these emerging languages are portrayed by their respective champions, who explain how they may help your everyday life as a Java developer.

    Script Bowl 2014: The Battle Rages On - In this contest, languages that run on the JVM, represented by their respective language experts, battle for most-popular-language status by showing off their new features. Audience members will also vote on a language that should not return in 2015. Returning from 2013 are language gurus representing Clojure, Groovy, JRuby, and Scala.

    Read the article

  • Clean MVC design when there is viewer latency

    - by Tony Suffolk 66
    It isn't clear whether this question has already been answered, so apologies in advance if this is a duplicate. I am implementing a game and trying to design around a clean MVC pattern - so my control plane will implement the rules of the game (but not how the game is displayed), and the view plane implements how the game is displayed and user interaction - i.e. what game items or controls the user has activated.

    The challenge that I have is this: in my game the control plane can move game items more or less instantaneously (the decision about what item to place where - and some of the initial consequences of that placement - are reasonably trivial to calculate), but I want to design the control plane so that the view plane can display these movements either instantaneously or using movement animations. The other complication is that player interaction must be locked out while those game items are moving (similar to chess - you can't attack an opposing piece as it moves past one of your pieces). So do I:

    1. Implement all the logic in the control plane asynchronously - and separate the decision making from the actions - so the control plane decides piece 'A' needs to move to a given place, tells the view plane, but does not implement the move in data until the view plane informs the control plane that the move/animation is complete. A lot of interlock points between the two layers.

    2. Implement all the control plane logic in one place - decisions and movement (keeping track of what moved where) - and pass all the movements in one go to the view plane to do with what it will. The control plane is almost fire-and-forget here.

    3. A hybrid of 1 & 2 - the control plane implements all the moves in a temporary data store, but maintains a second store which reflects what is actually visible to the viewer, based on calls and feedback from the view plane.

    All 3 are relatively easy to implement (the target language is Python), but having never done a clean MVC pattern with view latency before, I am not sure which design is best.
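
    To picture option 1, here is a minimal callback-handshake sketch (in JavaScript for brevity - the project itself targets Python - and all names are illustrative): the control plane locks input, asks the view to animate, and commits the move to its data store only when the view confirms completion.

    // Option 1 sketch: the data commit is deferred until the view confirms
    var controlPlane = {
        inputLocked: false,
        moveItem: function (item, dest, view) {
            this.inputLocked = true;            // lock player interaction
            var self = this;
            view.animateMove(item, dest, function onDone() {
                item.pos = dest;                // commit to the data store
                self.inputLocked = false;       // unlock after confirmation
            });
        }
    };

    // A view plane that "animates" instantly; a real one would tween over time
    var instantView = {
        animateMove: function (item, dest, done) { done(); }
    };

    controlPlane.moveItem({ pos: 'a1' }, 'b2', instantView);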

    Read the article
