Search Results

Search found 4922 results on 197 pages for 'sarp architecture'.


  • Power Your Cloud with Oracle Fusion Middleware

    - by user753488
    Introducing the biggest and most strategic event for Fusion Middleware this year: Power your Cloud with Oracle Fusion Middleware. Running in over 50 cities across the globe, this event is aimed at architects, IT managers, and technical leaders like you who are using Fusion Middleware or trying to learn more about middleware in the context of Cloud computing. Join us for a special kickoff on Wednesday, June 29th in Chicago for the first event in North America. This event features an exclusive keynote from Rick Schultz, VP of Technology Product Marketing. Cloud is certainly all the rage, but what can we make of it? Alex Andrianopoulos, Vice President of Product Marketing for Fusion Middleware, states: “Not since Java was unveiled have we seen something so transformative hit the industry. The promised benefits of Cloud are many, significant, and deliver value to both IT organizations and the Line of Business. The benefits range from lower data center costs, to significantly reduced environmental impact, to the ability to capture more of the opportunities the market presents through increased agility in resource deployment and dramatically reduced time to market.” With an ROI so promising, why isn’t everyone on Cloud already? It’s a question a lot of IT managers are struggling with. While the promised benefits of Cloud computing can be immense, achieving them requires much more than the adoption of a new architecture, the virtualization of servers, or the outsourcing of some or all of the IT resources. These may be useful steps towards a Cloud computing blueprint, but on their own they do not deliver Cloud computing and its associated benefits to the enterprise. This is exactly what the event series will address: ways you can leverage the Complete, Open and Integrated capabilities of Oracle Fusion Middleware today to get one step closer to Cloud, whether you are leveraging Exalogic Elastic Cloud to consolidate your applications; improving agility with Oracle SOA to generate a foundation for shared data services; securing and managing your Cloud using Oracle Identity Management and Oracle Enterprise Manager; migrating from mainframe to Cloud using Oracle Tuxedo, Coherence and GoldenGate; or building applications in the Cloud more swiftly and easily with Oracle’s WebCenter Suite. Join us for this first-of-its-kind event in Chicago this week by registering now, or find an event near you. Learn more about Oracle Fusion Middleware and Cloud computing on the Oracle.com website at http://www.Oracle.com/goto/Middleware4Cloud

    Read the article

  • MySQL Connect - Save The Date!

    - by Bertrand Matthelié
    Oracle today announced that it will hold the MySQL Connect Conference on September 29 and 30 in San Francisco! You can read the Press Release here. MySQL Connect will be jam-packed with technical sessions, hands-on labs and Birds of a Feather (BOF) sessions delivered by MySQL community members, users, customers and MySQL engineers from Oracle. The event is a unique opportunity to learn about the latest MySQL features, discuss product roadmaps, and connect directly with the engineers behind the latest MySQL code. The conference will include six tracks: Performance and Scalability, High Availability, Cloud Computing, Architecture and Design, Database Administration, and Application Development. The call for papers will open on April 16, 2012 for approximately three weeks. MySQL users and community members are encouraged to submit session proposals. Start thinking about your proposals! Registration will also open on April 16.

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There can be only two reasons for this: either the developer or DBA didn’t know the difference between good and bad practices, or they concealed the debt. Neither reflects well on their professional competence. Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you’re guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on. It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It’s far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you’ve inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It’s often hard to justify this ‘debt paying’ time to the project owners and managers; it just looks as if you are making no progress, in marked contrast to your predecessor. There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor with which one tracks a bug. What’s the best way to do this? Ideally, we’d have some kind of objective assessment of the level of technical debt in a software project, although that smacks of science fiction even as I write it. I’d be interested to hear of any methods you’ve used, but I’m sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager… Cheers, Tony.
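
    The "document it with the same rigor as a bug" point can be made concrete in several ways; here is one small, purely illustrative sketch in Java (the annotation and its fields are invented for this example, not a standard library):

      import java.lang.annotation.Documented;
      import java.lang.annotation.Retention;
      import java.lang.annotation.RetentionPolicy;

      // Hypothetical annotation for recording a deliberate design compromise right where it lives,
      // with enough detail that it can be tracked like a bug.
      @Documented
      @Retention(RetentionPolicy.RUNTIME)
      @interface TechDebt {
          String reason();                 // why the shortcut was taken (deadline, missing domain knowledge, ...)
          String raisedBy();               // who made and disclosed the compromise
          String ticket();                 // id of the tracking issue so it appears in planning
          String plannedFix() default "";  // the intended long-term solution, if known
      }

      class OrderService {
          @TechDebt(reason = "Retry count hard-coded to hit the release date",
                    raisedBy = "jsmith",
                    ticket = "DEBT-42",
                    plannedFix = "Read the retry policy from configuration")
          void submitOrder() {
              // ... the hack lives here, clearly flagged and linked to a tracked item ...
          }
      }

    A code search or a small build step can then list every such occurrence, giving the project owner the same visibility a bug report would.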

    Read the article

  • links for 2011-01-12

    - by Bob Rhubart
    WebCenter Spaces 11g PS2 Template Customization (Javier Ductor's Blog): "Recently, we have been involved in a WebCenter Spaces customization project. A customer sent us a prototype website in HTML, and we had to transform Spaces to set the same look and feel as in the prototype..." Javier Ductor (tags: oracle otn webcenter enteprise2.0)
      - Matt Carter: Risky Business: "Incorporating risk detection and mitigation capabilities into apps is becoming all the rage. There are plenty of real-life examples of cases where prevention of cyber-security threats and fraudsters might have kept governments and companies out of the news, and with more money in their accounts." (tags: oracle otn security middleware)
      - John Brunswick: 5 Surprisingly Good Benefits of Corporate Blogs: "Some may still propose that not all corporations are going to be able to provide the five benefits above and are more focused around shameless self promotion of products and services. If that is the case, that corporation is most likely not producing something of high value." John Brunswick (tags: oracle otn enterprise2.0 blogging)
      - InfoQ: IT And Architecture: Inside-Out Perspectives: The software industry is in disarray, costs are escalating, and quality is diminishing. Promises of newer technologies, processes and methodologies in IT are still far from materializing on any significant scale. Bruce Laidlaw and Michael Poulin, each with more than 30 years of experience, compared notes on the past and present of IT and provide insights on what IT needs to make progress. (tags: ping.fm)
      - SOA & Middleware: Canceling a running composite instance - example: Useful tips from Niall Commiskey. (tags: soa middleware oracle)
      - BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations (Oracle E-Business Suite Technology): "A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3." Steven Chan (tags: oracle bpel)
      - Marc Kelderman: OSB: Deploy Service Level Agreement (SLA), aka Alert Rule: "The big issue with these SLAs is the deployment. If you have dozens of services, with multiple operations, and you have a lot of environments it takes a while to create them...[But] I have a nice workaround." Marc Kelderman (tags: oracle otn soa osb sla)
      - @myfear: Java EE 7 - what's coming up for 2012? First hints.: "Even if the actual Java EE 6 version is still not too widespread, we already have seen the first signs of the next EE 7 version written to the sky." Markus "myfear" Eisele (tags: oracle otn oracleace java)

    Read the article

  • eSTEP Newsletter October 2012 now available

    - by uwes
    Dear Partners, we would like to inform you that the October '12 issue of our Newsletter is now available. The issue contains information on the following topics:
      - News from Corp: Oracle Announces Oracle Solaris 11.1 at Oracle OpenWorld; Oracle Announces Oracle Exadata X3 Database In-Memory Machine; Oracle Enterprise Manager 12c introduces New Tools and Programs for Partners; Oracle Unveils First Industry-Specific Engineered System - the Oracle Networks Applications Platform; Oracle Unveils Expanded Oracle Cloud Offerings; Oracle Outlines Plans to Make the Future Java During JavaOne 2012 Strategy Keynote; Some interesting Java Facts and Figures; Oracle Announces MySQL 5.6 Release Candidate
      - Technical Section: What's up with LDoms (4 tech articles); Oracle SPARC T4 Systems cut Complexity, cost of Cryptographic Tasks; PeopleSoft Enterprise Financials 9.1; PeopleSoft HCM 9.1 combined online and batch benchmark; Product Update Bulletin Oracle Solaris Cluster Oct 2012; Sun ZFS Storage 7420; SPARC Product Line Update; SPARC M-series - New DAT 160 plus EOL of M3000 series; SPARC SuperCluster and SPARC T4 Servers Included in Enterprise Reference Architecture Sizing Tool; Oracle Magazine
      - Learning & Events: Recently delivered Techcasts: An Update after the Oracle Open World, An Update on OVM Server for SPARC; Update to Oracle Database Appliance
      - References: Bridgestone Aircraft Tire Reduces Required Disk Capacity by 50% with Virtualized Storage Solution; Fiat Group Automobiles Aligns Operational Decisions with Strategy by Using End-to-End Enterprise Performance Management System; Birkbeck, University of London Develops World-Class Computer Science Facilities While Reducing Costs with Ultrareliable and Scalable Data Infrastructure
      - How to: Introducing Oracle System Assistant; How to Prepare a ZFS Storage Appliance to Serve as a Storage Device; Migrating Oracle Solaris 8 P2V with Oracle Database 10.2 and ASM; White paper on Best Practices for Building a Virtualized SPARC Computing Environment; How to extend the Oracle Solaris Studio IDE with NetBeans Plug-Ins; How I simplified Oracle Database 11g Installation on Oracle Linux 6
    You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.
    URL: http://launch.oracle.com/
    PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback that helps us improve the service and information we deliver is appreciated. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and I was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified): There are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how do you determine whether something is new knowledge or existing knowledge that is merely rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That is still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to:
      - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones or ones which will be written in the future) that need to access the database?
      - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
      - Design a new communication protocol to allow faster replication of data between two data centers of the company?
      - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
      - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
      - Enhance an existing application by adding gestures on touch screens, after studies and testing show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
      - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
      - Create a Domain-Specific Language (DSL)?
    In short, how can I determine whether I'm doing R&D while working on something?

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-29

    - by Bob Rhubart
    ORCLville: OOW 2012 - Crystal Ball: Oracle ACE Director Floyd Teter cooks up some tongue-in-cheek predictions for news and announcements that might come out of Oracle OpenWorld 2012. What's your prediction?
      - Oracle Optimized Solutions at Oracle OpenWorld 2012 | Oracle Hardware: Hardware matters, too! The people behind the Oracle Hardware blog have put together a list of Oracle OpenWorld 2012 sessions focused on Oracle Optimized Solutions, "designed, pre-tested, tuned and fully documented architectures for optimal performance and availability." Just plug the session ID numbers into Schedule Builder and you're good to go.
      - AIX Checklist for stable OBIEE deployment | Dick Dunbar: "OBIEE is a complicated system with many moving parts and connection points," according to Oracle Business Intelligence escalation engineer Dick Dunbar. "The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators."
      - Demo for OPN: Coherence Management with EM Cloud Control 12c: Oracle Partner Network members can check out a new Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics. "The demo flow showcases the key enhancements made in Enterprise Manager 12c release which includes new customizable performance summary, cache data management and configuration management," according to the WebLogic Partner Community EMEA blog.
      - The Pragmatic Architect: To Boldly Go Where No One Has Gone Before | Frank Buschmann: "Many architects have technical knowledge that's both impressive and sound, which is indeed an inevitable basis for design success," says Frank Buschmann. "Yet, a lot of software projects fail or suffer due to severe challenges in their architecture. The key to mastery is how architects approach design, what they value, and where they focus their attention and work."
      - As retail dies, whom will be the winners? | Peter Evans-Greenwood: "The problem for many retailers is that how consumers shop has changed but the retailers haven't adapted," says Peter Evans-Greenwood. "Their sole virtue was to be the last step in a supply chain delivering somebody else's products to the consumer. However, being the last step in the supply chain is no longer a virtue when consumers skip across channels and can reach around the globe, no longer dependant on or limited to what they can find locally."
      - Thought for the Day: "Brains require stimulation. If you're locked into a pattern of work, work, and more work, your brain soon habituates - the same way that it lets you stop hearing a clock ticking. So, if you want to be more effective at work, you must, paradoxically, be less single-minded in your devotion to work. Anything you do—anything—that stimulates new segments of your brain will make you a more effective programmer or analyst. I promise, with a money-back guarantee." — Gerald M. Weinberg (Source: SoftwareQuotes.com)

    Read the article

  • Turnkey with LightSwitch

    - by Laila
    Microsoft has long wanted to find a replacement for Microsoft Access. The best attempt yet, due out in or before September, is Visual Studio LightSwitch, which is said to make it as 'easy as flipping a switch' to use Silverlight to create simple form-driven business applications. It is easy to get confused by the various initiatives from Microsoft. No, this isn't WebMatrix. There is no 'Razor', for this isn't meant for cute little ecommerce sites, but is designed to build simple database applications of the card-box type. It is more clearly a .NET-based solution to the problem that every business seems to suffer from: the plethora of Access-based and Excel-based 'private' and departmental database applications. These are a nightmare for any IT department since they are often 'stealth' applications built by the business in the teeth of opposition from the IT department zealots. As they are undocumented, it is scarily easy to bring a whole department into disarray by decommissioning a PC tucked under a desk somewhere. With LightSwitch, it is easy to rewrite such applications in a standard, maintainable way, using a SQL Server database deployed somewhere reasonably safe such as Azure. Even SharePoint or Windows Communication Foundation can be used as data sources. Oracle's ApEx has taken off remarkably well, and has shaken the perception that, for the business user, Oracle must remain a mystic force accessible only to the priests and acolytes. Microsoft, by comparison, had only Access, which was first released in 1992, the year of the Madonna conical bustier. It looks just as dated. Microsoft badly needed an entirely new solution to the same business requirement that led to Access's and FoxPro's long-time popularity, but one with the same allure as ApEx. LightSwitch is sound in its ideas, and comfortingly conventional in its architecture. By giving easy access to SQL Server databases, and providing a 'thumb and blanket' migration path to Access-heads, LightSwitch seems likely to offer a simple way of pulling more Microsoft users into the .NET community. If Microsoft puts its weight behind it, then it will give some glimmer of hope to the many Silverlight developers that Microsoft is capable of seeing through its .NET revolution.

    Read the article

  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well: several main models, corresponding views, main logic and data structures were modeled "as they should be" and fully separated from rendering; the algorithmic part was also implemented apart from the main models and had a minor number of intersection points. I would call that design scalable, customizable, easy to implement, interacting mostly based on "black box interaction" and, well, very nice. Now, what was done:
      - I started some implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts.
      - I had a document describing the coding style and examples of that coding style's usage (my own written code).
      - I enforced the usage of more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers), etc.
      - I documented the purpose of concrete interface implementations and how they should be used.
      - Unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions.
    I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team)?
      - 3 different coding styles all over the project (I guess two of them agreed to use the same style :), and the same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures.
      - Absolute lack of documentation for the recently implemented parts.
      - What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on.
      - Half of the used interfaces now contain member variables (sic!).
      - Raw pointer usage almost everywhere.
      - Unit tests disabled, because "(Rev.57) They are unnecessary for this project".
      - ... (and that's probably not everything).
    Commit history shows that my design was interpreted as overkill, and people started combining it with personal bicycles and reimplemented wheels, and then had problems integrating code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I suspect some memory leaks. Is there anything that can be done in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Did someone have a similar situation? Basically I thought that a good start for the project (well, I did everything that I could) would probably lead to something nice; however, I understand that I'm wrong. Any advice would be appreciated, and sorry for my bad English.

    Read the article

  • Gauging Maturity of your BPM Strategy - part 1 / 2

    - by Sanjeev Sharma
    In this post I will discuss the essence of maturity assessment and the business imperative for doing the same in the context of BPM. Social psychology purports that an individual progresses from being a beginner to an expert in a given activity or task along four stages of self-awareness:
      - Unconscious Incompetence, where the individual does not understand or know how to do something, does not necessarily recognize the deficit, and may even deny the usefulness of the skill.
      - Conscious Incompetence, where the individual recognizes the deficit, as well as the value of a new skill in addressing the deficit.
      - Conscious Competence, where the individual understands or knows how to do something, but demonstrating the skill requires explicit concentration.
      - Unconscious Competence, where the individual has had so much practice with a skill that it has become "second nature" and serves as a basis for developing other complementary skills.
    We can extend the above thinking to an organization as a whole by measuring an organization’s level of competence in a specific area or capability, as an aggregate of the competence levels of the individuals it is comprised of. After all, organizations, like individuals, evolve through experience, developing “memory” and capabilities that are shaped through a constant cycle of learning, un-learning and re-learning. Hence the key to organizational success lies in developing these capabilities to enable execution of its strategy in line with the external environment, i.e. demand, competition, economy, etc. However, developing a capability merits establishing a baseline in order to:
      - Assess the magnitude of improvement from past investments
      - Identify gaps and shortcomings
      - Prioritize future investments in the right areas
    A maturity assessment is essentially an organizational self-awareness check that is aimed at depicting the “as-is” snapshot of an existing capability, in order to guide future investments to develop that capability in line with business goals. This, effectively, is the essence of a maturity assessment. Organizational capabilities stem from an organization’s architecture, routines, culture and intellectual resources, which are implicitly and explicitly embedded in its business processes. Given that business processes underpin the realization of organizational capabilities, this is what has prompted business transformation and process management efforts. Thus, the BPM capability of an organization needs to be measured on an ongoing basis to ensure delivery of its planned benefits. In my next post I will describe Oracle’s BPM Maturity assessment methodology.

    Read the article

  • ArchBeat Facebook Friday: Top 10 Shared Links - May 23-29, 2014

    - by OTN ArchBeat
    Among the 5,144 fans of the OTN ArchBeat Facebook Page, the following Top 10 items were the most popular over the last seven days, May 23-29, 2014:
      - GlassFish/Java EE Community Open Forum Today! | Reza Rahman: Have questions about GlassFish? Java EE/GlassFish evangelist Reza Rahman has answers, and you can pick his brain tomorrow during an online forum organized by the London GlassFish User Group and C2B2. The event is free, but you must register in order to participate. Click the link for more information.
      - Twitter Tuesday - Top 10 @ArchBeat Tweets - May 20-26, 2014: The top 10 @OTNArchBeat tweets for the week of May 20-26, 2014. Topics covered include ADF, Cloud, GoldenGate, KScope14, OBIEE, ODI, WebLogic, WebCenter, and more.
      - FrameworkFolders Support has come to Oracle WebCenter Portal | JayJay Zheng: Interested in working with Framework Folders in Oracle WebCenter Portal? Oracle ACE JayJay Zheng reviews the essentials.
      - Video: Programming Best Practices - ADF Business Components | Frank Nimphius: Frank Nimphius discusses best practices and recommendations for ADF Business Components in the latest video from ADF Architecture TV.
      - Video: Kscope 2014 Preview: Data Modeling and Moving Meditation with Kent Graziano: For your mind and your body! Oracle ACE Director Kent Graziano previews his Kscope 2014 data modeling presentations and the early morning Chi Gung sessions he will once again lead for Kscope attendees.
      - OAG and OES Integration for Web API Security: skin and guts | Andre Correa: A-Team architect Andre Correa's post examines a strategy for web API security that uses OAG (Oracle API Gateway) and OES (Oracle Entitlements Server).
      - Getting Started with Coherence*Web in WebLogic Server 12.1.2 | Tim Middleton: Solution architect Tim Middleton shows you how to configure Coherence*Web in WebLogic Server 12.1.2 and deploy a basic web application.
      - SOA and Business Processes: You are the Process!: Part of the 13-part "Industrial SOA" article series, this article looks at best practices for modeling and managing effective business processes.
      - Authentication in Oracle Identity Federation / IdP | Damien Carru: Damien Carru discusses authentication when OIF acts as an IdP and how the server can be configured to use specific OAM Authentication Schemes to challenge the user.
      - Caveats on Using WebLogic Server with JDK7 | JayJay Zheng: Quick tech tips from Oracle ACE JayJay Zheng.

    Read the article

  • MS in Computer Science after BE in electronics

    - by Abhinav
    I am in the third year of my Bachelors in Electronics and Electrical Communication, but from the first year I have been interested in Computer Science. At that time it was just my hobby, but in my second year, when I joined robotics, my love for computer science grew. My team and I came in the top three in two national competitions (technical fests of different IITs), where we used image processing, hardware interfacing, etc. But then I realised that Computer Science is not just about coding. I took many courses from free online schools like Udacity and Coursera in subjects related to Artificial Intelligence, Building a Search Engine, Design and Analysis of Algorithms, Programming a Robotic Car, Programming Languages, Machine Learning, Software Engineering as a Service, WebApps Engineering, Compilers, Applied Cryptography, etc. I also did some courses in Core and Advanced Java in my second year at a training institute, and I will be taking courses in Statistics, Databases, and Discrete Mathematics from the 25th of June. Now I realize how vast the field of Computer Science is, and how efficient you become at choosing algorithms and classifying problems into different subfields which have been thoroughly researched, so you don't always resort to brute force or naive programming. This field has become a kind of passion for me. Added to that, I am also doing a 6-month internship in the software field at Texas Instruments, where I am working on automation and algorithms. I also have some 5-6 good college-level projects in software and robotics. I like electronics too, but only some fields, like Operating Systems (this subject was there in Electronics also), Microprocessors, Digital, Computer Architecture, DSPs, etc. I really want to pursue an MS in some field of Computer Science, and I am giving the GRE in October/November. So far I have a good CGPA of around 9.4/10, and one year of college is still left. Do I have any chance that some good university in the US will consider me for an MS in a field related to Computer Science or robotics? Also, can you suggest some things that I can do during this one year to increase my chances for the MS, or should I apply for EECS (Electrical Engineering and Computer Science) and then shift more towards Computer Science as my major option? My main aim is to do a PhD after the MS in CS if I can manage that somehow. I know that I will have to put in much extra effort to understand things in the MS compared to CS undergraduates, but I will do that with full dedication; also, when I talk with the CS students at my college, or with software employees during my internship, I don't feel that I am missing very much of what they know, and I have been very comfortable during my internship.

    Read the article

  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI’s Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM’s use-case driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case driven approach aids a project team in managing the project’s scope. The use-case model is one of several tools in OUM that are used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project’s scope. Here are some of the primary benefits of using the use-case model as part of the effort of establishing and managing project scope:
      - The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for the decisions regarding architecture and design and how they relate to the project's objectives.
      - Once agreed upon, the model can be put under change control, and any updates to the model can then be quickly identified as potentially affecting the project’s scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources and schedule.
      - A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use cases can be tackled early in the project.
      - The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful not only in managing scope, but in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities.
    The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication and you have an effective tool for reducing the risk of overruns in budget and/or time due to out-of-control scope changes. Now that you’ve had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.

    Read the article

  • 50 Billion Served: Java Embedded on Devices

    - by Tori Wieldt
    It doesn't matter if it is 50 billion or 24 billion; suffice it to say that there will be MANY connected devices in the year 2020. With just 24 billion devices, they will outnumber humans six to one! So as a developer, you don't want to ignore this opportunity. What if you could use your Java skills and deploy an app to a fraction of these devices (don't be greedy, how about just, say, 118,000 of them)? Fareed Suliman, Java ME Product Manager, had lots of good news for Java developers in his presentation Modernizing the Explosion of Advanced Microcontrollers with Embedded Java at ARM TechCon in Santa Clara, CA last week. "A radical architecture shift is underway in this space, from proprietary to standards-based," he explained. He pointed out several advantages to using Embedded Java for devices:
      - Java is a proven and open standard.
      - Java provides connectivity, encryption, location, and web services APIs.
      - You don't have to focus on and keep reinventing the plumbing below the JVM.
      - Abstracting the software from the hardware allows you to repeat your app across many devices.
      - Abstracting the software from the hardware also allows parallel development, so you can get your app done more quickly.
      - You already know Java (or you can hire lots of Java talent).
      - Java is a full ecosystem, with Java Embedded plugins for IDEs like Eclipse and NetBeans.
      - Java ME allows for in-field software upgrades.
    Suliman mentioned two ways developers can start using Java Embedded today:
      - Java ME Embedded Suite 7.0: Oracle Java Embedded Suite is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to embedded devices such as gateways and concentrators.
      - Oracle Java ME Embedded 3.2: Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small embedded, low-power devices such as microcontrollers and other resource-constrained hardware without screens or user interfaces.
    Think tiny. Really tiny. And think big. Read more about Java Embedded at the Oracle Technology Network, and read The Java Source blog post Java Embedded Releases from September.

    Read the article

  • 3 Reasons You Need To Know Something About Every Technology

    - by Tim Murphy
    I make my living as a consultant and a general technologist. I credit my success to the fact that I have never been afraid to pick up any product, language or platform needed to get the job done. While Microsoft technologies are my mainstay, I have done work on mainframe and UNIX platforms and have worked with a wide variety of database engines. Each one has its use, and most times it is less expensive to find a way to communicate with an existing system than to replace it. So what are the main benefits of expending the effort to learn a new technology?
      - New ways to solve problems
      - Accelerated development
      - The ability to advise clients and win new business opportunities
    By new technology I mean ones that you haven't had experience with before. They don't have to be the one that just came out yesterday. As they say, those who do not learn from history are bound to repeat it. If you can learn something from an older technology, it can be just as valuable as the shiny new one. Either way, when you add another tool to your kit you get a new view on each problem you face. This makes it easier to create a sound solution. The next thing you can learn from working with different products and techniques is how to solve problems more efficiently. Many times, if you are working with a new language you will find that there are specific design patterns that are used with it in normal use. These can usually be applied with most languages; you just needed to be exposed to them. The last point is about helping your clients and helping yourself. If you can get in on technologies early, you will have an advantage over your competition in the market. You will also be able to honestly advise your client on why they should or should not go with a new product. Being able to compare products and their features is always an ability that stakeholders appreciate. You don't need to learn every detail of a product. Learn enough to function and get an idea of how to use the technology. Keep eating those technology Wheaties and you will be ready to go the distance in any project. del.icio.us Tags: Technology,technologists,technology generalist,Software Architecture

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again here. I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase. In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team. So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for the sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base is that is online, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case he would be in two simultaneous battles, one with each base. When users go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have some automaton logic of some sort, so they can fend for themselves in a limited way using basic rules until their user goes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic. I think this analogy is good enough to frame my question: what sort of platforms are available to develop this sort of game? I've been looking at SmartFoxServer, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. I will be developing for iOS devices at first, so any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darebase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.
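
    The capture rule described above ("whoever left their base most recently wins the meeting") is concrete enough to sketch in code. The Java below is only an illustration of that rule with invented class and field names; the real implementation would live on whichever server platform is eventually chosen:

      import java.time.Instant;

      // Minimal sketch of the darebase capture rule: when two players meet,
      // the one who left their base more recently captures the other.
      class Player {
          final String id;
          String team;            // the base/team the player currently belongs to
          Instant leftBaseAt;     // timestamp of the player's most recent departure from a base

          Player(String id, String team, Instant leftBaseAt) {
              this.id = id;
              this.team = team;
              this.leftBaseAt = leftBaseAt;
          }
      }

      class CaptureRule {
          /** Returns the player who captures the other when the two meet in the field. */
          static Player resolveMeeting(Player a, Player b) {
              // The more recent departure wins; the loser joins the winner's team.
              Player winner = a.leftBaseAt.isAfter(b.leftBaseAt) ? a : b;
              Player loser = (winner == a) ? b : a;
              loser.team = winner.team;
              return winner;
          }
      }

    Everything else in the question (persistence while offline, the simple automaton behaviour, matchmaking against the nearest base) layers on top of a rule like this, which is why settling the server architecture first makes sense.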

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to redesign the class structure of my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this: let us assume there is an abstract class called Person which has certain properties to define a generic person, such as name, age, date of birth, etc. There are two classes, Student and Teacher, that implement this abstract class and add their own unique properties to it. I have designed all three classes to include all the operational logic (the details of which are not relevant in the context of this question). Now, I need to create views/reports/data grids which contain details from multiple classes; for example, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application, each of which uses data from 3-4 tables and has multiple input parameters. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the Student class, the Teacher class, or any other class currently present. a) Should I create a new class, say a dataView class, design it following the MVC pattern, and keep the query there? What about the other views? How do they fit into this architecture? b) Should I not keep the query in code at all, and make it a DB view? c) Am I completely wrong in my approach? If so, what is the right approach? My considerations are as follows: a) it should be easy to add new views later on if a requirement comes, without having to copy-paste-modify code; b) I would like to make it as loosely coupled as possible, so that minor DB structure changes do not break it. I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keywords). Edit 1: Modified the title to make it more relevant. Edit 2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow, which gave me the detailed answer to clear up the confusion.
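
    One direction the question hints at, a dedicated query/read-model class that sits beside the domain models rather than inside them, can be sketched roughly as follows. The original application is PHP/MySQL; this is only an illustration of the shape of the idea, written in Java with invented table and class names rather than a recommendation of any specific framework:

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.util.ArrayList;
      import java.util.List;

      // A read-only row tailored to one report; it is not a domain entity.
      record StudentProjectRow(String studentName, String projectTitle, String teacherName) {}

      // The report query lives in its own class, so neither Student nor Teacher
      // has to know about this particular view of the data.
      class StudentsByMentorQuery {
          private final Connection connection;

          StudentsByMentorQuery(Connection connection) {
              this.connection = connection;
          }

          List<StudentProjectRow> findChemistryStudentsMentoredBy(String teacherName) throws SQLException {
              String sql = """
                  SELECT s.name, p.title, t.name
                  FROM students s
                  JOIN projects p ON p.student_id = s.id
                  JOIN teachers t ON t.id = p.mentor_id
                  WHERE p.subject = 'Chemistry' AND t.name = ?
                  """;  // table and column names are illustrative only
              try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                  stmt.setString(1, teacherName);
                  try (ResultSet rs = stmt.executeQuery()) {
                      List<StudentProjectRow> rows = new ArrayList<>();
                      while (rs.next()) {
                          rows.add(new StudentProjectRow(rs.getString(1), rs.getString(2), rs.getString(3)));
                      }
                      return rows;
                  }
              }
          }
      }

    Each report gets its own small query class (or a DB view behind one), which keeps the domain classes focused on behaviour and makes it cheap to add new reports later.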

    Read the article

  • Why does switching users completely hang my system every time?

    - by Stéphane
    I have a fresh install of 11.04 64-bit, with 2 administrator accounts and 4 normal accounts. The 4 normal accounts (the kids' accounts) don't have passwords; they can log in simply by clicking on their names. When any of the users -- either admin or normal -- tries to switch to another account by clicking in the top-right corner of the screen and selecting another user, the screen goes black and the entire system locks up. Even CTRL+ALT+F1 through F7 does nothing. This is reproducible 100% of the time on this system. I can ssh into the box when the console locks up, and by running top, I see that Xorg is consuming about 100% of the CPU. Looking at the output of "ps axfu" in bash while the system is in this "locked up" state, here is the lightdm and X process tree:
      USER     PID  %CPU %MEM    VSZ    RSS TTY  STAT START TIME COMMAND
      root     1153  0.0  0.1 183508   4292 ?    Ssl  Dec26 0:00 lightdm
      root     2187  0.4  4.6 265976 164168 tty7 Ss+  00:43 0:21  \_ /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
      stephane 2612  0.0  0.3 266400  10736 ?    Ssl  01:52 0:00  \_ /usr/bin/gnome-session --session=ubuntu
      stephane 2650  0.0  0.0  12264    276 ?    Ss   01:52 0:00  |  \_ /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session /usr/bin/gnome-session --session=ubuntu
      stephane 2703  0.8  3.0 562068 106548 ?    Sl   01:52 0:08  |  \_ compiz
      stephane 2801  0.0  0.0   4264    584 ?    Ss   01:52 0:00  |  |  \_ /bin/sh -c /usr/bin/compiz-decorator
      stephane 2802  0.0  0.3 265744  13772 ?    Sl   01:52 0:00  |  |  \_ /usr/bin/unity-window-decorator
      ...cut...
      root     3024 80.6  0.3 107928  13088 tty8 Rs+  01:53 12:34  \_ /usr/bin/X :1 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch
    That last process, pid #3024 in this case, is what has the CPU pegged. In case it matters (I suspect it might), here is what I think may be the relevant information for my video card, taken from /var/log/Xorg.0.log:
      [ 3392.653] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so
      [ 3392.653] (II) Module glx: vendor="FireGL - AMD Technologies Inc."
      [ 3392.653]     compiled for 6.9.0, module version = 1.0.0
      ...
      [ 3392.655] (II) LoadModule: "fglrx"
      [ 3392.655] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so
      [ 3392.672] (II) Module fglrx: vendor="FireGL - ATI Technologies Inc."
      [ 3392.672]     compiled for 1.4.99.906, module version = 8.88.7
      [ 3392.672]     Module class: X.Org Video Driver
      ...
      [ 3392.759] (==) fglrx(0): ATI 2D Acceleration Architecture enabled
      [ 3392.759] (--) fglrx(0): Chipset: "AMD Radeon HD 6410D" (Chipset = 0x9644)
    Lastly: I did see this posting: Change user on 11.10 hangs system ...but I checked, and the libpam-smbpass package isn't installed on this system.

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regards to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers included, into re-usable components. This means the module would handle the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store. My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know of that exist and support this kind of thing? My thoughts so far (I mostly use Entity Framework and SQL Server DB Projects):
      - I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4 and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities that my module encapsulates would be disconnected from the rest of the application because they belonged to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible.
      - I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nicely with Entity Framework (if at all), though. I like the idea of building on proven patterns and practices to encapsulate established, reusable functionality.
      - I thought of abandoning Entity Framework for the module and just creating a separate DB schema for the module with its own set of stored procedures and ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module, I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext.
    Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
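
    The "register the module and it ensures its schema exists" idea can be sketched in a stack-agnostic way. The question is about .NET and Entity Framework, so treat the Java below purely as a shape for the contract (all names are invented); EF code-first migrations or a deployment script would play the role of ensureSchema in practice:

      import java.sql.Connection;
      import java.sql.SQLException;
      import java.sql.Statement;

      // Hypothetical contract every reusable module fulfils: it owns its schema
      // and exposes its functionality only through this interface.
      interface AppModule {
          String name();
          void ensureSchema(Connection connection) throws SQLException;  // must be idempotent
      }

      // Example: a self-contained logging module that creates its table on first registration.
      class LoggingModule implements AppModule {
          public String name() { return "logging"; }

          public void ensureSchema(Connection connection) throws SQLException {
              try (Statement stmt = connection.createStatement()) {
                  // CREATE TABLE IF NOT EXISTS keeps registration idempotent (exact syntax varies by database).
                  stmt.execute("CREATE TABLE IF NOT EXISTS logging_entries ("
                          + "id BIGINT PRIMARY KEY, logged_at TIMESTAMP, message VARCHAR(4000))");
              }
          }
      }

      class ModuleRegistry {
          // Called once at application start-up for each module the host application uses.
          static void register(AppModule module, Connection connection) throws SQLException {
              module.ensureSchema(connection);
              System.out.println("Registered module: " + module.name());
          }
      }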

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections. The first was that all communications to SQL Azure need to be encrypted; it's a simple addition to the connection string, depending on the library you use. That brought up another interesting point. They had been using something that looked like this, using the .NET provider:
      Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True;
    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:
      Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True;
    Notice the difference? It's the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don't list them here) the server name parameter isn't sent in a way the load balancer understands, so you need to include the server name right in the login so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername in your connection string just to be safe. And plan for that extra space…
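
    As an aside, the same user@server login shape appears with other client libraries as well. The snippet below is only an illustrative sketch using the Microsoft JDBC driver, with placeholder server, database, and credential values; check your own driver's documentation to confirm whether it needs the @server suffix:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.SQLException;

      public class SqlAzureConnect {
          public static void main(String[] args) throws SQLException {
              // Placeholder values; note encrypt=true and the login written as user@server.
              String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                      + "database=myDataBase;"
                      + "user=LoginName@myserver;"
                      + "password=myPassword;"
                      + "encrypt=true;trustServerCertificate=false;";

              try (Connection conn = DriverManager.getConnection(url)) {
                  System.out.println("Connected: " + !conn.isClosed());
              }
          }
      }

    The key details mirror the .NET string above: encryption switched on, and the login written as LoginName@servername.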

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4-based systems, so it's probably a good time to review what I consider to be the best practices.
      - Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release, (b) every release we improve the generated code, so you will see things get better, and (c) old releases cannot know about new hardware.
      - Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better, as this will add within-file inlining.
      - Always generate debug information, using -g. This allows the tools to attribute information to lines of source. This is particularly important when profiling an application.
      - The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on these architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.
      - Crossfile optimisation (-xipo) can be very useful, particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as a favourite compiler optimisation, then this is mine!
      - Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes that are dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback.
      - The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best-performing binary, but with a few caveats: it assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.
      - The SPARC64 processors, T3, and T4 implement floating point multiply-accumulate instructions. These can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also an architecture that supports the instruction (at least -xarch=sparcfmaf).
    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.

    Read the article

  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity and variety; mobile and social computing also need to operate on such data. Replicating and transferring large swaths of data is one challenge faced in the field of data integration. However, smaller chunks of data aggregated from a variety of sources present an even more interesting challenge for the industry. Over the past few decades, technology trends have focused on best user experience, operating systems, high performance computing, high performance web sites, analysis of warehouse data, service oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' becomes mandatory in future technology trends, although no solution can make dark data useful in a single day. Useful data can be characterized by contextual, personalized and on-time delivery. In most cases, data from a single source may not complete the picture. Data has to be combined and computed from various sources, where it may be captured as hybrid data, meaning a combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how data is retrieved and computed; they are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, Cloud integration, and Big Data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files and XML). The ODSI engine is built on XQuery, so an ODSI user can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency in repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.

    Read the article

  • Microsoft Azure News: Capturing VM Images

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2014/05/21/microsoft-azure-news-capturing-vm-images.aspx
    If you have a Virtual Machine (VM) in Microsoft Azure that has a specific configuration, it used to be difficult to clone that VM. You had to sysprep the VM and clone the data disks. This was slow, prone to errors, and stopped you from being productive. No more! A new option, called Capture, allows you to easily select a VM, running or not. The capture will copy the OS disk and data disks and create a new image out of them automatically for you. This means you can now easily clone an entire VM without affecting productivity. To capture a VM, simply browse to your Virtual Machines in the Microsoft Azure management website, select the VM you want to clone, and click on the Capture button at the bottom. A window will come up asking you to name your image. It took less than 1 minute for me to build a clone of my server. And because it is stored as an image, I can easily create a new VM with it. So that’s what I did… And that took about 5 minutes total. That’s amazing… To create a new VM from your image, click on the NEW icon (bottom left), select Compute/Virtual Machine/From Gallery, and select My Images from the left menu when selecting an image. You will find your newly created image. Because this is a clone, you will not be prompted for a new login; the user id/password is the same.
    About Herve Roggero
    Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

    Read the article

  • Traditional POS is Dead

    - by David Dorf
    Traditional POS is dead -- I've heard that one before. Here's an excerpt from Joe Skorupa's blog over at RIS where he relayed ten trends that were presented at NRF. 7. Mobile POS signals death of traditional POS. Shoppers don't love self-checkout, but they prefer it to long queues or dealing with associates. Fixed POS is expensive and bulky. Mobile POS frees floor space for other purposes and converts associates from being cashiers to being sales assistants that provide new levels of customer service and incremental basket sales. In addition to unplugging the POS, new alternatives are starting to take hold - thin client, POS as a service, and replacing POS software with e-commerce platforms. I'll grant that in some situations for some retailers there might be an opportunity to ditch the traditional POS, but for the majority of retailers that's just not practical. Take it from a guy who had to wake up at 3am after every Thanksgiving to monitor POS systems across the US on Black Friday. If a retailer's website goes down on Black Friday, they will take a significant hit. If a retailer's chain-wide POS system goes down on Black Friday, that retailer will cease to exist. Mobile POS works great for Apple because the majority of purchases are one or two big-ticket items that don't involve cash. There's still a traditional POS in every store to fall back on (it's just hidden). Try this at home: choose your favorite e-commerce site and add an item to the cart while timing how long it takes. Now multiply that by 15 to represent the 15 items you might buy at a store like Target. The user interface isn't optimized for bulk purchases, and that's how it should be. The webstore and POS are designed for different purposes. Self-checkout is a great addition to POS and so is mobile checkout, but they add capabilities to POS, they don't replace it. Centralized architectures, even those based in the cloud, are quite viable as long as there's resiliency in the registers. You cannot assume perfect access to the network, so a POS must always be able to sell regardless of connectivity. Clearly the different selling channels should be sharing common functionality. Things like calculating tax, accepting coupons, and processing electronic payments can be shared, usually through a service-oriented architecture. This lowers costs and provides greater consistency, both of which help retailers. On paper these technologies look really good and we should continue to push boundaries, but I'm not ready to call the patient dead just yet.

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.

    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.

    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs, and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.

    With a single, integrated sharing and access control model:

    If you connect from Windows or another SMB/CIFS client:
    - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
    - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
    - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping also supports access checking if you are being assessed for access via the ACL.

    If you connect via NFS (typically from a UNIX client):
    - The system creates a credential containing your UNIX/NIS identity (including groups).
    - Files you create will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.

    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.
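    The ownership rules above can be condensed into a short conceptual sketch. This is emphatically not Solaris or ZFS source code; the types, names and logic below are invented purely to illustrate the decision rules described in the post, under the assumption that a credential carries both identities plus a flag indicating an ephemeral mapping.

        /* Conceptual sketch only: illustrates the ownership rules described above.
         * Not ZFS/Solaris code; all names and types are invented for illustration. */
        #include <stdbool.h>
        #include <stdio.h>

        typedef enum { PROTO_SMB, PROTO_NFS } protocol_t;

        typedef struct {
            protocol_t proto;        /* how the user connected                        */
            const char *windows_sid; /* Windows identity (for SMB/CIFS connections)   */
            long unix_uid;           /* UNIX/NIS identity                             */
            bool uid_is_ephemeral;   /* SID mapped to an ephemeral ID, not a real UID */
        } credential_t;

        /* Owner recorded on a newly created file, following the rules in the post:
         * an SMB user whose identity maps to an ephemeral ID owns files by SID,
         * an SMB user mapped to a real UNIX/NIS UID owns files by that UID, and
         * an NFS user always owns files by their UNIX/NIS identity. */
        static void owner_of_new_file(const credential_t *cred, char *out, size_t len)
        {
            if (cred->proto == PROTO_SMB && cred->uid_is_ephemeral)
                snprintf(out, len, "SID:%s", cred->windows_sid);
            else
                snprintf(out, len, "UID:%ld", cred->unix_uid);
        }

        int main(void)
        {
            char owner[64];
            credential_t smb_user = { PROTO_SMB, "S-1-5-21-1111-2222-3333-1001", 1001, false };
            credential_t nfs_user = { PROTO_NFS, NULL, 1001, false };

            owner_of_new_file(&smb_user, owner, sizeof owner);
            printf("File created over SMB is owned by %s\n", owner);

            owner_of_new_file(&nfs_user, owner, sizeof owner);
            printf("File created over NFS is owned by %s\n", owner);
            return 0;
        }

    The point of the sketch is simply that ownership is decided once, from a single credential, rather than differently per protocol.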

    Read the article
