Search Results

Search found 1484 results on 60 pages for 'practical'.

Page 32/60 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Visual Studio 2010 on Macbook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or VS12 RC) on a MacBook Air? I have the current model with 4GB RAM, a 13" screen, and a 256GB SSD. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done it, and:

    - Was the performance acceptable? If it is, I plan to get a larger Cinema Display as a second monitor and do all my coding on this machine, ditching my desktop.
    - Did you use Boot Camp, Parallels, or VMware? I feel that Boot Camp would be necessary to make the best use of the memory, but I'm not sure this is completely necessary. I'd prefer to use a VM, but wasn't sure whether that was practical, and would value your input before buying a license.
    - Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point?

    Note: I would have asked this on superuser.com, but felt it applied more directly to the programming community.

    Read the article

  • Ubuntu impossible to install: "unable to find a medium containing a live filesystem"

    - by Lorenzo
    Yes, I have read questions and answers with similar titles for this issue, which has prevented me from installing Ubuntu for several MONTHS now while I try to figure it out. I have a MacBook Pro with a triple partition (one for Mac Snow Leopard, one for Windows 7, one for Linux) created with the rEFIt boot manager (not Boot Camp). I set up the system according to these instructions for rEFIt: http://lifehacker.com/5531037/how-to-triple+boot-your-mac-with-windows-and-linux-no-boot-camp-required

    Now. There is a free partition ready to accept Linux into its arms, but Linux does not want to participate. Most answers to the issue "unable to find a medium containing a live filesystem" point to changing the BIOS boot settings (which I don't know how to do, especially with this rEFIt boot setup), or to changing the USB socket (which does not concern me, since I am using a CD; actually I tried with a CD, then with a DVD, since a blank CD is only 700MB while the Ubuntu ISO image file is about 731MB).

    Anyway, this is what happens:

    - I am in the Mac system (using the Mac partition).
    - I insert the DVD with the burned image of Ubuntu (yes, I have tried burning it again and again, on both CD and DVD blank discs).
    - I restart the computer. When rEFIt loads, I hold down the ALT key until the CD icon appears. I select it and hit Enter.
    - A small Ubuntu icon appears at the bottom of the screen. Then an Ubuntu logo appears in the middle of the screen with small dots underneath, lighting up progressively over and over to indicate it's loading.
    - Then everything turns black and the following message appears, at the end of a few lines of text: "unable to find medium with live file system".

    Please provide very practical suggestions on what an inexperienced, wannabe-Ubuntu, very patient user should do. Please start by explaining how I access the BIOS setup from the rEFIt boot menu, and exactly what I need to try to change, and why. (Will this mess up my rEFIt setup?) And anything else I need to do to finally be able to install Ubuntu. Thanks

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it's no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes provide a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allow analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections.

    Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "big data" challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence, covering hidden signs and patterns of criminal activity; the early or retrospective violation of regulations, laws or corporate policies and procedures; emerging risks; weakening controls; and so on? Not exactly the stuff of James Bond, to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.

    So what is big data and how can it benefit the GRC process? Although it often varies, the definition of big data largely refers to the following types of data:

    - Traditional enterprise data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
    - Machine-generated / sensor data – includes Call Detail Records ("CDR"), weblogs and trading systems data.
    - Social data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

    - Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by the IT systems that power the enterprise, including live data from packaged and custom applications – for example, app servers, web servers, databases, networks, virtual machines, telecom equipment, and much more.
    - Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures that large volumes (over 8 TB per day) need to be managed.
    - Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, products or business lines are added, or new marketing campaigns executed, new data types are needed to capture the resultant information.
    - Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action.

    For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive action: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policies and procedures, and minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in big data: "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category.

    However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint. However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and deliver the desired value, partnerships with companies who have a long history of delivering successful GRC solutions, and who are at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again with self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out, and that should be explored, are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing big data technologies to generate new GRC insights right across the enterprise. Ultimately, big data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key activities: while your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

    - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    - Conduct a big data readiness and information architecture maturity assessment
    - Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
    - Provide initial guidance on big data candidate selection for migrations or implementation
    - Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
    - Provide recommendations for practical, effective data governance, data quality management, and information lifecycle management to maintain a well-managed environment
    - Conduct an executive workshop with recommendations and next steps

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions. Big data is here to stay and risk management certainly is not going anywhere, and ultimately the financial services organizations that embrace big data's potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it.

    Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Customer Concepts TV

    - by Richard Lefebvre
    Eliminate the Guesswork in Your Customer's Sales Organization

    Selling is the lifeblood of every business. In the past, companies would increase headcount to boost sales. In today's business environment, companies need to re-evaluate the way in which they sell. Sales and marketing organisations must optimise performance, increase team productivity and focus on the best opportunities. Oracle Fusion CRM has been specifically designed with tools to help sales and marketing teams improve efficiency and drive revenue. Territory modeling and management, quota and commission management, collaborative features, real-time customer information and mobile device integration are just some of the features incorporated. Join us on Customer Concepts TV as we aim to help you find the right strategy for your prospects and customers. Whether they already have a CRM solution in place or are looking for the next level of CRM implementation, this online TV show will give you very practical advice that can help you make the most of your CRM implementation. Register now to reserve your spot for this exclusive, live-stream event. Customer Concepts TV comes to you on April 24. Watch the Customer Concepts TV trailer here.

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:

    - self-defeating
    - just a clone of an existing, better-established license
    - practical
    - any more "corporate-friendly" than the GPL
    - too vague/open-ended

    and finally, if there is a better license that achieves a similar effect?

    I wanted a license that would (in simple terms):

    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license

    Here is the proposed license, which is just the Simplified BSD with a couple of additional clauses (all of which are bolded).

    Copyright (c) (year), (author) (email) All rights reserved.

    Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

    - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
    - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
    - The copyright holder(s) must be notified of any redistributions of source code.
    - The copyright holder(s) must be notified of any redistributions in binary form.
    - The copyright holder(s) must be granted access to the source code and/or the binary form of any redistribution upon the copyright holder's request.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • APress Deal of the Day - 1/June/2012 - Introducing Visual C# 2010

    - by TATWORTH
    Today's $10 Deal of the Day from APress at http://www.apress.com/9781430231714 is Introducing Visual C# 2010: "If you're new to C# programming, this book is the ideal way to get started. Respected author Adam Freeman guides you through the C# language by carefully building up your knowledge from fundamental concepts to advanced features." Adam Freeman is an excellent author. This is an excellent introduction to C# programming and a manual for those with experience. Having read through the book, I am very impressed by its practical approach to C#. I cannot improve on the by-line "Get started on your C# journey with an expert by your side leading by example". Adam Freeman teaches C# by precept and example. I suspect he drives a Volvo C30, as it comes up in many of the code examples! Throughout the book there are numerous links back and forth so as to avoid over-complicating the current topic. I have no hesitation in recommending this book both to programmers starting out with C# and to the seasoned professional. It is a book that should be on every C# development team's bookshelf.

    Read the article

  • What I saw at TechEd North America 2014

    - by Brian Schroer
    Originally posted on: http://geekswithblogs.net/brians/archive/2014/05/19/teched-north-america-2014.aspx

    I was thrilled to be able to attend TechEd North America 2014 in Houston last week. I got to go to Orlando in 2008, and since then I've had to settle for watching the sessions online (which ain't bad – They're all available on Channel 9 for streaming or downloading. Here are links to the Developer Track sessions and to the sessions from all tracks.) The sessions I attended (with my favorites bolded) were:

    Shiny new stuff
    - The Microsoft Application Platform for Developers: Create Applications That Span Devices and Services
    - INTRODUCING: The Future of .NET on the Server
    - DEEP DIVE: The Future of .NET on the Server
    - ASP.NET: Building Web Application Using ASP.NET and Visual Studio
    - The Next Generation of .NET for Building Applications
    - The Future of Visual Basic and C#

    Stuff you can use now
    - Building Rich Apps with AngularJS on ASP.NET
    - Get the Most Out of Your Code Maps
    - SignalR: Building Real-Time Applications with ASP.NET SignalR
    - Performance Optimize Your ASP.NET Web App
    - Modern Web and Visual Studio
    - Visual Studio Power User: Tips and Tricks
    - Debugging Tips and Tricks in Visual Studio 2013

    In a world where the whole company uses TFS…
    - Using Functional, Exploratory and Acceptance Testing to Release with Confidence
    - A Practical View of Release Management for Visual Studio 2013
    - From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum

    Ain't Nobody Got Time for That

    As usual, there were some time slots with nothing of interest and others with 5 things I wanted to see at the same time. Here are the sessions I'm still planning to watch…

    - Getting Started with TypeScript
    - Building a Large Scale JavaScript Application in TypeScript
    - Modern Application Lifecycle Management
    - Why a Hacker Can Own Your Web Servers in a Day!
    - Async Best Practices for C# and Visual Basic
    - Building Multi-Device Apps with the New Visual Studio Tooling for Apache Cordova
    - Applying S.O.L.I.D. Principles in .NET/C#
    - Native Mobile Application Development for iOS, Android, and Windows in C# and Visual Studio Using Xamarin
    - Latest Innovations in Developing ASP.NET MVC Web Applications
    - Zero to Hero: Untested to Tested with Microsoft Fakes Using Visual Studio
    - Cool and Elegant ASP.NET Web Forms with HTML 5 for the Modern Web
    - The Present and Future of .NET in a World of Devices and Services

    Read the article

  • Join Domain and Dos App

    - by Austin Lamb
    OK, so first off: yes, I have read all the related topics, and those fixes are either out of date or don't work. I am running Ubuntu 12.04 and I would like to add it to the Windows Server 2008 network. After I get that done, I would like to mount the F:\ drive of the server somewhere on my Linux machine where it can be identified as drive F:\ by Wine or DOSEMU. If I can achieve all of that, I need to find out how to run an MS-DOS 16-bit point-of-sale graphics program in Ubuntu, whether that be through Wine, DOSEMU, or DOSBox. It does not matter which; it just has to be able to read and write to the server's F: drive, operate the DOS app, and support LPT1 (I think) for printing receipts and loading tickets. I am a decently knowledgeable Windows tech, at least that's what my job description says... but this is my first encounter with Linux in a work environment, and it could prove to be very experience-changing if I can just prove it as a practical theory and a reasonable solution, and get it to work. The first step is to get it joined to the domain. I have likewise-open (CLI and GUI versions), samba, and GADMIN-SAMBA installed in attempts to get any of them to work. Any help in any area is greatly appreciated, especially with the domain joining, since it is the first step and what I thought would be the easiest step.

    Read the article

  • There's A Virtual Developer Day in Your Future

    - by OTN ArchBeat
    What are Virtual Developer Days? You really should know this by now. OTN Virtual Developer Days are online events created specifically for developers and architects, with a focus on no-fluff technical presentations, hands-on labs, and expert Q&A to sharpen your technical skills and bring you up to speed on the latest information on Oracle products and practical best practices for their use. The best part about OTN Virtual Developer Days is that you don't have to pack a suitcase or stand in line at an airport waiting for someone to pat you down. Instead, you stay where you are, flip open your laptop, and prepare your brain for a massive skills injection. In the next few weeks you'll have two such chances to ramp up your skills.

    On Tuesday, November 5, 2013, Harnessing the Power of Oracle WebLogic and Oracle Coherence will guide you through tooling updates and best practices for developing applications with WebLogic and Coherence as target platforms. This two-track event covers app design and development (Track 1) and building, deploying, and managing applications (Track 2). Each track includes three presentations plus a hands-on lab. [9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT] Register now. This event will also be available in EMEA on December 3, 2013 [9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST].

    On Tuesday, November 19, 2013, Oracle ADF Development: Web, Mobile, and Beyond offers four tracks covering everything from the basics to advanced skills for application development using Oracle ADF and Oracle ADF Mobile. There are three sessions in each track, followed by hands-on labs in which you try out what you've learned. [9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT] Register now. This event will also be available in APAC on Thursday, November 21, 2013 [10am-1:30pm IST (India) / 12:30pm-4pm SGT (Singapore) / 3:30pm-7pm AESDT] and in EMEA on Tuesday, November 26, 2013 [9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST].

    Registration for both events is absolutely free. So what are you waiting for?

    Read the article

  • Understanding Service Compensation part of Industrial SOA series

    - by JuergenKress
    Some of the most important SOA design patterns that we have successfully applied in projects will be described in this article. These include the Compensation pattern, the UI Mediator pattern, the Common Data Format pattern and the Data Access pattern. All of these patterns are included in Thomas Erl's book "SOA Design Patterns", and are presented here in detail, together with our practical experiences.

    We begin our "best of" SOA pattern collection with the Compensation pattern. Compensation is required in error situations in an SOA, because multiple atomic service operations cannot generally be linked with classic transactions; this would violate the principle of loose coupling. An error situation of this sort will occur particularly if service operations are combined into processes or new services during orchestration, or by applying the Composite pattern, and the transaction bracket has to be expanded as a result. We need mechanisms to undo the effects of individual services (the status changes in the overall system) and to ensure that a consistent system state is maintained at all times, so as to preserve system integrity. For the Compensation pattern, we would like to address the following questions:

    - Why is compensation important in relation to SOA?
    - How is the topic of compensation linked with the topic of transactions?
    - What are the challenges with regard to compensation...

    Read the full article in the Service Technology Magazine or at OTN. Share your comments and feedback on the Industrial SOA series by using the hashtag #industrialsoa. Missed an article of the Industrial SOA series? Visit the overview at OTN.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum

    Technorati Tags: Industrial SOA,SOA,SOA Service Compensation,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress
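
    As a rough illustration of the mechanism the excerpt describes (undoing the effects of individual services when a later step fails), here is a small Haskell sketch; it is my own illustration, not code from the article, and the Step type and function names are assumptions:

    ```haskell
    import Control.Exception (SomeException, try)

    -- One unit of work in the composed process.
    data Step = Step
      { runStep      :: IO ()   -- the (side-effecting) service operation
      , compensation :: IO ()   -- the action that undoes its effect
      }

    runWithCompensation :: [Step] -> IO ()
    runWithCompensation = go []
      where
        go _done []      = pure ()                     -- all steps succeeded
        go done (s:rest) = do
          r <- try (runStep s) :: IO (Either SomeException ())
          case r of
            Right () -> go (s : done) rest             -- remember the completed step
            Left err -> do
              putStrLn ("step failed (" ++ show err ++ "); compensating")
              mapM_ compensation done                  -- undo completed steps, newest first
    ```

    In real orchestrations this bookkeeping is typically handled by the engine itself (for example, BPEL compensation handlers) rather than by hand-written code, but the ordering logic is the same.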

    Read the article

  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes (or allows) players to interfere with geometry in old games. Famously, id Software's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck, and launching themselves off points in the geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk given by Bungie's Vic DeLeon and a colleague, in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear as though they were walking on organic surfaces while not clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case-study essay for university in which I can tackle this issue in games, drawing on early exploits and on how techniques have changed, both to address such exploits and to aid the gameplay itself. I have 3 current-day examples of where exploits still exist, but specifically targeting id Software clearly shows they've massively improved their techniques between Q3 and Q4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology mainly) so I can use terms and ask the right questions within the right contexts. In practical application, I know what it is and I know how to do it, but I don't have the benefit of level design knowledge yet and its technical widgety knick-knack terms =) Many thanks in advance, AJ

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
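
    For a concrete feel of the gap described above, here is a quick worked check (a throwaway sketch using the figures from the question, i.e. a "1 TB" drive defined as 10^12 bytes):

    ```haskell
    -- Express a vendor's decimal terabyte in binary gigabytes (2^30 bytes each).
    main :: IO ()
    main = do
      let decimalTB = 1.0e12 :: Double            -- "one terabyte means one trillion bytes"
          binaryGiB = 2 ^ (30 :: Int)             -- 1,073,741,824 bytes
      print (decimalTB / fromIntegral binaryGiB)  -- prints ~931.32, i.e. the "931 GB" readout
    ```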

    Read the article

  • C++ Numerical Recipes &ndash; A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey – over the next 6 weeks I plan to read through C++ Numerical Recipes, 3rd edition: http://amzn.to/YtdpkS

    I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU or MapReduce are all useful – providing you have the ability to choose the correct algorithm to leverage on them. I really think this is the most fascinating area of programming – a lot more exciting than LOB CRUD!!! When you think about it, everything is a function – we categorize & we extrapolate.

    As abstractions get higher & less leaky, sooner or later information systems programming will become a non-programmer task – you will be using WYSIWYG designers to build GUIs, MVVM, service mapping & virtualization, workflows, and ORM entity relations in the data source. SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature & higher barriers of entry of both C++ & numerical analysis seem like a rational choice to me. It's also fascinating – stepping outside the box.

    This is not the first time I've delved into numerical analysis:

    - 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/
    - 2 years ago I learned the .NET Extreme Optimization library www.extremeoptimization.com – not bad
    - 2.5 years ago I read Schaum's Numerical Analysis book http://amzn.to/V5yuLI – not an easy read, as topics jump back & forth across chapters
    - 3 years ago I read Practical Numerical Methods with C# http://amzn.to/V5yCL9 (which is a toy learning language for this kind of stuff)
    - I also read through AI: A Modern Approach, 3rd edition, END to END http://amzn.to/V5yQSp – this took me a few years but was the most rewarding experience.

    I'll post progress updates – see you on the other side!

    Read the article

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure as to some of the implementation. I have done file-I/O maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think heightmaps might be a good approach, but I don't know how to represent them effectively.

    - For a heightmap, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item 1 is the key reference, item 2 is the elevation), or another approach that I have not thought of yet?
    - When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make a slope noticeable at about a 60-degree angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)?
    - I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing with thin walls between open spaces is better. This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls.

    EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation that the player has to negotiate. Elevation will have effects on "combat" vision and movement.
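
    As a purely illustrative sketch of the CSV option above, here is how such a file could be read in Haskell; the two-field row layout and the GridCell type are assumptions for the example, not a recommendation over XML:

    ```haskell
    -- Each line of the file is "key,elevation", read in row-major order.
    data GridCell = GridCell
      { tileKey   :: Int      -- reference into the maze key definition
      , elevation :: Double   -- h value for this cell
      } deriving Show

    parseCell :: String -> GridCell
    parseCell line =
      let (key, rest) = break (== ',') line
      in GridCell (read key) (read (drop 1 rest))

    loadHeightmap :: FilePath -> IO [GridCell]
    loadHeightmap path = map parseCell . lines <$> readFile path
    ```

    Whichever format is chosen, the loader can produce the same in-memory grid, so the XML-vs-CSV question stays mostly an authoring/tooling decision rather than a gameplay one.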

    Read the article

  • TellagoStudio's presenting SOA Governance on the Microsoft platform using SO-Aware at Microsoft TechReady.

    - by Vishal
    Hi there, Microsoft is hosting the first edition of their annual TechReady conference. TechReady is an internal Microsoft conference, but Microsoft invited Tellago Studios to present a session about how to enable agile SOA governance on the Microsoft platform using our recently released product, SO-Aware. As part of our session, we will take a look at the current challenges that organizations face when enabling SOA governance capabilities on the Microsoft platform, and how organizations can benefit from more agile, lightweight and modern SOA governance models. The session will provide a practical view of the role of Tellago Studios' SO-Aware as an essential technology to enable native SOA governance on the Microsoft platform. We will explore in detail important capabilities of SO-Aware such as:

    - Centralized service repository
    - Centralized configuration management
    - Service testing
    - Monitoring
    - Transparent integration with technologies such as Visual Studio, BizTalk Server, Windows Server & Azure AppFabric, among many others

    But the fun doesn't stop there... As part of this session, we will showcase for the first time our upcoming SO-Aware Test Workbench product, which enables load and functional web service testing capabilities on the Microsoft technology stack. SO-Aware Test Workbench provides developers with a visually rich environment to model and control the execution of load and functional tests in a SOA infrastructure. This tool includes the first native WCF load testing engine, allowing developers to transparently load test applications built on Microsoft's service-oriented technologies such as WCF, BizTalk Server, or the Windows Server or Azure AppFabric.

    Read the article

  • Benefits of Masters of Engineering Professional Practice for the lowly (yet aspiring) programmer

    - by Peter Turner
    I've been looking into in-state online degree programs 'to fit my busy lifestyle' (i.e. three children, a wife, and an hour-and-a-half commute). One interesting one I've found is the Master of Engineering in Professional Practice. It looks more useful and practical than an MBA in project management. I'll contact the admissions department there about the specifics, but here I'm just asking in general: do the courses in this degree apply to software engineering/development in even an abstract sense? The university I'm looking at does not have a Software Engineering major in the school of engineering. I'm not interested in architecture astronomy, but I am interested in helping my company succeed and being able to communicate technical information at a high and effective level, as well as being able to lead my co-programmers toward a more robust end product. So my multipart question is:

    - What might be the real benefit to me and my brain?
    - How do I convince my boss (the owner of the company, who does do some tuition reimbursement) that just because it doesn't say anything about software, it might still do us some good?
    - Oh, and how do I get past the fact that a master's degree would make me more qualified to be the project manager than... the project manager? (who is my supervisor)

    Read the article

  • Php profiling on production server or other options

    - by absentx
    Alright, I need some help here. I am commonly asked to speed up certain sections of some websites that I program for, but I have yet to figure out how to use a good PHP diagnosis/profiling tool. Some things to consider:

    The sites I am working on are already built. Getting a testing server set up locally is just a huge pain... I have to rewrite include paths and so many other things. This is a results-oriented deal, and spending days to get a site fully working on a testing platform so I can debug one page probably isn't an option.

    I can write tons of PHP, but I have no clue how to interact with or mess with servers. So every tutorial I read about setting up Xdebug or XHProf seems to involve getting something installed on a production server that I don't have access to or have no clue how to work with. So are there any solutions out there that will show me where my PHP is slow without having to do all sorts of server stuff that I just don't know how to do? XHProf seems to be the closest to usable for me, but from what I can tell it still has to be installed on a server.

    If anyone can just point me in the right direction on this I would be very grateful. Maybe getting these things put on the server isn't a big deal... but I have never interacted with server command lines or anything like that. I suppose I should start sometime, but I really have no idea where to start. Plus, I realize that profiling on a live platform is not the greatest idea either, but I feel I am in a tough spot. I have speed issues to solve, and setting up a local environment, while a great idea, just doesn't seem very practical at the moment.

    Read the article

  • What features are helpful when performing remote debugging / diagnostics?

    - by Pemdas
    Obviously, the easiest way to solve a bug is to be able to reproduce it in-house. However, sometimes that is not practical. For starters, users are often not very good at providing you with useful information. Customer Service: "What seems to be the issue?" User: "It crashed!" To further compound that, sometimes the bug only occurs under certain environmental conditions that cannot be adequately replicated in-house. With that in mind, it is important to build some sort of diagnostic framework into your product. What types of built-in diagnostic tools have you used or seen used?

    Logging seems to be the predominant method, which makes sense. We have a fairly sophisticated logging framework in place, with different levels of verbosity and the ability to filter on specific modules (actually we can filter down to the granularity of a single file). Error logs are placed strategically to manufacture a pretty good representation of a stack trace when an error occurs. We don't have the luxury of 10 million terabytes of disk space since I work on embedded platforms, so we have two ways of getting the logs off the system: a serial port and a syslog server. However, an issue we run into sometimes is actually getting the user to turn the logs on. Our current framework often requires some user interaction.
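
    For readers unfamiliar with the "verbosity level plus module filter" idea described above, here is a toy sketch in Haskell (my own illustration; the type and field names are assumptions, not the poster's embedded framework): a message is emitted only if its level clears the configured threshold and its module is enabled.

    ```haskell
    data Level = Debug | Info | Warning | Error
      deriving (Eq, Ord, Show)

    data LogConfig = LogConfig
      { minLevel       :: Level      -- verbosity threshold
      , enabledModules :: [String]   -- empty list means "all modules"
      }

    -- Decide whether a message from a given module at a given level is kept.
    shouldLog :: LogConfig -> Level -> String -> Bool
    shouldLog cfg lvl modname =
      lvl >= minLevel cfg
        && (null (enabledModules cfg) || modname `elem` enabledModules cfg)

    logMsg :: LogConfig -> Level -> String -> String -> IO ()
    logMsg cfg lvl modname msg
      | shouldLog cfg lvl modname = putStrLn (show lvl ++ " [" ++ modname ++ "] " ++ msg)
      | otherwise                 = pure ()
    ```

    On an embedded target the same decision would typically be made by a C macro or a runtime flag, but the filtering logic is the same.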

    Read the article

  • What's cool about Lisp nowadays? [closed]

    - by Kos
    Possible Duplicates: Why is Lisp useful? Is LISP still useful in today's world? Which version is most used?

    First of all, let me clarify: I'm aware of Lisp's place in history, as well as in education. I'm asking about its place in practical application, as of 2011. The question is: what features of Lisp make it the preferred choice for projects today? It's widely used in various AI areas as far as I know, and probably also elsewhere. I can imagine projects choosing, for instance:

    - Python because of its concise, readable syntax and it being dynamic,
    - Haskell for being purely functional with a powerful type system,
    - Matlab/Octave for the focus on numerics and big standard libraries,
    - etc.

    When should I consider Lisp the proper language for a given problem? What language features make it the preferred choice then? Is its "purity and generality" an advantage that makes it a better choice for some subset of projects than the modern languages?

    edit: As requested, a little rephrasing (or simply a tl;dr) to make this more specific:

    - a) What problems are solvable with Lisp much more easily than with more common, modern languages like Python or C# (or even F# or Scala)?
    - b) What language features specific to Lisp make it the best choice for those problems?

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-06

    - by Bob Rhubart
    Oracle Technology Network Architect Day - Boston, MA - 9/12/2012
    Sure, you could ask a voodoo priestess for help in improving your solution architecture skills. But there's the whole snake thing, and the zombie thing, and other complications. So why not keep it simple and register for Oracle Technology Network Architect Day in Boston, MA? There's no magic, just a full day of technical sessions covering Cloud, SOA, Engineered Systems, and more. Registration is free, but seating is limited. You'll curse yourself if you miss this one. Register now.

    Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena
    Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter.

    Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson
    "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there."

    Using FMAP and AnalyticsRes in a Oracle BI High Availability Implementation | Christian Screen
    "The fmap syntax has been used for a long time in Oracle BI / Siebel Analytics when referencing images inherent in the application as well as custom images," says Oracle ACE Christian Screen. "This syntax is used on Analysis requests and dashboards."

    More on Embedded Business Intelligence | David Haimes
    David Haimes gives an example of Timeliness as "one of the three key attributes required for BI to be considered embedded BI."

    Thought for the Day
    "Architect: Someone who knows the difference between that which could be done and that which should be done." — Larry McVoy
    Source: Quotes for Software Engineers

    Read the article

  • What scenarios are implementations of Object Management Group (OMG) Data Distribution Service best suited for?

    - by mindcrime
    I've always been a big fan of asynchronous messaging and pub/sub implementations, but coming from a Java background, I'm most familiar with JMS-based messaging systems, such as JBoss MQ, HornetQ, ActiveMQ, OpenMQ, etc. I've also loosely followed the discussion of AMQP. But I recently became aware of the Data Distribution Service specification from the Object Management Group, and found that there are a couple of open-source implementations: OpenSplice and OpenDDS.

    It sounds like this stuff is focused on the kind of high-volume scenarios one tends to associate with financial trading exchanges and what-not. My current interest is more along the lines of notifications related to activity stream processing (think Twitter / Facebook), and I am wondering if the DDS servers are worth looking into further. Could anyone who has practical experience with this technology, and/or a deep understanding of it, comment on how useful it is, and what scenarios it is best suited for? How does it stack up against more "traditional" JMS servers, and/or AMQP (or even STOMP or OpenWire, etc.)?

    Edit: FWIW, I found some information at this StackOverflow thread. Not a complete answer, but anybody else finding this question might also find that thread useful, hence the added link.

    Read the article

  • Remote Debugging

    - by Pemdas
    Obviously, the easiest way to solve a bug is to be able to reproduce it in-house. However, sometimes that is not practical. For starters, users are often not very good at providing you with useful information. Customer Service: "What seems to be the issue?" User: "It crashed!" To further compound that, sometimes the bug only occurs under certain environmental conditions that cannot be adequately replicated in-house. With that in mind, it is important to build some sort of diagnostic framework into your product. What types of solutions have you seen or used in your experience?

    Logging seems to be the predominant method, which makes sense. We have a fairly sophisticated logging framework in place, with different levels of verbosity and the ability to filter on specific modules (actually we can filter down to the granularity of a single file). Error logs are placed strategically to manufacture a pretty good representation of a stack trace when an error occurs. We don't have the luxury of 10 million terabytes of disk space since I work on embedded platforms, so we have two ways of getting the logs off the system: a serial port and a syslog server. However, an issue we run into sometimes is actually getting the user to turn the logs on. Our current framework often requires some user interaction.

    Read the article

  • Misconceptions about purely functional languages?

    - by Giorgio
    I often encounter the following statements / arguments:

    - Pure functional programming languages do not allow side effects (and are therefore of little use in practice, because any useful program does have side effects, e.g. when it interacts with the external world).
    - Pure functional programming languages do not allow you to write a program that maintains state (which makes programming very awkward, because in many applications you do need state).

    I am not an expert in functional languages, but here is what I have understood about these topics so far. Regarding point 1, you can interact with the environment in purely functional languages, but you have to explicitly mark the code (functions) that introduces these effects (e.g. in Haskell by means of monadic types). Also, AFAIK, computing by side effects (destructively updating data) should also be possible (using monadic types?) but is not the preferred way of working. Regarding point 2, AFAIK you can represent state by threading values through several computation steps (in Haskell, again, using monadic types), but I have no practical experience doing this and my understanding is rather vague.

    So, are the two statements above correct in any sense, or are they just misconceptions about purely functional languages? If they are misconceptions, how did they come about? Could you write a (possibly small) code snippet illustrating the Haskell idiomatic way to (1) implement side effects and (2) implement a computation with state?
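
    To make the question concrete, here is a small sketch of the two idioms being asked about (my own illustration of the commonly cited Haskell approach, not an authoritative answer): (1) side effects live in IO and are visible in the type, and (2) state is threaded through a computation, here with the State monad from Control.Monad.State (mtl/transformers package).

    ```haskell
    import Control.Monad.State

    -- (1) Side effects: the IO type marks interaction with the outside world.
    greet :: IO ()
    greet = do
      putStrLn "What's your name?"
      name <- getLine
      putStrLn ("Hello, " ++ name)

    -- (2) State: a counter threaded through a computation, with no mutation.
    labelAll :: [String] -> State Int [String]
    labelAll = mapM label
      where
        label s = do
          n <- get
          put (n + 1)
          pure (show n ++ ": " ++ s)

    main :: IO ()
    main = do
      greet
      print (evalState (labelAll ["alpha", "beta", "gamma"]) 1)
      -- ["1: alpha","2: beta","3: gamma"]
    ```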

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly, immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking?

    I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor click takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries:

    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
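
    As one small data point for the "collect and aggregate" part of the question, here is a minimal Haskell sketch (my own illustration, with assumed names): the per-worker results are immutable values produced by pure code, but handing them to a single aggregate still goes through an atomic primitive (an MVar), which plays the role of a lock at exactly that boundary.

    ```haskell
    import Control.Concurrent
    import Control.Monad (forM_, replicateM_)

    main :: IO ()
    main = do
      results <- newMVar []                 -- one shared, mutable *reference*
      done    <- newEmptyMVar
      forM_ [1 .. 4 :: Int] $ \w -> forkIO $ do
        let r = sum [1 .. w * 1000]         -- pure work, immutable result
        modifyMVar_ results (pure . (r :))  -- atomic hand-off into the aggregate
        putMVar done ()
      replicateM_ 4 (takeMVar done)         -- wait for the four workers
      readMVar results >>= print
    ```

    The pure part needs no synchronization at all; the only contention left is the single point where results are combined.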

    Read the article

  • Does an inexperienced programmer need an IDE?

    - by Torben Gundtofte-Bruun
    Reading this other question makes me wonder if I (as an absolute beginner PHP programmer) should stick with WAMP and Notepad++ or switch to some IDE like Eclipse. It's understandable that skilled developers will benefit from a big shiny IDE. But why should an absolute beginner use an IDE? Do the benefits outweigh the extra challenge of learning the IDE on top of learning to develop?

    Update for clarification: My goal is to get some basic programming experience. By choosing PHP and WAMP (and FogBugz and Kiln) I hope to avoid having to navigate the tricky / messy OS specifics, compiling, etc., and just focus on basic functionality like an online user registration form. I've got lots of theoretical understanding from university a decade ago but no practical experience. I want to remedy that with a hobby project that would be similar to a real-world sellable web app. There are so many questions to ask. So many pitfalls I probably have to blunder into. This question is just one piece (my first!) of that puzzle.

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >