Search Results

Search found 18761 results on 751 pages for 'lot'.


  • Strategy for clients to retrieve real-time log from HTTP server

    - by Jerry Dodge
    I have an HTTP Server Service application which has its own logging mechanism. It's written in Delphi. I would like to provide a way for multiple clients to connect to this service and get a real-time update of the log. The log in the service moves rather fast; there are a lot of things to log. There may be up to 50 messages within 1 second at times. The existing log which is already implemented is not saved, it's only kept in the memory of the server service - where I will need to distribute it to any client which needs it. Once all clients have a log message, it should be deleted. I intend to use HTTP to "ask" the server for the log, and have the server respond with an XML packet. The connections are not keep-alive. The only problem is, the server should only send the client those log records which it needs, not everything. I have no way for the server to push the log to the clients in real-time, so each client needs to repeatedly ask the server for the latest log records. This HTTP Server is very lightweight, and there is no session management. There isn't even any type of authentication. The only way I see is for a client to register itself on the server, and whenever a log is issued on the server, it creates a copy of the log for each client, where each client has a log queue (string list). However, suppose there are 100 clients connected and expecting to receive this log. That means the server must create 100 copies of each log, add this log to the end of each client log queue, and wait for the client to request it. At that point, when the server replies with the XML log, it should flush (delete) whatever's in the queue. I'm worried however that this could cause memory issues. Each client log queue might get 100 log messages before the client requests the latest logs. How should I go about doing this in the fastest way possible without hindering the performance of the server? I'm trying to avoid having to create a copy of each log for each client.
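
    One way to avoid the per-client copies is to keep a single sequence-numbered buffer and store only a cursor (the last delivered sequence number) per registered client; each poll returns the entries past that client's cursor, and an entry is dropped once every cursor has passed it. The original service is written in Delphi, so the following is only a minimal Java sketch of the idea, with invented names, not the actual server code; in practice you would also evict the cursor of any client that stops polling, so a dead client cannot pin old entries in memory.

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // One shared, sequence-numbered buffer plus one small cursor per client,
        // instead of one full copy of the log per client.
        class SharedLogBuffer {
            private static final class Entry {
                final long seq;
                final String message;
                Entry(long seq, String message) { this.seq = seq; this.message = message; }
            }

            private final Deque<Entry> entries = new ArrayDeque<>();
            private final Map<String, Long> cursors = new ConcurrentHashMap<>();
            private long nextSeq = 1;

            // Called by the server's logging code; O(1), no per-client copies.
            synchronized void append(String message) {
                entries.addLast(new Entry(nextSeq++, message));
            }

            // A newly registered client starts at the current end of the log.
            synchronized void register(String clientId) {
                cursors.put(clientId, nextSeq - 1);
            }

            // Called on each HTTP poll: return everything after the client's cursor,
            // advance the cursor, then drop entries that every client has now seen.
            synchronized List<String> poll(String clientId) {
                long cursor = cursors.getOrDefault(clientId, nextSeq - 1);
                List<String> result = new ArrayList<>();
                for (Entry e : entries) {
                    if (e.seq > cursor) result.add(e.message);
                }
                cursors.put(clientId, nextSeq - 1);
                trim();
                return result;
            }

            // Remove entries already delivered to all registered clients.
            private void trim() {
                long minCursor = cursors.values().stream().min(Long::compare).orElse(nextSeq - 1);
                while (!entries.isEmpty() && entries.peekFirst().seq <= minCursor) {
                    entries.removeFirst();
                }
            }
        }

    The memory cost is then one copy of each message plus one long per client, rather than one copy of each message per client, and the XML response can simply serialize whatever poll() returns.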

    Read the article

  • Business Choices and Evony

    - by Robert May
    Recently, I’ve been playing a game called Evony, and I finally decided to quit the game and thought I should warn others who might be tempted. I also find a lot of insight in this game as an example. A few of the companies that I’ve worked with or worked for have been like this and they are NOT good places to be. Evony is a joke designed to milk as much money out of people as possible. As a professional software developer who mentors teams on how to build better software, here's what I see: They obviously offshore all development and have little oversight over that offshore development, and they probably have a small team at that. Evidenced by the poor grammar throughout the game. They're seeking to maximize revenue and pushing to do as little development as possible, which would mean a small team. They're horribly understaffed in the customer support department as evidenced by never replying to this forum and never responding to bug reports or help requests (I've had one open with no response AT ALL for over a month . . .) They have way inadequate testing, no CI, and probably no automated unit tests. You can see this by the poor grammar throughout the game and the type of bugs that show up. They aren't following a formal development process (no Agile, Waterfall, or anything else) as evidenced by their lack of a predictable release cycle and lack of visibility. I'm guessing that the internal code base is terrible, otherwise there wouldn't be an "Age II" that had nothing more than a new visual interface and a few rule tweaks. This is also evidenced by the itty bitty scope of bug fixes and their inability to really fix bugs. Their Architect sucks. Really, 42k users is all you can handle on a single server? Could you REALLY not come up with a better way to scale to handle users? They've built isolated worlds, instead of a single continuous world. Back to milking people for money--to really progress, you have to spend money. All of this adds up to knowing, deliberate actions on the part of management. They CHOOSE to do this (like AOL choosing to send more discs instead of improving quality). So, what can we learn? This game will never really improve, since the bosses don't care, they're only in it for the money. The game will never have good support. Again, the owners don't care. Giving them money only perpetuates this scam (and yes, I've given them money, way too much money. :() They don't care if you quit. There's a new sucker born every day. Don't EVER go to work for them. I've worked both with and for people like this and the culture is NEVER good. Ah well. Technorati Tags: Evony

    Read the article

  • I'd like to switch from 32-bit to 64-bit within same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on the 64-bit version, and it seems that it would be a marginally better choice now for me. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or using synaptic's save/get markings. The problem here is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work, hence my appeal for expert tips. Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and would save at least most of the time to reconfigure. But are there differences in configuration files in either of these directories for 32 vs 64 bits that might cause problems? After reinstalling to 64-bit, I can then continue along the 64-bit path for upgrades, but I thought it would be easier to switch within the same version than to try to reinstall apps and upgrade at the same time. Some methods I've seen suggested, among others:
    A. From Ubuntu forums: On your old system (assuming it is still working), start up Synaptic and go: File->Save Markings and choose a file name along with a location (like a USB drive) that you can use when you have installed your new system. You need to check on the bottom: "Save full state, not only changes". This file contains a list of all your currently installed packages, and when you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?) then start up Synaptic and go: File->Read Markings, point it at your saved file, and after that has completed select Apply to kick off the download & installation of all of those packages you had installed previously!
    B. From the same discussion: According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration:
        # dpkg --get-selections "*" > myselections   # or use \*
        # debconf-get-selections > debconfsel.txt
    and the following will reinstall and reconfigure them:
        # dselect update
        # debconf-set-selections < debconfsel.txt
        # dpkg --set-selections < myselections
        # apt-get -u dselect-upgrade   # or dselect install
    C. A variation on the above I've seen a lot, this one from Stack Overflow:
        dpkg --get-selections > package_list
    then on the new install:
        cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade
    I don't really understand B, or why it's slightly different from many others.

    Read the article

  • Career Development: What should I learn next after Python? and Why? [closed]

    - by Josh
    Hi all, I'm currently learning Python. I want to know what I should learn next out of these programming languages: PHP, ActionScript 3, Objective-C (iPhone applications). I work in the multimedia industry and have decided to learn Python as a first serious programming language because I would like to learn the basics of programming, mainly to write scripts at work that automate tasks (e.g. edit multiple XML files quickly). At work we have a senior developer who knows ActionScript and PHP very well (although he knows PHP better). We also have been developing iPhone applications for 2 weeks; our senior developer could learn it, although we have lots of work currently with PHP and ActionScript 3 type work and haven't had time or reason to pick up iOS development. Here are the reasons I want to learn each language, but I cannot decide what I'll learn next: PHP: I want to learn PHP because it will help with web development. PHP is very wanted by employers. The senior developer at work writes everything in it - web sites, CMS, etc. (including XML checks and scripts) - so I will learn a lot from him (once I learn the basics). However, I don't want to learn web because you have to deal with lots of cross-browser problems. ActionScript 3: At work we are looking to put on another developer to help with online activities and very small games (using ActionScript 3.0 and Flash CS5), for example First Aid activities. I would like to do things that have an element of design, as I'm better at Photoshop than developing. I want to be creative; I like to interact with users in a fun way. Objective-C (iPhone applications): We are an all-Mac office, and we may get more iPhone and iPad application work (jobs) that needs to be done. Work has found it nearly impossible to find good iPhone developers. I like Apple products (Macs and iPhones), and I would like to make my own games and applications in my spare time (if I knew how). Should I learn ActionScript first because it would be easier to learn than Objective-C? Should I learn PHP because it is very widely used? Should I learn Objective-C because it is really wanted by employers now?

    Read the article

  • Implementing the Reactive Manifesto with Azure and AWS

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/31/implementing-the-reactive-manifesto-with-azure-and-aws.aspx My latest Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, has just been published! I’d planned to do a course on dual-running a messaging-based solution in Azure and AWS for super-high availability and scale, and the Reactive Manifesto encapsulates exactly what I wanted to do. A “reactive” application describes an architecture which is inherently resilient and scalable, being event-driven at the core, and using asynchronous communication between components. In the course, I compare that architecture to a classic n-tier approach, and go on to build out an app which exhibits all the reactive traits: responsive, event-driven, scalable and resilient. I use a suite of technologies which are enablers for all those traits: ASP.NET SignalR for presentation, with server push notifications to the user; messaging in the middle layer for asynchronous communication between presentation and compute (Azure Service Bus Queues and Topics, AWS Simple Queue Service, AWS Simple Notification Service); and MongoDB at the storage layer for easy HA and scale, with minimal locking under load. Starting with a couple of console apps to demonstrate message sending, I build the solution up over 7 modules, deploying to Azure and AWS and running the app across both clouds concurrently for the whole stack - web servers, messaging infrastructure, message handlers and database servers. I demonstrate failover by killing off bits of infrastructure, and show how a reactive app deployed across two clouds can survive machine failure, data centre failure and even whole cloud failure. The course finishes by configuring auto-scaling in AWS and Azure for the compute and presentation layers, and running a load test with blitz.io. The test pushes masses of load into the app, which is deployed across four data centres in Azure and AWS, and the infrastructure scales up seamlessly to meet the load – the blitz report is pretty impressive: That’s a 99.9% success rate for hits to the website, with the potential to serve over 36,000,000 hits per day – all from a few hours’ build time, and a fairly limited set of auto-scale configurations. When the load stops, the infrastructure scales back down again to a minimal set of servers for high availability, so the app doesn’t cost much to host unless it’s getting a lot of traffic. This is my third course for Pluralsight, with Nginx and PHP Fundamentals and Caching in the .NET Stack: Inside-Out released earlier this year. Now that it’s out, I’m starting on the fourth one, which is focused on C#, and should be out by the end of the year.
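
    The course itself is built on .NET (SignalR at the front, Azure Service Bus and AWS SQS/SNS in the middle), but the core pattern it demonstrates is language-neutral: the presentation tier hands work to a queue and returns immediately, and a separate worker drains the queue at its own pace, which is what lets the two tiers scale and fail independently. Below is a minimal in-process Java sketch of that shape, with a BlockingQueue standing in for the cloud message broker; it is an illustration of the pattern, not code from the course.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        // In-process stand-in for the messaging middle layer: the "web" side enqueues
        // and returns immediately; the "compute" side drains the queue independently.
        public class AsyncMessagingSketch {
            public static void main(String[] args) throws Exception {
                BlockingQueue<String> queue = new LinkedBlockingQueue<>();

                // Worker (compute tier): consumes messages at its own pace.
                ExecutorService worker = Executors.newSingleThreadExecutor();
                worker.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            String msg = queue.take();
                            System.out.println("processed: " + msg);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });

                // Producer (presentation tier): fire-and-forget, stays responsive.
                for (int i = 0; i < 5; i++) {
                    queue.put("order-" + i);
                    System.out.println("accepted: order-" + i);
                }

                TimeUnit.SECONDS.sleep(1);
                worker.shutdownNow();
            }
        }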

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Oh yeah baby, it’s out finally! This book is what I wanted to write for so long now, but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of “Smart BI Solutions with SQL Server 2008” for MS Press. But I never really got the time to author the full book that this topic deserved. With SharePoint 2010, we finally have a full book on this topic. So first things first, I didn’t actually write it. My role was limited to the overall concept, the outline, the layout, completion of it, code samples, identifying what we need in here, vouching for technical accuracy, identifying authors, etc. The real work was done by Srini (5 chapters) and Steve (1 chapter). So credit is given where it is due. But, with that said, this is a pretty good book. It has always been a challenge to find the superman who knows both data warehousing concepts and SharePoint concepts. The data warehousing concepts include basic stuff you need to know to work in the BI area, such as cubes, MDX queries, etc. So chapter 1 covers that – and if you’re a hardcore DBA, feel free to skip Chapter 1. Then beyond that, we take every single SharePoint 2010 BI topic and slice and dice it in detail. The topics we deal with are Visio Services, Reporting Services, Business Connectivity Services, Excel Services and PerformancePoint Services. And in covering each of these topics, we ensure that a general layout was followed for each topic, to ensure completeness of content. We make sure we cover setup-related issues and advice, point-and-click usage, code usage (i.e. extensibility using Visual Studio), and a walkthrough of the administration side of things, including PowerShell. (Yes, I insisted on that being there in every chapter.) Writing a book is always a lot of work, so we hope you find it useful. And it should go very well with the other book I just reviewed, which is Microsoft ADO.NET 4, step by step. Comment on the article ....

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don’t miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I’ll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM. North Carolina State University, Jane S. McKimmon Conference Center 1101 Gorman St Raleigh North Carolina 27606 United States From the Raleigh Event page: Event Overview Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there’s a lot that’s new. Here’s what you can expect: Windows Development with Visual Studio 2010 Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo heavy session, you’ll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You’ll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today’s modern computers. Web and Cloud Development with Visual Studio 2010 If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010 both for in-house ASP.NET development and the new frontier of the Cloud. In this session, you’ll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You’ll see the changes made to testable web sites with MVC 2.0 and how we’ve integrated JQuery support into the platform. You’ll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure. Windows Phone 7 Developer Tools and Platform Overview This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications. Have a day. :-|

    Read the article

  • Stop trying to be perfect

    - by Kyle Burns
    Yes, Bob is my uncle too. I also think the points in the Manifesto for Software Craftsmanship (manifesto.softwarecraftsmanship.org) are all great. What amazes me is that we tend to confuse the term “well crafted” with “perfect”. I'm about to say something that will make Quality Assurance managers, and many development types as well, cringe until you think about it as a craftsman – “Stop trying to be perfect”. Now let me explain what I mean. Building software, as with building almost anything, often involves a series of trade-offs where either one undesired characteristic is accepted as necessary to achieve another desired one (or maybe stave off one that is even less desirable) or a desirable characteristic is sacrificed for the same reasons. This implies that perfection itself is unattainable. What is attainable is “sufficient”, and I think that this really goes to the heart both of what people are trying to do with Agile and with the craftsmanship movement. Simply put, sufficient software drives the greatest business value. I've been in many meetings where “how can we keep anything from ever going wrong” has become the thing that holds us in analysis paralysis. I've also been the guy trying way too hard to perfect some function to make sure that every edge case is accounted for. Somewhere in there, something a drill instructor said while I was in boot camp occurred to me. In response to being asked a question by another recruit having to do with some edge case (I can barely remember the context), he said “What if grasshoppers had machine guns? Would the birds still **** with them?” It sounds funny, but there's a lot of wisdom in those words. “Sufficient” is different for every situation and it’s important to understand what sufficient means in the context of the work you’re doing. If I’m writing a timesheet application (and please shoot me if I am), I’m going to have a much higher tolerance for imperfection than if you’re writing software to control life support systems on spacecraft. I’m also likely to have less need for high volume performance than if you’re writing software to control stock trading transactions. I’d encourage anyone who has read this far to stop trying to be perfect and instead try to create software that is sufficient in every way. If you’re working to make a component that is already sufficient “better”, ask yourself if there is any component left that is not yet sufficient. If the answer is “yes”, you’re working on the wrong thing and need to adjust. If the answer is “no”, why aren’t you shipping and delivering business value?

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-05

    - by Bob Rhubart
    OTN Architect Day - Boston - Sept 12: What to Expect If you've never attended an OTN Architect Day, here's a little preview. You start with a continental breakfast. Then you have keynotes by an Oracle expert, and a member of the Oracle ACE community. After that come the break-out sessions, so you have your choice of two sessions in each time slot. So you'll get in two breakouts before lunch. Then you eat. After that there's a panel Q&A during which the audience tosses questions at the assembled session speakers. Then it's on to another set of break-out sessions, followed by a short break. Then the audience breaks into small groups for round table discussions. After that there's a drawing for some cool prizes, followed by the cocktail reception. All that costs you absolutely zero. Register now. Starting and Stopping Fusion Applications the Right Way | Ronaldo Viscuso While the fastartstop tool that ships with Oracle Fusion Applications does most of the work to start/stop/bounce the Fusion Apps environment, it does not do it all. Oracle Fusion Applications A-Team blogger Ronaldo Viscuso's post "aims to explain all tasks involved in starting and stopping a Fusion Apps environment completely." Dodeca Customer Feedback - The Rosewood Company | Tim Tow Oracle ACE Director Tim Tow shares anecdotal comments from one of his clients, a company that is deploying Dodeca to replace an aging VBA/Essbase application. Configuring UCM cache to check for external Content Server changes | Martin Deh Oracle WebCenter and ADF A-Team blogger Martin Deh shares the background information and the solution to a recently encountered customer scenario. Proxy As Upgrade to 11g Does Not Like NQSession.User | Art of Business Intelligence "In Oracle BI 10g the application was a lot more tolerant of bad design and cavalier usage of variables," observes Oracle ACE Christian Screen. "We noticed an issue recently during an upgrade where the Proxy As configuration in Oracle BI 10g used the NQSession.User variable to identify the user logged into Presentation Servers acting as Proxy." Oracle WebLogic Server 11g: Interactive Quick Reference | Dirk Nachbar Oracle ACE Dirk Nachbar shares a quick post with information on a new interactive reference guide to Oracle WebLogic Server. "The Quick Reference shows you an architecural overview of the Oracle WebLogic Server processes, tools, configuration files, log files and so on including a short description of each section and the corresponding link to the Oracle WebLogic Server Documentation," says Nachbar. Thought for the Day "In fast moving markets, adaptation is significantly more important than optimization." — Larry Constantine Source: Quotes for Software Engineers

    Read the article

  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this: public abstract class GameObject { protected String name; // other properties protected double x, y; public GameObject(String name, double x, double y) { // etc } // setters, getters } I was thinking, since a lot of game objects (e.g. generic monsters) will share the same name, movement speed, attack power, etc, it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information. So whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this: public abstract class GameObject { protected ObjectData data; protected double x, y; public GameObject(ObjectData data, double x, double y) { // etc } // setters, getters public String getName() { return data.getName(); } } So to tailor this specifically for a Monster (could be done in a very similar way for Npcs, etc), I would add 2 classes: Monster, which extends GameObject, and MonsterData, which extends ObjectData. Now I'll have something like this: public class Monster extends GameObject { public Monster(MonsterData data, double x, double y) { super(data, x, y); } } This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would vary with what, say, NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but this might be bad practice: public class Monster extends GameObject { private MonsterData data; // <- this part here public Monster(MonsterData data, double x, double y) { super(data, x, y); this.data = data; // <- this part here } } I've read that, for one, I should generally avoid shadowing the base class's variables. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign this if it is? Thanks in advance for any replies, and sorry about the long question. Hopefully it all makes sense!
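
    A minimal sketch (not from the question) of one commonly suggested alternative: make the shared-data type a generic parameter of the base class, so Monster gets a correctly typed data field without a cast and without a second, shadowing field. ObjectData and MonsterData below are small stand-ins for the classes in the question; getAttackPower() is an invented example accessor.

        // Minimal stubs standing in for the question's shared-data classes.
        abstract class ObjectData {
            abstract String getName();
        }

        class MonsterData extends ObjectData {
            private final String name;
            private final int attackPower; // invented example field
            MonsterData(String name, int attackPower) { this.name = name; this.attackPower = attackPower; }
            String getName() { return name; }
            int getAttackPower() { return attackPower; }
        }

        // The shared-data type becomes a type parameter of the base class.
        abstract class GameObject<D extends ObjectData> {
            protected final D data;   // per-type data, shared between instances
            protected double x, y;    // per-instance state

            GameObject(D data, double x, double y) {
                this.data = data;
                this.x = x;
                this.y = y;
            }

            public String getName() { return data.getName(); }
        }

        class Monster extends GameObject<MonsterData> {
            Monster(MonsterData data, double x, double y) { super(data, x, y); }

            int attackPower() {
                // 'data' is already a MonsterData here: no cast, no duplicate field.
                return data.getAttackPower();
            }
        }

    The point of the sketch is only that the cast and the shadowed field go away; how much else lives in GameObject versus separate components is a separate design decision.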

    Read the article

  • Power Your Cloud with Oracle Fusion Middleware

    - by user753488
    Introducing the biggest and most strategic event for Fusion Middleware this year: Power your Cloud with Oracle Fusion Middleware. Running in over 50 cities across the globe, this event is aimed at Architects, IT Managers, and technical leaders like you who are using Fusion Middleware or trying to learn more about middleware in the context of Cloud computing. Join us for a special kickoff on Wednesday, June 29th in Chicago for the first event in North America. This event features an exclusive keynote from Rick Schultz, VP of Technology Product Marketing. Cloud is certainly all the rage. But what can we make of it? According to Alex Andrianopoulos, Vice President Product Marketing for Fusion Middleware states, “Not since Java was unveiled have we seen something so transformative hit the industry. The promised benefits of Cloud are many, significant, and deliver value to both IT organizations as well as the Line of Business. The benefits range from lower data center costs, to significantly reduced environmental impact, to the ability to capture more of the opportunities that market present through increased agility in resource deployment and dramatically reduced time to market.” With an ROI so promising, why isn’t everyone on Cloud already? It’s a question a lot of IT managers are struggling with. While the promised benefits of Cloud computing can be immense, achieving them requires much more than the adoption of a new architecture, or the virtualization of servers, or the outsourcing of some or all of the IT resources. These may be useful steps towards moving to a Cloud computing blueprint, but on their own do not deliver Cloud computing and its associated benefits to the enterprise. This is exactly what we’ll be addressing in the event series, ways you can leverage Complete, Open and Integrated capabilities of Oracle Fusion Middleware today to get one step closer to Cloud. Whether you’re: Leveraging Exalogic Elastic Cloud to consolidate your applications Improving agility with Oracle SOA to generate a foundation for shared data services Securing and managing your Cloud using Oracle Identity Management and Oracle Enterprise Manager Migrating from mainframe to Cloud using Oracle Tuxedo, Coherence and GoldenGate Building applications in the Cloud swiftly and easier with Oracle’s WebCenter Suite Join us for the first of its kind event in Chicago this week by registering now, or find an event near you. Learn more about Oracle Fusion Middleware and Cloud computing today on the Oracle.com website by going to http://www.Oracle.com/goto/Middleware4Cloud

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Booting Ubuntu on HP Pavilion g7 - 13.04 [duplicate]

    - by death2040
    This question already has an answer here: My computer boots to a black screen, what options do I have to fix it? 24 answers I have an HP Pavilion G7 with an AMD A4 processor and Radeon graphics. I want to install Ubuntu on my laptop, but whenever I put the Ubuntu live CD in it and boot to it, the screen shows the Ubuntu logo and the four little dots, and then after about a minute or two the screen goes black. I can tell the screen is still on but it doesn't have anything on it. I'm beginning to wonder if it's a driver problem, but I can't really install the drivers when I can't even get Ubuntu to show anything except a loading screen. I've already tried using 12.04 and 12.10 and all the others down to Ubuntu 10. None of them worked. All the other versions don't even show the Ubuntu logo. I'd prefer to have Ubuntu 13.04 on it if it's possible, but I haven't had any luck finding a solution. I've also tried using the WUBI installer in Windows 7, but all that did was make my computer slower for Windows, and it does the same with the screen when I boot it to Ubuntu. I'm trying to use Ubuntu alongside Windows 7. I can't find any solution on Google. It won't load anything, and I know that there is a program called grub on Ubuntu that I used on my desktop computer when it had graphics trouble, but the trouble with my desktop was minor things like the screen would flash and then show weird patterns on the screen. But I can't find anything on what to do with the HP laptop. Please help. I use this laptop a lot for games on Windows 7 and I just want to use Ubuntu for when I take my laptop to school and for school stuff. Edit: I just tried booting it in nomodeset and some other things and it still didn't work. It did boot up, but now when it goes to install alongside Windows it crashes and says Ubuntu is forcing a reboot or something like that. Also, this question is different from the black screen at boot issue because when I do use nomodeset on my computer and select Install Ubuntu, it will go as far as the screen where you can choose to replace Windows or run alongside Windows. Then after I click continue it ejects the live CD and turns off my computer without installing anything. The error message it shows when it ejects the disk says signal 15, shutting down - modem manager [1675]: <info> Caught nm-dispatcher.action: Caught signal 15, shutting down... *Deconfiguring network interfaces... Please remove installation media and close the tray (if any) then press ENTER *Deactivating swap... *Stopping remaining crypto disks... *stopping early crypto disks... unmount: /run/lock: not mounted unmount: /run/shm: not mounted

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained. Orchestration or Integration? A common thing to see with many people who are beginning to either build a new SOA based infrastructure, or move an old system to be service oriented, is confusion in the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it’s important to remember that, although a hammer can be used to drive a screw into wood, that doesn’t mean it’s the best way to do it. Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might include the earlier example of a product ordering service and couple it with a business rules service and human task to handle edge-cases. A good (but not exhaustive) rule-of-thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention. BPEL vs BPMN For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it’s certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA suite has no BPMN engine, whereas BPM suite has both a BPMN and a BPEL engine. SOA suite has the ESB component “Mediator”, whereas BPM suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,SOA best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How can a solo programmer become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    The question is about default values in general - default function return values, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in very hard to find bugs. But recently I started to think about default values as some sort of technical debt... which is not an outright bad thing but something that could provide some "short term financing" to get us through the project (how many of us could afford to buy a house without taking out a mortgage?). When I say "short term" - I don't mean "do something quickly first and refactor it out later before it hits production". No - I am talking about relying on hardcoded default values in production software. Granted - it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again - I am talking about the "average" mainstream software here (not software for a nuclear power station) - the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But again - the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem had recently been upgraded and as a result of this upgrade it did not handle the default correctly (e.g. empty list vs null, or null string vs empty string). So my question is - are default values good or evil? And if they are a technical debt - how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers. EDIT: If I am using default values as a way to cut corners during development - and if the corner cutting results in bugs and issues - what is the methodology for recovering from those issues?
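
    As a small, hypothetical illustration of the trade-off described above (not code from the post), here is the same missing configuration value handled two ways in Java: a silent hardcoded default that keeps the application running but hides a misconfiguration, versus a fail-fast lookup that surfaces the problem immediately.

        import java.util.Map;
        import java.util.Optional;

        class Settings {
            private final Map<String, String> values;
            Settings(Map<String, String> values) { this.values = values; }

            // "Technical debt" style: a silent default keeps the app running,
            // but a typo in the key name ("time_out" vs "timeout") is never noticed.
            int timeoutMsWithDefault() {
                return Integer.parseInt(values.getOrDefault("timeout", "30000"));
            }

            // Fail-fast style: the misconfiguration surfaces immediately, at the cost
            // of the application refusing to run until the value is supplied.
            int timeoutMsRequired() {
                return Optional.ofNullable(values.get("timeout"))
                        .map(Integer::parseInt)
                        .orElseThrow(() -> new IllegalStateException("Missing required setting: timeout"));
            }
        }

    The first variant is exactly the kind of short-term financing the post describes: cheap now, and only expensive on the day the hidden assumption stops being true.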

    Read the article

  • Oracle went back to school !....

    - by Cristina Ciocoiu
    I am Georgiana, Contracts Manager for Oracle University and Advanced Customer Services in Romania. I started working for Oracle 4 years ago as a Contracts Specialist. Two years ago I became a manager of a team of 9 Contracts Specialists. On a sunny day in March some members of my team visited the students of the Academy of Economic Studies, accompanied by Recruitment colleagues. This was part of a new initiative to raise awareness of career opportunities at Oracle. We spent approximately 2 hours illustrating and explaining different aspects of the day-to-day activities of an Oracle Contracts Specialist to the future graduates of the Academy. Role Play Since a role play is worth 1000 job descriptions, the audience witnessed an entertaining performance on the contracting process, from the phase of negotiation with the customer to the actual signing of the contract. The main focus was on the role of the Contracts Specialist liaising with all the groups involved and ensuring that the contract is compliant with Oracle policies while generating the expected revenue. However, the team took other roles as well, i.e. Sales Representative, Customer, Business Approver and Lawyer, to demonstrate their role in the process. As each of these roles only has a small slice of the big pie, it is vital to understand what happens before and after you come on stage as a Contracts Specialist. Contracts Specialist Being a Contracts Specialist goes beyond simply knowing what policies apply; it means understanding Oracle’s core business model, understanding customers’ requests and addressing them in the most effective way. The job also involves connecting smaller teams that are often geographically dispersed across multiple regions so that they become a bigger, stronger and successful team. You are the expert in this key position that can facilitate the closing of a deal or stop it from happening if the risk is too high. The role play provided insights on both. Why I love this job Events of this kind are sometimes just as useful for the "recruiters" as for the "recruits". For me, as a presenter, it was an excellent opportunity to think about the many reasons why I love what I do in the Contracts department every day and to share this with the students. I wanted to explain to the audience, who are still considering education and career possibilities, that what we do in Contracts DOES make a difference. You have the power to achieve targets that you did not think reachable before. Working in the dynamic Oracle environment shapes you as a person and there is a lot to take away from this experience. Looking back to my years in the Academy (I graduated from the Academy myself), I wish I could have listened to more people talking about their great jobs and about how I could get there. If those were Oracle people I might have been writing this article sooner. :) If you are interested in joining the Contracts team please click here for more information or contact lavinia.protopopescu-AT-oracle-DOT-com. You can find all openings in Romania via http://campus.oracle.com

    Read the article

  • Express your personality and potential @ Oracle

    - by jessica.ebbelaar(at)oracle.com
    Ciao, my name is Michel and I am a 24 year old guy from Forlì, Italy, working as a Business Intelligence Business Development Consultant in Rome. After I completed the Bachelor's Degree in Business Administration at Bologna University, I took a Multiple Master of Science in International Management organized by three European Universities: Bologna University (IT), ICN Business School of Nancy (FR) and Uppsala University (SE).I therefore had the chance to travel a lot and, most important, to study and meet hundreds of people from all over the world. This experience enhanced the passion I foster for international environments, different cultures and countries; not to mention the learning of foreign languages. Working for such a structured multinational as Oracle totally reflects my desire to be surrounded by a multicultural and international atmosphere, having the opportunity to grow from the personal point of view and to endlessly boost my career path. Demand Generation My department is responsible for demand generation activities. That implies, for instance, the implementation of various strategies aimed to feed the pipeline for Business Intelligence products in the Italian market. Organization of marketing campaigns, events, providing ideas or contacts to the sales force is just a few examples of our work. I like to define the role of the business development as something that translates the marketing insights into tools to increase the sales, accounting the differences amongst countries, companies and industries. Furthermore, it is an important feature to collaborate with the EMEA team to share knowledge and best practices. My initial lack of an IT background has been constantly covered by the managers and my personal mentor. The thing I appreciated most is indeed the fact I always feel to be a growing potential, becoming essential day after day. I am surprised by the trust and confidence people have on me and how they proudly encourage my personal initiative and always spur me to contribute. Career Ambitions If your ambitions are to work within an international but extremely people focused environment, to contribute to the growth of one of the most successful companies in the world, to deal with a fast-paced industry and highly competitive market, to have the chance to fully express your personality and potential and to satisfy your career ambitions over the years, then Oracle is right for YOU. Looking forward to having YOU aboard! Do you want to find out more about the open roles within Oracle? Follow us on http://campus.oracle.com.

    Read the article

  • A strong component keeps everything together

    - by Justin Paul-Oracle
    Most of the times you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or Database Objects (like new tables, views, indexes, etc). I have seen that libraries and Database Objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directory (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component or if you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common that some of these manual steps are missed when you have multiple teams and people working on the system. Here are a few points to ponder upon: Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to add this new directory. Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency to this library component by specifying the comma separated list of features in the Advanced Build Settings. The component wizard allows you to create custom install/uninstall Java code. The wizard will create a install filter class when you check the “Has Install” checkbox on the “Install/Uninstall Settings” tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component. If you do a lot of custom component development, consider creating a install/uninstall Java class, which can execute queries defined within the component. To sum up, whenever you write a new custom component, make sure that you bundle everything within the component.

    Read the article

  • How can I keep directories in sync

    - by Guillaume Boudreau
    I have a directory, dirA, that users can work in: they can create, modify, rename and delete files & sub-directores in dirA. I want to keep another directory, dirB, in sync with dirA. What I'd like, is a discussion on finding a working algorithm that would achieve the above, with the limitations listed below. Requirements: 1. Something asynchronous - I don't want to stop file operations in dirA while I work in dirB. 2. I can't assume that I can just blindly rsync dirA to dirB on regular interval - dirA could contain millions of files & directories, and terrabytes of data. Completely walking the dirA tree could take hours. Those two requirements makes this really difficult. Having it asynchronous means that when I start working on a specific file from dirA, it might have moved a lot since it appeared. And the second limitation means that I really need to watch dirA, and work on atomic file operations that I notice. Current (broken) implementation: 1. Log all file & directory operations in dirA. 2. Using a separate process, read that log, and 'repeat' all the logged operations in dirB. Why is it broken: echo 1 > dirA/file1 # Allow the 'log reader' process to create dirB/file1: log = "write dirA/file1"; action = cp dirA/file1 dirB/file1; result = OK echo 1 > dirA/file2 mv dirA/file1 dirA/file3 mv dirA/file2 dirA/file1 rm dirA/file3 # End result: file1 contains '1' # 'log reader' process starts working on the 4 above file operations: log = "write file2"; action = cp dirA/file2 dirB/file2; result = failed: there is no dirA/file2 log = "rename file1 file3"; action = mv dirB/file1 dirB/file3; result = OK log = "rename file2 file1"; action = mv dirB/file2 dirB/file1; result = failed: there is no dirB/file2 log = "delete file3"; action = rm dirB/file3; result = OK # End result in dirB: no more files! Another broken example: echo 1 > dirA/dir1/file1 mv dirA/dir1 dirA/dir2 # 'log reader' process starts working on the 2 above file operations: log = "write file1"; action = cp dirA/dir1/file1 dirB/dir1/file1; result = failed: there is no dirA/dir1/file1 log = "rename dir1 dir2"; action = mv dirB/dir1 dirB/dir2; result = failed: there is no dirA/dir1 # End result if dirB: nothing!

    Read the article

  • links for 2011-01-12

    - by Bob Rhubart
    WebCenter Spaces 11g PS2 Template Customization (Javier Ductor's Blog) "Recently, we have been involved in a WebCenter Spaces customization project. A customer sent us a prototype website in HTML, and we had to transform Spaces to set the same look and feel as in the prototype..." Javier Ductor (tags: oracle otn webcenter enteprise2.0) Matt Carter: Risky Business "Incorporating risk detection and mitigation capabilities into apps is becoming all the rage. There are plenty of real-life examples of cases where prevention of cyber-security threats and fraudsters might have kept governments and companies out of the news, and with more money in their accounts." (tags: oracle otn security middleware) John Brunswick: 5 Surprisingly Good Benefits of Corporate Blogs "Some may still propose that not all corporations are going to be able to provide the five benefits above and are more focused around shameless self promotion of products and services.  If that is the case, that corporation is most likely not producing something of high value." - John Brunswick (tags: oracle otn enterprise2.0 blogging) InfoQ: IT And Architecture: Inside-Out Perspectives The software industry is in disarray, costs are escalating, and quality is diminishing. Promises of newer technologies and processes and methodologies in IT are still far from materializing on any significant scale. Bruce Laidlaw and Michael Poulin - each with more than 30 years of experience compared notes on the past and present of IT and provide insights on what IT needs to make progress. (tags: ping.fm) SOA & Middleware: Canceling a running composite instance - example Useful tips from Niall Commiskey. (tags: soa middleware oracle) BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations (Oracle E-Business Suite Technology) "A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3." -- Steven Chan (tags: oracle bpel) Marc Kelderman: OSB: Deploy Service Level Agreement (SLA), aka Alert Rule "The big issue with these SLAs is the deployment. If you have dozens of services, with multiple operations, and you have a lot of environments it takes a while to create them...[But] I have a nice workaround." - Mark Kelderman  (tags: oracle otn soa osb sla) @myfear: Java EE 7 - what's coming up for 2012? First hints. "Even if the actual Java EE 6 version is still not too widespread, we already have seen the first signs of the next EE 7 version written to the sky." -- Markus "myfear" Eisele (tags: oracle otn oracleace java)

    Read the article

  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, most especially in stock trading applications, where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications which are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of tuning parameters to redress the imbalance is critical for applications that are latency sensitive. In this blog we present how to further tune a default Oracle Solaris 10 installation to reduce network latency. Many parameters can be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80–90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time.

    You can set these parameters by using the ndd command as follows:

        # ndd -set /dev/eri intr_blank_time 0
        # ndd -set /dev/eri intr_blank_packets 0

    You can add them to the /etc/system file as follows:

        set eri:intr_blank_time 0
        set eri:intr_blank_packets 0

    The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that, under controlled experiments, the above settings reduce network latency by at least 50%; on a two-socket 3GHz Sun Fire X4170 M2 running Solaris 10 Update 9, the above settings improved ping-pong latency from 60µs to 25-30µs with the on-board NIC.

    Read the article

  • How can I solve the same problems a CB-architecture is trying to solve without using hacks? [on hold]

    - by Jefffrey
    A component based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (called components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples; there are a lot of abstract talks about the "perfect" design already.) Here's an example I was going for before realizing I was just reinventing inheritance again:

        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human specific components
        };

        class Zombie {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie specific components
        };

    After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I've thought of polymorphism (move what systems do in a CBS design into class specific functions):

        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };

        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

    Which was nice, except for the fact that now the outside world would not even be able to know where a Human is positioned (it does not have access to its position member). That would be useful to track the player position for collision detection, or if on_update the Zombie wanted to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that this functionality was shared by both, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).
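    For contrast, here is a minimal sketch (an illustration added here, not the poster's code) of the composition route the question keeps circling around: the entity is nothing but a bag of components, and the shared behaviour lives in free-standing "systems" instead of a god Entity base class. The component and system names are made up for the example.

        #include <memory>
        #include <typeindex>
        #include <typeinfo>
        #include <unordered_map>
        #include <vector>

        struct Position { float x = 0, y = 0; };
        struct Movement { float dx = 0, dy = 0; };

        class Entity {
            std::unordered_map<std::type_index, std::shared_ptr<void>> components;
        public:
            template <typename C>
            void add(C component) {
                components[typeid(C)] = std::make_shared<C>(std::move(component));
            }
            template <typename C>
            C* get() { // nullptr when the entity lacks the component
                auto it = components.find(typeid(C));
                return it == components.end() ? nullptr
                                              : static_cast<C*>(it->second.get());
            }
        };

        // A "system": logic that works on any entity carrying the components
        // it needs. No inheritance, and no Entity::get_position() in sight.
        void movement_system(std::vector<Entity>& entities, float dt) {
            for (auto& e : entities) {
                auto* pos = e.get<Position>();
                auto* mov = e.get<Movement>();
                if (pos && mov) {
                    pos->x += mov->dx * dt;
                    pos->y += mov->dy * dt;
                }
            }
        }

        int main() {
            std::vector<Entity> entities(2);
            entities[0].add(Position{0, 0});  // a "human": position + movement
            entities[0].add(Movement{1, 0});
            entities[1].add(Position{5, 5});  // a "zombie" with no movement yet
            movement_system(entities, 0.016f);
        }

    Whatever needs a position (collision detection, the zombie chasing the nearest human) asks get<Position>() on the entity it holds, and entities lacking a component are simply skipped, so nothing accumulates in a common base class.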

    Read the article

  • What are some reasonable stylistic limits on type inference?

    - by Jon Purdy
    C++0x adds pretty darn comprehensive type inference support. I'm sorely tempted to use it everywhere possible to avoid undue repetition, but I'm wondering if removing explicit type information all over the place is such a good idea. Consider this rather contrived example:

    Foo.h:

        #include <set>

        class Foo {
        private:
            static std::set<Foo*> instances;
        public:
            Foo();
            ~Foo();
            // What does it return? Who cares! Just forward it!
            static decltype(instances.begin()) begin() { return instances.begin(); }
            static decltype(instances.end()) end() { return instances.end(); }
        };

    Foo.cpp:

        #include <Foo.h>
        #include <Bar.h>

        // The type need only be specified in one location!
        // But I do have to open the header to find out what it actually is.
        decltype(Foo::instances) Foo::instances;

        Foo::Foo() {
            // What is the type of x?
            auto x = Bar::get_something();
            // What does do_something() return?
            auto y = x.do_something(*this);
            // Well, it's convertible to bool somehow...
            if (!y) throw "a constant, old school";
            instances.insert(this);
        }

        Foo::~Foo() {
            instances.erase(this);
        }

    Would you say this is reasonable, or is it completely ridiculous? After all, especially if you're used to developing in a dynamic language, you don't really need to care all that much about the types of things, and can trust that the compiler will catch any egregious abuses of the type system. But for those of you who rely on editor support for method signatures, you're out of luck, so using this style in a library interface is probably really bad practice. I find that writing things with all possible types implicit actually makes my code a lot easier for me to follow, because it removes nearly all of the usual clutter of C++. Your mileage may, of course, vary, and that's what I'm interested in hearing about. What are the specific advantages and disadvantages of radical use of type inference?
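    As a point of comparison, one commonly cited middle ground is to use auto where the initializer already names the type (or where the exact type is pure noise, like iterators) and to keep explicit types on interfaces and anywhere the reader would otherwise have to open another header to find out what they are holding. A short sketch of that rule of thumb, with made-up names:

        #include <map>
        #include <memory>
        #include <string>
        #include <vector>

        int main() {
            // Fine: the type is already stated on the right-hand side.
            auto widget = std::make_shared<std::string>("w");

            // Fine: the iterator's spelled-out type adds clutter, not information.
            std::map<std::string, std::vector<int>> index;
            for (auto it = index.begin(); it != index.end(); ++it) {
                // ...
            }

            // Questionable: nothing on this line tells the reader what came back,
            // so an explicit type (or a better-named function) earns its keep:
            //   auto y = x.do_something(*this);
            (void)widget;  // silence the unused-variable warning in this sketch
        }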

    Read the article

< Previous Page | 519 520 521 522 523 524 525 526 527 528 529 530  | Next Page >