Search Results

Search found 1504 results on 61 pages for 'pros and cons'.

Page 49/61 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND We are building a Java (SE) trading application which will be monitoring market data and sending trade messages based on the market data, and also on user-defined configuration parameters. We are planning to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user). The client has about 6 different operations it needs to perform with the server, for example: CRUD with configuration parameters; querying a subset of the data; receiving updates of current positions from the server. It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering: a SOAP web service; a RESTful service; or building a custom TCP/IP based API (text or XML) (least preferred, but we use this approach with other applications we have). As best as I understand, the pros and cons of the different web service flavors are: SOAP pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just running a refresh on the Web Service reference to regenerate the classes. con: more overhead in the communication layer, sending more text, etc. We're not using a J2EE container, so maybe it doesn't work so well with J2SE. REST pro: lighter weight, less data. Has good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.) con: the client will not automatically be aware if any new operations or properties are added (?), so the communication layer needs to be updated by a developer if the server interface changes. con: (both approaches) the server cannot really push updates to the client at regular intervals (?) (However, we won't mind if the client polls the server to get updates.) QUESTION What are your opinions on the above options, or suggestions for other ways to connect the two parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
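
    Since the client is happy to poll, one way to prototype the RESTful option with no J2EE container at all is the JDK's built-in com.sun.net.httpserver. The sketch below is only an illustration of that idea, not the poster's design: the /positions path, the port and the JSON payload are invented for the example.

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;

        // Minimal polling endpoint: the thin client fetches current positions as JSON.
        public class PositionEndpoint {
            public static void main(String[] args) throws IOException {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/positions", PositionEndpoint::handle);
                server.setExecutor(null); // default executor is fine for a sketch
                server.start();
            }

            private static void handle(HttpExchange exchange) throws IOException {
                // A real server would serialize its live trading state here.
                byte[] body = "{\"AAPL\": 120, \"MSFT\": -50}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            }
        }

    The WPF client could then hit that URL on its few-second timer (HttpClient/WebClient) and deserialize the JSON; adding an operation means adding a context, which keeps the communication layer thin.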

    Read the article

  • links for 2011-03-17

    - by Bob Rhubart
    Siba Prasad: Oracle Database on Amazon RDS Siba Prasad shares an analysis of the pros and cons. (tags: oracle database cloud amazon) LIVE WEBCAST March 24 2pm PT - Why Switch from Red Hat and SUSE Linux to Oracle Linux? (Oracle's Linux Blog) Featuring Oracle's Monica Kumar, Sr. Director of Linux, Oracle VM and MySQL, and Avi Miller, Principal Sales Consultant, Linux and Virtualization. (tags: oracle linux) Webcast: IBM SOA vs. Oracle SOA, March 24, 1pm ET / 10am PT Maneesh Joshi and Bruce Tierney guide you to a solid understanding of the differences between the Oracle and IBM approach to comprehensive SOA. (tags: oracle soa bpm) Finding the Right Solution to Source and Manage Your Contractors (PeopleSoft Apps Strategy) "Talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services." - Mark Rosenberg (tags: oracle peoplesoft enterprisearchitecture) Oracle Business Intelligence Customers: Have Your Voice Heard in the "2011 Wisdom of the Crowds Business Intelligence Market Survey" (BI & Analytics Pulse) "The Wisdom of the Crowds survey combines social media, crowd sourcing, and good old fashioned market research to provide vendors and customers alike an unvarnished and insightful snapshot of what's top of mind with business intelligence professionals." (tags: oracle businessintelligence) Martin Bach: Troubleshooting Grid Infrastructure startup Martin Bach hunts down the problem that caused one of his blades to reboot after an EXT3 journal error. (tags: oracle grid rac) Oracle WebCenter: Social Networking & Collaboration (Oracle Enterprise 2.0 Blog) Kelley Ruppel with information on "how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration." (tags: oracle webcenter enterprise2.0 collaboration) VirtaThon: 100% Virtual Java/Oracle/MySQL Conference! | Bex Huff "The goal is simple," says Oracle ACE Director Bex Huff. "Because it's all online, the conference is very cheap. Pricing is not yet announced... but it should be around $300. Also, unlike other conferences, every speaker gets paid a small fee depending on the popularity of his or her session." (tags: oracle oracleace java mysql) Griffiths Waite Blog: BPM 11g PS3 GW's Ian Heathcock shares a link to "a most interesting article on Oracle's recent release discussing the new features and how PS3 adds value to the whole SOA message." (tags: oracle soa) The Buttso Blathers: Tutorial: JSF 2.0 and JPA 2.0 with WebLogic Server using NetBeans Should you take application architecture advice from a man named Buttso? In this case, yes. (tags: oracle jsf jpa weblogic) Setting-up a High Available Tuned SOA Environment Middleware Magic (tags: ping.fm) How to Configure Weblogic Messaging Bridge with JBoss Middleware Magic (tags: ping.fm Weblogic JBoss) Richard Veryard on Architecture: Emergent Architecture (tags: ping.fm entarch emergence)

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available - Use an internal OUD capability called the Replication Gateway - Use our synchronization tool called Directory Integration Platform, part of Oracle Directory Services Plus - Manual export and import. Let's check the pros and cons of each method. Replication Gateway is the natural, out-of-the-box solution to perform the task. We created this as a feature of OUD because it works at our replication protocol level. The gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories. This fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modifications on existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The OUD Replication Gateway does not require any specific tools or installation-specific procedures. It is managed like other OUD components, with monitoring and configuration via the standard console. On the other hand, the OUD Replication Gateway does not perform data remapping or transformation between ODSEE and OUD. Using the Directory Integration Platform as an external component to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay in using DIP to perform the synchronization task. You will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk and CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances). You will not benefit from conflict resolution management, and this might have to be addressed at the application level, which is not always possible to implement. Using export and import seems very simple, but this methodology cannot ensure a highly available deployment with up-to-date entries on both sides. This solution can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used if data cleaning needs to take place to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services that has been introduced in SQL Server 2012. During these roadshows, we always try to arrange some speeches at local community events in the evening - we have already defined one for Copenhagen; we have some logistic issues in Amsterdam that we're trying to solve. Here is the timetable: Netherlands SSAS Workshop in Amsterdam, NL – April 16-17, 2012 2-day seminar, Alberto and I will be the trainers for this event, register here We're trying to arrange a community event but we still don't have a confirmation, stay tuned Denmark SSAS Workshop in Copenhagen, DK – April 26-27, 2012 2-day seminar, Alberto and I will be the trainers for this event, register here Community event on April 26, 2012 This event will run in Hellerup, at the Microsoft venue All details available here: http://msbip.dk/events/26/msbip-mode-nr-5/ People from Sweden are welcome! Just register to this private group on LinkedIn in order to announce your presence, so we'll know how many people will attend In community events we'll deliver two speeches – here are the descriptions: Inside xVelocity (VertiPaq) PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize the memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind. xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones! Choosing between Tabular and Multidimensional You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time either decision might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each one according to common scenarios, considering both short-term and long-term consequences of your choice. I hope to meet many people at these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight – stay tuned…

    Read the article

  • TechEd 2012: Recap

    - by Tim Murphy
    TechEd this week was a great experience and I wanted to wrap it up with a summary post. First let me say a thank you to John and Jeff from GWB for supplying power, connectivity and a place to work in between sessions. The blogging hub was a great experience in itself. Getting to talk with other bloggers and other conference goers turned into a series of interesting conversations. And where else can you almost end up in the day 1 highlights video? The sessions at TechEd were a mixed bag of value. The keynotes rocked, both figuratively and literally, and most of the sessions that I went to were a good experience and had gems of information to take away. There were a few exceptions though. A couple of the sessions turned out to be sales jobs. Nothing turns me off more than that (there will be some really honest comments on those surveys). TechEd reinforced for me that much of the value is not in the sessions, but in the networking opportunities. I got to talk with several Microsoft team members and MVPs as well as some of the vendor representatives for companies like InRule and ComponentOne. I also got to expand both my local and extended community with discussions at meal times and while waiting for sessions to start. I think this is one of the benefits that a lot of people don't take advantage of at these conferences, and it should be a bigger part of the advertising. Exposure to a wide variety of topics, many of which I had not been able to make time for up to this point, was invigorating. The list of topics includes: Office 365, Windows Server 2012, Windows 8, Metro, Azure. I can't wait to get back to work and dig into these subjects in more depth. The one complaint that I had and heard from other attendees was that there weren't enough sessions that were actually about development. I realize that TechEd started as an event for IT Pros, but there needs to be more value for the devs. It all went by too fast and it will take a couple more days to digest the material, but the batteries are recharged and I'm ready to leverage what I've learned. Hopefully we will do it again next year. del.icio.us Tags: TechEd,TechEd 2012

    Read the article

  • SharePoint Thoughts

    - by Tim Murphy
    I was listening to .NET Rocks episode #713 and it got me thinking about a number of SharePoint related topics. I have been working with SharePoint since the 2001 product came out and have watched it evolve over the years. Today SharePoint is one of the most powerful and flexible products in the market. Of course that doesn't mean there isn't room for improvement (a lot of improvement in fact) and with much power comes much responsibility. My main gripe these days is that you have to develop on a server instance. This adds a real barrier to entry for developers. You either have to run VMware or Hyper-V on your developer machine or actually develop on your dev server for most tasks. Yes, there is a way to set up a Windows 7 machine with the SharePoint components but it is very hackish. Beyond that, the tools in VS2010 are a great leap forward from past generations. Not requiring a separate package creation tool is not the least of the improvements. Better workflow and web part development have also eased the burden of many developers. The other thing the show brought up in my thoughts was more around usage. Users want to be able to self-serve everything without regard to what effect that has on leveraging their data from a corporate perspective. My coworkers who work on Lotus Notes ask why the user can't just do whatever they want. Part of the reason is that those features have not been built, but the other part is that giving them those features is often like giving an infant a loaded handgun. You can do it but it doesn't make it the smart thing to do. As with any tool that is going to be used in the enterprise, it should be subject to governance. If controls are not in place, as they said in the episode of DNR, the document libraries (and, I believe, SharePoint in general) start to look as disarrayed and unusable as a shared drive. Consider these factors before giving in to every whim of the users. You should be able to explain to them the tradeoffs of giving them full control versus being able to leverage the information they collect to the benefit of the organization. These are just a couple of the thoughts that were triggered by the show. I'm sure there are more discussions that can be had. Feel free to leave your comments about the pros and cons of SharePoint. del.icio.us Tags: .NET Rocks,SharePoint,software development

    Read the article

  • Microsoft BUILD 2013 – Day 1 Summary

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013ndashday-1-summary.aspx I'm happy to be at BUILD this week, mainly because my flights finally got me here late on Tuesday. My biggest complaints so far are the flights and the hotel. It seems that almost every flight into San Francisco was delayed multiple hours. The Sequester so lovingly forced on America by Congress means that the airport was short on controllers. That, along with poor weather and airport construction, meant most people were 2-3 hours late arriving. Add on top of that the fact that the hotel that I picked during registration is absolutely horrid. It looks like something out of a ghost hunters show and smells like it too. I think if Microsoft is going to select a hotel they need to make sure that it is adequate. Rant over! So what happened the first day? Steve Ballmer started off the keynote along with Julie Larson-Green and a cast of others. We finally found out that there were around six thousand people attending BUILD and that the focus this year would be Windows 8, Windows Phone 8 and Azure. For the rest of the keynote I am going to have a separate post. You can't have a Microsoft conference without some fun. This year they have a hunt for pins that represent different gestures in Windows 8. I got all of mine. Now they just need to pull my name. The sessions I attended were really good. They covered live tiles, what's new in XAML and building Windows Phone UIs, presented by Kraig Brockschmidt, Tim Heuer and Shawn Oster respectively. These will also be covered in separate posts. The exhibit area was interesting, but somewhat disappointing. TechEd 2012, I think, was better organized and better staffed by the vendors. It also seemed that the Microsoft teams' booths were in need of some organization and staffing. Overall it was a really fun day, capped off by all six thousand attendees standing in line to get their Acer 8" tablets and Surface Pros. What a day! Stay tuned for follow-up posts. del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote

    Read the article

  • How do I take responsibility for my code when a colleague makes unnecessary improvements without notice?

    - by Jesslyn
    One of my teammates is a jack of all trades in our IT shop and I respect his insight. However, sometimes he reviews my code (he's second in command to our team leader, so that's expected) without a heads up. So sometimes he reviews my changes before they complete the end goal and makes changes right away... and has even broken my work once. Other times, he has made unnecessary improvements to some of my code that is 3+ months old. This annoys me for a few reasons: I am not always given a chance to fix my mistakes He has not taken the time to ask me what I was trying to accomplish when he is confused, which could affect his testing or changes I don't always think his code is readable Deadlines are not an issue and his current workload doesn't require any work in my projects other than reviewing my code changes. Anyways, I have told him in the past to please keep me posted if he sees something in my work that he wants to change so that I could take ownership of my code (maybe I should have said "shortcomings") and he's not been responsive. I fear that I may come off as aggressive when I ask him to explain his changes to me. He's just a quiet person who keeps to himself, but his actions continue. I don't want to banish him from making code changes (not like I could), because we are a team--but I want to do my part to help our team. Added clarifications: We share 1 development branch. I do not wait until all my changes complete a single task because I risk losing some significant work--so I make sure my changes build and do not break anything. My concern is that my teammate doesn't explain the reason or purpose behind his changes. I don't think he should need my blessing, but if we disagree on an approach I thought it would be best to discuss the pros and cons and make a decision once we both understand what is going on. I have not discussed this with our team lead yet as I would prefer to resolve personal disagreements without getting management involved unless it is necessary. Since my concern seemed more of personal issue than a threat to our work, I chose to not bother the team lead. I am working on code review process ideas--to help promote the benefits of more organized code reviews without making it all about my pet peeves.

    Read the article

  • SQL Saturday 194 - Exeter

    - by Dave Ballantyne
    Many kudos go to Jonathan and Annette Allen and the others on the team for confirming SQL Saturday 194 in Exeter on the 8th and 9th of March. The event home page is here http://www.sqlsaturday.com/194/eventhome.aspx and I'm delighted that Dave Morrison and I will be presenting a full-day pre-con on the 8th on favourite subjects "TSQL and Internals". Here is the full abstract: TSQL and internals - When faced with performance issues there are many lines of attack. Tuning the engine itself can get you so far; however, for maximum effect you need to understand how the engine works and how it translates SQL statements into performable actions. This is not a simple task; it is a massive task to deal with a multi-table join, and the number of permutations can be immense. To back up this knowledge, we can create better performing TSQL, understand the impact that it has upon the engine, and recognize the pitfalls and gotchas that exist in SQLServer. Ultimately, there is no 'best way' to perform a single task, only many variations of 'it depends', but now we can pick the most appropriate option for the required data load. Over the years, many myths and misconceptions have grown around the product; some have a basis in older versions and some are just wrong. Continuing to build on the knowledge given so far, these issues will be explored, broken down, and proved or disproved. Finally we will look to the future and explore SQL Server 2012 and the new functionality that it brings and some of the common uses that we will be able to address. After completion of this day's pre-con, attendees will have a more complete knowledge of execution plans, and how they relate to the physical and logical actions that SQLServer will be executing on their behalf. The attendees will also have a more rounded and fuller knowledge of TSQL and the implications of incorrectly defining a query. Dave is a fountain of knowledge on execution plans and optimizer internals and, though I may flatter myself, I'm no shrinking violet when it comes to TSQL and such matters. I hope that if you can't join us, then there are other pre-cons available from other experts in their fields that may 'float your boat' too. The pre-con page is http://sqlsouthwest.co.uk/SQLSaturday_precon.htm Also, excitingly, this pre-con day is sponsored by Fusion-IO which is a great boon for the day. If you want more of this then I am offering a 2-day TSQL course starting on the 19th of March. More details on this are available here

    Read the article

  • To make or not to make...python-nautilus a dependency?

    - by George Edison
    That is the question! Okay, all silliness aside, I really am forced to make a difficult decision here. My application is written in C++ and allows other scripts to invoke methods via XML-RPC. One of these scripts is a Nautilus extension written in Python. The extension is packaged with the rest of the application and copied to the appropriate place when installed (/usr/share/nautilus-python/extensions). Now the problem is that the Nautilus extension requires the python-nautilus package to be installed to be operational. So I have four options: Make the python-nautilus package a dependency. This option will ensure that anyone who installs my package will be able to use the Nautilus extension. However, this option will not be attractive to XFCE or KDE users - a ton of python-nautilus's dependencies will be installed on their machines and take up a lot of space - even if they never use Nautilus. Put the python-nautilus package in the suggests: or recommends: field. This option provides the end-user with a way to avoid installing the python-nautilus package (by providing the --no-install-suggests or --no-install-recommends argument to apt-get). However, this won't work when the user installs the package in the Software Center. (I always get mixed up as to which of those two fields is installed by default.) Prompt the user when the application is installed or first launched. This option is more complicated than the others but offers the best compromise between making it easy for the user to install python-nautilus (without going into a technical explanation) and not installing it when the user doesn't need it (or want it). I guess the best way to implement this is a simple prompt that invokes apt-get if the user would like the package installed. Don't install the package at all. This option ensures that nobody has python-nautilus installed on their machine unless they want it. However, this also means that my Nautilus extension will simply not run on the end-user's machine unless they manually install the package. Which of these options seems the best choice? Have I missed any pros and cons for each of the options?

    Read the article

  • HTML5 game programming style

    - by fnx
    I am currently trying to learn JavaScript in the form of HTML5 games. Stuff that I've done so far isn't too fancy since I'm still a beginner. My biggest concern so far has been that I don't really know what the best way to code is, since I don't know the pros and cons of different methods, nor have I found any good explanations about them. So far I've been using the worst (and probably easiest) method of all (I think) since I'm just starting out, for example like this:
    var canvas = document.getElementById("canvas");
    var ctx = canvas.getContext("2d");
    var width = 640;
    var height = 480;
    var player = new Player("pic.png", 100, 100, ...);
    also some other global vars...
    function Player(imgSrc, x, y, ...) {
        this.sprite = new Image();
        this.sprite.src = imgSrc;
        this.x = x;
        this.y = y;
        ...
    }
    Player.prototype.update = function() { // blah blah... }
    Player.prototype.draw = function() { // yada yada... }
    function GameLoop() {
        player.update();
        player.draw();
        setTimeout(GameLoop, 1000/60);
    }
    However, I've seen a few examples on the internet that look interesting, but I don't know how to properly code in these styles, nor do I know if there are names for them. These might not be the best examples but hopefully you'll get the point:
    1:
    Game = {
        variables: { width: 640, height: 480, stuff: value },
        init: function(args) { // some stuff here },
        update: function(args) { // some stuff here },
        draw: function(args) { // some stuff here },
    };
    // from http://codeincomplete.com/posts/2011/5/14/javascript_pong/
    2:
    function Game() {
        this.Initialize = function () { }
        this.LoadContent = function () {
            this.GameLoop = setInterval(this.RunGameLoop, this.DrawInterval);
        }
        this.RunGameLoop = function (game) {
            this.Update();
            this.Draw();
        }
        this.Update = function () { // update }
        this.Draw = function () { // draw game frame }
    }
    // from http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/
    3:
    var engine = {};
    engine.canvas = document.getElementById('canvas');
    engine.ctx = engine.canvas.getContext('2d');
    engine.map = {};
    engine.map.draw = function() { // draw map }
    engine.player = {};
    engine.player.draw = function() { // draw player }
    // from http://that-guy.net/articles/
    So I guess my questions are: Which is the most CPU efficient, and is there any difference between these styles at runtime? Which one allows for easy expandability? Which one is the safest, or at least harder to hack? Are there any good websites where stuff like this is explained? Or... does it all come down to just personal preference? :)

    Read the article

  • Is return-type-(only)-polymorphism in Haskell a good thing?

    - by dainichi
    One thing that I've never quite come to terms with in Haskell is how you can have polymorphic constants and functions whose return type cannot be determined by their input type, like class Foo a where foo::Int -> a Some of the reasons that I do not like this: Referential transparency: "In Haskell, given the same input, a function will always return the same output", but is that really true? read "3" returns 3 when used in an Int context, but throws an error when used in a, say, (Int,Int) context. Yes, you can argue that read is also taking a type parameter, but the implicitness of the type parameter makes it lose some of its beauty in my opinion. Monomorphism restriction: One of the most annoying things about Haskell. Correct me if I'm wrong, but the whole reason for the MR is that computation that looks shared might not be, because the type parameter is implicit. Type defaulting: Again one of the most annoying things about Haskell. Happens e.g. if you pass the result of functions polymorphic in their output to functions polymorphic in their input. Again, correct me if I'm wrong, but this would not be necessary without functions whose return type cannot be determined by their input type (and polymorphic constants). So my question is (running the risk of being stamped as a "discussion question"): Would it be possible to create a Haskell-like language where the type checker disallows these kinds of definitions? If so, what would be the benefits/disadvantages of that restriction? I can see some immediate problems: If, say, 2 only had the type Integer, 2/3 wouldn't type check anymore with the current definition of /. But in this case, I think type classes with functional dependencies could come to the rescue (yes, I know that this is an extension). Furthermore, I think it is a lot more intuitive to have functions that can take different input types, than to have functions that are restricted in their input types, but we just pass polymorphic values to them. The typing of values like [] and Nothing seems to me like a tougher nut to crack. I haven't thought of a good way to handle them. I doubt I am the first person to have had thoughts like these. Does anybody have links to good discussions about this Haskell design decision and the pros/cons of it?

    Read the article

  • High-Powered Sites for low Cost

    - by HighAltitudeCoder
    Ahh, I am experiencing the intimidation of my very first post - visible by the whole world. Ok, here goes.   This first post is nothing exceptional.  It is simply a recommendation based (fittingly, I suppose) upon the job search you may be gearing up for.  I find myself in this very situation right now.  And, I will take my own recommendation after posting this entry. Job-Seekers: To the left you will notice two links under "Recommended Learning".  I have found these links to be invaluable when it comes to re-tooling, re-familiarizing, or otherwise resharping my skills when looking for that next job. Often, you will find job-postings with the text, usually posted after a laborious list of qualifications indicating the company's desire to hire candidates who know what they are doing: "...Looking for a candidate who can hit the ground running...".  The interesting thing about this post to me is I've encountered many individuals who, after speaking and working with them for some time, I've realized are perfectly capable of hitting the ground running - and FAST.  But what if they speed off in the wrong direction? The next time you spearhead a major task in your job, ask yourself: Am I headed in the wrong direction?  There are many ways to do this.  In fact, I've found in this new field there are more tempting ways to steer your project in the wrong direction than there are good ones.  I don't want to suggest that every one of my posts will fall into the "right direction" category, however I do think a healthy dose of introspection of the pros and cons will always be beneficial before you set off. That said, allow me to expound on the previously mentioned links. These web sites are invaluable.  They demonstrate the capabilities of existing as well as new and upcoming tools available in several IDE's.  I've viewed many tutorials in LearnVisualStudio.NET, and only one or two so far in TrainingSpot, however I've been delighted in their simplicity and straightforward approach to proper usage of the particular tool or concept being discussed.  They have not (so far in my experience) demonstrated ways in which to use the tools that become cumbersome, impractical, or error-prone. Each website has step-by-step videos that can be paused, replayed, and most importantly, they are done in real time.  As the author is typing, the viewer gets to experience the coding experience from a first-person perspective, including syntax errors, unexpected behaviors, IDE setup idiosyncracies, everything.  A subtle value I've gained from these videos is that a certain degree of confusion and introspection is normal when working with new tools and exploring new paths.  They (as well as your own experience) are not to be feared, but enjoyed.  I highly recommend them. Good work, guys!

    Read the article

  • The type of programmer I want to be [closed]

    - by Aventinus_
    I'm an undergraduate Software Engineering student, although I've decided that pure programming is what I want to do for the rest of my life. The thing is that programming is a vast field and although most of its aspects are extremely interesting, sooner or later I'll have to choose one (?) to focus on. I have several ideas for small projects I'd like to develop this summer, bearing in mind that this will gain me some experience and, in the best scenario, some cash. But the most important reason I'd like to develop something close to "professional" is to give myself direction on what I want to do as a programmer. One path is that of the Web Programmer. I enjoy PHP and MySQL, as well as HTML and CSS, although I don't really like ASP.NET. I can see myself writing web apps, using the above technologies, as well as XML and JavaScript. I also have a neat idea for a Facebook app. The other path is that of the Desktop Programmer. This is a little more complicated because I really, really enjoy high level languages such as Java and Python but not the low level ones, such as C. I have used both Linux and Windows for the last 6 years and I like their latest DEs (meaning Gnome Shell and Metro). I can see myself writing desktop applications for both OSs as long as it means high level programming. Ideally I'd like to be able to help with the development of GNOME. The last path that interests me is the path of the Smartphone Programmer. I have created some sample applications on Android and, due to Java, I found it quite an interesting experience. I can also see myself as an independent smartphone developer. These 3 paths seem equally interesting at the moment due to the shallowness of my experience, I guess. I know that I should spend time with all of them and then choose the right one for me, but I'd like to know what the pros and cons are in terms of learning curve, fun, job finding and of course financial rewards with each of these paths. I have a fair or basic understanding of the languages/technologies I described earlier and this question will help me choose where to focus, at least for now.

    Read the article

  • How do you feel about being asked to code during an interview?

    - by Mystere Man
    I have seen a lot of comments about good interview questions and puzzles that potential developers are required to solve during the interview process. I have personally had several interviews in which the interviewer has asked me to write some piece of code or solve a problem during the interview, and I have always performed very poorly in these "tests". The reason is simple: as a developer who spends my days talking to computers, I find I have to prepare myself and "switch gears" to be in "interview mode". I prepare myself to make a good impression. When I'm programming, I'm very focused and am totally different from when I'm being "interpersonal". I just can't get into "the zone" when I'm also having to be a charming and witty potential employee. I feel that by asking a developer to prove his skills during an interview, all you're doing is finding out if they can code under pressure, and at the drop of a hat. It has almost no ability to determine how you would perform in a "real life" development situation. Maybe, if you're looking for someone that can code and chat at the same time, I can see how that would be beneficial. But I think you overlook potential candidates that simply do not perform well in such an artificial environment. While I appreciate that a potential employer wants to see what I can do, I don't think an interview is the place for such a test. I mean, suppose a job for an over-the-road trucker required that you drive while being interviewed. How does that really end well? So I'm curious as to what others think about such situations. Have you failed interviews because you were not in the right frame of mind? Have you failed to make a good interpersonal impression because you were too distracted trying to solve the problem? If you're a hiring manager, or someone that gives interviews, do you even think about such things? Is it really important that someone perform well in an interview? EDIT: To clarify, I'm not against testing applicants. My concern is about testing during an interview. See also: What are the pros and cons for the employer of code questions during an interview?, which looks at this from the interviewer's point of view.

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format). My question is, is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP to me feels like very old technology. Is performance really an issue anymore? The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file/database table containing config information existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes? In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and am reluctant to base my architecture on it. However, I am unsure if the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months min, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
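
    For what it's worth, the dynamic-SQL alternative can be sketched in very little Java. The snippet below is only an illustration of the naming-convention idea, not a recommendation: the FACT_/DIM_ names, the <dimension>_KEY join convention and the SUM aggregation are assumptions made up for the example.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Sketch of the dynamic-SQL idea: build a star-schema query from
        // convention-named tables. All table/column names are hypothetical.
        public class AdHocQueryBuilder {

            // dims maps a dimension table to the column the user groups by,
            // e.g. DIM_LOCATION -> STORE_NAME. The fact table is assumed to
            // carry a <dimension>_KEY column for each dimension it references.
            // Assumes at least one dimension was chosen.
            public static String build(String factTable, Map<String, String> dims, String metricColumn) {
                StringBuilder select = new StringBuilder("SELECT ");
                StringBuilder joins = new StringBuilder(" FROM " + factTable + " f");
                StringBuilder groupBy = new StringBuilder(" GROUP BY ");
                int i = 0;
                for (Map.Entry<String, String> dim : dims.entrySet()) {
                    String alias = "d" + i++;
                    String keyColumn = dim.getKey() + "_KEY";   // join-key naming convention (assumed)
                    select.append(alias).append('.').append(dim.getValue()).append(", ");
                    joins.append(" JOIN ").append(dim.getKey()).append(' ').append(alias)
                         .append(" ON f.").append(keyColumn)
                         .append(" = ").append(alias).append('.').append(keyColumn);
                    groupBy.append(alias).append('.').append(dim.getValue()).append(", ");
                }
                select.append("SUM(f.").append(metricColumn).append(") AS metric");
                groupBy.setLength(groupBy.length() - 2);        // drop trailing comma
                return select.append(joins).append(groupBy).toString();
            }

            public static void main(String[] args) {
                Map<String, String> dims = new LinkedHashMap<>();
                dims.put("DIM_LOCATION", "STORE_NAME");
                dims.put("DIM_DATE", "WEEK");
                System.out.println(build("FACT_SALES", dims, "PRODUCT_SALES"));
            }
        }

    The same dimension/metric metadata could just as well come from the simple config table mentioned above instead of hard-coded names, and any user-chosen name would have to be validated against that metadata before being spliced into SQL.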

    Read the article

  • Development Approach: User Interface In or Domain Model Out?

    - by Berin Loritsch
    While I've never delivered anything using Smalltalk, my brief time playing with it has definitely left its mark. The only way to describe the experience is MVC the way it was meant to be. Essentially, all the heavy lifting for your application is done in the business objects (or domain model if you are so inclined). The standard controls are bound to the business objects in some way. For example, a text box is mapped to an object's field (the field itself is an object so it's easy to do). A button would be mapped to a method. This is all done with a very simple and natural API. We don't have to think about binding objects, etc. It just works. Yet, in many newer languages and APIs you are forced to think from the outside in. First with C++ and MFC, and now with C# and WPF, Microsoft has gotten its developer world hooked on GUI builders where you build your application by implementing event handlers. Java Swing development isn't so different, only you are writing the code to instantiate the controls on the form yourself. For some projects, there may never even be a domain model--just event handlers. I've been in and around this model for most of my career. Each way forces you to think differently. With the Smalltalk approach, your domain is smart while your GUI is dumb. With the default VisualStudio approach, your GUI is smart while your domain model (if it exists) is rather anemic. Many developers that I work with see value in the Smalltalk approach, and try to shoehorn that approach into the VisualStudio environment. WPF has some dynamic binding features that make it possible, but there are limitations. Inevitably some code that belongs in the domain model ends up in the GUI classes. So, which way do you design/develop your code? Why? GUI first. User interaction is paramount. Domain first. I need to make sure the system is correct before we put a UI on it. There are pros and cons to either approach. Domain model fits in there with crystal cathedrals and pie in the sky. GUI fits in there with quick and dirty (sometimes really dirty). And for an added bonus: How do you make sure the code is maintainable?
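
    As a rough sketch of the "smart domain, dumb GUI" idea in plain Java Swing (since Swing is mentioned above): the domain object below announces its own changes and the view merely listens. The Position class, its field and the wiring are invented for illustration; this is not how Smalltalk or WPF implement binding, just the shape of the idea.

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        // "Smart domain": the business object owns its state and announces changes.
        class Position {
            private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
            private int quantity;

            void fill(int lots) {                      // business logic lives here
                int old = quantity;
                quantity += lots;
                changes.firePropertyChange("quantity", old, quantity);
            }

            void onChange(PropertyChangeListener listener) {
                changes.addPropertyChangeListener(listener);
            }
        }

        // "Dumb GUI": the control only renders what the domain reports.
        public class BindingDemo {
            public static void main(String[] args) {
                Position position = new Position();
                SwingUtilities.invokeLater(() -> {
                    JLabel label = new JLabel("0");
                    position.onChange(evt -> label.setText(String.valueOf(evt.getNewValue())));

                    JFrame frame = new JFrame("Quantity");
                    frame.add(label);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.pack();
                    frame.setVisible(true);

                    position.fill(100);                // domain change updates the label
                });
            }
        }

    The point of the split is that Position knows nothing about Swing, so the business rule in fill() can be tested or reused without any GUI at all.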

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is quite different from other 'public api' questions like this one: Should a website use its own public API?; second, sorry for my English. You can find the question summarized at the bottom of this question. What I want to achieve is a big website with a public API, so that anyone who likes programming (like me) and likes my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be used by the public API. Because of this, I was thinking about making the whole website AJAX driven. There would be parts of the API which would be limited only to my website (domain), like login and registering. There would be only an INTERFACE on the client side, which would use the public and private API to make this interface work. The website would be ONLY CLIENT SIDE, well, I mean, the website would only use AJAX to use the API. How do I imagine this? The website would be like a mobile application: the application only sends a request to a web server, which returns JSON; the application parses it and uses it to advance in the application (e.g. login). My thoughts: Pros: The whole website is built up with JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth (I hope so). Anyone can use the data of my website to make their own cool things. (Is this a con or pro? O_O) The public API is always in use, so I can see if there are any errors. Cons: Without JavaScript the website is unusable. The bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by limiting it with some PHP code and logging. Probably much more work. So the question in a few words is: Should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (e.g. Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)

    Read the article

  • Reasons NOT to use JSF [closed]

    - by Vain Fellowman
    I am new to StackExchange, but I figured you would be able to help me. We're creating a new Java Enterprise application, replacing a legacy JSP solution. Due to many, many changes, the UI and parts of the business logic will be completely rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and have some really serious concerns about using it. First of all, it creates the worst, most cluttered, invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web development. Furthermore, it throws together what should never be so tightly coupled: layout, design, logic and communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hot-keys, drag-and-drop widgets) or whatever. Secondly, it is way too complicated. Its complexity is outstanding. If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I have? None, if you think about it. Hundreds of components? I see tens of thousands of HTML/CSS snippets, tens of thousands of JavaScript snippets and thousands of jQuery plug-ins in addition. It solves a great many problems - problems we wouldn't have if we didn't use JSF. Or the front-controller pattern at all. And lastly, I think we will have to start over in, say, 2 years. I don't see how I can implement all of our first GUI mock-up (besides, we have no JSF expert on our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck. Due to everything above, the service tier ends up under the control of JSF. And we will have to start over. My suggestion would be to implement a REST API using JAX-RS, then create an HTML5/JavaScript client with client-side MVC (or some flavor of MVC...). By the way, we will need the REST API anyway, as we are developing a partial Android front-end, too. I doubt that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are the pros/cons? How can I emphasize my point to not use JSF? What are the strong points of JSF over my suggestion?
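
    As a rough illustration of the JAX-RS route suggested above, a resource can stay this small. This assumes a JAX-RS implementation (Jersey, RESTEasy, or the one bundled with the application server) is on the classpath; the resource, paths and payloads are invented for the example.

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        // Minimal JAX-RS resource: the HTML5/JavaScript client (or the Android app)
        // fetches JSON from here and renders it with client-side MVC.
        @Path("/customers")
        public class CustomerResource {

            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public String listCustomers() {
                // A real implementation would delegate to the service tier and let
                // a JSON binder (e.g. Jackson) serialize domain objects.
                return "[{\"id\": 1, \"name\": \"ACME\"}]";
            }

            @GET
            @Path("/{id}")
            @Produces(MediaType.APPLICATION_JSON)
            public String getCustomer(@PathParam("id") long id) {
                return "{\"id\": " + id + ", \"name\": \"ACME\"}";
            }
        }

    Registered through a javax.ws.rs.core.Application subclass, this gives the browser client and the Android front-end the same plain JSON endpoints to consume, with no JSF in between.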

    Read the article

  • Can't Copy to Clipboard from Vim

    - by maksim
    I'm running Vim 7.3 under Linux Mint 13 (using MATE) and I'm not able to save text to the system clipboard. I run Vim in the terminal and copy text from the terminal with CTRL+INSERT. When I select text in Vim (either with the mouse or in visual mode), CTRL+INSERT doesn't copy any text. In addition, when I right-click, Copy is grayed out. Further, I can't write to the system buffer by yanking to the corresponding register using Vim commands. However, I'm able to paste while in insert mode (using SHIFT+INSERT or right-click paste). I'm also able to copy text directly from the terminal using the same technique, just not text from Vim. Here is my current ~/.vimrc. The relevant part is most likely set clipboard=autoselect,unnamed,exclude:cons\|linux. If I put finish at the top of my ~/.vimrc, I have the same issue, so I don't think that line is the problem, but I've tried set clipboard=unnamed and had the same behavior. Could there be another config file affecting Vim's behavior? How can I change my ~/.vimrc to allow me to copy text from Vim?

    Read the article

  • Problems during an update of cPanel / WHM

    - by haron
    I ordered a Master WHM account with the CentOS / cPanel combination (whm-cpanel.eu.pn). The installation was fresh; an update of the basic services was necessary (it had: WHM 11.15.0, cPanel 11.17.0, WHM X v3.1.0, Apache 1.3.37, PHP 4.4.7, MySQL 4.1.22). 1 / I started to update cPanel / WHM via the command: /scripts/upcp. Everything went well until the middle of the installation, when the server stopped responding (neither ping nor ssh). The installation appears to have continued on its own to the end, and after some time everything was back to normal (I do not know if there was a reboot) and my interface was updated (cPanel 11.24.4-R36167 - WHM 11.24.2 - X 3.9). 2 / Then I updated MySQL via the tweak interface in WHM and then the command: /scripts/mysqlup. Here everything went fine, no problem. 3 / Finally, I wanted to upgrade to Apache 2.2 / PHP 5 and I used this command: /scripts/easyapache. After selecting all the packages and modules the installation started, but the same thing happened as in point 1: the server stopped responding, and this time the installation did not go through. Apache 2.2 did go through (after the second try) but PHP has remained at 4. I tried the same operation several times without success. I do not think this is a memory problem; a free -m shortly before losing the connection showed nothing alarming. On the other hand, CPU time seemed to spike. I reinstalled the machine and tried again: same problem! Whether via the WHM interface or by shell, the installation stops short; for 15 minutes the machine does not respond and then everything returns to normal, but PHP is not updated. Is there a known bug in this version of cPanel / WHM? Has anyone run into the same problem? If I compile Apache / PHP manually, without using the easyapache script, might I encounter problems with cPanel later? Thank you!

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work: The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. The platform is Linux. Performance is secondary. So the two setups I am considering are: a RAID6 array plus regular LVM and ext4, or a RAID6 array plus ZFS (without redundancy). It is this second option that I don't see discussed at all. Why ZFS+RAID6? It's mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros and cons that I see for each option: ZFS has snapshots without preallocated space. LVM requires preallocation (might no longer be true). ZFS has checksumming (very interested in this) and compression (nice bonus). LVM has online filesystem growth (ZFS can do it offline with export/mdadm --grow/import). LVM has encryption (ZFS-on-Linux does not). This is the only major con of this combo I see. I guess I could go RAID6+LVM+ZFS... seems too heavy, or not? So, to close with a proper question: 1) Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this? 2) Are there possibilities for checksumming and compression that would make ZFS unnecessary (maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.

    Read the article

  • Macbook Pro 2010 Ethernet Jumbo Frame (9k MTU) Support?

    - by Troggy
    Has anyone been able to use jumbo frame support on their 2010 Macbook Pros? This is kind of negative news here, but I am seeing many reports that this is not available anymore due to Apple's choice of network card in the new machines. I cannot set my MTU over 1500 on my new 2010 MBP i7, but my old early 2008 MBP (Core2) has the 9000 MTU setting available for use. Everything I have is set up to use jumbo frames and I thought Apple kept that feature in their "pro" lineup. It sounds like the Mac Pro still has it. Did they decide to use a chipset that doesn't support it? I am trying to pinpoint some solid chipset numbers and the feature support. Maybe they just need to update the drivers? Is there some more official information about this feature? This might seem minor, but it is really frustrating if Apple removed this feature from their pro laptop line. From what I have read so far, it sounds like I am not alone in my frustrations with this. http://discussions.info.apple.com/message.jspa?messageID=12258067 http://discussions.apple.com/thread.jspa?messageID=12130158 Anyone have any experience or further knowledge about this issue... beyond typing my question into Google and giving the top 5 results as answers? :)

    Read the article

  • Hosting a javascript api file for third party sites the way sharethis, uservoice, analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites with a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like analytics, uservoice, sharethis, getclicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects: What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances. Right now we can afford just one dedicated server. So I have minified my file, enabled gzip and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future? Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50kb), is there an affordable CDN we could consider from the beginning? Any other words of advice? Anything I shouldn't overlook at this stage which I would regret later (both in terms of server and JavaScript/AJAX limitations)? Thanks in advance.

    Read the article
