Search Results

Search found 31762 results on 1271 pages for 'js future software'.

Page 59/1271

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async, event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to calling the WCF services directly.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service gets hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated-processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF call and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.
    If the proxy does not have a local cached file for the service response, it calls the service, writes the WCF response to the local cache file, and writes it to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

    Key:   /WcfSampleService/PersonService.svc/rest/fetch/3        Value: "28cd4796-76b8-451b-adfd-75cb50a50fa6"
    Key:   "28cd4796-76b8-451b-adfd-75cb50a50fa6"                  Value: /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service's). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1,000 requests with a concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1,853 milliseconds, with 90% of responses served within 3,260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20 MB of memory; IIS maxed at 60% CPU and 100 MB of memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
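
    To make the read path concrete, here is a minimal Node.js sketch of the proxy logic described above. It is not the GitHub sample itself: the backing service host/port, the npm 'memcached' client, and the single .body cache file are assumptions for illustration (the real sample also writes a companion .header file so the proxy can replay the original response headers).

        // Sketch of the accelerator's read path, not the actual NodeWcfAccelerator code.
        // Assumptions: memcached on localhost:11211 shared with the WCF side, the
        // backing WCF service on localhost:8080, cached bodies in ./caches/<etag>.body.
        var http = require('http');
        var fs = require('fs');
        var Memcached = require('memcached');

        var tagCache = new Memcached('127.0.0.1:11211');

        http.createServer(function (req, res) {
          // The relative URL path is the lookup key shared between .NET and Node.
          tagCache.get(req.url, function (err, currentTag) {
            if (err || !currentTag) { return passThrough(req, res); }

            // Client already has the current version: answer 304 with an empty body.
            if (req.headers['if-none-match'] === currentTag) {
              res.writeHead(304, { 'ETag': currentTag });
              return res.end();
            }

            // Client is stale: try the local cache file named after the current ETag.
            var file = './caches/' + currentTag.replace(/"/g, '') + '.body';
            fs.readFile(file, function (fileErr, body) {
              if (fileErr) { return passThrough(req, res, currentTag); }
              res.writeHead(200, { 'ETag': currentTag });
              res.end(body);
            });
          });
        }).listen(8010);

        // Cache miss at both levels: call the underlying service, cache the body,
        // and relay the service response to the client.
        function passThrough(req, res, currentTag) {
          http.get({ host: 'localhost', port: 8080, path: req.url }, function (svcRes) {
            var chunks = [];
            svcRes.on('data', function (c) { chunks.push(c); });
            svcRes.on('end', function () {
              var body = Buffer.concat(chunks);
              var tag = svcRes.headers.etag || currentTag;
              if (tag) {
                fs.writeFile('./caches/' + tag.replace(/"/g, '') + '.body', body, function () {});
              }
              res.writeHead(svcRes.statusCode, svcRes.headers);
              res.end(body);
            });
          });
        }

    Point clients at port 8010 and whichever layer answers first - the client's own ETag, the proxy's file cache, or the service itself - handles the request, which is exactly why the WCF service is only hit once per data change.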

    Read the article

  • 7 of the Best Free Linux Medical Imaging Software

    LinuxLinks: "Now, let's explore the 7 imaging software at hand. For each title we have compiled its own portal page, a full description with an in-depth analysis of its features, a screenshot of the software in action, together with links to relevant resources and reviews."

    Read the article

  • MBO in software development

    - by Euphoric
    I just found out our mid-sized software development company is going to implement MBO. As a big fan of Agile, I see this as counter-intuitive, because I believe it is impossible to create an objective that is measurable, especially where creativity, experience, knowledge and professionalism are important traits and expanding those is the goal of many. So my question is: what is your opinion on using MBO in software development? And are there development companies using MBO, either successfully or unsuccessfully?

    Read the article

  • Search Engine Optimization Software Tools

    In the Internet marketing field there are several essential search engine optimization software tools available for purchase on the Internet today. Professional SEO software tools identify niche markets for digital products for sale and allow the user to analyze the marketplace and assess the competition.

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards-incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here’s a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception - in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate.

    There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It’s better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk.

    Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work using the application; loud errors are obvious.

    Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors.

    This applies to more than just exceptions causing threads to exit: any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way.

    This is all simple as long as the call stack only contains your code, but when your functions start to be called by third-party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.

    1. Windows Forms drag-drop events

    Usually if you throw an exception from a WinForms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior - the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop are different - throw an exception from one of these and it will just be silently swallowed with no explanation. By the way, it's not just drag and drop events; timer events do it too. You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user-friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.

    2. SSMS integration for SQL Tab Magic

    A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed.
    I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.

    Handling exceptions

    Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried three options: log them, alert the user, and automatically send them home.

    There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface, with a greater emphasis on fluent rather than XML configuration.

    Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix it. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars.

    This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so.

    We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces, including the values of all local variables, and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system.

    The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings of how our software works, and overall is a key piece in delivering higher-quality software. However, it is no substitute for having motivated, cunning testers in the building - and we're looking to hire more of those too.

    If you found this post interesting you should follow me on Twitter.
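
    The post's examples are .NET-specific; for comparison, here is a minimal sketch of the same "loud errors" principle in Node.js, using Node's real global hooks. The report function and log file name are illustrative only - the point is to report the failure and then terminate, rather than swallowing it and limping on with broken state.

        // Loud-errors sketch for Node.js: report the failure, then exit, instead of
        // silently continuing. Logging alone is not "un-ignorable", so we still exit.
        var fs = require('fs');

        function reportFatal(kind, err) {
          // Illustrative reporting: append to a local log and echo to stderr. A real
          // application might also POST the report to an error-collection endpoint.
          var entry = new Date().toISOString() + ' ' + kind + ': ' +
                      ((err && err.stack) || err) + '\n';
          try { fs.appendFileSync('fatal-errors.log', entry); } catch (ignored) {}
          console.error(entry);
        }

        process.on('uncaughtException', function (err) {
          reportFatal('uncaughtException', err);
          process.exit(1); // terminate loudly, as .NET 2.0 does for unhandled thread exceptions
        });

        process.on('unhandledRejection', function (reason) {
          reportFatal('unhandledRejection', reason);
          process.exit(1);
        });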

    Read the article

  • Software for Managing Subscriptions to Website Content?

    - by an00b
    Can you recommend a package that allows me to manage subscriptions to certain content on my website (not necessarily displayable) based on payment levels? Ideally, the software would allow logging in using both site-specific registration and PayPal/Facebook/Twitter/MyOpenId, etc. Preferably, it would also be open source and LAMP-based. One idea I have in mind is hacking shopping cart software like Zen-Cart, but this may be overkill if a lighter-weight, non-shopping package exists.

    Read the article

  • App Engine Hangout - chat with an App Engine Software Engineer in Test

    We'll be chatting with Robert Schuppenies, who is an App Engine Software Engineer in Test. He'll describe a bit about what he does, and talk about/demo some App Engine test frameworks, like the testbed module.

    Read the article

  • Software Developers

    A software developer is a person who analyzes the problem and gathers information about a particular program. Then, on the basis of that analysis, the programmer makes error-free software which ... [Author: Petter Martine - Computers and Internet - April 11, 2010]

    Read the article

  • Affiliate software to attract incoming customers

    - by Steve
    I am close to starting a new website for a small business which imports products from the USA to Australia. The wholesaler says he will allow my client to be the sole distributor for Australia & New Zealand. I'm not sure yet what CMS or shopping cart software to use, but it will need to include an affiliate system to allow advertisers to push customers our way. Do you have any suggestions for robust, flexible affiliate software? Thanks.

    Read the article

  • Using a domain name rather than my name in the Licensor field of a software EULA

    - by user17330
    I intend to sell a software solution. I have already registered a domain, but I don't have a registered company. Can I use my website/domain name (e.g. myproduct.com) in the licensor field of the EULA rather than using my name? I will renew my domain yearly - is there a problem with this? Do you know any software companies that work like this? I'm also unsure about the user's point of view - will they find it a bit odd? Please help me out.

    Read the article

  • 7 of the Best Free Linux Bible Software

    LinuxLinks: "Now, let's explore the 7 Bible software at hand. For each title we have compiled its own portal page, providing a screenshot of the software in action, a full description with an in-depth analysis of its features, together with links to relevant resources and reviews."

    Read the article

  • How do you track third-party software licenses?

    - by emddudley
    How do you track licenses for third-party libraries that you use in your software? How did you vet the licenses? Sometimes licenses change or libraries switch licenses--how do you stay up to date? At the moment I've got an Excel spreadsheet with worksheets for third-party software, licenses, and the projects we use them on (organized like a relational database). It seems to work OK, but I think it will go out-of-date pretty quickly.

    Read the article

  • Will software development be completely outsourced to Asia?

    - by gablin
    Lots of things which used to be done in the U.S. or Europe are nowadays done in Asia: car manufacturing, the making of clothes and shoes, almost all computer components, etc. And it looks like part of software development is heading the same way. In fact, it is already underway. Where I previously worked as a functional tester on a project with 150+ developers, almost a third of the positions were outsourced to India. Will this process go further and further until all software development is done in Asia?

    Read the article

  • Senior software developer

    - by Ahmed
    Hello, I'm not sure if this is the place for my question or not. I'm working in a software company as a senior software engineer, and my team leader is controlling everything in the development life cycle. I can't give my opinion on anything; I'm just assigned tasks without any discussion. I can't even apply any design patterns that I think would be better, or any UI guidelines. Is that OK for my current career position? What are the responsibilities of a senior engineer?

    Read the article

  • Alternatives to Marin Software for ppc management? [closed]

    - by Skyao
    Does anyone have a suggestion for a PPC management tool similar to Marin Software but much cheaper? Marin Software Enterprise charges a minimum of several thousand dollars per month. The functionality needed is: keyword creation and management, campaign management, automated bidding and ROI tools, reporting and analytics, and the ability to upload/download customized revenue data. Any suggestions would be appreciated. Thanks.

    Read the article

  • How to run software that is not offered through package managers and that requires ia32-libs

    - by Onno
    I'm trying to install the Arma 2 OA dedicated server on a VirtualBox VM so I can test my own missions in a sandbox environment in a way that lets me offload them to another computer in my network. (The other computer is running the VM, but it's a Windows machine, and I didn't want to hassle with its installation.) It needs at least 2, and preferably 4, GB of RAM, so I thought I would install the AMD64 version of Ubuntu 13.10 to get this going.

    'How do you run a 32-bit program on a 64-bit version of Ubuntu?' already explained how to install 32-bit software through apt-get and/or dpkg, but that doesn't apply in this case. The server is offered as a compressed download on the site of BI Studio, the developer of the Arma games. Its installation instructions are obviously slightly out of date with the current state of the art (probably because the state of the art has been updated quite recently). It states that I have to install ia32-libs, which has now apparently been deprecated. Now I have to find out how to get the right packages installed to make sure that it will run.

    My experience level is novice-to-intermediate when it comes to these issues. I've installed a lot of packages through apt-get; I've solved dependency issues in the past; I haven't installed much software without using package managers. I can handle myself with basic administrative work like editing conf files and such.

    I have just gone ahead and tried to install it without installing ia32-libs through apt-get, but installed gcc to get the libs after all. My reasoning being that gcc will include the files for backward-compatibility coding, and on Linux all libs are (as far as I can tell) installed at a system level in /libs. So far it seems to start up. (I can connect to the game server through my in-game network browser, so it's communicating.) I'm not sure if there's any dependency checking going on when running the game server program, so I'm left with a couple of questions: Does 13.10 catch any calls to ia32-libs libraries and translate the calls to the right code on amd64? If it runs, does that mean that all required libraries have been loaded correctly, or is there a chance of it crashing later on when a library that was needed is missing after all? Is it necessary to do a workaround such as installing gcc? How do I find out what libraries I might need to run this software (or any other piece of 32-bit software that isn't offered through a package manager)?

    Read the article

  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) was used to perform the calibration, as well as things like environmental conditions, etc.

    Making the assumption that the software used to produce the data on the certificate of calibration (and listed on it) must have gone through a "test/release" process and must be considered "released" software - does this also mean that the software used for adjustment must also be released? I believe that the method (software/environmental conditions/etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within the specifications.

    The real question I'm hoping to get answered: Is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...) The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually become released for use, but currently is not.

    Also, please note that there is a distinction between "adjustment" and "calibration". The definition from the BIPM International Vocabulary of Metrology, 2.39: Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication. Followed by NOTE 2 (emphasis in original text): Calibration should not be confused with adjustment of a measuring system, often mistakenly called "self-calibration", nor with verification of calibration.

    As a side note, I'm not sure why this got downvoted. It's about software and its use before and after release. I believe there is a best practice that can be applied, and this is (hopefully) not primarily opinion-based.

    Read the article

  • SEnuke Review - Is This Software Good Enough?

    If there is one piece of SEO software able to dominate the search engines, SEnuke is the answer. SEnuke plays an important role as a social bookmarking tool. SEnuke is the new SEO software available on the internet, developed by Joe Russell and Areeb Bajwa.

    Read the article

  • Oracle Exalogic Elastic Cloud Software 2.0

    - by Robert Baumgartner
    On Wednesday, 25 July 2012 at 19:00, the new version of the Oracle Exalogic Elastic Cloud Software 2.0, the engineered system for Oracle WebLogic Server, will be presented. Learn how Oracle Exalogic Elastic Cloud Software 2.0 can help your company: close business up to 10x faster, protect sensitive data with complete application isolation, rapidly respond to market needs by provisioning applications 6x faster, and maximize availability and productivity with 2x faster ... For details, see: Register now

    Read the article
