Search Results

Search found 14226 results on 570 pages for 'feature requests'.


  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or a brief meeting with a subject matter manager who wants some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Or is it more appropriate for a "program manager" (PGM) to rewrite all requirements before sharing them with programmers? The company is not technology-centric and has between 50 and 250 employees (fewer than 10 programmers in total). Our project management "software" consists of a "TODO.txt" checked into source control under "/doc/".

    Note: this has nothing to do with "sensitive data access", unless a particular subject matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war: the PGM would like to specify HOW, whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s).

    Basically: should specifications be transparent to programmers? Presumably a history of requirements exists somewhere. Shouldn't a programmer be able to see that history of requirements if and when they can tell something is hinky in the spec? This isn't a question about organizing requirements; it is a question about WHO should have full VISIBILITY of requirements. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.


  • Remember me or not?

    - by taeja87
    I was told to post this on Webmasters instead of Stack Overflow. Is it safe to have a "remember me" feature? Would it be reasonably safe (knowing nothing is 100% safe) to allow users to close their browser and come back still logged in? After reading different things about safety, I am not exactly sure which way to go. I learned about session fixation and implemented protection against it. From experience, some sites treat "remember me" as only remembering your username/email and still requiring you to re-enter your password; other sites let you come and go as much as you want without logging in again after the browser has closed. If it is safe, what is the current best way of implementing a "remember me" / "stay logged in" feature?

    http://stackoverflow.com/questions/3531377/best-practise-for-remember-me-feature
    http://stackoverflow.com/questions/5087969/what-is-the-code-for-stay-logged-in-or-remember-me-while-user-login-in-php
    http://bytes.com/topic/php/answers/881197-stay-logged-remember-me-php-sessions-cookies
    http://security.stackexchange.com/questions/41/good-session-practices

    Also: the site I am working on uses email & password login.
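    For reference, the approach most of the linked threads converge on is a split "remember me" token. A minimal PHP sketch, assuming a hypothetical remember_tokens table (selector, validator_hash, user_id, expires) and a PDO connection; an illustration of the pattern, not a drop-in implementation:

        <?php
        // Split-token "remember me" sketch. Assumes a hypothetical remember_tokens
        // table (selector, validator_hash, user_id, expires) and a PDO connection.
        function issueRememberToken(PDO $db, int $userId): void {
            $selector  = bin2hex(random_bytes(8));   // lookup key, stored as-is
            $validator = bin2hex(random_bytes(32));  // secret; only its hash is stored
            $db->prepare('INSERT INTO remember_tokens (selector, validator_hash, user_id, expires)
                          VALUES (?, ?, ?, ?)')
               ->execute([$selector, hash('sha256', $validator), $userId,
                          date('Y-m-d H:i:s', time() + 30 * 86400)]);
            // 30-day cookie with the Secure and HttpOnly flags set
            setcookie('remember', $selector . ':' . $validator,
                      time() + 30 * 86400, '/', '', true, true);
        }

        function checkRememberToken(PDO $db): ?int {
            if (empty($_COOKIE['remember']) || substr_count($_COOKIE['remember'], ':') !== 1) {
                return null;
            }
            [$selector, $validator] = explode(':', $_COOKIE['remember']);
            $stmt = $db->prepare('SELECT validator_hash, user_id FROM remember_tokens
                                  WHERE selector = ? AND expires > NOW()');
            $stmt->execute([$selector]);
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            if ($row && hash_equals($row['validator_hash'], hash('sha256', $validator))) {
                return (int)$row['user_id'];  // start a fresh session for this user
            }
            return null;
        }

    Storing only a hash of the validator means a leaked table does not let an attacker forge cookies, and the constant-time hash_equals comparison avoids timing leaks.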


  • Commenting on Commits

    The source code experience on CodePlex just got more social. Now, you can comment directly on a commit when viewing a project’s source code history. This feature acts as a nice complement to the existing ability to comment on a pull request's commits. To get started, navigate to the commit you are interested in from the source tab. As you hover over the line you would like to comment on, an icon to add a comment will appear to the left. Click on the icon, and comment away! Once you comment on a line, the person who committed the change and anyone involved in the conversation will be notified of the comment by email. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always, you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.


  • Preferred way for dealing with customer-defined data in enterprise application

    - by Axarydax
    Let's say we have a small enterprise web (intranet) application for managing data for car dealers. It has screens for managing customers, inventory, orders, warranties and workshops, and it is installed at 10 customer sites for different car dealers. The first version of this application was created without any way to provide for customer-specific data. For example, if dealer A wanted to be able to attach a photo to a customer, dealer B wanted to add an e-mail contact to each workshop, and dealer C wanted to attach multiple PDF reports to a warranty, each such feature was simply added to the application, so all of the customers received everything in the next update.

    This inevitably leads to conflicts as the number of customers grows, because their usage patterns are unique. If, for instance, a specific dealer requested the ability to attach (for some reason) a color to an inventory item (and to search by this color) as a required field, the others really wouldn't need this feature, and definitely would not want it to be required. Or one dealer would like to manage e-mail contacts for their employees on a separate screen of the application.

    I imagine that a solution for this is a kind of plugin system, where we would have a core application that provides the standard features (customers, inventory, etc.) plus each customer's installed plugins. There would be different kinds of plugins: standalone screens (like e-mail contacts for employees) with their own logic, and entity plugins that extend or decorate inventory items (like the photo or color). Inventory (customer, order, ...) plugins would need an installation procedure, hooks for plugging into the item editor, the item display, item filtering for searching, a backup hook and so on. Is this the right way to solve this problem? A sketch of such a hook contract follows below.
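    As a sketch of the hook contract described above, each plugin could implement a small interface that the core calls at well-defined points. Every name here is illustrative, not an existing framework:

        <?php
        // Illustrative plugin contract for the architecture described above;
        // all names are hypothetical.
        interface QueryBuilder {
            public function where(string $clause, $value): void;
        }

        interface EntityPlugin {
            public function install(PDO $db): void;                 // create tables, etc.
            public function extendEditor(array $fields): array;     // add editor inputs
            public function decorateDisplay(array $item): array;    // enrich item views
            public function applySearchFilter(QueryBuilder $q, array $criteria): void;
        }

        // Dealer-specific plugin: a required, searchable "color" on inventory items.
        class InventoryColorPlugin implements EntityPlugin {
            public function install(PDO $db): void {
                $db->exec('ALTER TABLE inventory ADD COLUMN color VARCHAR(32)');
            }
            public function extendEditor(array $fields): array {
                $fields['color'] = ['type' => 'text', 'required' => true];
                return $fields;
            }
            public function decorateDisplay(array $item): array {
                return $item;   // color is a plain column; nothing extra to fetch
            }
            public function applySearchFilter(QueryBuilder $q, array $criteria): void {
                if (!empty($criteria['color'])) {
                    $q->where('color = ?', $criteria['color']);
                }
            }
        }

    The core would iterate over a dealer's installed plugins at each hook point, so dealer-specific behavior stays out of the core codebase.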


  • How can my team avoid frequent errors after refactoring?

    - by SDD64
    To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns), and remote work is common. Our product is made of two parts: a rather fat core, and customer projects of varying size built on top of it. Customer projects usually extend the core; overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects; the worst parts of the core are untested (as it should be...). The developers are split into two teams, working with one or two POs each sprint. Usually, one customer project is strictly associated with one of the teams and POs.

    Now our problem: rather frequently, we break each other's stuff. Someone from Team A extends or refactors core feature Y, causing unexpected errors in one of Team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, assumed feature Y was stable and did not test it before releasing, unaware of the changes. How do we get rid of these problems? What kind of 'announcement technique' can you recommend?


  • Incremental search in Visual Studio

    - by Jalpesh P. Vadgama
    Visual Studio is a great IDE, and there are still lots of features in it that developers don't know about. Incremental search is one of them. It is an amazing feature for finding code in the current document, available since Visual Studio 2010 and carried over into Visual Studio 2012. Incremental search lets developers search in a document without blocking the UI, matching as they type. Interesting, right? So let's open Visual Studio and see how it works. Once you open Visual Studio, press Ctrl + I and type something; it will find the string without blocking the Visual Studio UI. In my example, I typed 'Cons' and the whole word 'Console' was highlighted. You can also see the find box in the top right corner of Visual Studio 2012, and the status bar at the bottom shows that it is finding 'Cons' in the current document. Isn't that great? Now we can find things very easily, and we don't have to remember the whole word, like 'Console'. Hope you like it. Stay tuned for more updates.


  • Activity Stream

    [Do you tweet? Follow us on Twitter @matthawley and @codeplex] We deployed a new version of the CodePlex website yesterday. Redesigned home page with activity stream: in CodePlex we continuously look for ways to provide our users with the most recent and relevant information they are seeking. It is with this in mind that we released our latest feature, the home page activity stream. The activity stream showcases events taking place on projects you are a part of, as well as projects you are following. Many different events in the system cause activities to be created, including starting a discussion, creating a work item, etc. All the functionality that was available on the former home page, such as creating a new project or finding a project that needs help, is available on the right side of the new home page. The CodePlex team values your feedback. We are frequently monitoring Twitter, our Discussions, and the Issue Tracker. If you have not visited the Issue Tracker recently, please take a few minutes to suggest or vote on a feature you would like to see implemented.


  • Solutions for "Maintenance Mode"

    - by Ka Lyse
    Given a web application running across 10+ servers, what techniques have you put in place for altering the state of your website so that you can implement certain features? For instance, you might want to:

    - Restrict logins / disable certain features
    - Turn the site "read only"
    - Turn the site into a single "maintenance mode" page

    Doing any of the above is pretty trivial. You can throw a particular "flag" in an .ini file, or add a row/value to a site_options table in your database, and just read that value and do the appropriate thing. But these solutions have their problems. For instance, if you use a file, and you want to switch off a certain feature temporarily, then you need to update this file on all servers. So then you might look at running something like ZooKeeper, but you are probably overcomplicating things. So then you might decide to store these "feature" flags in a database, but then you are obviously adding unnecessary queries to each page request. So you think to yourself that you will throw memcached into the mix and just cache the query; then you retrieve all of your "features" from memcached, adding roughly 2 ms of latency to every page. To get around this, you decide to use a two-tier cache system, with an in-memory cache on each machine (like APC, Redis, etc.). This would work, but then it gets complicated, because you would have to set the key lifetime to perhaps 60 seconds, so that when you purge/invalidate the memcached object storing your "features" result, each machine's local cache picks up the new state promptly. What suggestions might you have? Keep in mind that optimization/efficiency is the priority here.
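    A minimal sketch of the two-tier flag cache described above, assuming the APCu and Memcached PHP extensions; the host, key and flag names are illustrative:

        <?php
        // Two-tier flag lookup: APCu in-process cache in front of Memcached, with
        // the database as the source of truth. Requires ext-apcu and ext-memcached.
        const LOCAL_TTL = 60;   // seconds a machine may lag behind a flag change

        function loadFlagsFromDatabase(): array {
            // hypothetical: SELECT name, value FROM site_options
            return ['maintenance' => false, 'read_only' => false, 'logins_enabled' => true];
        }

        function getFeatureFlags(Memcached $mc): array {
            $flags = apcu_fetch('feature_flags', $hit);  // tier 1: no network hop
            if ($hit) {
                return $flags;
            }
            $flags = $mc->get('feature_flags');          // tier 2: shared by all servers
            if ($flags === false) {
                $flags = loadFlagsFromDatabase();        // tier 3: source of truth
                $mc->set('feature_flags', $flags);
            }
            apcu_store('feature_flags', $flags, LOCAL_TTL);
            return $flags;
        }

        $mc = new Memcached();
        $mc->addServer('cache.internal', 11211);
        if (getFeatureFlags($mc)['maintenance']) {
            http_response_code(503);
            exit('Down for maintenance.');
        }

    Flipping a flag then means updating the database and deleting the memcached key; within LOCAL_TTL seconds every machine converges on the new state, which is exactly the 60-second compromise described above.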


  • New functionality in TFS Build Manager – Managing Triggers and Build Resources

    - by Jakob Ehn
    Yesterday we pushed out a new release (August 2012) of the Community TFS Build Extension, including a new version of the Community TFS Build Manager (1.0.4.6). The two big new features in the Build Manager in this release are:

    Set Triggers. It is now possible to select one or more build definitions and update the triggers for them in one simple operation. You’ll note that we have started collapsing the context menu a bit; the list of commands is getting long! When selecting the Trigger command, you’ll see a dialog where the options should be self-explanatory. The only thing missing here is the Scheduled trigger option; you’ll have to set that using Team Explorer for now.

    Manage Build Resources. The other feature is that it is now possible to view the build controllers and agents in your current collection and also perform some actions against them. The new functionality is available by selecting the Build Resources item in the drop-down menu. Selecting this, you’ll see a (sort of) hierarchical view of the build controllers and their agents. In this view you can quickly see all the resources and their status. You can also view the build directory of each build agent and the tags that are associated with them. On the action menu, you can enable and disable both agents and controllers (several at a time), and you can also select to remove them. By selecting Manage, you’ll be presented with the standard Manage Controller dialog from Visual Studio, where you can set the rest of the properties. Hopefully we’ll be able to implement most of the existing functionality so that we can remove that menu option. Our plan is to add more functionality to this view, such as adding new agents/controllers, restarting build service hosts, and maybe viewing diagnostic information such as disk space and error logs.

    Hope you’ll find the new functionality useful. Remember to log any bugs and feature requests on the CodePlex site. Happy building!


  • Git Branch Model for iOS projects with one developer

    - by glenwayguy
    I'm using git for an iOS project, and so far have the following branch model:

        feature_branch (usually multiple) -> development -> testing -> master

    Feature branches are short-lived, used just to add a feature or bug fix, then merged back into development and deleted. Development is fairly stable, but not ready for production. Testing is for when we have a stable version with enough features for a new update, which we ship to beta testers. Once testing is finished, it can be merged back into development or advanced into master. The problem, however, lies in the fact that we can't instantly deploy: on iOS, it can be several weeks between the time a build is released and when it actually hits users. I always want to have a version of the code that is currently on the market in my repo, but I also have to have a place to keep the current stable code to be sent for release. So:

    - where should I keep stable code,
    - where should I keep the code currently on the market, and
    - where should I keep the code that is in review with Apple and will (hopefully) be put on the market soon?

    Also, this is a one-developer team, so collaboration is not totally necessary, but it is preferred because there may be more members in the future.


  • High-level strategy for distinguishing a regular string from invalid JSON (i.e. JSON-like string detection)

    - by Jonline
    Disclaimer on absence of code: I have no code to post because I haven't started writing; I'm looking for theoretical guidance, as I doubt I'll have trouble coding it but am pretty befuddled about which approach(es) would yield the best results. I'm not seeking any code either, just direction.

    Dilemma: I'm toying with adding a "magic method"-style feature to a UI I'm building for a client, and it would require intelligently detecting whether or not a string was meant to be JSON, as against a simple string. I had considered these general ideas:

    - Look for a sort of arbitrarily-determined acceptable ratio of the frequency of JSON-like syntax (i.e. regex to find strings separated by colons; look for colons between curly braces, etc.) to the number of quote-encapsulated strings plus nulls, bools and ints/floats. But the smaller the data set, the more fickle this would get.
    - Look for key identifiers like opening and closing curly braces. I'm not sure there even are more easy identifiers, and this doesn't appeal anyway because it's so prescriptive about the kinds of mistakes it could find.
    - Try incrementally parsing chunks, such as those between curly braces, and seeing what proportion of these fractional statements turn out to be valid JSON. This seems like it would suffer less than the first idea from smaller datasets, but it would probably be much more processing-intensive, and very susceptible to a missing or inverted brace.

    Just curious whether the computational folks or algorithm pros out there have any approaches in mind that my semantics-oriented brain might have missed.

    PS: It occurs to me that natural language processing, about which I am totally ignorant, might be a cool approach; but if NLP is a good strategy here, it sort of doesn't matter, because I have zero experience with it and don't have time to learn and then implement it; this feature isn't worth that much to the client.
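    For what it's worth, in a language with a strict JSON parser, a "try to parse, then guess intent on failure" variant of the third idea can be quite compact. A hedged PHP sketch, where the "looks JSON-like" heuristic is deliberately rough:

        <?php
        // Try to parse first; only fall back to guessing intent when parsing fails.
        function classifyInput(string $s): string {
            json_decode($s);
            if (json_last_error() === JSON_ERROR_NONE) {
                return 'json';                                   // parses cleanly
            }
            $t = trim($s);
            $looksJsonLike = preg_match('/^[\[{"]/', $t)         // opens like JSON
                          || preg_match('/"[^"]*"\s*:/', $t);    // contains "key": pairs
            return $looksJsonLike ? 'broken-json' : 'plain-string';
        }

        var_dump(classifyInput('{"a": 1}'));    // string(4) "json"
        var_dump(classifyInput('{"a": 1'));     // string(11) "broken-json"
        var_dump(classifyInput('hello world')); // string(12) "plain-string"

    The parser does the hard work of validity; the heuristic only has to answer the softer question of whether a failed parse was probably intended as JSON.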


  • How should a team share/store game content during development?

    - by irwinb
    Other than Dropbox, what has been especially useful for storing and sharing game content like images during development (with a similar feature set to Dropbox: working offline, automatic syncing, and support for Windows/OS X)? We are looking into hosting our own SharePoint server, but it seems to be really focused on documents... Maybe Box.net would work?

    EDIT: For code, we are using Git. To be more precise, I was looking for an easy, automatic way for content produced by artists/audio engineers to be available to everyone. Features like approval of assets don't hurt either. Following the answer linked by Tetrad, Alienbrain looked pretty interesting but... is way out of our budget (it may be something to invest in in the future).

    What we ended up doing: we were going to go with Box.net, but downloading the sync apps for desktop use required us to wait to be contacted by them for some reason. We did not have much time to wait, so we ended up going with Dropbox Teams. Box.net has a nice feature set, but we never really felt held back without it. Thanks for the help :).


  • Using Subdomains for Newly Regional Company

    - by Taylord22
    The company I work for is expanding their business to new territories. I've got a lot of stabilization to do in the region/state where we're one of the most well-known companies of our kind. Currently, we have 3 distinct product lines distinguished by 3 separate URLs. This is affecting the user flow of our site, so we'd like to clean it up before launching our products into the various regions. The business has decided to grow into 5 new states (one consisting of one county only), none of which will feature all 3 products. Our home-base state is the only one that will have all 3 products this year.

    My initial thought was to use subdomains to separate the regions. That way we could use a canonical tag to stabilize the root domain (which would feature home-state content and support content for all regions) and remove us from potential duplicate-content penalization, since our product content will be nearly identical across the regions for the first year. I second-guessed myself by thinking that it was perhaps better to use a "[product].root/region" URL instead. And I'm currently stuck wondering whether it would be better to build out subdomains for both products and regions, using one modifier or the other as a funnel/branding page into the other. For instance, a user lands on "region.root.com" and sees exactly what products we offer in that region - basically, a tailored landing page - while the bulk of the product content actually lives under "product.root.com/region/page".

    My head is spinning. While searching for similar questions, I also bumped into a reference to another tag meant to be used in some cases similar to mine. I feel like there are a lot of risks involved in this subdomain strategy, but I also can't help seeing the benefits in the user flow.


  • Oracle ADF Mobile

    - by rituchhibber
    We are happy to announce that Oracle ADF Mobile is now available for our customers. Oracle ADF Mobile enables developers to build applications that install and run on both iOS and Android devices from one source code base. Development is done with JDeveloper and ADF and leverages Java and HTML5 technologies, while keeping the same visual and declarative approach ADF is known for. Please click here to read more about the Oracle ADF Mobile release and learn more on our OTN page.

    Feature highlights:

    - Java - Oracle brings a Java VM embedded with each application, so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even on iOS!)
    - JDBC - Since we give you Java, we also provide JDBC, along with a SQLite driver and engine that also supports encryption out of the box.
    - Multi-platform - Truly develop your application only once and deploy to multiple platforms. iOS and Android platforms are supported, for both phone and tablet.
    - Flexible - You can decide how to implement the UI: use an existing server-based UI framework like JSF, use your own favorite HTML5 framework like jQuery, or use the declarative HTML5 component set provided with the framework.
    - Device feature access - You can get access to device features from either Java or JavaScript to invoke features like the camera, GPS, email, SMS, contacts, etc.
    - Secure - ADF Mobile provides integrated security that works with your server back-end as well. Whether you’re using remote URLs, local HTML or AMX, you can secure any or all of your features with a single consistent login page. Since we also give you SQLite encryption, we are assured that your data is safe.
    - Rapid - Using the same development techniques that ADF developers are already used to, you can quickly create mobile applications without ever learning another language! ADF Mobile XML, or AMX for short, provides all the normal input and layout controls you expect, and we also add charts/maps/gauges to provide a very comprehensive set of UI controls. You can also mix and match any of the three UI approaches for ultimate flexibility!


  • Why does Ubuntu 12.10 Beta2 insist on committing changes to the partition table?

    - by Uten
    Why does Ubuntu 12.10 Beta2 insist on committing changes to the partition table even when no real changes have been made? This is a show-stopper for me, as I'm installing without a CD/DVD-ROM. This is how I go about it: I downloaded the ISO image and extracted vmlinuz and initrd.lz to the same folder where I keep the ISO image, then configured grub (0.9x) to boot /ubuntu/vmlinuz with the ISO image like this:

        title ubuntu live-cd
        kernel /ubuntu/vmlinuz boot=casper iso-scan/filename=/ubuntu/ubuntu-12.10-beta2-desktop-i386.iso ro quiet splash
        initrd /ubuntu/initrd.lz
        boot

    This works well and I get a running live CD session. The ISO image is mounted on /isomedia (or something similar). The spare HD space where I want to install Ubuntu is in the logical area (at the very end of the disk). I have tried using the space both empty and preformatted with ext4. After selecting the partition, selecting "use as ext4" and selecting a mount point (/), I get the message: "The installer needs to commit changes to partition tables, but cannot do so because partitions on the following mount points could not be unmounted: /isomedia" (or something similar).

    Is this a "feature" of the installer, to insist that everything is unmounted even if no changes are necessary (as far as I understand)? It's probably a safety feature, but is it needed? I have changed layouts with parted and gparted (at the end of the disk) for years without any failures. I understand that booting the ISO image like this is not the common way, but it is just such a beautiful way of doing it when you have a running system and want to play with another. Has anyone had any success installing Ubuntu (12.10 Beta2) like this?

    Best regards, Uten


  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid - or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective: I am asking for specific examples of languages or features, ideas for optimizing these features, important features I haven't considered, and any references to implementations that prove my theories right or wrong.

    Top of my list of hard-to-optimize features, and my theories (some of my theories are untested and based on thought experiments):

    1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition? Or is it just hard, anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

    2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know whether the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of callee size: i.e. it is easy to consider inlining simple property fetches (getters/setters), but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry the type info around, so every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved by x64's extra registers. This is probably my favorite feature of dynamic languages, though, as it simplifies so many things from the programmer's perspective (see the small illustration after this list).

    3) First-class continuations. There are multiple ways to implement them, and I have done so with both of the most common approaches: stack copying, and implementing the runtime with continuation-passing style, cactus stacks, copy-on-write stack frames and garbage collection. First-class continuations have resource-management issues: we must save everything in case the continuation is resumed, and I'm not aware of any language that supports leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also affect cache efficiency (stack locality changes more with the use of continuations and co-routines). The other issue is that they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast and the less-common cases should be correct.

    4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.

    My feeling is that many of the high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of them (features like caching, aliasing the top of stack to a register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out the way some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up and cache at runtime; nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
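    For readers who want point 2 made concrete, here is the problem in miniature, in PHP (itself a dynamically typed language): the static type of $x is unknowable, so a compiler cannot pin it to a single machine representation.

        <?php
        // The type of $x depends on a runtime value, so no single machine
        // representation (and no type-specialised code) can be assigned to it.
        function f(bool $flag): string {
            $x = 42;               // integer here...
            if ($flag) {
                $x = 'forty-two';  // ...string here
            }
            return $x . '!';       // every use must inspect the runtime type first
        }

        echo f(true)  . "\n";  // forty-two!
        echo f(false) . "\n";  // 42!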


  • Oracle's proposed Java SE 7 and 8 specifications adopted "by an overwhelming majority" by the JCP Executive Committee

    Oracle's proposed Java SE 7 and 8 specifications adopted "by an overwhelming majority" by the JCP Executive Committee. Update of 07/12/10. Oracle has just announced that its proposals for the specifications of the next two versions of the Java platform (its "Java Specification Requests", or JSRs) were adopted after a vote by the JCP Executive Committee. Oracle states that the vote, concerning JSRs 336 and 337, ended in an "overwhelming majority" in its favor, with no further details on the tally. The now-approved roadmap provides for a standardization of...


  • Simplify your Ajax code by using jQuery Global Ajax Handlers and ajaxSetup low-level interface

    - by hajan
    Creating web applications with a consistent layout and user interface is very important for your users. In several ASP.NET projects I’ve completed lately, I’ve used jQuery and jQuery Ajax a lot to achieve a rich user experience and seamless interaction between the client and the server. In almost all of them, I took advantage of the nice jQuery global Ajax handlers and jQuery Ajax functions. Let’s say you build a web application which mainly interacts using Ajax POSTs and GETs to accomplish various operations. As you may already know, you can easily perform Ajax operations using the jQuery low-level ajax method or jQuery $.get, $.post, etc. A simple GET example:

        $.get("/Home/GetData", function (d) { alert(d); });

    As you can see, this is the simplest possible way to make an Ajax call. What it does behind the scenes is construct a low-level Ajax call by specifying all the necessary information for the request, filling the required properties with default values such as the data type, content type, etc. If you want to have some more control over what is happening with your Ajax requests, you can easily take advantage of the global Ajax handlers. To register global Ajax handlers, the jQuery API provides a set of global Ajax methods, listed at http://api.jquery.com/category/ajax/global-ajax-event-handlers/:

    - ajaxComplete
    - ajaxError
    - ajaxSend
    - ajaxStart
    - ajaxStop
    - ajaxSuccess

    And the low-level Ajax interfaces, http://api.jquery.com/category/ajax/low-level-interface/:

    - ajax
    - ajaxPrefilter
    - ajaxSetup

    For global settings, I usually use ajaxSetup combined with the Ajax event handlers. $.ajaxSetup is very good for setting default values that you will use in all of your future Ajax requests, so that you won’t need to repeat the same properties every time unless you want to override the default settings. I mainly use the global ajaxSetup function similarly to the following:

        $.ajaxSetup({
            cache: false,
            error: function (x, e) {
                if (x.status == 550) alert("550 Error Message");
                else if (x.status == "403") alert("403. Not Authorized");
                else if (x.status == "500") alert("500. Internal Server Error");
                else alert("Error...");
            },
            success: function (x) {
                //do something global on success...
            }
        });

    Now you can make Ajax calls using the low-level $.ajax interface, and you don’t need to worry about specifying any of the properties we set in the $.ajaxSetup function. You can also create your own ways to handle the various situations your Ajax requests run into. Sometimes, some of your Ajax requests may take much longer than expected. In order to build a friendly UI that shows a progress bar or animated image while something is happening behind the scenes, you can combine the ajaxStart and ajaxStop methods. First of all, add one

        <div id="loading" style="display:none;">
            <img src="@Url.Content("~/Content/images/ajax-loader.gif")" alt="Ajax Loader" />
        </div>

    anywhere on your master layout / master page (you can download nice Ajax loading images from http://ajaxload.info/). Then, add the following two handlers:

        $(document).ajaxStart(function () {
            $("#loading").attr("style", "position:absolute; z-index: 1000; top: 0px; " +
                "left:0px; text-align: center; display:none; background-color: #ddd; " +
                "height: 100%; width: 100%; /* These three lines are for transparency " +
                "in all browsers. */ -ms-filter:\"progid:DXImageTransform.Microsoft.Alpha(Opacity=50)\";" +
                " filter: alpha(opacity=50); opacity:.5;");
            $("#loading img").attr("style", "position:relative; top:40%; z-index:5;");
            $("#loading").show();
        });

        $(document).ajaxStop(function () {
            $("#loading").removeAttr("style");
            $("#loading img").removeAttr("style");
            $("#loading").hide();
        });

    Note: while you could reorganize the style in a more reusable way, since these are global Ajax start/stop handlers, it is very possible that you won't use the same style in other places. With this in place, you will see that for any Ajax request in your web site or application, the loading image appears, providing a better user experience. What I’ve shown is several useful examples of how to simplify your Ajax code by using the global Ajax handlers and the low-level ajaxSetup function. Of course, you can do a lot more with the other methods as well. Hope this was helpful. Regards, Hajan


  • Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

    - by Sky Sanders
    Soapi.CS for .Net / Silverlight / Windows Phone 7 / Mono - as easy as breathing:

        var context = new ApiContext(apiKey).Initialize(false);

        Question thisPost = context.Official
            .StackApps
            .Questions.ById(386)
            .WithComments(true)
            .First();

        Console.WriteLine(thisPost.Title);

        thisPost
            .Owner
            .Questions
            .PageSize(5)
            .Sort(PostSort.Votes)
            .ToList()
            .ForEach(q => {
                Console.WriteLine("\t" + q.Score + "\t" + q.Title);
                q.Timeline.ToList().ForEach(t =>
                    Console.WriteLine("\t\t" + t.TimelineType + "\t" + t.Owner.DisplayName));
                Console.WriteLine();
            });
        // if you can think it, you can get it.

    Output:

        Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

        21  Soapi.CS : A fully relational fluent .NET Stack Exchange API client library
            Revision code poet Revision code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Answer code poet Revision code poet Revision code poet

        14  SOAPI-WATCH: A realtime service that notifies subscribers via twitter when the API changes in any way.
            Votes code poet Revision code poet Votes code poet Comment code poet Comment code poet Comment code poet Votes lfoust Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Revision code poet Comment lfoust Votes code poet Revision code poet Votes code poet Votes lfoust Votes code poet Revision code poet Comment Dave DeLong Revision code poet Revision code poet Votes code poet Comment lfoust Comment Dave DeLong Comment lfoust Comment lfoust Comment Dave DeLong Revision code poet

        11  SOAPI-EXPLORE: Self-updating single page JavaSript API test harness
            Votes code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Comment code poet Revision code poet Votes code poet Comment code poet Question code poet Votes code poet

        11  Soapi.JS V1.0: fluent JavaScript wrapper for the StackOverflow API
            Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Answer George Edison Votes code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Answer code poet Comment code poet Revision code poet Comment code poet Comment code poet Comment code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Comment code poet

        9   SOAPI-DIFF: Your app broke? Check SOAPI-DIFF to find out what changed in the API
            Votes code poet Revision code poet Comment Dennis Williamson Answer Dennis Williamson Votes code poet Votes Dennis Williamson Comment code poet Question code poet Votes code poet

    About

    A robust, fully relational, easy to use, strongly typed, end-to-end StackOverflow API client library. Out of the box, Soapi provides you with a robust client library that abstracts away most all of the messy details of consuming the API and lets you concentrate on implementing your ideas.
    A few features include:

    - A fully relational model of the API data set exposed via a fully 'dot navigable' IEnumerable (LINQ) implementation. Simply tell Soapi what you want and it will get it for you. E.g. "On my first question, from the author of the first comment, get the first page of comments by that person on any post": my.Questions.First().Comments.First().Owner.Comments.ToList(); (yes, this is a real expression that returns the data as expressed!)
    - Full coverage of the API, all routes and all parameters, with an intuitive syntax.
    - Strongly typed domain data objects for all API data structures.
    - Eager and lazy loading of 'stub' objects; eager/lazy loading may be disabled.
    - When finer-grained control of requests is desired, the core RouteMap objects may be leveraged to request data from any of the API paths, using all available parameters as documented on the help pages.
    - A rich asynchronous implementation.
    - A configurable request cache to reduce unnecessary network traffic and to simplify your usage logic. There is no need to go out of your way to be frugal. You may set a distinct cache duration for any particular route.
    - A configurable request throttle to ensure compliance with the API terms of usage and to simplify your code, in that you do not have to worry about and respond to 50X errors. The RequestCache and throttled queue are thread-safe, so you can make as many requests as you like, from as many threads as you like, as fast as you like, and not worry about abusing the API or having to write reams of management/compensation code.
    - A configurable retry threshold that will, by default, make up to 3 attempts to retrieve a request before failing. Every request made by Soapi is properly formed and directed, so most any HTTP error will be the result of a timeout or other network infrastructure; a retry buffer provides a level of fault tolerance that you can rely on.
    - An almost identical JavaScript library, Soapi.JS, and its full-figured big brother, Soapi.JS2, that will enable you to leverage your server cycles and bandwidth for only those tasks that require it, and offload things like status updates to the client's browser.

    License: GPL Version 2. Why is Soapi.CS GPL? Can I get an LGPL license for Soapi.CS? (hint: probably)

    Platforms: .NET 3.5, .NET 4.0, Silverlight 3, Silverlight 4, Windows Phone 7, Mono.

    Download: Source code lives @ http://soapics.codeplex.com. Binary releases are forthcoming. codeplex is acting up again; get the source and binaries @ http://bitbucket.org/bitpusher/soapi.cs/downloads. The source is C# 3.5 and includes projects and solutions for Visual Studio 2008, Visual Studio 2010 and MonoDevelop 2.4.

    Documentation: Full documentation is available at http://soapi.info/help/cs/index.aspx

    Sample code / usage examples (these will be added as answers to this question):

    - Full API coverage - all API routes are covered.
    - Full parameter parity - if the API exposes it, Soapi giftwraps it for you.
    - Building a simple app with Soapi.CS - a simple app that gathers all traces of a user in the whole stackiverse.
    - Fluent configuration - setting up a Soapi.ApiContext could not be easier.
    - Bulk data import - a tiny app that quickly loads a SQLite data file with all users in the stackiverse.
    - Paged results - Soapi.CS transparently handles multi-page operations.
    - Asynchronous requests - Soapi.CS provides a rich asynchronous model that is especially useful when writing API apps in Silverlight or Windows Phone 7.
    - Caching and throttling - how and why.

    Apps that use Soapi.CS:

    - Soapi.FindUser - .NET utility for locating a user anywhere in the stackiverse.
    - Soapi.Explore - the entire API at your command.
    - Soapi.LastSeen - list users by last access time.
    - Add your app/site here - I know you are out there ;-) If you are not comfortable editing this post, simply add a comment and I will add it.

    The CS/SL/WP7/Mono libraries all compile the same code, and with the exception of the environmental considerations of Silverlight, the code samples are valid for all the libraries. You may also find guidance in the test suites. More information on the SOAPI eco-system is available.

    Contact: This library is currently the effort of me, Sky Sanders (code poet), who can be reached at gmail - sky.sanders. Anyone who is interested in improving this library is welcome.

    Support Soapi: You can help support this project by voting for Soapi's Open Source Ad post. For more information about the origins of Soapi.CS and the rest of the Soapi eco-system, see "What is Soapi and why should I care?"


  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx

    Hot on the heels of my .NET caching course, I’ve had my first “fundamentals” course released on Pluralsight: Nginx and PHP Fundamentals. It’s a practical look at two of the biggest technologies on the web: Nginx, which is the fastest-growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites). The two technologies work well together; both are open-source and cross-platform, and both are lightweight and easy to get started with - you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I’ve used PHP as a second (sometimes third) language since 2005, when I was brought cold into an established codebase to help improve performance, and Nginx to host tier-2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world.

    In the course I start with a website in two parts - one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:

    - Hosting Static Content in Nginx - how to deploy and configure Nginx for a basic website;
    - PHP Part 1: Basic Web Forms - installing PHP and an IDE, and building a simple form with server-side validation;
    - PHP Part 2: Packages and Integration - using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
    - Hosting PHP in Nginx - configuring Nginx to host our PHP site.

    Along the way I run some performance stats with JMeter. The headlines are that Nginx running on Linux outperforms IIS on Windows for static content by 800 requests per second over 1000 concurrent requests, and that Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second under the same load. Of course, the headline stats don’t tell the whole story, and when you add opcode caching for PHP and the ASP.NET output cache, the results are very different. As web architecture moves away from heavy server-side processing towards single-page apps with client-side frameworks like AngularJS and Knockout, I think there’s an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.


  • SEO made easy with IIS URL Rewrite 2.0 SEO templates

    - by The Official Microsoft IIS Site
    A few weeks ago my team released version 2.0 of URL Rewrite for IIS. URL Rewrite is probably the most powerful rewrite engine for web applications. It gives you many features, including inbound rewriting (i.e. rewrite the URL, redirect to another URL, abort requests, use of maps, and more), and in version 2.0 it also includes outbound rewriting, so that you can rewrite URLs or any markup as the content is being sent back, even if it's generated using PHP, ASP.NET or any other technology. It also...(read more)


  • Challenges in multi-player Android Game Server with RESTful Nature

    - by Kush
    I'm working on an Android game based on contract bridge, as part of my college summer internship project. The game will be multi-player, such that 4 Android devices can play it, so there's no bot or CPU player to be developed. When I got the project, I realized that most of the students had already worked on it, but none of their work is reusable now (for a variety of reasons: undocumented code and design architecture, different platform implementations). I have experience working on several open source projects, so I want to build this project in such a way that the components I make are as reusable as possible.

    Now, as the game is multi-player and the entire game progress will be handled on the server, I'm currently working on the server's design. Since I wanted to make the game server reusable by any client platform, I was previously torn between sockets and REST for the server's design, but finally settled on REST APIs. Since I have to keep all players in sync while they make moves in the game, the server will use a database that keeps all players' progress, specific to each game table (in bridge, 4 players play at a single table, and the server will handle many such game tables). I don't know if it's an appropriate decision to use a database as the shared medium for tracking each game table's progress (let me know if there's an appropriate or better option). Obviously, when a game is completed, the data for that table is discarded from the server's database.

    Now the problem: access to a REST service is an HTTP call, so as long as the client doesn't make any request, the server remains idle. Consider a situation where player A has played a card on his device, and the device requests that this change be applied on the server. I now need to let the other three devices know that the player has played a card, and also update the view on their devices. As far as I know, REST cannot provide a push-notification-like system, since the connection to the server is not persistent. One solution I thought of is to have each device constantly poll the server for changes (say, every 56 ms) and reflect them on the device when found. But I feel this is not an elegant way, as every HTTP request is expensive. (I chose REST to make the game play experience robust, since a mobile device tends to get disconnected from the Internet, and with a socket-like persistent connection the entire game progress is subject to loss; also, portability on the client end is important.) Also, imagine a situation where 10 game tables are in progress and 40 players are playing: the server must be capable of handling the flood of HTTP requests from all the devices polling every 56 ms, and I wonder whether that situation starts to resemble a DoS attack.

    So, given the situation, am I on the right track for the server design? I want to be sure before I proceed much further with the code.
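    One common middle ground between fixed-interval polling and persistent sockets is HTTP long polling: the client's request is held open until the state changes or a timeout expires, so idle tables cost almost nothing. A minimal server-side sketch in PHP, where the table and column names are hypothetical:

        <?php
        // Long-polling sketch: the client sends the last state version it has seen
        // and the server holds the request until the table's state changes or a
        // timeout elapses.
        $tableId       = (int)($_GET['table_id'] ?? 0);
        $clientVersion = (int)($_GET['version'] ?? 0);

        $db   = new PDO('mysql:host=localhost;dbname=bridge', 'user', 'pass');
        $stmt = $db->prepare('SELECT version, state_json FROM game_tables WHERE id = ?');
        $deadline = time() + 25;   // stay under typical 30 s gateway timeouts

        do {
            $stmt->execute([$tableId]);
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            if ($row && (int)$row['version'] > $clientVersion) {
                header('Content-Type: application/json');
                echo $row['state_json'];   // the client applies this, then re-polls
                exit;
            }
            usleep(500000);                // re-check twice a second, not every 56 ms
        } while (time() < $deadline);

        http_response_code(204);           // nothing changed; client just polls again

    Each client immediately re-polls after every response, so a played card propagates in one round trip instead of dozens of 56 ms probes, and the request volume stays proportional to the number of moves rather than to elapsed time.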


  • SQLPeople Interviews Wrap Up January 2011 with Matt Velic

    - by andyleonard
    Introduction: Late last year I announced an exciting new endeavor called SQLPeople. At the end of 2010 I announced the 2010 SQLPeople Person of the Year. Check out this interview with Matt Velic! SQLPeople is off to a great start. Thanks to all who have made our first month awesome - those willing to share and respond to interview requests, and those who are enjoying the interviews! Here's a wrap-up of January 2011. January 2011 interviews: Matt Velic, Cindy Gross, Steve Fibich, Tim Mitchell, Jeremiah Peschka...(read more)


  • How do I serve dynamic WebDAV directory listings using Apache

    - by Jack Douglas
    I can use mod_rewrite to redirect /dynamic.php/xyz.php to /dynamic.php and then serve different content for xyz.php using $_SERVER - where xyz.php is any arbitrary filename requested by a client. No problem so far. When a client connects to my WebDAV server, they can list the contents of a directory, e.g. / or /dynamic.php/. How do I intercept this request so I can dynamically generate a list of available files for the client (which requests this list using PROPFIND)?
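    Assuming mod_rewrite routes the PROPFIND request through to PHP (rather than letting mod_dav answer it), the script can emit the 207 Multi-Status document itself. A minimal, hand-rolled sketch with a hard-coded file list standing in for the dynamically generated one:

        <?php
        // Sketch: answer PROPFIND ourselves once mod_rewrite routes the request
        // here. Emits a minimal DAV 207 Multi-Status listing (RFC 4918).
        function davEntry(string $href, bool $isCollection): string {
            $type = $isCollection ? '<D:collection/>' : '';
            return '<D:response><D:href>' . htmlspecialchars($href) . '</D:href>'
                 . '<D:propstat><D:prop><D:resourcetype>' . $type . '</D:resourcetype></D:prop>'
                 . '<D:status>HTTP/1.1 200 OK</D:status></D:propstat></D:response>' . "\n";
        }

        if ($_SERVER['REQUEST_METHOD'] === 'PROPFIND') {
            $base  = rtrim($_SERVER['REQUEST_URI'], '/');
            $files = ['xyz.php', 'abc.php'];            // dynamic list goes here
            header('HTTP/1.1 207 Multi-Status');
            header('Content-Type: application/xml; charset=utf-8');
            echo '<?xml version="1.0" encoding="utf-8"?>' . "\n";
            echo '<D:multistatus xmlns:D="DAV:">' . "\n";
            echo davEntry($base . '/', true);           // the collection itself
            foreach ($files as $f) {
                echo davEntry($base . '/' . $f, false); // one response per member
            }
            echo '</D:multistatus>';
            exit;
        }

    Real WebDAV clients also request properties like getcontentlength and getlastmodified, so a production listing would need to honor the request's Depth header and the properties named in its body; this only shows the shape of the response.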


  • Is there an SO API which can fetch all Questions & Answers for a particular keyword

    - by user4203
    I am looking for an API that helps in fetching all the questions and answers from SO and other Stack Exchange sites for a particular "keyword". Later, using XML-RPC, these questions would be posted as blog posts, and their answers as the posts' answers. I'm just wondering whether this is possible with an API. One of my friends suggested that we scrape, but I don't want screen scraping; instead I am looking for API requests that can handle this.
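    The Stack Exchange API does expose a search route that covers the question side of this. A hedged PHP sketch against the v2.x /search endpoint (check the current documentation for exact parameters; note that the API serves gzip-compressed responses):

        <?php
        // Fetch questions matching a keyword via the Stack Exchange API /search
        // route (v2.x). Answers can then be fetched per question id.
        $url = 'https://api.stackexchange.com/2.2/search?' . http_build_query([
            'intitle'  => 'feature requests',
            'site'     => 'stackoverflow',
            'pagesize' => 10,
        ]);
        $ctx = stream_context_create(['http' => ['header' => 'Accept-Encoding: gzip']]);
        $raw = file_get_contents($url, false, $ctx);
        // Decode only if the payload is actually gzip (magic bytes 1f 8b).
        $json = (substr($raw, 0, 2) === "\x1f\x8b") ? gzdecode($raw) : $raw;

        $data = json_decode($json, true);
        foreach ($data['items'] as $q) {
            echo $q['score'] . "\t" . $q['title'] . "\n";
        }

    Titles in API responses are HTML-encoded, so a real importer would decode them before pushing posts out over XML-RPC.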

