Search Results

Search found 73376 results on 2936 pages for 'set based'.


  • How to reduce the time it takes to load my web game? [closed]

    - by Danial
    I created a puzzle game with Unity and uploaded it to one server. This works fine, but I bought a new server and uploaded my game to it as well. There, the loading time is much longer. These are the servers: http://pinheadsinteractive.com/Mozzie/ (fast) http://operation-mozzie-free.com/ (slow) The Unity files are exactly the same on both servers. My client is dissatisfied with the new, slow loading time, and in some cases players could not load the game at all. So, how can I reduce the time my Unity game takes to load? For the moment, I'm using an iframe on the new server as a workaround, but the issue remains unsolved.

    Read the article

  • Logic in Entity Component Systems

    - by aaron
    I'm making a game that uses an Entity/Component architecture, basically a port of Artemis's framework to C++. The problem arises when I try to make a PlayerControllerComponent; my original idea was this:

        class PlayerControllerComponent : public Component {
        public:
            virtual void update() = 0;
        };

        class FpsPlayerControllerComponent : public PlayerControllerComponent {
        public:
            void update() {
                // handle input
            }
        };

    and have a system that updates PlayerControllerComponents, but I found out that the Artemis framework does not look at sub-classes the way I thought it would. So, all in all, my question is: should I make the framework aware of subclasses, or should I add a new Component-like object that is used for logic?
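    One possible direction, sketched below without assuming anything about Artemis's internals: instead of teaching the framework about subclasses, register the logic as data on a single generic component type that the existing system already processes. The ScriptComponent name and its update signature are illustrative assumptions, not part of Artemis or the port.

        #include <functional>

        class Component { public: virtual ~Component() {} };

        // Generic "logic lives in data" component; the one system that updates
        // ScriptComponents never needs to know about concrete controller types.
        class ScriptComponent : public Component {
        public:
            explicit ScriptComponent(const std::function<void(float)>& logic)
                : logic_(logic) {}

            void update(float dt) { if (logic_) logic_(dt); }

        private:
            std::function<void(float)> logic_;
        };

        // Usage idea: attach the FPS behaviour as data on the player entity, e.g.
        //   entity.addComponent(new ScriptComponent([](float dt) { /* handle input */ }));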

    Read the article

  • Which game engine for HTML5 + Node.js

    - by Chrene
    I want to create a realtime multiplayer game using HTML5. I want to use Node.js as the server, and I only need to be able to render images in a canvas, play some sounds, and do some basic animations. The game loop should run on the server, and the client should communicate via sockets to render the canvas. I am not going to spend any money on the engine, and I don't want to use cocos2d-javascript.

    Read the article

  • Managing game objects/components

    - by Xeon06
    Good day everyone. By far the biggest problem that has always dawned on me when programming games is how to structure my code; it just becomes an incredible mess after a while. The reason is that I have no idea how different classes should interact with each other. Let's take an example. Say I have a class Player, a class PlayerInput and a class Map. The player class contains information about the location of the player, whereas the player input class handles changing that location, but first makes sure it's within a walkable area from the map class. How to structure this? My usual approach is to pass those components as parameters to the constructors of the objects that need them, like so:

        var map = new Map();
        var player = new Player();
        var input = new PlayerInput(player, map);

    The problem with that is that it quickly gets messy: when you add new components you have to go through your constructors and update them, and it doesn't work well if you have mirroring (circular) references:

        var physics = new Physics(input); // Oops, doesn't work
        var input = new Input(physics);

    So, how do you guys usually manage this? Thanks.
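    One common way around the growing constructor lists is to hand every component a single shared context (sometimes called a service locator or world object) instead of each dependency individually. A minimal C++ sketch of that idea; all class names here are invented for illustration, not taken from the question:

        struct Map    { bool isWalkable(float x, float y) const { return true; } };
        struct Player { float x; float y; Player() : x(0), y(0) {} };

        // One object owns the shared pieces; components receive only this context,
        // so adding a new component doesn't ripple through every constructor, and
        // two components reach each other through the context instead of holding
        // direct circular references.
        struct GameContext {
            Map map;
            Player player;
        };

        class PlayerInput {
        public:
            explicit PlayerInput(GameContext& ctx) : ctx_(ctx) {}

            void move(float dx, float dy) {
                float nx = ctx_.player.x + dx;
                float ny = ctx_.player.y + dy;
                if (ctx_.map.isWalkable(nx, ny)) {  // consult the map before moving
                    ctx_.player.x = nx;
                    ctx_.player.y = ny;
                }
            }

        private:
            GameContext& ctx_;
        };

        // Usage:
        //   GameContext ctx;
        //   PlayerInput input(ctx);
        //   input.move(1.0f, 0.0f);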

    Read the article

  • Is testable code actually more stable? [closed]

    - by Xodarap
    A Google Scholar search turns up numerous papers on testability, including models for computing testability, recommendations for how one's code can be made more testable, etc. They all come with the assertion that more testable code is more stable; however, I can't find any studies which actually demonstrate this. I tried looking for studies evaluating the effect of testable code on quality, but the closest I can find is "Improving the Testability of Object Oriented Systems", which discusses the relationship between design flaws and testability. Is testable code actually more stable? And why, or why not? Please support your answers with references or evidence.

    Read the article

  • Is it possible to set a claimType to be required and have a certain value in WIF?

    - by Nissan Fan
        <claimTypeRequired>
          <claimType type="http://www.stackoverflow.com/claims/canwalkthedog" optional="false" />
        </claimTypeRequired>

    Is it possible in WIF apps to set up the web.config to use constraints? E.g. say that a particular claim is required and must contain a value such as 1 or 'Y'? I want the framework to deny access to the application if a claim doesn't meet a certain criterion, rather than having to code the check myself.

    Read the article

  • Boost.MultiIndex: How to make an effective set intersection?

    - by Arman
    Hello, assume that we have data1 and data2. How can I intersect them with std::set_intersection()?

        struct pID {
            int ID;
            unsigned int IDf; // position in the file
            pID(int id, const unsigned int idf) : ID(id), IDf(idf) {}
            bool operator<(const pID& p) const { return ID < p.ID; }
        };

        struct ID{};
        struct IDf{};

        typedef multi_index_container<
            pID,
            indexed_by<
                ordered_unique<
                    tag<IDf>, BOOST_MULTI_INDEX_MEMBER(pID, unsigned int, IDf)>,
                ordered_non_unique<
                    tag<ID>, BOOST_MULTI_INDEX_MEMBER(pID, int, ID)>
            >
        > pID_set;

        pID_set data1, data2;
        Load(data1);
        Load(data2);

        pID_set::index<ID>::type& ID_index1 = data1.get<ID>();
        pID_set::index<ID>::type& ID_index2 = data2.get<ID>();

        // How do I use set_intersection?

    Kind regards, Arman.
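    A minimal sketch of one way this could work, assuming the structs and typedef above are in scope: the ordered_non_unique<ID> index iterates its elements sorted by ID, which is exactly the ordering pID::operator< defines, so std::set_intersection can consume the two index ranges directly.

        #include <algorithm>   // std::set_intersection
        #include <iterator>    // std::back_inserter
        #include <vector>

        // Elements present (by ID) in both containers, copied into a plain vector.
        std::vector<pID> intersect_by_id(const pID_set& a, const pID_set& b)
        {
            const pID_set::index<ID>::type& ia = a.get<ID>();
            const pID_set::index<ID>::type& ib = b.get<ID>();

            std::vector<pID> result;
            // Both ranges are sorted by ID, matching pID::operator<,
            // so the standard algorithm applies as-is.
            std::set_intersection(ia.begin(), ia.end(),
                                  ib.begin(), ib.end(),
                                  std::back_inserter(result));
            return result;
        }

    If only the IDs are needed, intersecting into a container of plain ints works the same way and avoids copying whole records.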

    Read the article

  • Set scheduled task last result to 0x0 manually

    - by Rogier
    Every night a task runs that checks whether any scheduled task has a Last Result not equal to 0x0. If a scheduled task has an error like 0x1, an e-mail is automatically sent to me. As some tasks run only weekly, and sometimes an error occurs that leaves the result not equal to 0x0, an e-mail with the error message is sent every night, because the Last Result column still shows the last result of 0x1. I would like to set the Last Result column to 0x0 manually once I have solved a problem, so I won't get an e-mail with the error message every night. So is it possible to set a scheduled task's Last Result to 0x0 manually (or by a script)? @harrymc: see the script below that sends the e-mail. I can easily add a criterion to ignore result 0x1 (or another code), but this is not the solution, as most of the time this result is a real error and has to be e-mailed.

        set YourEmailAddress=[email protected]
        set SMTPServer=SMTPserver
        set PathToScript=c:\scripts
        set FromAddress=[email protected]

        for /F "delims=" %%a in ('schtasks /query /v /fo:list ^| findstr /i "Taskname Result"') do call :Sub %%a
        goto :eof

        :Sub
        set Line=%*
        set BOL=%Line:~0,4%
        set MOL=%Line:~38%
        if /i %BOL%==Task (
            set name=%MOL%
            goto :eof
        )
        set result=%MOL%
        echo Task Name=%name%, Task Result=%result%
        if not %result%==0 (
            echo Task %name% failed with result %result% > %PathToScript%\taskcheckerlog.txt
            bmail %PathToScript%\taskcheckerlog.txt -t %YourEmailAddress% -a "Warning! Failed %name% Scheduled Task on %computername%" -s %SMTPServer% -f %FromAddress% -b "Task %name% failed with result %result% on CorVu scheduler %computername%"
        )

    Read the article

  • Tomcat - virtual hosting - name / IP / port - based

    - by lisak
    Hey, what are the usage scenarios for these kinds of virtual hosting?

    Name based - typical Tomcat virtual hosting: one HOME instance with many contexts, each as an individual host.

    IP based / port based - multiple instances of Tomcat (how is it with performance and memory consumption?) running on IP aliases (virtual IPs) for one network adapter, usually behind an Apache HTTP server that can run name-based virtual hosts. Otherwise I can't figure out how I would forward requests in iptables/the firewall based on IP address, since there is just one.

    How is IP-based virtual hosting done with Tomcat and multiple instances? I'd like to hear some usage scenarios from your experience. How are you running your applications? Because there are applications that have their own modified classloader and are developed to run alone within a Tomcat instance, and then there are trivial applications which can run within one instance without problems. Many thanks

    Read the article

  • Web-based document merge solution?

    - by rugcutter
    We are looking for a web-based document merge solution. Our application is a web-based project management tool built using Xataface - PHP on Windows IIS + MySQL. We have a function that allows the user to generate a status report in Microsoft Word format based on data in the tool. Currently this function is implemented using LiveDocX. We have a status report template, and LiveDocX performs the merge into the template using data from our project management tool. The main drawback is that LiveDocX is web-service based. We are looking to replace LiveDocX in order to reduce our dependence on the up-time of a third-party web service that we cannot control. Does anyone have any suggestions on a web-based document merge solution that I can install on my IIS or PHP-based server?

    Read the article

  • How to set the service endPoint URI dynamically in SOA Suite 11gR1 by Sylvain Grosjean

    - by JuergenKress
    Use case: This example demonstrates how to get the URI of the backend service from a repository and how to set it dynamically on our partnerLink (dynamicPartnerLink). Implementation steps:

    1. Create a DVM file
    2. Create a BPEL component
    3. Add the endPointURI variable and assign the URI
    4. Set the endpointURI property in the invoke activity

    1. Create a DVM file: In order to define our repository, we are going to use DVM (Data Value Maps). For more explanation regarding DVM, you should read this documentation. 2. Create a BPEL component: First you need to implement the simple BPEL process like this: the AssignPayload is used to set the input variable of our invoke activity, the AssignEndpointURI is used to dynamically set the endPointURI variable from our DVM repository, and the invoke activity calls the external service. Read the complete article here.

    Read the article

  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic, which is a standard feature of Oracle WebLogic 11g. Customers on other web application servers and other versions of Oracle WebLogic can refer to the documentation provided with the server to do a similar thing. In this blog entry I am going to discuss a new feature that is only present in Oracle Utilities Application Framework 4 and above that allows JMX to be used for managing and monitoring the Oracle Utilities web applications. In this case JMX can be used to perform monitoring as well as to provide management of the cache. In Oracle Utilities Application Framework you can enable Web Application Server JMX monitoring that is unique to the framework by specifying a JMX port number in the RMI Port number for JMX Web setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine:

        spl.runtime.management.rmi.port=6740
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
        ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

    ouaf.jmx.* files - contain the userid and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX site.

    Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes, I will use jconsole, but any JSR160 compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar in your classpath), you specify the URL in the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so if you try and view the file you may not be able to decipher it visually. There are three Mbeans available to you:

    flushBean - This is a JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided, for backward compatibility, but are now authorization controlled.

    JVMInfo - This is a JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental, not operational, and is used for support purposes. The jsp versions of the JVMInfo utilities are still provided, for backward compatibility, but are now also authorization controlled.

    JVMSystem - This is an implementation of the Java system MXBeans for use in monitoring.
We provide our own implementation of the base Mbeans to save on creating another JMX configuration for internal monitoring and to provide a consistent interface across platforms for the MXBeans. This Mbean is disabled by default and can be enabled using the enableJVMSystemBeans operation. This Mbean allows for the monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and the Thread MX beans. Refer to the Server Administration Guides provided with your product and the Technical Best Practices Whitepaper for information about individual statistics. The Web Application Server JMX monitoring allows greater visibility for monitoring and management of the Oracle Utilities Application Framework application from jconsole or any JSR160 compliant JMX browser or JMX console.

    Read the article

  • Writing an algorithm on a 2D data set in plain English

    - by Alexandre P. Levasseur
    I have started an introductory Java class; the material is absolutely horrendous, and I have to get excellent grades to be accepted into the master's degree, hence my very beginner question: In my assignment I have to write algorithms (no pseudo-code yet) to solve a board game (Sudoku). Essentially, the notes say that an algorithm is a specification of the input(s), the output(s) and the treatments applied to the input to get the output. My question is about the wording of algorithms, because I could probably code it but I can't seem to put it on paper in a coherent way. The game has a 9x9 board, and one of the algorithms to write has to find the solution by looking at 3 squares (either horizontal or vertical) and seeing if one of the three sub-squares matches the number you are looking for. If none match, then the number you are looking to place is in one of the other 2 sets of 3 sub-squares (see image to get a better idea). I really can't get my head around how to formulate the solution into the terms described above, or maybe it's just too simple. Here's what I was thinking:

    Input: A 2-dimensional set of data of size 9 by 9 to be solved and a number to search for.

    Output: A 2-dimensional set of data of size 9 by 9, either solved or partially solved.

    Treatment: Scan each set of 3x9 and 9x3 squares. For each line or column of a 3x3 square, check if the number matches a line (or column). If it does, then move to the next line (or column). If not, then proceed to the next 3x3 square in the same line (or column). Rinse and repeat.

    Does that make sense as an algorithm written in plain English? I'm not looking for an answer to the algorithm per se, but rather for guidance on formulating algorithms in plain English.
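    If it helps to see how that plain-English treatment maps onto structure, here is a rough sketch (written in C++, though the shape is identical in Java); the Board typedef, the scanBands name and the loop bounds are all made up for illustration and are not an assignment answer:

        // Input/Output: the 9x9 data set; Treatment: scan each 3x9 band.
        typedef int Board[9][9];

        void scanBands(Board board, int number)
        {
            for (int band = 0; band < 9; band += 3) {         // each 3x9 band
                for (int row = band; row < band + 3; ++row) { // each line in the band
                    bool found = false;
                    for (int col = 0; col < 9; ++col) {
                        if (board[row][col] == number) { found = true; break; }
                    }
                    if (found) continue;  // number already placed: move to the next line
                    // otherwise: the number still has to go somewhere in this line
                }
            }
            // The same two outer loops, with row/col swapped, cover the 9x3 stacks.
        }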

    Read the article

  • How do I set up pairing email addresses?

    - by James A. Rosen
    Our team uses the Ruby gem hitch to manage pairing. You set it up with a group email address (e.g. [email protected]) and then tell it who is pairing:

        $ hitch james tiffany

    Hitch then sets your Git author configuration so that our commits look like

        commit 629dbd4739eaa91a720dd432c7a8e6e1a511cb2d
        Author: James and Tiffany <[email protected]>
        Date:   Thu Oct 31 13:59:05 2013 -0700

    Unfortunately, we've only been able to come up with two options: either [email protected] doesn't exist, and the downside is that if Travis CI tries to notify us that we broke the build, we don't see it; or [email protected] does exist and forwards to all the developers, and the downside is that everyone gets spammed with every broken build by every pair. We have too many possible pairs to do any of the following: set up actual [email protected] email addresses or groups (n^2 email addresses), set up forwarding rules for [email protected] (n^2 forwarding rules), or set up forwarding rules for [email protected] (n forwarding rules for each of n developers). Does anyone have a system that works for them?

    Read the article

  • Oracle’s New Approach to Cloud-based Applications User Experiences

    - by Oracle OpenWorld Blog Team
    By Misha Vaughan It was an exciting Oracle OpenWorld this year for customers and partners, as they got to see what their input into the Oracle user experience research and development process has produced for cloud-delivered applications. The result of all this engagement and listening is a focus on simplicity, mobility, and extensibility. These were the core themes across Oracle OpenWorld sessions, executive roundtables, and analyst briefings given by Jeremy Ashley, Oracle's vice president of user experience. The highlight of every meeting with a customer featured the new simplified UI for Oracle’s cloud applications.    Attendees at some sessions and events also saw a vision of what is coming next in the Oracle user experience, and they gave direct feedback on whether this would help solve their business problems.  What did attendees think of what they saw this year? Rebecca Wettemann of Nucleus Research was part of  an analyst briefing on next-generation user experiences from Oracle. Here’s what she told CRM Buyer in an interview just after the event:  “Many of the improvements are incremental, which is not surprising, as Oracle regularly updates its application,” Rebecca Wettemann, vice president of Nucleus Research, told CRM Buyer. "Still, there are distinct themes to this latest set of changes. One is usability. Oracle Sales Cloud, for example, is designed to have zero training for onboarding sales reps, which it does," she explained. "It is quite impressive, actually—the intuitive nature of the application and the design work they have done with this goal in mind. The software uses as few buttons and fields as possible," she pointed out. "The sales rep doesn't have to ask, 'what is the next step?' because she can see what it is."  What else did we hear? Oracle OpenWorld is a time when we can take a broader pulse of our customers’ and partners’ concerns. This year we heard some common user experience themes on the following: · A desire to continue to simplify widely used self-service tasks · A need to understand how customers or partners could take some of the UX lessons learned on simplicity and mobility into their own custom areas and projects  · The continuing challenge of needing to support bring-your-own-device and corporate-provided mobile devices to end users · A desire to harmonize user experiences across platforms for specific business-use cases  What does this mean for next year? Well, there were a lot of things we could only show to smaller groups of customers in our Oracle OpenWorld usability labs and HQ lab tours, to partners at our Expo, and to analysts under non-disclosure agreements. But we used these events as a way to get some early feedback about where we are focusing for the year ahead. Attendees gave us a positive response: @bkhan Saw some excellent UX innovations at the expo “@usableapps: Great job @mishavaughan and @vinoskey on #oow13 UX partner expo!” @WarnerTim @usableapps @mishavaughan @vinoskey @ultan Thanks for an interesting afternoon definitely liked the UX tool kits for partners. You can expect Oracle to continue pushing themes of simplicity, mobility, and extensibility even more aggressively in the next year.  If you are interested to find out what really goes on in the UX labs, such as what we are doing with smartphones, tablets, heads-up displays, and the AppsLab robots, feel free to reach out to me for more information: Misha Vaughan or on Twitter: @mishavaughan.

    Read the article

  • Glowing Chess Set Combines LEDs, Chess, and DIY Electronics Fun

    - by ETC
    Anyone who says that the centuries old game of Chess cannot be improved upon has obviously never played with a glowing chess board. Today we take a look at a cheap glass chess set modded to glow from within. Instructables user Tetranitrate had a glass chess set he scored on-the-cheap and had always wanted to illuminate it in some way. He ruled out illuminating the board itself (no good way to keep track of the piece colors) and putting a battery in each piece (too big of a pain, over complicates the design). His final solution, the one seen in the photo here, was to build a wood and copper board, run a low voltage across the surface of the chess board, and affix a conductive copper ring to the bottom of each chess piece to power the LED embedded inside. In this manner the pieces would glow on the board and then go dark as soon as they were removed from play. Hit up the link below for additional details on the build and instructions on building your own. LED Chess Set [Instructables]

    Read the article

  • Improving WIF's Claims-based Authorization - Part 1

    - by Your DisplayName here!
    As mentioned in my last post, I made several additions to WIF's built-in authorization infrastructure to make it more flexible and easy to use. The foundation for all this work is that you have to be able to directly call the registered ClaimsAuthorizationManager. The following snippet is the universal way to get to the WIF configuration that is currently in effect:

        public static ServiceConfiguration ServiceConfiguration
        {
            get
            {
                if (OperationContext.Current == null)
                {
                    // no WCF
                    return FederatedAuthentication.ServiceConfiguration;
                }

                // search message property
                if (OperationContext.Current.IncomingMessageProperties.ContainsKey("ServiceConfiguration"))
                {
                    var configuration = OperationContext.Current.IncomingMessageProperties["ServiceConfiguration"] as ServiceConfiguration;
                    if (configuration != null)
                    {
                        return configuration;
                    }
                }

                // return configuration from configuration file
                return new ServiceConfiguration();
            }
        }

    From here you can grab ServiceConfiguration.ClaimsAuthorizationManager, which gives you direct access to the CheckAccess method (and thus control over claim types and values). I then created the following wrapper methods:

        public static bool CheckAccess(string resource, string action)
        {
            return CheckAccess(resource, action, Thread.CurrentPrincipal as IClaimsPrincipal);
        }

        public static bool CheckAccess(string resource, string action, IClaimsPrincipal principal)
        {
            var context = new AuthorizationContext(principal, resource, action);
            return AuthorizationManager.CheckAccess(context);
        }

        public static bool CheckAccess(Collection<Claim> actions, Collection<Claim> resources)
        {
            return CheckAccess(new AuthorizationContext(
                Thread.CurrentPrincipal.AsClaimsPrincipal(), resources, actions));
        }

        public static bool CheckAccess(AuthorizationContext context)
        {
            return AuthorizationManager.CheckAccess(context);
        }

    I also created the same set of methods but called DemandAccess. They internally use CheckAccess and will throw a SecurityException when false is returned. All the code is part of Thinktecture.IdentityModel on Codeplex – or via NuGet (Install-Package Thinktecture.IdentityModel).

    Read the article

  • .NET Properties - Use Private Set or ReadOnly Property?

    - by tgxiii
    In what situation should I use a Private Set on a property versus making it a ReadOnly property? Take into consideration the two very simplistic examples below. First example:

        Public Class Person
            Private _name As String

            Public Property Name As String
                Get
                    Return _name
                End Get
                Private Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub WorkOnName()
                Dim txtInfo As TextInfo = _
                    Threading.Thread.CurrentThread.CurrentCulture.TextInfo
                Me.Name = txtInfo.ToTitleCase(Me.Name)
            End Sub
        End Class

        // ----------

        public class Person
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                private set { _name = value; }
            }

            public void WorkOnName()
            {
                TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo;
                this.Name = txtInfo.ToTitleCase(this.Name);
            }
        }

    Second example:

        Public Class AnotherPerson
            Private _name As String

            Public ReadOnly Property Name As String
                Get
                    Return _name
                End Get
            End Property

            Public Sub WorkOnName()
                Dim txtInfo As TextInfo = _
                    Threading.Thread.CurrentThread.CurrentCulture.TextInfo
                _name = txtInfo.ToTitleCase(_name)
            End Sub
        End Class

        // ---------------

        public class AnotherPerson
        {
            private string _name;

            public string Name
            {
                get { return _name; }
            }

            public void WorkOnName()
            {
                TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo;
                _name = txtInfo.ToTitleCase(_name);
            }
        }

    They both yield the same results. Is this a situation where there's no right and wrong, and it's just a matter of preference?

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 Hands-On Lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

    Summary of Lab Exercises. This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:

    1. Install Hadoop.
    2. Edit the Hadoop configuration files.
    3. Configure the Network Time Protocol.
    4. Create the virtual network interfaces (VNICs).
    5. Create the NameNode and the secondary NameNode zones.
    6. Set up the DataNode zones.
    7. Configure the NameNode.
    8. Set up SSH.
    9. Format HDFS from the NameNode.
    10. Start the Hadoop cluster.
    11. Run a MapReduce job.
    12. Secure data at rest using ZFS encryption.
    13. Use Oracle Solaris DTrace for performance monitoring.

    Read it now

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx

    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through: What is Cloud Based Load Testing? How have I been using this feature? - Success story! Where can you find more resources on this feature?

    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers to Load Testing & Performance Testing adoption are:

    - High infrastructure and administration cost that comes with this phase of testing
    - Time taken to procure & set up the test infrastructure
    - Finding use for this infrastructure investment after completion of testing

    Is cloud the answer? 100% Visual Studio compatible, scalable and realistic, start testing in < 2 minutes, intuitive, pay only for what you need, use existing on-premise tests on the cloud. There are a lot of vendors out there offering Cloud Based Load Testing, to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others… The question you may want to ask is, why should you go with Microsoft's Cloud based Load Test offering? If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing Web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing; for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If you have your assets already hosted in Azure, and possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost because of the generated traffic, since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft's cloud based Load Test service is yet to be announced, but I would expect this to be priced attractively to match the market competition. The only additional configuration required for running load tests on Microsoft's Cloud based Load Test service is to select the Test run location as "Run tests using Visual Studio Team Foundation Service".

    How have I been using Microsoft's Cloud based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out. This gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine.
    This insurance quote generation engine is:

    - hosted in Windows Azure
    - expected to get quote requests from across the globe
    - expected to handle 5 million quote requests in a day (not clear how this load will be distributed across the day)

    There was no way I could simulate that kind of load from on premise without standing up additional hardware. But Microsoft's Cloud based Load Test service allowed me to test my key performance testing scenarios, i.e. simulating expected load, endurance testing, threshold testing and testing for latency.

    Simulating expected load: approach to devising a load pattern. My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, invoking 1 quote from the quote engine takes 0.5 seconds. Now if you do the math:

    - 1 quote request by 1 user = 0.5 seconds
    - quotes generated by 1 user in 24 hours = 1 * (((2 * 60) * 60) * 24) = 172,800
    - quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000

    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern. Some examples of load test patterns can be found here.

    Endurance Testing: To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold Testing: To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine by incrementing user load, with different variations of incremental user load per minute, till the application crashed out and forced an IIS reset.

    Testing for Latency: Currently the Microsoft Load Test service does not support generating geographically distributed load. I, however, deployed the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.

    More resources on Microsoft Cloud based Load Test Service. A few important links to get you started:

    - Download Visual Studio Ultimate 2013 Preview
    - Getting started guide for load testing using Team Foundation Service
    - Troubleshooting guide for FAQs and known issues
    - Team Foundation Service forum for questions and support
    - Detailed demo and presentation (link to Tech-Ed session recording)
    - Detailed demo and presentation (link to Build session recording)

    There are a few limits on the usage of the Microsoft Cloud based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • What are the differences between: agent, actor, dataflow based programming?

    - by inf3rno
    What are the differences between the following terms?

    - agent-based programming
    - agent-based programming with microagents
    - actor-based programming
    - actor-based programming with lightweight actors
    - dataflow-based programming

    It is hard to find articles comparing them, and they are very similar. AFAIK they have different constraints and they are implemented at different abstraction levels, but I need some reassurance...

    Read the article

  • Correct place to set $BIBINPUTS environment variable

    - by student
    If I set the $BIBINPUTS environment variable in my .zshrc, it is recognized by emacs-reftex (via emacsclient) if I start Emacs from my zsh command line. However, if I start it from the menu bar or gmrun, it doesn't know this variable. So where is the correct place to set it for the whole user environment? If there are several alternatives, let me know, and also whether it changed between different Ubuntu versions. Edit: I have tried to set it in ~/.pam_environment like

        BSTINPUTS=.:/home/myuser/BiBTeX/:$BSTINPUTS
        BIBINPUTS=.:/home/myuser/BiBTeX/:$BIBINPUTS

    but it seems to have no effect (even after rebooting) and is not listed via printenv. I am currently using Ubuntu Natty + GDM + xmonad.

    Read the article

  • Fast set indexing data structure for superset retrieval

    - by Asterios
    I am given a set of sets: {{a,b}, {a,b,c}, {a,c}, {a,c,f}}. I would like to have a data structure to index those sets such that the following "lookup" is executed fast: find all supersets of a given set. For example, given the set {a,c} the structure would return {{a,b,c}, {a,c,f}, {a,c}} but not {a,b}. Any suggestions? Could this be done with a smart trie-like data structure storing sets after a proper sorting? This data structure is going to be queried a lot. Thus, I'm searching for a structure that might be expensive to build but fast to query.
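    One simple baseline worth comparing against (not a trie, just an inverted index): map each element to the sorted list of ids of the sets that contain it, and answer a query by intersecting the id lists of the query's elements; the survivors are exactly the stored supersets. A C++ sketch under those assumptions, with all names invented for illustration:

        #include <algorithm>
        #include <cstddef>
        #include <iterator>
        #include <map>
        #include <set>
        #include <vector>

        typedef std::set<char> ElemSet;

        class SupersetIndex {
        public:
            void add(const ElemSet& s) {
                sets_.push_back(s);
                std::size_t id = sets_.size() - 1;        // ids grow, so each list stays sorted
                for (ElemSet::const_iterator e = s.begin(); e != s.end(); ++e)
                    byElem_[*e].push_back(id);            // inverted index: element -> set ids
            }

            // All stored sets that contain every element of `q`, i.e. its supersets.
            std::vector<ElemSet> supersetsOf(const ElemSet& q) const {
                if (q.empty()) return sets_;              // every set contains the empty set
                ElemSet::const_iterator e = q.begin();
                std::vector<std::size_t> cand = idsFor(*e);
                for (++e; e != q.end() && !cand.empty(); ++e) {
                    std::vector<std::size_t> ids = idsFor(*e), next;
                    std::set_intersection(cand.begin(), cand.end(),
                                          ids.begin(), ids.end(),
                                          std::back_inserter(next));
                    cand.swap(next);
                }
                std::vector<ElemSet> out;
                for (std::size_t i = 0; i < cand.size(); ++i) out.push_back(sets_[cand[i]]);
                return out;
            }

        private:
            std::vector<std::size_t> idsFor(char e) const {
                std::map<char, std::vector<std::size_t> >::const_iterator it = byElem_.find(e);
                return it == byElem_.end() ? std::vector<std::size_t>() : it->second;
            }

            std::vector<ElemSet> sets_;
            std::map<char, std::vector<std::size_t> > byElem_;
        };

    For the example above, supersetsOf({a,c}) touches only the id lists for a and c, and their intersection yields {a,b,c}, {a,c} and {a,c,f}; only candidate supersets are ever examined.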

    Read the article

  • How to set up a simple Subversion workflow

    - by Milen Bilyanov
    I am trying to set up a simple SVN workflow at home. I am new to Subversion (and programming), so I have been reading the official PDF documentation but am still not sure how to set up my repository. I am working mainly with Python, Bash and RSL (RenderMan Shading Language). So I already have a /dev structure on my disk, like this: http://imageshack.us/f/708/devstructure.png/ And I have a /site structure that links to my /dev folder: http://imageshack.us/f/651/sitestructure.png/ So obviously starting to use SVN will change this approach that I already have in place. The question is, when I am setting up my SVN repository for the work I do in my /dev folder: should I set up a separate repository for each programming platform, and where exactly should I place my repository? Thanks.

    Read the article

  • How get and set accessors work

    - by Chris Halcrow
    The standard method of implementing get and set accessors in C# and VB.NET is to use a public property to set and retrieve the value of a corresponding private variable. Am I right in saying that this has no effect across different instances of a class? By this I mean: if there are multiple instances of an object, then those instances and their properties are completely independent, right? So is my understanding correct that the private variable is just a construct to be able to implement the get and set pattern? I've never been 100% sure about this.
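    That is the right intuition: the backing field is per-instance state, and the property is just controlled access to it. A tiny analogue in C++ (getter and setter methods standing in for the property; the Person name and fields are invented for illustration) shows that two instances never affect each other:

        #include <iostream>
        #include <string>

        class Person {
        public:
            // "Property" surface: controlled access to the private backing field.
            const std::string& name() const { return name_; }
            void setName(const std::string& value) { name_ = value; }

        private:
            std::string name_;   // per-instance backing field, one copy per object
        };

        int main() {
            Person a, b;
            a.setName("Alice");
            b.setName("Bob");
            // Setting a's name does not touch b's: each instance owns its own field.
            std::cout << a.name() << ", " << b.name() << "\n";  // prints: Alice, Bob
            return 0;
        }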

    Read the article
