Search Results

Search found 5454 results on 219 pages for 'soa purge instances dehyd'.


  • Silverlight Cream for February 07, 2011 -- #1043

    - by Dave Campbell
    In this Issue: Roy Dallal, Kevin Dockx, Gill Cleeren, Oren Gal, Colin Eberhardt, Rudi Grobler, Jesse Liberty, Shawn Wildermuth, Kirupa Chinnathambi, Jeremy Likness, Martin Krüger(-2-), Beth Massi, and Michael Crump.

    Above the Fold:
    - Silverlight: "A Circular ProgressBar Style using an Attached ViewModel" (Colin Eberhardt)
    - WP7: "Isolated Storage" (Jesse Liberty)
    - Lightswitch: "How To Create Outlook Appointments from a LightSwitch Application" (Beth Massi)

    Shoutouts: Gergely Orosz has a summary of his 4-part series on Styles in Silverlight: Everything a Developer Needs To Know.

    From SilverlightCream.com:
    - Silverlight Memory Leak, Part 2: Roy Dallal has part 2 of his memory leak posts up... and discusses the results of running VMMap and some hints on how to make best use of it.
    - Using a Channel Factory in Silverlight (instead of adding a Service Reference). With cows.: Kevin Dockx has a post up for those of you that don't like the generated code that comes about when adding a service reference, and the answer is a Channel Factory... and he has an example app in the post that populates a list of cows... honest... check it out.
    - Getting ready for Microsoft Silverlight Exam 70-506 (Part 4): Gill Cleeren has Part 4 of his deep-dive into studying for the Silverlight Certification exam. This time out he's got probably half a gazillion links for working with data... seriously!
    - Sync unlimited instances of one Silverlight application: How about a cross-browser sync of an unlimited number of instances of the same Silverlight app... Oren Gal has just that going on, and discusses his first two attempts and how he finally homed in on the solution.
    - A Circular ProgressBar Style using an Attached ViewModel: Wow... check out what Colin Eberhardt's done with the "Progress Bar"... using an Attached View Model which he discussed in a post a while back... these are awesome!
    - WP7 - Professional Audio Recorder: Rudi Grobler discusses an audio recorder for WP7 that uses the NAudio audio library for not only the recording but visualization.
    - Isolated Storage: Jesse Liberty's got his 30th 'From Scratch' post up, and this time he's talking about Isolated Storage.
    - Learning OData? MSDN and Shawn Wildermuth have the videos for you!: Shawn Wildermuth produced a couple of series of videos for MSDN on OData: Getting Started and Consuming OData... get the link on Shawn's post.
    - Creating Sample Data from a Class - Page 1: Kirupa Chinnathambi shows us how to use a schema of your own design in Blend... yet still have Blend produce sample data.
    - A Pivot-Style Data Grid without the DataGrid: Jeremy Likness discusses the lack of an open-source grid with dynamic columns... let him know if you've done one! ... and then he continues on to demonstrate his build-out of the same.
    - Synchronize a freeform drawing and a real path creation: Martin Krüger has a few new samples up in the Expression Gallery. This first one takes mouse movement in an InkPresenter, creates path statements from it in a canvas, and plays them back.
    - How to: use Storyboard completed behaviors: Martin Krüger's next post is about Storyboards and firing one off the end of another, in Blend... so he ended up producing a behavior for doing that... and it's in the Expression Gallery.
    - How To Create Outlook Appointments from a LightSwitch Application: Beth Massi has a new Lightswitch post up... her previous was email from Lightswitch... this one is Outlook appointments... pretty darn cool.
    - Quick run through of the WP7 Developer Tools January 2011: Michael Crump has a really good quick look at the new WP7 Dev Tools that were released last week posted on his blog.

    Stay in the 'Light!

    Read the article

  • Five Ways Enterprise 2.0 Can Transform Your Business - Q&A from the Webcast

    - by [email protected]
    A few weeks ago, Vince Casarez and I presented with KMWorld on the Five Ways Enterprise 2.0 Can Transform Your Business. It was an enjoyable, interactive webcast in which Vince and I discussed the ways Enterprise 2.0 can transform your business and, more importantly, highlighted key customer examples of how to do so. If you missed the webcast, you can catch a replay here. We had a lot of audience participation in some of the polls we conducted and in the Q&A session. We weren't able to address all of the questions during the broadcast, so we have attempted to answer them here:

    Q: Which area within your firm focuses on Web 2.0? Meaning, do you find new departments developing just to manage the Web 2.0 (Twitter, Facebook, etc.) user experience, or are you structuring current departments?
    A: There are three distinct efforts within Oracle. The first is around delivery of these Web 2.0 services for enterprise deployments. This is the focus of the WebCenter team. The second effort is injecting these Web 2.0 services into use cases that drive the different enterprise applications. This effort is focused on how to manage these external services and bring them into a cohesive flow for marketing programs, customer care, and purchasing. The third effort is how we consume these services internally to enhance Oracle's business delivery. It leverages the technologies and use cases of the first two but also pushes the envelope with regards to future directions of these other two areas.

    Q: In a business, Web 2.0 is mostly like action logs. How can we leverage the official process practice versus the logs of a recent action? Example: a system configuration modified last night on a call-out versus the official practice that everybody would use in the morning.
    A: The key thing to remember is that most Web 2.0 actions / activity streams today are based on collaboration and communication type actions, at least with public social sites like Facebook and Twitter. What we're delivering as part of the WebCenter Suite are not just these types of activities but also enterprise application activities. These enterprise application activities come from different application modules: purchasing, HR, order entry, sales opportunity, etc. The actions within these systems are normally tied to a business object or process: purchase order/customer, employee or department, customer and supplier, customer and product, respectively. Therefore, the activities, or "logs" as you name them, can be "typed" so that as a viewer you can filter or decide to see only certain types of information. In your example, you could have a view that showed only recent "configuration" changes, and this could be right next to a view that showed the items to be watched every morning.

    Q: It's great to hear about customers using the software, but is there any plan for future webinars to show what the products/installs look like? That would be very helpful.
    A: We don't have a webinar planned to show off the install process. However, we have a viewlet that's posted on Oracle Technology Network. You can see it here: http://www.oracle.com/technetwork/testcontent/wcs-install-098014.html
    And we've got excellent documentation that walks you through the steps here: http://download.oracle.com/docs/cd/E14571_01/install.1111/e12001/install.htm
    And there's a whole set of demos and examples of what WebCenter can do at this URL: http://www.oracle.com/technetwork/middleware/webcenter/release11-demos-097468.html

    Q: How do you anticipate managing metadata across the enterprise to make content findable?
    A: We need to first make sure we are all talking about the same thing when we use a word like "metadata". Here's why:
    - For a developer, metadata means information that describes key elements of the portal or application and what the portal or application can do.
    - For content systems, metadata means key terms that provide a taxonomy or folksonomy about the information that is being indexed, ordered, and managed.
    - For business intelligence systems, metadata means key terms that provide labels to groups of data that most non-mathematicians need to understand.
    - And for SOA, metadata means labels for parts of the processes that business owners should understand and that connect to development terminology.
    There are also additional requirements for metadata to be available to the team building these new solutions, as well as requirements to make this metadata available to the running system. These requirements are often separated as "design time" and "run time" respectively. So clearly, a general goal of managing metadata across the enterprise is very challenging. We've invested a huge amount of resources in Oracle Metadata Services (MDS) to be able to provide a more generic system for all of these elements. No other vendor has anything like this technology foundation in their products. This provides a huge benefit to our customers, as they will now be able to find content, processes, people, and information from a common set of search interfaces with consistent enterprise-wide results.

    Q: Can you give your definition of terms as to document and content, please?
    A: Content applies to a broad category of information, from Word documents, presentations, and reports through attachments to invoices and/or purchase orders. Content is essentially any type of digital asset, including images, video, and voice. A document is just one type of content.

    Q: Do you have special integration tools to realize an interaction between UCM and WebCenter Spaces/Services?
    A: Yes, we've dedicated a whole team of engineers to exploit the key features of Oracle UCM within WebCenter. While ensuring that WebCenter can connect to other non-Oracle systems, we've made sure that with the combined set of Oracle technology, no other solution can match the combined power and integration. This is part of the Oracle Fusion Middleware strategy, which is to provide best-in-class capabilities for Content and Portals. When combined, the synergy between the two products enables users to quickly add capabilities when they are needed. For example, simple document sharing is part of the combined product offering, but if legal discovery or archiving is required, the Oracle UCM product includes these capabilities, which can be quickly added. There's no need to move content around or add another system to support this; it's just a feature that gets turned on within Oracle UCM.

    Q: All customers have some interaction with their applications and have many older versions. How do you see some of these new Enterprise 2.0 capabilities adding value to existing enterprise application deployments?
    A: Just as Service Oriented Architectures allowed the processes of different application systems to work together, there's a need for a similar approach with regards to these Enterprise 2.0 capabilities. Oracle WebCenter is built on a core architecture that allows for SOA of these Enterprise 2.0 services, so that one set of scalable services can be used and integrated directly into any type of application. In this way, users can get immediate value out of the Enterprise 2.0 capabilities without having to wait for the next major release or upgrade. These centrally managed WebCenter services expose a set of standard interfaces that make it extremely easy to add them into existing applications, no matter what technology the application has been implemented in.

    Q: We've heard about Oracle's next-generation applications, called "Fusion Applications". Can you tell me how all this works together?
    A: Oracle WebCenter powers the core collaboration and social computing services found within Fusion Applications. It is the core user experience technology with which all the application screens have been implemented. And the core concept of task flows allows all the Fusion Applications modules to be adaptable and composable by business users and IT without needing to be a professional developer. Oracle WebCenter is at the heart of the new Fusion Applications. In addition, the same patterns and technologies are now being added to the existing applications, including JD Edwards, Siebel, PeopleSoft, and eBusiness Suite. The core technology enables all these customers to have a much smoother upgrade path to Fusion Applications. They get immediate benefits of injecting new user interactions into their existing applications without having to completely move to Fusion Applications. And then, when the time comes, their users will already be well versed in how the new capabilities work.

    Q: Does any of this work with non-Oracle software? Other databases? Other application servers? etc.
    A: We have made sure that Oracle WebCenter delivers the broadest set of development choices, so that no matter what technology your developers are using, WebCenter capabilities can be quickly and easily added to the site or application. In addition, we have certified Oracle WebCenter to run against non-Oracle databases like DB2 and SQL Server. We have stated plans for certification against MySQL as well. Later in CY 2011, Oracle will provide certification on non-Oracle application servers such as WebSphere and JBoss.

    Q: How do we balance user and IT requirements in regards to Enterprise 2.0 technologies?
    A: Wrong decisions are often made because employee knowledge is not tapped efficiently, and opportunities to innovate are often missed because the right people do not work together. Collaboration amongst workers in the right business context is critical for success. While standalone Enterprise 2.0 technologies can improve collaboration for collaboration's sake, using social collaboration tools in the context of business applications and processes will improve business responsiveness and lead companies to a more competitive position. As these systems become more mission critical, it is essential that they maintain the highest level of performance and availability while scaling to support larger communities.

    Q: What are the ways in which Enterprise 2.0 can improve business responsiveness?
    A: With a wide range of Enterprise 2.0 tools in the marketplace, CIOs need to deploy solutions that will meet the requirements from users as well as address the requirements from IT. Workers want a next-generation user experience that is personalized and aggregates their daily tools and tasks, while IT needs to ensure the solution is secure, scalable, flexible, reliable, and easily integrated with existing systems. An open and integrated approach to deploying portals, content management, and collaboration can enhance your business by addressing both the needs of knowledge workers for better information and the IT mandate to conserve resources by simplifying, consolidating, and centralizing infrastructure and administration.

    Read the article

  • Java Hint in NetBeans for Identifying JOptionPanes

    - by Geertjan
    I tend to have "JOptionPane.showMessageDialogs" scattered through my code, for debugging purposes. Now I have a way to identify all of them and remove them one by one, since some of them are there for users of the application so shouldn't be removed, via the Refactoring window: Identifying instances of code that I'm interested in is really trivial: import org.netbeans.spi.editor.hints.ErrorDescription; import org.netbeans.spi.java.hints.ConstraintVariableType; import org.netbeans.spi.java.hints.ErrorDescriptionFactory; import org.netbeans.spi.java.hints.Hint; import org.netbeans.spi.java.hints.HintContext; import org.netbeans.spi.java.hints.TriggerPattern; import org.openide.util.NbBundle.Messages; @Hint( displayName = "#DN_ShowMessageDialogChecker", description = "#DESC_ShowMessageDialogChecker", category = "general") @Messages({ "DN_ShowMessageDialogChecker=Found \"ShowMessageDialog\"", "DESC_ShowMessageDialogChecker=Checks for JOptionPane.showMes" }) public class ShowMessageDialogChecker { @TriggerPattern(value = "$1.showMessageDialog", constraints = @ConstraintVariableType(variable = "$1", type = "javax.swing.JOptionPane")) @Messages("ERR_ShowMessageDialogChecker=Are you sure you need this statement?") public static ErrorDescription computeWarning(HintContext ctx) { return ErrorDescriptionFactory.forName( ctx, ctx.getPath(), Bundle.ERR_ShowMessageDialogChecker()); } } Stick the above class, which seriously isn't much code at all, in a module and run it, with this result: Bit trickier to do the fix, i.e., add a bit of code to let the user remove the statement, but I looked in the NetBeans sources and used the System.out fix, which does the same thing:  import com.sun.source.tree.BlockTree; import com.sun.source.tree.StatementTree; import com.sun.source.util.TreePath; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import org.netbeans.api.java.source.CompilationInfo; import org.netbeans.api.java.source.WorkingCopy; import org.netbeans.spi.editor.hints.ErrorDescription; import org.netbeans.spi.editor.hints.Fix; import org.netbeans.spi.java.hints.ConstraintVariableType; import org.netbeans.spi.java.hints.ErrorDescriptionFactory; import org.netbeans.spi.java.hints.Hint; import org.netbeans.spi.java.hints.HintContext; import org.netbeans.spi.java.hints.JavaFix; import org.netbeans.spi.java.hints.TriggerPattern; import org.openide.util.NbBundle.Messages; @Hint( displayName = "#DN_ShowMessageDialogChecker", description = "#DESC_ShowMessageDialogChecker", category = "general") @Messages({ "DN_ShowMessageDialogChecker=Found \"ShowMessageDialog\"", "DESC_ShowMessageDialogChecker=Checks for JOptionPane.showMes" }) public class ShowMessageDialogChecker { @TriggerPattern(value = "$1.showMessageDialog", constraints = @ConstraintVariableType(variable = "$1", type = "javax.swing.JOptionPane")) @Messages("ERR_ShowMessageDialogChecker=Are you sure you need this statement?") public static ErrorDescription computeWarning(HintContext ctx) { Fix fix = new FixImpl(ctx.getInfo(), ctx.getPath()).toEditorFix(); return ErrorDescriptionFactory.forName( ctx, ctx.getPath(), Bundle.ERR_ShowMessageDialogChecker(), fix); } private static final class FixImpl extends JavaFix { public FixImpl(CompilationInfo info, TreePath tp) { super(info, tp); } @Override @Messages("FIX_ShowMessageDialogChecker=Remove the statement") protected String getText() { return Bundle.FIX_ShowMessageDialogChecker(); } @Override protected void performRewrite(TransformationContext tc) throws Exception { 
WorkingCopy wc = tc.getWorkingCopy(); TreePath statementPath = tc.getPath(); TreePath blockPath = tc.getPath().getParentPath(); while (!(blockPath.getLeaf() instanceof BlockTree)) { statementPath = blockPath; blockPath = blockPath.getParentPath(); if (blockPath == null) { return; } } BlockTree blockTree = (BlockTree) blockPath.getLeaf(); List<? extends StatementTree> statements = blockTree.getStatements(); List<StatementTree> newStatements = new ArrayList<StatementTree>(); for (Iterator<? extends StatementTree> it = statements.iterator(); it.hasNext();) { StatementTree statement = it.next(); if (statement != statementPath.getLeaf()) { newStatements.add(statement); } } BlockTree newBlockTree = wc.getTreeMaker().Block(newStatements, blockTree.isStatic()); wc.rewrite(blockTree, newBlockTree); } } } Aside from now being able to use "Inspect & Refactor" to identify and fix all instances of JOptionPane.showMessageDialog at the same time, you can also do the fixes per instance within the editor:

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy working on developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab, creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below.

    As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications, and then overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates.

    When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running.

    I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead, they will have an incentive to help the other developers on their team get up and running. As I was using "Sharks with Lasers" for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like "Camels with Cannons" and "Honey Badgers with Homing Missiles". That gave me the idea for the teams having to choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.

    Render Challenge Rules

    In order to ensure fair play, a number of rules are imposed on the lab:
    - The class will be divided into teams, each team chooses a name.
    - The team name must consist of a ferocious animal combined with a hazardous weapon.
    - Teams can allocate as many worker roles as they can muster to the render job.
    - Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, and other foul play will result in penalties.

    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.

    Read the article

  • ODEE Green Field (Windows) Part 4 - Documaker

    - by AndyL-Oracle
    Welcome back! We're nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic and the Oracle 11g database, all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and an Oracle database, then you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation is comprised of two procedures. The first covers the installation proper: running the installer and answering some questions. This lays down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we?

    1. Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location.
    2. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next.
    3. Select your ODEE Home; inside this directory is where all the files will be deployed. For ease of support, I recommend using the default, however you can put this wherever you want. Click Next.
    4. Select the database type and database connection type. Note that the database name should match the value used for the connection type (e.g. if using SID, then the name should be IDMAKER; if using ServiceName, the name should be "idmaker.us.oracle.com"). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for the Oracle 11g Advanced Compression option, do not use this option! Terrible, terrible calamities will befall you if you do! Click Next.
    5. Enter the Documaker Admin user name (the default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to. Since this is a green field install you should be able to use the default System ID; the only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next.
    6. Enter the Assembly Line user name (the default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to. It's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so; one is to re-run the installer, and in this case you would pick a different assembly line ID and name. Click Next. Note: you can set the DB folder if needed (typically you don't; see the ODEE Installation Guide for specifics).
    7. Select the appropriate Application Server type. In this case, our green field install is going to use WebLogic. Set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria for password choices that are required by WebLogic but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password.
    8. Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (a WebLogic managed server) that hosts the JMS server, whereas prior editions place the JMS server on the AdminServer. You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to the AdminServer). You'll need to provide a login principal and credentials. For simplicity I usually make this the same as the WebLogic domain user; however, this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next.
    9. Specify the Hot Folder(s) (comma-delimited if more than one). This is the directory or directories monitored by ODEE for jobs to process. Click Next.
    10. If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here, so if the recipient clicks "reply", this is where it will go). Click Next.
    11. If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False; you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next.
    12. If you will be using Oracle UMS for sending MMS/text messages, you can enable it and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it; you can always set it later. Click Next.
    13. On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers, but for now we'll just accept the defaults. Click Next.
    14. Verify the installation details and click Install. You can save the installation into a response file if you need to (which might be useful if you want to rerun this installation in an unattended fashion).
    15. Allow the installation to progress... Click Next.
    16. You can save the response file if needed (e.g. in case you forgot to save it earlier!). Click Finish.

    That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment where we complete and verify the installation.

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim to keep this blog post updated and to add related posts to it. Since there are a lot of these out there, I link to others that have done kind of the same before me; kind of a blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions in for others to see. As an example: if I hit a stack trace I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way people that find this blog can see multiple solutions for indexed stack traces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before working from start to finish.

    The steps:
    1. Set up the project in VS2012 (msdn blog).
    2. Set up the Azure services (half of the mpspartners.com blog).
    3. Set up connection strings and configuration files (msdn blog + notes).
    4. Export certificates.
    5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Connection string error.

    1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx
    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/
    3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
    Trying to package our application at this step will generate the following warning:
    3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.
    Right-click the web role under Roles in the solution manager and choose Properties. Choose "Service configuration: Cloud". At "specify storage account credentials" we will copy/paste our account name and key from the Azure management platform. (Image 3.1)
    4. Right-click Remote Desktop Configuration, select the certificate, and export it to a file. We need to allow it in the portal manager. (Image 4.1)
    5. Now right-click the cloud project and select Package. (Image 5.1: dialogue box; image 5.2: package success.) Now copy the path to the packaged file and go to the management portal again. Click your web role and choose staging (or production). Upload. (Image 5.3) Tick the box about the single instance if that's what you want or if you don't know what it means; otherwise the following will happen (see image 5.6). (Image 5.4: dialogue box.) When you have clicked the accept button you will see the following screen with some green indicators down at the right corner; click them if you want to see status. (Image 5.5: information screen.) (Image 5.6:) "Failed to deploy application. The uploaded application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go to step 5.4. If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. Side note: a related thread suggests "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package", but in my case it was the not-so-present certificates. I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd (Images 5.7, 5.8: success! Image 5.9: nice URL and all; more on that in another blog post.)
    6. If you try to log in and get an error, many web sites suggest this is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the ordinary project in your solution. Go to the management portal and click update.

    Read the article

  • Rotating WebLogic Server logs to avoid large files using WLST.

    - by adejuanc
    By default, when WebLogic Server instances are started in development mode, the server automatically renames (rotates) its local server log file as SERVER_NAME.log.n. For the remainder of the server session, log messages accumulate in SERVER_NAME.log until the file grows to a size of 500 kilobytes. Each time the server log file reaches this size, the server renames the log file and creates a new SERVER_NAME.log to store new messages. By default, the rotated log files are numbered in order of creation as filenameNNNNN (for example, myserver.log00001), where filename is the name configured for the log file. You can configure a server instance to include a time and date stamp in the file name of rotated log files; for example, server-name-%yyyy%-%mm%-%dd%-%hh%-%mm%.log.

    By default, when server instances are started in production mode, the server rotates its server log file whenever the file grows to 5000 kilobytes in size. It does not rotate the local server log file when the server is started. For more information about changing the mode in which a server starts, see "Change to production mode" in the Administration Console Online Help.

    You can change these default settings for log file rotation. For example, you can change the file size at which the server rotates the log file, or you can configure a server to rotate log files based on a time interval. You can also specify the maximum number of rotated files that can accumulate. After the number of log files reaches this number, subsequent file rotations delete the oldest log file and create a new log file with the latest suffix. Note: WebLogic Server sets a threshold size limit of 500 MB before it forces a hard rotation to prevent excessive log file growth.

    To rotate via WLST:

    # invoke WLST
    C:\> java weblogic.WLST

    # connect WLST to an Administration Server
    wls:/offline> connect('username', 'password')

    # navigate to the ServerRuntime MBean hierarchy
    wls:/mydomain/serverConfig> serverRuntime()
    wls:/mydomain/serverRuntime> ls()

    # navigate to the server LogRuntimeMBean
    wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
    wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
    -r--   Name               myserver
    -r--   Type               LogRuntime
    -r-x   forceLogRotation   java.lang.Void :

    # force the immediate rotation of the server log file
    wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
    wls:/mydomain/serverRuntime/LogRuntime/myserver>

    The server immediately rotates the file and prints the following message:

    <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170017> <The log file C:\diablodomain\servers\myserver\logs\myserver.log will be rotated. Reopen the log file if tailing has stopped. This can happen on some platforms like Windows.>
    <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170018> <The log file has been rotated to C:\diablodomain\servers\myserver\logs\myserver.log00001. Log messages will continue to be logged in C:\diablodomain\servers\myserver\logs\myserver.log.>

    To specify the location of the archived log files, the following command sets the directory location using the -Dweblogic.log.LogFileRotationDir Java startup option:

    java -Dweblogic.log.LogFileRotationDir=c:\foo -Dweblogic.management.username=installadministrator -Dweblogic.management.password=installadministrator weblogic.Server

    For more information, read the following documentation:
    - Using the WebLogic Scripting Tool: http://download.oracle.com/docs/cd/E13222_01/wls/docs103/config_scripting/using_WLST.html
    - Configuring WebLogic Logging Services: http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html
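    As a companion to forcing rotation, here is a minimal WLST sketch of my own (not from the original post) for changing the rotation settings themselves on the server's LogMBean; the admin URL, server name, and values are illustrative assumptions:

    # connect to the Administration Server and start an edit session
    connect('username', 'password', 't3://localhost:7001')
    edit()
    startEdit()

    # navigate to the Log configuration MBean for the server
    cd('/Servers/myserver/Log/myserver')

    # rotate by size once the file reaches 5000 KB, and keep at most 10 rotated files
    cmo.setRotationType('bySize')
    cmo.setFileMinSize(5000)
    cmo.setNumberOfFilesLimited(true)
    cmo.setFileCount(10)

    # save and activate the changes
    save()
    activate()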

    Read the article

  • Strategies for invoking subclass methods on generic objects

    - by Brad Patton
    I've run into this issue in a number of places and have solved it a bunch of different ways, but I'm looking for other solutions or opinions on how to address it. The scenario is when you have a collection of objects all based off of the same superclass, but you want to perform certain actions based only on instances of some of the subclasses.

    One contrived example of this might be an HTML document made up of elements. You could have a superclass named HTMLElement and subclasses of Headings, Paragraphs, Images, Comments, etc. To invoke a common action across all of the objects, you declare a virtual method in the superclass and specific implementations in all of the subclasses. So to render the document you could loop over all of the different objects in the document and call a common Render() method on each instance. The case I'm asking about is where, again using the same generic objects in the collection, I want to perform different actions for instances of a specific subclass (or set of subclasses). For example (and remember this is just an example), when iterating over the collection, elements with external links need to be downloaded (e.g. JS, CSS, images) and some might require additional parsing (JS, CSS). What's the best way to handle those special cases?

    Some of the strategies I've used or seen used include the following (a sketch contrasting the first and last follows this list):

    1. Virtual methods in the base class. In the base class you have a virtual LoadExternalContent() method that does nothing, and then you override it in the specific subclasses that need to implement it. The benefit is that in the calling code there is no object testing; you send the same message to each object and let most of them ignore it. Two downsides that I can think of: first, it can make the base class very cluttered with methods that have nothing to do with most of the hierarchy. Second, it assumes all of the work can be done in the called method and doesn't handle the case where there might be additional context-specific actions in the calling code (i.e. you want to do something in the UI and not the model).

    2. Methods on the class to uniquely identify the objects. This could include methods like ClassName(), which returns a string with the class name, or other return values like enums or booleans (IsImage()). The benefit is that the calling code can use if or switch statements to filter objects and perform class-specific actions. The downside is that for every new class you need to implement these methods, which can look cluttered. Also, performance could be worse than some of the other options.

    3. Language features to identify objects. This includes reflection and language operators to identify the objects. For example, in C# there is the is operator, which returns true if the instance matches the specified class. The benefit is no additional code to implement in your object hierarchy. The only downsides seem to be the inability to use something like a switch statement, and the fact that your calling code is a little more cluttered.

    Are there other strategies I am missing? Thoughts on best approaches?
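    To make the trade-off concrete, here is a minimal C# sketch of my own (HtmlElement and the subclasses are hypothetical, not from any real library) contrasting strategy 1 with strategy 3:

    using System;
    using System.Collections.Generic;

    abstract class HtmlElement
    {
        // Strategy 1: a common action every subclass implements.
        public abstract void Render();

        // Strategy 1: a no-op virtual that only the subclasses needing it override.
        public virtual void LoadExternalContent() { }
    }

    class Paragraph : HtmlElement
    {
        public override void Render() { Console.WriteLine("<p>...</p>"); }
    }

    class Image : HtmlElement
    {
        public override void Render() { Console.WriteLine("<img/>"); }
        public override void LoadExternalContent() { Console.WriteLine("downloading image..."); }
    }

    static class Demo
    {
        static void Main()
        {
            List<HtmlElement> document = new List<HtmlElement> { new Paragraph(), new Image() };

            foreach (HtmlElement element in document)
            {
                element.Render();

                // Strategy 1: send the same message to every object; most ignore it.
                element.LoadExternalContent();

                // Strategy 3: type-test in the calling code for context-specific work
                // (e.g. something that belongs in the UI rather than the model).
                if (element is Image)
                {
                    Image image = (Image)element;
                    Console.WriteLine("updating UI for " + image.GetType().Name);
                }
            }
        }
    }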

    Read the article

  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of Distributed Computing, but I've had a few friends that have been surprised by a bill lately, and I wanted to quickly address at least one aspect of it.

    Most folks are used to buying software and owning it outright, like buying a car: we pay a lot for the car, and then we use it whenever we want. We think of "cloud" services as a taxi: we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel.

    When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of it as checking into a hotel room: you get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just as it states in the Azure Portal, so in effect there is a cost for the service and then a cost to use it, like water or power or any other utility.

    Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on, so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account.

    I understand the frustration. Although we have all this spelled out in the sign-up area, not everyone has the time to read through all that. I get that. So why not make this easier? As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As for being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code, and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do.

    By the way, you don't have to cancel your whole account to stop being billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI: you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.

    Read the article

  • NHibernate Pitfalls: Custom Types and Detecting Changes

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports the declaration of properties of user-defined types, that is, not entities, collections, or primitive types. These are used for mapping database columns, of any type, into a different type, which may not even be an entity; think, for example, of a custom user type that converts a BLOB column into an Image. User types must implement the interface NHibernate.UserTypes.IUserType. This interface specifies an Equals method that is used for comparing two instances of the user type. If this method returns false, the entity is marked as dirty and, when the session is flushed, will trigger an UPDATE. So, in your custom user type, you must implement this carefully so that the entity is not mistakenly considered changed. For example, you can cache the original column value inside of it, and compare it with the one in the other instance. Let's see an example implementation of a custom user type that converts a Byte[] from a BLOB column into an Image:

    [Serializable]
    public sealed class ImageUserType : IUserType
    {
        private Byte[] data = null;

        public ImageUserType()
        {
            this.ImageFormat = ImageFormat.Png;
        }

        public ImageFormat ImageFormat
        {
            get;
            set;
        }

        public Boolean IsMutable
        {
            get { return (true); }
        }

        public Object Assemble(Object cached, Object owner)
        {
            return (cached);
        }

        public Object DeepCopy(Object value)
        {
            return (value);
        }

        public Object Disassemble(Object value)
        {
            return (value);
        }

        public new Boolean Equals(Object x, Object y)
        {
            return (Object.Equals(x, y));
        }

        public Int32 GetHashCode(Object x)
        {
            return ((x != null) ? x.GetHashCode() : 0);
        }

        public override Int32 GetHashCode()
        {
            return ((this.data != null) ? this.data.GetHashCode() : 0);
        }

        public override Boolean Equals(Object obj)
        {
            ImageUserType other = obj as ImageUserType;

            if (other == null)
            {
                return (false);
            }

            if (Object.ReferenceEquals(this, other) == true)
            {
                return (true);
            }

            return (this.data.SequenceEqual(other.data));
        }

        public Object NullSafeGet(IDataReader rs, String[] names, Object owner)
        {
            Int32 index = rs.GetOrdinal(names[0]);
            Byte[] data = rs.GetValue(index) as Byte[];

            this.data = data as Byte[];

            if (data == null)
            {
                return (null);
            }

            using (MemoryStream stream = new MemoryStream(this.data ?? new Byte[0]))
            {
                return (Image.FromStream(stream));
            }
        }

        public void NullSafeSet(IDbCommand cmd, Object value, Int32 index)
        {
            if (value != null)
            {
                Image data = value as Image;

                using (MemoryStream stream = new MemoryStream())
                {
                    data.Save(stream, this.ImageFormat);
                    value = stream.ToArray();
                }
            }

            (cmd.Parameters[index] as DbParameter).Value = value ?? DBNull.Value;
        }

        public Object Replace(Object original, Object target, Object owner)
        {
            return (original);
        }

        public Type ReturnedType
        {
            get { return (typeof(Image)); }
        }

        public SqlType[] SqlTypes
        {
            get { return (new SqlType[] { new SqlType(DbType.Binary) }); }
        }
    }

    In this case, we need to cache the original Byte[] data because it's not easy to compare two Image instances, unless, of course, they are the same.
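    As a usage note of my own (the entity and column names here are hypothetical): hooking the user type up is just a matter of referencing it in the mapping, for example in an hbm.xml file:

    <class name="Person" table="PERSON">
      <id name="Id" />
      <!-- BLOB column surfaced to the domain model as a System.Drawing.Image -->
      <property name="Photo" column="PHOTO" type="MyApp.ImageUserType, MyApp" />
    </class>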

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let's continue with TPL Dataflow:

    - Asynchronous functions
    - TPL Dataflow
    - Task-based asynchronous pattern

    Part II: TPL Dataflow

    Definition (quoting the Async CTP doc): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#."

    This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications". Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? etc.

    Dataflow Blocks

    TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups: buffering blocks, executor blocks, and joining blocks. Think of them as electronic circuitry components :)..

    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item).
    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to any number of consumers). The developer can provide a function to make a copy of the data if necessary.
    3. WriteOnceBlock<T>: stores only one value, and once it's been set, it can never be replaced or overwritten (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.
    4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (the degree of parallelism).
    5. TransformBlock<TInput, TOutput>: an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order.
    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input.
    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).
    8. JoinBlock<T1, T2, ...>: generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.).
    9. BatchJoinBlock<T1, T2, ...>: aggregates tuples of collections. It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>).

    Next time I will show some examples of usage for each TDF block.

    * Images taken from Microsoft's Async CTP documentation.
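    Until that follow-up arrives, here is a minimal sketch of my own (not from the article) wiring two of the blocks above into a pipeline, assuming the Async CTP's System.Threading.Tasks.Dataflow assembly is referenced:

    using System;
    using System.Threading.Tasks.Dataflow;

    class DataflowSketch
    {
        static void Main()
        {
            // TransformBlock squares each input; ActionBlock consumes each result.
            var square = new TransformBlock<int, int>(n => n * n);
            var print = new ActionBlock<int>(n => Console.WriteLine(n));

            // Wire the pipeline: outputs of square flow into print.
            square.LinkTo(print);

            for (int i = 1; i <= 5; i++)
            {
                square.Post(i); // post data into the head of the pipeline
            }

            square.Complete();        // signal that no more input is coming
            square.Completion.Wait(); // wait for the transform stage to drain
            print.Complete();
            print.Completion.Wait();  // wait for all prints to finish
        }
    }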

    Read the article

  • Optimize php-fpm and varnish for a powerfull server

    - by Jim
    My setup is an Intel® Core™ i7-2600 with 16 GB DDR3 RAM, running varnish + nginx + php-fpm + APC for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that after 55 hits per second (according to blitz.io), varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all times remains 10GB+. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts, but after that the CPU usage goes to 100% and it stops responding. Can you help me optimize it to handle more? As I understand it, nginx has nothing to do over here, so I don't include its config.

    php-fpm config:

    listen = /tmp/php5-fpm.sock
    listen.allowed_clients = 127.0.0.1
    user = nginx
    group = nginx
    pm = dynamic
    pm.max_children = 150
    pm.start_servers = 7
    pm.min_spare_servers = 2
    pm.max_spare_servers = 15
    pm.max_requests = 500
    slowlog = /var/log/php-fpm/www-slow.log
    php_admin_value[error_log] = /var/log/php-fpm/www-error.log
    php_admin_flag[log_errors] = on

    APC config:

    extension = apc.so
    apc.enabled=1
    apc.shm_size=512MB
    apc.num_files_hint=0
    apc.user_entries_hint=0
    apc.ttl=7200
    apc.use_request_time=1
    apc.user_ttl=7200
    apc.gc_ttl=3600
    apc.cache_by_default=1
    apc.filters
    apc.mmap_file_mask=/tmp/apc.XXXXXX
    apc.file_update_protection=2
    apc.enable_cli=0
    apc.max_file_size=1M
    apc.stat=1
    apc.stat_ctime=0
    apc.canonicalize=0
    apc.write_lock=1
    apc.report_autofilter=0
    apc.rfc1867=0
    apc.rfc1867_prefix=upload_
    apc.rfc1867_name=APC_UPLOAD_PROGRESS
    apc.rfc1867_freq=0
    apc.rfc1867_ttl=3600
    apc.include_once_override=0
    apc.lazy_classes=0
    apc.lazy_functions=0
    apc.coredump_unmap=0
    apc.file_md5=0
    apc.preload_path

    Varnish VCL:

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
        .connect_timeout = 6s;
        .first_byte_timeout = 6s;
        .between_bytes_timeout = 60s;
    }

    acl purgehosts {
        "localhost";
        "127.0.0.1";
    }

    # Called after a document has been successfully retrieved from the backend.
    sub vcl_fetch {
        # Uncomment to make the default cache "time to live" 5 minutes, handy
        # but it may cache stale pages unless purged. (TODO)
        # By default Varnish will use the headers sent to it by Apache (the backend server)
        # to figure out the correct TTL.
        # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess
        set beresp.ttl = 24h;

        # Strip cookies for static files and set a long cache expiry time.
        if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
            unset beresp.http.set-cookie;
            set beresp.ttl = 24h;
        }

        # If WordPress cookies found then page is not cacheable
        if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
            # set beresp.cacheable = false; # versions less than 3
            # beresp.ttl > 0 is cacheable, so 0 will not be cached
            set beresp.ttl = 0s;
        } else {
            # set beresp.cacheable = true;
            set beresp.ttl = 24h; # cache for 24hrs
        }

        # Varnish determined the object was not cacheable
        # if ttl is not > 0 seconds then it is cacheable
        if (!beresp.ttl > 0s) {
            # set beresp.http.X-Cacheable = "NO:Not Cacheable";
        } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
            # You don't wish to cache content for logged in users
            set beresp.http.X-Cacheable = "NO:Got Session";
            return(hit_for_pass); # previously just pass, but changed in v3+
        } else if (beresp.http.Cache-Control ~ "private") {
            # You are respecting the Cache-Control=private header from the backend
            set beresp.http.X-Cacheable = "NO:Cache-Control=private";
            return(hit_for_pass);
        } else if (beresp.ttl < 1s) {
            # You are extending the lifetime of the object artificially
            set beresp.ttl = 300s;
            set beresp.grace = 300s;
            set beresp.http.X-Cacheable = "YES:Forced";
        } else {
            # Varnish determined the object was cacheable
            set beresp.http.X-Cacheable = "YES";
            if (beresp.status == 404 || beresp.status >= 500) {
                set beresp.ttl = 0s;
            }
            # Deliver the content
            return(deliver);
        }
    }

    sub vcl_hash {
        # Each cached page has to be identified by a key that unlocks it.
        # Add the browser cookie only if a WordPress cookie found.
        if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
            # set req.hash += req.http.Cookie;
            hash_data(req.http.Cookie);
        }
    }

    # vcl_recv is called whenever a request is received
    sub vcl_recv {
        # remove ?ver=xxxxx strings from urls so css and js files are cached.
        # Watch out when upgrading WordPress, need to restart Varnish or flush cache.
        set req.url = regsub(req.url, "\?ver=.*$", "");

        # Remove "replytocom" from requests to make caching better.
        set req.url = regsub(req.url, "\?replytocom=.*$", "");

        remove req.http.X-Forwarded-For;
        set req.http.X-Forwarded-For = client.ip;

        # Exclude this site because it breaks if cached
        if (req.http.host == "sr.ituts.gr") {
            return(pass);
        }

        # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
        set req.grace = 120s;

        # Strip cookies for static files:
        if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
            unset req.http.Cookie;
            return(lookup);
        }

        # Remove has_js and Google Analytics __* cookies.
        set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");

        # Remove a ";" prefix, if present.
        set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");

        # Remove empty cookies.
        if (req.http.Cookie ~ "^\s*$") {
            unset req.http.Cookie;
        }

        if (req.request == "PURGE") {
            if (!client.ip ~ purgehosts) {
                error 405 "Not allowed.";
            }
            # in previous versions ban() was purge()
            ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
            error 200 "Purged.";
        }

        # Pass anything other than GET and HEAD directly.
        if (req.request != "GET" && req.request != "HEAD") {
            return(pass);
        }
        /* We only deal with GET and HEAD by default */

        # remove the comments cookie to make caching better.
        set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

        # never cache the admin pages, or the server-status page, or your feed? you may want to.. i don't
        if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
            return(pipe);
        }

        # don't cache authenticated sessions
        if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
            return(lookup);
        }

        # don't cache ajax requests
        if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" ||
            req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
            return(pass);
        }

        return(lookup);
    }

    Varnish daemon options:

    DAEMON_OPTS="-a :80 \
                 -T 127.0.0.1:6082 \
                 -f /etc/varnish/ituts.vcl \
                 -u varnish -g varnish \
                 -S /etc/varnish/secret \
                 -p thread_pool_add_delay=2 \
                 -p thread_pools=8 \
                 -p thread_pool_min=100 \
                 -p thread_pool_max=1000 \
                 -p session_linger=50 \
                 -p session_max=150000 \
                 -p sess_workspace=262144 \
                 -s malloc,5G"

    I'm not sure where to start. Should I optimize php-fpm first and then move on to varnish, or is php-fpm at its max right now, meaning I should start looking for the problem in varnish?

    Read the article

  • Box2d: Set active and inactive

    - by Rosarch
    I'm writing an XNA game in C#, using Box2dx, the XNA port of Box2d. Entities like trees or zombies are represented as GameObjects. GameObjectManager adds and removes them from the game world:

        /// <summary>
        /// Does the work of removing the GameObject.
        /// </summary>
        /// <param name="controller">The GameObject to be removed.</param>
        private void removeGameObjectFromWorld(GameObjectController controller)
        {
            controllers.Remove(controller);
            worldState.Models.Remove(controller.Model);
            controller.Model.Body.SetActive(false);
        }

        public void addGameObjectToWorld(GameObjectController controller)
        {
            controllers.Add(controller);
            worldState.Models.Add(controller.Model);
            controller.Model.Body.SetActive(true);
        }

    controllers is a collection of GameObjectController instances, and worldState.Models is a collection of GameObjectModel instances. When I remove GameObjects from Box2d this way, this method gets called:

        void IContactListener.EndContact(Contact contact)
        {
            GameObjectController collider1 = worldQueryUtils.gameObjectOfBody(contact.GetFixtureA().GetBody());
            GameObjectController collider2 = worldQueryUtils.gameObjectOfBody(contact.GetFixtureB().GetBody());
            collisionRecorder.removeCollision(collider1, collider2);
        }

    worldQueryUtils:

        // this could be cached if we knew bodies never change
        public GameObjectController gameObjectOfBody(Body body)
        {
            return worldQueryEngine.GameObjectsForPredicate(x => x.Model.Body == body).Single();
        }

    This method throws an error:

        System.InvalidOperationException was unhandled
        Message="Sequence contains no elements"
        Source="System.Core"
        StackTrace:
            at System.Linq.Enumerable.Single[TSource](IEnumerable`1 source)
            at etc

    Why is this happening, and what can I do to avoid it? This method had been called many times without problems before body.SetActive() was called; I feel that call may be what's messing it up.
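    One direction worth trying (a sketch only; the queue and the *OrNull method below are invented names, not from the question): Box2d generally must not be mutated from inside contact callbacks, and here EndContact fires for a body whose controller has already been pulled out of the collections, which is exactly when Single() finds an empty sequence. Deferring removals until after the step, and making the lookup tolerant, addresses both problems:

        using System.Collections.Generic;
        using System.Linq;

        // Inside GameObjectManager: queue removals instead of deactivating
        // bodies while Box2d may still be stepping or firing callbacks.
        private readonly Queue<GameObjectController> pendingRemovals =
            new Queue<GameObjectController>();

        public void queueRemovalFromWorld(GameObjectController controller)
        {
            pendingRemovals.Enqueue(controller);
        }

        // Call once per frame, after world.Step(...) has returned.
        public void processPendingRemovals()
        {
            while (pendingRemovals.Count > 0)
            {
                removeGameObjectFromWorld(pendingRemovals.Dequeue());
            }
        }

        // Inside worldQueryUtils: a lookup that tolerates bodies whose
        // controller has already been removed.
        public GameObjectController gameObjectOfBodyOrNull(Body body)
        {
            return worldQueryEngine
                .GameObjectsForPredicate(x => x.Model.Body == body)
                .SingleOrDefault();
        }

    With the tolerant lookup, EndContact can simply skip the pair when either lookup returns null.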

    Read the article

  • AWS Amazon EC2 - password-less SSH login for non-root users using PEM keypairs

    - by Mark White
    We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL), and we've set up scripts that allow new slave instances to be auto-included in the clusters: they update their IPs in config files held on S3, then SSH to the master instance to kick it into downloading the revised config and restarting the service. It's all working nicely, but in testing we're using our master .pem for SSH, which means it needs to be stored on an instance. Not good. I want a non-root user that can use an AWS keypair and has sudo access to run the download-config-and-restart scripts, but nothing else. rbash seems to be the way to go, but I understand this can be insecure unless set up correctly. So what security holes are there in this approach:

    - New AWS keypair created for user.pem (not really called 'user')
    - New user on instances: user
    - Public key for user is in ~user/.ssh/authorized_keys (taken by creating a new instance with user.pem and copying it from /root/.ssh/authorized_keys)
    - Private key for user is in ~user/.ssh/user.pem
    - 'user' has a login shell of /home/user/bin/rbash
    - ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo
    - /etc/sudoers has the entry "user ALL=(root) NOPASSWD: ..."
    - ~user/.bashrc sets PATH to /home/user/bin/ only
    - ~user/.inputrc has 'set disable-completion on' to prevent double-tabbing from 'sudo /' to find paths
    - ~user/ -R is owned by root with read-only access for user, except for ~user/.ssh, which has write access for user (for writing known_hosts), and ~user/bin/*, which are +x
    - Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@ sudo '

    Any thoughts would be welcome. Mark...

    Read the article

  • Algorithm design, "randomising" timetable schedule in Python although open to other languages.

    - by S1syphus
    Before I start I should add that I am a musician, not a native programmer; this was undertaken to make my life easier. Here is the situation: at work I'm given a new CSV file each time, which contains a list of sound files, their lengths, and the minimum total amount of time each must be played. From this file I create a playlist of exactly 60 minutes. Each sample must be played its minimum number of times, with the instances spread out from each other, so there is never a period where one sound is played twice in a row or in close proximity to itself. Secondly, if the minimum instances of each sound have been used and there is still time within the 60 minutes, the remaining time needs to be filled with more sounds until 60 minutes is reached, while still adhering to the rule above. The smallest possible duration is 15 seconds, and all durations are multiples of 15 seconds. Here is what I came up with in Python and the problems I'm having with it: as one user said, it's buggy due to the random library used in it. So I'm guessing a total rethink is on the table, and here is where I need your help. What is the best way to solve this? I have had a brief look at things like knapsack and bin-packing algorithms; while both are relevant, neither is quite appropriate, and they are maybe a bit beyond me.
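    For what it's worth, here is one greedy way to approach this, sketched in C# since the question is open to other languages (the Sound type and all names are invented for illustration): fill 15-second slots one at a time, always choosing the sound with the most remaining required plays that was not just played, and keep applying the same rule to pad the hour once all minimums are met:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Sound
        {
            public string Name;
            public int SlotsPerPlay;   // duration, in 15-second slots
            public int RequiredPlays;  // minimum plays still owed
        }

        static class Scheduler
        {
            // totalSlots = 240 for a 60-minute playlist of 15-second slots.
            public static List<string> Build(List<Sound> sounds, int totalSlots)
            {
                var playlist = new List<string>();
                string previous = null;
                int used = 0;
                while (used < totalSlots)
                {
                    // Prefer sounds that still owe plays; once all minimums
                    // are met, the same rule pads out the remaining time.
                    // Never repeat the sound that was just played.
                    Sound candidate = sounds
                        .Where(s => s.Name != previous &&
                                    used + s.SlotsPerPlay <= totalSlots)
                        .OrderByDescending(s => s.RequiredPlays)
                        .FirstOrDefault();
                    if (candidate == null)
                    {
                        break;  // nothing fits in the remaining time
                    }
                    playlist.Add(candidate.Name);
                    candidate.RequiredPlays = Math.Max(0, candidate.RequiredPlays - 1);
                    used += candidate.SlotsPerPlay;
                    previous = candidate.Name;
                }
                return playlist;
            }
        }

    This ignores several edge cases (a single-sound input, minimums that cannot all fit in 60 minutes), but the "most-owed first, never the same twice" rule naturally spreads each sound out without any randomness.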

    Read the article

  • Self hosted WCF ServiceHost/WebServiceHost Concurrency/Performance Design Options (.NET 3.5)

    - by Kyle
    So I'll be providing a few functions via a self-hosted (in a WindowsService) WebServiceHost (not sure how to process HTTP GET/POST with ServiceHost), one of which may be called a large amount of the time. This function will also rely on a connection in the AppDomain (hosted by the WindowsService so it can stay alive over multiple requests). I have the following concerns and would be oh so thankful for any input/thoughts/comments:

    - Concurrent access: how does the WebServiceHost handle a bunch of concurrent requests? Are they queued and processed sequentially, or are new instances of the contracts automagically created?
    - WebServiceHost / WindowsService communication: I need some form of communication from the WebServiceHost to the hosting WindowsService for things like requesting a new session if one does not exist. Perhaps implementing a class which extends the WebServiceHost with events which the WindowsService subscribes to... (unless there is another way I can set off an event in the WindowsService when a request is made...)
    - Multiple WebServiceHosts or Contracts: would it give any real performance gain to be running multiple WebServiceHost instances in different threads (one per endpoint, perhaps)? A better understanding of the first point would probably help here.
    - WSDL: I'm not sure why (probably just need to do more reading), but I'm not sure how to get the WebServiceHost base endpoint to respond with a WSDL document describing the available contract. Not required, as all the operations will be done via GET requests which will not likely change, but it would be nice to have...

    That's about it for the moment ;) I've been reading a lot on WCF and wish I'd gotten into it long ago, but I'm definitely still learning.
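    On the first concern, the usual levers are InstanceContextMode and ConcurrencyMode. A hedged sketch (the contract, service, and address below are placeholders, not from the question): with a singleton instance and ConcurrencyMode.Multiple, one service object serves all requests in parallel, so any shared state such as a long-lived connection has to be guarded explicitly:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IAlertService
        {
            [OperationContract]
            [WebGet(UriTemplate = "status")]
            string GetStatus();
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class AlertService : IAlertService
        {
            private readonly object connectionLock = new object();

            public string GetStatus()
            {
                lock (connectionLock)
                {
                    // touch the shared, long-lived connection here
                    return "ok";
                }
            }
        }

        // Hosting inside the Windows Service's OnStart:
        // var host = new WebServiceHost(new AlertService(),
        //                               new Uri("http://localhost:8000/alerts"));
        // host.Open();

    With PerCall instancing instead, a fresh service object is created per request, which sidesteps the locking but makes a shared, long-lived connection harder to arrange.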

    Read the article

  • CKEditor instance already exists

    - by jackboberg
    I am using jQuery dialogs to present forms (fetched via AJAX). On some forms I am using a CKEditor for the textareas. The editor displays fine on the first load. When the user cancels the dialog, I remove the contents so that they are loaded fresh on a later request. The issue is that once the dialog is reloaded, CKEditor claims the editor already exists:

        uncaught exception: [CKEDITOR.editor] The instance "textarea_name" already exists.

    The API includes a method for destroying existing editors, and I have seen people claiming this is a solution:

        if (CKEDITOR.instances['textarea_name']) {
            CKEDITOR.instances['textarea_name'].destroy();
        }
        CKEDITOR.replace('textarea_name');

    This is not working for me, as I receive a new error instead:

        TypeError: Result of expression 'i.contentWindow' [null] is not an object.

    This error seems to occur on the destroy() rather than the replace(). Has anyone experienced this and found a different solution? Is it possible to 're-render' the existing editor, rather than destroying and replacing it? UPDATE: Here is another question dealing with the same problem, but he has provided a downloadable test case.

    Read the article

  • C# Lack of Static Inheritance - What Should I Do?

    - by yellowblood
    Alright, so as you probably know, static inheritance is impossible in C#. I understand that; however, I'm stuck in the design of my program. I will try to make it as simple as possible. Let's say our code needs to manage objects representing aircraft at some airport. The requirements are as follows:

    - There are members and methods that are shared by all aircraft.
    - There are many types of aircraft, and each type may have its own extra methods and members.
    - There can be many instances of each aircraft type.
    - Every aircraft type must have a friendly name and further details for the type. For example, a class named F16 would have a static member FriendlyName with the value "Lockheed Martin F-16 Fighting Falcon".
    - Other programmers should be able to add more aircraft, and they must be forced to provide the same static details about the types they add.
    - In some GUI, there should be a way to let the user see the list of available types (with details such as FriendlyName) and add or remove instances of the aircraft, saved, let's say, to some XML file.

    So basically, if I could force inherited classes to implement static members and methods, I would force the aircraft types to expose static members such as FriendlyName. Sadly, I cannot do that. What would be the best design for this scenario?
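    One workaround that is often suggested, sketched here with invented names: move the per-type metadata out of static members and into an attribute. A missing attribute cannot be caught at compile time, but every type can be validated at startup, and a GUI can discover the full catalog via reflection:

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Class, Inherited = false)]
        public sealed class AircraftInfoAttribute : Attribute
        {
            public string FriendlyName { get; private set; }

            public AircraftInfoAttribute(string friendlyName)
            {
                FriendlyName = friendlyName;
            }
        }

        public abstract class Aircraft
        {
            // members and methods shared by all aircraft
        }

        [AircraftInfo("Lockheed Martin F-16 Fighting Falcon")]
        public class F16 : Aircraft
        {
            // F16-specific members
        }

        public static class AircraftCatalog
        {
            // Lists every concrete Aircraft subclass with its friendly name;
            // a type missing the attribute shows up as an error marker.
            public static IEnumerable<KeyValuePair<Type, string>> AllTypes()
            {
                foreach (Type t in Assembly.GetExecutingAssembly().GetTypes())
                {
                    if (t.IsAbstract || !typeof(Aircraft).IsAssignableFrom(t))
                    {
                        continue;
                    }
                    object[] attrs = t.GetCustomAttributes(typeof(AircraftInfoAttribute), false);
                    string name = attrs.Length > 0
                        ? ((AircraftInfoAttribute)attrs[0]).FriendlyName
                        : "(missing AircraftInfo)";
                    yield return new KeyValuePair<Type, string>(t, name);
                }
            }
        }

    A startup check can then throw if any concrete type lacks the attribute, which is about as close to "enforced" as C# gets without static inheritance.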

    Read the article

  • Help using mod_jk to forward to backend app server

    - by ravun
    I had mod_jk working a while ago, but after switching servers and modifying some files it no longer works. I am using mod_jk-1.2.28 with JBoss 4.2.3 as the backend. In the JBoss server.xml file I have the AJP 1.3 connector defined on port 8009, and I am binding JBoss to the server's new IP address. The app I am trying to forward to is deployed as:

        [TomcatDeployer] deploy, ctxPath=/ManualAlerts, warUrl=.../tmp/deploy/tmp8097651929280250028ManualAlertsApp.ear-contents/ManualAlerts-exp.war/

    On the web server, I have worker.properties with a worker set for the JBoss address and port 8009. The mod-jk.conf has JkMount /ManualAlerts/* worker1. Shouldn't this forward all requests to the web server with the URL http://address/ManualAlerts/ to the backend app named ManualAlerts? The mod-jk.log shows:

        [Sat Oct 31 14:19:28 2009][30709:3086014224] [error] ajp_send_request::jk_ajp_common.c (1507): (worker1) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=115)
        [Sat Oct 31 14:19:28 2009][30709:3086014224] [info] ajp_service::jk_ajp_common.c (2447): (worker1) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
        [Sat Oct 31 14:19:28 2009][30709:3086014224] [error] ajp_service::jk_ajp_common.c (2466): (worker1) connecting to tomcat failed.
        [Sat Oct 31 14:19:28 2009][30709:3086014224] [info] service::jk_lb_worker.c (1384): service failed, worker worker1 is in error state
        [Sat Oct 31 14:19:28 2009][30709:3086014224] [info] service::jk_lb_worker.c (1464): All tomcat instances are busy or in error state
        [Sat Oct 31 14:19:28 2009][30709:3086014224] [error] service::jk_lb_worker.c (1469): All tomcat instances failed, no more workers left

    Running netstat -an on the app server shows JBoss listening on 8009, and the local address is the app server's address. The mod-jk.log shows connect to (XXX.XXX.XXX.XXX:8009) failed, and the app-server address is correct there, too. I cannot figure out what's causing the issue.

    Read the article

  • Using Enterprise Library DAAB in C# Console Project

    - by Kayes
    How can I prepare the configuration settings (App.config, maybe?) needed to use the Enterprise Library Data Access Application Block in a C# console project? The following is what I am currently trying in the console project's App.config. When I call DatabaseFactory.CreateDatabase(), it throws an exception which says: "Configuration system failed to initialize"

        <configuration>
          <dataConfiguration>
            <xmlSerializerSection type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
              <enterpriseLibrary.databaseSettings xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  defaultInstance="Northwind"
                  xmlns="http://www.microsoft.com/practices/enterpriselibrary/08-31-2004/data">
                <databaseTypes>
                  <databaseType name="Sql Server" type="Microsoft.Practices.EnterpriseLibrary.Data.Sql.SqlDatabase, Microsoft.Practices.EnterpriseLibrary.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
                </databaseTypes>
                <instances>
                  <instance name="Northwind" type="Sql Server" connectionString="Northwind" />
                </instances>
                <connectionStrings>
                  <connectionString name="Northwind">
                    <parameters>
                      <parameter name="Database" value="Northwind" isSensitive="false" />
                      <parameter name="Integrated Security" value="True" isSensitive="false" />
                      <parameter name="Server" value="local" isSensitive="false" />
                      <parameter name="User ID" value="sa" isSensitive="false" />
                      <parameter name="Password" value="sa1234" isSensitive="true" />
                    </parameters>
                  </connectionString>
                </connectionStrings>
              </enterpriseLibrary.databaseSettings>
            </xmlSerializerSection>
          </dataConfiguration>
        </configuration>
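    For context, a minimal console program for this block looks like the snippet below (the names match the question's config). "Configuration system failed to initialize" is thrown while App.config itself is being parsed, typically before any Enterprise Library code runs, so the config schema, such as whether the custom section is one the configuration system has been told how to read, is usually the first thing to check; this is a hedged observation, not a confirmed diagnosis of the posted file:

        using System;
        using Microsoft.Practices.EnterpriseLibrary.Data;

        class Program
        {
            static void Main()
            {
                // The exception surfaces on this call even though the
                // failure is in reading the configuration file.
                Database db = DatabaseFactory.CreateDatabase("Northwind");
                Console.WriteLine(db.GetType().FullName);
            }
        }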

    Read the article

  • Axis2 webservice (aar archive) properties file

    - by XpiritO
    Hi there, guys. I'm currently developing a set of SOAP web services over Axis2, deployed on a clustered WebLogic 10.3.2 environment. My web services use some user settings that I want to be editable without the need to recompile and regenerate the AAR archive. With this in mind, I chose to put them in a properties file that is loaded and consumed at runtime. Unfortunately, I have some questions about this. As far as I know, to achieve what I want, the only option is to put the properties file in the ../axis2/WEB-INF/classes directory of each one of the deployments (on each WebLogic instance) I currently have in my clustered configuration, and then load the file as follows (or equivalent; this has not been verified for optimization):

        InputStreamReader fMainProp = new InputStreamReader(
                this.getClass().getResourceAsStream("myfile.properties"));
        Properties mainProp = new Properties();
        mainProp.load(fMainProp);

    This is not as practical as I wanted it to be, because each time I want to alter a setting in the properties file, I have to edit each one of the files (deployed over different WebLogic instances), and there is a high probability of modifying one of these files without modifying the others. What I would like to know is whether there is any (better) alternative to accomplish what I want, minimizing the potential configuration conflicts created by distributing and replicating the properties file across multiple WebLogic instances.

    Read the article

  • Any way for a class to prevent outside code from declaring variables of its type?

    - by supercat
    Is it possible for a class to expose a type for function returns without allowing users of that class to create variables of that type? A couple of usage scenarios:

    1. A fluent interface on a large class: a statement like "foo = bar.WithX(5).WithY(9).WithZ(19);" would be inefficient if it had to create three new instances of the class, but it could be much more efficient if WithX could create one instance and the other calls could simply reuse it.
    2. A class may wish to support a statement like "foo[19].x = 9;" even when foo itself isn't an array and does not hold the data in class instances that can be exposed to the public. One way to do that is to have foo[19] return a struct which holds a reference to 'foo' and the value '19', and has a member property 'x' whose setter calls "foo.SetXValue(19, 9);". Such a struct could have a conversion operator to convert itself to the "apparent" type of foo[19].

    In both of these scenarios, storing the value returned by a method or property into a variable and then using it more than once would cause strange behavior. It would be desirable if the designer of a class exposing such methods or properties could ensure that callers wouldn't be able to use them more than once. Is there any practical way to accomplish that?
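    For what it's worth, a sketch of the second scenario with invented names is below. One wrinkle: C# rejects an assignment to a property through a struct that was returned by value (the instance expression must be a variable), so "foo[19].x = 9;" only compiles if the proxy is a class, which also illustrates why fully preventing callers from storing the proxy is hard; the most a design can do is keep the constructor internal and the type obscure:

        public class Foo
        {
            private readonly int[] xs = new int[32];

            public CellProxy this[int index]
            {
                get { return new CellProxy(this, index); }
            }

            internal int GetXValue(int index) { return xs[index]; }
            internal void SetXValue(int index, int value) { xs[index] = value; }
        }

        public class CellProxy
        {
            private readonly Foo owner;
            private readonly int index;

            internal CellProxy(Foo owner, int index)
            {
                this.owner = owner;
                this.index = index;
            }

            public int x
            {
                get { return owner.GetXValue(index); }
                set { owner.SetXValue(index, value); }
            }

            // Conversion to the "apparent" type, for plain reads.
            public static implicit operator int(CellProxy p)
            {
                return p.owner.GetXValue(p.index);
            }
        }

        // Usage:
        //   Foo foo = new Foo();
        //   foo[19].x = 9;      // writes through to foo
        //   int y = foo[19];    // implicit conversion reads the value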

    Read the article

  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously; some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:

        /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem

    There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the number of ServerIdentity instances is proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?

    UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to:

        +1sgess5rjcrgbmp3kqr6bmv_3474.rem

    Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.
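    A hedged aside, since the question doesn't show the allocation site: "*.rem" URIs are .NET Remoting object URIs, and each MarshalByRefObject that gets marshalled acquires a ServerIdentity that lives until its lease expires or the object is explicitly disconnected. If something in the pipeline is marshalling objects per request (the class names below are placeholders), explicit cleanup looks roughly like this:

        using System;
        using System.Runtime.Remoting;

        public class SessionObject : MarshalByRefObject
        {
            // Returning null here would give the object an infinite lease,
            // so its ServerIdentity would never be released without an
            // explicit Disconnect; the default base implementation returns
            // a finite lease instead.
            public override object InitializeLifetimeService()
            {
                return base.InitializeLifetimeService();
            }
        }

        class Demo
        {
            static void Main()
            {
                SessionObject obj = new SessionObject();
                ObjRef handle = RemotingServices.Marshal(obj); // creates the identity
                Console.WriteLine(handle.URI);                 // a "*.rem"-style URI
                try
                {
                    // ... hand the ObjRef to a caller, do work ...
                }
                finally
                {
                    RemotingServices.Disconnect(obj);          // releases the identity
                }
            }
        }

    If nothing in the application marshals objects knowingly, infrastructure that crosses AppDomain boundaries (remoted delegates, sponsors, cross-domain callbacks) is the usual indirect source.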

    Read the article

  • How does UIImage work in low-memory situations?

    - by kubi
    According to the UIImage documentation: "In low-memory situations, image data may be purged from a UIImage object to free up memory on the system." Does anyone know how this works? It appears that this process is completely transparent and occurs in the background with no input from me, but I can't find any definitive documentation one way or the other. Second, will this data purge occur when the image was not loaded by me? (I'm getting the image from UIImagePicker.) Here's the situation: I'm taking a picture with the UIImagePickerController and immediately sending that image to a new UIViewController for display. Sending the raw image to the new controller crashes my app with memory warnings about 30% of the time. Resizing the image takes a few moments, time that I'd rather not spend if there's a third option available to me.

    Read the article

  • Factories, or Dependency Injection for object instantiation in WCF, when coding against an interface

    - by Saajid Ismail
    Hi. I am writing a client/server application where the client is a Windows Forms app and the server is a WCF service hosted in a Windows Service. Note that I control both sides of the application. I am trying to follow the practice of coding against an interface: i.e., I have a Shared assembly which is referenced by the client application. This project contains my WCF service contracts and the interfaces which will be exposed to clients. I am trying to expose only interfaces to the clients, so that they depend only on a contract, not on any specific implementation. One of the reasons for doing this is so that my service implementation and domain can change at any time without my having to recompile and redeploy the clients; the interfaces/contracts will in this case not change. I would only need to recompile and redeploy my WCF service. The design issue I am facing is this: on the client, how do I create new instances of objects, e.g. ICustomer, if the client doesn't know about the concrete Customer implementation? I need to create a new customer to be saved to the DB. Do I use dependency injection, or a factory class, to instantiate new objects, or should I just allow the client to create new instances of concrete implementations? I am not doing TDD, and I will typically have only one implementation of ICustomer or any other exposed interface.
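    One hedged sketch of the factory route (all names invented): ship a small factory interface in the Shared assembly alongside ICustomer, keep the concrete types in a client-side implementation assembly, and name those concrete types in exactly one composition-root line. Dependency injection frameworks automate the same wiring, but with a single implementation per interface the hand-rolled version stays tiny:

        // In the Shared assembly:
        public interface ICustomer
        {
            string Name { get; set; }
        }

        public interface ICustomerFactory
        {
            ICustomer CreateCustomer();
        }

        // In a client-side implementation assembly, swappable without
        // touching the rest of the client code:
        internal class Customer : ICustomer
        {
            public string Name { get; set; }
        }

        internal class CustomerFactory : ICustomerFactory
        {
            public ICustomer CreateCustomer()
            {
                return new Customer();
            }
        }

        // Composition root: the only place that names a concrete type.
        public static class Bootstrapper
        {
            public static readonly ICustomerFactory Customers = new CustomerFactory();
        }

        // Elsewhere in the client:
        //   ICustomer c = Bootstrapper.Customers.CreateCustomer();
        //   c.Name = "Contoso";
        //   ... hand c to whatever sends it to the service ...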

    Read the article
