Search Results

Search found 46957 results on 1879 pages for 'hello world bpel process'.


  • How to set a low process priority for everything spawned from a command prompt in XP?

    - by Binary Worrier
    As a developer, once or twice a week I run a full build on my XP dev machine. This will run at 100% CPU for 30 or 40 minutes, making my machine useless for anything other than basic browsing & email. Is there any way I can specify that a given process (i.e. a command prompt) and any process spawned by it will have a lower priority, say taking up no more than 60-70% of CPU, leaving my machine more usable? I don't mind the build taking 30 or 40% longer if I still have use of my machine while it's running. Thanks BW P.S. I'd love to be able to throw more hardware at the problem, but that isn't under my control.
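
    One sketch of an answer (a priority class does not cap CPU at a fixed percentage, but it does keep a busy build from taking time away from your foreground apps): the built-in start command can open a prompt at a lower priority class, and child processes inherit their parent's priority class by default:

        rem Everything launched from this prompt inherits its Low priority
        rem class, including the build and whatever the build spawns.
        start /low cmd.exe

        rem If Low starves the build too much, Below Normal is a middle ground.
        start /belownormal cmd.exe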

    Read the article

  • Total network data sent/received of a non-daemon Linux process?

    - by leden
    I'm looking for a simple and effective way of measuring the total bytes received/sent by a single process upon its termination. Basically, I am looking for a tool with an interface similar to time and /usr/bin/time, e.g.:

        measure-net-data <prog_to_run> <prog_args>
        Received (b): XYZ
        Sent (b): ABC

    I know that there are many tools for bandwidth/network monitoring, but as far as I can tell all of them perform the measurements in real time, which is inappropriate not only because of the overhead but also because of the inconvenience - I would need to stop the program, capture the output of the tool, and then kill it. I have seen that newer Linux kernels (2.6.20+) provide /proc/<pid>/io, which contains the information I'm looking for; however, everything under /proc/<pid> disappears when the process terminates, so I'm back to the same problem as with any network monitoring tool.
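
    One rough sketch of a workaround for the outbound half (an improvisation, not an existing tool): an iptables rule with no target does nothing but count matching traffic, and the owner module can match packets by UID, so running the program as a dedicated user gives a per-run byte counter. Inbound packets cannot be matched by owner, which is exactly the hard part. Assumes root and a hypothetical dedicated user netmeter:

        #!/bin/sh
        # measure-net-data <prog> [args...] -- counts sent bytes only.
        iptables -I OUTPUT -m owner --uid-owner netmeter   # counter-only rule
        sudo -u netmeter "$@"
        iptables -vxL OUTPUT | awk '/owner UID/ {print "Sent (b): " $2}'
        iptables -D OUTPUT -m owner --uid-owner netmeter   # clean up the rule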

    Read the article

  • Hello world/Console Project in Visual Studio 2008 64 bit

    - by grobartn
    So I am trying to run a 64-bit Hello World console program. I have Windows 7 Enterprise x64 and have installed Visual Studio 2008 with all the components needed for 64-bit. I want to create a simple console application; it is turning out to be a problem. I have a simple, standard hello world project, created using New Project - Empty Project, with a main.cpp that contains this:

        #include <iostream>
        using namespace std;

        int main() {
            cout << "howdy\n";
            return 0;
        }

    I added a new configuration to the project by clicking on Configuration Manager and adding an x64 config. It compiles. But when I try to run it, cmd.exe shoots up with the following error: "The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail. Press any key to continue . . ." Which setup step, if any, am I missing? What am I doing wrong, and how should I go about setting up a simple console hello world in the 64-bit world? Thanks for any help
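
    A hedged pointer rather than a definitive fix: with fresh VC++ builds this error is often a Debug build depending on debug CRT side-by-side assemblies that are not on the machine, so trying a Release build (or statically linking the runtime with /MT) is a quick test. The tool the error message names will say for sure; a sketch of using it (file names are just examples):

        rem Record activation-context events, reproduce the failure in another
        rem window, press Enter to stop tracing, then parse the log:
        sxstrace Trace -logfile:sxs.etl
        sxstrace Parse -logfile:sxs.etl -outfile:sxs.txt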

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All around awesome tool, but not the only gadget in your toolbox. I’m stepping down from my SQL Developer pulpit today and standing up on my philosophical soapbox. I’m frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I’m MORE than happy to do. But I’m not looking to simply change the way people interact with the Oracle database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour? If you have defined a business process around a specific tool, what happens when that tool ‘goes away’? Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you ‘can’t load your data anymore because XYZ’ isn’t valid when you could easily do that same task via SQL*Loader, Create Table As Select, or 9 other different mechanisms. Sometimes change brings opportunity for improvement in the process. Don’t be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the ‘old tool’ doesn’t always make sense. Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a ‘proper’ server process with SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don’t let old habits blind you to new solutions and possibilities. Of course I’m not going to sit here and say that our tools aren’t deficient in some areas or can’t be improved upon. But I bet if we work together we can find something that’s not only better for the business, but is also better for you. What do you ‘miss’ since you’ve started using SQL Developer as your primary Oracle database tool? I’d love to start a thread here and share ideas on how we can better serve you and your organization’s needs. The end solution might not look exactly like what you have in mind starting out, but I had no idea I’d be a Product Manager when I started college either. What can you no longer ‘do’ since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?

    Read the article

  • A training world nugget for being taught by the best

    - by Testas
    June represents an exciting time for the SQL Server community, with events all over the country in the next few months and plenty of knowledge to be gained from willing speakers enthusiastically sharing what they know. Furthermore, Paul Randal and Kimberly Tripp will be conducting their highly recommended Immersion events at London Heathrow in June. There are other big names within SQL Server teaching this year. The company I used to work for, QA, has excellent trainers teaching SQL Server whom I would always recommend. Occasionally a big-name speaker will teach a course, unknown to the community. Solid Quality Mentors is such a company, whose staff will teach at QA offices from time to time. And I know from conversation with Itzik Ben-Gan that he will be teaching Advanced T-SQL at QA offices in London during the week of Oct 3-7. A link to the course details can be found here: http://www.qa.com/training-courses/technical-it-training/microsoft/microsoft-sql-server/microsoft-sql-server-2008-and-r2/advanced-t-sql-querying,-programming-and-tuning-for-sql-server-2005--2008 So if you want to be taught by the best experts, consider checking www.QA.com for their advanced SQL courses; you could find yourself being taught by the best in the business in their field. Chris

    Read the article

  • SSH server not working (respawns until stopped)

    - by Khaled
    I have a running Ubuntu Server 10.04.1. When I tried to log in to the server via ssh, I could not; instead, I got a connection refused error. I tried to ping the machine and got a reply! So the clear reason is that the SSH daemon is stopped. After a reboot, I was able to log in to my server via ssh. Some time later, I looked at my logs in /var/log/syslog and found the following records:

        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2465) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2469) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2473) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2477) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2481) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2485) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2489) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2493) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2497) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2501) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh respawning too fast, stopped

    I searched for a similar problem/solution. Some people said this is caused by the SSH daemon trying to start before networking, and they suggest changing ListenAddress in /etc/ssh/sshd_config to 0.0.0.0. I think that is not the cause in my case, because my problem occurs after the system is up and running. Any idea what is causing this? This is Ubuntu Server, and it needs to be running and accessible remotely over SSH.
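
    A debugging suggestion rather than a confirmed fix: status 255 is just sshd's generic fatal-error exit, so the quickest way to see the real reason is to ask the daemon itself:

        # Validate sshd_config without starting the daemon
        sudo /usr/sbin/sshd -t

        # Run once in the foreground with debug output; the actual exit reason
        # (bad config option, missing host keys, port already in use, ...) is printed
        sudo /usr/sbin/sshd -d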

    Read the article

  • Potentially The World’s Filthiest PC [Video]

    - by Jason Fitzpatrick
    We’re confident we’ve seen some dusty PC cases in our day, but nothing we’ve ever cleaned produced the sheer volume of smoke-bomb-like dust this neglected tower spews out. That noise you hear, about 1:15 into the video, is the sound of the compressor motor kicking back on to top off the pressure tank: behold, a PC so filthy the compressor cleaning it out needs to take a break! [via Geeks Are Sexy]

    Read the article

  • World Class Training For Them, an Amazon Gift Certificate For You

    - by Adam Machanic
    We have just two weeks to go before Paul Randal and Kimberly Tripp touch down in the Boston area to deliver their famous SQL Server Immersions course. This is going to be a truly fantastic SQL Server learning experience, and we’re hoping a few more people will join in the fun. This is where you come in: we have a few vacant seats remaining and we need your help spreading the word. Simply tell your friends and colleagues about the course and e-mail me (adam [at] bostonsqltraining [dot] com) the names...(read more)

    Read the article

  • Podcast Show Notes: Architecture in a Post-SOA World

    - by Bob Rhubart
    All three segments of my conversation with Oracle ACE Director Hajo Normann, SOA author Jeff Davies, and enterprise architect Pat Shepherd are now available. The conversation was recorded on March 9, 2010, and covered a lot of territory, from the lingering fear of SOA among many in IT, to the misinformation behind that fear, to a discussion of the future of enterprise architecture. Listen to Part 1 | Listen to Part 2 | Listen to Part 3. If you’d like to engage any of the panelists in your own conversation, the links below will help: Hajo Normann is a SOA architect and consultant at EDS in Frankfurt. Blog | LinkedIn | Oracle Mix | Oracle ACE Profile | Books. Jeff Davies is a Senior Product Manager at Oracle, and is the primary author of The Definitive Guide to SOA: Oracle Service Bus. Homepage | Blog | LinkedIn | Oracle Mix. Pat Shepherd is an enterprise architect with the Oracle Enterprise Solutions Group. Oracle Mix | LinkedIn | Blog. New panelists and new topics coming next week, so stay tuned: RSS. Tags: oracle, otn, arch2arch, architect, community, enterprise architecture, podcast, soa, service-oriented architecture

    Read the article

  • Autoscaling in a modern world… Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don’t find the setup too practical. Using a local console application to host the Autoscaler and rules files is probably the (IMHO) least likely architecture. Far more common would be hosting in a service on premises (if you want to keep the Autoscaler local) or, most likely, in an Azure role of its own. I chose to go the Azure route. The first step was to get the rules.xml and services.xml files into the cloud. I tend to be a “one step at a time” sort of guy, so running the console application with the rules sitting in a set of Azure-hosted blobs seemed the logical first step. Here are the steps:

    1) Create a container in the storage account you wish to use. The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.

    2) Copy the two files from where you created them to your container (a scripted sketch of this step follows at the end of this post). I used the same files I had locally. I made the container public to eliminate security issues for now; in the final application, a bit of security needs to be applied (one problem at a time). The content type was set to text/xml. I found one reference claiming the importance of this step, and it makes sense.

    3) Adjust the app.config to set the location of the files. This lets you set all the storage account and key information needed to reach into the cloud from your console application. The relevant sections of your app.config will look like this:

        <rulesStores>
          <add name="Blob Rules Store"
               type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]"
               blobName="rules.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30"
               certificateThumbprint=""
               certificateStoreLocation="LocalMachine"
               checkCertificateValidity="false" />
        </rulesStores>
        <serviceInformationStores>
          <add name="Blob Service Information Store"
               type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]"
               blobName="services.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30"
               certificateThumbprint=""
               certificateStoreLocation="LocalMachine"
               checkCertificateValidity="false" />
        </serviceInformationStores>

    Once I had the files up in the sky, I renamed the local copies, just to make myself feel better about the application using the correct set of rules and services. Deploy the web role to the cloud. Once it is up and running, start the console application. You should find the application scales up and down in response to the buttons on the web site. Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information into diagnostics and into storage, and a set of discussions about certs and the role they play.
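
    The scripted sketch promised in step 2 (an aside, not from the lab; assumes the SDK 1.x Microsoft.WindowsAzure.StorageClient assembly is referenced, and the container name is just an example):

        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("autoscaling");
        container.CreateIfNotExist();                    // 1.x spelling (no trailing 's')
        foreach (var file in new[] { "rules.xml", "services.xml" })
        {
            var blob = container.GetBlobReference(file);
            blob.Properties.ContentType = "text/xml";    // the content-type step above
            blob.UploadFile(file);                       // upload from the local path
        }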

    Read the article

  • Autoscaling in a modern world… Part 1

    - by Steve Loethen
    It has been a while since I have had time to sit down and blog. I need to make sure I take the time; it helps me to focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure. Time to dig in. I wanted to explore autoscaling. Autoscaling is not a native part of Azure. The platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts to needed scaling changes. But that means you have to write all the code. Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way toward being able to autoscale without having to start from scratch. The tool set is primarily composed of an Autoscaler object that you need to host. This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs. Sounds perfect. I started with a set of HOLs that gave me a good basis to understand the mechanics. I worked through labs 1 and 2 just to get the feel, but let’s start our saga at the end of lab 3. Lab 3 ends with a web application hosted in Azure and a console app running on premises. The web app has a few buttons on it. One set adds messages to a queue; another removes them. A second set of buttons drives processor utilization to 100%. If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or to high CPU usage. We will continue our saga in the next post…
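
    For reference, “hosting” the Autoscaler amounts to only a few lines. A minimal console-host sketch (an assumption-laden sketch, not lab code: it presumes the Autoscaling Application Block assemblies are referenced and app.config names the rules and service information stores; the container call is the same one the later posts in this series use):

        using System;
        using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
        using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

        class Program
        {
            static void Main()
            {
                // Resolve the Autoscaler; it reads the rules and service
                // information stores configured in app.config.
                Autoscaler scaler =
                    EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
                scaler.Start();   // begin evaluating rules at the monitoring rate
                Console.WriteLine("Autoscaler running; press Enter to stop.");
                Console.ReadLine();
                scaler.Stop();    // stop cleanly before exiting
            }
        }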

    Read the article

  • Nokia vs. The World

    - by Michael B. McLaughlin
    I’m looking forward to the launch of the Nokia Lumia 920. Why? Well, it stacks up better than the competition for one thing. Then there’s also that security problem that certain other phones have. Mostly, though, it’s because I love my Lumia 900, and the 920, with Windows Phone 8, will be even better. Before I got my Lumia 900, I just took it as given that smart phone cameras couldn’t be good. The Lumia taught me that smart phone cameras can be good if the manufacturer treats them as an important component worth spending time and money on (rather than something consumers expect, such that they’d better throw one in). I’m extremely pleased with the quality of pictures that my Lumia 900 gives me, as well as the range of settings it provides (you can delve in to tell it a film speed, an f-stop, and a whole range of other settings), and the image stabilization features in the Lumia 920 deliver far better results than the others. Nokia has had great maps for a long time, and they continue to improve. Even better, they made a deal that puts many of their excellent maps into Windows Phone 8 itself. There are still Nokia-exclusive features such as Nokia City Lens, of course. But by giving the core OS a great set of fundamental map data and technologies, they help ensure that customers know that buying a Windows Phone 8 will give them a great map experience no matter who made the phone. I’ll be getting a 920 myself, but the HTC and Samsung devices that have been announced have some compelling features too, and it’s great to know that people who buy one of these won’t need to worry about where their maps might lead them. I’m looking forward to the NFC capabilities and Qi wireless charging my Lumia 920 will have. With the availability of DirectX and C++ programming on Windows Phone 8, I’m also excited about all the great games that will be added to the Windows Phone environment. I love my Xbox phone. I love my Office phone. I love my Facebook phone. I love my GPS phone. I love my camera phone. I love my SkyDrive phone. In short, I love my Windows Phone!

    Read the article

  • Open World 2012

    - by jeffrey.waterman
    For those of you fortunate enough to be attending this year’s Oracle OpenWorld, here is a session I recommend carving time out of your hectic schedule to attend: Public Sector General Session (session ID# GEN8536), Wednesday, October 3, 10:15 a.m.–11:15 a.m., Westin San Francisco, Metropolitan III Room. Speakers: Mark Johnson, SVP, Oracle Public Sector; Peter Doolan, CTO, Oracle Public Sector; Robert Livingston, founding partner of the Livingston Group and former member of the US Congress. Join Mark Johnson for an update on Oracle in government. Mark will be joined by Peter Doolan and Robert Livingston to discuss current topics facing governments and how Oracle can help organizations achieve their goals. I’ll be posting more interesting sessions as I peruse the conference agenda over the next week or so. If you see an interesting session, please feel free to share your suggestions in the comments section.

    Read the article

  • Autoscaling in a modern world… Part 4

    - by Steve Loethen
    Now that I have the rules and services XML files in the cloud, it is time to sever the bonds of earth and live totally in the cloud. I have to host the Autoscaling object in Azure as well, point it to the rules, tell it about the management certs, and get out of the way. A couple of questions. Where to host? The most obvious place to me was a worker role: a simple, single-purpose worker role, doing nothing but watching my app. Here are the steps I used.

    1) Created a project, separate from my web site. I wanted to be able to run the web in the cloud and the autoscaler locally for debugging purposes. Seemed like the easiest way.

    2) Added the Wasabi block to the project.

    3) Configured the settings. I used the same settings used for the console app; it points to the same web role and uses the same rules file.

    4) Made sure the certificate needed to manage the role is added to the cert store in the sky (“LocalMachine” and “My” are the default locations).

    I ran the worker role in the local fabric. It worked. I then published to the cloud and verified it worked again. Here is what my code looked like:

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections
            Trace.WriteLine("Set Default Connection Limit", "Information");
            ServicePointManager.DefaultConnectionLimit = 12;

            // Set up configuration change handling
            Trace.WriteLine("Set up configuration change code", "Information");
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

            // Get the current diagnostic configuration
            Trace.WriteLine("Get current diagnostic configuration", "Information");
            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Set the diagnostic buffer size
            Trace.WriteLine("Set Diagnostic Buffer Size", "Information");
            dmc.Logs.BufferQuotaInMB = 4;

            // Set the log transfer period
            Trace.WriteLine("Set log transfer period", "Information");
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            // Set the log filter to verbose
            Trace.WriteLine("Set log verbosity", "Information");
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            // Start the diagnostic monitor
            Trace.WriteLine("Start the diagnostic monitor", "Information");
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

            // Get the current Autoscaler from the EntLib container
            Trace.WriteLine("Get the current Autoscaler from the EntLib Container", "Information");
            scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

            // Start the autoscaler
            Trace.WriteLine("Start the autoscaler", "Information");
            scaler.Start();

            // Call the base class OnStart
            Trace.WriteLine("call the base class OnStart", "Information");
            return base.OnStart();
        }

        public override void OnStop()
        {
            // Stop the Autoscaler
            Trace.WriteLine("Stop the Autoscaler", "Information");
            scaler.Stop();
        }

    I did have to turn on some basic logging for Wasabi, which I will cover in the next post. That let me figure out that I hadn’t done the certificate step.

    Read the article

  • Java Slick2d - How to translate mouse coordinates to world coordinates

    - by Corey
    I am translating in my main class's render method. How do I get the position where my mouse actually is after I scroll the screen?

        public void render(GameContainer gc, Graphics g) throws SlickException {
            float centerX = 800 / 2;
            float centerY = 600 / 2;
            g.translate(centerX, centerY);
            g.translate(-player.playerX, -player.playerY);
            gen.render(g);
            player.render(g);
        }

        playerX = 800 / 2 - sprite.getWidth();
        playerY = 600 / 2 - sprite.getHeight();

    [Image to help with explanation]

    I tried implementing a camera but it seems no matter what I can't get the mouse position. I was told to do worldX = mouseX + camX; but it didn't work - the mouse was still off. Here is my Camera class if that helps:

        public class Camera {
            public float camX;
            public float camY;
            Player player;

            public void init() {
                player = new Player();
            }

            public void update(GameContainer gc, int delta) {
                Input input = gc.getInput();
                if (input.isKeyDown(Input.KEY_W)) {
                    camY -= player.speed * delta;
                }
                if (input.isKeyDown(Input.KEY_S)) {
                    camY += player.speed * delta;
                }
                if (input.isKeyDown(Input.KEY_A)) {
                    camX -= player.speed * delta;
                }
                if (input.isKeyDown(Input.KEY_D)) {
                    camX += player.speed * delta;
                }
            }
        }

    Code used to convert the mouse position:

        worldX = (int) (mouseX + cam.camX);
        worldY = (int) (mouseY + cam.camY);
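
    A sketch of one thing to check (derived from the render() code above, not a confirmed fix): render() shifts the world by (centerX - playerX, centerY - playerY), so the screen-to-world conversion has to undo that exact shift; the camX/camY fields above never appear in the translate calls, which may be why adding them was still off:

        // Undo the translation render() applies:
        // world = screen - (centerX - playerX, centerY - playerY)
        int worldX = (int) (mouseX - centerX + player.playerX);
        int worldY = (int) (mouseY - centerY + player.playerY);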

    Read the article

  • Role of "Refactoring" in good programming pratices?

    - by Niranjan Kala
    I have learned in Agile development that: "Refactoring is the process of clarifying and simplifying the design of existing code, without changing its behavior." I have heard about some GUI refactoring tools like ReSharper and DevExpress Refactor! Pro. Here are my questions: Question 1: How does refactoring take place in the software development process, and how far does it affect the system? Question 2: Does refactoring with these tools really speed up development/maintenance?
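
    To make the definition concrete, a tiny hand-rolled illustration (C#, with hypothetical Order/Line types; this is the kind of "extract method" change tools like ReSharper automate). Behavior is identical before and after; only the design gets clearer:

        using System.Collections.Generic;

        class Line { public decimal Price; public int Qty; }
        class Order { public List<Line> Lines = new List<Line>(); }

        class Before
        {
            // One method doing two jobs: summing and applying a discount rule.
            public decimal Total(Order o)
            {
                decimal sum = 0;
                foreach (var line in o.Lines) sum += line.Price * line.Qty;
                return sum - (sum > 100m ? sum * 0.05m : 0m);
            }
        }

        class After
        {
            // Same observable behavior; the discount rule now has a name.
            public decimal Total(Order o)
            {
                decimal sum = 0;
                foreach (var line in o.Lines) sum += line.Price * line.Qty;
                return sum - Discount(sum);
            }

            private decimal Discount(decimal subtotal)
            {
                return subtotal > 100m ? subtotal * 0.05m : 0m;
            }
        }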

    Read the article

  • Isometric screen to 3D world coordinates efficiently

    - by Justin
    Been having a difficult time transforming 2D screen coordinates to 3D isometric space. This is a situation where I am working in 3D but with an orthographic camera, positioned at (100, 200, 100), where the xz plane is flat and y is up and down. I've been able to get a sort of working solution, but I feel like there must be a better way. Here's what I'm doing. With my camera at (0, 1, 0) I can translate my screen coordinates directly to 3D coordinates by doing:

        mouse2D.z = ((event.clientX / window.innerWidth) * 2 - 1) * -(window.innerWidth / 2);
        mouse2D.x = ((event.clientY / window.innerHeight) * 2 + 1) * -(window.innerHeight);
        mouse2D.y = 0;

    Everything is okay so far. Now when I change my camera back to (100, 200, 100), my 3D space has been rotated 45 degrees around the y axis and then rotated about 54 degrees around a vector Q that runs along the xz plane at a 45 degree angle between the positive z axis and the negative x axis. So to find the point, I first rotate my point by 45 degrees around the y axis using a matrix. Now I'm close. Then I rotate my point around the vector Q. But my point is closer to the origin than it should be, since the Y value is not 0 anymore. What I want is for the Y value to be 0 after the rotation. So now I exchange the X and Z coordinates of my rotated vector with the X and Z coordinates of my non-rotated vector. So basically I have my old vector, but its Y value is at an appropriately rotated amount. Now I use another matrix to rotate my point around the vector Q in the opposite direction, and I end up with the point where I clicked. Is there a better way? I feel like I must be missing something. Also, my method isn't completely accurate: it lands within 5-10 coordinates of where I click, maybe because of rounding across so many calculations. Sorry for such a long question.
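
    For what it's worth, the browser-style mouse math (event.clientX, window.innerWidth) suggests a three.js-flavored setup; if so, a common alternative to hand-rolled rotations is to unproject the click and intersect the resulting ray with the ground plane. A sketch only, assuming a recent three.js build and a THREE.OrthographicCamera named camera:

        // Screen position to normalized device coordinates (-1..1 on each axis)
        var ndc = new THREE.Vector3(
            (event.clientX / window.innerWidth) * 2 - 1,
            -(event.clientY / window.innerHeight) * 2 + 1,
            0);
        ndc.unproject(camera);   // now a world-space point on the camera's plane

        // With an orthographic camera, every pick ray shares the view direction
        var dir = new THREE.Vector3(0, 0, -1).applyQuaternion(camera.quaternion);

        // Intersect the ray with the flat xz plane (y = 0)
        var t = -ndc.y / dir.y;
        var world = ndc.clone().addScaledVector(dir, t);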

    Read the article

  • Autoscaling in a modern world… last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important. Our code will work right the first time. So, you can understand my surprise when, the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale. I mean, it worked on my machine. How dare the datacenter argue with that. So, how did I track down the problem? (Turns out it was not so much code as the lack of the right certificate.) When I ran it locally in the developer fabric, I was able to see a wealth of information: lots of periodic status info every time the autoscaler came around to check on my rules and decide whether to act. But that information was not making it to Azure storage. The diagnostics were not being transferred to where I could easily see them and use them to track down why things were not being cooperative. After a bit of digging, I discovered the problem: you need to add a bit of extra configuration to get the correct information stored for you. I added the following to my app.config:

        <system.diagnostics>
          <sources>
            <source name="Autoscaling General" switchName="SourceSwitch"
                    switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
            <source name="Autoscaling Updates" switchName="SourceSwitch"
                    switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
          </sources>
          <switches>
            <add name="SourceSwitch"
                 value="Verbose, Information, Warning, Error, Critical" />
          </switches>
          <sharedListeners>
            <add name="AzureDiag"
                 type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </sharedListeners>
          <trace>
            <listeners>
              <add name="AzureDiagnostics"
                   type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
                <filter type="" />
              </add>
            </listeners>
          </trace>
        </system.diagnostics>

    Suddenly all the rich tracing info I needed was filling up my storage account. After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went. I hope this was helpful.

    Read the article
