Search Results

Search found 5597 results on 224 pages for 'worker processes'.


  • Finding records when using has_many through associations

    - by winter sun
    I have two models, Worker and Project, and they are connected with a has_many :through association. I managed to find all the projects related to a specific worker with the following code:

        worker = Worker.find_by_id("some_id")
        worker.projects

    However, I want only the active projects to be returned (the Project model has a status field). I tried something like worker.projects(:status_id => 'active'), but it didn't work for me. Can somebody tell me how I can do this?

    Read the article

  • Starting a second command while the first is still running from a batch file

    - by ids
    I am trying to create a batch file that will run a utility that opens a connection to a device on the internet and then starts a telnet session. I am at the point where, once the connection opens, the batch file doesn't proceed.

    Batch file:

        @echo off
        set /p serial=What are the last 8 of the STB's serial?
        udp.exe -c 127.0.0.1 23 %serial% x.x.x.x 11111 127.0.0.1 23 | telnet 127.0.0.1

    Output:

        What are the last 8 of the STB's serial? XXXXXXX
        Ready for XXXXXXX

    It just sits at the 'Ready ...' line and never opens the telnet connection. I have tried various | and & combinations but no luck.

    Read the article

  • List with items returns empty

    - by Power-Mosfet
    I have created a simple List function, but if I loop through the List it's empty. It should not be! (Edit: thank you all for the input; problem solved.)

        // List function
        public class process_hook
        {
            public static List<string> pro_hook = list_all_processes();

            protected static List<string> list_all_processes()
            {
                var list = new List<string>();
                foreach (Process i in Process.GetProcesses("."))
                {
                    try
                    {
                        // Collect the file name of every module loaded by each process
                        foreach (ProcessModule pm in i.Modules)
                        {
                            list.Add(pm.FileName);
                        }
                    }
                    catch { }
                }
                return list;
            }
        }

        // call
        private void button1_Click(object sender, EventArgs e)
        {
            foreach (String _list in process_hook.pro_hook)
            {
                Console.WriteLine(_list);
            }
        }

    Read the article

  • C++ TerminateProcess function

    - by jemper
    I've been searching for examples of the Win32 API function TerminateProcess() in C++ but couldn't find any. I'm not that familiar with the Win32 API in general, so I wanted to ask if someone who knows it better than me could show me an example of retrieving a process handle by its PID (which is required to terminate it) and then calling TerminateProcess with that handle. If you aren't familiar with C++, a C# equivalent would help too.
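    Since the question invites a C# equivalent, here is a minimal sketch of the usual managed approach: System.Diagnostics.Process wraps OpenProcess/TerminateProcess, so you can look the process up by PID and kill it. The PID here is just a placeholder taken from the command line.

        using System;
        using System.Diagnostics;

        class KillByPid
        {
            static void Main(string[] args)
            {
                int pid = int.Parse(args[0]); // PID supplied on the command line

                try
                {
                    // Process.GetProcessById acquires a handle to the running process;
                    // Kill() calls TerminateProcess on that handle under the covers.
                    Process target = Process.GetProcessById(pid);
                    target.Kill();
                    target.WaitForExit();
                    Console.WriteLine("Process {0} terminated.", pid);
                }
                catch (ArgumentException)
                {
                    Console.WriteLine("No process with PID {0} is running.", pid);
                }
            }
        }

    In native C++ the equivalent steps are OpenProcess(PROCESS_TERMINATE, FALSE, pid), then TerminateProcess(handle, exitCode), then CloseHandle(handle).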

    Read the article

  • Nagios state transition and event handler issue

    - by Dattatray
    We are using Nagios to check for duplicate processes:

        define service {
            use                 local-service
            host_name           xxx
            service_description xxx Duplicate Processes
            check_interval      1
            max_check_attempts  1
            contact_groups      admins
            event_handler       restart-dependent-processes
            check_command       check_procs_duplicate!2!3!2!2!2
        }

    check_procs_duplicate checks whether there are any duplicate processes and returns the state, e.g. CRITICAL. The event handler kills the duplicate processes and their dependent processes, then starts one instance of the process and its dependent process. At the end of this, Nagios again checks whether there are any duplicate processes and sets the state accordingly (OK/WARNING/CRITICAL). The event handler takes some time to start the processes, and if someone manually starts the process during this window, the state remains CRITICAL. At the next interval Nagios will again check for duplicate processes and will again find CRITICAL. The event handler will not get executed now, because both the previous and current states are CRITICAL. Any pointers on how to fix this issue?

    Read the article

  • Apache httpd Problem

    - by Christopher
    Hey, I am getting intermittent issues with my site. Pages often hang with huge loading times and sometimes fail to load. The httpd error logs contain the following:

        [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5871 for worker proxy:reverse
        [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5871 for (*)
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5872 for worker proxy:reverse
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
        [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5872 for (*)
        [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5954 for worker proxy:reverse
        [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized

    The server is currently running with 800 MB of free memory, so the problem is not caused by lack of RAM. Any suggestions would be greatly appreciated. Many thanks, Chris.

    EDIT: The current number of httpd processes is 11. This does increase as the error persists and can rise to 25+. I am running Apache/2.2.3 (CentOS).

    Read the article

  • Can mod_fcgid maintain a hard-minimum number of available appserver processes?

    - by user9795
    ...and if so, how? I'm using Apache2 + mod_fcgid to serve a perl Catalyst application, on a box that I own, and I'd like for mod_fcgid to maintain a minimum number of spun-up processes ready to go. The docs say that FcgidMinProcessesPerClass only enforces a minimum number of processes that will be retained in a process class after finishing requests How do I get apache to start up with a certain number of appserver subprocesses on an idle server without using artificial load to get there?

    Read the article

  • PostgreSQL 8.4 won't start after blackout

    - by RiZe
    I have problem with starting PostgreSQL 8.4 on Ubuntu 9.10 Server after blackout. When I try to connect to the database it says: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. When I try to start it by using command sudo -u postgres /etc/init.d/postgresql-8.4 start * Starting PostgreSQL 8.4 database server [ OK ] Netstat output netstat -tulp (No info could be read for "-p": geteuid()=1000 but you should be root.) Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 localhost:postgresql *:* LISTEN - tcp 0 0 192.168.1.35:svn *:* LISTEN - tcp 0 0 192.168.1.35:http-alt *:* LISTEN - tcp 0 0 *:ssh *:* LISTEN - tcp6 0 0 localhost:postgresql [::]:* LISTEN - tcp6 0 0 [::]:ssh [::]:* LISTEN - udp 0 0 *:bootpc *:* - But still don't work so lets restart it sudo -u postgres /etc/init.d/postgresql-8.4 restart * Restarting PostgreSQL 8.4 database server * The PostgreSQL server failed to start. Please check the log output: 2009-11-30 13:39:37 CET LOG: database system was shut down at 2009-11-30 13:39:33 CET 2009-11-30 13:39:37 CET LOG: autovacuum launcher started 2009-11-30 13:39:37 CET LOG: database system is ready to accept connections 2009-11-30 13:39:37 CET LOG: incomplete startup packet 2009-11-30 13:39:38 CET LOG: server process (PID 2240) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:38 CET LOG: terminating any other active server processes 2009-11-30 13:39:38 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:38 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:37 CET 2009-11-30 13:39:38 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:38 CET LOG: record with zero length at 0/11D464C 2009-11-30 13:39:38 CET LOG: redo is not required 2009-11-30 13:39:38 CET LOG: autovacuum launcher started 2009-11-30 13:39:38 CET LOG: database system is ready to accept connections 2009-11-30 13:39:38 CET LOG: server process (PID 2248) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:38 CET LOG: terminating any other active server processes 2009-11-30 13:39:38 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:38 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:38 CET 2009-11-30 13:39:38 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:38 CET LOG: record with zero length at 0/11D4690 2009-11-30 13:39:38 CET LOG: redo is not required 2009-11-30 13:39:39 CET LOG: autovacuum launcher started 2009-11-30 13:39:39 CET LOG: database system is ready to accept connections 2009-11-30 13:39:39 CET LOG: server process (PID 2256) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:39 CET LOG: terminating any other active server processes 2009-11-30 13:39:39 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:39 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:38 CET 2009-11-30 13:39:39 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:39 CET LOG: record with zero length at 0/11D46D4 2009-11-30 13:39:39 CET LOG: redo is not required 2009-11-30 13:39:39 CET LOG: autovacuum launcher started 2009-11-30 13:39:39 CET LOG: database system is ready to accept connections 2009-11-30 13:39:39 CET LOG: server process (PID 2264) 
was terminated by signal 11: Segmentation fault 2009-11-30 13:39:39 CET LOG: terminating any other active server processes 2009-11-30 13:39:39 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:39 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:39 CET 2009-11-30 13:39:39 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:40 CET LOG: record with zero length at 0/11D4718 2009-11-30 13:39:40 CET LOG: redo is not required 2009-11-30 13:39:40 CET LOG: autovacuum launcher started 2009-11-30 13:39:40 CET LOG: database system is ready to accept connections 2009-11-30 13:39:40 CET LOG: server process (PID 2272) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:40 CET LOG: terminating any other active server processes 2009-11-30 13:39:40 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:40 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:40 CET 2009-11-30 13:39:40 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:40 CET LOG: record with zero length at 0/11D475C 2009-11-30 13:39:40 CET LOG: redo is not required 2009-11-30 13:39:40 CET LOG: autovacuum launcher started 2009-11-30 13:39:40 CET LOG: database system is ready to accept connections 2009-11-30 13:39:41 CET LOG: server process (PID 2280) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:41 CET LOG: terminating any other active server processes 2009-11-30 13:39:41 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:41 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:40 CET 2009-11-30 13:39:41 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:41 CET LOG: record with zero length at 0/11D47A0 2009-11-30 13:39:41 CET LOG: redo is not required 2009-11-30 13:39:41 CET LOG: autovacuum launcher started 2009-11-30 13:39:41 CET LOG: database system is ready to accept connections 2009-11-30 13:39:41 CET LOG: server process (PID 2288) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:41 CET LOG: terminating any other active server processes 2009-11-30 13:39:41 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:41 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:41 CET 2009-11-30 13:39:41 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:41 CET LOG: record with zero length at 0/11D47E4 2009-11-30 13:39:41 CET LOG: redo is not required 2009-11-30 13:39:41 CET LOG: autovacuum launcher started 2009-11-30 13:39:41 CET LOG: database system is ready to accept connections 2009-11-30 13:39:42 CET LOG: server process (PID 2296) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:42 CET LOG: terminating any other active server processes 2009-11-30 13:39:42 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:42 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:41 CET 2009-11-30 13:39:42 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:42 CET LOG: record with zero length at 0/11D4828 2009-11-30 13:39:42 CET LOG: redo is not required 2009-11-30 13:39:42 CET LOG: autovacuum launcher started 2009-11-30 13:39:42 CET LOG: database system is ready to accept connections 2009-11-30 13:39:42 CET LOG: server process (PID 2304) 
was terminated by signal 11: Segmentation fault 2009-11-30 13:39:42 CET LOG: terminating any other active server processes 2009-11-30 13:39:42 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:42 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:42 CET 2009-11-30 13:39:42 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:42 CET LOG: record with zero length at 0/11D486C 2009-11-30 13:39:42 CET LOG: redo is not required 2009-11-30 13:39:43 CET LOG: autovacuum launcher started 2009-11-30 13:39:43 CET LOG: database system is ready to accept connections 2009-11-30 13:39:43 CET LOG: server process (PID 2312) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:43 CET LOG: terminating any other active server processes 2009-11-30 13:39:43 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:43 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:42 CET 2009-11-30 13:39:43 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:43 CET LOG: record with zero length at 0/11D48B0 2009-11-30 13:39:43 CET LOG: redo is not required 2009-11-30 13:39:43 CET LOG: autovacuum launcher started 2009-11-30 13:39:43 CET LOG: database system is ready to accept connections [fail] So what happened and what can I do to solve this? Thanks for replies

    Read the article

  • Worker processes not starting in IIS 7.5. What should I check?

    - by locster
    I have a Windows 7 machine (Windows version 6.1.7601 SP1 Build 7601) with IIS installed. At some point the installation appears to have become 'corrupted' in some way, as any requests are now met with the message:

        Service Unavailable
        HTTP Error 503. The service is unavailable.

    In IIS Manager, IIS is started and the app pool I am using reports itself as 'Started', yet there is no w3wp.exe process listed in Task Manager (I am a local admin and have clicked the 'Show processes from all users' button). I have enabled logging for the web site (at the default location of %SystemDrive%\inetpub\logs\LogFiles), but this folder is empty. I am assuming that this log output is written by w3wp.exe as it handles requests (no w3wp.exe, no log file?). Presumably there is another layer of request handling that is responsible for starting the worker processes; does this layer have log files I can check, and/or can I uninstall/re-install that layer? Thanks.

    Read the article

  • How to disable an "always there" program if it isn't in the processes list?

    - by rumtscho
    I have Crashplan and it is constantly running in the background, making backups every 15 minutes. It caused some problems with the backup target folders, so I want it to be inactive while I am making changes to these folders. I started the application itself, but could not find any kind of "Pause" button, so I decided to just stop its process. I first tried the lazy way (the system monitor in the Gnome panel has a "Processes" tab) but didn't find it listed there. Then I ran sudo ps -A and read through the whole list. I don't recognize everything on the list (many process names are self-explanatory, like evolution-alarm, but I don't recognize others like phy0), but there was nothing which sounded even remotely like Crashplan. Yet I know there must have been a process belonging to Crashplan running at the time, because the main Crashplan window was open when I ran the command. Do you have any advice on how to stop this thing from running? The best solution would also involve temporarily preventing it from loading on boot, since I may need to reboot while doing the maintenance.

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal access in arbitrary 'tick' time units. Click the image to see it full sized. Pooled connections Beginning with the top left graph, At tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes but this is almost too small to see on the graph. Non-pooled connections Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top right graph. 
There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it. Hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.

    Summary

    Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated to the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

  • How can I find out which processes on my computer are accessing the microphone?

    - by Vinayak
    After reading this interesting Lifehacker post and reading the comments on the page, one person was wondering if it would be possible to use the Physical Device Object Names of other hardware, such as the microphone, to find out the names of processes using that device. I tried the same approach, but so far it only seems to work for the webcam. Is there any other way I could get this to work in Process Explorer?

    UPDATE: The Lifehacker post was about finding out which Windows process is currently using your webcam. This is how they went about doing it:

    1. Start Device Manager (WIN+R → "devmgmt.msc" → OK)
    2. Find your webcam among the list of devices (check under Imaging Devices)
    3. Open the properties window of the device and switch to the Details tab (Right click → Properties → Details)
    4. In the dropdown menu, select Physical Device Object Name and copy the string (Right click → Copy)
    5. Download Process Explorer
    6. Make sure you have opened Process Explorer in Administrator Mode (File → Show Details for All Processes)
    7. Hit CTRL+F and enter the string you copied earlier (it should be something like \Device\000000XX)
    8. Hit the Search button and you should see a list of processes using the webcam (if there are any)

    Read the article

  • How can I see what processes make my server slow?

    - by Steven
    All the websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load sometimes. There have been no changes to the sites for the last couple of months. How can I see what processes are making my server slow? My environment looks like this:

        Server: VPS running Linux 2.8.x
        OS: CentOS 5
        Management interface: Plesk 9.x
        Memory: 1024 MB
        CPU: 2.2 GHz

    My websites run on PHP and MySQL. I finally managed to log in (PuTTY + SSH) to my server. Running top did not show any processes using more than about 2% CPU, and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction, but I'm not able to find out what could be wrong.

    Here are some answers to Sean Kimball:
    - I don't run mail services on my server yet.
    - There are no specific bandwidth peaks.
    - Prefork looks like this:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         256
            MaxClients          256
            MaxRequestsPerChild 4000
        </IfModule>

    - Not sure what you mean with the DNS question, but I think it's up and running.
    - There are no processes running wild.
    - Where can I find average load?
    - Telnet is disabled and I have to log in using SSH :)

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic. I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days. Procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of Generics, throwing System.Exception... you get the idea. This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field). How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid? Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit: This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it. I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart. I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change. The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests. 
Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on. P.S. My sincerest gratitude to those who -did- offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Unable to load type error in ASP.NET 4 application on Windows Server 2003 / IIS6 -- only happens after first worker process recycle

    - by Daniel Coffman
    I'm running an ASP.NET 4.0 web application on IIS6 (Windows Server 2003 x64). This app is one of many running on this server under the Default Web Site, but it is alone in its own application pool because the other sites are all still running ASP.NET 2.0. When I deploy my application, it works just fine until the application pool recycles or kills its worker process (by default 2 hours, or 20 minutes with no activity). After this, I get the error: "Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information." Refreshing the page, recycling the application pool, and iisreset do nothing. However, I can bring the site back online again for a little while by simply redeploying it. The stack trace seems to start at an EntityDataSource; see below:

        [ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.]
        System.Reflection.RuntimeModule.GetTypes(RuntimeModule module) +0
        System.Reflection.Assembly.GetTypes() +144
        System.Data.Metadata.Edm.ObjectItemConventionAssemblyLoader.LoadTypesFromAssembly() +45
        System.Data.Metadata.Edm.ObjectItemAssemblyLoader.Load() +34
        System.Data.Metadata.Edm.AssemblyCache.LoadAssembly(Assembly assembly, Boolean loadReferencedAssemblies, ObjectItemLoadingSessionData loadingData) +130
        System.Data.Metadata.Edm.AssemblyCache.LoadAssembly(Assembly assembly, Boolean loadReferencedAssemblies, KnownAssembliesSet knownAssemblies, EdmItemCollection edmItemCollection, Action`1 logLoadMessage, Object& loaderCookie, Dictionary`2& typesInLoading, List`1& errors) +248
        System.Data.Metadata.Edm.ObjectItemCollection.LoadAssemblyFromCache(ObjectItemCollection objectItemCollection, Assembly assembly, Boolean loadReferencedAssemblies, EdmItemCollection edmItemCollection, Action`1 logLoadMessage) +580
        System.Data.Metadata.Edm.ObjectItemCollection.ExplicitLoadFromAssembly(Assembly assembly, EdmItemCollection edmItemCollection, Action`1 logLoadMessage) +193
        System.Data.Metadata.Edm.MetadataWorkspace.ExplicitLoadFromAssembly(Assembly assembly, ObjectItemCollection collection, Action`1 logLoadMessage) +140
        System.Web.UI.WebControls.EntityDataSourceView.ConstructContext() +756
        System.Web.UI.WebControls.EntityDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) +147

    This is a bug filed for the same (or similar) problem: http://connect.microsoft.com/VisualStudio/feedback/details/541962/unable-to-load-one-or-more-of-the-requested-types-connected-with-entitydatasource

    Question: Has anyone seen this and have any advice? I've tried copy-local on all the references. It works just fine on my dev machine, and works on the server until the application pool worker process recycles. I'm building in release mode, but I experience the same result when I build for debug. I'm stumped.
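    The exception message itself points at the next diagnostic step: catch the ReflectionTypeLoadException where the model assemblies are loaded (or in Application_Error in Global.asax) and log its LoaderExceptions, which usually name the assembly or type that cannot be resolved after the recycle. A minimal sketch, assuming you can add a logging hook of your own (the helper name below is hypothetical):

        using System;
        using System.Reflection;

        // Hypothetical helper: call this around the code that touches the Entity Framework
        // model (or from Application_Error) to see which types failed to load.
        static class LoaderExceptionLogger
        {
            public static void LogTypeLoadDetails(Exception ex)
            {
                var rtle = ex as ReflectionTypeLoadException;
                if (rtle == null)
                {
                    return; // not the exception we are interested in
                }

                foreach (Exception loaderException in rtle.LoaderExceptions)
                {
                    // Each entry typically names the missing assembly or the type
                    // whose dependency could not be resolved.
                    System.Diagnostics.Trace.WriteLine(loaderException.ToString());
                }
            }
        }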

    Read the article

  • Java: How to manage UDP client-server state

    - by user92947
    I am trying to write a Java application that works similar to MapReduce. There is a server and several workers. Workers may come and go as they please and the membership to the group has a soft-state. To become a part of the group, the worker must send a UDP datagram to the server, but to continue to be part of the group, the worker must send the UDP datagram to the server every 5 minutes. In order to accommodate temporary errors, a worker is allowed to miss as many as two consecutive periodic UDP datagrams. So, the server must keep track of the current set of workers as well as the last time each worker had sent a UDP datagram. I've implemented a class called WorkerListener that implements Runnable and listens to UDP datagrams on a particular UDP port. Now, to keep track of active workers, this class may maintain a HashSet (or HashMap). When a datagram is received, the server may query the HashSet to check if it is a new member. If so, it can add the new worker to the group by adding an entry into the HashSet. If not, it must reset a "timer" for the worker, noting that it has just heard from the corresponding worker. I'm using the word timer in a generic sense. It doesn't have to be a clock of sorts. Perhaps this could also be implemented using int or long variables. Also, the server must run a thread that continuously monitors the timers for the workers to see that a client that times out on two consecutive datagram intervals, it is removed from the HashSet. I don't want to do this in the WorkerListener thread because it would be blocking on the UDP datagram receive() function. If I create a separate thread to monitor the worker HashSet, it would need to be a different class, perhaps WorkerRegistrar. I must share the HashSet with that thread. Mutual exclusion must also be implemented, then. My question is, what is the best way to do this? Pointers to some sample implementation would be great. I want to use the barebones JDK implementation, and not some fancy state maintenance API that takes care of everything, because I want this to be a useful demonstration for a class that I am teaching. Thanks
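    Although the question asks about Java, the bookkeeping itself is language-neutral. Below is a rough sketch in C# of one way to structure it: a concurrent map from worker address to last heartbeat time, updated by the listener thread, plus an eviction method run periodically by the registrar thread. The class and member names, and the 15-minute cutoff (three missed 5-minute intervals), are illustrative assumptions, not taken from the question. A Java equivalent would use ConcurrentHashMap and a ScheduledExecutorService in the same shape.

        using System;
        using System.Collections.Concurrent;

        // Rough sketch: track when each worker last sent its heartbeat datagram
        // and evict workers that have missed more than two 5-minute intervals.
        class WorkerRegistry
        {
            private readonly ConcurrentDictionary<string, DateTime> lastSeen =
                new ConcurrentDictionary<string, DateTime>();
            private static readonly TimeSpan Timeout = TimeSpan.FromMinutes(15); // 3 missed intervals

            // Called by the listener thread whenever a datagram arrives.
            public void Heartbeat(string workerAddress)
            {
                lastSeen[workerAddress] = DateTime.UtcNow; // adds new workers, refreshes known ones
            }

            // Called periodically by a timer thread (the "registrar").
            public void EvictStaleWorkers()
            {
                foreach (var entry in lastSeen)
                {
                    if (DateTime.UtcNow - entry.Value > Timeout)
                    {
                        DateTime ignored;
                        lastSeen.TryRemove(entry.Key, out ignored);
                    }
                }
            }
        }

    A concurrent map avoids explicit locking between the listener and the monitoring thread, which is the mutual-exclusion concern raised in the question.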

    Read the article

  • How do I keep a co-worker from writing horrible code? [closed]

    - by Drew H
    Possible Duplicate: How do I approach a coworker about his or her code quality?

    I can handle the for..in loops without hasOwnProperty filtering. I can handle the blatant disregard for the libraries I've used in the past and the use of something else instead. I can even handle the functions with 25 parameters. But I can't handle this:

        var trips = new Array();
        var flights = new Array();
        var passengers = new Array();
        var persons = new Array();
        var requests = new Array();

    I've submitted documents on code style, had code reviews, given him Douglas Crockford's book, shown him presentations, other people's GitHub projects, etc. He still shows the same horrible JavaScript style. How else could I approach this guy? Thanks for any help.

    Read the article

  • When a co-worker asks you to teach him what you know, do you share the information or keep it to yourself? [closed]

    - by Chuck
    I am the only developer/DBA in a small IT department. There is another guy who can do it, but he's more of a backup as he spends his time working on IT support stuff. Anyway we have a new hire and I've been training him on the IT support side of things. Seems like he is eager to learn and be productive, but nobody is going out of their way to show him anything. He's been asking me to teach him database design, SQL, etc. For some reason, the boss has him working with me. He is also sending him to meetings that I go to, yet he hasn't said outright that I have to teach him anything. Meanwhile, the boss insists on doing a lot of the support work himself (i.e. he hoards information and doesn't delegate to anyone). I'm a little bit on the fence. First, the new guy doesn't yet have a strong foundation on the IT support functions which is where we really need help at this time. Second, I paid thousands of dollars for classes and spent many hours learning this stuff. Is it my responsibility to teach others skills that I had to learn on my own? Others here really aren't quick to share information so I'm not sure that I should either in this environment. I do know that if I get him involved, and get him started on projects, then I'd be responsible for his mistakes. I had to take the heat for the other guy when he made mistakes. OTOH the guy wants to learn something, is motivated, and I don't want to stop him. We've had our share of slackers in the group and it's nice to have someone who is willing to work for a change. So what would you guys do? Would you teach him the skills that you spent all of that time learning? Set him up with a test database on his PC and recommend some books for him? Encourage him to get a strong foundation in IT support first and ask later? We haven't had a new hire in years, let alone one that is interested in what I do, so this is new to me.

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So, let's get down and dirty and start setting up the Test Rig.

    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect. A very useful video explains everything you wanted to know about Windows Azure Connect.

    Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the three role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

        Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
        Extra Small          | Shared    | 768 MB   | $0.04
        Small                | 1         | 1.75 GB  | $0.12
        Medium               | 2         | 3.50 GB  | $0.24
        Large                | 4         | 7.00 GB  | $0.48
        Extra Large          | 8         | 14.00 GB | $0.96

    You might want to review the Windows Azure Pricing FAQ.

    Let's get started building the Test Rig. Configuration:

        Machine | Role                              | Comments
        VM – 1  | Domain Controller for Playpit.com | On Premise
        VM – 2  | TFS, Test Controller              | On Premise
        VM – 3  | Test Agent                        | Cloud

    In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent. Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for the reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
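    For reference, the worker role entry point that the SDK template generates looks roughly like the sketch below (the exact boilerplate varies by SDK version); the role does no work of its own and only needs to stay alive so the test agent service can run on the VM.

        using System.Diagnostics;
        using System.Net;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // Keep the role instance alive; the test agent runs as a Windows service.
                Trace.WriteLine("WorkerRole1 entry point called", "Information");
                while (true)
                {
                    Thread.Sleep(10000);
                    Trace.WriteLine("Working", "Information");
                }
            }

            public override bool OnStart()
            {
                // Default template setting: raise the outbound connection limit.
                ServicePointManager.DefaultConnectionLimit = 12;
                return base.OnStart();
            }
        }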
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the View menu select the Windows Azure Activity Log window. You should now be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of the new worker role.

Once the deployment is complete you should be able to RDP into the Test Agent worker role VM from the Playpit network using the domain admin user account (go to the run prompt, type mstsc and enter the machine name in the pop up). If you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is that, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for the failure: you will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix the problem or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent worker role VM with the Playpit domain administrator and verify that you can log in, that the machine is joined to the domain and that the Connect service is running. If yes, give yourself a pat on the back; you are 80% mission accomplished!

Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the "Connect to" section, click Add to select the worker role you have just deployed. Also check 'Allow connections between endpoints in the group'; this enables communication between the test controller and the test agents, and between the test agents themselves. Click Save.

Now you are ready to deploy the Test Agent software on the worker role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent to the Controller

Log in to the worker role Test Agent VM that you have just deployed; make sure you log in with the domain administrator account. Download the Visual Studio Agents 2010 software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the iso and navigate to where you extracted it. In my case, I extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the Test Agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (UAC might block some things otherwise).

In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator account to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for this blog post we are using the most powerful user so that no policies or restrictions get in your way.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permissions issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open Visual Studio and, from the Test menu, select Manage Test Controller.

Mission accomplished! You should be able to see the Test Agent that you have just configured listed here.

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig

I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog Posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you have configured the worker role Test Agent against, in our case VM – 2.

So, go on, fire off a test run and see the results of the test being executed on the Azure-ed test rig.

Review and What's Next?

A quick recap of the benefits of running the test rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages:
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from Azure's flexibility: destroy Test Agents when not in use, and spin up a new Test Agent in less than 25 minutes.
- Most importantly, testing network latency (network latency and connection speed are two different things; network latency is usually very hard to test). By placing Test Agents in Microsoft data centres around the globe, you can measure the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

Next Steps:
The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • After forced quit, “killall Finder” says “No matching processes…” but PID still exists?

    - by Old McStopher
    Here's one for ya. Upon a forced quit of the Finder with unsuccessful relaunch, "killall Finder" in terminal returns: "No matching processes belonging to you were found". Oddly enough, the PID for Finder does actually show up after a "ps -A" to reveal all processes. But the time is perpetually listed as 0:00:00 upon repeated PID listings. I tried the following to manually launch it:

    open /System/Library/CoreServices/Finder.app

    But it puked:

    LSOpenFromURLSpec() failed with error -600 for the file /System/Library/CoreServices/Finder.app.

    Any other ideas on a Finder relaunch that don't involve rebooting? (I usually have 6 spaces open at once, each with a handful of apps, and it's a pain reloading them all.)

    Read the article

  • One of my apache processes is huge - how can I find out why?

    - by Malcolm Box
    I'm running Apache 2.2.12 with mod_wsgi, hosting a Django site. Most of the apache child processes weigh in at about 125MB RSS, but occasionally I see one child balloon to 1GB RSS. At this point there's usually 1 huge process (1GB), a couple of large ones (500MB) and the rest are still ~125MB. These are the mod_wsgi daemon processes. I've tried using memory leak tracing in Python to see if it's the Django code, and I see no leaks. Looking in the logs doesn't show any particularly strange requests. I'm stumped on how to figure out what's causing this - any ideas? Also, any workaround ways to kill the large apache process when it gets too big, without bringing apache down?

    Some more details:
    - Not using mod_php
    - Using pre-fork

    Read the article

  • Linux server apache httpd processes take i/o wait to close to 100% and lock down server

    - by user3682065
    For about 5 days now, and seemingly out of the blue, my linux server has started locking up from time to time. The pattern is always the same as far as I can tell from top and iotop around the time it starts happening: one or more httpd processes (usually one) hang and start using up 100% of CPU power, the %wa goes close to 100%, and in iotop I see several httpd processes with 99.99% in the IO column. I'm also running an SVN server on this machine through apache, and the one way that I've been consistently able to reproduce this is to do an SVN commit of new files or an SVN update from the repository on this server (I am the only one using this SVN repository). This will always reproduce the scenario, but until very recently I had no problems at all checking in/out of SVN. Sometimes it also just happens for no detectable reason at all. So it seems like there is some issue with my Apache that leads it to have processes use up a lot of read/write upon certain triggers. I was wondering if anyone could help me uncover that issue.

    EDIT: OK, now it's happening again. This is top:

    [root@server ~]# top
    top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01
    Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie
    Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers
    Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached

    PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
    10390 apache 20 0 2774m 936m 6200 D 2.0  47.4 1:52.27 httpd
    2149  root   20 0 927m  13m  1040 S 0.7  0.7  1:50.46 namecoind
    11    root   20 0 0     0    0    R 0.3  0.0  0:30.10 events/0
    23    root   20 0 0     0    0    S 0.3  0.0  0:17.88 kblockd/1
    2049  root   20 0 382m  4932 2880 D 0.3  0.2  0:03.67 httpd
    2144  root   20 0 1702m 69m  1164 S 0.3  3.5  5:19.68 bitcoind
    6325  root   20 0 15164 1100 656  R 0.3  0.1  0:11.09 top
    10311 apache 20 0 387m  9496 7320 D 0.3  0.5  0:01.89 httpd
    10313 apache 20 0 391m  10m  7364 D 0.3  0.5  0:02.40 httpd
    10466 apache 20 0 399m  12m  7392 D 0.3  0.7  0:02.41 httpd
    10599 apache 20 0 391m  9324 7340 D 0.3  0.5  0:00.15 httpd
    10628 apache 20 0 384m  7620 4052 D 0.3  0.4  0:00.01 httpd
    10633 apache 20 0 384m  7048 3504 D 0.3  0.3  0:00.01 httpd
    10634 apache 20 0 384m  8012 4048 D 0.3  0.4  0:00.02 httpd
    10638 apache 20 0 400m  22m  9.8m D 0.3  1.1  0:01.93 httpd
    10640 apache 20 0 385m  8288 4028 D 0.3  0.4  0:00.03 httpd
    10641 apache 20 0 401m  21m  6376 D 0.3  1.1  0:01.45 httpd
    10759 apache 20 0 385m  8816 3480 D 0.3  0.4  0:01.45 httpd
    10773 apache 20 0 384m  8044 3464 D 0.3  0.4  0:00.02 httpd

    This is an iotop snapshot:

    Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s
    TID   PRIO USER     DISK READ    DISK WRITE SWAPIN  IO>     COMMAND
    10732 be/4 apache   3.76 K/s     0.00 B/s   0.00 %  58.48 % httpd
    876   be/3 root     0.00 B/s     52.68 K/s  0.00 %  52.98 % [jbd2/dm-1-8]
    10906 be/4 root     124.17 K/s   0.00 B/s   0.00 %  23.03 % sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1
    2156  be/4 root     206.94 K/s   0.00 B/s   0.00 %  21.15 % bitcoind
    10904 be/4 mysql    0.00 B/s     0.00 B/s   0.00 %  18.94 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
    10773 be/4 apache   7.53 K/s     0.00 B/s   0.00 %  14.77 % httpd
    10641 be/4 apache   15.05 K/s    0.00 B/s   0.00 %  11.57 % httpd
    10399 be/4 apache   1057.29 K/s  0.00 B/s   43.16 % 10.56 % httpd
    10682 be/4 sw-cp-se 158.03 K/s   0.00 B/s   0.00 %  7.45 %  sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d auto_prepend_file=auth.php3 -u psaadm
    10774 be/4 apache   3.76 K/s     0.00 B/s   0.00 %  6.53 %  httpd
    10624 be/4 apache   0.00 B/s     0.00 B/s   0.00 %  5.53 %  httpd
    10356 be/4 apache   899.26 K/s   0.00 B/s   35.52 % 4.01 %  httpd
    10795 be/4 apache   0.00 B/s     0.00 B/s   0.00 %  3.93 %  httpd
    10804 be/4 apache   7.53 K/s     0.00 B/s   0.00 %  3.08 %  httpd
    4379  be/4 root     2.89 M/s     0.00 B/s   99.99 % 0.00 %  namecoind
    10619 be/4 apache   462.80 K/s   0.00 B/s   7.80 %  0.00 %  httpd
    10636 be/4 apache   3.76 K/s     0.00 B/s   0.00 %  0.00 %  httpd
    10716 be/4 mysql    105.35 K/s   0.00 B/s   5.92 %  0.00 %  mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
    1988  be/4 root     18.81 K/s    0.00 B/s   0.00 %  0.00 %  spamd_full.sock

    I also ran lsof -p for pid 10390, which was at the top of the top output, and this is the bottom line where I can sort of see what request this was; it says CLOSE_WAIT:

    httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT)

    I'm still not sure what exactly is causing this all to happen. I killed that process but %wa and load average remain high; I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it without finding the remaining hanging httpd processes via "netstat -tulpn", killing those or doing "killall -9 httpd", waiting a while for it to cycle through all of them, and then doing /etc/init.d/httpd start.

    Read the article
