Search Results

Search found 10023 results on 401 pages for 'manage processes'.


  • 1600+ 'postfix-queue' processes - OK to have this many?

    - by atomicguava
    I have a Plesk 9.5.4 CentOS server running Postfix. I had been having massive problems with the mailq being full of 'double-bounce' messages containing 'Queue File Write Error' errors, but I believe those are now fixed thanks to this thread. My new problem is that when I run top, I can see lots of processes called 'postfix-queue', and the load is fairly high:

        top - 13:59:44 up 6 days, 21:14, 1 user, load average: 2.33, 2.19, 1.96
        Tasks: 1743 total, 1 running, 1742 sleeping, 0 stopped, 0 zombie
        Cpu(s): 5.1%us, 8.8%sy, 0.0%ni, 85.3%id, 0.8%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  3145728k total, 1950640k used, 1195088k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1324 apache   16  0  344m  33m 5664 S 21.7  1.1  0:03.17 httpd
        32443 apache   15  0  350m  36m 6864 S 14.4  1.2  0:13.83 httpd
         1678 root     15  0 13948 2568  952 R  2.0  0.1  0:00.37 top
         1890 mysql    15  0  689m 318m 7600 S  1.0 10.4 219:45.23 mysqld
         1394 apache   15  0  352m  41m 5972 S  0.7  1.3  0:03.91 httpd
         1369 apache   15  0  344m  33m 5444 S  0.3  1.1  0:02.03 httpd
         1592 apache   15  0  349m  37m 5912 S  0.3  1.2  0:02.52 httpd
         1633 apache   15  0  336m  20m 1828 S  0.3  0.7  0:00.01 httpd
         1952 root     19  0  335m  28m  10m S  0.3  0.9  1:35.41 httpd
            1 root     15  0 10304  732  612 S  0.0  0.0  0:04.41 init
         1034 mhandler 15  0 11520 1160  884 S  0.0  0.0  0:00.00 postfix-queue
         1036 mhandler 15  0 11516 1120  860 S  0.0  0.0  0:00.00 postfix-queue
         1041 mhandler 17  0 11516 1156  884 S  0.0  0.0  0:00.00 postfix-queue
         1043 mhandler 15  0 11512 1116  860 S  0.0  0.0  0:00.00 postfix-queue
         1063 mhandler 16  0 11516 1160  884 S  0.0  0.0  0:00.00 postfix-queue
         1068 mhandler 15  0 11516 1128  860 S  0.0  0.0  0:00.00 postfix-queue
         1071 mhandler 17  0 11512 1152  884 S  0.0  0.0  0:00.00 postfix-queue
         1072 mhandler 15  0 11512 1116  860 S  0.0  0.0  0:00.00 postfix-queue
         1081 mhandler 16  0 11516 1156  884 S  0.0  0.0  0:00.00 postfix-queue
         1082 mhandler 15  0 11512 1120  860 S  0.0  0.0  0:00.00 postfix-queue
         1089 popuser  15  0 33892 1972 1200 S  0.0  0.1  0:00.02 pop3d
         1116 mhandler 16  0 11516 1164  884 S  0.0  0.0  0:00.00 postfix-queue
         1117 mhandler 15  0 11516 1124  860 S  0.0  0.0  0:00.00 postfix-queue
         1120 mhandler 16  0 11516 1160  884 S  0.0  0.0  0:00.00 postfix-queue
         1121 mhandler 15  0 11512 1120  860 S  0.0  0.0  0:00.00 postfix-queue
         1130 mhandler 17  0 11516 1160  884 S  0.0  0.0  0:00.00 postfix-queue
         1131 mhandler 15  0 11516 1120  860 S  0.0  0.0  0:00.00 postfix-queue
         1149 root     17 -4 12572  680  356 S  0.0  0.0  0:00.00 udevd
         1181 mhandler 16  0 11516 1160  884 S  0.0  0.0  0:00.00 postfix-queue
         1183 mhandler 15  0 11512 1116  860 S  0.0  0.0  0:00.00 postfix-queue
         1224 mhandler 16  0 11516 1160  884 S  0.0  0.0  0:00.00 postfix-queue
         1225 mhandler 15  0 11516 1120  860 S  0.0  0.0  0:00.00 postfix-queue
         1228 apache   15  0  345m  34m 5472 S  0.0  1.1  0:04.64 httpd
         1241 mhandler 16  0 11516 1156  884 S  0.0  0.0  0:00.00 postfix-queue
         1242 mhandler 15  0 11512 1120  860 S  0.0  0.0  0:00.00 postfix-queue
         1251 mhandler 17  0 11516 1156  884 S  0.0  0.0  0:00.00 postfix-queue
         1252 mhandler 15  0 11516 1120  860 S  0.0  0.0  0:00.00 postfix-queue
         1258 apache   15  0  349m  37m 5444 S  0.0  1.2  0:01.28 httpd

    When I run ps -Al | grep -c postfix-queue it returns 1618! My question is this: is this normal, or is something else going wrong with Postfix? Right now, if I run mailq it is empty, and qshape deferred / qshape active are empty too. Thanks in advance for your help.
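    For anyone triaging the same symptom, here is a minimal shell sketch of the checks described above (process count and age, queue contents); it assumes the processes really are named postfix-queue as in the top output:

        # count the postfix-queue processes and eyeball the oldest one's age
        ps -C postfix-queue -o pid,user,etime --no-headers | wc -l
        ps -C postfix-queue -o etime= | sort | tail -1   # rough: etime sorts only approximately

        # confirm the mail queue itself really is empty
        mailq | tail -1
        qshape deferred
        qshape active

    If the count keeps growing while mailq stays empty, the handlers are being spawned but never exiting, which would point at the Plesk mail-handler hooks rather than the queue itself.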


  • Issues with signal handling [closed]

    - by user34790
    I am trying to study signal handling behavior in a multiprocess system. I have three signal generating processes generating signals of type SIGUSR1 and SIGUSR2. I have two handler processes that each handle one particular type of signal, and another monitoring process that also receives the signals and then does its own work. Whenever one of the generating processes raises a signal, it is sent to the whole process group, so it is received by the handler processes as well as by the monitoring process. Each signal handler prints a line when it is called, so I can watch the interleaving. I was expecting a uniform alternation of calls between the monitoring process's handler and the handler processes' handlers. Looking at the output, they are indeed called fairly uniformly at the beginning; after a while, however, the handler processes' handlers run in a long burst, followed by the monitoring process's handler running in a long burst. Here is my code and output:

        #include <iostream>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <sys/time.h>
        #include <signal.h>
        #include <cstdio>
        #include <stdlib.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        #define NUM_SENDER_PROCESSES 3
        #define NUM_HANDLER_PROCESSES 4
        #define NUM_SIGNAL_REPORT 10
        #define MAX_SIGNAL_COUNT 100000

        using namespace std;

        volatile int *usrsig1_handler_count;
        volatile int *usrsig2_handler_count;
        volatile int *usrsig1_sender_count;
        volatile int *usrsig2_sender_count;
        volatile int *lock_1;
        volatile int *lock_2;
        volatile int *lock_3;
        volatile int *lock_4;
        volatile int *lock_5;
        volatile int *lock_6;

        //Used only by the monitoring process
        volatile int monitor_count;
        volatile int usrsig1_monitor_count;
        volatile int usrsig2_monitor_count;
        double time_1[NUM_SIGNAL_REPORT];
        double time_2[NUM_SIGNAL_REPORT];

        //Used only by the main process
        int total_signal_count;

        //For shared memory
        int shmid;
        const int shareSize = sizeof(int) * (10);

        double timestamp() {
            struct timeval tp;
            gettimeofday(&tp, NULL);
            return (double)tp.tv_sec + tp.tv_usec / 1000000.;
        }

        pid_t senders[NUM_SENDER_PROCESSES];
        pid_t handlers[NUM_HANDLER_PROCESSES];
        pid_t reporter;

        void signal_catcher_1(int);
        void signal_catcher_2(int);
        void signal_catcher_int(int);
        void signal_catcher_monitor(int);
        void signal_catcher_main(int);

        void terminate_processes() {
            //Kill the child processes
            int status;
            cout << "Time up terminating the child processes" << endl;
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); }
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); }
            kill(reporter,SIGKILL);
            //Wait for the child processes to finish
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) { waitpid(senders[i], &status, 0); }
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { waitpid(handlers[i], &status, 0); }
            waitpid(reporter, &status, 0);
        }

        int main(int argc, char *argv[]) {
            if(argc != 2) {
                cout << "Required parameters missing." << endl;
                cout << "Option 1 = 1 which means run for 30 seconds" << endl;
                cout << "Option 2 = 2 which means run until 100000 signals" << endl;
                exit(0);
            }
            int option = atoi(argv[1]);
            pid_t pid;

            if(option == 2) {
                if(signal(SIGUSR1, signal_catcher_main) == SIG_ERR) { perror("1"); exit(1); }
                if(signal(SIGUSR2, signal_catcher_main) == SIG_ERR) { perror("2"); exit(1); }
            } else {
                if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); }
                if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); }
            }
            if(signal(SIGINT, signal_catcher_int) == SIG_ERR) { perror("3"); exit(1); }

            //Initializing the shared memory
            cout << "Initializing the shared memory" << endl;
            if((shmid = shmget(IPC_PRIVATE,shareSize,IPC_CREAT|0660)) < 0) { perror("shmget fail"); exit(1); }
            usrsig1_handler_count = (int *) shmat(shmid, NULL, 0);
            usrsig2_handler_count = usrsig1_handler_count + 1;
            usrsig1_sender_count = usrsig2_handler_count + 1;
            usrsig2_sender_count = usrsig1_sender_count + 1;
            lock_1 = usrsig2_sender_count + 1;
            lock_2 = lock_1 + 1;
            lock_3 = lock_2 + 1;
            lock_4 = lock_3 + 1;
            lock_5 = lock_4 + 1;
            lock_6 = lock_5 + 1;
            //Initialize them to be zero
            *usrsig1_handler_count = 0;
            *usrsig2_handler_count = 0;
            *usrsig1_sender_count = 0;
            *usrsig2_sender_count = 0;
            *lock_1 = 0; *lock_2 = 0; *lock_3 = 0;
            *lock_4 = 0; *lock_5 = 0; *lock_6 = 0;
            cout << "End of initializing the shared memory" << endl;

            //Registering the signal handlers
            cout << "Registering the signal handlers" << endl;
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) {
                if((pid = fork()) == 0) {
                    if(i%2 == 0) {
                        struct sigaction action;
                        action.sa_handler = signal_catcher_1;
                        sigset_t block_mask;
                        action.sa_flags = 0;
                        sigaction(SIGUSR1,&action,NULL);
                        if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); }
                    } else {
                        if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); }
                        struct sigaction action;
                        action.sa_handler = signal_catcher_2;
                        action.sa_flags = 0;
                        sigaction(SIGUSR2,&action,NULL);
                    }
                    if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); }
                    while(true) { pause(); }
                    exit(0);
                } else {
                    //cout << "Registered the handler " << pid << endl;
                    handlers[i] = pid;
                }
            }
            cout << "End of registering the signal handlers" << endl;

            //Registering the monitoring process
            cout << "Registering the monitoring process" << endl;
            if((pid = fork()) == 0) {
                struct sigaction action;
                action.sa_handler = signal_catcher_monitor;
                sigemptyset(&action.sa_mask);
                sigset_t block_mask;
                sigemptyset(&block_mask);
                sigaddset(&block_mask,SIGUSR1);
                sigaddset(&block_mask,SIGUSR2);
                action.sa_flags = 0;
                action.sa_mask = block_mask;
                sigaction(SIGUSR1,&action,NULL);
                sigaction(SIGUSR2,&action,NULL);
                if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); }
                while(true) { pause(); }
                exit(0);
            } else {
                cout << "Monitor's pid is " << pid << endl;
                reporter = pid;
            }
            cout << "End of registering the monitoring process" << endl;

            //Sleep to make sure that the monitor and handler processes
            //are well initialized and ready to handle signals
            sleep(5);

            //Registering the signal generators
            cout << "Registering the signal generators" << endl;
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) {
                if((pid = fork()) == 0) {
                    if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); }
                    if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); }
                    if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); }
                    srand(i);
                    while(true) {
                        int signal_id = rand()%2 + 1;
                        if(signal_id == 1) {
                            killpg(getpgid(getpid()), SIGUSR1);
                            while(__sync_lock_test_and_set(lock_4,1) != 0) { }
                            (*usrsig1_sender_count)++;
                            *lock_4 = 0;
                        } else {
                            killpg(getpgid(getpid()), SIGUSR2);
                            while(__sync_lock_test_and_set(lock_5,1) != 0) { }
                            (*usrsig2_sender_count)++;
                            *lock_5 = 0;
                        }
                        int r = rand()%10 + 1;
                        double s = (double)r/100;
                        sleep(s); //note: sleep() takes whole seconds, so this fractional value truncates to 0
                    }
                    exit(0);
                } else {
                    //cout << "Registered the sender " << pid << endl;
                    senders[i] = pid;
                }
            }
            //cout << "End of registering the signal generators" << endl;

            //Either sleep for 30 seconds and terminate the program or if the number
            //of signals generated reaches 10000, terminate the program
            if(option = 1) { //note: assignment, not comparison; presumably option == 1 was intended
                sleep(90);
                terminate_processes();
            } else {
                while(true) {
                    if(total_signal_count >= MAX_SIGNAL_COUNT) {
                        terminate_processes();
                    } else {
                        sleep(0.001); //note: also truncates to sleep(0)
                    }
                }
            }
        }

        void signal_catcher_1(int the_sig) {
            while(__sync_lock_test_and_set(lock_1,1) != 0) { }
            (*usrsig1_handler_count) = (*usrsig1_handler_count) + 1;
            cout << "Signal Handler 1 " << *usrsig1_handler_count << endl;
            __sync_lock_release(lock_1);
        }

        void signal_catcher_2(int the_sig) {
            while(__sync_lock_test_and_set(lock_2,1) != 0) { }
            (*usrsig2_handler_count) = (*usrsig2_handler_count) + 1;
            __sync_lock_release(lock_2);
        }

        void signal_catcher_main(int the_sig) {
            while(__sync_lock_test_and_set(lock_6,1) != 0) { }
            total_signal_count++;
            *lock_6 = 0;
        }

        void signal_catcher_int(int the_sig) {
            for(int i=0; i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); }
            for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); }
            kill(reporter,SIGKILL);
            exit(3);
        }

        void signal_catcher_monitor(int the_sig) {
            cout << "Monitoring process " << *usrsig1_handler_count << endl;
        }

    Here is the initial segment of output:

        Monitoring process 0
        Monitoring process 0
        Monitoring process 0
        Monitoring process 0
        Signal Handler 1 1
        Monitoring process 2
        Signal Handler 1 2
        Signal Handler 1 3
        Signal Handler 1 4
        Monitoring process 4
        Monitoring process
        Signal Handler 1 6
        Signal Handler 1 7
        Monitoring process 7
        Monitoring process 8
        Monitoring process 8
        Signal Handler 1 9
        Monitoring process 9
        Monitoring process 9
        Monitoring process 10
        Signal Handler 1 11
        Monitoring process 11
        Monitoring process 12
        Signal Handler 1 13
        Signal Handler 1 14
        Signal Handler 1 15
        Signal Handler 1 16
        Signal Handler 1 17
        Signal Handler 1 18
        Monitoring process 19
        Signal Handler 1 20
        Monitoring process 20
        Signal Handler 1 21
        Monitoring process 21
        Monitoring process 21
        Monitoring process 22
        Monitoring process 22
        Monitoring process 23
        Signal Handler 1 24
        Signal Handler 1 25
        Monitoring process 25
        Signal Handler 1 27
        Signal Handler 1 28
        Signal Handler 1 29

    Here is a segment where the handler processes' signal handlers are called in a burst:

        Signal Handler 1 456
        Signal Handler 1 457
        Signal Handler 1 458
        Signal Handler 1 459
        Signal Handler 1 460
        Signal Handler 1 461
        Signal Handler 1 462
        Signal Handler 1 463
        Signal Handler 1 464
        Signal Handler 1 465
        Signal Handler 1 466
        Signal Handler 1 467
        Signal Handler 1 468
        Signal Handler 1 469
        Signal Handler 1 470
        Signal Handler 1 471
        Signal Handler 1 472
        Signal Handler 1 473
        Signal Handler 1 474
        Signal Handler 1 475
        Signal Handler 1 476
        Signal Handler 1 477
        Signal Handler 1 478
        Signal Handler 1 479
        Signal Handler 1 480
        Signal Handler 1 481
        Signal Handler 1 482
        Signal Handler 1 483
        Signal Handler 1 484
        Signal Handler 1 485
        Signal Handler 1 486
        Signal Handler 1 487
        Signal Handler 1 488
        Signal Handler 1 489
        Signal Handler 1 490
        Signal Handler 1 491
        Signal Handler 1 492
        Signal Handler 1 493
        Signal Handler 1 494
        Signal Handler 1 495
        Signal Handler 1 496
        Signal Handler 1 497
        Signal Handler 1 498
        Signal Handler 1 499
        Signal Handler 1 500
        Signal Handler 1 501
        Signal Handler 1 502
        Signal Handler 1 503
        Signal Handler 1 504
        Signal Handler 1 505
        Signal Handler 1 506

    And here is a segment where the monitoring process's signal handler is called in a burst:

        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140
        Monitoring process 140

    Why isn't it uniform afterwards? Why are the handlers called in bursts?
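    One property worth ruling out when reading output like this: standard POSIX signals are not queued, so identical signals raised while an earlier one is still pending collapse into a single delivery. A minimal shell demonstration of that coalescing (the inline receiver is hypothetical):

        # receiver: prints a line per delivered SIGUSR1
        bash -c 'trap "echo got USR1" USR1; while :; do sleep 1; done' &
        pid=$!

        kill -STOP "$pid"              # keep the receiver from running
        for i in $(seq 1 50); do
            kill -USR1 "$pid"          # 50 signals pile up while it is stopped
        done
        kill -CONT "$pid"              # typically prints "got USR1" just once:
                                       # the 50 pending signals coalesced
        sleep 2; kill "$pid"

    In the program above, any killpg that lands while an earlier SIGUSR1 is still pending in a given process merges with it, so once the senders outpace a handler the per-process counts can legitimately drift apart and then catch up in bursts as the scheduler rotates between processes.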


  • how to manage the use of both $_SERVER['PHP_SELF'] in the action of a form and htaccess for URL rewriting?

    - by OM The Eternity
    How do I manage the use of both $_SERVER['PHP_SELF'] in the action of a form and htaccess for URL rewriting? If I submit a form whose action attribute is set to $_SERVER['PHP_SELF'], and at the same time I am using URL rewriting for the same page, the two contradict each other: the raw action value of the form is displayed in the address bar. So how can I get the URL-rewritten form of $_SERVER['PHP_SELF']? Here the rewrite rule changes the name of the URL file; for example, if the form is submitted to a file such as direction.php, the rewrite changes it to something like 30/redirect.html
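    One common way out is to stop deriving the action from $_SERVER['PHP_SELF'] (which always reflects the real script name) and instead emit the public, rewritten URL yourself. A minimal sketch with a hypothetical rule matching the example above:

        # append a rule so the pretty URL 30/redirect.html maps to direction.php
        printf '%s\n' 'RewriteEngine On' \
            'RewriteRule ^([0-9]+)/redirect\.html$ direction.php?id=$1 [L,QSA]' \
            >> .htaccess

    Then hardcode (or generate) the rewritten path in the form's action attribute instead of echoing $_SERVER['PHP_SELF']; the browser only ever sees the pretty URL, and mod_rewrite maps it back to the real script internally.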


  • How do you manage your time as a team leader?

    - by Bryan Slatner
    Where I work, my role has been evolving from a pure development role to team leadership. I find that this suits me, and I'm generally enjoying it. One aspect of the job that continually vexes me, though, is time management. My day used to be pure coding. Now, I still have a largely full plate of coding duties, but I'm expected to mentor other developers, work on requirements, make design decisions for other developers, evaluate bug reports from users, assign them to developers, and so on. I find that my day has become one interruption after another, and the prolonged periods of sustained concentration needed to get any actual quality coding done are becoming rarer and rarer. Today, I finally grabbed my laptop and escaped to a coffee shop so I could get some actual work done. How do the team leads here manage their day -- or manage their workplace -- so they don't let their administrative tasks overwhelm them?


  • Software to manage "application profiles", which services are running, etc., on Windows?

    - by Lasse V. Karlsen
    I am looking for a particular type of program. I use my computers in various "modes", like gaming, programming, just plain surfing, etc. What I'd like to find is a program that can help me manage these modes. For instance, while programming I might use SQL Server, but while gaming I don't want those services running; instead, perhaps I'd like Steam to run. Basically, the type of program I'm looking for is a visual one that allows me to quickly switch modes, and when I do, the program would start and stop the necessary services and applications in order to leave one mode and enter another. I've looked at the programs related to startup management, and I haven't found one that lets me do what I want. At the moment I have batch files, but they're not very good at conveying problems or other feedback; I'd like a more visual program.


  • How should I remotely manage Dell Poweredge 2850 running Ubuntu server?

    - by Saul
    First I've got to say I'm a Linux / Ubuntu novice, so go gentle on me; I'm on day 3. I've managed to get Ubuntu Server 8.04 LTS installed and running on the Poweredge 1850 I bought off eBay. The box will go in a rack at my office, but I want to be able to work on it and power it on and off from home, and I gather that (maybe) IPMI over LAN might be the way to do this, or maybe it's something to do with the BMC? I want to be able to administer/manage it from a client PC at home running XP. I will be configuring the office router to forward ports 80 and 443 to the Ubuntu server running Apache2, and I'm puzzled about how the remote management works (unless it comes in on a different port forwarded to a different internal IP). Thanks for any help
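    For what it's worth, the BMC route being described usually looks something like the sketch below: configure the BMC once from the server's own console, then drive power remotely with ipmitool. Addresses and the password here are hypothetical, and on a BMC of this generation the plain "lan" interface (IPMI 1.5) is typically the right choice:

        # once, on the server console: put the BMC on the office LAN
        sudo modprobe ipmi_si ipmi_devintf
        sudo ipmitool lan set 1 ipsrc static
        sudo ipmitool lan set 1 ipaddr 192.168.1.50
        sudo ipmitool lan set 1 netmask 255.255.255.0
        sudo ipmitool lan set 1 defgw ipaddr 192.168.1.1
        sudo ipmitool lan set 1 access on

        # from home, with UDP port 623 forwarded to the BMC's address
        # (separate from the 80/443 forwards, which go to the OS):
        ipmitool -I lan -H office.example.com -U root -P 'secret' chassis power status
        ipmitool -I lan -H office.example.com -U root -P 'secret' chassis power on

    You would also set a BMC user password (ipmitool user set password ...) before exposing it; and since raw IPMI on the open internet is risky, putting it behind a VPN is the wiser long-term setup.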


  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like high availability, so I need a multi-master cluster, where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication), and then scale out to more nodes as needed without a reinstallation or downtime. I would like a DBMS that is easy to maintain and manage: it should be easy to add nodes, remove nodes, take live backups and monitor the use of resources. It doesn't have to be a relational database system, so a NoSQL system is okay. And I would like a free version so I can test it at small scale and compare it with alternatives. What alternatives do I have?


  • How do you manage perl modules on a Debian system?

    - by nagul
    I'd like to know if you have a method for managing perl modules on your Debian system, with respect to the following:

    - Installing new modules
    - Listing manually installed modules
    - Checking dependencies, and uninstalling modules

    I have looked at this perlmonks article for background reading: What is the best way to install CPAN modules on Debian? I have previously installed perl modules using the CPAN module. I have also used dh-make-perl in some cases, when following instructions to build other packages that had perl dependencies. I'd like to institute a coherent policy on my machine so I can better manage how and where modules are installed, and reduce the chance of breaking perl on my system. In particular, I'd like a way to detect and uninstall modules that are no longer being used.
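    One coherent policy along these lines is to let dpkg own everything: use packaged modules where they exist and dh-make-perl to package the rest. A minimal sketch (the module name is a placeholder):

        # prefer the packaged module when one exists
        apt-cache search libsome-module-perl
        sudo apt-get install libsome-module-perl

        # otherwise build a .deb from CPAN so dpkg still tracks it
        sudo apt-get install dh-make-perl
        dh-make-perl --build --cpan Some::Module
        sudo dpkg -i libsome-module-perl_*.deb

        # listing, dependency checks and removal then come for free
        dpkg -l 'lib*-perl'
        apt-cache rdepends libsome-module-perl
        sudo apt-get remove --purge libsome-module-perl

    The trade-off is a little packaging friction up front in exchange for never having CPAN-installed files that the package manager cannot see or remove.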


  • Do you know of any ways to effectively manage brightness on a triple screen setup?

    - by nizm0
    I use a desktop with multiple monitors. I have tried f.lux, and it is a great app for changing the color temperature from 3500K-6500K, but it doesn't adjust screen brightness. (I recommend you try it.) I've googled for hours trying to find an app that will let me easily manage screen brightness, either with a "dim when idle" rule or by dynamically updating screen brightness based on the time of day. The Windows 7 desktop doesn't seem to have options for adjusting brightness system-wide, and manually changing three separate monitors is too much of a chore. Anybody have some recommendations? Perhaps a way to script this, or knowledge of how to use the dynamic screen brightness functions that Windows was supposed to add in Windows 7?


  • Tools to manage large network of heterogeneous web applications?

    - by Andrew
    I recently started a new job where I've been tasked with managing a global network of heterogeneous web applications. There's very little documentation. My first order of business is to create an inventory of all of the web applications. Are there any tools out there to manage a large group of web apps? I'd like to collect a large dataset for each website, including:

    - logins for web-based control panels
    - logins to FTP/ssh accounts
    - the Google Analytics tracking code for each site
    - 3rd party libraries used
    - SSL certs, issuers, and expiration dates
    - etc.

    I know I could keep the information in Excel or build a custom database, but I'm hoping there's already a tool out there to help me with this.
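    Some of that dataset can be harvested rather than typed in. As an illustrative sketch (file names hypothetical), the SSL line items above can be pulled straight off the wire for a list of hostnames:

        # pull issuer and expiry for each site in sites.txt into a CSV
        while read -r host; do
            info=$(echo \
                | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null \
                | openssl x509 -noout -issuer -enddate \
                | tr '\n' ';')
            printf '%s,%s\n' "$host" "$info"
        done < sites.txt > ssl-inventory.csv

    Credentials, by contrast, belong in a proper secrets store rather than any auto-generated spreadsheet, whatever inventory tool ends up on top.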


  • Use a media player in Linux just to play files from an iPod device (no sync, no manage, just play)?

    - by Somebody still uses you MS-DOS
    I have an iPod classic 160GB that I sync with my machine at home. I use Linux at work, and want to just plug in my iPod and listen to the tracks, with all the playlists and such. I don't want to sync anything; I just want to listen to the tracks as if I were using the iPod itself. Why? Because this way I can use the USB port. So, I don't want to manage my iPod in Linux, I just want to listen to the tracks on it, as if it were a local library that happens to live on my iPod. (I've tried gtkpod; it works for showing my files, but I can't play, shuffle, etc. It would be interesting to have a complete audio application that handles everything as if it were a local library.)
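    A rough sketch of the no-sync approach, assuming the iPod automounts somewhere like /media/ipod (path hypothetical): the Classic keeps its tracks under iPod_Control/Music with scrambled names, but they are ordinary audio files that any player can read:

        # shuffle 20 tracks straight off the device
        find /media/ipod/iPod_Control/Music -type f \
            \( -iname '*.mp3' -o -iname '*.m4a' \) \
            | shuf | head -n 20 | xargs -d '\n' mplayer

    Playlists live in the iPod's own database, though, so for those a libgpod-aware player (Rhythmbox, Amarok) that reads the device's library in place, without syncing, is the closer fit.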


  • Ways to set up a ZFS pool on a device without the possibility to create/manage partitions?

    - by Karl Richter
    I have a NAS where I don't have the possibility to create and manage partitions (maybe I could with some hacks that I don't want to make). What ways exist to set up multiple ZFS pools with one partition each (for starters, I just want to use deduplication)? The setup should work with the NAS, i.e. over the network (I'd mount the images via NFS or CIFS). My idea so far, and the issues associated with it: sparse files mounted over a loop device (specifying a sparse file directly as a ZFS vdev doesn't work, see "Can I choose a sparse file as vdev for a zfs pool?"). The problems: the name/number of the assigned loop device is anything but constant, and I'm not sure how increasing the number of loop devices with a kernel parameter affects performance (there has to be a reason the default value limits it to 8, right?).
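    A minimal sketch of that loop-device variant, with the naming problem worked around by binding an explicit device; sizes and paths are hypothetical, and the file is preallocated because, per the linked question, ZFS refuses sparse files as vdevs:

        # preallocate a backing file on the NAS mount
        # (fallocate may not work over NFS; dd if=/dev/zero is the fallback)
        fallocate -l 64G /mnt/nas/zpool0.img

        # bind it to a fixed loop device so the name stays stable
        losetup /dev/loop7 /mnt/nas/zpool0.img

        # build the pool with dedup enabled on the root dataset
        zpool create -O dedup=on tank /dev/loop7

    A boot script then has to re-run the losetup before zpool import; raising the loop-device limit (max_loop) mostly costs a little kernel memory per extra device rather than throughput.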


  • How to manage SOAP requests to a pool of VMs, each listening on an HTTP port, with a priority value in these requests?

    - by sputnick
    I have a front SOAP web server under Linux. It has to communicate with Windows Server VMs, each listening on an HTTP port, via an HTTP POST request; the chosen VM should return a report of the task to the SOAP client. The SOAP requests carry a special variable: the priority of the request (a kind of SLA). My question is this: I'm thinking of using HA software (nginx, HAProxy, Heartbeat...) that can manage priority from this point of view. Is that relevant, or do you think I need to implement a queue myself with some specific development? For example: if I have low-priority SOAP requests in the pipe, the weight given to the VMs serving them should be decreased whenever high-priority SOAP requests arrive at the same time. Any clue will be really appreciated.
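    A sketch of how the HAProxy variant could look, assuming the client can mirror the SLA into an X-Priority header (HAProxy cannot cheaply parse the SOAP body); all names and addresses are hypothetical:

        # /etc/haproxy/haproxy.cfg (excerpt)
        frontend soap_in
            bind *:8080
            mode http
            acl high_prio hdr_val(X-Priority) ge 5
            use_backend vm_pool_fast if high_prio
            default_backend vm_pool_slow

        backend vm_pool_fast
            mode http
            server vm1 10.0.0.11:8080 weight 100
            server vm2 10.0.0.12:8080 weight 100

        backend vm_pool_slow
            mode http
            server vm1 10.0.0.11:8080 weight 10
            server vm2 10.0.0.12:8080 weight 10

    That routes and throttles by priority, but it is not a true priority queue with preemption; if the SLA requires low-priority work to yield dynamically, a small custom dispatcher in front of the VMs is still the honest answer.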


  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:

    - Backup Services: General Availability of Windows Azure Backup Services
    - Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    - Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
    - Active Directory: Securely manage hundreds of SaaS applications
    - Enterprise Management: Use Active Directory to Better Manage Windows Azure
    - Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

    All of these improvements are now available to use immediately. Below are more details about them.

    Backup Service: General Availability Release of Windows Azure Backup

    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

    Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost.

    Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored fully encrypted and can't be decrypted by anyone but yourself).

    Getting Started

    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it.

    Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

    - Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    - Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.

    Below are some of the key benefits the Windows Azure Backup Service provides:

    - Simple configuration and management. The Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.
    - Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
    - Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
    - Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.
    - Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

    Hyper-V Recovery Manager: Now Available in Public Preview

    I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM).

    Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime.

    Application data in Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

    You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.

    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines. These improvements include:

    Ability to Delete both VM Instances + Attached Disks in One Operation

    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM.

    We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button.

    Warnings on Availability Sets with Only One Virtual Machine In Them

    One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a high availability way.

    One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

    Configuring SQL Server AlwaysOn

    SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.

    Active Directory: Application Access Enhancements

    This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013).

    Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.

    Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

    - SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
    - Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
    - User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
    - Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
    - Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care; they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

    Enterprise Management: Use Active Directory to Better Manage Windows Azure

    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.

    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

    Below are some details on how all of this works:

    Subscriptions within a Directory

    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, I have a "scottgu" directory that my subscriptions are hosted within.

    Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.

    Manage your Directory

    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it.

    You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

    Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the administrators tab within it.

    Sign-In Integration within Visual Studio

    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription).

    You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required; all of the authentication was handled using your Windows Azure Active Directory!

    Manage Subscriptions across Multiple Directories

    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it.

    If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

    Filtering By Directory and Subscription

    Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

    Windows Azure SDK 2.2

    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:

    - Visual Studio 2013 Support
    - Integrated Windows Azure Sign-In support within Visual Studio
    - Remote Debugging Cloud Services with Visual Studio
    - Firewall Management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM Images for MSDN Subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell Cmdlets and ScriptCenter

    I'll post a follow-up blog shortly with more details about all of the above.

    Additional Updates

    In addition to the above enhancements, today's release also includes a number of additional improvements:

    - AutoScale: Richer time and date based scheduling support (set different rules on different dates)
    - AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
    - AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
    - Operation Logs: Auditing support for Service Bus management operations

    It has so much goodness in it that I have a whole second blog post coming shortly on the SDK alone! :-)

    Summary

    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • What web-based tool can allow a non-technical user to manage authorized_keys files on a Linux (fedora/centos/ubuntu/debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file. I am using chef to sync the user accounts across machines; however, our previous method of using webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using webmin clustered users and groups. However, looking at the currently installed webmin, it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords. (It's possible, but it's not easy; some functionality is in the usermin module, or would require some tedious steps.) Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use chef to do that part if the accounts are created correctly on the "master" server.
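    Whatever UI ends up on top, the underlying operation is small enough to sketch. A hypothetical layout: a web form (or chef) drops one .pub file per key into /etc/ssh/userkeys/<user>/, and a script rebuilds each authorized_keys from it:

        #!/bin/bash
        # rebuild every user's authorized_keys from the dropped-off .pub files
        KEYDIR=/etc/ssh/userkeys            # hypothetical path
        for d in "$KEYDIR"/*/; do
            u=$(basename "$d")
            home=$(getent passwd "$u" | cut -d: -f6) || continue
            [ -n "$home" ] || continue
            install -d -m 700 -o "$u" -g "$u" "$home/.ssh"
            cat "$d"/*.pub > "$home/.ssh/authorized_keys"
            chown "$u:$u" "$home/.ssh/authorized_keys"
            chmod 600 "$home/.ssh/authorized_keys"
        done

    With the heavy lifting in a script like this, the web layer only needs a textarea per user for pasting keys, which is about the level a non-technical user can handle.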


  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, ELB, etc. Operational network access will be via either an ssh bastion host or an OpenVPN instance. Once on the network (bastion or OpenVPN), admins use ssh to access the individual instances. From what I can tell, all of the docs seem to depend on a single user with sudo rights and a single public ssh key. But is that really best practice? Isn't it much better to have each user access each host under their own name? I can deploy accounts and ssh public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at:

    - IAM: it doesn't look like IAM has a method for automatically distributing accounts and ssh keys to VPC instances.
    - IAM via LDAP: IAM doesn't have an LDAP API.
    - LDAP: set up my own LDAP servers (redundant, of course). A bit of a pain to manage, but still better than managing on every host, especially as we grow.
    - Shared ssh key: rely on the VPN/bastion to track user activities. I don't love it, but...

    What do people recommend? NOTE: I moved this over from accidentally posting in StackOverflow.
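    For the per-user-accounts route, the distribution problem is small enough to solve with a dumb pull loop. A minimal sketch, assuming a private S3 bucket (name hypothetical) holding one public key per user, pulled by cron on every instance:

        #!/bin/bash
        # sync-users.sh: create accounts and install keys from s3://example-keys/users/
        set -e
        tmp=$(mktemp -d)
        aws s3 sync s3://example-keys/users/ "$tmp" --quiet
        for f in "$tmp"/*.pub; do
            u=$(basename "$f" .pub)
            id "$u" >/dev/null 2>&1 || useradd -m -s /bin/bash "$u"
            install -d -m 700 -o "$u" -g "$u" "/home/$u/.ssh"
            install -m 600 -o "$u" -g "$u" "$f" "/home/$u/.ssh/authorized_keys"
        done
        rm -rf "$tmp"

    Revocation is then deleting a key from the bucket plus a second pass that locks accounts with no matching key (omitted here); anything fancier than that is roughly the point where running LDAP starts to earn its keep.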

