Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

  • Is there a command like pstree for libraries?

    - by flashnode
    I need to determine whether a library named libunaSA.so is being called directly by the process or by another library called libtoki2.so. I guess what I'm looking for is a pstree for libraries. The system is running RHEL 5.3 Beta. This output shows the two libraries in the process map # grep -e toki -e una /proc/2335/maps 0043f000-004ad000 r-xp 00000000 08:02 543465 /usr/lib/libtoki2.so 004ad000-004c5000 rwxp 0006d000 08:02 543465 /usr/lib/libtoki2.so 01185000-01397000 r-xp 00000000 08:02 543503 /usr/lib/libunaSA.so 01397000-013dc000 rwxp 00211000 08:02 543503 /usr/lib/libunaSA.so This output shows that only the libtoki2.so library is in the current cache # ldconfig -p | grep -e una -e toki libtoki2.so (libc6) => /usr/lib/libtoki2.so libtoki.so.4.4.1 (libc6) => /usr/lib/libtoki.so.4.4.1 libtoki.so.2 (libc6) => /usr/lib/libtoki.so.2 I attached strace to the running process but it doesn't provide much output # strace -p 2335 Process 2335 attached - interrupt to quit futex(0xb7ef5bd8, FUTEX_WAIT, 2336, NULL Here's the output to ldd for each library # ldd /usr/lib/libtoki2.so linux-gate.so.1 => (0x00a0a000) libdl.so.2 => /lib/libdl.so.2 (0x001bd000) libstdc++-libc6.2-2.so.3 => /usr/lib/libstdc++-libc6.2-2.so.3 (0x00f3f000) libm.so.6 => /lib/libm.so.6 (0x00b27000) libc.so.6 => /lib/libc.so.6 (0x0043d000) /lib/ld-linux.so.2 (0x00742000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00110000) # ldd /usr/lib/libunaSA.so linux-gate.so.1 => (0x00244000) libpthread.so.0 => /lib/libpthread.so.0 (0x00baf000) libdl.so.2 => /lib/libdl.so.2 (0x007fa000) libstdc++-libc6.2-2.so.3 => /usr/lib/libstdc++-libc6.2-2.so.3 (0x009ce000) libm.so.6 => /lib/libm.so.6 (0x00c96000) libc.so.6 => /lib/libc.so.6 (0x004a2000) /lib/ld-linux.so.2 (0x00742000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00a9f000)
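
    As far as I know there is no direct pstree equivalent for shared libraries, but one way to see which code path reaches libunaSA.so is to attach gdb to the process, break on a symbol that the library exports, and read the backtrace when it fires. A rough sketch - the symbol name below is a placeholder, pick a real one from nm -D /usr/lib/libunaSA.so:

      gdb -p 2335
      (gdb) info sharedlibrary una          # confirm libunaSA.so is mapped
      (gdb) break some_unaSA_function       # placeholder symbol name
      (gdb) continue                        # trigger the workload, wait for the breakpoint
      (gdb) backtrace                       # caller frames show whether libtoki2.so or the main binary made the call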

    Read the article

  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of illumos with KVM. But essentially it is like Solaris and inherits from OpenSolaris. So even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault. My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC and adding an authorisation to /etc/security/auth_attr and associating that authorisation with my user. I then added the following to my SMF manifest for the service: <property_group name='general' type='framework'> <!-- Allow to be restarted--> <propval name='action_authorization' type='astring' value='solaris.smf.manage.my-server-process' /> <!-- Allow to be started and stopped --> <propval name='value_authorization' type='astring' value='solaris.smf.manage.my-server-process' /> </property_group> And this works well when imported. My unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone... all I see in that section is this: <property_group name='general' type='framework'> <property name='action_authorization' type='astring'/> <property name='value_authorization' type='astring'/> </property_group> Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
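
    One thing worth checking is whether the authorizations actually made it into the SMF repository, and if not, setting them there directly with svccfg so they show up on export; a sketch, with the service name standing in for your real FMRI:

      svccfg -s my-server-process listprop general
      svccfg -s my-server-process setprop general/action_authorization = astring: solaris.smf.manage.my-server-process
      svccfg -s my-server-process setprop general/value_authorization = astring: solaris.smf.manage.my-server-process
      svcadm refresh my-server-process
      svccfg export my-server-process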

    Read the article

  • Office 2007 constantly crashes, logged as Event ID 1000

    - by Nori
    I have a user, who despite my best efforts, is having constant Office 2007 crashes. I've tried deleting their profile and setting it up again, repairing office, uninstalling completely and then reinstalling, and swapping out memory sticks. One event log error I keep getting is the following: (note all the Office errors are event id 1000) Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d Faulting module name: EMSMDB32.DLL, version: 12.0.6539.5000, time stamp: 0x4c1246f8 Exception code: 0xc0000005 Fault offset: 0x0005d8e2 Faulting process id: 0xf6c Faulting application start time: 0x01cb6633f33384f3 Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE Faulting module path: c:\progra~2\micros~1\office12\EMSMDB32.DLL Report Id: 0d4a2eab-d231-11df-80a0-4061868f5d10 I also get this: Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d Faulting module name: olmapi32.dll, version: 12.0.6538.5000, time stamp: 0x4bfc6ad9 Exception code: 0xc0000005 Fault offset: 0x002357a9 Faulting process id: 0x5e4 Faulting application start time: 0x01cb661f4546aa77 Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE Faulting module path: c:\progra~2\micros~1\office12\olmapi32.dll Report Id: a4a90658-d224-11df-80a0-4061868f5d10 The Excel error is this: Faulting application name: EXCEL.EXE, version: 12.0.6535.5002, time stamp: 0x4bd2a7f1 Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdbdf Exception code: 0xe06d7363 Fault offset: 0x0000b727 Faulting process id: 0x14a8 Faulting application start time: 0x01cb61ab7bc0abab Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\EXCEL.EXE Faulting module path: C:\Windows\syswow64\KERNELBASE.dll Report Id: ba0c454b-cd9e-11df-80a0-4061868f5d10 Also have gotten this for PowerPoint: Faulting application name: POWERPNT.EXE, version: 12.0.6500.5000, time stamp: 0x49a68f9d Faulting module name: COMShim.dll, version: 2010.3.325.110, time stamp: 0x4c51e0b1 Exception code: 0x40000015 Fault offset: 0x0001e388 Faulting process id: 0x1480 Faulting application start time: 0x01cb5fe9a0660e81 Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\POWERPNT.EXE Faulting module path: C:\Program Files (x86)\FactSet\COMShim.dll Report Id: e03d2a21-cbdc-11df-9bc8-4061868f5d10 (Some of the above lines edited to keep you from scroll horizontally.) Lastly, I get this error several times a day, I don't think it is related but maybe it is: Failed extract of third-party root list from auto update cab at: http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab with error: A required certificate is not within its validity period when verifying against the current system clock or the timestamp in the signed file. Any ideas? This is driving me nuts.

    Read the article

  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight, the question looks similar to this one. I have experienced an odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue more clear. When trying to copy the compressed file, sized ~2.5 MB (via Explorer or CMD, doesn't matter), the process stalls after some 20%, producing an error message after a few seconds: Cannot copy filename: The specified network name is no longer available. If I start the command ping -t 192.168.2.1 (where the IP address specified belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens all network activity is frozen. After a few seconds, the network recovers, ping continues to run normally, however the copy process stands still before it displays the above error message. Copying other files (I tried 4-5 files), of which some are larger and some are smaller, succeeds. It seems to me that I can copy all uncompressed files. As soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, neither do Windows 7 clients. If I connect to the server using Remote Desktop Connection without using VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even these "problematic" files. Does anyone have any clue about what could possibly be going on?
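
    Behavior like this over an RRAS tunnel is sometimes an MTU/fragmentation problem, especially if link compression is enabled: an archive cannot be compressed further in transit, so its packets travel at full size while ordinary files may not. One quick, non-authoritative check from an affected XP client is to ping the server across the tunnel with the don't-fragment flag and step the payload size down until it succeeds (sizes are illustrative):

      ping -f -l 1400 192.168.2.1
      ping -f -l 1300 192.168.2.1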

    Read the article

  • Starting nginx with systemctl fails, but running the command manually doesn't

    - by Ivan
    On Arch Linux, for some reason, when I try to start nginx with the command "systemctl start nginx", it fails, with this being the output of "systemctl status nginx": Loaded: loaded (/etc/systemd/system/nginx.service; enabled) Active: failed (Result: exit-code) since Wed 2013-10-30 16:22:17 EDT; 5s ago Process: 9835 ExecStop=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; -s quit (code=exited, status=126) Process: 3982 ExecStart=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS) Process: 10967 ExecStartPre=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=126) Main PID: 3984 (code=exited, status=0/SUCCESS) CGroup: /system.slice/nginx.service ...but when I run /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g "pid /run/nginx.pid; daemon on; master_process on;" and then /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g "pid /run/nginx.pid; daemon on; master_process on;" as root, all it does is return a warning, but works just fine: nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1 Why is it doing that?
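
    Exit status 126 conventionally means the command was found but could not be executed, and the fact that the same chroot command works when the -g argument is quoted on the command line points at quoting inside the unit file as one possible difference. A sketch worth trying, leaving the rest of the unit unchanged - systemd honours single quotes in Exec* lines, which keeps the whole -g value as a single argument:

      ExecStartPre=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g 'pid /run/nginx.pid; daemon on; master_process on;'
      ExecStart=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g 'pid /run/nginx.pid; daemon on; master_process on;'
      ExecStop=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g 'pid /run/nginx.pid;' -s quit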

    Read the article

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is Warning: Could not boot. Warning: /dev/root does not exist To try and determine which part of the process is failing I have broken the process down into separate stages. Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (Available - here) to a pendrive and see if that will install. dd if=/path/to/iso of=/dev/sdc Burning this image was successful and it installed without issue. Step 2: Extract the ISO, repackage it and burn it to a pendrive and see if that will install. PLEASE NOTE: The final command in this section has been broken down into multiple lines for ease of reading; in fact it was run as a single command on one line. mkdir -p /mnt/linux mount -o loop /tmp/linux-install.iso /mnt/linux cd /mnt/ tar -cvf - linux | (cd /var/tmp/ && tar -xf - ) cd /var/tmp/linux xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin . This ISO was then burnt to a pendrive as before. dd if=/path/to/iso of=/dev/sdc This ISO burnt to the pen drive with no problem and will boot. I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19" I then receive the errors highlighted above. This means the kickstart file is not to blame, but repackaging the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: If it is of any help, I attempted Step 2 with an Ubuntu server ISO and the process was successful.
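
    One likely culprit is the volume label: the Fedora boot entries locate the installer image by filesystem label (an inst.stage2=hd:LABEL=... argument in isolinux.cfg), so rebuilding with -V "NewFedoraImage" leaves the kernel looking for a label that no longer exists, which typically ends in the /dev/root error. A hedged check, assuming the extracted tree is still in /var/tmp/linux:

      grep -o 'LABEL=[^ ]*' /var/tmp/linux/isolinux/isolinux.cfg     # note the label (spaces may appear as \x20)
      # then rerun xorriso exactly as before, but pass that label to -V instead of "NewFedoraImage",
      # or edit isolinux.cfg so its LABEL= value matches the new -V value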

    Read the article

  • How to add a privilege to an account in Windows?

    - by mark
    Given: A VM running Windows 2008 I am logged on there using my domain account (SHUNRANET\markk) I have added the "Create global objects" privilege to my domain account: The VM is restarted (I know logout/logon is enough, but I had to restart) I log on again using the same domain account. It still seems to have the privilege: I run some process and examine its Security properties using Process Explorer. The account does not seem to have the privilege: This is not an idle curiosity. I have a real problem: without this privilege the named pipe WCF binding works on neither Windows 2008 nor Windows 7! Here is an interesting discussion on this matter - http://social.msdn.microsoft.com/forums/en-US/wcf/thread/b71cfd4d-3e7f-4d76-9561-1e6070414620. Does anyone know how to make this work? Thanks. EDIT BTW, when I run the process elevated, everything is fine and Process Explorer does display the privilege as expected: But I do not want to run it elevated. EDIT2 I equally welcome any solution, be it configuration only or mixed with code. EDIT3 I have posted the same question on the MSDN forums and they have redirected me to this page - http://support.microsoft.com/default.aspx?scid=kb;EN-US;132958. I have yet to determine its relevance, but it looks promising. Notice also that what they propose is a purely code-based solution, so whoever moved this post to Server Fault - please move it back to Stack Overflow.

    Read the article

  • Upstart multiple instances of service not working

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB and Config server on the same box. They both use the same binary to launch, but with different config files and running on different ports. The log and lib directories are split so one set goes to mongodb and the other to mongoconf. Each process can be started without any problems on its own. start mongodb stop mongodb start mongoconf stop mongoconf But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs. Jan 6 12:44:12 mongo4 init: event_finished: Finished started event Jan 6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690 Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1 Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in the one upstart script and 'instance mongodb' in the other one, and it still fails. I can manually start the other process, so there are definitely no conflicts on port numbers or directories. Any ideas on what to try or how to get output on why it is 'terminated with status 1'? Thanx
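
    Two notes, as a sketch rather than a confirmed fix. First, the instance stanza from man 5 init is normally used with a variable inside a single job file, rather than adding a bare 'instance <name>' line to each of two separate jobs; something like this (file name, variable and paths are illustrative):

      # /etc/init/mongo.conf
      instance $CONF
      exec /usr/bin/mongod --config $CONF

      start mongo CONF=/etc/mongodb.conf
      start mongo CONF=/etc/mongoconf.conf

    Second, to see why the job dies with status 1, capture its output: add 'console output' to the job so the daemon's messages reach the console, or run the job's exec line by hand with the mongoconf config file and read the error mongod prints.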

    Read the article

  • Word 2010 creates multiple processes... sometimes

    - by Bill Sambrone
    I've run into a strange behavior when I migrated our users from Office 2007 / Vista to Office 2010 / Windows 7 (all 32-bit). They use a web based document management system called NetDocuments which stores all their .doc/.docx files. Generally, when they click on a doc from the browser window it fires up Word and opens the doc. Word has an add-in in it from NetDocs as well so it can upload the changed document directly back to the NetDocs server. I get a phone call when Word crashes, and every single time it has crashed I have witnessed multiple winword.exe processes running in task manager. I used process explorer to see what created the process, and it is all Internet Explorer. So far I have rolled them back to IE8 and the problem happens less frequently, but it still happens. When I try to duplicate the problem, I can make it happen sometimes if I open multiple documents very quickly. Using lightning fast alt-tab reflexes, I DO see that a 2nd WinWord process is created when a user clicks on a document, then it closes once the document is open. I think what is happening is that the secondary WinWord process that does some sort of NetDocs voodoo is getting stuck open. This behavior is new to Word 2010 / Windows 7 and google searching isn't coming up with much. I have seen a few posts that this is a known issue in certain circumstances and there is no "fix", but I thought it would good to ask others on this. Thanks!

    Read the article

  • mod_fcgi in virtualmin: graceful kill fail, sending SIGKILL?

    - by mgjk
    Yesterday around 1am, our server ground to a crawl. This doesn't happen often, but I'm trying to get to the bottom of it. There is no unusual traffic volume, no unusual processes running, just all of the sudden the server started killing fcgid processes. [Thu Aug 02 01:17:32 2012] [warn] mod_fcgid: process 26460 graceful kill fail, sending SIGKILL ... for as many fcgid processes as we have... CPU idle fell to 0% and I/O seemed to take up most of the load. The issue lasted about 5 minutes. I suspect there was some swap activity, although I'm not sure if it was due to killed processes being swapped in to die, or if it was because some process ramped up memory usage faster than my process watching scripts can see them. The oom-killer wasn't triggered (at least it's not logged), so I think this was Apache for some reason restarting the processes. This is not regular, and nothing obvious appears in cron. Is there a normal Apache process which might cause this? We run dozens of different sites, and it was late at night, so volume was very, very low. (maybe 200 requests in a 10 minute period).

    Read the article

  • Windows XP Loading Problem

    - by Sadeq Dousti
    Sometimes, my Windows XP does not load correctly. It shows the login screen, and when I click my username, it loads the desktop background, but Explorer does not show up. So, I cannot see the icons on the desktop, the start menu, etc. If I press Ctrl+Alt+Del to show Task Manager, I can run programs (like media player or browser) from File--New Task (Run...). Also, in the Task Manager, I see Explorer.exe running. I tried to kill and re-run it, but nothing happens. I used Sysinternals Process Explorer to see if there were any odd processes or odd behavior, but nothing was fishy. After several restarts, the system finally worked as expected. But this is not permanent: sometimes, when I restart the system, it works just as described above (Explorer does not show up), but sometimes it works normally. I used Kaspersky to search for viruses, but nothing showed up. I think the info presented above is not enough to pinpoint the problem, yet you might be able to tell me about a tool or something which I can use to give you more info, or even solve the problem. PS: I can easily use Safe Mode. It does not seem to suffer from this problem. Hence, I suspect there's some process (service) for which Explorer is waiting, but that process runs into trouble (say a race condition, or an infinite loop) and so Explorer stalls as well.

    Read the article

  • Anonymous Login attemps from IPs all over Asia, how do I stop them from being able to do this?

    - by Ryan
    We had a successful hack attempt from Russia and one of our servers was used as a staging ground for further attacks, actually somehow they managed to get access to a Windows account called 'services'. I took that server offline as it was our SMTP server and no longer need it (3rd party system in place now). Now some of our other servers are having these ANONYMOUS LOGIN attempts in the Event Viewer that have IP addresses coming from China, Romania, Italy (I guess there's some Europe in there too)... I don't know what these people want but they just keep hitting the server. How can I prevent this? I don't want our servers compromised again, last time our host took our entire hardware node off of the network because it was attacking other systems, causing our services to go down which is really bad. How can I prevent these strange IP addresses from trying to access my servers? They are Windows Server 2003 R2 Enterprise 'containers' (virtual machines) running on a Parallels Virtuozzo HW node, if that makes a difference. I can configure each machine individually as if it were it's own server of course... UPDATE: New login attempts still happening, now these ones are tracing back to Ukraine... WTF.. here is the Event: Successful Network Logon: User Name: Domain: Logon ID: (0x0,0xB4FEB30C) Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Workstation Name: REANIMAT-328817 Logon GUID: - Caller User Name: - Caller Domain: - Caller Logon ID: - Caller Process ID: - Transited Services: - Source Network Address: 94.179.189.117 Source Port: 0 For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Here is one from France I found too: Event Type: Success Audit Event Source: Security Event Category: Logon/Logoff Event ID: 540 Date: 1/20/2011 Time: 11:09:50 AM User: NT AUTHORITY\ANONYMOUS LOGON Computer: QA Description: Successful Network Logon: User Name: Domain: Logon ID: (0x0,0xB35D8539) Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Workstation Name: COMPUTER Logon GUID: - Caller User Name: - Caller Domain: - Caller Logon ID: - Caller Process ID: - Transited Services: - Source Network Address: 82.238.39.154 Source Port: 0 For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
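
    These entries look like NTLM null-session ("anonymous") network logons arriving over SMB/NetBIOS. Beyond blocking TCP 139 and 445 from the Internet at the Virtuozzo node or Windows Firewall, Server 2003 can restrict what anonymous connections are allowed to do through the Lsa registry values (the equivalent of the "Network access" options in Local Security Policy). A sketch to test on one container first, and to verify against any legitimate null-session use before rolling out:

      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RestrictAnonymous /t REG_DWORD /d 1 /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RestrictAnonymousSAM /t REG_DWORD /d 1 /f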

    Read the article

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoid this whole process?) The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe data and restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite it being "old" was right and therefore the SMSes, calendar entries, etc. were discarded? As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more-recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • How to scan multiple pages from a book under Linux?

    - by rumtscho
    I want the process to look like: I choose the correct scan settings (dpi, color depth, etc) I lay the first page on the scanner and trigger the process The scanner scans the page and waits for me to position the next page correctly I confirm that the next page is ready for scanning Repeat the above two steps until I tell the scanner that there are no more pages to come The scanner saves everything into a single PDF. I tried both xsane and gscan2pdf. First problem: they want me to know how many pages will be scanned. This is already a nuisance, but I can do the counting if needed. The main problem is that in step 3, the scanner does not pause. It is probably optimised for being fed loose sheets. The next scan process is triggered automatically as soon as the CCD has returned to the start position. The time the scanner needs to return the CCD is very short and I can't turn the page and position the book properly. Is there a software which can do the scan process in the way I described above, or did I just miss a setting available in xsane or gscan2pdf to make the scanner pause? If it makes any difference, the scanner is an Epson Stylus SX620FW, I run it using the manufacturer-provided driver.
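
    If the Epson backend works with SANE's command-line frontend, scanimage's batch mode already behaves roughly as described: with --batch-prompt it waits for a key press before each page, for as many pages as you feed it, with no page count needed up front. A sketch (resolution, mode and filenames are illustrative; the PDF is assembled afterwards with ImageMagick):

      scanimage --batch=page-%03d.pnm --batch-prompt --resolution 300 --mode Color
      convert page-*.pnm book.pdf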

    Read the article

  • Why is my cron daemon is being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (varying anywhere between 1 - 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason: nanosleep({60, 0}, <unfinished ...> +++ killed by SIGKILL +++ There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04 so resources shouldn't be the issue. With cron running, free -m reports: total used free shared buffers cached Mem: 1024 211 812 0 0 0 -/+ buffers/cache: 211 812 Swap: 0 0 0 I also tried removing and reinstalling the cron package using apt-get. Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
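
    On OpenVZ a process killed by SIGKILL with nothing in the guest's own logs often points at limits enforced by the host rather than anything inside the container, so that is worth ruling out first. Two things to try (the audit rule key is illustrative, and auditd may not be usable inside an OpenVZ container):

      cat /proc/user_beancounters        # any non-zero failcnt means the host is enforcing a limit against you
      apt-get install auditd
      auditctl -a exit,always -F arch=b64 -S kill -k cron_killer
      ausearch -k cron_killer -i         # after cron dies, shows which PID/UID issued the kill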

    Read the article

  • Starting programs from terminal then exiting terminal exits started programs?

    - by sherrellbc
    I really was unsure how to phrase the question title. What I mean is that when I use the terminal to start a program, most of the time when the terminal is closed it also exits the programs started from it. Now this makes sense if we look at it from a hierarchical standpoint of the terminal being the parent process which spawns child processes, and any halt of the parent causes subsequent halting of the children as well. However, I've noticed this to not always be the case. For example, I downloaded Sublime Text Editor and created a symlink in PATH. I can start this program by issuing a sublime command from the terminal, but subsequent closure of the terminal program does nothing to sublime. However, at other times the child process that was started is also closed, or it hangs and causes problems. tl;dr: Is it always the case that programs started from a closed parent process will be closed when the parent is exited? And if so, is there a way to start a program from the terminal and then close the terminal without exiting the started process? The whole point here is to start programs from the terminal so I do not overly populate my desktop with symlinks.
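
    The usual ways to detach a program from the terminal that launched it are nohup, setsid, or the shell's disown builtin; any of these should survive closing the terminal (sublime here stands in for whatever command is on PATH):

      nohup sublime >/dev/null 2>&1 &    # ignore the hangup signal sent when the terminal closes
      setsid sublime                     # run in a new session with no controlling terminal
      sublime & disown                   # background it, then drop it from the shell's job table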

    Read the article

  • finding files that match a precise size: a multiple of 4096 bytes

    - by doub1ejack
    I have several drupal sites running on my local machine with WAMP installed (apache 2.2.17, php 5.3.4, and mysql 5.1.53). Whenever I try to visit the administrative page, the php process seems to die. From apache_error.log: [Fri Nov 09 10:43:26 2012] [notice] Parent: child process exited with status 255 -- Restarting. [Fri Nov 09 10:43:26 2012] [notice] Apache/2.2.17 (Win32) PHP/5.3.4 configured -- resuming normal operations [Fri Nov 09 10:43:26 2012] [notice] Server built: Oct 24 2010 13:33:15 [Fri Nov 09 10:43:26 2012] [notice] Parent: Created child process 9924 [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Child process is running [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Acquired the start mutex. [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting 64 worker threads. [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting thread to listen on port 80. Some research has led me to a php bug report on the '4096 byte bug'. I would like to see if I have any files whose filesize is a multiple of 4096 bytes, but I don't know how to do that. I have gitBash installed and can use most of the typical linux tools through that (find, grep, etc), but I'm not familiar enough with linux to figure it out on my own. Little help?
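
    With the GNU find that Git Bash ships, files whose size is an exact multiple of 4096 bytes can be listed by printing each size and filtering with awk; a sketch run from the Drupal docroot:

      find . -type f -printf '%s %p\n' | awk '$1 > 0 && $1 % 4096 == 0'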

    Read the article

  • Windows 7 constantly accessing hard drive [duplicate]

    - by Zohar
    Possible Duplicate: Tool which finds which process is causing the heavy hard drive activity? Did you notice that on Windows 7 (I use 64-bit) the hard drive LED is constantly blinking, which means that the OS is constantly wearing out the hard drive by accessing it? It's something related to the system process, and it even occurs in safe mode, so I don't think it's a third-party software problem. Has anyone experienced this problem as well, and is it a Windows problem, or caused by something else? Edit: My indexing service is reduced to indexing only the Start Menu. Even if it was set for the whole computer, it would eventually stop; that's not it. My friends also suffer from the same problem. Please answer my first question: have any of you seen a Windows 7 machine whose hard drive LED is at rest? I'm also trying to track down the offending process using procmon and Resource Monitor, and it actually seems like a system process. It could also be svchost.exe, and I'm not sure which files they are accessing since I see a lot of activity which I can't make sense of. It's loading system DLLs, accessing registry keys, and other nonsense.

    Read the article

  • Perfmon % Processor Time vs. task manager's CPU usage

    - by nat
    I'm new to using Perfmon and performance monitoring in general (so go easy on me please ;) I know that Perfmon doesn't have anything exactly like Task Manager's CPU usage display, but I'm trying to figure out how to monitor user's CPU usage via Perfmon in a similar way, and trying to understand the measurements (or how to convert the numbers to get a similar understanding) For example, if in Task Manager, a particular user is consistently using more than 5% CPU, I would want to contact the user about it. I learn best by example, so here is exactly what I'm trying to do, with a specific example: This is for a 32-bit Dual Quad Core Windows 2003 web server (8 CPUs), there are many web sites on the server, each running within their own application pool/worker process ID. Through other research here I learned of a registry change that I made so that the PID shows up with the w3wp process so I can easily identify the site later by cross-referencing it. I set up a counter with the following settings: Process -> % Processor Time -> all instances Here is an example. Say I'm interested in "black line" user in this graph below, as his process is spiking quite high compared to all the other users: (I wasn't allowed to post the image as I'm a new user on this site.. I've uploaded the image to:) http://i35.tinypic.com/106yn8k.jpg So... using this as an example, I see that they have an AVERAGE % PROCESSOR TIME of 23.264 , and have spiked as high as 103.124 So what exactly does this 23.264 number mean to me? Is it similar to an average of Task Manager's CPU reading for this user? Or, since this server has 8 CPUs, should I divide this number by 8? (23.264/8 = 2.9% AVERAGE CPU LOAD?) Thanks in advance.
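
    To collect the same counter outside the Perfmon GUI for later arithmetic, typeperf can log it to CSV (interval, sample count and output path are illustrative); dividing each logged value by the number of logical processors - 8 here - gives a Task-Manager-style percentage, and the w3wp columns can be picked out of the CSV afterwards:

      typeperf "\Process(*)\% Processor Time" -si 15 -sc 240 -o w3wp_cpu.csv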

    Read the article

  • Java threads, wait time always 00:00:00-Producer/Consumer

    - by user3742254
    I am currently doing a producer consumer problem with a number of threads and have had to set priorities and waits to them to ensure that one thread, the security thread, runs last. I have managed to do this and I have managed to get the buffer working. The last thing that I am required to do is to show the wait time of threads that are too large for the buffer and to calculate the average wait time. I have included code to do so, but everything I run the program, the wait time is always returned as 00:00:00, and by extension, the average is returned as the same. I was speaking to one of my colleagues who said that it is not a matter of the code but rather a matter of the computer needing to work off of one processor, which can be adjusted in the task manager settings. He has an HP like myself but his program prints the wait time 180 times, whereas mine prints usually about 3-7 times and is only 00:00:01 on one instance before finishing when I have made the processor adjustments. My other colleague has an iMac and hers puts out an average of 42:00:34(42 minutes??) I am very confused about this because I can see no difference between our codes and like my colleague said, I was wondering is it a computer issue. I am obviously concerned as I wanted to make sure that my code correctly calculated an average wait time, but that is impossible to tell when the wait times always show as 00:00:00. To calculate the thread duration, including the time it entered and exited the buffer was done by using a timestamp import, and then subtracting start time from end time. Is my code correct for this issue or is there something which is missing? I would be very grateful for any solutions. Below is my code: My buffer class package com.Com813cw; import java.text.DateFormat; import java.text.SimpleDateFormat; /** * Created by Rory on 10/08/2014. */ class Buffer { private int contents, count = 0, process = 200; private int totalRam = 1000; private boolean available = false; private long start, end, wait, request = 0; private DateFormat time = new SimpleDateFormat("ss:SSS"); public int avWaitTime =0; public void average(){ System.out.println("Average Application Request wait time: "+ time.format(request/count)); } public synchronized int get() { while (process <= 500) { try { wait(); } catch (InterruptedException e) { } } process -= 200; System.out.println("CPU After Process " + process); notifyAll(); return contents; } public synchronized void put(int value) { if (process <= 500) { process += value; } else { start = System.currentTimeMillis(); try { wait(); } catch (InterruptedException e) { } end = System.currentTimeMillis(); wait = end - start; count++; request += wait; System.out.println("Application Request Wait Time: " + time.format(wait)); process += value; contents = value; calcWait(wait, count); } notifyAll(); } public void calcWait(long wait, int count){ this.avWaitTime = (int) (wait/count); } public void printWait(){ System.out.println("Wait time is " + time.format(this.avWaitTime)); } } My spotify class package com.Com813cw; import java.sql.Timestamp; /** * Created by Rory on 11/08/2014. 
*/ class Spotify extends Thread { private Buffer buffer; private int number; private int bytes = 250; public Spotify(Buffer c, int number) { buffer = c; this.number = number; } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 20; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes "); try { sleep(1000); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("Spotify has finished executing."); System.out.println("Time taken to execute was " + timeTaken + " milliseconds"); System.out.println("Time that Spotify thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("-----------------------------"); } } My BubbleWitch class package com.Com813cw; import java.lang.*; import java.lang.System; import java.sql.Timestamp; /** * Created by Rory on 10/08/2014. */ class BubbleWitch2 extends Thread { private Buffer buffer; private int number; private int bytes = 100; public BubbleWitch2(Buffer c, int number) { buffer = c; this.number=number ; } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 10; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes "); try { sleep(1000); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("BubbleWitch2 has finished executing."); System.out.println("Time taken to execute was " +timeTaken+ " milliseconds"); System.out.println("Time Bubblewitch2 thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("-----------------------------"); } } My Test class package com.Com813cw; /** * Created by Rory on 10/08/2014. */ public class ProducerConsumerTest { public static void main(String[] args) throws InterruptedException { Buffer c = new Buffer(); BubbleWitch2 p1 = new BubbleWitch2(c,1); Processor c1 = new Processor(c, 1); Spotify p2 = new Spotify(c, 2); SystemManagement p3 = new SystemManagement(c, 3); SecurityUpdate p4 = new SecurityUpdate(c, 4, p1, p2, p3); p1.setName("BubbleWitch2 "); p2.setName("Spotify "); p3.setName("System Management "); p4.setName("Security Update "); p1.setPriority(10); p2.setPriority(10); p3.setPriority(10); p4.setPriority(5); c1.start(); p1.start(); p2.start(); p3.start(); p4.start(); p2.join(); p3.join(); p4.join(); c.average(); System.exit(0); } } My security update package com.Com813cw; import java.lang.*; import java.lang.System; import java.sql.Timestamp; /** * Created by Rory on 11/08/2014. 
*/ class SecurityUpdate extends Thread { private Buffer buffer; private int number; private int bytes = 150; private int process = 0; public SecurityUpdate(Buffer c, int number, BubbleWitch2 bubbleWitch2, Spotify spotify, SystemManagement systemManagement) throws InterruptedException { buffer = c; this.number = number; bubbleWitch2.join(); spotify.join(); systemManagement.join(); } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 15; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes"); try { sleep(1500); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("Security Update has finished executing."); System.out.println("Time taken to execute was " + timeTaken + " milliseconds"); System.out.println("Time that SecurityUpdate thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("------------------------------"); } } I'd be grateful as I said for any help as this is the last and most frustrating obstacle.

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, and include some fantastic enhancements to ASP.NET and the Entity Framework.  You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release.  Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials. One ASP.NET With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File-New Project with VS 2013 you’ll now see a single ASP.NET Project option: Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it.  For example, you could start with a Web Forms template and add Web API or Web Forms support for it, or create a MVC project and also enable Web Forms pages within it: This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span. Richer Authentication Support The new “One ASP.NET” project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications – and makes it much easier to build secure applications that enable SSO from a variety of identity providers.  For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application: No Authentication Individual User Accounts (Single Sign-On support with FaceBook, Twitter, Google, and Microsoft ID – or Forms Auth with ASP.NET Membership) Organizational Accounts (Single Sign-On support with Windows Azure Active Directory ) Windows Authentication (Active Directory in an intranet application) The Windows Azure Active Directory support is particularly cool.  Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds).  It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories.  Simply choose the “Organizational Accounts” radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory to do this: This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it.  Now when you run the app your users can easily and securely sign-in using their Active Directory credentials within it – regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013. Responsive Project Templates with Bootstrap The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tables and desktops. 
For example in a browser window the home page created by the MVC template looks like the following: When you resize the browser to a narrow window to see how it would like on a phone, you can notice how the contents gracefully wrap around and the horizontal top menu turns into an icon: When you click the menu-icon above it expands into a vertical menu – which enables a good navigation experience for small screen real-estate devices: We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices – and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there.  You can learn more about Bootstrap here. Visual Studio Web Tooling Improvements Visual Studio 2013 includes a new, much richer, HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense Grouping, and other great improvements. For example, typing “ng-“ on an HTML element will show the intellisense for AngularJS: This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications: The screen shot below demonstrates how the HTML editor can also now inspect your page at design-time to determine all of the CSS classes that are available. In this case, the auto-completion list contains classes from Bootstrap’s CSS file. No more guessing at which Bootstrap element names you need to use: Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing support. The LESS editor comes with all the cool features from the CSS editor and has specific Intellisense for variables and mixins across all the LESS documents in the @import chain. Browser Link – SignalR channel between browser and Visual Studio The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, FireFox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers all at the same time.  This makes it much easier to easily develop/test against multiple browsers in parallel. Browser Link also exposes an API to enable developers to write Browser Link extensions.  By enabling developers to take advantage of the Browser Link API, it becomes possible to create very advanced scenarios that crosses boundaries between Visual Studio and any browser that’s connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser’s developer tools, remote controlling mobile emulators and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward. ASP.NET Scaffolding ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms. 
When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API Controller, the required NuGet packages and references to enable Web API are added to your project automatically.  To do this, just choose the Add->New Scaffold Item context menu: Support for scaffolding async controllers uses the new async features from Entity Framework 6. ASP.NET Identity ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports Claims-based authentication, where the user’s identity is represented as a set of claims from a trusted issuer. Users can login by creating an account on the website using username and password, or they can login using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.  ASP.NET Web API 2 ASP.NET Web API 2 has a bunch of great improvements including: Attribute routing ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers like this: OAuth 2.0 support The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices. OData Improvements ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Below are some of the specific enhancements in ASP.NET Web API 2 OData. Support for $select, $expand, $batch, and $value Improved extensibility Type-less support Reuse an existing model OWIN Integration ASP.NET Web API now fully supports OWIN and can be run on any OWIN capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API. More Web API Improvements In addition to the features above there have been a host of other features in ASP.NET Web API, including CORS support Authentication Filters Filter Overrides Improved Unit Testability Portable ASP.NET Web API Client To learn more go to http://www.asp.net/web-api/ ASP.NET SignalR 2 ASP.NET SignalR is library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. 
We’ve added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We’ve also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr. ASP.NET MVC 5 The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features including: Authentication filters: These filters allow you to specify authentication logic per-action, per-controller or globally for all controllers. Attribute Routing: Attribute Routing allows you to define your routes on actions or controllers. To learn more go to http://www.asp.net/mvc Entity Framework 6 Improvements Visual Studio 2013 ships with Entity Framework 6, which bring a lot of great new features to the data access space: Async and Task<T> Support EF6’s new Async Query and Save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios.  This allows you to free up threads that might otherwise by blocked on data access requests, and enable them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds the thread will be re-queued within your ASP.NET application and execution will continue.  This enables you to easily write significantly more scalable server code. Here is an example ASP.NET WebAPI action that makes use of the new EF6 async query methods: Interception and Logging Interception and SQL logging allows you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human readable log – which is great for debugging – as well as some lower level building blocks that give you access to the command and results. Here is an example of wiring up the simple log to Debug in the constructor of an MVC controller: Custom Code-First Conventions The new Custom Code-First Conventions enable bulk configuration of a Code First model – reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don’t match the Code First conventions. For example, the following convention configures all properties that are called ‘Key’ to be the primary key of the entity they belong to. This is different than the default Code First convention that expects Id or <type name>Id. Connection Resiliency The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. 
This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new Code-Based Configuration support: These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features. Microsoft OWIN Components Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET “Katana” project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana. Summary Today’s Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These feature span from server framework to data access to tooling to client-side HTML development.  They also integrate some great open-source technology and contributions from our developer community. Download and start using them today! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often, this is not appropriate.  Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort.  Before we being parallelizing our algorithm based on the tasks being performed, we need to decompose our problem, and take special care of certain considerations such as ordering and grouping of tasks. Up to this point in this series, I’ve focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data.  Using PLINQ and the Parallel class, I’ve shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate.  Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms.  Here, there is no collection of data, but there may still be opportunities for parallelism. As I mentioned before, in cases like this, the approach is to look at your overall routine, and decompose your problem space based on tasks.  The idea here is to look for discrete “tasks,” individual pieces of work which can be conceptually thought of as a single operation. Let’s revisit the example I used in Part 1, an application startup path.  Say we want our program, at startup, to do a bunch of individual actions, or “tasks”.  The following is our list of duties we must perform right at startup: Display a splash screen Request a license from our license manager Check for an update to the software from our web server If an update is available, download it Setup our menu structure based on our current license Open and display our main, welcome Window Hide the splash screen The first step in Task Decomposition is breaking up the problem space into discrete tasks. This, naturally, can be abstracted as seven discrete tasks.  In the serial version of our program, if we were to diagram this, the general process would appear as: These tasks, obviously, provide some opportunities for parallelism.  Before we can parallelize this routine, we need to analyze these tasks, and find any dependencies between tasks.  In this case, our dependencies include: The splash screen must be displayed first, and as quickly as possible. We can’t download an update before we see whether one exists. Our menu structure depends on our license, so we must check for the license before setting up the menus. Since our welcome screen will notify the user of an update, we can’t show it until we’ve downloaded the update. Since our welcome screen includes menus that are customized based off the licensing, we can’t display it until we’ve received a license. We can’t hide the splash until our welcome screen is displayed. By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on their dependencies. Looking at these tasks, and looking at all the dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.  
    In order to simplify the problem of defining the dependencies, it's often a useful practice to group our tasks into larger, discrete tasks. The goal when grouping tasks is to make each task "group" have as few dependencies as possible on other tasks or groups, and then work out the dependencies within that group. Typically, this works best when any external dependency is based on the "last" task within the group when it's ordered, although that is not a firm requirement. This process is often called Grouping Tasks. In our case, we can easily group tasks together, effectively turning this into four discrete task groups:
    1. Show our splash screen - This needs to be left as its own task. Multiple things depend on this task, mainly because we want it to start before any other action, and to start as quickly as possible.
    2. Check for an update and download the update if it exists - These two tasks logically group together. We know we only download an update if the update exists, so that naturally follows. This group has one dependency as an input, and other tasks only rely on the final task within this group.
    3. Request a license, and then set up the menus - Here, we can group these two tasks together. Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here. Setting up our menus cannot happen until after our license is requested. By grouping these together, we further reduce our problem space.
    4. Display the welcome window and hide the splash - Finally, we can display our welcome window and hide our splash screen. This task group depends on all three previous task groups - it cannot happen until all three of the previous groups have completed.
    By grouping the tasks together, we reduce our problem space and can naturally see a pattern for how this process can be parallelized: each task group contains its ordered tasks, and the groups themselves can run side by side. We can now effectively take these tasks and run a large portion of this process in parallel, including the portions which may be the most time consuming. We've created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically.
    The main point to remember here is that, when decomposing your problem space by tasks, you need to:
    - Define each discrete action as an individual task
    - Discover dependencies between your tasks
    - Group tasks based on their dependencies
    - Order the tasks and groups of tasks
    A minimal sketch of how these task groups might be wired up with the Task Parallel Library follows this entry.
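    The sketch below is not from the original article; it shows one possible way to wire up the four task groups with the Task Parallel Library. ShowSplashScreen, CheckForUpdate, DownloadUpdate, RequestLicense, SetupMenus, ShowWelcomeWindow and HideSplashScreen are hypothetical helper methods.

        // Requires: using System.Threading.Tasks;

        // Group 1: show the splash screen as early as possible.
        var splash = Task.Factory.StartNew(() => ShowSplashScreen());

        // Group 2: check for an update, then download it if one exists.
        var update = Task.Factory.StartNew(() =>
        {
            var updateInfo = CheckForUpdate();
            if (updateInfo.UpdateAvailable)
                DownloadUpdate(updateInfo);
        });

        // Group 3: request a license, then build the menus from it.
        var license = Task.Factory.StartNew(() =>
        {
            var lic = RequestLicense();
            SetupMenus(lic);
        });

        // Group 4: depends on all three previous groups completing.
        Task.WaitAll(splash, update, license);
        ShowWelcomeWindow();
        HideSplashScreen();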

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
    - Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system is always reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications.
    - Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system.
    Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes.
    The open-source project "Linux Containers" (LXC) is driving the development of the technology, which is based on the "Control Groups" (CGroups) and "Namespaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide the means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory and CPU cycles, as well as the disk and network throughput (in MB/s or IOPS), available to an application. Namespaces help to isolate process groups from each other, e.g. the visibility of other running processes or exclusive access to a network device. It is also possible to restrict a process group's access to and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Namespaces provide the foundation on which Linux Containers are based, but they can also be used independently.
    A more detailed description of how Linux Containers can be created and managed on Oracle Linux will follow in the second part of this article.
    Additional links related to Linux Containers:
    - OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy
    - Linux Containers on Wikipedia
    - Lenz Grimmer
    Follow me on: Personal Blog | Facebook | Twitter | Linux Blog

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET profiler and some closing thoughts.
    Test Results – I see some creepy worms!
    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close-to-real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:
    - Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.
    - After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. The summary view provides key results in a format that is compact and easy to read. You can also print the load test summary; it is generated after the test has completed or been stopped.
    - Analyse the load test results of a previously run load test - We'll see this in the section where I discuss comparison between two test runs.
    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over data points to see more details of the test run.
    Generate Report => Test Run Comparisons
    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.
    Compare results
    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts
    While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might escalate the problem; the better suggestion is to try and drill down to the actual root cause.
    Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up, but realistically the garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap: anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen-0, and long-living objects eventually move to Gen-2 as garbage collection goes through.
    All objects smaller than 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started: it checks for dead objects, marks them as candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected, the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0. By this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, while garbage collection is running all page requests that come in during that period are queued up. This may explain why page requests are getting queued up; apart from this, it could also be the case that you are waiting for a long-running database process to complete. (A small code sketch illustrating this generational promotion appears at the end of this entry.)
    Let's explore the heap a bit more. The real crisis case is when objects live long enough to make it to Gen-2 and then die; that is definitely a high-cost operation. But sometimes you need objects in memory; for example, when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. If you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. That is a great way to free up all the memory in the heap, but it also clears the cache. The problem is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, to give it another try!
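    Before moving on to the fixes, here is a small console sketch (not from the original post) that illustrates the generational promotion described above; the commented values are the output typically observed on the desktop CLR.

        using System;

        class HeapDemo
        {
            static void Main()
            {
                var small = new byte[1024];                  // < 85 KB: starts in Gen-0
                Console.WriteLine(GC.GetGeneration(small));  // 0

                GC.Collect();                                // survives a collection...
                Console.WriteLine(GC.GetGeneration(small));  // 1 (promoted)

                GC.Collect();
                Console.WriteLine(GC.GetGeneration(small));  // 2 (promoted again)

                var large = new byte[100 * 1024];            // > 85 KB: large object heap
                Console.WriteLine(GC.GetGeneration(large));  // 2 (the LOH is reported as Gen-2)
            }
        }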
    How can you address this? Well, there are two ways:
    1. Workaround - An x86 (32-bit) processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'Any CPU' mode, the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to 'x86' in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW - by running Windows on Windows). What that means is the machine can use 8 GB+ worth of RAM, and if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.
    2. Solution - Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you started in the right direction. Let's have a look at how.
    Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session Properties page, then check the check boxes in the '.NET memory profiling collection' group, namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.
    Now, if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing IntelliTrace data you can drill in and narrow down to the line of code that is the root cause of the problem. We have now seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it to identify and reproduce such problems with the best-of-breed tools in the product.
    Caching
    One of the main ways to improve performance is caching, which basically means you tell the web server to keep the data on the web server instead of going to the database for each request; when the user asks for it, you serve it from the web server itself. BUT that can have consequences!
    Let's look at some code - and trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. It is easy to handle this by adding a lock (the original snippet is not reproduced in this excerpt; a sketch of the pattern follows this entry). As long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache - no more concurrency issues.
    But let's say you are processing 10 requests per second. By the time I have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after taking the lock and before the actual call to the database, the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say the code won't be very intuitive? Maybe you should leave a comment - you don't want another developer to come in and wonder why you are checking the cache key for null twice!
    The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick.
    Figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will have immense performance gains, reducing the database hits for searching by 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly - that is a cache failure! These travel sites and price-compare engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gets the full performance benefit, but every once in a while it annoys a customer because the fare has expired.
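    The snippet the author refers to is not included in this excerpt; the following is a sketch of the double-null-check micro-caching pattern described above, using ASP.NET's HttpRuntime.Cache. The class, the cache key format and the QueryDatabase helper are illustrative, not the author's code.

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Caching;

        public class FlightSearchService
        {
            private static readonly object CacheLock = new object();

            public IList<string> SearchFlights(string criteria)
            {
                string cacheKey = "FlightSearch:" + criteria;

                // First check: serve from the cache when possible.
                var results = HttpRuntime.Cache[cacheKey] as IList<string>;
                if (results == null)
                {
                    lock (CacheLock)
                    {
                        // Second check: another request may have populated the
                        // cache while we were waiting for the lock.
                        results = HttpRuntime.Cache[cacheKey] as IList<string>;
                        if (results == null)
                        {
                            results = QueryDatabase(criteria);

                            // Micro cache: absolute expiry after 1 minute.
                            HttpRuntime.Cache.Insert(cacheKey, results, null,
                                DateTime.UtcNow.AddMinutes(1),
                                Cache.NoSlidingExpiration);
                        }
                    }
                }
                return results;
            }

            private IList<string> QueryDatabase(string criteria)
            {
                // Placeholder for the real database query.
                return new List<string> { "result for " + criteria };
            }
        }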
    But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second and cater to the site traffic, maybe losing one customer every once in a while to a competitor who is also using a similar caching technique, and what are the odds that the user will not come back to their site sooner or later?
    Recap
    Resources
    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.
    The Road Ahead
    Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next, in 'Load Testing in the Cloud', I'll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank you!

    Read the article
