Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • Is there a limit on the number of mutex objects that can be created in a Windows process?

    - by young-phillip
    I'm writing a C# application that can create a series of request messages. Each message could have a response that needs to be waited on by a consumer. Where the number of outstanding request messages is constrained, I have used Windows event objects to solve this problem. However, I know there is a limit on how many event objects can be created in a single process, and in this instance it's possible I might exceed that limit. Does anyone know if there is a similar limit on the creation of mutex objects or semaphores? I know this can be solved by some sort of pool of shared resources that consumers grab when they need to wait, but it would be more convenient if each request message could have its own sync object.
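
    A minimal C# sketch of one alternative, assuming .NET 4 or later is available and using a hypothetical RequestMessage type: ManualResetEventSlim gives each request its own wait object but only allocates a kernel handle if a thread actually blocks, so large numbers of outstanding requests do not each pin a kernel event.

      using System.Collections.Concurrent;
      using System.Threading;

      // Hypothetical request type, for illustration only.
      class RequestMessage
      {
          public readonly ManualResetEventSlim Completed = new ManualResetEventSlim(false);
          public string Response;
      }

      class Requester
      {
          readonly ConcurrentDictionary<int, RequestMessage> pending =
              new ConcurrentDictionary<int, RequestMessage>();

          public string Send(int id)
          {
              var msg = new RequestMessage();
              pending[id] = msg;
              // ... transmit the request here ...
              msg.Completed.Wait();        // consumer blocks until a response arrives
              return msg.Response;
          }

          public void OnResponse(int id, string response)
          {
              RequestMessage msg;
              if (pending.TryRemove(id, out msg))
              {
                  msg.Response = response;
                  msg.Completed.Set();     // wake the waiting consumer
              }
          }
      }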


  • Are all SPWeb or SPSite instances automatically disposed when a console app process has ended?

    - by Janis Veinbergs
    We have best practices on using disposable objects in SharePoint, but I'm wondering: can I skip them when using a console application? It's code I want to execute once, after which the process has finished. Do SPSite and SPWeb instances remain open somewhere, or don't they? Why am I asking this? I just don't want to stress when using something like var lists = from web in site.AllWebs.Cast<SPWeb>() where web is meeting workspace && list is task list select list and then doing some work on those lists, etc. There is a serious resource leak there because the webs get opened, filtered, and NOT closed. So should I worry in a console app?
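
    A minimal sketch of the explicit-disposal pattern, assuming the usual SharePoint object model and a placeholder site URL: even in a short-lived console app, each SPWeb returned by AllWebs holds unmanaged resources, so the common guidance is to dispose each one as soon as you are done with it rather than rely on the process exiting.

      using Microsoft.SharePoint;

      class Program
      {
          static void Main()
          {
              using (SPSite site = new SPSite("http://server/sites/example"))  // placeholder URL
              {
                  foreach (SPWeb web in site.AllWebs)
                  {
                      try
                      {
                          // ... filter for meeting workspaces and their task lists here ...
                      }
                      finally
                      {
                          web.Dispose();   // each web opened by AllWebs is released promptly
                      }
                  }
              }   // the site itself is disposed here
          }
      }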


  • Unable to delete file locked by same process -- weird!

    - by user300266
    I have an application written in PHP that uses a COM DLL written in C#. The DLL creates an image file by combining two other image files. The PHP script then takes over to do the housekeeping tasks of deleting the two source files and renaming the resulting combined file. The problem is that the PHP script can't delete one of the source files because it's locked. The weird thing is that the process holding the lock is itself, which in this case is the Apache web server. I have tried altering the C# DLL to dispose of all bitmap and graphics objects prior to exiting, and yet the lock remains. My question is: what can I do to get the DLL to let go and release the file locks? This is very frustrating.
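
    A minimal C# sketch of how the combining routine inside the DLL could be structured so that no handle on the source files outlives the call; the class, method, and file names are placeholders. GDI+ keeps the underlying file open for the lifetime of an Image created with Image.FromFile, which is a common cause of exactly this kind of lock surviving until the hosting process (here Apache) recycles.

      using System;
      using System.Drawing;
      using System.Drawing.Imaging;

      public static class ImageCombiner
      {
          public static void Combine(string leftPath, string rightPath, string outPath)
          {
              using (Image left = Image.FromFile(leftPath))
              using (Image right = Image.FromFile(rightPath))
              using (var combined = new Bitmap(left.Width + right.Width,
                                               Math.Max(left.Height, right.Height)))
              using (Graphics g = Graphics.FromImage(combined))
              {
                  g.DrawImage(left, 0, 0);
                  g.DrawImage(right, left.Width, 0);
                  combined.Save(outPath, ImageFormat.Jpeg);
              }   // every handle on the two source files is released here
          }
      }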


  • Multi-process builds in Visual Studio 2010: Worth it?

    - by coryr
    I've started testing our C++ software with VS2010 and the build times are really bad (30-45 minutes, about double the VS2005 times). I've been reading about the /MP switch for multi-process compilation. Unfortunately, it is incompatible with some features that we use quite a bit, such as #import, incremental compilation, and precompiled headers. Have you had a similar project where you tried the /MP switch after turning off things like precompiled headers? Did you get faster builds? My machine runs 64-bit Windows 7 with 4 cores, 4 GB of RAM, and fast SSD storage, with the virus scanner disabled and a pretty minimal software environment.


  • C#: Why am I getting "the process cannot access the file * because it is being used by another process"?

    - by zxcvbnm
    I'm trying to convert BMP files in a folder to JPG, then delete the old files. The code works fine, except it can't delete the BMPs.

      DirectoryInfo di = new DirectoryInfo(args[0]);
      FileInfo[] files = di.GetFiles("*.bmp");
      foreach (FileInfo file in files)
      {
          string newFile = file.FullName.Replace("bmp", "jpg");
          Bitmap bm = (Bitmap)Image.FromFile(file.FullName);
          bm.Save(newFile, ImageFormat.Jpeg);
      }
      for (int i = 0; i < files.Length; i++)
          files[i].Delete();

    The files aren't being used by another program/process like the error indicates, so I'm assuming the problem is here. But to me the code seems fine, since I'm doing everything sequentially. This is all there is to the program too, so the error can't be caused by code elsewhere.
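
    A minimal sketch of the usual explanation and fix, keeping the rest of the program unchanged: Image.FromFile keeps each source file open until the Bitmap is disposed, so it is this same process that still holds the lock when Delete runs. Disposing the bitmap before the delete releases the handle.

      foreach (FileInfo file in files)
      {
          string newFile = file.FullName.Replace("bmp", "jpg");
          using (Bitmap bm = (Bitmap)Image.FromFile(file.FullName))
          {
              bm.Save(newFile, ImageFormat.Jpeg);
          }                  // the handle on the .bmp is released here
          file.Delete();     // now the delete succeeds
      }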


  • How do I start a process to run logcat on Android?

    - by tangjie
    I want to read the Android system log, so I use the following code:

      Process mLogcatProc = null;
      BufferedReader reader = null;
      try {
          mLogcatProc = Runtime.getRuntime().exec(
                  new String[] { "logcat", "-d", "AndroidRuntime:E [Your Log Tag Here]:V *:S" });
          reader = new BufferedReader(new InputStreamReader(mLogcatProc.getInputStream()));
          String line;
          final StringBuilder log = new StringBuilder();
          String separator = System.getProperty("line.separator");
          while ((line = reader.readLine()) != null) {
              log.append(line);
              log.append(separator);
          }
      } catch (IOException e) {
      } finally {
          if (reader != null) {
              try {
                  reader.close();
              } catch (IOException e) {
              }
          }
      }

    I also added <uses-permission android:name="android.permission.READ_LOGS" /> in AndroidManifest.xml. But I can't read any line: the StringBuilder log stays empty, and mLogcatProc.waitFor() returns 0. So how can I read the log?


  • How can a large number of developers write software without either a cumbersome process or poor quality?

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this, every major issue has resulted in a new check, either automated or manual. As a result, the process of delivering software is becoming ever more burdensome. That requires more developers, which... well, you can see it is a vicious circle. We now have a problem with releasing software quickly: the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?


  • Can I attach a .NET TraceListener to an externally running process?

    - by BBlake
    I am developing an application-scheduling program that will run other applications using System.Diagnostics.Process. The external applications are of various types (some .NET and some not). For those external apps that have trace logging enabled, is there a way I can have a trace listener in the parent/calling application listen to the trace output of the child/called applications and record it in the parent application's trace output? This is not primarily for debugging purposes; it is more to collect the trace output from all the various scheduled applications in one place as much as possible. The scheduler app is still in the early design stages, but will be .NET, and I'm trying to clear up potential design issues before I get too far into it.
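
    A minimal C# sketch of one workaround, assuming the child applications can be made to echo their trace output to the console (for example via a ConsoleTraceListener): a TraceListener cannot be attached across process boundaries, but the scheduler can redirect each child's stdout/stderr and re-emit the lines through its own Trace output. The path and arguments are placeholders.

      using System.Diagnostics;

      static class TraceRelay
      {
          public static Process RunAndRelay(string exePath, string args)
          {
              var psi = new ProcessStartInfo(exePath, args)
              {
                  UseShellExecute = false,          // required for redirection
                  RedirectStandardOutput = true,
                  RedirectStandardError = true
              };

              var child = new Process { StartInfo = psi };
              child.OutputDataReceived += (s, e) =>
              {
                  if (e.Data != null) Trace.TraceInformation("[{0}] {1}", exePath, e.Data);
              };
              child.ErrorDataReceived += (s, e) =>
              {
                  if (e.Data != null) Trace.TraceError("[{0}] {1}", exePath, e.Data);
              };

              child.Start();
              child.BeginOutputReadLine();          // asynchronous line-by-line capture
              child.BeginErrorReadLine();
              return child;
          }
      }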


  • How to launch a Windows process as 64-bit from 32-bit code?

    - by Jonas
    To pop up the UAC dialog in Vista when writing to the HKLM registry hive, we opt not to use the Win32 registry API, because when Vista permissions are lacking we'd need to relaunch our entire application with administrator rights. Instead, we do this trick:

      ShellExecute(hWnd, "runas" /* display UAC prompt on Vista */, windir + "\\Reg",
                   "add HKLM\\Software\\Company\\KeyName /v valueName /t REG_MULTI_SZ /d ValueData",
                   NULL, SW_HIDE);

    This solution works fine, except that our application is a 32-bit one, and it runs the REG.EXE command as if it were a 32-bit app, using the WOW compatibility layer! :( If REG.EXE is run from the command line, it's properly run in 64-bit mode. This matters because if it's run as a 32-bit app, the registry keys will end up in the wrong place due to registry reflection. So is there any way to launch a 64-bit app programmatically from a 32-bit app and not have it run under the WOW64 subsystem like its parent 32-bit process (i.e. with a "*32" suffix in Task Manager)?
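
    A minimal sketch of one option, assuming Windows Vista or later and shown in C# for brevity: from a 32-bit process, the virtual "Sysnative" folder bypasses file-system redirection and resolves to the real 64-bit System32, so the 64-bit reg.exe is launched rather than the WOW64 copy. The key and value names are the same placeholders as above, and the same Sysnative path can equally be passed to the existing ShellExecute call.

      using System;
      using System.Diagnostics;
      using System.IO;

      static class Elevated64BitReg
      {
          public static void AddValue()
          {
              string windir = Environment.GetEnvironmentVariable("windir");

              var psi = new ProcessStartInfo
              {
                  // Sysnative is only visible to 32-bit processes on 64-bit Windows.
                  FileName = Path.Combine(windir, @"Sysnative\reg.exe"),
                  Arguments = "add HKLM\\Software\\Company\\KeyName /v valueName" +
                              " /t REG_MULTI_SZ /d ValueData",
                  Verb = "runas",            // still shows the UAC elevation prompt
                  UseShellExecute = true,    // required for the "runas" verb
                  WindowStyle = ProcessWindowStyle.Hidden
              };
              Process.Start(psi);
          }
      }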


  • What would happen to the GC if I run a process with priority = RealTime?

    - by Bobb
    I have a C# app which runs with RealTime priority. It was all fine until I made a few hectic changes in the past two days. Now it runs out of memory in a few hours. I am trying to find out whether it is a memory leak I created, or whether it is because I allocate a lot more objects than before and the GC simply can't collect them because it runs at the same priority. My question is: what could happen to the GC when it tries to collect objects in an application with RealTime priority (there is also at least one thread running with Highest thread priority)? (P.S. By real-time priority I mean Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime.) Sorry, I forgot to mention: the GC is in server mode.
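
    A minimal C# sketch of a cheap experiment, assuming the suspicion is priority starvation rather than a leak: real-time process priority can starve lower-priority threads, including the CLR's GC and finalizer work under sustained load, so dropping one notch to High and watching memory usage is a quick way to separate the two explanations.

      using System;
      using System.Diagnostics;
      using System.Runtime;

      class PriorityExperiment
      {
          static void Main()
          {
              // One notch below RealTime; if memory growth stops, starvation of
              // GC/finalizer threads was the likely cause, not a leak.
              Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;

              // Shows which collection latency mode is actually in effect; server GC
              // is normally enabled via <gcServer enabled="true"/> in app.config.
              Console.WriteLine("GC latency mode: " + GCSettings.LatencyMode);
          }
      }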


  • How to catch the event when a particular application process is terminated using Task Manager?

    - by Mohan
    I am developing a simple application in which each user on the system has a predefined usage quota, and when the quota is up the system should log the user account off. That works as long as the application is allowed to keep running. If the user closes the application on his own, the app should automatically log the account off, and I did exactly that by putting forced-logoff code in the form-closing event. But if the app/process is killed using Task Manager, the form-closing event is not called, so the user is able to continue even after his quota of time is up. Can anybody help me out with this?
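
    A minimal C# sketch of one common workaround, with the quota app's process id assumed to be passed on the command line: a hard kill from Task Manager gives the application no chance to run any cleanup code, so a small separate watchdog process enforces the logoff instead ("shutdown /l" logs off the current user).

      using System;
      using System.Diagnostics;

      class QuotaWatchdog
      {
          static void Main(string[] args)
          {
              int quotaAppPid = int.Parse(args[0]);
              Process monitored = Process.GetProcessById(quotaAppPid);

              monitored.WaitForExit();   // returns however the quota app ended

              // If the quota app signalled a clean, in-quota shutdown (for example
              // via its exit code or a marker file), skip the logoff; anything
              // else, including a Task Manager kill, forces it.
              if (monitored.ExitCode != 0)
                  Process.Start("shutdown", "/l");
          }
      }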


  • Correct way to textually report the remaining time on a long running process?

    - by Ryan
    So you have a long-running process, perhaps with a progress bar, and you want a text estimate of the remaining time, e.g. "5 minutes remaining", "30 seconds remaining", etc. If you don't actually want to report clock time (due to accuracy, resolution, or update-rate issues) but want to stick to the text summary, what is the correct paradigm? Is "one minute left" displayed from 0 to 60 seconds, or from 1:00 to 1:59? Say there's 1:35 left: is that "2 minutes remaining" or "1 minute remaining"? Do you just pare it down to "a few minutes left" when you're under 3 minutes? What is the preferred (least user-frustrating) method?
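
    A minimal C# sketch of one common convention, assuming you prefer the pessimistic (ceiling) reading so the label never undershoots: 1:35 reports "2 minutes remaining", anything under a minute switches to seconds, and the last few seconds collapse to "almost done".

      using System;

      static class RemainingTime
      {
          public static string Describe(TimeSpan remaining)
          {
              if (remaining.TotalSeconds < 5)
                  return "almost done";

              if (remaining.TotalSeconds < 60)
                  return string.Format("{0} seconds remaining",
                                       (int)Math.Ceiling(remaining.TotalSeconds));

              int minutes = (int)Math.Ceiling(remaining.TotalMinutes);
              return minutes == 1 ? "1 minute remaining"
                                  : string.Format("{0} minutes remaining", minutes);
          }
      }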


  • Is it possible to remove folders from a web application build process in VS 2010?

    - by JL
    I had previously asked this question. At the time I was working with VS 2008; to restate the question: I have a web application that generates thousands of small XML files in a certain directory. I would like to exclude this directory from the build process in Visual Studio 2010. With VS 2008 it was not possible. Has anything changed? Besides the general wait for VS to iterate through this directory with each build, it also strains my system resources, so I would like to exclude it from the project, but the directory and files need to physically exist on disk, because they are part of the application. Any out-of-the-box VS 2010 solutions, or any good workarounds? Thanks.


  • Does Visual Studio run tests with a less privileged process?

    - by Filip Ekberg
    I have an application that is supposed to read from the registry, and when executing a console application my registry access works perfectly. However, when I move it over to a test, this returns null:

      var masterKey = Registry.LocalMachine.OpenSubKey("path_to_my_key");

    So my question is: does Visual Studio run tests with a less privileged process? I checked what user this gives me:

      var x = WindowsIdentity.GetCurrent().Name;

    and it gives me the same as in the console application, so I am a bit confused here.
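
    A minimal C# sketch of one thing worth ruling out, assuming .NET 4: the VS 2010 test host (QTAgent32.exe) runs as a 32-bit process, so reads under HKLM\Software can be redirected to Wow6432Node and come back null even though the key is visible to a 64-bit console app running as the same user. Opening the 64-bit view explicitly bypasses that redirection; the key path and value name are placeholders.

      using Microsoft.Win32;

      static class RegistryCheck
      {
          public static object ReadFrom64BitView()
          {
              using (RegistryKey hklm = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine,
                                                                RegistryView.Registry64))
              using (RegistryKey key = hklm.OpenSubKey(@"SOFTWARE\Company\Product"))
              {
                  return key == null ? null : key.GetValue("SomeValue");
              }
          }
      }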


  • Cannot open TUN/TAP dev /dev/as0t0: No such file or directory (errno=2)

    - by Mark
    I just attempted to install OpenVPN Access Server on my Debian VPS that uses OpenVZ. It installed fine; however, when I try to start it from the administration panel, I get these errors:

      process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t0: No such file or directory (errno=2)']
      service failed to start or returned error status

    The same pair of errors is repeated for /dev/as0t1 through /dev/as0t7. Is there a solution for this?


  • What tells initramfs or the Ubuntu Server boot process how to assemble RAID arrays?

    - by Brad
    The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup? My problem: I boot my server and get:

      Gave up waiting for root device.
      ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell!

    This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. What I get is: /dev/md0 isn't assembled at all, and /dev/md1 is assembled, but instead of using /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2, it uses /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd. To fix this and boot my server I do:

      $(initramfs) mdadm --stop /dev/md1
      $(initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
      $(initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
      $(initramfs) exit

    And it boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to assemble them manually. I've checked /etc/mdadm/mdadm.conf, and the UUIDs of the two arrays listed in that file match the UUIDs from $ mdadm --detail /dev/md[0,1]. Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1.

    UPDATE: I have a feeling it has to do with superblocks. $ mdadm --examine /dev/sda outputs the same thing as $ mdadm --examine /dev/sda2. $ mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it seems to fit with /dev/md1 getting assembled with /dev/sd[abcd] instead of /dev/sd[abcd]2. I tried zeroing the superblock on /dev/sd[abcd]. This removed the superblock from /dev/sd[abcd]2 as well and prevented me from being able to assemble /dev/md1 at all. I had to $ mdadm --create to get it back. This also put the superblocks back to the way they were.


  • What is IP 10.1.1.130, which seems to be monitored by the NT Kernel & System process on Windows 7?

    - by EndangeringSpecies
    I used netstat to see what is happening with network connections, and I see this weird IP address somehow listed together with PID 4, "NT Kernel & System", whatever that might be. Netstat describes it as a "local address" and there is no "foreign address" involved (by the way, what are local and foreign addresses anyway?). In the column to the right there is neither a "listening" nor an "established" record, so no record at all there.


  • How to unlock files using handle.exe and process name?

    - by Radek
    I tried Unlocker 1.9.1, but it doesn't work correctly for me on Windows 7 (it worked OK on Windows XP), and I also tried LockHunter 2.0.2.103 x64 and reported a bug, but LockHunter only unlocks the file from the GUI, not from the command line. So I want to use handle.exe by Sysinternals to unlock one file, "TestPro.log". I know the absolute path if it helps. I can list all processes that have locked the file by executing:

      C:\Windows\system32>c:\edutester\progs\handle testpro.log
      java.exe pid: 2120 type: File 338: C:\Users\Public\TestPro\TestPro Automation Framework\Logs\TestPro.log
      java.exe pid: 1004 type: File 934: C:\Users\Public\TestPro\TestPro Automation Framework\Logs\TestPro.log

    What I need to know is how to unlock the file automatically from the command line, using the info above. No user intervention is possible. Windows 7 64-bit, Microsoft Windows [Version 6.1.7601].
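
    A minimal C# sketch of one way to automate it, assuming handle.exe is on the path and that its output keeps the "name pid: NNNN type: File HHH: path" shape shown above: list the handles on TestPro.log, then close each one with "handle -c <hex handle> -p <pid> -y" (-y suppresses the confirmation prompt). Closing handles out from under a running process can destabilise it, so this is very much a last resort.

      using System;
      using System.Diagnostics;
      using System.Text.RegularExpressions;

      class UnlockTestProLog
      {
          static void Main()
          {
              string listing = Run("handle.exe", "testpro.log");

              foreach (Match m in Regex.Matches(listing,
                       @"pid:\s*(?<pid>\d+)\s+type:\s*File\s+(?<handle>[0-9A-Fa-f]+):"))
              {
                  Run("handle.exe", string.Format("-c {0} -p {1} -y",
                      m.Groups["handle"].Value, m.Groups["pid"].Value));
              }
          }

          static string Run(string exe, string args)
          {
              var psi = new ProcessStartInfo(exe, args)
              {
                  UseShellExecute = false,
                  RedirectStandardOutput = true
              };
              using (Process p = Process.Start(psi))
              {
                  string output = p.StandardOutput.ReadToEnd();
                  p.WaitForExit();
                  return output;
              }
          }
      }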


  • Prevent an important OS X process from being swapped out, without code changes?

    - by purplie
    I want hotkey-based utilities like Quicksilver or Zooom to respond immediately. But if they have been idle for a while, they (I guess) get swapped out and respond slowly, sometimes not even responding to the first few keystrokes I wanted to send to them. How can I encourage such processes (i.e. chosen processes, not all processes system-wide) to remain in active memory? Or am I misunderstanding the problem?


  • Linux disk IO load breakdown, by filesystem path and/or process?

    - by Ryan B. Lynch
    Does anyone have experience with a tool that can provide an indication of disk I/O load by filesystem path? I frequently use the 'iostat' utility to learn how much disk activity is taking place on a Linux host. 'iostat' provides a per-device breakdown, so you can see activity on a particular block device, but it doesn't go any deeper than that -- you can't, for instance, query the write load generated by 'httpd' in the directory '/var/log/httpd/'.


  • How do I prevent infinite recursion in X11 start-up process?

    - by chrisaycock
    I wasn't able to run X11 or Terminal after rebooting my Mac. After digging around, I got them to work by commenting out this line in my .cshrc: xset b off. It appears that xset will attempt to launch X11 if it isn't running already, and since X11 launches the default shell through xterm and thus encounters the xset line above, we get an infinite loop. I would like to keep the above line in my .cshrc. Is there a way to prevent X11 from launching itself?


  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (a CentOS-based MediaTemple (DV) Extreme with 2 GB RAM) running a customized WordPress + bbPress combination. At about 30k pageviews per day the server is starting to groan; it stumbled earlier today for about 5 minutes when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way that I can find out which specific requests were responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server and a site of this size, I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focused on optimization on the current server?

