Search Results

Search found 24253 results on 971 pages for 'multiple monitor'.

Page 47/971

  • PyGTK/GIO: monitor directory for changes recursively

    - by detly
    Take the following demo code (from the GIO answer to this question), which uses a GIO FileMonitor to monitor a directory for changes:

        import gio

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type

        gfile = gio.File(".")
        monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
        monitor.connect("changed", directory_changed)

        import glib
        ml = glib.MainLoop()
        ml.run()

    After running this code, I can then create and modify child nodes and be notified of the changes. However, this only works for immediate children (I am aware that the docs don't say otherwise). The last of the following shell commands will not result in a notification:

        touch one
        mkdir two
        touch two/three

    Is there an easy way to make it recursive? I'd rather not manually code something that looks for directory creation and adds a monitor, removing them on deletion, etc. The intended use is for a VCS file browser extension, to be able to cache the statuses of files in a working copy and update them individually on changes. So there might be anywhere from tens to thousands (or more) directories to monitor. I'd like to just find the root of the working copy and add the file monitor there. I know about pyinotify, but I'm avoiding it so that this works under non-Linux kernels such as FreeBSD or... others. As far as I'm aware, the GIO FileMonitor uses inotify underneath where available, and I can understand not emphasising the implementation to maintain some degree of abstraction, but it suggested to me that it should be possible. (In case it matters, I originally posted this on the PyGTK mailing list.)
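    For reference, a minimal sketch of the manual approach I'd like to avoid writing by hand (it assumes GIO delivers a CREATED event for new directories; the watch helper and the monitors dict are illustrative, not part of GIO):

        import os
        import gio
        import glib

        monitors = {}  # path -> gio.FileMonitor, so monitors can be cancelled on deletion

        def watch(path):
            gfile = gio.File(path)
            monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
            monitor.connect("changed", directory_changed)
            monitors[path] = monitor
            # Recurse into subdirectories that already exist
            for name in os.listdir(path):
                child = os.path.join(path, name)
                if os.path.isdir(child):
                    watch(child)

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type
            path = file1.get_path()
            if evt_type == gio.FILE_MONITOR_EVENT_CREATED and os.path.isdir(path):
                watch(path)                      # a new directory appeared: start monitoring it
            elif evt_type == gio.FILE_MONITOR_EVENT_DELETED and path in monitors:
                monitors.pop(path).cancel()      # a monitored directory vanished: drop its monitor

        watch(".")
        glib.MainLoop().run()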

    Read the article

  • SYS2 Scripts Updated – Scripts to monitor database backup, database space usage and memory grants now available

    - by Davide Mauri
    I’ve just released three new scripts in my “sys2” script collection, which can be found on CodePlex: Project Page: http://sys2dmvs.codeplex.com/ Source Code Download: http://sys2dmvs.codeplex.com/SourceControl/changeset/view/57732 The three new scripts are sys2.database_backup_info.sql, sys2.query_memory_grants.sql and sys2.stp_get_databases_space_used_info.sql. Here are some more details:

    database_backup_info
    This script was made to quickly check if and when a backup was done. It reports the last full, differential and log backup date and time for each database. Along with this information you also get some additional metadata showing whether a database is read-only and what its recovery model is. By default it checks only the last seven days, but you can change this value simply by specifying how many days back you want to check. To analyze the last seven days and list only the databases in the FULL recovery model without a log backup:

        select * from sys2.databases_backup_info(default) where recovery_model = 3 and log_backup = 0

    To analyze the last fifteen days and list only the databases in the FULL recovery model with a differential backup:

        select * from sys2.databases_backup_info(15) where recovery_model = 3 and diff_backup = 1

    I just love this script; I use it every time I need to check that backups are not too old and that t-log backups are correctly scheduled.

    query_memory_grants
    This is just a wrapper around sys.dm_exec_query_memory_grants that enriches the default result set with the text of the query for which memory has been granted or is waiting for a memory grant and, optionally, its execution plan.

    stp_get_databases_space_used_info
    This is a stored procedure that lists all the available databases and, for each one, the overall size, the used space within that size, the maximum size it may reach and the auto-grow options. This is another script I use every day in order to monitor, track and forecast database space usage.

    As usual, feedback and suggestions are more than welcome!
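    For anyone who prefers to query the DMVs directly, a rough sketch of what a wrapper like query_memory_grants typically does (this is just the standard DMV join, not the actual sys2 code):

        -- Memory grants plus the query text and, optionally, the cached plan.
        SELECT
            mg.session_id,
            mg.requested_memory_kb,
            mg.granted_memory_kb,
            mg.wait_time_ms,
            st.text        AS query_text,
            qp.query_plan
        FROM sys.dm_exec_query_memory_grants AS mg
        CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS st
        OUTER APPLY sys.dm_exec_query_plan(mg.plan_handle) AS qp;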

    Read the article

  • Taskbar Meters Turn Your Taskbar into a System Resource Monitor

    - by Jason Fitzpatrick
    If you’re looking for some simple hardware monitoring tools that don’t clutter up your screen real estate but are right in front of you when you need them, Taskbar Meters sit unobtrusively right on the Windows taskbar. Open source, lightweight, and portable, Taskbar Meters is actually a set of three applications: one for monitoring memory use, one for CPU use, and one for disk activity. Using them is as simple as running the specific app for the monitoring you want (we have all three running in the screenshot here) and adjusting the sliders to set the update frequency and the percent utilization at which the meters turn from green, to yellow, to red. If you’re testing software loads and benchmarking, Taskbar Meters doesn’t offer the kind of fine-tooth-comb view into system performance that you’ll need, but for casual “What’s going on with my machine?” monitoring it’s unobtrusive and effective. Taskbar Meters is an open source set of portable applications, Windows 7 only. Taskbar Meters [Codeplex]

    Read the article

  • Using a portable USB monitor in Ubuntu 13.04 (AOC e1649Fwu - DisplayLink)

    Having access to a little bit of IT hardware extravaganza isn't that easy here in Mauritius, for exactly two reasons - either it is simply not available, or it is expensive like nowhere else. Well, by chance I came across an advert by a local hardware supplier and their offer of the week caught my attention - a portable USB monitor. Sounds cool, and the specs are okay as well. It's completely driven via USB 2.0, it's lightweight, the dimensions would fit into my laptop bag, and the resolution of 1366 x 768 pixels is okay for a second screen. Long story, short ending: I called them, only to learn that they were out of stock - how convenient! Well, as usual I left some contact details and got the regular 'We'll call you back' answer. Surprisingly, I didn't receive a phone call as promised, and after starting to complain via social media networks they finally came back to me with new units available - and *drum-roll* still the same price tag as promoted (and free delivery on top, as one of their employees lives in Flic en Flac). Guess it was a no-brainer to get at least one unit to fool around with. In the worst case it might end up as a picture frame on the shelf or so...

    The usual suspects... Ubuntu first!
    Of course, the packaging mentions only Windows or Mac OS as supported operating systems, and without any hesitation I hooked the device up to my main machine running Ubuntu 13.04. Result: blackout... Hm, not exactly the situation I was looking for, but okay, it can't be too difficult to get this piece of hardware up and running. Following the output of syslogd (or dmesg if you prefer), the device has been recognised successfully but we got stuck in the initialisation phase.

        Oct 12 08:17:23 iospc2 kernel: [69818.689137] usb 2-4: new high-speed USB device number 5 using ehci-pci
        Oct 12 08:17:23 iospc2 kernel: [69818.800306] usb 2-4: device descriptor read/64, error -32
        Oct 12 08:17:24 iospc2 kernel: [69819.043620] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
        Oct 12 08:17:24 iospc2 kernel: [69819.043630] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Oct 12 08:17:24 iospc2 kernel: [69819.043636] usb 2-4: Product: e1649Fwu
        Oct 12 08:17:24 iospc2 kernel: [69819.043642] usb 2-4: Manufacturer: DisplayLink
        Oct 12 08:17:24 iospc2 kernel: [69819.043647] usb 2-4: SerialNumber: FJBD7HA000778
        Oct 12 08:17:24 iospc2 kernel: [69819.046073] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
        Oct 12 08:17:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
        Oct 12 08:17:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
        Oct 12 08:17:30 iospc2 kernel: [69825.411220] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
        Oct 12 08:17:30 iospc2 kernel: [69825.498778] udl 2-4:1.0: fb1: udldrmfb frame buffer device
        Oct 12 08:17:30 iospc2 kernel: [69825.498786] [drm] Initialized udl 0.0.1 20120220 on minor 1
        Oct 12 08:17:30 iospc2 kernel: [69825.498909] usbcore: registered new interface driver udl

    The device has been recognised as a USB device without any question and it is listed properly:

        # lsusb
        ...
        Bus 002 Device 005: ID 17e9:4107 DisplayLink
        ...

    A quick and dirty research session on the net gave me some hints towards the udlfb framebuffer driver for USB DisplayLink devices. By default this kernel module is blacklisted

        $ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
        #blacklist udl
        blacklist udlfb

    and it is recommended to load it manually.
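    The exact commands aren't shown here, but assuming the usual module handling, it boils down to removing udlfb from the blacklist file shown above and then swapping the modules by hand:

        $ sudo modprobe -r udl       # unload the DRM-based udl driver first
        $ sudo modprobe udlfb        # then load the framebuffer driver manually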
    So, unloading the whole udl stack and giving udlfb a shot:

        Oct 12 08:22:31 iospc2 kernel: [70126.642809] usbcore: registered new interface driver udlfb

    But still no reaction from the external display, which supposedly should have been on and showing green.

    Display okay? Test run on Windows
    Just to be on the safe side and to exclude any hardware-related defects or whatsoever - you never know what happened during delivery - I moved the display to a new position on the opposite side of my laptop, installed the display drivers first in Windows Vista (I know, I know...) as recommended in the manual, and then finally hooked it up to that machine. Tada! The display was recognised correctly and I had a proper choice between cloning and extending my desktop. [Image: testing whether the display is working properly - using Windows Vista] Okay, good to know that there is nothing wrong on the hardware side, just software...

    Back to Ubuntu - kernel too old
    Some more research on Google, and various hits recommended that the original DisplayLink driver has been merged into recent kernel development and that one should manually upgrade the kernel image (and header) packages on Ubuntu. At least kernel 3.9 or higher would be necessary, so I went to this URL: http://kernel.ubuntu.com/~kernel-ppa/mainline/ and downloaded all the good stuff from the v3.9-raring directory. The installation itself is easy via dpkg:

        $ sudo dpkg -i linux-image-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb
        $ sudo dpkg -i linux-headers-3.9.0-030900_3.9.0-030900.201304291257_all.deb
        $ sudo dpkg -i linux-headers-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb

    As with any kernel upgrade it is necessary to restart the system in order to use the new one. Said and done:

        $ uname -r
        3.9.0-030900-generic

    And now connecting the external display gives me the following output in /var/log/syslog:

        Oct 12 17:51:36 iospc2 kernel: [ 2314.984293] usb 2-4: new high-speed USB device number 6 using ehci-pci
        Oct 12 17:51:36 iospc2 kernel: [ 2315.096257] usb 2-4: device descriptor read/64, error -32
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337105] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337115] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337122] usb 2-4: Product: e1649Fwu
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337127] usb 2-4: Manufacturer: DisplayLink
        Oct 12 17:51:36 iospc2 kernel: [ 2315.337132] usb 2-4: SerialNumber: FJBD7HA000778
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338292] udlfb: DisplayLink e1649Fwu - serial #FJBD7HA000778
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338299] udlfb: vid_17e9&pid_4107&rev_0129 driver's dlfb_data struct at ffff880117e59000
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338303] udlfb: console enable=1
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338306] udlfb: fb_defio enable=1
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338309] udlfb: shadow enable=1
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338468] udlfb: vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338473] udlfb: DL chip limited to 1500000 pixel modes
        Oct 12 17:51:36 iospc2 kernel: [ 2315.338565] udlfb: allocated 4 65024 byte urbs
        Oct 12 17:51:36 iospc2 kernel: [ 2315.343592] hid-generic 0003:17E9:4107.0009: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
        Oct 12 17:51:36 iospc2 mtp-probe: checking bus 2, device 6: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
        Oct 12 17:51:36 iospc2 mtp-probe: bus: 2, device: 6 was not an MTP device
        Oct 12 17:51:36 iospc2 kernel: [ 2315.426583] udlfb: 1366x768 @ 59 Hz valid mode
        Oct 12 17:51:36 iospc2 kernel: [ 2315.426589] udlfb: Reallocating framebuffer. Addresses will change!
        Oct 12 17:51:36 iospc2 kernel: [ 2315.428338] udlfb: 1366x768 @ 59 Hz valid mode
        Oct 12 17:51:36 iospc2 kernel: [ 2315.428343] udlfb: set_par mode 1366x768
        Oct 12 17:51:36 iospc2 kernel: [ 2315.430620] udlfb: DisplayLink USB device /dev/fb1 attached. 1366x768 resolution. Using 4104K framebuffer memory

    Okay, that looks more promising, but still only a blackout on the external screen... And yes, due to my previous modifications I had swapped the blacklisted kernel modules:

        $ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
        blacklist udl
        #blacklist udlfb

    Silly me! Okay, back to the original situation in which udl is allowed and udlfb is blacklisted. Now the logging looks similar to this, and the screen shows those maroon-brown and azure-blue horizontal bars described on other online resources:

        Oct 15 21:27:23 iospc2 kernel: [80934.308238] usb 2-4: new high-speed USB device number 5 using ehci-pci
        Oct 15 21:27:23 iospc2 kernel: [80934.420244] usb 2-4: device descriptor read/64, error -32
        Oct 15 21:27:24 iospc2 kernel: [80934.660822] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
        Oct 15 21:27:24 iospc2 kernel: [80934.660832] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Oct 15 21:27:24 iospc2 kernel: [80934.660838] usb 2-4: Product: e1649Fwu
        Oct 15 21:27:24 iospc2 kernel: [80934.660844] usb 2-4: Manufacturer: DisplayLink
        Oct 15 21:27:24 iospc2 kernel: [80934.660850] usb 2-4: SerialNumber: FJBD7HA000778
        Oct 15 21:27:24 iospc2 kernel: [80934.663391] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
        Oct 15 21:27:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
        Oct 15 21:27:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
        Oct 15 21:27:25 iospc2 kernel: [80935.742407] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
        Oct 15 21:27:25 iospc2 kernel: [80935.834403] udl 2-4:1.0: fb1: udldrmfb frame buffer device
        Oct 15 21:27:25 iospc2 kernel: [80935.834416] [drm] Initialized udl 0.0.1 20120220 on minor 1
        Oct 15 21:27:25 iospc2 kernel: [80935.836389] usbcore: registered new interface driver udl
        Oct 15 21:27:25 iospc2 kernel: [80936.021458] [drm] write mode info 153

    Next, it's time to enable the display for our needs... This can be done either via the UI (Settings Manager => Display) or via the console, whichever you prefer. Adding the external USB display under Linux isn't an issue after all... Personally, I like the console. With the help of xrandr we get the screen identifier first:

        $ xrandr
        Screen 0: minimum 320 x 200, current 3200 x 1080, maximum 32767 x 32767
        LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 331mm x 207mm
        ...
        DVI-0 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+

    and then give it the usual shot with auto-configuration. Let the system decide what's best for your hardware...

        $ xrandr --output DVI-0 --off
        $ xrandr --output DVI-0 --auto

    And there we go - cloned output of the main display.

    New kernel, new display...
    The external USB display works out of the box with Linux kernel 3.9.0 or newer. Despite a good number of resources saying otherwise, it is absolutely not necessary to create a Device or Screen section in one of the Xorg.conf files.
    This information belongs to the past and is not valid on kernel 3.9 or higher.

    Same hardware, but Windows 8
    Of course, I wanted to know how the latest incarnation from Redmond would handle the new hardware... Flawless! The most interesting aspect here: I did not use the driver installation medium, on purpose. And I was right... not too long afterwards a dialog with the DisplayLink EULA appeared on the main screen, and after confirming it, it took only a few more seconds until the external USB monitor was ready to rumble. Well, and not only that one... but see for yourself. This time Windows 8 was the easiest solution after all.

    Résumé
    I can highly recommend this type of hardware to anyone who asks me. Although its screen measures 15.6", it is actually lighter than my Samsung Galaxy Tab 10.1 and it still fits into my laptop bag without any issues. From now on... no more single screen while developing software on the road!
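    By the way, a quick sketch in case you prefer extending the desktop over cloning it: xrandr can place the USB panel next to the internal one (the output names here are the ones from my setup above and will differ on other machines):

        $ xrandr --output DVI-0 --auto --right-of LVDS1   # extend to the right of the laptop panel
        $ xrandr --output DVI-0 --off                     # switch the USB display off again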

    Read the article

  • Cannot set monitor to native resolution

    - by S B
    My problem is similar to that of many other users, but the solutions I've found do not work. Background: fresh install of 12.04 (completely updated) on a Fit-PC2 (specs). I've read in several places that the new 3.x kernel that 12.04 runs on has a new psb_gfx driver which supports the gma500 graphics chip (Poulsbo chipset). Pretty much everything works (there are some glitches which are documented, so I won't raise them here), except for the screen resolution. My monitor's native resolution is 1920x1080, but all I get is 1024x768. Output of running xrandr:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 1024 x 768, current 1024 x 768, maximum 1024 x 768
        default connected 1024x768+0+0 0mm x 0mm
           1024x768        0.0*

    Although I read that Ubuntu does not come with an xorg.conf file anymore, I also tried running sudo X :1 -configure, and here's the end of the output:

        Number of created screens does not match number of detected devices.
        Configuration failed.

    When I look in the xorg.conf.new file created in my home directory, it seems that for some reason X thinks I have two screens. I don't know what to do with that. Ideas anyone? Thanks for your time.
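    A sketch of the usual workaround people suggest for forcing a mode (whether the psb_gfx driver will actually accept a 1920x1080 mode this way is exactly the open question, so treat it as an experiment; the modeline comes straight from cvt):

        $ cvt 1920 1080 60
        # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
        $ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        $ xrandr --addmode default "1920x1080_60.00"       # "default" is the output name shown above
        $ xrandr --output default --mode "1920x1080_60.00"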

    Read the article

  • Another good free utility - Campwood Software Source Monitor

    - by TATWORTH
    The Campwood SourceMonitor at http://www.campwoodsw.com/sourcemonitor.html says in its introduction: "The freeware program SourceMonitor lets you see inside your software source code to find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review. SourceMonitor, written in C++, runs through your code at high speed, typically at least 10,000 lines of code per second." It is indeed very fast, and it is useful because it:

    Collects metrics in a fast, single pass through source files.
    Measures metrics for source code written in C++, C, C#, VB.NET, Java, Delphi, Visual Basic (VB6) or HTML.
    Includes method and function level metrics for C++, C, C#, VB.NET, Java, and Delphi.
    Offers a Modified Complexity metric option.
    Saves metrics in checkpoints for comparison during software development projects.
    Displays and prints metrics in tables and charts, including Kiviat diagrams.
    Operates within a standard Windows GUI or inside your scripts using XML command files.
    Exports metrics to XML or CSV (comma-separated-value) files for further processing with other tools.

    Read the article

  • Lock statement vs Monitor.Enter method.

    - by Vokinneberg
    I suppose this is an interesting code example. We have a class, let's call it Test, with a finalizer. In the Main method there are two code blocks: one using the lock statement and one using a Monitor.Enter call. I also have two instances of the Test class. The experiment is pretty simple - null the Test variable within the locking block and try to collect it manually with a GC.Collect call. Then, to see the finalizer run, I call GC.WaitForPendingFinalizers. Everything is very simple, as you can see. By definition, the compiler expands the lock statement into a try{...}finally{...} block with a Monitor.Enter call inside the try block and Monitor.Exit in the finally block. I've tried to implement the try-finally block manually and expected the same behaviour in both cases - when using lock and when using Monitor.Enter. But, surprise, surprise - it is different, as you can see below.

        public class Test : IDisposable
        {
            private string name;

            public Test(string name)
            {
                this.name = name;
            }

            ~Test()
            {
                Console.WriteLine(string.Format("Finalizing class name {0}.", name));
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var test1 = new Test("Test1");
                var test2 = new Test("Tesst2");

                lock (test1)
                {
                    test1 = null;
                    Console.WriteLine("Manual collect 1.");
                    GC.Collect();
                    GC.WaitForPendingFinalizers();
                    Console.WriteLine("Manual collect 2.");
                    GC.Collect();
                }

                var lockTaken = false;
                System.Threading.Monitor.Enter(test2, ref lockTaken);
                try
                {
                    test2 = null;
                    Console.WriteLine("Manual collect 3.");
                    GC.Collect();
                    GC.WaitForPendingFinalizers();
                    Console.WriteLine("Manual collect 4.");
                    GC.Collect();
                }
                finally
                {
                    System.Threading.Monitor.Exit(test2);
                }

                Console.ReadLine();
            }
        }

    The output of this example is:

        Manual collect 1.
        Manual collect 2.
        Manual collect 3.
        Finalizing class name Test2.
        Manual collect 4.

    ...and a NullReferenceException in the last finally block, because test2 is a null reference. I was surprised and disassembled my code into IL. So, here is the IL dump of the Main method:

        .entrypoint
        .maxstack 2
        .locals init (
            [0] class ConsoleApplication2.Test test1,
            [1] class ConsoleApplication2.Test test2,
            [2] bool lockTaken,
            [3] bool <>s__LockTaken0,
            [4] class ConsoleApplication2.Test CS$2$0000,
            [5] bool CS$4$0001)
        L_0000: nop
        L_0001: ldstr "Test1"
        L_0006: newobj instance void ConsoleApplication2.Test::.ctor(string)
        L_000b: stloc.0
        L_000c: ldstr "Tesst2"
        L_0011: newobj instance void ConsoleApplication2.Test::.ctor(string)
        L_0016: stloc.1
        L_0017: ldc.i4.0
        L_0018: stloc.3
        L_0019: ldloc.0
        L_001a: dup
        L_001b: stloc.s CS$2$0000
        L_001d: ldloca.s <>s__LockTaken0
        L_001f: call void [mscorlib]System.Threading.Monitor::Enter(object, bool&)
        L_0024: nop
        L_0025: nop
        L_0026: ldnull
        L_0027: stloc.0
        L_0028: ldstr "Manual collect."
        L_002d: call void [mscorlib]System.Console::WriteLine(string)
        L_0032: nop
        L_0033: call void [mscorlib]System.GC::Collect()
        L_0038: nop
        L_0039: call void [mscorlib]System.GC::WaitForPendingFinalizers()
        L_003e: nop
        L_003f: ldstr "Manual collect."
        L_0044: call void [mscorlib]System.Console::WriteLine(string)
        L_0049: nop
        L_004a: call void [mscorlib]System.GC::Collect()
        L_004f: nop
        L_0050: nop
        L_0051: leave.s L_0066
        L_0053: ldloc.3
        L_0054: ldc.i4.0
        L_0055: ceq
        L_0057: stloc.s CS$4$0001
        L_0059: ldloc.s CS$4$0001
        L_005b: brtrue.s L_0065
        L_005d: ldloc.s CS$2$0000
        L_005f: call void [mscorlib]System.Threading.Monitor::Exit(object)
        L_0064: nop
        L_0065: endfinally
        L_0066: nop
        L_0067: ldc.i4.0
        L_0068: stloc.2
        L_0069: ldloc.1
        L_006a: ldloca.s lockTaken
        L_006c: call void [mscorlib]System.Threading.Monitor::Enter(object, bool&)
        L_0071: nop
        L_0072: nop
        L_0073: ldnull
        L_0074: stloc.1
        L_0075: ldstr "Manual collect."
        L_007a: call void [mscorlib]System.Console::WriteLine(string)
        L_007f: nop
        L_0080: call void [mscorlib]System.GC::Collect()
        L_0085: nop
        L_0086: call void [mscorlib]System.GC::WaitForPendingFinalizers()
        L_008b: nop
        L_008c: ldstr "Manual collect."
        L_0091: call void [mscorlib]System.Console::WriteLine(string)
        L_0096: nop
        L_0097: call void [mscorlib]System.GC::Collect()
        L_009c: nop
        L_009d: nop
        L_009e: leave.s L_00aa
        L_00a0: nop
        L_00a1: ldloc.1
        L_00a2: call void [mscorlib]System.Threading.Monitor::Exit(object)
        L_00a7: nop
        L_00a8: nop
        L_00a9: endfinally
        L_00aa: nop
        L_00ab: call string [mscorlib]System.Console::ReadLine()
        L_00b0: pop
        L_00b1: ret
        .try L_0019 to L_0053 finally handler L_0053 to L_0066
        .try L_0072 to L_00a0 finally handler L_00a0 to L_00aa

    I don't see any difference between the lock statement and the Monitor.Enter call. So why do I still have a reference to the instance of test1 in the case of lock, so that the object is not collected by the GC, while in the case of Monitor.Enter it is collected and finalized?
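    To make the comparison concrete, here is roughly what I understand the C# 4.0 compiler expands lock (test1) { ... } into (a sketch, not the exact generated code); note the hidden copy of the reference, which corresponds to the dup / stloc.s CS$2$0000 at L_001a-L_001b in the IL above and is what gets passed to Monitor.Exit at L_005f:

        bool taken = false;                       // <>s__LockTaken0 in the IL
        Test temp = test1;                        // hidden compiler-generated copy (CS$2$0000)
        try
        {
            System.Threading.Monitor.Enter(temp, ref taken);
            test1 = null;                         // the copy still references the object...
            // ...body of the lock block...
        }
        finally
        {
            if (taken)
                System.Threading.Monitor.Exit(temp);   // ...so Exit gets a live, non-null reference
        }

    The hand-written block has no equivalent copy: test2 itself is the only variable passed around, so after it is set to null the finally block calls Monitor.Exit on a null reference, which would explain the output above.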

    Read the article

  • Can I use all my RAM for application data?

    - by gsedej
    Hi! I have yet another question about "where is my Linux memory". The question is: can I use the cache for application data? On my laptop I have 1 GB of RAM. The situation after some time of work: the browser takes 400 MB and all other apps roughly 300 MB (quickly summed in System Monitor). System Monitor says I'm using 90% of RAM and I already have 200 MB in swap. The laptop gets slower when I start new things (e.g. open a new tab in the browser or open a new Nautilus window), probably because memory is being pushed out to swap. So there should be 1200 MB (RAM + swap) used, but all the apps I can see use only 600 MB. Where are the other 600 MB? Of this 600 MB, 400 MB is real RAM. I am not copying files or doing any other massive IO activity. I've read that Linux smartly uses all the RAM it has for buffers and cache, so the kernel (cache) uses 300 MB. What if I don't want the disk mirrored in memory and I want to use that memory for application data (e.g. a new browser tab)? I don't need 200 MB of mirrored disk data, because I won't (for example) reopen the same photos on the data partition that I have just seen. So can I use all my RAM for application data (including the browser, desktop, Xorg and other services)? How?
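    As far as I understand it, the page cache is reclaimable on demand, so applications get that memory back the moment they ask for it; a couple of commands to watch this (the drop_caches write is purely an experiment, not something to do routinely):

        $ free -m                                              # the "-/+ buffers/cache" line shows what apps really use
        $ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes (experiment only)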

    Read the article

  • nvidia-settings error: nvidia driver not in use

    - by Bastarasov
    I have an Asus laptop with a GeForce GT 520M CUDA (Optimus) card and I am running Kubuntu 12.04, 64-bit. I am trying to connect an external monitor through DVI, and the monitor is not detected. The NVIDIA settings don't display properly, and each time I fire them up there is a warning message: "You do not appear to be using the NVIDIA X driver. Please edit..." (you have probably heard of this before). I have googled a lot and tried some things out, but no luck so far. Is there a solution which has worked for someone out there? If so, please be very specific about what I need to do, since I am really not good at using the terminal and am generally new to Ubuntu. I can use the terminal only to copy-paste things. :) Thanks in advance to everyone! P.S. It seems like some people have dealt with this by fixing the NVIDIA settings problem, but the instructions have never been clear enough for me to understand.
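    If it helps anyone answering: two copy-paste commands that show which driver is actually bound to the card (standard tools, nothing Kubuntu-specific); on Optimus machines the Intel GPU is often the one driving the outputs:

        $ lspci -k | grep -A 3 -E "VGA|3D"     # look for "Kernel driver in use: nvidia" (or nouveau/i915)
        $ lsmod | grep -E "nvidia|nouveau"     # which graphics kernel modules are loaded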

    Read the article

  • Connecting / disconnecting DisplayPort causes crash

    - by iGadget
    I wanted to file a bug about this using ubuntu-bug xserver-xorg-video-intel, but the system prompted me to try posting here first. So here goes :-) While the situation in Ubuntu 11.10 was still somewhat workable (see "UI freezes when disconnecting DisplayPort"), in 12.04 (using Unity 3D) it has gotten worse. The weird part is that during the 12.04 betas, the situation was actually improving! I was able to successfully connect and disconnect a DisplayPort monitor without the system breaking down on me. But now with 12.04 final (with all updates), it's just plain terrible. When I connect an external monitor using the DisplayPort connector on my HP ProBook 6550b, it only works sometimes. Most times (but not always!) the screen just goes blank and the system seems to crash (not even CTRL+ALT+F1 works anymore). Only a hard shutdown, by keeping the power button pressed for several seconds, and then a restart gets me out of this. I suspect the chances of the system crashing become higher as the system's uptime increases, especially when there have been one or more suspend-resume cycles (although I have also experienced this bug once from a cold boot). Disconnecting is roughly the same as with 11.10 (see the issue mentioned above), with the difference that if I resume from suspend, I no longer have to do a CTRL+ALT+F1, ALT+F7 cycle to get my screen back. So what more can I try? Or should I just go ahead and file the bug anyway?

    Read the article

  • Where is my ram?

    - by gsedej
    I have 2 GB of RAM installed on my machine running Ubuntu 12.04. After some time of use, I see much of my RAM used, and it does not free up even though I have closed all my programs. I included two screenshots: the first is GNOME System Monitor (all processes) and the second is htop (run with sudo), both sorted by memory usage. From both you can see that it's not possible for all the running apps together to take 1 GB of memory: the seven biggest programs sum to about 250 MB, and the others are much smaller (all of them together can't even add up to 100 MB). The cache is 300 MB (the yellow ||| in htop) and is not included in the 1 GB used. Also, 260 MB is already in swap (which actually makes 1.3 GB of used memory). If I start Firefox (or Chrome) with many tabs, it only has 1 GB available and not the potential 1.5 GB (assuming 0.5 GB is for the system). When I need more RAM, swapping happens. So where is my RAM? Which program is using it? How can I free it so it's available for e.g. Firefox? [Screenshots: GNOME System Monitor, htop]
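    For reference, a quick way to list which processes actually hold resident memory (rather than summing the monitors by eye):

        $ ps aux --sort=-rss | head -n 15                           # top resident-memory consumers
        $ grep -E "MemFree|Buffers|^Cached|SwapCached" /proc/meminfo # how much is really cache vs. free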

    Read the article

  • Nvidia Fullscreen MetaMode "Sliding" Issue

    - by user68202
    I have two monitors. The left one is my "main" monitor at 1920x1080_120; the right one is my second monitor at 1680x1050_60 (an NVIDIA card set up with TwinView). When I play a game or something else in fullscreen mode, the full resolution (monitor 1 + monitor 2) is used for the fullscreen surface. I read about the MetaModes I can use to shut down the monitor that I don't need during a fullscreen session, and I used the following:

        Option "metamodes" "DFP-0: 1920x1080_120 +0+0, DFP-2: 1680x1050_60 +1920+0; DFP: 1920x1080_120 +0+0, NULL"

    It's working great: the second (right) monitor shuts down when I press CTRL+ALT+PLUS and starts again with the same keystroke. But in the second mode, when the second monitor is down, I still get the full "monitor 1 + monitor 2" resolution on my first (left) monitor - I can move my mouse to the right to see the contents of the second monitor and move it back to the left to see what is normally on the first monitor. It's as if something is sliding between the two monitors on one display. How can I avoid this?
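    One assumption worth checking against the NVIDIA README rather than a known fix: the MetaMode syntax allows a panning domain to be pinned with an @WxH clause, which - if I read it right - should stop the single-monitor MetaMode from panning across the combined desktop:

        Option "metamodes" "DFP-0: 1920x1080_120 +0+0, DFP-2: 1680x1050_60 +1920+0; DFP-0: 1920x1080_120 @1920x1080 +0+0, NULL"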

    Read the article

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services are only listening on one of them, and to check them correctly I'd like to know whether it's possible to have multiple IP addresses configured for a single host in Icinga. Here's a minimal example:

    Remote server:
    eth0: 1.2.3.4 (public IP)
    eth1: 10.1.2.3 (private IP, secure tunnel)
    Apache listening on 1.2.3.4:80 (public only)
    OpenSSH listening on 10.1.2.3:22 (internal network only)
    Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:
    eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

        define host {
            use        generic-host
            host_name  server1
            alias      server1.gertvandijk.net
            address    10.1.2.3
        }

    this will not check the HTTP status correctly. And defining an additional host:

        define host {
            use        generic-host
            host_name  server1-public
            alias      server1.gertvandijk.net
            address    1.2.3.4
        }

    will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts so they show up as a single host, yet provide an easy configuration to check the services on their proper addresses. What is the most elegant, configuration-line-saving solution to this? I have read about several plugins available to work around this, but I can't figure out what the current way to address it is. Solutions go back to 2003, but I'm running Icinga 1.7.1, which is already capable of the address6 option, yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)
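    A sketch of the plain custom-variable approach that keeps a single host object (names are illustrative, and check_http_addr is a command you would define yourself):

        define host {
            use                 generic-host
            host_name           server1
            alias               server1.gertvandijk.net
            address             10.1.2.3          ; tunnel address, used for the default host check
            _public_address     1.2.3.4           ; custom variable, available as $_HOSTPUBLIC_ADDRESS$
        }

        define command {
            command_name    check_http_addr
            command_line    $USER1$/check_http -I $ARG1$
        }

        define service {
            use                   generic-service
            host_name             server1
            service_description   HTTP (public)
            check_command         check_http_addr!$_HOSTPUBLIC_ADDRESS$
        }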

    Read the article

  • Set one background to stretch across multi-monitor display?

    - by John Isaacks
    I have two monitors and my coworker has three. We were wondering if there was a way to set a single background image to stretch across all the monitors. Right now it's the same image but repeated, so it looks like the same image three times in a row. I would like it to be so that the three images/screens combine to show one image if that makes sense. If I can't set one image to stretch across the screens like that, is there a way to set a different background for each screen? Edit: I just wanted to clarify: I am not talking about one image stretching from his monitors to mine, just stretching across our own monitors.

    Read the article

  • Dual monitors on Windows - How do I set a different DPI or text size on each monitor?

    - by dlux
    My laptop is a 15" wide screen running at 1600x1050, and in addition to that I connect an external 19" LCD which runs at 1280x1024. The problem with this setup is that if I increase the text size to make the laptop screen readable, the text on the external LCD is huge. Normal text on the LCD results in tiny text on the laptop. What options do I have to get around this? I'm using Windows 7 and the laptop is a ThinkPad T61 with an nVidia NVS 140M video chipset. I cannot find any per-display setting in Windows or the nVidia control panel to resolve this.

    Read the article

  • One Apache server, multiple clients - best practices for config files?

    - by OttaSean
    First time user; please be gentle. :-) (And if you don't like my question I'd be grateful for a comment as to why...) I am doing a contract at a government server shop that provides web services for multiple client groups in other areas of the government. My employer has asked me to look into how other shops, in similar situations, handle configuration files, and whether there are any best practices on the subject. I'm pretty sure there are lots of installations out there running multiple VirtualHosts out of one Apache installation, but surprisingly I couldn't find anything online about how people handle config file layout, so I was hoping some of you wise folks on ServerFault might have some thoughts or pointers for me. The current setup - which seems logical to me - is that each client site has its own directory off the root, so:

        /client/tps-reports/
        /client/silly-walks/
        /client/ministry-of-magic/

    and so on - and each of those directories has a /htdocs, /cgi-bin, and /conf (among others). The main /etc/apache/httpd.conf only contains Include statements (and lots of comments), the last of which is:

        Include /etc/apache/vhosts/*.conf

    The vhosts directory contains symlinks:

        tpsrept.conf -> /client/tps-reports/conf/tpsrept.conf
        sillywk.conf -> /client/silly-walks/conf/sillywk.conf
        mom.conf     -> /client/ministry-of-magic/mom.conf

    Each of those .conf files contains the actual NameVirtualHost definition and a gigantic <VirtualHost 192.168.12.34> stanza, which contains all the stuff about the specific site. The idea is that clients have access to what's in their own /client/xx directory, so they can change stuff in the section of the config that is relevant to them. As I mentioned above, that seems fairly logical to me, but I'm wondering if any of you wise folks are aware of potential gotchas with this sort of layout, or have any other thoughts on why it is or isn't a good idea. In particular, how do other places do it? Is there a "best practice" for this sort of thing? Many thanks in advance for your time and any thoughts you all might have.
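    For completeness, a sketch of what one of those per-client conf files typically looks like in this layout (paths and hostnames invented to match the example above):

        # /client/tps-reports/conf/tpsrept.conf  (symlinked from /etc/apache/vhosts/)
        <VirtualHost 192.168.12.34:80>
            ServerName   tps-reports.example.gov
            DocumentRoot /client/tps-reports/htdocs
            ScriptAlias  /cgi-bin/ /client/tps-reports/cgi-bin/
            ErrorLog     /client/tps-reports/logs/error.log
            CustomLog    /client/tps-reports/logs/access.log combined
            <Directory /client/tps-reports/htdocs>
                Options -Indexes +FollowSymLinks
                AllowOverride AuthConfig FileInfo Limit
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>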

    Read the article

  • Unity 3D (with Nvidia driver) becomes very slow and laggy

    - by Graham
    How can I prevent my Unity 3D desktop from becoming slow after a while, given that I have an Nvidia Quadro NVS 290 graphics card in TwinView mode? The desktop starts out fast on login, but becomes slow / laggy / hesitant / high-latency after a while, the symptom being spikes in CPU usage by /usr/bin/X whenever I cause any graphical activity with the mouse or keyboard (e.g. typing, changing tabs, dragging windows). The desktop remains slow even with all windows (except htop in a Terminal) and extraneous processes killed. Detail: changing tabs in Terminal takes about a second, and X spikes to 76% CPU. As I type into Firefox, X spikes to 95% CPU. Dragging a Terminal window, X goes to 70% CPU. Basically, every graphical action sends the CPU usage of X through the roof.

    Device: Nvidia Quadro NVS 290
    Driver package: binary driver nvidia-current-updates (280.13-0ubuntu5)
    Dual monitors: pair of DELL UltraSharp 1908FP in TwinView (X screen 2560x1024)
    OS: fresh install of Ubuntu 11.10 amd64 Desktop with all updates
    Hardware: Dell Precision T5400 Workstation
    Pastebin of Xorg.0.log
    Pastebin of xorg.conf
    Pastebin of nvidia-xconfig -t output (easier to read than xorg.conf)
    Output of /usr/lib/nux/unity_support_test -p

    To obtain the following htop screenshot I typed "asdf" several times in this text box, alt-tabbed to Terminal and took a screenshot of the high X CPU usage. This also happens when Firefox is not running. The Quadro NVS 290 has "No" thermal sensor according to sensors-detect:

        Next adapter: NVIDIA i2c adapter 0 at 2:00.0 (i2c-0)
        Do you want to scan it? (YES/no/selectively):
        Client found at address 0x50
        Probing for `Analog Devices ADM1033'... No
        Probing for `Analog Devices ADM1034'... No
        Probing for `SPD EEPROM'... No
        Probing for `EDID EEPROM'... Yes
            (confidence 8, not a hardware monitoring chip)

    I tried the nouveau driver by disabling nvidia-current-updates under Additional Drivers, but Ubuntu and xrandr -q fail to detect the second monitor. This may be issue 737349. The funniest thing is that the Nouveau wiki says XRandR 1.2 dual-monitor is supported, so it should work with a second monitor.

    Read the article

  • 12.10 Quantal display issues using nvidiaXineramaInfoOverride

    - by AvatarKava
    After updating to 12.10 today, my xorg.conf doesn't seem to be respected by Quantal. I'm not sure if this is a 'bug' or just an adjustment I have to make due to changes in the OS. When logging in, it seems Ubuntu now recognizes only one 3840x1080 screen named "Matrox", and maximizing windows spans them across both screens. In 12.04, this configuration file successfully allowed me to override the data provided by my TripleHead2Go and maximize windows to a single monitor. Any ideas on where to start trying to debug this? After a bit of searching, I tried to make changes according to the update here: http://www.phoronix.com/scan.php?page=news_item&px=MTEyMDk Here's where the config file sits currently:

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Matrox"
            HorizSync       31.5 - 80.0
            VertRefresh     59.9 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GTX 260M"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "nvidiaXineramaInfo" "true"
            Option         "nvidiaXineramaInfoOrder" "CRT-0"
            #Option        "metamodes" "CRT: nvidia-auto-select +0+0"
            Option         "nvidiaXineramaInfoOverride" "1920x1080 +0+0, 1920x1080 +1920+0"
            Option         "Stereo" "0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Read the article

  • An XEvent a Day (11 of 31) – Targets Week – Using Multiple Targets to Debug Orphaned Transactions

    - by Jonathan Kehayias
    Yesterday’s blog post Targets Week – etw_classic_sync_target covered the ETW integration that is built into Extended Events and how the etw_classic_sync_target can be used in conjunction with other ETW traces to provide troubleshooting at a level previously not possible with SQL Server. In today’s post we’ll look at how to use multiple targets to simplify analysis of Event collection. Why Multiple Targets? You might ask why you would want to use multiple Targets in an Event Session with Extended...(read more)

    Read the article

  • MacBook Pro + OSX Lion + Samsung SA550 HDMI not playing nicely

    - by rabbid
    I bought a Samsung SA550 monitor as a second monitor for my 2009 MacBook Pro running OS X 10.7 Lion. The monitor has two inputs, VGA and HDMI. If I connect using VGA, everything is fine. If I connect using an HDMI-to-DVI cable and a DVI-to-Mini DisplayPort converter, I get static and flickering: [screenshot]. The DVI-to-Mini DisplayPort converter that I use: [photo]. I have used this converter a while ago with a different monitor that had a DVI port, so at least it used to work once upon a time. I am not sure whether this problem is because of that converter or not; I have never driven a monitor over HDMI before. I appreciate your help. Thank you!

    Read the article

  • CSS and HTML incoherences when declaring multiple classes

    - by Cesco
    I'm learning CSS "seriously" for the first time, but I've found the way you deal with multiple CSS classes in CSS and HTML quite incoherent. For example, I learned that if I want to declare multiple CSS classes with a common style applied to them, I have to write:

        .style1, .style2, .style3 { color: red; }

    Then, if I have to declare an HTML tag that has multiple classes applied to it, I have to write:

        <div class="style1 style2 style3"></div>

    And I'm asking why? From my personal point of view it would be more coherent if both could be declared using a comma to separate each class, or if both could be declared using a space; after all, IMHO we're still talking about multiple classes, in both CSS and HTML. I think it would make more sense if I could write this to declare a div with multiple classes applied:

        <div class="style1, style2, style3"></div>

    Am I missing something important? Could you explain to me if there's a valid reason behind these two different syntaxes?
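    A small example of how the two separators behave: the comma in CSS separates independent selectors (an OR across rules), while the space in the class attribute separates class tokens on one element; a comma inside class="" would just end up inside the tokens and break the match. Selectors chained without spaces require all the classes at once:

        <style>
          /* any element with style1 OR style2 OR style3 */
          .style1, .style2, .style3 { color: red; }

          /* only elements carrying style1 AND style2 AND style3 */
          .style1.style2.style3 { font-weight: bold; }
        </style>

        <div class="style1 style2 style3">matches both rules above</div>
        <div class="style1">matches only the first rule</div>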

    Read the article

  • Monitor SQL Server Replication Jobs

    - by Yaniv Etrogi
    The replication infrastructure in SQL Server is implemented using SQL Server Agent to execute the various components involved in the form of jobs (e.g. the LogReader agent job, Distribution agent job, Merge agent job). SQL Server jobs execute a binary executable file which is basically C++ code. You can download all the scripts for this article here.

    SQL Server Job Schedules
    By default each job has only one schedule, which is set to "Start automatically when SQL Server Agent starts". This schedule ensures that whenever the SQL Server Agent service is started, all the replication components are also put into action. This is OK and makes sense, but there is one problem with this default configuration that needs improvement - if for any reason one of the components fails, it remains down in a stopped state. Unless you monitor the status of each component, you will typically get to know about such a failure from a customer complaint, as a result of missing data or data that is not up to date at the subscriber level. Furthermore, having any of these components in a stopped state can lead to more severe problems if not corrected within a short time. The action required to improve on these default settings is in fact very simple: adding a second schedule, set as a daily recurring schedule that runs every 1 minute, does the trick. SQL Server Agent's scheduler module knows how to handle overlapping schedules, so if the job is already being executed by another schedule it will not get executed again at the same time. So, in the event of a failure, the failed job remains down for at most 60 seconds. Many DBAs are not aware of this capability and so search for more complex solutions, such as having an additional dedicated job running external code in VBS or another scripting language that detects replication jobs in a stopped state and starts them, but there is no need to seek such external solutions when what is needed can be accomplished by T-SQL code.

    SQL Server Jobs Status
    In addition to the 1-minute schedule we also want to ensure that key components of the replication are enabled, so I search for those components by their category and set their status to enabled in case they are disabled, by executing the stored procedure MonitorEnableReplicationAgents. The jobs that I typically handle are listed below, but you may want to extend this, so here is the query that returns all jobs along with their category:

        SELECT category_id, name FROM msdb.dbo.syscategories ORDER BY category_id;

    Distribution Cleanup
    LogReader Agent
    Distribution Agent

    Snapshot Agent Jobs
    By default, when a publication is created, a snapshot agent job also gets created with a daily schedule. I see more organizations where the snapshot agent job does not need to be executed automatically by the SQL Server Agent scheduler than organizations that need a new snapshot generated automatically. To ensure this setting is in place I created the stored procedure MonitorSnapshotAgentsSchedules, which disables snapshot agent jobs and also deletes the job schedule. It is worth mentioning that when the publication property immediate_sync is turned off, the snapshot files are not created when the Snapshot agent is executed by the job. You control this property when the publication is created, with a parameter called @immediate_sync passed to sp_addpublication; for an existing publication you can use sp_changepublication.

    Implementation
    The scripts assume the existence of a database named PerfDB.

    Steps: run the scripts to create the stored procedures in the PerfDB database, then create a job that executes the stored procedures every hour.

        -- Verify that the 1_Minute schedule exists.
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 13; /* LogReader */

        -- Verify all replication agents are enabled.
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 13; /* LogReader */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 11; /* Distribution clean up */

        -- Verify that Snapshot agents are disabled and have no schedule.
        EXEC PerfDB.dbo.MonitorSnapshotAgentsSchedules;

    Want to read more about replication? Check out the replication posts on my blog.
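    As a sketch of the "second schedule" idea (the parameter values are the standard msdb ones for an every-minute daily schedule, and the job name is a placeholder - this is not taken from the PerfDB scripts), attaching it to an existing replication agent job looks roughly like this:

        -- Add a "runs every 1 minute, daily" schedule to an existing replication agent job,
        -- so a stopped agent is restarted within at most 60 seconds.
        EXEC msdb.dbo.sp_add_jobschedule
            @job_name             = N'YOUR-DISTRIBUTION-AGENT-JOB',  -- hypothetical job name
            @name                 = N'1_Minute',
            @enabled              = 1,
            @freq_type            = 4,      -- daily
            @freq_interval        = 1,      -- every day
            @freq_subday_type     = 4,      -- units of minutes
            @freq_subday_interval = 1,      -- every 1 minute
            @active_start_time    = 000000; -- starting at midnight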

    Read the article

  • Good practices while working with multiple game engines, porting a game to a new engine

    - by Mahbubur R Aaman
    I have to work with multiple game engines, such as:

    Cocos2d
    Unity3D
    Galaxy

    While working with multiple game engines, what practices should I follow? EDIT: Is there any guideline to follow? That would be helpful for anyone working with multiple game engines. EDIT: When a game made with Cocos2d has done well on the App Store, our goal is to port it to other platforms, and for that we use Unity3D. What should we do in that situation?

    Read the article
