Search Results

Search found 24253 results on 971 pages for 'multiple monitor'.


  • Using Process Monitor to track registry changes

    - by CChriss
    It seems many people like using Process Monitor to see what changes are being made to the registry during a process, so I downloaded it. I want to see what changes are made in the registry by some config changes I'm making on my computer, so I can write them into a VBS script to apply them easily. Can someone tell me how to drive Process Monitor to capture the info? I don't see how to do it in the Help. I'm using Windows 7 Home Premium 64-bit.
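
    A hedged sketch of one way to do this (not from the original thread): Process Monitor ships with documented command-line switches, so you can script the capture itself and then replay the registry writes you discover. The install path, file names, and registry key below are illustrative assumptions.

        # Sketch: capture a registry trace with Process Monitor's CLI switches,
        # then replay a discovered setting. All paths and names here are illustrative.
        $procmon = 'C:\Tools\Procmon.exe'            # assumed install location

        # Start logging to a backing file with the UI out of the way
        # (a saved filter can be pre-loaded with /LoadConfig filters.pmc)
        & $procmon /AcceptEula /Quiet /Minimized /BackingFile C:\Temp\reg-trace.pml

        # ... make the configuration change you want to trace, then stop capturing:
        & $procmon /Terminate

        # Convert the trace to CSV so the registry operations can be inspected
        & $procmon /OpenLog C:\Temp\reg-trace.pml /SaveAs C:\Temp\reg-trace.csv

        # Once you know which value changed, scripting it is a one-liner; the VBS
        # equivalent is CreateObject("WScript.Shell").RegWrite(...)
        Set-ItemProperty -Path 'HKCU:\Software\ExampleVendor\ExampleApp' `
                         -Name 'ExampleSetting' -Value 1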


  • Personal Browsing Monitor Software [closed]

    - by jmadden93
    Does anyone know of any personal browsing monitor software? I'd like to be able to monitor my own browsing habits and the time I spend on entertainment vs. work vs. educational sites, something that offers more than simply looking at the history feature built into browsers. It would be nice if it gave you a breakdown of how much time you spend on certain categories of sites, like social media vs. video vs. news, productivity, etc. I think it would be useful to know how one spends one's time.


  • Dell U2410 Monitor (DisplayPort) & Graphics Card

    - by Anj
    I was looking for a 24" monitor and bought the Dell 24" UltraSharp, which has DisplayPort (a new term for me); it seems to be equivalent to HDMI. Now I would like to enhance the display capabilities of my laptop and desktop (both have VGA output as of now). A couple of questions come to mind: Is there a single HD graphics card I could use for both desktop and laptop? I understand it would have to be external, but if that is costly I would stick with an internal graphics card for my desktop. Please recommend an external card if it is cost-effective; otherwise I would go for an internal one (budget is around $70 or Rs 4000). Is there an HD card I could use for both HD video and 5.1-channel audio output? I generally use the computer for office work, listening to music and watching movies, not gaming.


  • Remote Desktop AND monitor fail on restart (Win2008R2)

    - by Wesley
    I am in the process of building a small 3-server farm. Each machine is running Windows Server 2008 R2. As is normal, I am installing patch after patch to bring the machines up to snuff. Almost every time I restart a machine, when I try to remote in I get the log-in window, but then almost immediately I get the message that my remote session was ended. If I physically walk over to the machine and plug in a monitor and keyboard, I see nothing. If I leave the keyboard and monitor plugged in and restart the machine by force, the computer reboots just fine; when Windows starts, I get no error message about Windows not starting or being shut off unexpectedly. Once I log into the machine physically at the keyboard, I can then remote into it. Very confused. This happens on all 3 machines, and these machines have different hardware.


  • Monitor disk I/O for specific drive in OS X

    - by raffi
    In my MacBook Pro, I have two internal drives, and I've connected a third drive via a USB enclosure. I am currently doing a secure wipe of the external drive and was interested in seeing what the disk I/O was for that particular drive, but when I use Activity Monitor I only see the total disk usage for all drives combined. Is there any way to monitor a specific drive's total I/O, preferably via a built-in or free method? I don't want to filter by process ID; I just want to filter by mounted disk.


  • Using external monitor and screen resolution?

    - by Johnydep
    This might sound like a very basic question, but I need some help. I have a Lenovo IdeaPad Y560p, which offers a maximum resolution of 1366 x 768. Now I am planning to buy an external monitor, and I have short-listed some Full HD monitors, but my understanding is that even with a Full HD monitor, I would only achieve the best resolution my laptop supports. Is that true? And if so, should I really go for an HD monitor or settle for a lower-resolution model? Secondly, is there any way to increase the resolution with a simple upgrade, or is that not possible without voiding the warranty?


  • Recommend a desktop app to monitor port 80 on servers

    - by busyone
    Hello, I did some googling but was unsuccessful, so I am posting here. I am looking for an app (preferably free/open source) to monitor my servers on port 80 and send me an email/text when some predefined timeout triggers. I could probably write something in VB to do this, but I am buried with projects; why reinvent the wheel? I know there are services out there that want you to pay $5/month to do this. I was thinking of a Windows app that would sit on my computer and simply monitor a predefined list of IPs on port 80. Thanks!
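
    No answer is preserved here, but the core of such a tool is small. Below is a minimal sketch in PowerShell (the host list, timeout, and mail settings are assumptions; the same logic ports directly to the VB app the asker mentions): attempt a TCP connection to port 80 with a timeout, and send a mail alert on failure. Scheduled every few minutes with Task Scheduler, this covers what the $5/month services do.

        # Minimal port-80 watcher: connect with a timeout, alert on failure.
        # The host list, timeout, and mail settings are illustrative assumptions.
        $hosts     = '192.0.2.10', '192.0.2.11'   # servers to watch
        $timeoutMs = 5000

        foreach ($h in $hosts) {
            $client    = New-Object System.Net.Sockets.TcpClient
            $attempt   = $client.BeginConnect($h, 80, $null, $null)
            $connected = $attempt.AsyncWaitHandle.WaitOne($timeoutMs) -and $client.Connected
            $client.Close()

            if (-not $connected) {
                Send-MailMessage -SmtpServer 'smtp.example.com' `
                    -From 'monitor@example.com' -To 'admin@example.com' `
                    -Subject "Port 80 down on $h" `
                    -Body "No TCP connection to ${h}:80 within $timeoutMs ms."
            }
        }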


  • How to indicate to the user that a command affects a subset of a multiple selection?

    - by Zamboni
    Here is an example that illustrates my question. I have a program that lists 1000 items. I select 10 of the 1000 items. The program enables a button indicating that a command is available for my selection. I click the button, and a window appears. I make some change in the window and click OK. The command changes 5 of the 10 items in my multiple selection, and those 5 changed items now reflect a modified state in my list. My question is: how do I indicate to the user, before they click OK, that the command affects only a subset of a multiple selection? Can anyone cite examples of existing products that handle this scenario well?


  • jquery - how is multiple selection working in this example?

    - by hatorade
    The relevant snippet of HTML:

    <span class="a">
        <div class="fieldname">Question 1</div>
        <input type="text" value="" name="q1" />
    </span>

    The relevant jQuery:

    $.each($('.a'), function(){
        $thisField = $('.fieldname', $(this));
    });

    What exactly is being assigned to $thisField? If my understanding of multiple selectors in jQuery is correct, it should be grabbing the outer <span> element AND the inner <div> element. But for some reason, if I use $thisField.prepend("hi"); it ends up putting "hi" right before the text "Question 1", but not before the <div>. I thought multiple selectors would grab both elements, and that prepend() would add "hi" to the beginning of BOTH elements, not just the <div>.


  • Keep it Professional – Multiple Environments

    - by AjarnMark
    I have certainly been reading blogs a whole lot more than writing them the last several weeks, and it’s about time I got back to writing.  I have been collecting several topics and references for blog posts…some of which will probably just never get written as the timeliness of the topics fade over time.  Nonetheless, I’m back, and I think it is time to revive my Doing Business Right series, this time coming from the slant of managing a development team rather than the previous angle of being self-employed.  First up: separating Dev, Test, and Prod. A few months ago, Colin Stasiuk (@BenchmarkIT) wrote a great post about separating your Dev, Test/UAT, and Prod environments.  This post covers all the important points such as removing Developer access from both PROD and UAT, and the importance of proper deployment (a.k.a. promotion) procedures.  I won’t repeat it all here, go read the original!  But what I do want to address is what I believe to be the #1 excuse people use for not having separate environments:  Money.  I discussed this briefly in my comment on Colin’s post at the time, but let me repeat it here and expand on it a bit. Don’t let the size of your company or the size of its budget dictate whether you do things professionally or not.  I am convinced that most developers and development teams would agree that it is a best practice to have separate environments for development, testing, and production (a.k.a. Live).  So why don’t they?  Because they think that it means separate servers which means more money.  While having separate physical servers for the different environments would be ideal, it is not an absolute requirement in order to make this work.  Here are a few ideas: Use multiple instances of SQL Server and multiple Web Sites with Headers or Ports.  For no additional fees* you can install multiple instances of SQL Server on the same machine.  This gives you a nice separation, allowing you to even use the same database names as will appear in PROD, yet isolating the data and security access.  And in IIS, you can create multiple Web Sites on the same server just by using Host Headers or different port numbers to separate them.  This approach does still pose the risk of non-Prod environments impacting performance on Prod, but when your application is busy enough for that to be a concern, you can probably afford one of the other options. Use desktop PCs instead of servers.  Instead of investing in full server-grade hardware, you can mimic the separate environments on old desktop PCs and at least get functional equivalency, if not performance matching.  The last I checked, Microsoft did not require separate licensing for SQL Server if that installation was used exclusively for dev or test purposes*.  There may be some version or performance differences between this approach and what you have in Prod, but you have isolated test from impacting Prod resources this way. Virtualization.  This is of course one of the hot topics of the day, and I would be remiss if I did not suggest this.  It is quite easy these days to setup virtual machines so that, again, your environments are fairly isolated from one another, and you retain all the security and procedural benefits of having separate environments. So the point is, keep your high professional standards intact.  You don’t need to compromise on using proper procedure just because you work in a small company with a small budget.  Keep doing things the right way! By the way, where I work, our DEV environment is not on a server.  
All development is done on the developer’s individual workstation where it can be isolated from other developers’ work for the duration of writing the code, but also where the developers have to reconcile (merge) differences in code under concurrent development.  This usually means that each change is executed multiple times (once per developer to update their environments with the latest changes from others), giving us an extra, informal test deployment before even going to the Test/UAT server.  It also means that if the network goes down, the developers can continue to hum along because they are not dependent on networked resources.  In fact, they will likely be even more productive because they aren’t being interrupted by email…but that’s another post I need to write. * I am not a lawyer, nor a licensing specialist, but it appeared to be so the last time I checked.  When in doubt, consult an expert on the topic.
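
To make the host-header idea above concrete, here is a small sketch of mine (not from the original post; the site names, host names, and paths are invented) using the IIS 7 WebAdministration module to stand up separate environments on one web server. The SQL Server side is analogous: named instances (e.g. SERVER\DEV) isolate data and security while letting every environment keep identical database names.

    # Sketch: three environments on one IIS server, separated by host header.
    # Site names, host names, and paths are illustrative assumptions.
    Import-Module WebAdministration

    foreach ($env in 'dev', 'test', 'prod') {
        New-Website -Name "MyApp-$env" `
                    -Port 80 `
                    -HostHeader "$env.myapp.example.local" `
                    -PhysicalPath "C:\inetpub\myapp-$env"
    }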


  • Script to monitor free space in Hdd

    - by s.mihai
    I have a server app that crashes when the HDD free space is a multiple of 4 GB (on Windows Server 2003). In general I keep track of that weekly myself, since I use the machine from time to time. Can you point me to an app or script that copies some larger files from one folder to another to get the free space out of the multiple-of-4-GB range? (I don't want to install PowerShell; is this doable?) Best regards, Mike
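
    The asker wants to avoid installing PowerShell, but the logic is easiest to show there; the same free-space query works from VBScript via WMI, and the padding file can be created with plain fsutil, which ships with Windows Server 2003. The drive letter, the 100 MB "danger zone", and the padding size are all illustrative assumptions.

        # Sketch: if free space on C: sits too close to a multiple of 4 GB,
        # create or remove a padding file to push it out of that range.
        $gb4  = 4GB
        $zone = 100MB
        $disk = Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='C:'"
        $free = [long]$disk.FreeSpace
        $rem  = $free % $gb4

        if ($rem -lt $zone -or ($gb4 - $rem) -lt $zone) {
            # Reserve ~200 MB so free space moves away from the 4 GB boundary
            # (fsutil is built in, so nothing needs to be installed)
            & fsutil file createnew C:\padding.bin 209715200
        }
        else {
            # Out of the danger zone: release the padding if it exists
            Remove-Item C:\padding.bin -ErrorAction SilentlyContinue
        }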


  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I built (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing the results in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only part of their execution time is subject to this lock, so you still get a better response time in these scenarios (this is the case for the provisioning of a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation has completed on Azure. So I wrote a script that you run a few minutes after removing the VMs; it deletes the disks (and VHDs) no longer related to a VM. I check that the disks were associated with the original image name used to provision the VMs, so I don't remove other disks deployed by other batches that I might want to preserve. These examples are specific to my scenario; if you need more complex configurations you will have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:

    ProvisionVMs: provisions many VMs in parallel starting from the same image, creating one service for each VM.
    RemoveVMs: removes all the VMs in parallel; it also removes the service created for each VM.
    StartVMs: starts all the VMs in parallel.
    StopVMs: stops all the VMs in parallel.
    RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    # Name of storage account (where VMs will be deployed)
    $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

    function ProvisionVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            $Location = "Copy the Location property you get from Get-AzureStorageAccount"
            $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
            $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
            $Password = "Password"      # Write the password of the administrator account in the new VM
            $Image = "Copy the ImageName property you get from Get-AzureVMImage"
            # You can list your own images using the following command:
            # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

            New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
        }
    }

    # Set the proper storage - you might remove this line if you have only one storage in the subscription
    Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list provisions one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    ProvisionVM "test10"
    ProvisionVM "test11"
    ProvisionVM "test12"
    ProvisionVM "test13"
    ProvisionVM "test14"
    ProvisionVM "test15"
    ProvisionVM "test16"
    ProvisionVM "test17"
    ProvisionVM "test18"
    ProvisionVM "test19"
    ProvisionVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup of jobs
    Remove-Job *

    # Displays batch completed
    echo "Provisioning VM Completed"

    RemoveVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function RemoveVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Remove-AzureService -ServiceName $VmName -Force -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list removes one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    RemoveVM "test10"
    RemoveVM "test11"
    RemoveVM "test12"
    RemoveVM "test13"
    RemoveVM "test14"
    RemoveVM "test15"
    RemoveVM "test16"
    RemoveVM "test17"
    RemoveVM "test18"
    RemoveVM "test19"
    RemoveVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Remove VM Completed"

    StartVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StartVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list starts one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StartVM "test10"
    StartVM "test11"
    StartVM "test12"
    StartVM "test13"
    StartVM "test14"
    StartVM "test15"
    StartVM "test16"
    StartVM "test17"
    StartVM "test18"
    StartVM "test19"
    StartVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Start VM Completed"

    StopVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StopVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list stops one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StopVM "test10"
    StopVM "test11"
    StopVM "test12"
    StopVM "test13"
    StopVM "test14"
    StopVM "test15"
    StopVM "test16"
    StopVM "test17"
    StopVM "test18"
    StopVM "test19"
    StopVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Stop VM Completed"

    RemoveOrphanDisks

    $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
    # You can list your own images using the following command:
    # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

    # Remove all orphan disks coming from the image specified in $ImageName
    Get-AzureDisk |
        Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
        Remove-AzureDisk -DeleteVHD -Verbose


  • Open Multiple Sites Without Reopening the Menus in Firefox

    - by Asian Angel
    Are you frustrated with having to reopen your menus for each website that you want to view? Now you can keep those menus open while opening multiple websites with the Stay-Open Menu extension for Firefox.

    Stay-Open Menu in Action: You can start using the extension as soon as you have installed it… simply access your favorite links in the Bookmarks Menu, Bookmarks Toolbar, Awesome Bar, or History Menu and middle-click on the appropriate entries. Here you can see our browser opening the Productive Geek website while the Bookmarks Menu is still open. As soon as you left-click on a link or click outside the menus, they close normally as before. Note: middle-clicked links open in new tabs. The only time during our tests that a newly opened link remained in the background was for links opened from the Awesome Bar, but as soon as the Awesome Bar was closed the new tabs automatically came to the front. A link being opened from the History Menu… still open while the webpage is loading.

    Options: The options are simple to sort through… enable or disable the additional “stay open” functions and enable automatic menu closing if desired.

    Conclusion: If you get frustrated with having to reopen menus to access multiple webpages at one time, you might want to give this extension a try.

    Links: Download the Stay-Open Menu extension (Mozilla Add-ons)


  • Using the onboard VGA output with a PCIe video card (both nVidia)

    - by sebikul
    I have two video cards: an onboard nVidia 6150SE nForce 430, and a PCIe nVidia GeForce GT 220 with 1 GB DDR2 RAM. I have already configured the PCIe card to drive dual monitors using the VGA and HDMI ports, but now I want to add a third monitor on the onboard VGA port. I have managed to enable the onboard graphics processor, which takes 400 MB of RAM, but I can't manage to use it: nvidia-settings does not detect it, as if it were not usable (but it is there). My questions are the following:

    How can I get the onboard VGA display to work together with the PCIe graphics card?
    If possible, how can I recover the 400 MB the onboard card is taking (even while not being used), or how can I make it use the PCIe card's available memory?

    System details: Linux 2.6.35-28-generic i686, Ubuntu 10.10 (all updates installed), NVIDIA driver version 260.19.06 (official). If more info is needed please let me know.

    Here is the lspci output when the onboard card is disabled:

    00:00.0 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a1)
    00:01.0 ISA bridge: nVidia Corporation MCP61 LPC Bridge (rev a2)
    00:01.1 SMBus: nVidia Corporation MCP61 SMBus (rev a2)
    00:01.2 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a2)
    00:01.3 Co-processor: nVidia Corporation MCP61 SMU (rev a2)
    00:02.0 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:02.1 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:04.0 PCI bridge: nVidia Corporation MCP61 PCI bridge (rev a1)
    00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2)
    00:06.0 IDE interface: nVidia Corporation MCP61 IDE (rev a2)
    00:07.0 Bridge: nVidia Corporation MCP61 Ethernet (rev a2)
    00:08.0 IDE interface: nVidia Corporation MCP61 SATA Controller (rev a2)
    00:09.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0b.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0c.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 6150SE nForce 430] (rev a2)
    00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
    00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
    00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
    00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
    01:09.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet Pro 100 (rev 08)
    02:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
    02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)

    And this is when both are enabled:

    00:00.0 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a1)
    00:01.0 ISA bridge: nVidia Corporation MCP61 LPC Bridge (rev a2)
    00:01.1 SMBus: nVidia Corporation MCP61 SMBus (rev a2)
    00:01.2 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a2)
    00:01.3 Co-processor: nVidia Corporation MCP61 SMU (rev a2)
    00:02.0 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:02.1 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:04.0 PCI bridge: nVidia Corporation MCP61 PCI bridge (rev a1)
    00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2)
    00:06.0 IDE interface: nVidia Corporation MCP61 IDE (rev a2)
    00:07.0 Bridge: nVidia Corporation MCP61 Ethernet (rev a2)
    00:08.0 IDE interface: nVidia Corporation MCP61 SATA Controller (rev a2)
    00:09.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0b.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0c.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 6150SE nForce 430] (rev a2)
    00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
    00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
    00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
    00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
    01:09.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet Pro 100 (rev 08)
    02:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
    02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)

    Output of lshw -class display:

    *-display
         description: VGA compatible controller
         product: GT216 [GeForce GT 220]
         vendor: nVidia Corporation
         physical id: 0
         bus info: pci@0000:02:00.0
         version: a2
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
         configuration: driver=nvidia latency=0
         resources: irq:18 memory:df000000-dfffffff memory:c0000000-cfffffff memory:da000000-dbffffff ioport:ef80(size=128) memory:def80000-deffffff
    *-display
         description: VGA compatible controller
         product: C61 [GeForce 6150SE nForce 430]
         vendor: nVidia Corporation
         physical id: d
         bus info: pci@0000:00:0d.0
         version: a2
         width: 64 bits
         clock: 66MHz
         capabilities: pm msi vga_controller bus_master cap_list rom
         configuration: driver=nvidia latency=0
         resources: irq:22 memory:dd000000-ddffffff memory:b0000000-bfffffff memory:dc000000-dcffffff memory:deb40000-deb5ffff

    If what I'm looking for is not possible, please tell me, so I can disable the onboard card and recover those 400 MB of wasted RAM. Thanks for your help!


  • Quickly Add Watermark To Multiple PDF Files Using “Batch PDF Watermark”

    - by Kavitha
    Want to add a watermark to your PDF files with a single click? You can use the freeware Batch PDF Watermark, a super cool application that lets you add image or text watermarks to multiple files at a time. The Office 2010-style ribbon user interface is very easy to use and provides many options to configure watermark properties: font styles, positioning, transparency levels, rotation and scaling of the watermark image, and so on. Before running the watermark process, you can even preview it. To select multiple PDF files to watermark you can use the “Add Files” option to hand-pick the files you need, or the “Add Folder” option to choose all the PDF files in a folder. Download Batch PDF Watermark [via liferocks]


  • no dual screens with 11.10 and Asus m4A89 GTD Pro

    - by Alex
    I'm having an issue getting dual monitors working on Kubuntu 11.10. I have an Asus M4A89GTD PRO/USB3 motherboard with an integrated ATI HD 4290 graphics chip. When I try to enable multiple monitors through the system settings, it says "This module is only for configuring systems with a single desktop spread across multiple monitors. You do not appear to have this configuration." I had previously attempted to fix this problem with another installation of Ubuntu 11.10, but ended up having to reinstall because I messed up the Software Center dependencies. After I installed Ubuntu the first time, a notification asked me to install an ATI graphics driver. I installed that driver, restarted, and dual monitors did not work. That was when I went to the ATI site and attempted to install the fglrx driver. When I tried to run the shell script for the fglrx driver, it said I had a previous version of an fglrx driver installed and needed to remove it in order to install the new one. So I looked up a tutorial on how to remove it and found an apt-get remove command, which I ran. Then I was able to install the new driver. Dual monitors still did not work, and I couldn't use the Software Center any more because it was corrupted and unable to repair itself. So I just reinstalled Ubuntu, and now I'm trying to go about this the correct way. Does anyone have this same configuration, and which driver works for you?


  • Help with complex MVVM (multiple views)

    - by jsjslim
    I need help creating view models for the following scenario:

    • Deep, hierarchical data
    • Multiple views for the same set of data
    • Each view is a single, dynamically-changing view, based on the active selection
    • Depending on the value of a property, display different types of tabs in a tab control

    My questions: should I create a view-model representation for each view (VM1, VM2, etc.)?

    1. Yes:
       a. Should I model the entire hierarchical relationship? (i.e., SubVM1, HouseVM1, RoomVM1)
       b. How do I keep all hierarchies in sync? (e.g., adding/removing nodes)
    2. No:
       a. Do I use a huge, single view model that caters for all views?

    Here is an example of a single view.
    Figure 1: Multiple views updated based on the active room. Notice the tab control.
    Figure 2: Different active room. Multiple views updated. Tab control items changed based on the object's property.
    Figure 3: Different selection type. The entire view changes.


  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration/QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need several different Oracle instances for their applications?


  • Send Multiple InMemory Attachments Using FileUpload Controls

    - by bullpit
    I wanted to give users the ability to send multiple attachments from the web application. I did not want anything fancy, just a few FileUpload controls on the page and then send the email. So I dropped five FileUpload controls on the web page and created a function to send email with multiple attachments. Here's the code:

    public static void SendMail(string fromAddress, string toAddress, string subject, string body, HttpFileCollection fileCollection)
    {
        // CREATE THE MailMessage OBJECT
        MailMessage mail = new MailMessage();

        // SET ADDRESSES
        mail.From = new MailAddress(fromAddress);
        mail.To.Add(toAddress);

        // SET CONTENT
        mail.Subject = subject;
        mail.Body = body;
        mail.IsBodyHtml = false;

        // ATTACH FILES FROM THE HttpFileCollection
        for (int i = 0; i < fileCollection.Count; i++)
        {
            HttpPostedFile file = fileCollection[i];
            if (file.ContentLength > 0)
            {
                Attachment attachment = new Attachment(file.InputStream, Path.GetFileName(file.FileName));
                mail.Attachments.Add(attachment);
            }
        }

        // SEND MESSAGE
        SmtpClient smtp = new SmtpClient("127.0.0.1");
        smtp.Send(mail);
    }

    And here's how you call the method:

    protected void uxSendMail_Click(object sender, EventArgs e)
    {
        HttpFileCollection fileCollection = Request.Files;
        string fromAddress = "[email protected]";
        string toAddress = "[email protected]";
        string subject = "Multiple Mail Attachment Test";
        string body = "Mail Attachments Included";
        HelperClass.SendMail(fromAddress, toAddress, subject, body, fileCollection);
    }


  • Is using multiple canvas objects a good practice?

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript and have to build our own game framework for it. We have run into some difficulties and would like to ask for advice. We have a "Stage" object, which represents the root of our game and is a global div wrapper. The stage can contain multiple "Scenes", which are also div elements; we would implement a Scene for the playing state, one for pause, etc., and switch between them. Each scene can contain multiple "Layers", each representing a canvas. These layers contain "ObjectEntities", which represent images or other shapes like rectangles. Each entity has its own temporary canvas, so that one entity can draw an image while another contains a rectangle. We set an activeScene in our Stage, so when the game is played, only the active scene is drawn. Calling activeScene.draw() calls all sublayers to draw, which in turn draw their entities (calling drawImage(entity.canvas)). But is this good practice, having multiple canvases to draw? On each game loop every layer context is cleared and drawn again. For example, we have a static background layer… wouldn't it be more useful to draw it once and not clear and redraw it every time? Or should we use a global canvas, for example in the Stage, and just use that canvas to draw? But we thought this would be too expensive...


  • Need efficient way to keep enemy from getting hit multiple times by same source

    - by TenFour04
    My game's a simple 2D one, but this probably applies to many types of scenarios. Suppose my player has a sword, or a gun that shoots a projectile that can pass through and hit multiple enemies. While the sword is swinging, there is a duration during which I check for the sword making contact with any enemy on every frame. But once an enemy is hit by that sword, I don't want him to keep getting hit over and over as the sword follows through. (I do want the sword to continue checking whether it is hitting other enemies.) I've thought of a couple of different approaches (below), but they don't seem like good ones to me. I'm looking for a way that doesn't force cross-referencing (I don't want the enemy to have to send a message back to the sword/projectile), and I'd like to avoid generating/resetting multiple array lists with every attack.

    1. Each time the sword swings it generates a unique ID (maybe by just incrementing a global static long). Every enemy keeps a list of IDs of swipes or projectiles that have already hit it, so the enemy knows not to be hurt by the same thing multiple times. Downside: every enemy may end up with a big list to search. Also, projectiles and sword swipes would have to broadcast their end-of-life to all enemies and cause a search-and-remove on every enemy's list. Seems kind of slow.

    2. Each sword swipe or projectile keeps its own list of enemies that it has already hit, so it knows not to apply damage again. Downsides: I have to generate a new list (probably pull one from a pool and clear it) every time a sword is swung or a projectile is shot. This also breaks modularity, because now the sword has to send a message to the enemy, and the enemy has to send a message back to the sword. It seems to me that two-way streets like this are a great way to create very difficult-to-find bugs.


  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function with its own logic to include the files for the called class? As the number of places where include files may reside expands, so too will the logic of a single function. If I break it into multiple functions I can add functions as new groupings are added, but the functions will be near-copies of each other with minor alterations. Currently I have a tool with a single registered autoload function that picks apart the class name and tries to predict where the file is, and then includes it. Due to the project's naming conventions this has been pretty simple:

    if has namespace
        if in template namespace look in Root\Templates
        else look in Root\Modules\Namespace
    else look in Root\System
    if file exists include

    But we are starting to include Interfaces and Traits in our codebase, and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name with increasingly complex logic, we are looking at having multiple autoload functions registered. But each one follows the same pattern, and any time I see that I get paranoid about code duplication:

    function systemAutoloadFunc
        logic to create probable filename
        if filename exists in system include it and return true
        else return false

    function moduleAutoloadFunc
        logic to create probable filename
        if filename exists in modules include it and return true
        else return false

    Every autoload function will follow that pattern, and the last step of each function (if filename exists, include and return true, else return false) is going to be identical code. This makes me paranoid about having to update it across the board if the file_exists/include pattern we are using ever changes. Or is it just that, paranoia, and the multiple functions with some identical code are the best option?

