Search Results

Search found 14216 results on 569 pages for 'nvidia settings'.

Page 83/569 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • What are the default mount settings for mount / fstab?

    - by John Craick
    What are the default mounting options for a non-root partition? The man entry for mount says:

        defaults - use default options: rw, suid, dev, exec, auto, nouser, and async.

    ... so that might be what we expect to see. But, unless I'm missing something, that's not what happens. I have an ext3 partition labelled "NewHome20G" which is seen as /dev/sdc6 by the system. This we can see from:

        root@john-pc1204:~# blkid | grep NewHome20G
        /dev/sdc6: LABEL="NewHome20G" UUID="d024bad5-906c-46c0-b7d4-812daf2c9628" TYPE="ext3"

    I have an entry in fstab as follows:

        root@john-pc1204:~# cat /etc/fstab | grep NewHome
        LABEL=NewHome20G /media/NewHome20G ext3 rw,nosuid,nodev,exec,users 0 2

    Note the option settings that are specified in that fstab line. Now I look at how the partition is actually mounted after boot up:

        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    ... so, when the filesystem gets mounted, the exec & users options I specified seem to have been ignored. Just to be sure, I unmount sdc6, remount it and look at the mount options again:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    ... same result. Now I unmount the partition again, remount it specifying the exec option and look at the result:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6 -o exec
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,nosuid,nodev) [NewHome20G]

    ... and here the exec option has finally taken effect and the noexec setting has vanished. Just for interest, I re-mount the partition with the defaults option:

        root@john-pc1204:~# umount /dev/sdc6
        root@john-pc1204:~# mount /dev/sdc6 -o defaults
        root@john-pc1204:~# mount -l | grep sdc6
        /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G]

    The noexec is back, so it looks very like rw,noexec,nosuid,nodev are the default options, which is NOT what man says. Why does this matter? I have a folder full of useful scripts stored on a data disk. Because that disk is mounted noexec, those scripts won't run, even though they have all been set with chmod 777. I can work round this in several ways, but it's disappointing that the man entry seems to be wrong. Have I missed something obvious here, or have the default options in Ubuntu changed from what they were a few versions ago?
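
    A hedged note and a quick check: according to mount(8), the users option itself implies noexec, nosuid and nodev, and options are applied left to right, so in the fstab line above the later users re-applies noexec over the earlier exec. Reordering the options so exec comes after users may be all that's needed; the sketch below reuses the label and mount point from the question, and /proc/mounts (or findmnt, where util-linux provides it) is the authoritative list of options the kernel actually applied.

        # fstab sketch: put exec after users so it is not overridden (assumption: same ext3 partition and mount point)
        LABEL=NewHome20G /media/NewHome20G ext3 rw,nodev,nosuid,users,exec 0 2

        # Compare what fstab requests with what the kernel actually applied
        grep NewHome20G /etc/fstab
        grep /media/NewHome20G /proc/mounts
        findmnt --target /media/NewHome20G -o TARGET,SOURCE,FSTYPE,OPTIONS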

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in: two Nvidia GTX 480s (one at x16 and one at x8) and one Nvidia GTX 460 running at x8. Is there some way, either by a function call in C or an option to lspci, that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks! Note: here is what lspci -vvv dumps out for the two GTX 480s; I don't see any differences that pertain to bus speed.

        03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0
            Interrupt: pin A routed to IRQ 16
            Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
            Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
            Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
            Region 5: I/O ports at df00 [disabled] [size=128]
            [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
            Capabilities: <access denied>
            Kernel driver in use: nvidia
            Kernel modules: nvidia, nvidiafb, nouveau

        03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Interrupt: pin B routed to IRQ 5
            Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
            Capabilities: <access denied>

        04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0
            Interrupt: pin A routed to IRQ 16
            Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
            Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
            Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
            Region 5: I/O ports at cf00 [size=128]
            [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
            Capabilities: <access denied>
            Kernel driver in use: nvidia
            Kernel modules: nvidia, nvidiafb, nouveau

        04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin B routed to IRQ 5
            Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>

    And the only differences I see relate specifically to the memory mapping:

        myComputer:~> diff card1 card2
        3c3
        < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        ---
        > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        7,11c7,11
        < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
        < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
        < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
        < Region 5: I/O ports at df00 [disabled] [size=128]
        < [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
        ---
        > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
        > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
        > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
        > Region 5: I/O ports at cf00 [size=128]
        > [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
        18c18
        < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        ---
        > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        19a20
        > Latency: 0, Cache Line Size: 64 bytes
        21c22
        < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
        ---
        > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
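
    The negotiated link width lives in the PCI Express capability block, which is exactly the part lspci prints as "Capabilities: <access denied>" when it is not run as root. A hedged sketch, reusing the bus addresses from the dump above:

        # Link capability vs. the width actually negotiated, per card (needs root)
        sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
        sudo lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'

        # On newer kernels the same information is exposed through sysfs
        cat /sys/bus/pci/devices/0000:03:00.0/current_link_width
        cat /sys/bus/pci/devices/0000:03:00.0/max_link_width

    A CUDA program can then pick the card by matching its PCI bus ID (for example via cudaDeviceGetByPCIBusId, if the toolkit in use provides it).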

    Read the article

  • Nautilus ignores / misinterprets view size

    - by BlueZero4
    I noticed that a lot of my folders had suddenly switched to higher view sizes than I had specified. I was assuming that somehow Nautilus had suddenly decided to create per-folder entries for said folders with incorrect view sizes. So I found this question: How to reset all per-folder view settings in nautilus? I found the folder specified in the answer (~/.local/share/gvfs-metadata) and found that it was actually important to delete the files INSIDE the folder, because deleting the folder itself didn't work for some reason. After doing that, I discovered that the odd setting was for the default view settings, not for a handful of files. Nautilus actually handles the per-folder settings like it should, but it ignores the global folder settings. I want Nautilus to, by default, display all non-specified folders as compact view, 50%. My folders are using the compact setting like I want, but they are not down to 50%. At a guess, they are at 100%. Altering the view size of the icon view can set the compact view to 33%, but I'm not sure by what mechanism this functions. I haven't extensively tested the other view sizes because I don't plan on using them much at all. Next I looked up questions like How do I reset nautilus to the default configuration? I'm expecting the problem to be a corrupted config file or something of the sort, so I hunted down directories like ~/.nautilus, ~/.gconf/apps/nautilus, and ~/.gnome2/nautilus. (I don't have a ~/.nautilus directory, so I'm assuming that's only for older versions.) I attempted to remove the contents of each, but I can't seem to force Nautilus back to default configuration settings. Actually viewing Nautilus's preferences in GConf made the settings look like they were what I wanted them to be, which is odd. I'd like to force Nautilus to default settings, basically. Though if something else will fix it, I'll take it too. I'm not interested in doing a full uninstall/reinstall of Nautilus if I don't have to. ==EDIT1== Turns out that Nautilus just writes the settings in GConf for the heck of it. Nautilus only really uses the settings that it stores in DConf. I did gsettings reset-recursively org.gnome.nautilus, which actually did reset Nautilus to default, but it still doesn't like my view size settings.
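
    A hedged sketch of poking the DConf side directly with gsettings; the exact schema and key names vary between Nautilus versions, so listing them first is the safer move:

        # See every key Nautilus actually stores, with current values
        gsettings list-recursively org.gnome.nautilus

        # Likely candidates for the global default (names are assumptions; confirm them in the listing above)
        gsettings set org.gnome.nautilus.preferences default-folder-viewer 'compact-view'
        gsettings set org.gnome.nautilus.compact-view default-zoom-level 'smaller'

        # The reset already mentioned in the edit, for reference: back to defaults for the whole tree
        gsettings reset-recursively org.gnome.nautilus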

    Read the article

  • How do I set a custom resolution in 12.04 with a nVidia video card (9600 GSO)?

    - by Aaron Agarunov
    I have 32 bit 12.04 and my video card drivers are up to date at "304.64" yet my resolution appears capped at 1920x1080. I am trying to get the resolution to 2560x1440 or even higher, as I am running this machine on a 42" LCD through HDMI and the 1920x1080 resolution will not stretch to fit the screen and is therefore fairly zoomed in. The 9600 GSO supports up to 2560x1600, so this should be no problem for the card itself. I have tried using xrandr, which successfully creates the 2560x1440 60 hertz mode but does not allow me to --addmode or --output it in. I have tried working with the xorg.conf, but I actually can not find a way to create the file since when I try to, I am given an error message stating that the # of monitors I have does not match the # of screens I have. Can anyone provide some help or insight?
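
    For reference, a minimal xrandr sequence for adding the mode by hand; the modeline comes from cvt, and the output name (HDMI-0 here) is an assumption, so check the real name in the output of a plain xrandr first. With the NVIDIA binary driver the mode must also pass the driver's mode validation (typically relaxed via an xorg.conf ModeValidation option), and whether the card's HDMI port can drive more than 1920x1080 depends on its HDMI/TMDS limits, so the mode may still be rejected even when defined correctly:

        cvt 2560 1440 60
        xrandr --newmode "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync
        xrandr --addmode HDMI-0 "2560x1440_60.00"
        xrandr --output HDMI-0 --mode "2560x1440_60.00"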

    Read the article

  • Why does installing an NVidia 9600GT graphics card take 1GB of RAM away from Windows?

    - by Nick G
    Hi, I've changed graphics cards in my PC and now Windows 7 (32bit) is reporting that I have a whole gigabyte less physical RAM in my PC. Why is this? Firstly, the machine has 4GB of physical RAM. The old card was an ATI 2600XT with 256MB and the new card is an NVidia 9600GT with 512MB. With the ATI card Windows sees 3326MB. With the NVidia card, Windows sees 2558MB. I realise that due to address space restrictions I will not see all 4GB with 32bit Windows, but why is there such a massive loss of RAM when simply changing cards (bearing in mind BOTH cards have their own RAM and borrow no main memory like some built-in chipsets do)? Would using 64 bit Windows solve this? Thanks Nick.

    Read the article

  • How to activate nVidia cards programmatically on new MacBook Pros for CUDA programming?

    - by Carsten Kuckuk
    The new MacBook Pros come with two graphics adapters: the Intel HD Graphics and the NVIDIA GeForce GT 330M. OS X switches back and forth between them, depending on the workload, detection of an external monitor, or activation of Rosetta. I want to get my feet wet with CUDA programming, and unfortunately the CUDA SDK doesn't seem to take care of this back-and-forth switching. When the Intel adapter is active, no CUDA device gets detected, and when the NVidia card is active, it gets detected. So my current work-around is to use the little tool gfxCardStatus (http://codykrieger.com/gfxCardStatus/) to force the card on or off, just as I need it, but that's not satisfactory. Does anybody here know what the Apple-blessed, Apple-recommended way is to (1) detect the presence of a CUDA-capable card and (2) activate that card when present?
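
    A hedged, detection-only sketch from the command line (activation still needs something like gfxCardStatus or the private frameworks it wraps); the sample path depends on where the CUDA SDK was installed, so treat it as an assumption:

        # Which GPUs are present and which one is currently driving the display
        system_profiler SPDisplaysDataType

        # The SDK's deviceQuery sample reports whether a CUDA device is visible right now
        # (path is an assumption for the 3.x-era "GPU Computing" SDK layout)
        "/Developer/GPU Computing/C/bin/darwin/release/deviceQuery"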

    Read the article

  • What could be causing Windows to fail to sleep or hibernate, and revert system settings?

    - by xdumaine
    I am running Windows 8.1 on my system (Dell e6520) and the symptoms are this: My PC won't sleep or hibernate. The screen turns off, but the fans just run and run, and after a little while it completely shuts down; when I start it back up, everything is closed, like it was a fresh boot/restart. I have a weird issue with Outlook described here and the fix works fine - modifying a registry value. The weirder part is that after my computer fails to sleep or hibernate and it starts back up, the registry value is GONE, like I never modified it, and thus the Outlook error message is back. I thought maybe a graphics driver was preventing the sleep/hibernate, so I attempted to uninstall the NVIDIA graphics driver and control panel. HOWEVER, once the computer fails to sleep/hibernate, the NVIDIA graphics driver and control panel appear BACK on my system, like I didn't uninstall them. What could be happening here? I really need to be able to sleep/hibernate so I don't lose work or my work state, and these issues are really concerning. What I've tried, without success:
    - Uninstalling the graphics driver mentioned above
    - Disabling hibernate and using sleep
    - Disabling/re-enabling hibernate
    - Disabling startup items
    - Sleeping as the local system admin account
    - Disconnecting all USB, network, and Bluetooth devices
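
    A few built-in diagnostics that are usually the first step for sleep/hibernate failures, run from an elevated Command Prompt; a sketch only, the report paths are whatever powercfg prints:

        :: What is currently blocking sleep (drivers, applications, audio streams)
        powercfg /requests

        :: Which sleep states this machine/firmware actually supports
        powercfg /a

        :: Detailed energy/sleep diagnostics report (writes energy-report.html)
        powercfg /energy

        :: What woke the machine the last time it slept
        powercfg /lastwake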

    Read the article

  • Windows 7 "Change internet time settings" tells me I have no permissions.

    - by Matthias Vance
    LS, While trying to solve the problem of my computer clock always running ahead (even when on, not just on every boot), I apparently broke some security settings. All I did (as far as I can remember) was stop and start the w32time service. Now, whenever I go to the "Internet time" tab and click "Change settings...", Windows tells me I don't have permissions to do so. Facts:
    - I am a member of the Administrators group.
    - In gpedit.msc, I checked that the Administrators group is allowed to change the system time.
    Kind regards, Matthias Vance
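
    As a hedged workaround while the permissions issue is being sorted out: the Internet Time tab is essentially a front end for the Windows Time service, so the same configuration and resync can be driven from an elevated Command Prompt (the peer name is just an example):

        :: Current sync source and status
        w32tm /query /status

        :: Point the service at an NTP peer and resync
        w32tm /config /syncfromflags:manual /manualpeerlist:"time.windows.com" /update
        w32tm /resync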

    Read the article

  • How to transfer files and settings from Windows 7 x64 to Windows 2008 R2?

    - by Mohamed Meligy
    If I want to re-install Windows 7 Enterprise 64 bit (or any other edition of Windows 7), I'd typically use the "Windows Easy Transfer" utility built into the OS to backup and restore my files and settings. But in my case, I'm migrating to Windows Server 2008 R2. If I remember correctly (having worked on both Windows 2008/2008 R2 before), "Windows Easy Transfer" is NOT installed on Windows Server, and it doesn't even understand the format of the backup file it generates (".MIG" file). I can't remember for sure whether this is true, is it? And if it's true, what is the alternative for transferring the files and, more importantly, program settings to Windows 2008 R2? Of course I'm aware of the "manual" option and that automatic transfer surely will not transfer everything. Options?
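
    One commonly suggested alternative is the User State Migration Tool (USMT), which is scriptable and not tied to the Easy Transfer UI; whether the USMT version at hand officially supports Server 2008 R2 as a destination is worth verifying first, so treat this as a sketch only (the share path is made up):

        :: On the Windows 7 machine: capture files and settings to a store
        scanstate.exe \\server\migstore /i:MigApp.xml /i:MigDocs.xml /o /c /v:13

        :: On the Windows Server 2008 R2 machine: restore them from the same store
        loadstate.exe \\server\migstore /i:MigApp.xml /i:MigDocs.xml /c /v:13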

    Read the article

  • How can I recover Google Chrome extensions, settings after a Linux crash?

    - by Norman Ramsey
    I'm running Google Chrome 5.0.307.11 (Official Build 39572) beta on Debian Linux (lenny) kernel version 2.6.26-2-686. The machine is a laptop (Lenovo Thinkpad X300) and sometimes it freezes, usually shortly after wakeup from sleep. The only cure is the power button, but when I restart my Google Chrome web browser after such an event, Settings on the Options menu revert to defaults. Chrome removes all the extensions from ~/.config/google-chrome/Default/Extensions, leaving me with no extensions. The set of "pages last open" is lost. I'd love to know how to poke at the file system in order to recover any or all of this information, especially my extensions. It is a pain to re-do everything by hand each time. How can I recover Google Chrome's extensions and settings after a Linux crash?
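
    Recovering after the fact is hit-or-miss, so a hedged preventive sketch: snapshot the whole profile directory while Chrome is closed, and the extensions, preferences and session files can be put back wholesale after the next freeze (the archive name is arbitrary):

        # Back up the entire Chrome profile (run with the browser closed)
        tar czf chrome-profile-$(date +%F).tar.gz -C ~/.config google-chrome

        # Restore it later, again with the browser closed
        rm -rf ~/.config/google-chrome
        tar xzf chrome-profile-2010-03-01.tar.gz -C ~/.config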

    Read the article

  • django subdomains apache config

    - by bocca
    Trying to set up django to accept subdomains and use a different settings file based on the subdomain. DNS is set up with wildcards. Apache mod_vhost_alias sounded like a good option, but it did not work:

        SetEnv DJANGO_SETTINGS_MODULE site.settings.%1

    gets this error:

        ImportError: Could not import settings 'site.settings.%1'

    - Can mod_vhost_alias be made to work somehow?
    - Can the rewrite module be used to pass the subdomain along to django settings?
    - Is there any other way to pick up different settings files based on subdomain?

    Read the article

  • Need to hookup HP dv7-3085dx with Nvidia GeForce GT 230M to my Dell 30 inch LCD 3007WFP at max resolution

    - by user14660
    I recently bought an HP laptop (dv7-3085dx) (http://reviews.cnet.com/laptops/hp-pavilion-dv7-3085dx/4505-3121_7-33776108.html) which is supposed to have a pretty good video card (NVIDIA GeForce GT 230M). The card is supposed to output a max resolution of 2560x1600, which is also the max resolution of my monitor (http://www.ubergizmo.com/15/archives/2006/02/dell_3007wfp_on_dell_2001fp_action_8_megapixel_desktop.html). Now I bought an HDMI to dual-link DVI cable (http://www.amazon.com/gp/product/B002KKLYDK/ref=oss_product)... this is after Best Buy's 70 dollar HDMI to DVI (perhaps it was 'single' link?) didn't give me the best resolution. In Windows 7, when I try to set the max resolution for my 30 in monitor, I only get 1280x800... which is absurd. The monitor is great, I love the laptop, and the video card supposedly supports such resolutions. So I can't figure out why I'm not getting better resolution (by the way, when I "detect" my monitor in Windows 7, it is shown correctly as DELL 3007WFP!).

    Read the article

  • How to prevent wifi router from broadcasting multiple ssids?

    - by user1092288
    I have the following Wi-Fi router: http://www.beetel.in/business-solutions/international-business/adsl/450-four-port-wifi-modem and I am also posting screenshots of the ISP settings (from 192.168.1.1). The problem is, my Wi-Fi router is broadcasting multiple SSIDs (even SSIDs which I used in the past and aren't entered in the settings at present). How do I resolve this to broadcast a single SSID? I have already tried a factory settings restore. (Screenshots: SSID1 settings; SSID2 settings; SSID2 settings greyed out, unable to change anything.)

    Read the article

  • Why can't I connect with a second computer on the same LAN with the same settings?

    - by user930450
    I'm trying to connect to a WLAN with a notebook. The notebook works fine with other WLANs. It can authenticate and the signal is "very good", but it says "can't access internet". (On Windows it's the small yellow exclamation mark on the signal.) With another computer in exactly the same location, with the same settings, it's possible to connect. Both are configured to get an IP dynamically. One difference is that the other computer is using "Ralink wireless" instead of the normal Windows client to connect. But does this make a difference? The settings are the same. What could be the reason?
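
    When Windows reports limited connectivity despite a good signal, the usual sequence is to check whether DHCP handed out an address, then whether the gateway and DNS respond; a hedged sketch from a Command Prompt (the gateway address is just an example, take the real one from ipconfig):

        ipconfig /all
        ping 192.168.1.1
        ping 8.8.8.8
        nslookup google.com

    If the gateway pings but 8.8.8.8 does not, the router/ISP side is suspect; if 8.8.8.8 works but nslookup fails, it is a DNS setting.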

    Read the article

  • What settings to use for fastest reloading of a MySQL backup?

    - by Alex R
    I have a MySQL database which dumps to a 3.5 GB backup (mysqldump) in about 10 minutes. But reloading this backup on a standby / test server takes upwards of 12 hours. What are some settings that would maximize reloading performance? The most promising appear to be innodb_buffer_pool_size, innodb_additional_mem_pool_size, and innodb_log_buffer_size... but I'm reaching the limits of my trial-and-error approach. Which of these settings "should" be the most important? Through trial-and-error I was not able to get more than 70% CPU utilization and 63% memory utilization. I'd like both at 100% during a reload. All tables are InnoDB.
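
    Beyond sizing innodb_buffer_pool_size and innodb_log_buffer_size, the biggest wins for a restore usually come from deferring per-row work during the import. A hedged sketch (database and file names are examples; appropriate when the dump contains only InnoDB tables):

        # Disable autocommit, unique checks and FK checks for the duration of the load,
        # then commit once at the end
        ( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
          cat backup.sql
          echo "COMMIT;" ) | mysql -u root -p testdb

    Setting innodb_flush_log_at_trx_commit=2 (or 0) in my.cnf for the duration of the reload is another commonly used speedup, at the cost of crash safety while it is in effect.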

    Read the article

  • Best FFDShow settings for upscaling SD content to 1080p HD?

    - by Richard
    Hi there, I'm running Windows Media Center 7 with ffdshow-tryouts for the decoding of many of the popular video formats. It works great. I've now upgraded my television from SD to 1080p HD and, naturally, I've still got a large number of existing MP4/XviD/DivX items of content which is in SD. I'd like, therefore, to modify the settings of ffdshow so that they are upscaled to 1080p as best as possible. I appreciate that they won't be as good as their HD equivalent - but on the flip side, I'm pretty certain I can do more than just resizing the picture to get the best possible output. Can anyone recommend the best settings in ffdshow to do this? For example, should I apply a sharpen mask? Or Noise Reduction? Or Deinterlace? Alternatively, would it be better to keep them at their current resolution and let the TV (Samsung Series 5 LE32C580) do the upscaling? Thanks.

    Read the article

  • How to make VIM settings computer-dependent in .vimrc?

    - by Paperflyer
    I share my VIM configuration file between several computers. However, I want some settings to be specific for certain computers. For example, font sizes on the high-res laptop should be different to the low-res desktop. And more importantly, I want gVIM on Windows to behave more windowsy and MacVim on OSX to behave more maccy and gVIM on Linux to just behave like it always does. (That might be a strange sentiment, but I am very used to switching mental modes when switching OSes.) Is there a way to make a few settings in the .vimrc machine- or OS-dependent?

    Read the article

  • How can I let my users set PHP.ini settings for wordpress?

    - by jldugger
    I set up a wordpress server from a fairly standard Ubuntu 9.10 for a class and they're constantly running into problems with the default php.ini settings. First the memory settings were too low, then the file upload limits were too small, etc. More concerning was a WordPress-wide blank page that I suspect was the process being killed for RAM consumption, but turning on PHP errors in php.ini didn't reveal anything! I'm not familiar with shared hosting, but I feel there's a way such places allow users to edit such things without needing me to intervene and restart Apache.

    Read the article

  • Connecting Dell XPS 17/HP Pavilion dv7 (nvidia GT 650M) to Apple LED Cinema Display for 2560x1440 resolution from notebook

    - by alphaTrader
    Is there any way to run higher than 1080p resolution from a Dell XPS 15/17 or HP Pavilion laptop? Specifically, I am planning to buy a Dell XPS L721X or HP Pavilion dv7-7005tx with nvidia GeForce GT 650M and connect it to the Apple LED Cinema Display via Mini DisplayPort (I don't think Thunderbolt is supported on these notebooks). The idea is to get 1080p on the notebook and 2560x1440 on the main display. Only one of the monitors is active at any time. I asked Dell and they weren't of much help.

    Read the article

  • How to get OS X Lion to save modifier key settings (i.e. swap Ctrl and Cmd)

    - by Huliax
    I use Lion at work with an MS Natural Ergonomic Keyboard 4000. Every single time I log in I have to go into settings and swap the command and control keys. This is really annoying. Is there a way to get these settings to stick? Beyond that, I'd also like to remap a few other keys and I'm interested in tools for doing that. I think I need to work out the first issue first though. Thanks for any help.

    Read the article

  • Jump handling and gravity

    - by sprawl
    I'm new to game development and am looking for some help on improving my jump handling for a simple side scrolling game I've made. I would like to make the jump last longer if the key is held down for the full length of the jump; otherwise, if the key is tapped, make the jump not as long. Currently, how I'm handling the jumping is the following:

        Player.prototype.jump = function () {
            // Player pressed jump key
            if (this.isJumping === true) {
                // Set sprite to jump state
                this.settings.slice = 250;
                if (this.isFalling === true) {
                    // Player let go of jump key, increase rate of fall
                    this.settings.y -= this.velocity;
                    this.velocity -= this.settings.gravity * 2;
                } else {
                    // Player is holding down jump key
                    this.settings.y -= this.velocity;
                    this.velocity -= this.settings.gravity;
                }
            }
            if (this.settings.y >= 240) {
                // Player is on the ground
                this.isJumping = false;
                this.isFalling = false;
                this.velocity = this.settings.maxVelocity;
                this.settings.y = 240;
            }
        }

    I'm setting isJumping on keydown and isFalling on keyup. While it works okay for simple use, I'm looking for a better way to handle jumping and gravity. It's a bit buggy if the gravity is increased on keyup (which is why I had to put the last y setting in the last if condition in there), so I'd like to know a better way to do it. Where are some resources I could look at that would help me better understand how to handle jumping and gravity? What's a better approach to handling this? Like I said, I'm new to game development so I could be doing it completely wrong. Any help would be appreciated.

    Read the article

  • Project.Properties.Settings versus plain old appSettings?

    - by BryanG
    I have an assembly built that uses appSettings in the app.config... pretty straightforward. However, I'm referencing this assembly in a web service, and that web service contains the nAnt build file for this service plus being the entry point for everything. Ideally I'd like to be able to set the assembly's appConfig values from the build file, but is this possible? Or do I have to switch to using the Settings values of the assembly and do something like this in the build:

        <xmlpoke file="${PublishLocation}\web.config"
                 xpath="//applicationSettings/Namespace.AssemblyClass.Properties.Settings/setting[@name='ExchangeServer']/value"
                 value="${ServerName}" />

    You get the idea. Is this possible with just a config? My ideal situation would be to keep the settings more flexible in the appConfig so that when everything is on the server, if frogs rain down, I can update the assembly's config values without rebuilding the solution. Is this even possible (the xpath is wrong, it's just an example of what I'd like to do):

        <xmlpoke file="${PublishLocation}\web.config"
                 xpath="//appSettings/Namespace/AssemblyClass/add[@key = 'ExchangeServer']/@value"
                 value="${a}" />

    Read the article

  • Visual Studio - How to use an existing vsproj's project settings as a template for a new project?

    - by Jakobud
    There is some software I want to write a plugin for. The software includes some sample plugins. I want to create a fresh new project, but I want to use one of the sample plugin vsproj's project settings as a template. It doesn't seem very clear how to do this. If I do "New Project From Existing Code", that only imports the cpp, h, etc. files into the new project. Right now the only way I can see to copy a sample project's settings is to open two instances of VS2005 next to each other and simply mimic the settings... Surely there is a built-in method of doing this?

    Read the article

  • How to temporarily change all default user settings without destroying the original?

    - by mystify
    My app is based strongly on a lot of NSUserDefaults keys and values. I want to implement a temporary defaults profile which the user can activate to get a special task done easily. For this, some of the user defaults must be changed temporarily so the app adjusts its interface appropriately. I started to just manually change those NSUserDefaults settings, but this also destroys the user's original settings. Is it possible to keep a backup of the user's NSUserDefaults settings and restore them after the user quits the temporary mode or the app? As I see it, NSUserDefaults is actually just an NSMutableDictionary which is generated out of a plist file. So I would just make a deep copy of that and later assign that copy somehow back to NSUserDefaults?

    Read the article
