Search Results

Search found 30087 results on 1204 pages for 'default package'.

Page 322 of 1204

  • How to Set "<options>" in fstab - Manual Mounting is Successful

    - by nicorellius
    I am not really familiar with fstab yet, so I have a couple of questions. When I use this command to mount devices: sudo mount -t cifs -o username=admin //192.168.1.134/share_name /mnt/share_name (sudo asks for my local user password, and I am then prompted for the password for the device.) How do I translate that into an fstab entry? So far, I have tried this in fstab and mounting fails: //192.168.1.134/share_name /mnt/share_name cifs default 0 0 This is where the question comes in. Where I have default, should there be something else instead, indicating username=admin, etc.?
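
    As a rough illustration, a cifs line in /etc/fstab that carries the credentials might look like the sketch below; the credentials file path and extra options are assumptions rather than something from the question:

        //192.168.1.134/share_name  /mnt/share_name  cifs  credentials=/etc/cifs-credentials,iocharset=utf8  0  0

    where /etc/cifs-credentials is a root-readable file containing two lines, username=admin and password=<password for device>. Note also that the standard keyword is defaults (with an s), not default, and on its own it supplies no username.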

    Read the article

  • Implementing Foreach Looping Logic in SSIS

    With SSIS, it is possible to implement looping logic into SSIS's control flow in order to define a repeating workflow in a package for each member of a collection of objects. Rob Sheldon explains how to use this valuable feature of SSIS.

    Read the article

  • Launcher icons are invisible after upgrade from 11.10 to 12.04

    - by Clo Knibbe
    I am re-purposing an old laptop. I installed 11.10 on it and then immediately upgraded to 12.04. (I could not directly install 12.04 as my system does not support PAE.) When my system was (briefly) 11.10, the desktop appeared as expected. However, after the upgrade to 12.04, the icons in the launcher area are invisible. If I hover over the spot where the icon should be, the little popup window showing the tool's name appears, and I can click to invoke the tool. I just cannot see the icons. (A screenshot of the launcher with invisible icons accompanied the original post.) The icons do appear as expected in other contexts, for example in the Home folder and in Dash Home. My theme is "Ambiance (default)". I do not have a ~/.icons folder. This is the top level contents of /usr/share/icons: default DMZ-Black DMZ-White gnome handhelds hicolor HighContrast HighContrastInverse Humanity Humanity-Dark locolor LoginIcons LowContrast redglass ubuntu-mono-dark ubuntu-mono-light unity-icon-theme whiteglass (Sorry for the poor formatting, can't get it to show as a list.) I suspect that the launcher isn't looking for the icons in the right place, but I don't know how to confirm that, or how to correct it. This is my first foray into Linux, although I used to use Unix a few decades ago. This doesn't look much like my old Sun workstation, though! Does anyone have any suggestions or insights for me? Thanks.
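
    One low-risk check (a sketch only; the theme name below is just an example, and this assumes the launcher follows the GNOME interface settings) is to confirm and, if needed, reset the icon theme with gsettings:

        gsettings get org.gnome.desktop.interface icon-theme
        gsettings set org.gnome.desktop.interface icon-theme 'ubuntu-mono-dark'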

    Read the article

  • Network interface selection

    - by Antonino
    Hello. Suppose I have more than one network interface and I want to selectively use them per application. eth0 is the standard interface with the standard gateway in the main routing table; eth1 is another interface with a different gateway. Suppose I launch an application as a user "user_eth1". I used the following set of rules for iptables / ip rules. IPTABLES: iptables -t mangle -A OUTPUT -m user --uid-owner user_eth1 -j MARK --set-mark 100 iptables -t nat -A POSTROUTING -m user -uid-owner -o eth1 user_eth1 -j SNAT --to-source <eth_ipaddress> IPRULE: ip rule add fwmark 100 lookup table100 and I build "table100" as follows (no doubts on that): ip route show table main | grep -Ev ^default | while read ROUTE; do ip route add table table100 $ROUTE; done ip route add default via <default_gateway> table table100 It doesn't work at all. What's wrong with this? Thank you in advance!
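
    For comparison, a minimal sketch of how these rules are usually spelled (the mark value and table name are the ones from the question; the interface address is a placeholder). The main differences are that the owner match module is normally invoked as -m owner rather than -m user, and that the SNAT rule's arguments must stay in order:

        iptables -t mangle -A OUTPUT -m owner --uid-owner user_eth1 -j MARK --set-mark 100
        iptables -t nat -A POSTROUTING -o eth1 -m owner --uid-owner user_eth1 -j SNAT --to-source <eth1_ipaddress>
        ip rule add fwmark 100 table table100

    Whether this alone is enough also depends on table100 being declared in /etc/iproute2/rt_tables and on rp_filter not dropping the marked replies, so treat it as a starting point rather than a verified fix.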

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how we modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern; that is, using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe; recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns: I create an index for each month By default, searches are aliased against the most recent X months If no results are found, we can search against previous X months As we grow, we can reduce the interval X Each document is routed by the user ID So, this should let us do the following: search by user. This will search all indices across 1 shard search by time. This will search ~2 indices (by default) across all shards Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros and cons of the above scenario.
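
    To make the combination concrete, a sketch of what indexing and querying with both patterns might look like (index names, field names, and the user ID are invented, and the exact request syntax depends on the ElasticSearch version in use):

        # index into the current monthly index, routed by user
        curl -XPUT 'http://localhost:9200/docs-2014-01/doc/1?routing=alice' -d '{"user":"alice","body":"quarterly report"}'

        # search only Alice's shard across the two most recent monthly indices
        curl -XPOST 'http://localhost:9200/docs-2014-01,docs-2013-12/_search?routing=alice' -d '{"query":{"match":{"body":"report"}}}'

    A search across all users simply omits the routing parameter and therefore fans out to every shard of the targeted indices.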

    Read the article

  • What packages do I install for ffmpeg and libmp3lame?

    - by matt wilkie
    On my desktop running Ubuntu 10.04 I use ffmpeg to convert videos to a format my DVD player understands. On my laptop running 10.10 I can't get the same command to work; what package(s) are missing? ffmpeg -i infile.flv \ -threads 2 -vcodec mpeg4 -vtag divx -acodec libmp3lame -sameq \ outfile.avi #...snip Unknown encoder 'libmp3lame' Using apt-cache search libmp3lame I'm told there exist libmp3lame0 and libmp3lame-dev, both of which I've installed. Using -acodec libmp3lame0 doesn't work either.
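
    For what it's worth, libmp3lame0 only ships the LAME library itself; whether ffmpeg can use it depends on how the ffmpeg/libavcodec packages were built. A quick way to check, and to look for a rebuilt codec package, is sketched below (the libavcodec-extra-* name is an assumption, since the exact package name varies between Ubuntu releases):

        ffmpeg -codecs 2>/dev/null | grep -i mp3     # an encoder line mentioning libmp3lame means the build supports it
        apt-cache search libavcodec-extra            # look for a libavcodec build with the extra encoders enabled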

    Read the article

  • htaccess order Deny,Allow rule

    - by aspiringCodeArtisan
    I'd like to dynamically add IPs to a block list via htaccess. I was hoping someone could tell me if the following will work in my case (I'm unsure how to test via localhost). My .htaccess file will have the following by default: order allow,deny allow from all IPs will be dynamically appended: Order Deny,Allow Allow from all Deny from 192.168.30.1 The way I understand this is that it is by default allow all with the optional list of deny rules. If I'm not mistaken Order Deny,Allow will look at the Deny list first, is this correct? And does the Allow from all rule need to be at the end?
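
    As a point of reference, under Apache 2.2's mod_authz_host the Deny and Allow groups are evaluated in the order named by the Order directive, not in the order the lines appear in the file, and the group named last wins for addresses matched by both. So for an "allow everyone except a blacklist" policy, a sketch like the following (the address is just an example) is the usual form, because with Order Deny,Allow an Allow from all would override every Deny entry:

        Order Allow,Deny
        Allow from all
        Deny from 192.168.30.1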

    Read the article

  • What was the typical price range for a development kit?

    - by Vaughan Hilts
    A lot of these things are usually behind NDAs and tricky to gauge. Is there any information on how much these things cost, and what the typical ranges were? For example, the PS1 had the "Net Yaroze", which according to Wikipedia was: "For about $750 USD, the Net Yaroze (DTL-H300x) package would contain a special black-colored debugging PlayStation unit with documentation, software, and no regional lockout." So, what were the prices of some game development kits? P.S.: This might make a good community wiki.

    Read the article

  • How can I disable recent documents in Unity?

    - by detly
    How do I disable the tracking and display of recently opened files (and whatever else is remembered) in a default installation of Ubuntu 11.10? (Note that this is not a duplicate of How can I keep recent files from appearing in Unity?, since that question and its answers are concerned with temporary and specific filtering. I want to disable it completely for a single user account.) Okay, to deflect the inevitable and expand on my motivation... While trawling the usual forums and Google results for a solution, it (unsurprisingly) seems that the near-universal use cases for this request are either browsing porn or Warhammer research. And the obvious solution to this is to create another user account to contain all evidence. However, this is not why I'm asking, and I don't say that to get all high and mighty about it, it's because this answer won't help. (Even though I really don't have any interest in Warhammer, and I have no idea how that paint pot and brush ended up in my drawer, no that's not glue on my thumb, etc.) My actual use case is that I use my personal laptop for presentations in different circles of my life. I have a user account set up with all the settings I like for presentations (shortcuts, small launcher, default associations, etc). But I don't want an accidental keystroke (or the find dialog) to display other recent presentations I've given, or the files I used in composing the presentation, or whatever. I also don't want to have to recreate this profile for every single presentation I might give. I just want a nice little isolated, memoryless, clean corner of my notebook for public display.
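
    In case it helps frame answers: in 11.10 the Dash's recent-files list is fed by the Zeitgeist activity log, so one direction (a sketch only; it assumes wiping this account's history is acceptable, and the daemon can be restarted on demand by applications that use it) is:

        zeitgeist-daemon --quit                            # stop the logging daemon for the current session
        rm -f ~/.local/share/zeitgeist/activity.sqlite     # discard the history recorded so far

    The Activity Log Manager utility, where available, exposes similar controls (including disabling logging entirely) through a GUI.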

    Read the article

  • Why do I see the following errors while updating in the terminal?

    - by Harsh
    Fetched 1,103 kB in 1min 2s (17.6 kB/s) Reading package lists... Done W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release W: Some index files failed to download. They have been ignored, or old ones used instead.
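
    A frequently suggested remedy for this kind of BADSIG failure is to throw away the cached index files and download them again, on the assumption that the mirror simply served a stale or corrupt Release file (a sketch; it does not touch the keyring itself):

        sudo apt-get clean
        sudo rm -rf /var/lib/apt/lists/*
        sudo mkdir -p /var/lib/apt/lists/partial
        sudo apt-get update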

    Read the article

  • How do you run XBMC on nvidia dual screen and stop it from taking over the keyboard and mouse?

    - by Paul Swartout
    I have set up dual screen under Ubuntu 12.04. I have a GeForce 8500 GT and have used the nVidia control panel to set up dual screen in "Separate screen mode". Here's the resulting xorg.conf # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:38:49 UTC 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "Maxdata/Belinea B1925S1W" HorizSync 31.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: builtin, VertRefresh source: builtin Identifier "Monitor1" VendorName "Unknown" ModelName "CRT-1" HorizSync 28.0 - 55.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce 8500 GT" BusID "PCI:1:0:0" Screen 0 EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce 8500 GT" BusID "PCI:1:0:0" Screen 1 EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "CRT-0: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" # Removed Option "metamodes" "CRT-1: 1280x768 +0+0" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "CRT-1: 1360x768_60 +0+0" SubSection "Display" Depth 24 EndSubSection EndSection All well and good and I have a nice blank XWindow displayed on my TV (the 2nd monitor). I then fire up XBMC from a terminal on the PC monitor using DISPLAY=:0.1 xbmc XBMC fires up quite nicely on the TV however I can no longer use the main PC monitor / mouse / keyboard as XBMC on the TV screen seems to have focus. I was hoping to have XBMC running on the TV and let the kids use the MCE remote whilst I get on with my work on the PC monitor. Does anyone have any idea how to overcome this? I'm presuming there's some xorg.conf fun and games needed but I've no idea where to start to be honest.

    Read the article

  • Exchange 2007 Email Address Policies

    - by Ryan Migita
    We have recently upgraded to Exchange 2007 (from 2003) and have noticed the change from recipient policies to email address policies. We have two separate domains (let's call them domaina.com and domainb.com) that we receive email for; each has an email address policy, and neither policy is currently applied. In our Exchange 2003 environment, domaina.com was the default email address when we created new mailboxes, and due to the migration, domainb is now the default (and its email address policy has a higher priority). Now, when we create a new mailbox (or edit existing ones), the primary email address becomes domainb.com. The question is, is this as simple as putting the email address policies in the correct order? Do I have to apply both policies? What effect will the above changes have on existing mailboxes? Since we do not have any conditions set on the policies, I assume that prior to making these changes I should force all domainb mailboxes not to automatically update their email addresses based on policy? Thanks in advance!
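
    For illustration, the pieces involved can all be driven from the Exchange Management Shell; the sketch below uses made-up policy names and is only meant to show which cmdlets come into play (reordering a policy, preventing selected mailboxes from auto-updating, and then applying the policy):

        Set-EmailAddressPolicy -Identity "domaina.com policy" -Priority 1
        Get-Mailbox -ResultSize Unlimited | Where-Object { $_.PrimarySmtpAddress -like "*@domainb.com" } |
            Set-Mailbox -EmailAddressPolicyEnabled $false
        Update-EmailAddressPolicy -Identity "domaina.com policy"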

    Read the article

  • Where are YouTube video files stored on the system nowadays?

    - by souravc
    When I open a YouTube video with Firefox I cannot find any video file inside ~/.mozilla/firefox/<some_string>.default/Cache/. I also tried with google-chrome, like ps ax | grep flash and ls -l /proc/[*PID*]/fd | grep Flash, but again /proc/[*PID*]/fd does not have any video-like file. ls -l /proc/[*PID*]/fd has some results like lrwx------ 1 root root 64 Nov 9 12:18 22 -> /run/shm/.com.google.Chrome.eOsHNu (deleted) lrwx------ 1 root root 64 Nov 9 12:18 23 -> /run/shm/.com.google.Chrome.p8h6BL (deleted) The result of ls -l /proc/[*PID*]/fd | grep Flash for some videos from other sites looks like lrwx------ 1 root root 64 Nov 9 12:35 26 -> /home/username/.config/google-chrome/Default/Pepper Data/Shockwave Flash/.com.google.Chrome.QMzxP8 (deleted) but it could not be copied. So where do Firefox or google-chrome store the streaming videos? And is it possible to recover (copy) a video from there to watch it offline? P.S. I have other ways (downloaders) to get streaming videos, but my question is very specific.
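
    For context, the "(deleted)" entries are open file descriptors whose directory entries have been removed; as long as the browser keeps the descriptor open, the data can often still be copied straight out of /proc. A sketch (the PID and descriptor number are examples taken from output like the above):

        cp /proc/1234/fd/26 ~/recovered-video.flv

    Whether the copy contains a playable video depends on how the plugin buffers the stream, which is presumably why the Pepper Flash file above refused to copy cleanly.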

    Read the article

  • Force startx to run X in a specific resolution and refresh rate

    - by Z9iT
    From my past experience (using Windows XP), this particular monitor works only at 60 Hz, the best resolution being 1024x768. I have "installed and configured" Ubuntu 12.04 Minimal (on a USB stick) so that most of the time the terminal is used; however, whenever there is a need to enter the GUI, I issue the startx command to go into GNOME. The problem is that on this particular system, issuing this command causes trouble because the default refresh rate won't synchronize with the monitor. The display keeps flickering and is utterly unreadable. It is apparent that GNOME has loaded, and the default wallpaper and desktop items are visible, but the refresh rate is different from 60 Hz. I am looking for an argument to the startx command which will force the refresh rate to 60 Hz and the resolution preferably to 1024x768. I can open terminals with Ctrl+Alt+T and enter commands. Key combinations like Ctrl+Alt+NumPlus work flawlessly in distributions like Solaris, but they are not working for me. Also, commands like xrandr -r 60 (60 being the refresh rate) won't work. The same problem occurs even when I boot from a live CD.
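
    Since startx itself has no resolution switch, one common workaround is to set the mode from ~/.xinitrc before the session starts; a sketch, where the output name VGA1 is only an example and should be replaced with whatever a plain xrandr run reports:

        xrandr --output VGA1 --mode 1024x768 --rate 60
        exec gnome-session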

    Read the article

  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs. One is starting to fail so I brought up a new one. All are running Win 2003. Problem: there appear to be replication issues between the 4 machines but I can't figure out what's causing this. All are registered with the DNS as identically as I can make them. How do I know there is a problem? Nagios is telling me that the other 3 DCs are having KCCEvent errors and the new machine is reporting "failed connectivity" errors. Doing dcdiag on the new machine reports: the host could not be resolved to an IP address. This seems crazy as I log into it using the DNS name. I can ping it from the other three machines using this DNS name as well. repadmin /showreps from the new machine says its seeing the other 3 machines. Doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times. No luck. There are no firewalls running on any of the machines. If I look at Domain info via MMC (on the new machine) it appears that all the information is current. Users, computers, DCs.. its all there. Im puzzled as to what step(s) I've missed in adding this new machine. Suggestions? EDIT: dcdiag from non-working: C:\Documents and Settings\Administrator.BME>dcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\YELLOW Starting test: Connectivity The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc Although the Guid DNS name (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu) couldn't be resolved, the server name (yellow.server.edu) resolved to the IP address (10.127.24.79) and was pingable. Check that the IP address is registered correctly with the DNS server. ......................... YELLOW failed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\YELLOW Skipping all tests, because server YELLOW is not responding to directory service requests Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : bme Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom Running enterprise tests on : server.edu Starting test: Intersite ......................... server.edu passed test Intersite Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck dcdiag from working: P:\>dcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\AD1 Starting test: Connectivity ......................... AD1 passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\AD1 Starting test: Replications ......................... AD1 passed test Replications Starting test: NCSecDesc ......................... AD1 passed test NCSecDesc Starting test: NetLogons ......................... 
AD1 passed test NetLogons Starting test: Advertising ......................... AD1 passed test Advertising Starting test: KnowsOfRoleHolders ......................... AD1 passed test KnowsOfRoleHolders Starting test: RidManager ......................... AD1 passed test RidManager Starting test: MachineAccount ......................... AD1 passed test MachineAccount Starting test: Services ......................... AD1 passed test Services Starting test: ObjectsReplicated ......................... AD1 passed test ObjectsReplicated Starting test: frssysvol ......................... AD1 passed test frssysvol Starting test: frsevent ......................... AD1 passed test frsevent Starting test: kccevent ......................... AD1 passed test kccevent Starting test: systemlog ......................... AD1 passed test systemlog Starting test: VerifyReferences ......................... AD1 passed test VerifyReferences Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : bme Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom Running enterprise tests on : server.edu Starting test: Intersite ......................... server.edu passed test Intersite Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck P:\>
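
    Given that the failing piece is the GUID CNAME alias under _msdcs, a typical first round of checks (a sketch; it assumes the zone allows dynamic updates and that all DCs point at the same internal DNS servers) is run on the new DC:

        ipconfig /flushdns
        ipconfig /registerdns
        net stop netlogon && net start netlogon
        dcdiag /test:connectivity

    Restarting Netlogon re-registers the DC's SRV and GUID CNAME records; if the alias still does not appear in the _msdcs zone afterwards, the new DC's DNS client configuration is the next thing to compare against the working machines.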

    Read the article

  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream server { listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default ipv6only=on; ## listen for ipv6 server_name .site.com; root /var/www/site; error_page 404 /404.php; access_log /var/log/nginx/site.access.log; index index.html index.php; if ($http_host != "www.site.com") { rewrite ^ http://www.site.com$request_uri permanent; } location ~* \.php$ { fastcgi_index index.php; fastcgi_pass 127.0.0.1:9000; #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_read_timeout 240; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; } location ~ /\. { access_log off; log_not_found off; deny all; } location ~ /(libraries|setup/frames|setup/libs) { deny all; return 404; } location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ { alias /var/www/site/images/missing.gif; #i need to modify this to show only missing files. right now it is showing missing for all the files. } location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { access_log off; expires 20d; } location /user_uploads/ { location ~ .*\.(php)?$ { deny all; } } location ~ /\.ht { deny all; } } php-fpm config is default and is not touched. The problem is little strange for me. Error pages are showing File not found only if they are .php files. Other error files are clearly calling the 404.php file site.com/test = calls 404.php site.com/test.php = File not found. I am searching and making changes. but it hasn't solved the problem.
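
    The "Primary script unknown" / "File not found" pair usually means nginx proxied a non-existent .php path to PHP-FPM, so PHP-FPM, not nginx, generates the error and the error_page 404 directive never fires. A commonly used guard is to let nginx check for the file first; a sketch of the php location with that change (everything else as in the config above):

        location ~* \.php$ {
            try_files $uri =404;        # return nginx's own 404 (and thus /404.php) for missing scripts
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:9000;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }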

    Read the article

  • Solaris 11 LKSF

    - by nospam(at)example.com (Joerg Moellenkamp)
    After some discussions I have now made up my mind about it: in the coming weeks you will see many republications of old articles in the blog, as I will republish all articles in the LKSF, checked and updated for Solaris 11 (some OpenSolaris-based material in the LKSF works slightly differently, even if it's just a matter of different package names). This will take time, as I will do it on weekends and evenings. At the end I will collect the articles again and create a Solaris LKSF PDF.

    Read the article

  • How do I fix the HDMI/DVI display output with Intel HD 4000 Graphics in 12.04?

    - by YumYumYum
    I have an Alienware Dell PC with Intel HD 4000 Graphics (Ivy Bridge), as verified by the output of lspci | grep VGA posted below. 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) The PC only has HDMI and DVI display outputs, and using the HDMI output I am only being offered abnormal resolutions. As you can see below, it does not even list HDMI1 or DVI1, just a fallback. $ export DISPLAY=:0.0 && xrandr xrandr: Failed to get size of gamma for output default Screen 0: minimum 640 x 480, current 1360 x 768, maximum 1360 x 768 default connected 1360x768+0+0 0mm x 0mm 1360x768 0.0* 1024x768 0.0 800x600 0.0 640x480 0.0 How can I fix this? Does it just need to be configured differently, or will I need to use a newer kernel (as Intel graphics drivers are included in the kernel)? Follow-up: I upgraded the kernel to the latest mainline build. Step 1: Go to http://kernel.ubuntu.com/~kernel-ppa/mainline/ and then to the latest entry, http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/. Download: http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3_3.6.0-030600rc3.201208221735_all.deb http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-extra-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb Step 2: sudo dpkg -i linux*.deb Step 3: reboot, which shows that I have Ubuntu 12.04 with the latest kernel: $ uname -a Linux sun-Alienware-X51 3.6.0-030600rc3-generic #201208221735 SMP Wed Aug 22 21:36:32 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux But the same problem remains.
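
    One thing the xrandr output above already hints at: an output called "default" with only a short fixed list of modes usually means the X server fell back to a generic framebuffer driver instead of the Intel KMS driver. A quick check (a sketch) is whether i915 is loaded and which driver Xorg actually picked:

        lsmod | grep i915
        grep -iE 'intel|fbdev|vesa' /var/log/Xorg.0.log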

    Read the article

  • Switching DVI socket

    - by lurscher
    I have Ubuntu 10.10 x86_64 with Nvidia 9800 gt and Nvidia driver version 270.41.06 My video card has two DVI sockets, but I only use the single monitor configuration. Now, I think the main DVI socket might be busted, so I want to try to enable the other as the main one, however, I don't know how to achieve that. I tried just plugging the monitor in that socket but it won't auto-detect it (it would have been way too easy to just work). This is my xorg.conf: Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: builtin, VertRefresh source: builtin Identifier "Monitor0" VendorName "Unknown" ModelName "AOC" HorizSync 31.5 - 84.7 VertRefresh 60.0 - 78.0 ModeLine "1080p" 172.8 1920 2040 2248 2576 1080 1081 1084 1118 -hsync +vsync Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce 9800 GT" EndSection Section "Screen" # Removed Option "metamodes" "1024x768 +0+0" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "CustomEDID" "CRT-0:/home/charlesq/lg.bin" Option "TVStandard" "HD1080p" Option "TwinView" "0" Option "TwinViewXineramaInfoOrder" "CRT-0" Option "metamodes" "1080p +0+0" SubSection "Display" Depth 24 EndSubSection EndSection
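
    With the proprietary nvidia driver, the usual way to force a particular connector is a hint in xorg.conf rather than autodetection; a sketch for the Device or Screen section (whether the second DVI port shows up as DFP-1, and what /var/log/Xorg.0.log calls it on this card, is an assumption to verify first):

        Option "UseDisplayDevice" "DFP-1"
        Option "ConnectedMonitor" "DFP-1"

    Since the existing config already forces a CustomEDID for CRT-0, that option (and the TwinViewXineramaInfoOrder entry naming CRT-0) would presumably need to follow the monitor to whichever connector name it ends up on.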

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations (one-to-one, one-to-many, many-to-one or many-to-many), NHibernate needs to know what to do with their related entities at three particular moments: when saving, updating or deleting. In particular, there are two possible behaviors: either ignore these related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are: None: ignore; this is the default. Save-Update: if the entity is being saved or updated, also save any related entities that are either not saved or have been modified, and associate these related entities with the root entity; generally safe. Delete: if the entity is being deleted, also delete the related entities; this is only useful for parent-child relations. Delete-Orphan: identical to Delete, with the addition that if a related entity is removed from the association (orphaned), it is also deleted; also only for parent-child. All: combination of Save-Update and Delete, usually what we want (for parent-child relations, of course). All-Delete-Orphan: same as All, plus delete any related entities that lose their relationship. In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity but not its related entities would result in a constraint violation on the database.
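
    As a brief illustration of where these options live, a cascade setting in an hbm.xml collection mapping might look like the sketch below (class and column names are invented):

        <set name="Lines" cascade="all-delete-orphan" inverse="true">
          <key column="OrderId" />
          <one-to-many class="OrderLine" />
        </set>

    Here deleting an Order, or removing an OrderLine from the Lines collection, also deletes the corresponding child rows, which matches the parent-child caveat above.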

    Read the article

  • Why is it good to have website content files on a drive other than the system (OS) drive?

    - by Jeffrey
    I am wondering what benefits moving all website content files from the default inetpub directory (C:) to something like D:\wwwroot would give me. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IURS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what benefits I would gain. Some of the environment settings are as below: VMware, Windows 2008 R2 64-bit, IIS 7.5, C:\inetpub\site1, C:\inetpub\site2. Also, as this article (moving the iis7 inetpub directory to a different drive) points out, I am not sure if it's worth the trouble to migrate files to a different drive: PLEASE BE AWARE OF THE FOLLOWING: WINDOWS SERVICING EVENTS (I.E. HOTFIXES AND SERVICE PACKS) WOULD STILL REPLACE FILES IN THE ORIGINAL DIRECTORIES. THE LIKELIHOOD THAT FILES IN THE INETPUB DIRECTORIES HAVE TO BE REPLACED BY SERVICING IS LOW BUT FOR THIS REASON DELETING THE ORIGINAL DIRECTORIES IS NOT POSSIBLE.

    Read the article

  • Indicator applet in MATE

    - by balping
    I hate GNOME 3, so I installed MATE (a GNOME 2 fork) on my Ubuntu 12.04 desktop. It's pretty good, but I have a problem: it doesn't have (or I haven't found) a working indicator applet on the top panel. I installed the mate-indicator-applet package, but it displays only a message icon; I need volume control with the Rhythmbox panel, Ubuntu One, and a system menu (log off, shutdown, hibernate, etc.). So is it possible to set up the indicator applet like in normal GNOME? (The original post included screenshots of the current panel and the desired one.)

    Read the article

  • Cannot Boot, How to recover

    - by Kendor
    I am running 11.10 64-bit with GNOME Shell. Something happened late Friday whereby my machine never gets to the login screen. I do get to an Ubuntu splash logo; after that I get a text screen that it hangs on. The screen refers to issues with mounting various network resources, including VMware and also some references to my NAS that are in fstab. If I hit "Esc" I can get to the GRUB menu and into the recovery console. If I try to do a file system check, I run into a similar error screen to the one I see when trying to boot normally. A possible clue here is that during my last good session I made some modifications to the /etc/hosts file to reference another system which I'm connecting to with Synergy. I don't believe I have hardware issues, as I'm able to boot properly with a live USB and connect to my network/Internet. A few more tidbits: I have regular Deja Dup backups on my NAS. I have a good Clonezilla whole-drive image which is 4-6 weeks old. My home is encrypted. I thought I'd try blowing away my hosts file via live USB, but when I mounted the hard drive everything was read-only and I couldn't figure out how to replace it. P.S. I logged in via the CLI and modified the hosts file to remove the entry I'd made, to no avail. The system continues to get stuck on the following: CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel version 3.1s Would love some sober advice on how to attack this.
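
    Two small things that may help while digging (a sketch; device names and share paths are placeholders): the live-USB read-only problem is usually just a matter of remounting, and a CIFS line in fstab can be kept from blocking boot with noauto (or Ubuntu's nobootwait option) until the underlying issue is found:

        sudo mount -o remount,rw /mnt/target-root
        # in /etc/fstab on the target system, for example:
        //nas/backup  /mnt/backup  cifs  credentials=/etc/cifs-creds,noauto  0  0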

    Read the article

  • Yum install packages to an alternate directory, without chroot?

    - by Stefan Lasiewski
    I would like to use the Foswiki yum repository to install Foswiki (296 packages). The default installation path is /var/lib/, but I want to install it to an alternate location at /opt/www/. In the future, I still want to use yum to check for and apply updates to the packages. Is it possible to use yum to install packages to a location which is different from the default location provided by the RPMs? Does yum provide anything similar to ./configure --prefix=/usr/local/ or rpm --install --prefix=/opt/local? Yum provides the --installroot option, but that appears to be primarily for chroot environments.
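
    As background, relocation is decided per package rather than by yum: an RPM can only be installed under a different prefix if it was built relocatable (i.e. it declares a Prefixes tag). A sketch of how to check and, where supported, relocate at install time with plain rpm (the file name and paths are examples):

        rpm -qpi foswiki.rpm | grep -i relocations
        rpm -ivh --relocate /var/lib=/opt/www foswiki.rpm

    Yum itself exposes no equivalent of --prefix, so if the packages are not relocatable the practical options are --installroot (with the chroot-style caveats mentioned above) or a symlink/bind mount from /var/lib to the desired drive.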

    Read the article

  • Update Errors in Xubuntu 12.10

    - by wil
    I updated my computer from 12.04 to 12.10, and since the update finished I am unable to update my computer. I tried installing a new copy of 13.04 but my CPU doesn't support PAE. I have an IBM ThinkPad T42 with a 1.7 GHz CPU. When updating through the terminal, this is the output. sudo apt-get upgrade: [sudo] password for wil: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: linux-image-extra-3.5.0-34-generic : Depends: linux-image-3.5.0-34-generic but it is not installed linux-image-generic : Depends: linux-image-3.5.0-34-generic but it is not installed E: Unmet dependencies. Try using -f. sudo apt-get upgrade -f: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following NEW packages will be installed: linux-image-3.5.0-34-generic 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0 B/11.8 MB of archives. After this operation, 25.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 191530 files and directories currently installed.) Unpacking linux-image-3.5.0-34-generic (from .../linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb) ... This kernel does not support a non-PAE CPU. dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb (--unpack): subprocess new pre-installation script returned error exit status 1 No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) wil@wil-ThinkPad-T42:~/Desktop$
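
    One way to get apt out of this loop, sketched below, is simply to drop the metapackages that keep pulling in the PAE-only 3.5 kernel, which leaves the machine running its existing 12.04-era kernel (whether staying on that kernel is acceptable is a separate decision; the package names are taken from the error output above):

        sudo apt-get remove linux-image-generic linux-image-extra-3.5.0-34-generic
        sudo apt-get -f install
        sudo apt-get upgrade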

    Read the article
