Search Results

Search found 15447 results on 618 pages for 'multi mode'.


  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC2 features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010, and many online resources discussing the upgrade process are a "Bing/Google search" away.

    As I happen to be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group meeting on April 20th, 2010 (check http://www.fladotnet.com for more info), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual Studio 2010, to demonstrate how simple the upgrade process is. In the next few lines, I will briefly touch on upgrading to MVC2 in Visual Studio 2008, then discuss the Visual Studio 2010 upgrade process in more detail, highlighting the advantage of its multi-targeting support.

    Using Visual Studio 2008 SP1
    For upgrading to MVC2 using VS2008 SP1, a Microsoft white paper [1] presents two approaches: (1) using a provided automated upgrade tool, or (2) manually upgrading the project. I personally prefer the automated tool, although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one takes.

    Using Visual Studio 2010
    Life is much easier for developers who have already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard, as shown in figures 1 through 4. As we proceed with the upgrade, the wizard asks whether we want to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (figure 5). VS2010 does a good job with multi-targeting: we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (multi-targeting lets us target frameworks as early as .Net 2.0 in VS2010).

    Figure 1 - Open Solution File Using VS2010
    Figure 2 - VS2010 Conversion Wizard
    Figure 3 - Ready To Convert To VS2010 Confirmation Screen
    Figure 4 - VS2010 Solution Conversion Progress
    Figure 5 - Confirm Target Framework Upgrade

    To make my demonstration realistic, I opted to keep the project targeted at the .Net 3.5 framework. After the conversion completed successfully, a quick sanity check confirmed that the NerdDinner project is still targeted at .Net 3.5, as shown in figure 6. Inspecting the Web.Config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (figure 7), so we can now leverage the newly introduced MVC2 and VS2010 features with no effort or time invested in modifying existing code.

    Figure 6 - Confirm Target Framework Remained .Net 3.5
    Figure 7 - Confirm MVC DLL Version Has Been Upgraded

    In conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 lets us adopt this latest development tool while still supporting development targets as early as .Net 2.0.
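    As a cross-check of figure 7 for readers following along without the screenshots: the upgrade's visible footprint in Web.Config is the System.Web.Mvc assembly reference moving from Version=1.0.0.0 to Version=2.0.0.0. A hypothetical excerpt (the surrounding elements and exact layout are assumptions; verify against your own Web.Config):

        <compilation debug="false">
          <assemblies>
            <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          </assemblies>
        </compilation>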
"Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2" http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

    Read the article

  • Lenovo G580 video driver does not work

    - by Evgeniy
    I have trouble with my video driver. My laptop has two video cards: a GeForce 630M and Intel HD 4000 series graphics. I installed 12.04 and it starts in Unity 2D mode. I have tried to install drivers for both video cards, but without success. The NVIDIA control panel offers to create an xorg config file manually, but after this the screen resolution drops to 640x480 and I can't get it back to normal. Please help me if this is possible.
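    A common way out of a broken, manually generated xorg.conf (a sketch, not specific to this laptop) is to move the file aside from a virtual console (Ctrl+Alt+F1) and restart the display manager so X falls back to auto-detection:

        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
        sudo service lightdm restart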

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is its support for multi-threading. Multi-threading allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads, so that each thread has fine-grained control over a segment of the total data volume at any time. The idea behind threading is the notion that "many hands make light work": each thread takes a segment of data and operates on that smaller set in parallel. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of records across the threads and to minimize resource and lock contention.

    The best way to visualize threading is a "pie" analogy. Imagine the total workset for a batch job is a pie; if you split that pie into equal-sized segments, each segment represents an individual thread. Threading has advantages and disadvantages:

    - Smaller elapsed runtimes - Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, each thread finishes earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime.
    - Threads can be managed individually - Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X, that is the only thread that needs to be resubmitted.
    - Threading can be somewhat dynamic - The number of threads run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted.
    - Failure risk due to data issues is reduced - As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or group of threads.
    - The number of threads is not infinite - As with any resource, there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time.

    Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtime for all threads should be the same; in other words, when executing in multiple threads, all the threads should in theory finish at the same time. Whilst this is possible, individual threads may take longer than others for the following reasons:

    - Workloads within the threads are not always the same - Whilst each thread operates on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate that requires more processing, or a meter may have a complex configuration to process. If a thread has a higher proportion of objects with complex processing, it will take longer than a thread with simple processing. The amount of processing depends on the configuration of the individual data for the job.
    - Data may be skewed - Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, some jobs use additional factors to select records for processing. If any of those factors exhibit data skew, certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule, threads in that schedule may finish later than other threads.

    Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1), available from My Oracle Support.
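    The submission syntax is product-specific, but the "pie" idea is easy to sketch in shell: submit the same job once per thread, passing each instance its thread number and the thread limit. Note that run_batch_job.sh is a hypothetical wrapper for illustration, not the framework's actual command:

        THREAD_LIMIT=4
        for THREAD in $(seq 1 "$THREAD_LIMIT"); do
            # each instance works its own segment of the key range, in parallel
            run_batch_job.sh -job MYJOB -thread "$THREAD" -limit "$THREAD_LIMIT" &
        done
        wait
        # if, say, thread 3 fails on bad data, only that segment is resubmitted:
        # run_batch_job.sh -job MYJOB -thread 3 -limit "$THREAD_LIMIT"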

    Read the article

  • Kernel panic - not syncing: Attempted to kill init! - grub rescue

    - by Nimish
    I have Windows 8 and Ubuntu 13.10 installed on my laptop in dual-boot mode. While updating the disk management system my PC hung, and on restarting it drops to a grub rescue prompt. I tried ls (hd0,msdos_) for all the partitions, but it shows "unknown file system". I tried booting with the Ubuntu, Boot-Repair and Super Boot live CDs, but each shows "Kernel panic - not syncing: Attempted to kill init!" and hangs with two blinking lights. Please suggest some help as soon as possible. Thanks in advance.

    Read the article

  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a Router. We have a fairly simple routing situation - two IP blocks, one of which has a route out. We need things on the first block to go through the router, and out. I have an old Cisco 2801 router doing the job right now.

    For our example - IP Block 1: 50.203.110.232/29; the router interface on this block is 50.203.110.237 and the route out is 50.203.110.233. IP Block 2: 50.202.219.1/27; the router interface on this block is 50.202.219.20. I have a static route created for: 0.0.0.0 0.0.0.0 50.203.110.233. The router seems to understand this: when on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo!

    The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27, default gateway 50.202.219.20. I can ping myself. I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237). I cannot ping anything else in IP Block 1, nor can I ping 8.8.8.8. Here is my configuration file:

    <HP>display current-configuration
    #
     version 5.20.106, Release 2507, Standard
    #
     sysname HP
    #
     domain default enable system
    #
     dar p2p signature-file flash:/p2p_default.mtd
    #
     port-security enable
    #
     undo ip http enable
    #
     password-recovery enable
    #
    vlan 1
    #
    domain system
     access-limit disable
     state active
     idle-cut disable
     self-service-url disable
    #
    user-group system
     group-attribute allow-guest
    #
    local-user admin
     password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV
     authorization-attribute level 3
     service-type telnet
    #
    cwmp
     undo cwmp enable
    #
    interface Aux0
     async mode flow
     link-protocol ppp
    #
    interface Cellular0/0
     async mode protocol
     link-protocol ppp
    #
    interface Ethernet0/0
     port link-mode route
     ip address 50.203.110.237 255.255.255.248
    #
    interface Ethernet0/1
     port link-mode route
     ip address 50.202.219.20 255.255.255.224
    #
    interface NULL0
    #
     ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent
    #
     load xml-configuration
    #
     load tr069-configuration
    #
    user-interface tty 12
    user-interface aux 0
    user-interface vty 0 4
     authentication-mode scheme
    #

    My guess right now is that there is some sort of "permission" needed to use the default route. The manuals haven't turned up a lot in this area that doesn't make the situation much more complicated (but maybe it needs to be more complicated?). Background: we use HP switches, and I love the CLI. I bought HP thinking the command line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance!

    Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added: ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32. Alas, still no dice.
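    One hedged diagnostic before changing any configuration (Comware syntax from memory - verify against your manual): repeat the ping that works from the CLI, but source it from the Block 2 interface address.

        ping -a 50.202.219.20 8.8.8.8

    If a plain ping 8.8.8.8 succeeds but the sourced one times out, the router itself is fine and the device at 50.203.110.233 simply has no return route for the 50.202.219.x block; the fix then belongs on that upstream device (a static route back, or NAT on the MSR), not in a routing "permission" on the router.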

    Read the article

  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to set up HAProxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAProxy servers are running Ubuntu 12.04. The HAProxy version is 1.4.18. When I start HAProxy I get the following error:

    Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms.

    I am not really sure what the issue could be. I have verified the following:

    - EC2 security group ports are open
    - Pored over my config files looking for issues; I currently do not see any
    - Ensured that xinetd was installed
    - Ensured that I am using the correct IP address of the MySQL server

    Any help with this is greatly appreciated. Here are my current configs.

    Load balancer /etc/haproxy/haproxy.cfg:

    global
        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice
        maxconn 4096
        user haproxy
        group haproxy
        debug
        #quiet
        daemon

    defaults
        log global
        mode http
        option tcplog
        option dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000

    frontend pxc-front
        bind 0.0.0.0:3307
        mode tcp
        default_backend pxc-back

    frontend stats-front
        bind 0.0.0.0:22002
        mode http
        default_backend stats-back

    backend pxc-back
        mode tcp
        balance leastconn
        option httpchk
        server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3

    backend stats-back
        mode http
        balance roundrobin
        stats uri /haproxy/stats

    MySQL server /etc/xinetd.d/mysqlchk:

    # default: on
    # description: mysqlchk
    service mysqlchk
    {
        # this is a config for xinetd, place it in /etc/xinetd.d/
        disable = no
        flags = REUSE
        socket_type = stream
        port = 9200
        wait = no
        user = nobody
        server = /usr/bin/clustercheck
        log_on_failure += USERID
        #only_from = 0.0.0.0/0  # recommended to put the IPs that need
                                # to connect exclusively (security purposes)
        per_source = UNLIMITED
    }

    MySQL server /etc/services - added the line:

    mysqlchk 9200/tcp # mysqlchk

    MySQL server /usr/bin/clustercheck:

    #!/bin/bash
    #
    # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
    #
    # Author: Olaf van Zandwijk <[email protected]>
    # Documentation and download: https://github.com/olafz/percona-clustercheck
    #
    # Based on the original script from Unai Rodriguez
    #
    MYSQL_USERNAME="testuser"
    MYSQL_PASSWORD=""
    ERR_FILE="/dev/null"
    AVAILABLE_WHEN_DONOR=0
    #
    # Perform the query to check the wsrep_local_state
    #
    WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

    if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
    then
        # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
        /bin/echo -en "HTTP/1.1 200 OK\r\n"
        /bin/echo -en "Content-Type: text/plain\r\n"
        /bin/echo -en "\r\n"
        /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
        /bin/echo -en "\r\n"
    else
        # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
        /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
        /bin/echo -en "Content-Type: text/plain\r\n"
        /bin/echo -en "\r\n"
        /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
        /bin/echo -en "\r\n"
    fi
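    Since HAProxy reports a socket error on the health-check port rather than a MySQL error, a sketch of a useful first test (assuming the addresses above) is to exercise the xinetd check directly:

        # on the Percona node: does the check script itself pass?
        /usr/bin/clustercheck

        # from the HAProxy box: does xinetd actually answer on 9200?
        curl -i http://10.86.154.105:9200

    Note that the script as shown logs in as testuser with an empty password, so that account must actually exist on every node, and xinetd must have been restarted after mysqlchk was added. If curl cannot connect at all, the problem is on the xinetd side rather than in HAProxy.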

    Read the article

  • How do I reinstate my admin user privileges to global read/write

    - by Matt
    I am running Ubuntu 12.04 LTS. I only have the one user, which I created when I installed Ubuntu. Everything had been fine - love it - until I updated a software package recently from the command line using sudo (not gksudo). I was having a little bother which did not make sense to me, and in a fluff I changed my user read/write privileges through the GUI (not even clear how I got there!). After restart I was stuck in a login loop - using the right login password but kept getting looped back to the login screen, and could only log in as Guest. I could still log in with my user/password via Ctrl + Alt + F1.

    Eventually I was able to log in again at start up. I am not sure exactly which change did it, but it was one of/or a combination of: installing the latest security updates, changing the login manager from LightDM to GDM and back again, and removing .ICEauthority/.Xauthority and chown-ing them back to my user.

    The current dilemma is that my primary admin user's privileges were read-only. In the command line, ls -ld /home/user returned this value:

    drwx------ 48 username username 20480

    I have since changed this using sudo chmod 0755 /home/username (from my limited understanding, 755 should return my user privileges to their original read/write glory). ls -ld /home/user currently shows my user privileges as:

    drwxr-xr-x 48 username username 20480

    I still seem to have only read access permissions. I've been through lots of threads (and the help file) that talk about creating new users, groups, permissions etc., but specific info on returning my existing global/admin/primary user's privileges to what they were when I first created that user is baffling me. I feel this is something really simple I'm just not getting. Please help!

    Output of sudo mount:

    /dev/sda1 on / type ext4 (rw,errors=remount-ro)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
    none on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=0755)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    gvfs-fuse-daemon on /home/meng/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=meng)
    none on /tmp/guest-1R2Fi5 type tmpfs (rw,mode=700)
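    Since the home directory mode is now drwxr-xr-x, one hedged thing left to rule out (a sketch; substitute your real username) is ownership rather than mode - a root-owned .Xauthority or .ICEauthority breaks the graphical login even when the directory bits look right:

        ls -l /home/username/.Xauthority /home/username/.ICEauthority
        sudo chown -R username:username /home/username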

    Read the article

  • PowerShell &ndash; Recycle All IIS App Pools

    - by Lance Robinson
    With a little help from Shay Levy's post on Stack Overflow and the MSDN documentation, I added this handy function to my profile to automatically recycle all IIS app pools.

    function Recycle-AppPools {
        param(
            [string] $server = "3bhs001",
            [int] $mode = 1  # ManagedPipelineModes: 0 = integrated, 1 = classic
        )
        $iis = [adsi]"IIS://$server/W3SVC/AppPools"
        $iis.psbase.children | %{
            $pool = [adsi]($_.psbase.path)
            # AppPoolStates: 1 = starting, 2 = started, 3 = stopping, 4 = stopped
            if ($pool.AppPoolState -eq 2 -and $pool.ManagedPipelineMode -eq $mode) {
                $pool.psbase.invoke("recycle")
            }
        }
    }
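    Hypothetical usage, with a made-up server name: recycle the started integrated-mode pools on another machine by overriding both parameters.

        Recycle-AppPools -server "web01" -mode 0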

    Read the article

  • How to apply patches or upgrade BAM 11g

    - by anirudh.pucha(at)oracle.com
    In general, before upgrading to the latest patchset or applying any BAM adapter patches, always make sure the BAM Adapter staging-mode is set to "nostage". This configuration can be verified by searching for the "OracleBamAdapter" keyword in the MiddlewareHome/user_projects/domains//config/config.xml file. To redeploy the BAM adapter, you should pick "I will make the deployment accessible from the following location" as the "Source accessibility" option and set the path to point to /Oracle_SOA1/soa/connectors/OracleBamAdapter.rar; otherwise, the staging-mode will be unset.
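    A quick hedged way to check the current value (the domain directory name is elided above, so substitute your own):

        grep -n "OracleBamAdapter" <MiddlewareHome>/user_projects/domains/<domain_name>/config/config.xml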

    Read the article

  • VirtualBox 4.2.14 is now available

    - by user12611829
    The VirtualBox development team has just released version 4.2.14, and it is now available for download. This is a maintenance release for version 4.2 and contains quite a few fixes. Here is the list from the official Changelog.

    - VMM: another TLB invalidation fix for non-present pages
    - VMM: fixed a performance regression (4.2.8 regression; bug #11674)
    - GUI: fixed a crash on shutdown
    - GUI: prevent stuck keys under certain conditions on Windows hosts (bugs #2613, #6171)
    - VRDP: fixed a rare crash on the guest screen resize
    - VRDP: allow to change VRDP parameters (including enabling/disabling the server) if the VM is paused
    - USB: fixed passing through devices on Mac OS X host to a VM with 2 or more virtual CPUs (bug #7462)
    - USB: fixed hang during isochronous transfer with certain devices (4.1 regression; Windows hosts only; bug #11839)
    - USB: properly handle orphaned URBs (bug #11207)
    - BIOS: fixed function for returning the PCI interrupt routing table (fixes NetWare 6.x guests)
    - BIOS: don't use the ENTER / LEAVE instructions in the BIOS as these don't work in the real mode as set up by certain guests (e.g. Plan 9 and QNX 4)
    - DMI: allow to configure DmiChassisType (bug #11832)
    - Storage: fixed lost writes if iSCSI is used with snapshots and asynchronous I/O (bug #11479)
    - Storage: fixed accessing certain VHDX images created by Windows 8 (bug #11502)
    - Storage: fixed hang when creating a snapshot using Parallels disk images (bug #9617)
    - 3D: seamless + 3D fixes (bug #11723)
    - 3D: version 4.2.12 was not able to read saved states of older versions under certain conditions (bug #11718)
    - Main/Properties: don't create a guest property for non-running VMs if the property does not exist and is about to be removed (bug #11765)
    - Main/Properties: don't forget to make new guest properties persistent after the VM was terminated (bug #11719)
    - Main/Display: don't lose seamless regions during screen resize
    - Main/OVF: don't crash during import if the client forgot to call Appliance::interpret() (bug #10845)
    - Main/OVF: don't create invalid appliances by stripping the file name if the VM name is very long (bug #11814)
    - Main/OVF: don't fail if the appliance contains multiple file references (bug #10689)
    - Main/Metrics: fixed Solaris file descriptor leak
    - Settings: limit depth of snapshot tree to 250 levels, as more will lead to decreased performance and may trigger crashes
    - VBoxManage: fixed setting the parent UUID on diff images using sethdparentuuid
    - Linux hosts: work around for not crashing as a result of automatic NUMA balancing which was introduced in Linux 3.8 (bug #11610)
    - Windows installer: force the installation of the public certificate in background (i.e. completely prevent user interaction) if the --silent command line option is specified
    - Windows Additions: fixed problems with partial install in the unattended case
    - Windows Additions: fixed display glitch with the Start button in seamless mode for some themes
    - Windows Additions: Seamless mode and auto-resize fixes
    - Windows Additions: fixed trying to retrieve new auto-logon credentials if current ones were not processed yet
    - Windows Additions installer: added the /with_wddm switch to select the experimental WDDM driver by default
    - Linux Additions: fixed setting own timed out and aborted texts in information label of the lightdm greeter
    - Linux Additions: fixed compilation against Linux 3.2.0 Ubuntu kernels (4.2.12 regression as a side effect of the Debian kernel build fix; bug #11709)
    - X11 Additions: reduced the CPU load of VBoxClient in drag'and'drop mode
    - OS/2 Additions: made the mouse wheel work (bug #6793)
    - Guest Additions: fixed problems copying and pasting between two guests on an X11 host (bug #11792)

    The full changelog can be found here. You can download binaries for Solaris, Linux, Windows and MacOS hosts at http://www.virtualbox.org/wiki/Downloads

    Read the article

  • How to let the screen time out on the sign-in screen

    - by Aren Cambre
    I have Ubuntu 13.10 set up on an older PC. If I am logged in as a user, then the screen timeout and power saving mode works as expected. However, if nobody is logged in, the screen never times out, and the monitor stays on all the time with the login screen. How can I adjust Ubuntu 13.10 so that the login screen also times out after a minute or so? I don't want the monitor's power saving mode to be disabled just because nobody is currently logged in.
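    One hedged approach for 13.10's LightDM (file paths assumed; test on your own setup): run xset from a display-setup script so DPMS is armed while the greeter is showing.

        # /etc/lightdm/lightdm.conf
        [SeatDefaults]
        display-setup-script=/usr/local/bin/greeter-dpms.sh

        # /usr/local/bin/greeter-dpms.sh (mark it executable with chmod +x)
        #!/bin/sh
        xset +dpms
        xset dpms 0 0 60    # turn the monitor off after 60 seconds of idle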

    Read the article

  • How to add GUI to command-line Ubuntu?

    - by SlimGG
    I had installed Ubuntu on a VM with a command-line-only interface. Now I have had enough command-line experience and plan to proceed in GUI mode, but I don't want to install a fresh copy of Ubuntu with a GUI; I want my current CLI install to gain a GUI. I have Ubuntu 12.04 LTS installed as CLI and would like some lightweight (less resource-consuming, slow-system-friendly) GUI. What are my options, and how can I accomplish this?
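    A minimal sketch of the usual route on 12.04 - install a lightweight desktop metapackage on top of the existing CLI system:

        sudo apt-get update
        sudo apt-get install lubuntu-desktop      # LXDE; xubuntu-desktop (Xfce) is another light option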

    Read the article

  • Microsoft JScript runtime error in Visual Studio 2010

    - by anirudha
    There are many tools for debugging JavaScript: Visual Studio, Firebug and several other great plugins among them. Here I show you a solution for "Microsoft JScript runtime error: Object doesn't support this property or method".

    If you search Google for how to debug JavaScript in Visual Studio, most results tell you to follow this instruction: go to Internet Options in IE > Advanced tab > Browsing section > uncheck both options "Disable script debugging (Internet Explorer)" and "Disable script debugging (Other)". That information is outdated and no longer true: in Visual Studio you can debug JavaScript whether those options are checked or unchecked, and you can try these steps even in the Express edition of Visual Studio.

    I found a little problem: my code (based on a jQuery plugin) tries to call some functions. Some of them are not implemented in a given browser, so the plugin falls back to others; that is fine and works in every browser. But when Visual Studio finds a call to a function not implemented in the browser I am debugging with, it reports "Microsoft JScript runtime error: Object doesn't support this property or method" - and reports it again every time I refresh (F5) the page. So even though the code is not buggy, I am forced through an error window first, every time, before I can see the page I actually want to see. This behavior is harsh. It is no problem if you do not commonly use IE, but when you really want to debug in it, it becomes a pain.

    I have a patch for that, if you really like debugging in Visual Studio with IE: when you debug JavaScript in IE, choose a compatibility browser mode, and IE will never again force you to see that error window first. How to do it: after you start debugging the project or website, IE launches automatically. Open the developer tools by pressing F12, and you will see the Browser Mode and Document Mode settings (document mode IE7 by default, or a browser mode based on your settings). Set the Browser Mode to a compatibility view [any], and the next time, the error window that says "Microsoft JScript runtime error: Object doesn't support this property or method" never comes back.
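    If the underlying trigger is a plugin probing for features the browser lacks, a hedged guard around the call keeps the debugger from stopping there at all (exampleVendorMethod is a made-up name standing in for whatever member IE is missing):

        if (typeof window.exampleVendorMethod === "function") {
            window.exampleVendorMethod();   // only called where the browser implements it
        }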

    Read the article

  • Force gdm login screen to the primary monitor

    - by Kirill
    I have two monitors attached to my video card. The primary monitor has a resolution of 1280x1024 and the second has 1920x1200. My GDM login screen always appears on the second monitor, even if it is switched off. My question is: how do I force GDM to always show the login screen on the primary monitor at 1280x1024? I use an NVIDIA GT9500 video card in TwinView mode. I can't use Xinerama because VDPAU doesn't work correctly in that mode.
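    With the proprietary NVIDIA driver, which display the login screen lands on in TwinView is often steered by the driver's nvidiaXineramaInfoOrder option. A hedged xorg.conf fragment (option name from the NVIDIA driver README as I recall it, and the display name CRT-0 is an assumption - check yours in nvidia-settings):

        Section "Screen"
            Identifier "Screen0"
            Option     "nvidiaXineramaInfoOrder" "CRT-0"   # list the 1280x1024 monitor first
        EndSection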

    Read the article

  • ubuntu stuck in a login loop after editing profile file

    - by varunit
    I'm stuck in a login loop now. What I did was edit /etc/profile as root and add the following line:

    export PATH = /opt/my jdk 7 path/bin:$PATH

    After logging out and trying to log back in, I cannot. I have tried booting in recovery mode and entering a root prompt, and tried to edit the file in vi, but it always opens in read-only mode and hence cannot be saved. I just need a way to delete that line and boot into Ubuntu again. Please help me out guys..
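    In recovery mode the root filesystem is mounted read-only, which is why vi refuses to save; a sketch of the usual remedy from the root prompt:

        mount -o remount,rw /
        nano /etc/profile        # delete the faulty export line, save, reboot

    When re-adding the line afterwards, note that shell assignments take no spaces around = and a path containing spaces must be quoted: export PATH="/opt/my jdk 7 path/bin:$PATH"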

    Read the article

  • Should external USB hard drive auto-mount in Ubuntu 12.04.1 LTS?

    - by Chris Good
    I want to have external USB hard drives automatically mounted when plugged in. I have 2 drives exactly the same except for volume label. They both have the same UUID. I want to be easily able to swap them, as I'm using them for backups and want to keep one at home for off-site backup. I've set up /etc/fstab so they should mount at different places based on their volume label.

    /etc/fstab:

    LABEL=Passport1 /media/Passport1 ntfs defaults,windows_names,locale=en_US.utf8 0 0
    LABEL=Passport2 /media/Passport2 ntfs defaults,windows_names,locale=en_US.utf8 0 0

    blkid shows:

    ...
    /dev/sdc1: LABEL="Passport2" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"
    /dev/sdd1: LABEL="Passport1" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"

    They both mount automatically during reboot but do not mount when just plugged in to a running system. I've read lots of stuff about this; much of it is old, so I'm not sure if it applies. I've read some stuff that says the mounts should happen automatically when plugged in, and lots of other stuff that says you have to install other software to make this happen, although much of it just seems to set up the fstab. What's the real story? Here is /var/log/syslog when the drive is plugged in:

    Dec 14 11:22:58 ausyvutims1 kernel: [66221.300196] usb 1-1: new high-speed USB device number 6 using ehci_hcd
    Dec 14 11:22:58 ausyvutims1 mtp-probe: checking bus 1, device 6: "/sys/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1"
    Dec 14 11:22:58 ausyvutims1 mtp-probe: bus: 1, device: 6 was not an MTP device
    Dec 14 11:22:58 ausyvutims1 kernel: [66221.656020] scsi7 : usb-storage 1-1:1.0
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.661534] scsi 7:0:0:0: Direct-Access WD My Passport 0748 1016 PQ: 0 ANSI: 6
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.666466] scsi 7:0:0:1: Enclosure WD SES Device 1016 PQ: 0 ANSI: 6
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.667739] sd 7:0:0:0: Attached scsi generic sg3 type 0
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.667913] ses 7:0:0:1: Attached Enclosure device
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.668047] ses 7:0:0:1: Attached scsi generic sg4 type 13
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.678473] sd 7:0:0:0: [sdc] 1953458176 512-byte logical blocks: (1.00 TB/931 GiB)
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.687700] sd 7:0:0:0: [sdc] Write Protect is off
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.687705] sd 7:0:0:0: [sdc] Mode Sense: 47 00 10 08
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.701076] sd 7:0:0:0: [sdc] No Caching mode page present
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.701081] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.738062] sd 7:0:0:0: [sdc] No Caching mode page present
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.738068] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.754558] sdc: sdc1
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.792006] sd 7:0:0:0: [sdc] No Caching mode page present
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.792012] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    Dec 14 11:22:59 ausyvutims1 kernel: [66222.792016] sd 7:0:0:0: [sdc] Attached SCSI disk
    Dec 14 11:22:59 ausyvutims1 ata_id[16971]: HDIO_GET_IDENTITY failed for '/dev/sdc': Invalid argument

    Thanks for any help offered
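    For what it's worth, on 12.04 the hotplug auto-mounting people describe is done by udisks on behalf of a logged-in desktop session; fstab is only consulted at boot (or on an explicit mount). One hedged option for a headless box is a udev rule that reuses the fstab entries by label (rule file name and approach are assumptions, and later systemd-based releases sandbox RUN+= so this pattern does not carry forward):

        # /etc/udev/rules.d/99-passport.rules
        ACTION=="add", ENV{ID_FS_LABEL}=="Passport1", RUN+="/bin/mount /media/Passport1"
        ACTION=="add", ENV{ID_FS_LABEL}=="Passport2", RUN+="/bin/mount /media/Passport2"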

    Read the article

  • How to Open an InPrivate Tab in the Metro Version of Internet Explorer

    - by Taylor Gibb
    Internet Explorer has a secret mode called InPrivate which is pretty much the same as Chrome's incognito mode. It can be accessed on the desktop by right-clicking on the Internet Explorer icon on the taskbar, but how do you open an InPrivate tab in the Metro IE? Read on to find out.

    Read the article

  • Ubuntu hangs on booting up after an update

    - by alFReD NSH
    I made a clean install yesterday, restarted for the first time, and everything went well. Then, after I updated packages and copied my old home directory over the new one, it hung while booting when I restarted. I tried reinstalling and doing the same thing again, but the same thing happened. Here's what I see: the Ubuntu logo with the five dots is shown, 3 or 4 of the dots load, and it hangs there. If I press arrow up before that, some boot messages are shown instead.

    I started my laptop again today (the pictures are from the day before) and booted with the live CD to get the logs. dmesg: http://pastebin.com/aVxV7BQF syslog: http://pastebin.com/4E2BrRUK

    And some info:

    alfred@alFitop:~$ uname -a
    Linux alFitop 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    lshw: http://pastebin.com/AZbKJmsT sources.list: http://pastebin.com/2HazmuyV

    My problem is a bit similar to this one: http://ubuntuforums.org/showthread.php?t=1918271 - though I didn't change my xorg config; I only changed the home directory and updated packages. I've tried memtest and fsck; both passed. Among the recovery mode boot options, I've also realized the same thing happens in failsafe graphical mode. But when I go into networking mode, I can boot up my system, though of course the graphics are just basic. Adding blacklist intel_ips to /etc/modprobe.d/blacklist.conf solves the first message, but I still get the broken pipe and CPU stack traces. The current kernel version is 3.2.0-25; I've tried booting into 3.2.0-23 (the one the installer came with), but got the same results. I also uninstalled apparmor; it didn't help. I've installed Ubuntu again, this time without copying the home directory - same result.

    --- UPDATE ---

    This problem was solved before by removing backports, but it's back again! I updated my laptop last night and the problem came back. It's definitely one of these packages. My /var/log/apt/term.log and /var/log/apt/history.log. I'm almost in the same situation.

    --- UPDATE ---

    I realized this has also happened at times when I updated (without restarting afterwards) and my computer's power was cut off, shutting it down. And I realized that if I apply the fix as I answered, but from somewhere without a GUI (networking mode has the GUI), it doesn't work.
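    Since removing backports solved it before, a hedged way to disable them again from networking recovery mode (assuming they live in /etc/apt/sources.list as the pastebin suggests) before reinstalling the affected packages:

        sudo sed -i 's/^deb\(.*-backports\)/# deb\1/' /etc/apt/sources.list
        sudo apt-get update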

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they cause permission problems for all other clients.

    Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors.

    The details of this behavior seem to depend on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory is created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

    [global]
    create mask = 0666
    directory mask = 0777

    [share]
    force directory mode = 0775
    force create mode = 0660

    I was under the impression that these settings should make sure that directories are created with at least rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder has the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client.

    I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming this is because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone.

    This is the complete smb.conf:

    [global]
    encrypt passwords = yes
    dns proxy = no
    strict locking = no
    read raw = yes
    write raw = yes
    oplocks = yes
    max xmit = 65535
    deadtime = 15
    display charset = LOCALE
    max log size = 10
    syslog only = yes
    syslog = 1
    load printers = no
    printing = bsd
    printcap name = /dev/null
    disable spoolss = yes
    smb passwd file = /var/etc/private/smbpasswd
    private dir = /var/etc/private
    getwd cache = yes
    guest account = nobody
    map to guest = Bad Password
    obey pam restrictions = Yes
    # NOTE: read smb.conf.
    directory name cache size = 0
    max protocol = SMB2
    netbios name = freenas
    workgroup = COMPANY
    server string = FreeNAS Server
    store dos attributes = yes
    hostname lookups = yes
    security = user
    passdb backend = ldapsam:ldap://ldap.company.local
    ldap admin dn = cn=admin,dc=company,dc=local
    ldap suffix = dc=company,dc=local
    ldap user suffix = ou=Users
    ldap group suffix = ou=Groups
    ldap machine suffix = ou=Computers
    ldap ssl = off
    ldap replication sleep = 1000
    ldap passwd sync = yes
    #ldap debug level = 1
    #ldap debug threshold = 1
    ldapsam:trusted = yes
    idmap uid = 10000-39999
    idmap gid = 10000-39999
    create mask = 0666
    directory mask = 0777
    client ntlmv2 auth = yes
    dos charset = CP437
    unix charset = UTF-8
    log level = 1

    [share]
    path = /mnt/zfs0
    printable = no
    veto files = /.snap/.windows/.zfs/
    writeable = yes
    browseable = yes
    inherit owner = no
    inherit permissions = no
    vfs objects = zfsacl
    guest ok = no
    inherit acls = Yes
    map archive = No
    map readonly = no
    nfs4:mode = special
    nfs4:acedup = merge
    nfs4:chown = yes
    hide dot files
    force directory mode = 0775
    force create mode = 0660

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1TB hard drive, and it filled the drive; there are only 4KB free. The MDF file is 323GB and the LDF is 653GB. The hard disk this DB is on has no other files on it other than the MDF and LDF, so it's impossible to free up any space on the drive. The main hard disk is smaller, but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site, and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records, because the DB is in a failed mode (due to no disk space) and doesn't respond to most commands.

    The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem, but everything I've found first involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated.

    I get the following log when trying to switch the DB to online mode:

    Msg 945, Level 14, State 2, Line 3
    Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
    Msg 5069, Level 16, State 1, Line 3
    ALTER DATABASE statement failed.
    Msg 1101, Level 17, State 12, Line 3
    Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:

    - DBCC SHRINKFILE - can't be run because doing a 'use DBNAME' fails
    - Detaching the DB and then changing the location of the MDF/LDF files - this fails because the DB is in an offline mode, so you can't run detach

    I'm at a loss about what else to try. Thanks.
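    One hedged escape route, given that the main disk has room: relocate the log at the metadata level while the database is offline, move the file, and bring it back. This is a sketch - the logical name DBNAME_log is an assumption, so read the real one from sys.master_files first, and the OFFLINE step may itself fail if the database cannot be taken offline in its current state:

        SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('DBNAME');

        ALTER DATABASE DBNAME SET OFFLINE WITH ROLLBACK IMMEDIATE;
        -- move the .ldf to the other drive at the OS level, then:
        ALTER DATABASE DBNAME MODIFY FILE (NAME = DBNAME_log, FILENAME = 'C:\NewPath\DBNAME_log.ldf');
        ALTER DATABASE DBNAME SET ONLINE;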

    Read the article

  • Unknown problem causing major computer failure, booting problem with Windows 7, mainly with 0x0000000A

    - by ken
    Where do I begin? OS = Windows 7. I think it all started when I ran an installation file. I suspect it may have been a virus (even though an AVG scan didn't pick anything up). The installation failed, and the computer crashed then restarted. In the middle of the reboot, I get a BSOD. Normal boot up doesn't work, so I use safe mode.

    Method 1: Not a problem, I thought, because I would do what I normally do and recover from my image file. Unfortunately, my Acronis software can't recover in safe mode.

    Method 2: I created a bootable disc for the Acronis recovery software, managed to boot to Acronis, and started the recovery from the image file. This failed with an error message (which I did not manage to record), something to do with not being able to copy to the $AVG folder.

    Method 3: At this stage, I assumed it was still a virus causing the problem, so I decided to format that partition to remove everything, and hopefully the virus too. I had a lot of problems trying to bypass the system to allow me to format, but (I think - more on this later) I managed to do that. The image was recovered, and I thought the problem was resolved. Tried to boot Windows but got a new error: Boot Manager is missing. I read up on this and managed to copy the Boot Manager from my laptop manufacturer's partition (the partition contains the factory setup image file). Windows loaded, but with a new BSOD showing the 0x0000000A problem.

    Method 4: Attempted to reinstall the factory settings, but this failed; I suspect that by formatting the partition, I may have removed the recovery software. Tried to create a bootable DVD of the factory settings, but the machine is so bad it continues to crash. The bootable DVD method failed.

    Method 5: Spent a lot of time reading up on this error, and even installed a piece of software to help scan and fix the problem. The scan failed, and the software required money! Anyway, lots of BSODs with different error messages like 0x0000001A and 0x000000D1. The error message changes with some reboots.

    Method 6: Found a hotfix from the Windows site to fix the 0x0000000A problem - great, I thought! In safe mode, I can't install the file because of error 0x8007043c. Tried to then install the fix in normal mode, but the installation just hangs. Returned to safe mode and followed advice to bypass 0x8007043c by changing the BITS status (read here: http://www.vistaheads.com/forums/microsoft-public-windowsupdate/181931-error-number-0x8007043c-windows-update.html). However, my machine is at this point so flaky that it hangs every time I right-click the Computer icon.

    I am at my wits' end. Any help or ideas? Cheers

    Read the article

  • Email client won't connect to SMTP authentication server

    - by Jason
    I'm having trouble setting up SMTP Auth for my Ubuntu email server. I have followed Ubuntu's own guide (https://help.ubuntu.com/14.04/serverguide/postfix.html), but my email client, Thunderbird, is giving this error: "lost connection to SMTP client 127.0.0.1". I can't add new users to Thunderbird either, because of this connection problem. Do I have to alter any settings in Thunderbird, perhaps? I did try to make Thunderbird use SSL for IMAP as well, but that doesn't work either. I restarted postfix and dovecot to look for errors, but both run just fine. Prior to the SMTP auth changes, Thunderbird could connect to my server and send mail just fine.

    This is my main.cf file in postfix. It looks just like the one in the Ubuntu guide above:

    readme_directory = no

    # TLS parameters
    #smtpd_use_tls=yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

    myhostname = mail.mysite.com
    mydomain = mysite.com
    alias_maps = hash:/etc/aliases
    alias_database = hash:/etc/aliases
    myorigin = $mydomain
    mydestination = mysite.com
    #relayhost = smtp.192.168.10.1.com
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24
    mailbox_size_limit = 0
    recipient_delimiter = +
    inet_interfaces = all
    home_mailbox = Maildir/
    mailbox_command =

    # SMTP AUTH
    smtpd_sasl_type = dovecot
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sasl_local_domain =
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_security_options = noanonymous
    broken_sasl_auth_clients = yes
    smtpd_tls_auth_only = no
    smtp_tls_security_level = may
    smtpd_tls_security_level = may
    smtp_tls_note_starttls_offer = yes
    smtpd_tls_key_file = /etc/ssl/private/smtpd.key
    smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
    smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
    smtpd_tls_loglevel = 1
    smtpd_tls_received_header = yes

    This is my dovecot configuration in 10-master.conf:

    service imap-login {
      inet_listener imap {
        #port = 143
      }
      inet_listener imaps {
        #port = 993
        #ssl = yes
      }
      # Number of connections to handle before starting a new process. Typically
      # the only useful values are 0 (unlimited) or 1. 1 is more secure, but 0
      # is faster. <doc/wiki/LoginProcess.txt>
      #service_count = 1
      # Number of processes to always keep waiting for more connections.
      #process_min_avail = 0
      # If you set service_count=0, you probably need to grow this.
      #vsz_limit = $default_vsz_limit
    }

    service pop3-login {
      inet_listener pop3 {
        #port = 110
      }
      inet_listener pop3s {
        #port = 995
        #ssl = yes
      }
    }

    service lmtp {
      unix_listener lmtp {
        #mode = 0666
      }
      # Create inet listener only if you can't use the above UNIX socket
      #inet_listener lmtp {
        # Avoid making LMTP visible for the entire internet
        #address =
        #port =
      #}
    }

    service imap {
      # Most of the memory goes to mmap()ing files. You may need to increase this
      # limit if you have huge mailboxes.
      #vsz_limit = $default_vsz_limit

      # Max. number of IMAP processes (connections)
      #process_limit = 1024
    }

    service pop3 {
      # Max. number of POP3 processes (connections)
      #process_limit = 1024
    }

    service auth {
      unix_listener auth-userdb {
        #mode = 0600
        #user =
        #group =
      }

      # Postfix smtp-auth
      unix_listener /var/spool/postfix/private/auth {
        mode = 0660
        user = postfix
      }
    }

    service dict {
      # If dict proxy is used, mail processes should have access to its socket.
      # For example: mode=0660, group=vmail and global mail_access_groups=vmail
      unix_listener dict {
        #mode = 0600
        #user =
        #group =
      }
    }

    I did add auth_mechanisms = plain login to 10-auth.conf as well.
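    A hedged way to see what the server actually offers a client (run from the Thunderbird machine; hostname as configured above):

        openssl s_client -connect mail.mysite.com:25 -starttls smtp
        # after the banner, type:  EHLO test
        # look for 250-STARTTLS and 250-AUTH PLAIN LOGIN in the reply

    If AUTH is missing from the EHLO response, Postfix is not picking up the SASL settings; if it is present, the problem is more likely on the Thunderbird side (its connection security and authentication method must match what is offered).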

    Read the article

  • Weird Ubuntu Desktop Boot Partition On External Hard Drive

    - by Magnitus
    I have a Thinkpad with Windows 7. The last time I installed an Ubuntu/Windows dual boot, Windows was never the same after and regularly got corrupted, so this time I installed Ubuntu on a separate external hard drive. I took a 500 GB external hard drive and used Windows to shrink the partition on it to 400 GB, freeing 100 GB to install Ubuntu. Then I modified the boot priority of my computer to boot from the external hard drive if present. Then I installed Ubuntu desktop on the external hard drive using a DVD, picked the most simplistic partitioning scheme I could get away with (didn't go auto, as that didn't include the external hard drive as a choice), and voilà.

    Fast forward some time, and I'm trying to refresh my understanding of Linux partitions to install a bunch of servers, so I'm looking at the current partitioning scheme on my external hard drive and find the boot partition puzzling... sda is my integrated hard drive with Windows 7; sdb is my Ubuntu desktop external hard drive. Running parted on sdb, I get this:

    (parted) print
    Model: WD My Passport 0740 (scsi)
    Disk /dev/sdb: 500GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End    Size    Type      File system     Flags
     1      1049kB  393GB  393GB   primary   ntfs            boot
     2      393GB   500GB  107GB   extended
     5      393GB   425GB  32.8GB  logical   linux-swap(v1)
     6      425GB   500GB  74.6GB  logical   ext4

    At this point, I'm wondering why the ntfs partition is flagged as "boot" and not my ext4 partition, which is the partition that contains / (and by extension /boot, since it's not on its own separate partition). Looking at mtab only confirms what I already know:

    eric@eric-ThinkPad-W530:~$ sudo cat /etc/mtab
    /dev/sdb6 / ext4 rw,errors=remount-ro 0 0
    proc /proc proc rw,noexec,nosuid,nodev 0 0
    sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
    none /sys/fs/cgroup tmpfs rw 0 0
    none /sys/fs/fuse/connections fusectl rw 0 0
    none /sys/kernel/debug debugfs rw 0 0
    none /sys/kernel/security securityfs rw 0 0
    udev /dev devtmpfs rw,mode=0755 0 0
    devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
    tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
    none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
    none /run/shm tmpfs rw,nosuid,nodev 0 0
    none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0
    none /sys/fs/pstore pstore rw 0 0
    systemd /sys/fs/cgroup/systemd cgroup rw,noexec,nosuid,nodev,none,name=systemd 0 0
    gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,user=eric 0 0
    /dev/sdb1 /media/eric/My\040Passport fuseblk rw,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0

    My lack of understanding concerning this is not vital to anything (this is only my development desktop partition), but it somehow annoys me. Any insight that could shed some light on this would be welcome.

    Read the article

  • Storage Device Manager regarding NTFS automount at boot time

    - by muneesh
    I am using Storage Device Manager to auto-mount an NTFS file system at boot time. I have repeatedly tried to uncheck the 'read only mode' checkbox in the assistant option of Storage Device Manager, but I am not able to auto-mount my NTFS partition in read/write mode. Please suggest a solution to this problem. Remember, I am repeatedly trying to uncheck the read-only checkbox but am not able to do so!
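    If the GUI keeps writing a read-only entry, a hedged fallback is a manual fstab line for a read/write NTFS mount - the UUID and mount point below are placeholders, so take yours from sudo blkid:

        # /etc/fstab
        UUID=XXXX-XXXX  /media/windows  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0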

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors and most applications really need to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows explosively with the number of threads: two threads, each executing n instructions, already have C(2n, n) possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory; however, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation and communicates with others by passing messages. It simplifies concurrent programs, but still has performance issues if the threads need to operate on the same large piece of data.

    There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why.

    Cheers,
    Tony.

    Read the article
