Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • How do I login to SQL Server without having to use "Run as Administrator" when starting Management Studio?

    - by MedicineMan
    When I start Management Studio, unless I use the "Run as Administrator" selection, I cannot login to my local SQL Server. Is this normal? I am a normal developer and don't believe I have a need for high security on my local machine. I'm running SQL Server 2008, Windows 7. The error I get is:

        Cannot connect to (local)
        Additional Information: Login failed for user 'MYCOMPUTER\MyName'. (Microsoft SQL Server, Error: 18456)
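    This is expected under UAC: a non-elevated session runs with the Administrators group stripped from its token, so if the only provisioned SQL login is BUILTIN\Administrators, only an elevated session maps to it. A minimal sketch of the usual fix, run once from an elevated command prompt (the account name is taken from the error message above; substitute your own):

        REM create an explicit login for the day-to-day Windows account and grant it sysadmin
        sqlcmd -S "(local)" -E -Q "CREATE LOGIN [MYCOMPUTER\MyName] FROM WINDOWS; EXEC sp_addsrvrolemember 'MYCOMPUTER\MyName', 'sysadmin';"

    After that, Management Studio should connect without elevation.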

    Read the article

  • Installing vim7 on Solaris Sparc 2.6 as non-root

    - by Tobbe
    I'm trying to install vim to $HOME/bin by compiling the sources. ./configure --prefix=$home/bin seems to work, but when running make I get:

        > make
        Starting make in the src directory.
        If there are problems, cd to the src directory and run make there
        cd src && make first
        gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -I/usr/openwin/include -o objects/buffer.o buffer.c
        In file included from buffer.c:28:
        vim.h:41: error: syntax error before ':' token
        In file included from os_unix.h:29, from vim.h:245, from buffer.c:28:
        /usr/include/sys/stat.h:251: error: syntax error before "blksize_t"
        /usr/include/sys/stat.h:255: error: syntax error before '}' token
        /usr/include/sys/stat.h:309: error: syntax error before "blksize_t"
        /usr/include/sys/stat.h:310: error: conflicting types for 'st_blocks'
        /usr/include/sys/stat.h:252: error: previous declaration of 'st_blocks' was here
        /usr/include/sys/stat.h:313: error: syntax error before '}' token
        In file included from /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:132, from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28:
        /usr/include/sys/siginfo.h:259: error: syntax error before "ctid_t"
        /usr/include/sys/siginfo.h:292: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:294: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:390: error: syntax error before "ctid_t"
        /usr/include/sys/siginfo.h:398: error: conflicting types for '__fault'
        /usr/include/sys/siginfo.h:267: error: previous declaration of '__fault' was here
        /usr/include/sys/siginfo.h:404: error: conflicting types for '__file'
        /usr/include/sys/siginfo.h:273: error: previous declaration of '__file' was here
        /usr/include/sys/siginfo.h:420: error: conflicting types for '__prof'
        /usr/include/sys/siginfo.h:287: error: previous declaration of '__prof' was here
        /usr/include/sys/siginfo.h:424: error: conflicting types for '__rctl'
        /usr/include/sys/siginfo.h:291: error: previous declaration of '__rctl' was here
        /usr/include/sys/siginfo.h:426: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:428: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:432: error: syntax error before "k_siginfo_t"
        /usr/include/sys/siginfo.h:437: error: syntax error before '}' token
        In file included from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28:
        /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:173: error: syntax error before "siginfo_t"
        In file included from os_unix.h:163, from vim.h:245, from buffer.c:28:
        /usr/include/signal.h:111: error: syntax error before "siginfo_t"
        /usr/include/signal.h:113: error: syntax error before "siginfo_t"
        buffer.c: In function `buflist_new':
        buffer.c:1502: error: storage size of 'st' isn't known
        buffer.c: In function `buflist_findname':
        buffer.c:1989: error: storage size of 'st' isn't known
        buffer.c: In function `setfname':
        buffer.c:2578: error: storage size of 'st' isn't known
        buffer.c: In function `otherfile_buf':
        buffer.c:2836: error: storage size of 'st' isn't known
        buffer.c: In function `buf_setino':
        buffer.c:2874: error: storage size of 'st' isn't known
        buffer.c: In function `buf_same_ino':
        buffer.c:2894: error: dereferencing pointer to incomplete type
        buffer.c:2895: error: dereferencing pointer to incomplete type
        *** Error code 1
        make: Fatal error: Command failed for target `objects/buffer.o'
        Current working directory /home/xluntor/vim72/src
        *** Error code 1
        make: Fatal error: Command failed for target `first'

    How do I fix the make errors? Or is there another way to install vim as non-root? Thanks in advance.
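    A hedged starting point rather than a definitive fix: the prefix typo aside ($home should be $HOME, and --prefix should be the tree root, not the bin directory), syntax errors deep inside sys/stat.h and sys/siginfo.h usually mean this gcc's "fixed" private headers were generated against a different Solaris 2.6 patch level than the running system, and the GTK GUI build drags in the most fragile ones. A minimal retry under those assumptions:

        # build a minimal, GUI-less vim first to take the header tangle out of play
        cd ~/vim72
        make distclean
        ./configure --prefix=$HOME --disable-gui --without-x --with-features=normal
        make && make install    # lands in $HOME/bin, no root needed

    If the stat.h/siginfo.h errors persist even for the minimal build, rerunning gcc's fixincludes (or using a gcc built for this exact patch level) is likely the underlying cure.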

    Read the article

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. Not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak. The first question is: what's the equivalent of RFC1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC4193), and that states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is supposed to be globally routable. Does this mean I'll need extra protections so my router would not automatically start advertising these private IPv6 addresses to the world? Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks. Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there is almost no explanation at the smaller home network level. Can I see a scope ID and CIDR notation used together? I.e., fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
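    On the first question, one clarification worth having before any example: RFC 4193 splits fc00::/7 in half and only fd00::/8 is defined for locally assigned prefixes, and ULAs are explicitly not routable on the global Internet, so the worry is about leaking routes from a misconfigured router, not about the space itself being public. A sketch of generating a compliant prefix (the random bits are the whole point; picking a memorable value like fc00::4:0 defeats the collision protection):

        # RFC 4193 locally assigned ULA: fd00::/8 plus 40 random bits -> one /48
        R=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')   # 5 random bytes = 40 bits
        echo "fd${R:0:2}:${R:2:4}:${R:6:4}::/48"           # e.g. fd4b:20af:93e1::/48

    Your 192.168.4.0/24 then maps naturally to one /64 subnet inside that /48, e.g. fd4b:20af:93e1:4::/64.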

    Read the article

  • What files should be excluded from a complete Windows backup?

    - by tro
    I'm starting to use CrashPlan to backup my Win 7 PC. I've got it writing to my external HD (for quick local restores) and to CrashPlan Central (for offsite storage). I'd like to backup my entire C:\ drive (the only partition) in a way that: preserves all of my installed software and configuration, but avoids backing up log files and other ephemeral / temporary files that are regenerated during normal operation of the OS. Which files and/or directories should I be excluding from backups? I'd like to make this a community wiki, so that we could all contribute towards a definitive list. Here's a list of regular expressions identifying the directories and files that CrashPlan excludes on Windows by default, listed at http://support.crashplan.com/doku.php/articles/admin_excludes:

        .*/(?:42|\d{8,})/(?:cp|~).*
        (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.*
        .*\.part
        .*/iPhoto Library/iPod Photo Cache/.*
        .*\.cprestoretmp.*
        *\.rbf
        :/Config\\.Msi.*
        .*/Google/Chrome/.*cache.*
        .*/Mozilla/Firefox/.*cache.*
        .*\$RECYCLE\.BIN/.*
        .*/System Volume Information/.*
        .*/RECYCLER/.*
        .*/I386.*
        .*/pagefile.sys
        .*/MSOCache.*
        .*UsrClass\.dat\.LOG
        .*UsrClass\.dat
        .*/Temporary Internet Files/.*
        (?i).*/ntuser.dat.*
        .*/Local Settings/Temp.*
        .*/AppData/Local/Temp.*
        .*/AppData/Temp.*
        .*/Windows/Temp.*
        (?i).*/Microsoft.*/Windows/.*\.log
        .*/Microsoft.*/Windows/Cookies.*
        .*/Microsoft.*/RecoveryStore.*
        (?i).:/Config\\.Msi.*
        (?i).*\\.rbf
        .*/Windows/Installer.*

    Other excludes:

        .*\.(class|obj)
        .*/hiberfil.sys
        (?i).*\.tmp
        (?i).*/temp/
        (?i).*/tmp/
        .*Thumbs\.db
        .*/Local Settings/History/
        .*/NetHood/
        .*/PrintHood/
        .*/Cookies/
        .*/Recent/
        .*/SendTo/

    Read the article

  • how to prevent other computers from seeing our network computers through vpn

    - by Disco
    We have a local office domain consisting of Windows 7 and XP machines that is running on Windows Server 2008 R2. We also have users that connect via VPN into our network. My concern is that when a remote user opens up a folder, the Network section on the left side of the folder shows the remote user all the computer names in our local network. I would like to go about renaming our computers in the local network with more descriptive computer names, but I do not want the users off-site to be able to see these computer names by simply opening up a folder. (Granted, they can already do this, but our current naming scheme does not link computer names to users.) I would like to change our computer names so we can determine which computer belongs to which user more easily IF it can be done securely. How can I ensure that our local computer names are not showing up in the Network folder for remote, VPN-connected users? My online searches have turned up results where people are advised to turn off Network Sharing and Discovery, but that seems to only ensure that the local machine doesn't see other computer names. I want to prevent OUR computer names from showing up on OTHER computers, and I can't go into the VPN-connected computers and turn off THEIR Network Discovery settings. I would think there is a group policy that would control this but I have not found one yet and I don't know how I would apply it to VPN-connected computers. Thanks! EDIT: That's true, a Group Policy wouldn't run on users only connecting via VPN, good point. What about a VPN/router policy, then?
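    One practical angle, sketched under the assumption that the VPN clients land in their own subnet (10.0.8.0/24 below is a placeholder): browsing names travel over NetBIOS and WS-Discovery, so blocking those ports from the VPN range on each LAN machine hides the names without touching the remote clients at all. On the Windows 7 / Server 2008 R2 machines (XP's older netsh firewall syntax differs):

        netsh advfirewall firewall add rule name="Hide browsing from VPN" dir=in action=block protocol=UDP localport=137,138,3702 remoteip=10.0.8.0/24

    Pushing the same rule through a domain GPO's Windows Firewall settings covers the local machines centrally, and a matching block rule on the VPN router's interface is the "router policy" variant you asked about.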

    Read the article

  • mysql.sock problem on Mac OS X, all Zend products

    - by Michael Stelly
    Hi folks. I posted this on the Zend forum, but I'm hoping I can get a speedier reply here. I've tried every solution provided on this forum with no luck. When I restart MySQL, everything appears OK:

        sudo /usr/local/zend/bin/zendctl.sh restart
        Password:
        /usr/local/zend/bin/apachectl stop [OK]
        /usr/local/zend/bin/apachectl start [OK]
        Stopping Zend Server GUI [Lighttpd] [OK]
        spawn-fcgi: child spawned successfully: PID: 7943
        Starting Zend Server GUI [Lighttpd] [OK]
        Stopping Java bridge [OK]
        Starting Java bridge [OK]
        Shutting down MySQL . SUCCESS!
        Starting MySQL . SUCCESS!

    Pinging localhost is also OK and resolves DNS to IP:

        ping localhost
        PING localhost (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.048 ms
        64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms
        64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.066 ms
        64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.076 ms
        64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.064 ms

    But when I attempt to access the local url for my app, I get the dreaded:

        Message: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2).

    This is a show-stopper for me. I appreciate any assistance. Thank you.
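    A hedged place to look: Zend Server ships its own MySQL, which typically creates its socket inside the Zend tree rather than at /tmp/mysql.sock where PHP's default points. The paths below are assumptions to verify, not gospel:

        # find where the running MySQL actually put its socket
        ps aux | grep mysqld                 # look for a --socket=... argument
        # then either symlink it to where PHP looks...
        sudo ln -s /usr/local/zend/mysql/tmp/mysql.sock /tmp/mysql.sock
        # ...or set pdo_mysql.default_socket / mysqli.default_socket in php.ini

    Using host=127.0.0.1 instead of host=localhost in the DSN also sidesteps the socket entirely, since localhost makes the MySQL client library use the Unix socket rather than TCP.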

    Read the article

  • netstat on fresh install of Solaris 10 update 9

    - by cjavapro
    I am attempting to decipher the below output:

        bash-3.00$ netstat -a
        UDP: IPv4
           Local Address         Remote Address        State
           --------------------  --------------------  ----------
           *.sunrpc                                    Idle
           *.*                                         Unbound
           *.32771                                     Idle

        TCP: IPv4
           Local Address         Remote Address        Swind  Send-Q  Rwind  Recv-Q  State
           --------------------  --------------------  -----  ------  -----  ------  -----------
           *.*                   *.*                   0      0       49152  0       IDLE
           *.sunrpc              *.*                   0      0       49152  0       LISTEN
           *.*                   *.*                   0      0       49152  0       IDLE
           localhost.5987        *.*                   0      0       49152  0       LISTEN
           localhost.898         *.*                   0      0       49152  0       LISTEN
           localhost.32771       *.*                   0      0       49152  0       LISTEN
           localhost.5988        *.*                   0      0       49152  0       LISTEN
           localhost.32772       *.*                   0      0       49152  0       LISTEN
           *.ssh                 *.*                   0      0       49152  0       LISTEN
           *.32785               *.*                   0      0       49152  0       BOUND
           localhost.6788        *.*                   0      0       49152  0       LISTEN
           localhost.6789        *.*                   0      0       49152  0       LISTEN
           localhost.32782       *.*                   0      0       49152  0       LISTEN
           localhost.smtp        *.*                   0      0       49152  0       LISTEN
           localhost.submission  *.*                   0      0       49152  0       LISTEN
           server-host-name.ssh  pc-host-name.51269    64868  51      49640  0       ESTABLISHED

        TCP: IPv6
           Local Address         Remote Address        Swind  Send-Q  Rwind  Recv-Q  State  If
           --------------------  --------------------  -----  ------  -----  ------  -----  -----
           *.*                   *.*                   0      0       49152  0       IDLE
           *.ssh                 *.*                   0      0       49152  0       LISTEN

        SCTP:
           Local Address         Remote Address        Swind  Send-Q  Rwind  Recv-Q  StrsI/O  State
           --------------------  --------------------  -----  ------  -----  ------  -------  -----------
           0.0.0.0               0.0.0.0               0      0       102400 0       32/32    CLOSED

        Active UNIX domain sockets
        Address          Type        Vnode            Conn      Local Addr            Remote Addr
        ffffffff84e25ab8 stream-ord  ffffffff8569c740 00000000  /var/run/.inetd.uds
        bash-3.00$

    It looks to me like we have the following items.

    UDP IPv4: open ports sunrpc, 32771.
        Question 1: What is *.* Unbound?

    TCP IPv4: open ports sunrpc, ssh; 10 ports open only for localhost; the open ssh connection from my PC.
        Question 2: What is *.32785 *.* 0 0 49152 0 BOUND?
        Question 3: What is *.* *.* 0 0 49152 0 IDLE? (shows up twice)

    IPv6: open port ssh.
        Question 4: What is *.* *.* 0 0 49152 0 IDLE?
        Question 5: What is SCTP?
        Question 6: What are "Active UNIX domain sockets"?
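    For mapping the mystery ports to processes: Solaris 10's netstat has no -p flag, but pfiles can be walked over /proc. A small sketch (run as root; 32771 is one of the ports from the output above):

        PORT=32771
        for p in /proc/[0-9]*; do
          pfiles ${p##*/} 2>/dev/null | grep -q "port: $PORT" && echo "port $PORT owned by PID ${p##*/}"
        done

    The high 327xx ports bound only on localhost are typically anonymous RPC services started via inetd, which is also the usual explanation for the *.* Unbound/IDLE placeholder sockets.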

    Read the article

  • Why are network printers not available in the Add Printer Wizard...when run over a network?

    - by Kev
    From a Windows 2003 server machine I browsed the network to an XP client (\\computername in Explorer), then double-clicked Printers and Faxes and then Add Printer. In the wizard, the second screen normally asks if you want to install a local printer or a network printer. Well, in this case, it seems to assume I want a local printer, because the second screen is what would normally be the third screen if you chose local printer and clicked Next. I want to install a network printer on a remote machine for its local users. Is this not possible? If not, why not?

    Read the article

  • Warning messages while build Apache server

    - by GoinOff
    I am building Apache server 2.4.6 from source and am not sure about a few warning messages I received during the rpm build process. The build completes OK and everything seems fine. BTW, this is on CentOS 5.5. During the make process:

        /home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr/libtool --silent --mode=install install mod_authn_file.la /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/modules/
        libtool: install: warning: remember to run `libtool --finish /usr/local/apache2/modules'

    What is this warning message about? remember to run libtool --finish? Also, I see this:

        libtool: install: warning: `/home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr-util/libaprutil-1.la' has not been installed in `/usr/local/apache2/lib'

    I am building Apache in a temp directory, but libtool seems to be looking in the wrong place (/usr/local/apache2/lib instead of /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/lib). This seems like something I can blow off? In my specfile I set DESTDIR to /home/johnm/dev/project1/install/linux/tmp where the install files are placed:

        %install
        export DESTDIR=%{buildroot}
        make install

    Both messages appear numerous times during the make process. When I install the rpm on the system, everything appears to work without problems. I'm thinking I can ignore these messages, or am I missing something important?
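    Both warnings are normal for a DESTDIR-style packaged build: libtool compares against the final install paths, which don't exist yet inside the buildroot. A hedged sketch of the one step worth carrying over (the path assumes the installed httpd tree ships apr's libtool under build/, as stock installs do):

        # optionally run once on the target after the rpm lands, e.g. from %post
        /usr/local/apache2/build/libtool --finish /usr/local/apache2/modules

    The --finish step only fixes up library paths for the final location; since the installed rpm already works, skipping it has evidently cost you nothing here.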

    Read the article

  • Redhat | error in mod_wsgi installation

    - by MMRUSer
    I'm getting the following error when I try to install mod_wsgi:

        ./configure
        checking for apxs2... no
        checking for apxs... /usr/sbin/apxs
        checking Apache version... 2.2.3
        configure: creating ./config.status
        config.status: creating Makefile
        make
        /usr/sbin/apxs -c -I/usr/local/include/python2.6 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.6/config -lpython2.6 -lpthread -ldl -lutil -lm
        /apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE64_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.6 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo
        sh: /apr-1/build/libtool: No such file or directory
        apxs:Error: Command failed with rc=8323072 .
        make: *** [mod_wsgi.la] Error 1

    Versions: mod_wsgi 3.2, Apache 2.2, Python 2.6, apr-1.2.7-11. Is this error because of a missing package, or how else do I solve this issue?
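    A hedged reading of the failure: apxs asks apr's apr-1-config for its build directory, and a missing or incomplete apr-devel install leaves that path empty, which is why the shell ends up looking for the impossible "/apr-1/build/libtool". The usual cure on RHEL/CentOS, sketched:

        # install the development halves that provide apxs's build machinery
        yum install httpd-devel apr-devel apr-util-devel
        # verify: this should now print a real, existing directory
        apr-1-config --installbuilddir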

    Read the article

  • Why does python easy install give me "permission denied" errors?

    - by Golden Sinha
    When I try to install programs on Ubuntu 12.04 I get the errors below.

    Program 1:

        home@home-Compaq-610:~/Desktop$ python setup.py install
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-i686-2.7
        copying Calculator.py -> build/lib.linux-i686-2.7
        running install_lib
        copying build/lib.linux-i686-2.7/Calculator.py -> /usr/local/lib/python2.7/dist-packages
        error: /usr/local/lib/python2.7/dist-packages/Calculator.py: Permission denied

    Program 2:

        home@home-Compaq-610:~/Desktop$ sudo chmod +x Moto.bin
        [sudo] password for home:
        home@home-Compaq-610:~/Desktop$

    It returns like this, but it does not install the program.

    Program 3:

        home@home-Compaq-610:~/Desktop$ python setup.py install
        [ERROR] wxPython2.8 is required.

    How do I install wxPython2.8? Please tell. If I try to install this program using easy_install it shows this:

        home@home-Compaq-610:~/Desktop$ easy_install editra
        error: can't create or remove files in install directory
        The following error occurred while trying to add or remove files in the installation directory:
        [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/test-easy-install-6778.pth'
        The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was:
        /usr/local/lib/python2.7/dist-packages/
        Perhaps your account does not have write access to this directory? If the installation directory is a system-owned directory, you may need to sign in as the administrator or "root" account. If you do not have administrative access to this machine, you may wish to choose a different installation directory, preferably one that is listed in your PYTHONPATH environment variable.
        For information on other options, you may wish to consult the documentation at: http://packages.python.org/distribute/easy_install.html
        Please make the appropriate changes for your system and try again.

    Please help me. Please tell me how to install programs.
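    Programs 1 and the easy_install failure are plain permission errors: writing into /usr/local needs root, or you can pick a user-writable target instead. A short sketch (the package names are the stock Ubuntu 12.04 ones):

        sudo python setup.py install            # system-wide install, needs root
        python setup.py install --user          # or: no root, installs under ~/.local
        sudo easy_install editra                # easy_install hits the same wall without sudo
        sudo apt-get install python-wxgtk2.8    # supplies the wxPython 2.8 that program 3 wants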

    Read the article

  • Has this server been compromised?

    - by Griffo
    A friend is running a VPS (CentOS). His business partner was the sysadmin but has left him high and dry to look after the system. So, I've been asked to help out in fixing an apparent spam problem. His IP address got blacklisted for unsolicited mail. I'm not sure where to look for a problem, but I started with netstat to see what open connections were running. It looks to me like he has remote hosts connected to his SMTP server. Here's the output:

        Active Internet connections (w/o servers)
        Proto Recv-Q Send-Q Local Address            Foreign Address              State
        tcp        0      0 78.153.208.195:imap      86-40-60-183-dynamic.:10029  ESTABLISHED
        tcp        0      0 78.153.208.195:imap      86-40-60-183-dynamic.:10010  ESTABLISHED
        tcp        0      1 78.153.208.195:35563     news.avanport.pt:smtp        SYN_SENT
        tcp        0      0 78.153.208.195:35559     vip-us-br-mx.terra.com:smtp  TIME_WAIT
        tcp        0      0 78.153.208.195:35560     vip-us-br-mx.terra.com:smtp  TIME_WAIT
        tcp        1      1 78.153.208.195:imaps     86-40-60-183-dynamic.:11647  CLOSING
        tcp        1      1 78.153.208.195:imaps     86-40-60-183-dynamic.:11645  CLOSING
        tcp        0      0 78.153.208.195:35562     mx.a.locaweb.com.br:smtp     TIME_WAIT
        tcp        0      0 78.153.208.195:35561     mx.a.locaweb.com.br:smtp     TIME_WAIT
        tcp        0      0 78.153.208.195:imap      86-41-8-64-dynamic.b-:49446  ESTABLISHED

    Does this indicate that his server may be acting as an open relay? Mail should only be outgoing from localhost. Apologies for my lack of knowledge, but I don't work on Linux in my day job.

    EDIT: Here's some output from /var/log/maillog which looks like it may be the result of spam. If it appears to be the case to others, where should I look next to investigate a root cause? I put the server IP through www.checkor.com and it came back clean.

        Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.721674 status: local 0/10 remote 9/20
        Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886182 delivery 74116: deferral: 200.147.36.15_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_200.147.36.15./
        Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886255 status: local 0/10 remote 8/20
        Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898266 delivery 74115: deferral: 187.31.0.11_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_187.31.0.11./
        Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898327 status: local 0/10 remote 7/20
        Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137833 delivery 74111: deferral: Sorry,_I_wasn't_able_to_establish_an_SMTP_connection._(#4.4.1)/
        Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137914 status: local 0/10 remote 6/20
        Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903536 delivery 74000: failure: 209.85.143.27_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_[78.153.208.195_______1]_Our_system_has_detected_an_unusual_rate_of/550-5.7.1_unsolicited_mail_originating_from_your_IP_address._To_protect_our/550-5.7.1_users_from_spam,_mail_sent_from_your_IP_address_has_been_blocked./550-5.7.1_Please_visit_http://www.google.com/mail/help/bulk_mail.html_to_review/550_5.7.1_our_Bulk_Email_Senders_Guidelines._e25si1385223wes.137/
        Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903606 status: local 0/10 remote 5/20
        Jun 29 00:02:19 vps-1001108-595 qmail-queue-handlers[15501]: Handlers Filter before-queue for qmail started ...

    EDIT #2: Here's the output of netstat -p with the imap and imaps lines removed. I also removed my own ssh session.

        Active Internet connections (w/o servers)
        Proto Recv-Q Send-Q Local Address            Foreign Address              State        PID/Program name
        tcp        0      1 78.153.208.195:40076     any-in-2015.1e100.net:smtp   SYN_SENT     24096/qmail-remote.
        tcp        0      1 78.153.208.195:40077     any-in-2015.1e100.net:smtp   SYN_SENT     24097/qmail-remote.
        udp        0      0 78.153.208.195:48515     125.64.11.158:4225           ESTABLISHED  20435/httpd
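    A quick, hedged way to test the open-relay theory from an outside machine (the IP is the one from the post; the probe recipient should be some external domain the server has no business accepting mail for):

        telnet 78.153.208.195 25
        # then type, by hand:
        #   HELO probe.example.com
        #   MAIL FROM:<probe@example.com>
        #   RCPT TO:<someone@unrelated-domain.example>
        # a relay-closed server must reject the RCPT with "relay access denied" or similar

    Worth noting separately: the qmail deferrals above are symptoms of outbound spam, not inbound relaying, and the last netstat line, httpd holding a UDP session to a remote host, often points at an injected script in the web stack, so auditing the web application directories may matter more than the mail configuration.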

    Read the article

  • Apache refusing to change DocumentRoot

    - by mingos
    I've installed Zend Server CE 5.1.0 on Windows 7 Ultimate 64-bit in its default location, meaning the path to my htdocs is C:\Program Files (x86)\Zend\Apache2\htdocs. Not something that I would like to type each time I check out a project from SVN in Eclipse or something. I'd like to set the DocumentRoot to a different folder, namely D:\www.

    What I've done: I edited conf/httpd.conf, with the significant lines being:

        DocumentRoot "D:\www"
        <Directory "D:\www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        Include conf/extra/httpd-vhosts.conf

    I edited conf/extra/httpd-vhosts.conf to add a virtual host:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot D:\www
            ServerName localhost
            ServerAlias localhost
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot D:\www\UmbraCMS
            ServerName umbracms.local
            ServerAlias umbracms.local
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN umbracms.local
        </VirtualHost>

    I edited C:\Windows\System32\drivers\etc\hosts to add this line:

        127.0.0.1 umbracms.local

    And I also added a PHP project to D:\www\UmbraCMS. And restarted Apache. Actually, I restarted the computer, too, just in case.

    What's supposed to happen: after typing http://umbracms.local/ in the browser's address bar, I want to see my PHP project launch, obviously.

    What's actually happening: no matter whether I type http://umbracms.local/ or http://localhost/, I'm taken to the test Zend page, located in C:\Program Files (x86)\Zend\Apache2\htdocs\index.html, as if neither DocumentRoot was changed nor name-based virtual hosting worked. Interestingly, when I put another project in C:\Program Files (x86)\Zend\Apache2\htdocs\bugraid\ and then, in the browser, typed http://localhost/bugraid, the project actually opened, or at least tried to, as it completely ignored the project's .htaccess file.

    Extra considerations: Zend Server's Apache version is 2.2.16, PHP version is 5.3.0. I've installed MySQL CE 5.5.13 separately, and it works, both from the command line and via MySQL Workbench. I have XAMPP installed, but none of its components are started up. It's got its own install of Apache 2.2.17 and MySQL 5.5.1. PHP version is 5.3.5 (I think).

    Question: Have you had a similar situation before? What else might need taking care of in order to have Zend Server's Apache use D:\www as document root for my PHP projects?
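    Symptoms like these usually mean the edits landed in a config file the running Apache never reads. A hedged first check, using Apache's own introspection flags (paths are the Zend defaults from the post):

        "C:\Program Files (x86)\Zend\Apache2\bin\httpd.exe" -V | findstr /I "HTTPD_ROOT SERVER_CONFIG_FILE"
        "C:\Program Files (x86)\Zend\Apache2\bin\httpd.exe" -S

    -V shows which httpd.conf this binary actually loads, and -S dumps the virtual hosts it parsed; if umbracms.local is absent from the -S output, the vhosts file isn't being included by the config that's really in effect.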

    Read the article

  • Using Solaris zfs + iscsi targets with Oracle VM

    - by wim.coekaerts
    I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available, but I did have a Solaris box, that I use for Oracle VDI, available. I set up a few iscsi targets on this Solaris server and exported them to my 2 Oracle VM servers. Here's how I did this:

    (1) On the Solaris side:

        # zpool list
        NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
        rpool   544G   129G   415G   23%  ONLINE  -

    I just have a simple zpool, called rpool, on this box. It has plenty of space available for my needs. So I will use rpool and I will create 5 50gb vols:

        zfs create -V 50G rpool/ovm1
        zfs create -V 50G rpool/ovm2
        zfs create -V 50G rpool/ovm3
        zfs create -V 50G rpool/ovm4
        zfs create -V 50G rpool/ovm5

    I want to use these volumes for iscsi so I have to enable them as shared iscsi devices:

        zfs set shareiscsi=on rpool/ovm1
        zfs set shareiscsi=on rpool/ovm2
        zfs set shareiscsi=on rpool/ovm3
        zfs set shareiscsi=on rpool/ovm4
        zfs set shareiscsi=on rpool/ovm5

    The command iscsitadm list target should list these devices, so make sure they show up:

        # iscsitadm list target
        Target: rpool/ovm1
            iSCSI Name: iqn.1986-03.com.sun:02:896c766c-0943-4da5-d47e-9575b5a0be36
            Connections: 2
        Target: rpool/ovm2
            iSCSI Name: iqn.1986-03.com.sun:02:a3116b46-73e0-e8c2-e80c-9a4f71aff069
            Connections: 2
        Target: rpool/ovm3
            iSCSI Name: iqn.1986-03.com.sun:02:a838c400-2730-c0d6-f2c2-ee186a0261c1
            Connections: 2
        Target: rpool/ovm4
            iSCSI Name: iqn.1986-03.com.sun:02:2e046afb-d66d-4f3f-c5de-8115e0ddd931
            Connections: 2
        Target: rpool/ovm5
            iSCSI Name: iqn.1986-03.com.sun:02:66109fbe-81ac-ef05-f85e-ab8c1f34cb43
            Connections: 2

    At this point I want to make sure that I have some access control on these devices. To make it easier, I will create an alias for my 2 servers and use the alias for the ACL. Get the iqn from the 2 ovm servers (wcoekaer-srv1, wcoekaer-srv2): on each server, get the content of /etc/iscsi/initiatorname.iscsi:

        InitiatorName=iqn.1986-03.com.sun:01:2a7526f0ffff

    On the Solaris side, create the aliases:

        iscsitadm create initiator -n iqn.1986-03.com.sun:01:2a7526f0ffff wcoekaer-srv1
        iscsitadm create initiator -n iqn.1986-03.com.sun:01:e31b08110f1 wcoekaer-srv5

    Add the ACL to the targets:

        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm1
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm2
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm3
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm4
        iscsitadm modify target -l wcoekaer-srv1 rpool/ovm5
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm1
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm2
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm3
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm4
        iscsitadm modify target -l wcoekaer-srv5 rpool/ovm5

    (2) On the Oracle VM side, on each server just do 2 simple things (where ca-vdi1 is my Solaris server name):

        # iscsiadm -m discovery -t sendtargets -p ca-vdi1
        # service iscsi restart

    When I do cat /proc/partitions on my servers, I will see the devices show up:

        # cat /proc/partitions
        major minor  #blocks    name
        8     0      160836480  sda
        8     1      104391     sda1
        8     2      3148740    sda2
        8     3      1052257    sda3
        253   0      6377804    dm-0
        253   1      6377804    dm-1
        253   2      6377804    dm-2
        8     16     52428800   sdb
        8     32     52428800   sdc
        8     48     52428800   sdd
        8     80     52428800   sdf
        8     64     52428800   sde

    These 5 new devices sd[b..f] are shared storage for Oracle VM and can be used to pass through to the VMs as phy: devices, or you can put ocfs2 on them and use them as shared filesystem storage for dom0 repositories. I am setting up an 11gR2 RAC template (the cool stuff Saar did), so I am using my devices to create a 2-node RAC cluster with phy: devices.

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:

        Compute "Roles" - computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role
        Storage - Blobs, Tables and Queues
        Other Services - things like the Service Bus, Azure Connection Services, SQL Azure and Caching

    It's important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you're in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren't really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless.

    The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It's important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it's maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.

    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there's only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first "cashier" has someone to replace them when they disappear. It's not just a good idea - to gain the Service Level Agreement (SLA) for our uptime in Azure, it's a requirement. We point this out right in the Management Portal when you deploy the application.

    When you deploy a Role Instance you can also set the "Upgrade Domain". Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post). The upgrade scenario involves four Roles total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two Instances, which covers you for both High Availability and upgrade paths.

    The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.

    Read the article

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea ... So here's my idea: Set up a local repository for all 'approved' updates for critical packages. I would then push updated packages from upstream to our local repo after I tested them, and all servers could automatically (apt-cron?) upgrade from this repository. So my question is this: How do I configure apt on the clients so that they use the local repository only for all packages which exist on the local repository, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyways, thanks for your insight! Andreas.
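    A sketch of the client side, assuming you stamp the staging repo's Release file with an identifiable origin (o=LocalStaging here is a made-up label) so apt pinning can prefer it for exactly the packages it carries:

        # /etc/apt/preferences.d/local-staging -- the local repo wins whenever it has the package
        cat > /etc/apt/preferences.d/local-staging <<'EOF'
        Package: *
        Pin: release o=LocalStaging
        Pin-Priority: 1001
        EOF

        # hourly automatic upgrade pull
        echo '17 * * * * root apt-get update -qq && apt-get -y upgrade -qq' > /etc/cron.d/hourly-upgrade

    Priorities above 1000 let the pinned repo win even over newer upstream versions, which is exactly the "approved versions only" behavior you describe; packages absent from the local repo fall back to the normal upstream sources at the default priority of 500. (Ubuntu 10.04 also ships unattended-upgrades, which handles the cron half with more safety rails.)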

    Read the article

  • Windows 2003 and 2008 AD integrated DNS zones

    - by floyd
    We have a Windows 2003 server, DC1, which is our primary DC holding all FSMO roles. It is also a DNS server for our domain domain.local, which is an Active Directory-integrated zone. We also have a Windows 2008 DC named DC2. All servers have the correct DNS entries, etc. However, all DNS servers log EventID 4515, indicating there are duplicate zones in separate directory partitions and only one will be used until the other is removed. And indeed there are: a zone for domain.local under the default naming partition (CN=System, CN=MicrosoftDNS, DC=domain.local), as well as one in the DomainDNSZones partition (DC=DomainDNSZones, DC=DOMAIN, DC=local, CN=MicrosoftDNS). It seems that the copy in the default naming partition is the one currently in use. Which one should be in use? How do I make the EventID 4515s go away? EventID 4515: http://support.microsoft.com/kb/867464 Thanks
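    A hedged sketch for confirming which copy is live before deleting anything (dnscmd ships with the Windows support/admin tools; the server and zone names come from the post):

        dnscmd DC1 /EnumDirectoryPartitions
        dnscmd DC1 /ZoneInfo domain.local

    /ZoneInfo reports where the zone is stored (the legacy CN=System location versus the DomainDnsZones application partition). The usual end state is the DomainDnsZones copy, since the legacy location replicates to every DC in the domain rather than only to DNS servers, but verify which copy holds the current records first - the KB linked in the post walks through exactly that cleanup.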

    Read the article

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry.

    Company Overview
    Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars.

    Business Challenges
    To maximize the company's growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to:

        Introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs
        Replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees
        Improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leaves the company
        Eliminate delays when downloading files from the central server to a PC
        Build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company's headquarters
        Establish a scalable system that can be extended to Hyundai offices around the world

    Solution Deployed
    After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors.

    Business Results

        Lowered the overall time spent each day on all document-related work by approximately 85% - from 4.5 hours to around 42 minutes on an average day
        Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment
        Reduced staff's time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository
        Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system
        Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff
        Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide

    "We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry." - Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company

    Additional Information: Hyundai Motor Company Customer Snapshot | Oracle WebCenter Content

    Read the article

  • OPN Specialized Latest News (15th November)

    - by swalker
    HELPING YOU TO SPECIALIZE

    WebCenter Implementation Specialist Exam Preparation Webcasts: WebCenter Content and WebCenter Portal
    Oracle Partner Network would like to invite you to refresh courses for WebCenter Content and WebCenter Portal, to help partners prepare for the WebCenter Implementation Specialist exams. This is a 3-hour intensive refresher, partner-only training session, providing attendees with an overview of WebCenter Content and WebCenter Portal functions and related topics. After the refresher part you will be able to take the relevant Implementation Specialist exam, depending on your personal focus. NOTE: This is only suitable for experienced WebCenter Content or WebCenter Portal practitioners. Who should attend? Partner consultants who want to become Oracle WebCenter Content or WebCenter Portal Certified Implementation Specialists (or both), which will help them differentiate themselves in front of customers and support their companies in becoming Specialized. Webcast details: click here to read more...

    Specialized Partners Only! New Service to Promote Your Events
    The Partner Event Publisher has just been made available to all specialized partners in EMEA. Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. Click here to read more information and watch a short video demo.

    VADs Get Specialized
    Effective November 1, 2011, VADs with a valid Value Added Distributor Agreement will no longer be required to meet the customer reference requirements outlined in the business criteria section in order to become specialized. VADs must continue to meet all other business and competency criteria set forth in the applicable Knowledge Zone prior to specialization approval.

    New Certification: Pillar Axiom 600 Storage System
    Your opportunity to take the Pillar Axiom 600 Storage System Essentials (1Z0-581) exam is available now in beta. Pass the exam to become a Pillar Axiom 600 Storage Systems Implementation Specialist! Free vouchers are available for Oracle partners. If you would like to receive a free beta exam voucher, please send your request to [email protected] and include your name, business email address, company, and the exam name (Pillar Axiom 600 Storage System Essentials beta exam).

    New Certification Available: Oracle Utilities Customer Care and Billing
    Oracle Utilities Customer Care and Billing 2 Essentials (1Z0-562) covers a solution designed to help you meet market windows and regulatory deadlines while enjoying a low total cost of ownership and a high return on investment. Take the exam now to become an Oracle Utilities Customer Care and Billing 2 Essentials Implementation Specialist.

    MEASURING YOUR SUCCESS

    We had 1674 Specialized Partners covering 5364 Specializations. Please note that due to OPN contract renewals, at any given point in time there are valid Specialized Partners and Specializations which are temporarily not captured in the total statistics. An incremental 1961 individuals were accredited as Implementation Specialists, giving an EMEA cumulative total of 9598 Implementation Specialists. 26 ISVs obtained one or more Ready's, for a total of 53 Ready's.

    Don't forget! You can submit your own press releases to Oracle! Every time you achieve specialization we'd like to support you in getting the message out! Press guidelines and a submission link can be found on the OPN Portal here.

    Read the article

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft's Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Cluster, and then installing SQL Server in some fashion. I have two physical machines: one high-end and rather beefy "heavy lifter" to contain the majority of the VMs, and another "backup" (a repurposed desktop) to hold the essential "secondary" (or failover) AD-DC, SQL and FS VMs. The main reason why I find the failover cluster at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole: if one machine (physical or virtual) goes down, you might lose a few pings and the connections get reset, but the network applications (Microsoft RMS connection to backend SQL) can still connect to a viable DB without having to mess around with the settings at all.

    My first question is in terms of SQL Server itself. If I have a cluster between two VMs, does it make more sense to install SQL Server in a Failover Cluster configuration, or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs, but do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still utilizing (on each clustered VM) standalone DBs that have been mirrored between each other?

    As well, I have come across a lot of documentation about SQL clustering, but most of it assumes two physical machines to hold not only the actual SQL VMs but also the Quorum and Witness stores. I will not be able to muster more than two physical machines. As such, I will have to be satisfied with a VM cluster that does not exceed two VMs (one for each physical machine).

    Another issue involves MSDTC, the Distributed Transaction Coordinator. When attempting to install the SQL Failover Cluster (I never completed it for this reason), it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to do so under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but those instructions don't align with 2012 R2 (at least, not in a way that allows me to successfully cluster MSDTC). Plus, some of the instructions that I have found for SQL Server Failover Cluster installation suggest that a third "network device" - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this, and won't be getting this. Most of my storage exists on the "heavy lifter" that was designed for all of the "primary" VMs. If that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC server, an SQL server, and a file server, so it will handle the "secondary" failover versions of those VMs (clustered or not).

    My final question involves file servers. If I cluster file servers between two VMs (one on my "heavy lifter" and another on my "backup"), how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource; it doesn't exactly replicate data between the two - that is left to the services that serve up the data. I am unsure how I can ensure that file server data between two clustered file server VMs is properly mirrored. Remember, I only have two devices to be used here: my primary machine and a backup secondary. There is no chance of me obtaining a SAN or any other type of network attached storage. What exists on the machines must act as the storage. Thanks in advance for any suggestions.
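    On the MSDTC point only, a heavily hedged sketch: 2012 R2 exposes DTC clustering through the FailoverClusters PowerShell module (the role name, disk, and IP below are placeholders). Note the catch for this exact setup: a clustered DTC needs a cluster disk, so without shared storage this prerequisite, not the cmdlets, may be the real blocker.

        Import-Module FailoverClusters
        # creates a clustered DTC role with its own network name, IP, and storage
        Add-ClusterServerRole -Name ClusterDTC -Storage "Cluster Disk 2" -StaticAddress 10.0.0.50
        Add-ClusterResource -Name DTC -ResourceType "Distributed Transaction Coordinator" -Group ClusterDTC

    For the mirroring-versus-FCI question, mirroring standalone instances inside the two VMs avoids the shared-disk requirement entirely, which seems to fit a two-box, no-SAN constraint better than a SQL failover cluster instance does.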

    Read the article

  • Our GoDaddy web server is drowning in temp files!!

    - by temp file guy
    We have a virtual dedicated server with a fairly large amount of traffic. We use GoDaddy, with cPanel. We have 10 GB of space, of which about 80% is not our content but logs and server utilities. GoDaddy support is evasive and they are trying to encourage us to migrate to a new service with 15 GB. Reviewing the large files, we found the following. We have a ton of old TMP files in this directory:

        /public_html/files/TMP/FILE_PERSISTANCE_PROVIDER   (no access)

    Some large files in these directories:

        /usr/local/apache/logs/
            suphp_log  (220M)
            access_log (7M)
            error_log  (5M)
        /usr/local/apache/domlogs/   (no access)
        /usr/local/cpanel/           (no access)
        /usr/local/cpanel-rollback
        /tmp

    Questions: What can we safely delete or truncate? How can we change permissions on the "no access" files so we can delete them? Is there a utility to monitor and clean up temp files? What other files/programs can we delete? Thanks!
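    A hedged sketch of the safe first moves (run as root; the TMP path is copied from the post, and the 7-day cutoff is an arbitrary example - verify what the application actually needs before deleting):

        # truncate the big Apache logs in place, no restart needed
        : > /usr/local/apache/logs/suphp_log
        : > /usr/local/apache/logs/access_log
        : > /usr/local/apache/logs/error_log

        # clear stale temp files older than a week
        find /public_html/files/TMP/FILE_PERSISTANCE_PROVIDER -type f -mtime +7 -delete

    The "no access" directories are root-owned, which is why a cPanel account can't touch them; cPanel's own logrotate configuration (or WHM's cPanel Log Rotation settings) is the monitoring/cleanup utility you're asking about for keeping them down over time.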

    Read the article

  • how to? 1 domain name, 1 ISP Static IP, 1 router, 3 physical web Servers

    - by buliwyf
    I have 1 static IP from my ISP: 58.59.60.61. I have 3 local physical web servers:

        Win2008 IIS 7, local IP 192.168.10.11, mydomain.com
        Ubuntu Apache2, local IP 192.168.10.12, subdomain1.mydomain.com
        Win2003 IIS 6, local IP 192.168.10.13, subdomain2.mydomain.com

    I have 1 domain name, mydomain.com. It is configured this way:

        Host(A), @, 58.59.60.61
        Host(A), subdomain1, 58.59.60.61
        Host(A), subdomain2, 58.59.60.61

    My router is a pfSense box. It forwards all port 80 traffic to a group alias called "WebServers," which is my 3 web server IPs. This setup should work, right? I believe I need to set the "host header names" in my web servers. In IIS I know how to do this. How do I do this in Apache2?
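    The Apache half of the question is just ServerName in a name-based virtual host; a minimal sketch (the DocumentRoot path is illustrative):

        # /etc/apache2/sites-available/subdomain1 -- binds the site to its Host header
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName subdomain1.mydomain.com
            DocumentRoot /var/www/subdomain1
        </VirtualHost>

    One hedged caveat on the router side, though: a plain port forward to a three-member alias can't inspect Host headers, so requests won't reliably reach the right box. The common pattern is to forward :80 to a single front server and let it reverse-proxy by name - for example mod_proxy on the Ubuntu box, with ProxyPass/ProxyPreserveHost rules sending mydomain.com to 192.168.10.11 and subdomain2 to 192.168.10.13.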

    Read the article

  • Completely remove MySQL from macbook pro

    - by mike
    I'm pretty sure I completely removed MySQL from my system, except for one thing. When I type mysql on the command line I get this:

        bash: /opt/local/bin/mysql5: No such file or directory

    How is it still recognizing where it thinks mysql should be? I'm trying to build it myself in /usr/local, and when I do install it there, I still get that error message where it looks for it in /opt/local.
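    That /opt/local path looks like a MacPorts leftover, and bash also caches command locations; a quick sketch of things to check:

        hash -r                       # clear bash's remembered command lookup table
        type -a mysql                 # shows every alias/function/binary the shell would use
        grep -n mysql5 ~/.bash_profile ~/.bashrc ~/.profile 2>/dev/null   # stale alias?
        ls -l /opt/local/bin/mysql*   # any MacPorts remnants still on disk

    If type -a reports an alias (MacPorts setups often alias mysql to mysql5), removing that line from the shell init file and opening a new terminal should let your /usr/local build win.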

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, COM, or otherwise (even Win32 if it comes to that)? I need to setup a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here: HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance)... Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold: Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing). Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts? Thanks in advance for any suggestions, I've been banging my head against this one for a bit...
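    For the scheduler route, the creation half is scriptable in one line on both XP and Windows 7 (the script path is a placeholder):

        schtasks /Create /TN "LocalStartupScript" /SC ONSTART /RU SYSTEM /TR "cmd /c C:\scripts\startup.cmd"

    The second concern stands, though: an ONSTART task runs in parallel with the logon screen, so unlike a Group Policy startup script it won't hold logons until it finishes. If the registry route is preferred, a hedged but practical approach is to let gpedit.msc generate the canonical artifacts once (the Scripts\Startup registry keys plus the matching scripts.ini under %SystemRoot%\System32\GroupPolicy\Machine\Scripts) and then replicate those files and keys verbatim in your installer, rather than hand-crafting entries the local Group Policy editor might not recognize.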

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim at keeping this blog post updated and will add related posts to it. Since there are a lot of these out there, I link to others that have done much the same before me - a blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions for others to see. As an example: if I hit a stacktrace, I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way, people that find this blog can see multiple solutions for indexed stacktraces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.

    The steps:

        1. Set up the project in VS2012 (msdn blog).
        2. Set up the Azure services (half of the mpspartners.com blog).
        3. Set up connection strings and configuration files (msdn blog + notes).
        4. Export certificates.
        5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
        6. Connection string error.

    1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx

    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/

    3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx

    Trying to package our application at this step will generate the following warning:

        3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.

    Right-click the web role under Roles in Solution Explorer and choose Properties. Choose "Service configuration: Cloud". At "Specify storage account credentials" we will copy/paste our account name and key from the Azure management platform. (Image 3.1)

    4. Right-click Remote Desktop Configuration, select the certificate, and export it to a file. We need to allow it in the portal manager. (Image 4.1)

    5. Now right-click the cloud project and select Package. (Image 5.1: dialogue box. Image 5.2: package success.) Now copy the path to the packaged file and go to the management portal again. Click your web role and choose staging (or production). Upload. (Image 5.3) Tick the box about the single instance if that's what you want or if you don't know what it means; otherwise the following will happen (see image 4.6). (Image 5.4: dialogue box.) When you have clicked the accept button, you will see the following screen with some green indicators down in the right corner. Click them if you want to see status. (Image 5.5: information screen.)

        (Image 5.6) "Failed to deploy application. The uploaded application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go back to the single-instance checkbox shown in image 5.4.

    If you forgot to export your certificates (or just didn't know you were supposed to), the following error will occur. A side note: the following thread suggests, to prevent this, ticking "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package". But in my case it was the not-so-present certificates, and I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd (Images 5.7, 5.8: success! Image 5.9: nice URL and all - more on that in another blog post.)

    6. If you try to log in and get an error, many web sites suggest this is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the ordinary project in your solution. Go to the management portal and click update.

    Read the article
