Search Results

Search found 8790 results on 352 pages for 'known hosts'.

Page 223/352

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,
    Unfortunately, sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. The script recreates the standby subject to some prerequisites:
    - Database version should be at least 11gR1
    - A dummy instance started on the standby node (we are seeking to improve this so it won't be needed)
    - The Broker configuration hasn't been removed
    - In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the Broker service name)
    - Some environment variables set up by the environment DB script (like ORACLE_HOME, PATH, ...)
    - The directory tree must not have been modified on the standby host
    We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!

#!/bin/ksh
###    NAME / VERSION
###       recrea_dg.sh   v.1.00
###
###    DESCRIPTION
###       Recreation of the standby
###
###    RETURNS
###       0 STANDBY created successfully
###       1 Failure
###
###    NOTES
###       This shell script MUST NOT be modified.
###       All required variables and constants are taken from the environment.
###
###    MODIFIED BY:       DATE:         COMMENTS:
###    ---------------    ----------    -------------------------------------
###      Oracle           15/02/2011    Created.
###

###
### Load the environment
###
V_ADMIN_DIR=`dirname $0`
. ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null
if [ $? -ne 0 ]
then
  echo "Error loading the environment."
  exit 1
fi

V_RET=0
V_DATE=`/bin/date`
V_DATE_F=`/bin/date +%Y%m%d_%H%M%S`
V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log
exec 4>&1
tee ${V_FICH_LOG} >&4 |&
exec 1>&p 2>&1

###
### Variables for recreating the Data Guard
###
V_DB_BR=`echo ${V_DB_NAME} | tr '[:lower:]' '[:upper:]'`
if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ]
then
        V_LOCAL_BR=${V_DB_BR}'01'
        V_REMOTE_BR=${V_DB_BR}'02'
else
        V_LOCAL_BR=${V_DB_BR}'02'
        V_REMOTE_BR=${V_DB_BR}'01'
fi

echo " Getting local instance ROLE ${ORACLE_SID} ..."
sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
whenever sqlerror exit 1
connect / as sysdba
variable salida number
declare
  v_database_role v\$database.database_role%type;
begin
  select database_role into v_database_role from v\$database;
  :salida := case v_database_role
       when 'PRIMARY' then 2
       when 'PHYSICAL STANDBY' then 3
       else 4
     end;
end;
/
exit :salida
!
case $? in
1) echo " ERROR: Cannot get instance ROLE." | tee -a ${V_LOGFILE} 2>&1
   V_RET=1 ;;
2) echo " Local instance with PRIMARY role." | tee -a ${V_LOGFILE} 2>&1
   V_DB_ROLE_LCL=PRIMARY ;;
3) echo " Local instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE} 2>&1
   V_DB_ROLE_LCL=STANDBY ;;
*) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE} 2>&1
   V_RET=1 ;;
esac

if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ]
then
        echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Recreating STANDBY instance." | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        V_PRIMARY=${V_LOCAL_BR}
        V_STANDBY=${V_REMOTE_BR}
fi
if [ "${V_DB_ROLE_LCL}" = "STANDBY" ]
then
        echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Recreating STANDBY instance." | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        V_PRIMARY=${V_REMOTE_BR}
        V_STANDBY=${V_LOCAL_BR}
fi

# Load the host variables
PRY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba
select 'KEEP',host_name from v\\$instance;
EOF`
SBY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
select 'KEEP',host_name from v\\$instance;
EOF`
echo "The primary HOST is: ${PRY_HOST}" | tee -a ${V_LOGFILE} 2>&1
echo "The standby HOST is: ${SBY_HOST}" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1

##
## Stop the STANDBY instance
##
V_DATE=`/bin/date`
echo "${V_DATE} - Shutting down standby instance" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
SBY_STATUS=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
select 'KEEP',status from v\\$instance;
EOF`
if [ "${SBY_STATUS}" = 'STARTED' ] || [ "${SBY_STATUS}" = 'MOUNTED' ] || [ "${SBY_STATUS}" = 'OPEN' ]
then
        echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
        sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
whenever sqlerror exit 1
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
shutdown abort
!
fi
V_DATE=`/bin/date`
echo ""
echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1

##
## Remove the database files
##
V_SBY_SID=`echo ${V_STANDBY} | tr '[:upper:]' '[:lower:]'`
V_PRY_SID=`echo ${V_PRIMARY} | tr '[:upper:]' '[:lower:]'`
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl
ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo

##
## Start the standby instance in NOMOUNT
##
V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Starting DUMMY standby instance" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${SBY_HOST} touch /home/oracle/init_dg.ora
ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora'
ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh
ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh'
ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora'

##
## TNSNAMES change, specific for the RMAN duplicate
##
V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Setting up TNSNAMES on the PRIMARY host" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

V_DATE=`/bin/date`
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Starting STANDBY creation with RMAN..." | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
rman >>${V_LOGFILE} <<-!
connect target sys/${V_DB_PWD}@${V_PRIMARY}
connect auxiliary sys/${V_DB_PWD}@${V_STANDBY}
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
dorecover
spfile
parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}'
set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl'
set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
set 'db_unique_name'='${V_SBY_SID}'
set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})'
set fal_client='${V_STANDBY}'
set fal_server='${V_PRIMARY}'
set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
nofilenamecheck
;
}
!
V_RMAN_RET=$?   # capture RMAN's exit status before another command overwrites $?
V_DATE=`/bin/date`
if [ ${V_RMAN_RET} -ne 0 ]
then
        echo ""
        echo "${V_DATE} - Error creating STANDBY instance"
        echo ""
        echo "********************************************************************************"
else
        echo ""
        echo "${V_DATE} - STANDBY instance created SUCCESSFULLY"
        echo ""
        echo "********************************************************************************"
fi

sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
whenever sqlerror exit 1
connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;
alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;
alter database recover managed standby database using current logfile disconnect from session;
alter system set dg_broker_start=true scope=both;
!

##
## TNSNAMES change, back to production mode
##
V_DATE=`/bin/date`
echo " " | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Restoring TNSNAMES on the PRIMARY" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

echo "" | tee -a ${V_LOGFILE} 2>&1
echo "${V_DATE} - Waiting for media recovery before checking the DATA GUARD Broker" | tee -a ${V_LOGFILE} 2>&1
echo "" | tee -a ${V_LOGFILE} 2>&1
echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
sleep 200
dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1
connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}
show configuration verbose;
!
if [ $? -ne 0 ]
then
        echo "       ERROR: The Broker status is not SUCCESS" | tee -a ${V_LOGFILE} 2>&1
        V_RET=1
else
        echo "      DATA GUARD OK" | tee -a ${V_LOGFILE} 2>&1
        V_RET=0
fi

    Hope it helps.

    Read the article

  • Attach to Process in Visual Studio

    - by Daniel Moth
    One option for achieving step 1 in the Live Debugging process is attaching to an already running instance of the process that hosts your code, and this is a good place for me to talk about debug engines. You can attach to a process by selecting the "Debug" menu and then the "Attach To Process…" menu in Visual Studio 11 (Ctrl+Alt+P with my keyboard bindings), which brings up the Attach dialog. I am not going to explain this UI; besides being fairly intuitive, there is good documentation on MSDN for the Attach dialog. I do want to focus on the row of controls that starts with the "Attach to:" label and ends with the "Select..." button. Between them is the read-only textbox that indicates the debug engine that will be used for the selected process if you click the "Attach" button. If you haven't encountered that term before, read on MSDN about debug engines. Notice that the "Type" column shows the Code Type(s) that can be detected for the process. Typically each debug engine knows how to debug a specific code type (the two terms tend to be used interchangeably). If you click on a different process in the list with a different code type, the debug engine used will be different. However, note that this is the automatic behavior. If you believe you know best, or more typically you want to choose the debug engine for a process using more than one code type, you can do so by clicking the "Select..." button, which should yield a "Select Code Type" dialog. In this dialog you can switch to the debug engine you want to use by checking the box in front of your desired one, then hit "OK", then hit "Attach" to use it. Notice that the dialog suggests that you can select more than one. Not all combinations work (you'll get an error if you select two incompatible debug engines), but some do. Also notice in the list of debug engines one of the new players in Visual Studio 11, the GPU debug engine - I will be covering that on the C++ AMP team blog (and no, it cannot be combined with any others in this release). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Are You In The Know About Knowledge?

    - by [email protected]
    "Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information on it." To me, this simple and elegant quote from the great English author Samuel Johnson is a reflection of Oracle's knowledge base strategy. The knowledge base in the My Oracle Support portal (https://support.oracle.com) hosts nearly a half million documents, including how-to instructions, problem-solution descriptions, code samples, FAQs, critical alerts, technical whitepapers, and so on. AutoVue's footprint in the Oracle knowledge base - although relatively small at just around 400 documents - is a steadily-expanding assortment of valuable info. This information is designed to complement what you have already learned from the AutoVue documentation, or in some cases, to examine topics not yet covered in the documentation. Similar to the documentation, the knowledge base is one of the highest-value self-service avenues, since it delivers answers in real-time and is driven by the topics most relevant to customers. There are many different ways to leverage the AutoVue knowledge content, or what Oracle often refers to as "KM Notes": 1. Knowledge Browser: To browse the knowledge hierarchy, click on the 'Knowledge' tab at the top of the My Oracle Support webpage. In the list of product areas at the left, click on 'More Applications', then on 'Oracle AutoVue'. From here, you can either view the full set of KM Notes under the AutoVue product family (AutoVue, VueLink, Web Services, Document Print Services, etc) by clicking on 'All of Oracle AutoVue', or you can drill down further by clicking on 'Enterprise Visualization'. 2. Search: To execute simple keyword searches, use the Search bar at the top-right of the My Oracle Support webpage: 3. Advanced Search: Beside the same Search bar at the top-right of the My Oracle Support webpage, click on the 'Advanced' link in order to increase your control over the search string as well as the product to search against: 4. In your Dashboard: By clicking on the 'Customize' link at the top-right of the Dashboard page in My Oracle Support, you can drag & drop multiple "Knowledge Articles" widgets onto your dashboard. Then, click on the pencil icon at the top-right of the widget to customize it by product. This allows you to keep an active monitor on the most recently updated KM Notes across any product: 5. During SR Creation: As you submit a new Service Request, after entering the product information, SR title, and SR description, you will be presented with a frame at the left containing KM Note suggestions based on the information entered: Let Oracle know what you think! If you like or dislike an article, or would like to comment on how easy/difficult it was to find the article, click on the "Rate this document" link at the bottom of the KM Note. Similarly, during SR creation if one of the suggested KM Notes resolves your question/issue, you can click the "This article solved my problem" link at the bottom of the page. I hope these approaches improve your ability find knowledge content within the My Oracle Support portal, and I encourage you to continue to build your knowledge to further your success with the AutoVue product family.

    Read the article

  • Running Apache and Tomcat together on different subdomains?

    - by Ritesh M Nayak
    Posted this on ServerFault but didn't get a response. Hoping I will have better luck on the Ubuntu site. I have been trying to get this working the whole of today. I have a server which resolves to the domain example.com. This is running Apache2 and Tomcat 6. The requirement is to direct requests for example.com to Apache2 and app.example.com to Tomcat. I know I have to do a VirtualHost proxy pass for this to work. Here are the settings on my server. My /etc/hosts file looks something like this:

127.0.0.1 localhost localhost.localdomain example.com app.example.com

    I have two virtual host files for the different domains in /etc/apache2/sites-enabled. /etc/apache2/sites-enabled/example.com looks like this:

<VirtualHost *:80>
    # Admin email, Server Name (domain name) and any aliases
    ServerAdmin webmaster@localhost
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>

    /etc/apache2/sites-enabled/app.example.com looks like this:

<VirtualHost *:80>
    ServerName app.example.com
    ServerAlias www.app.example.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
    SetEnv force-proxy-request-1.0 1
    SetEnv proxy-nokeepalive 1
</VirtualHost>

    mod_proxy and mod_rewrite are both enabled on the Apache instance. I have a CNAME entry for both example.com and app.example.com. When accessing app.example.com, I get a 403 Forbidden, saying I have no access to / on the server. What am I doing wrong?
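    One thing worth checking in a setup like the one above: ProxyPass to an http:// backend needs mod_proxy_http loaded in addition to mod_proxy, and a missing proxy worker is a common cause of a 403 on Apache of that era. A minimal sketch, assuming the stock Debian/Ubuntu apache2 tooling:

sudo a2enmod proxy proxy_http
sudo /etc/init.d/apache2 restart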

    Read the article

  • Am I experienced enough to learn and develop immediately using Ruby on Rails?

    - by acheong87
    General Question I understand that discussions revolving around questions of this form run the risk of becoming too specific to help others. So, perhaps a better, general question would be: What kind of experience, if any, translates easily to Ruby on Rails; and if none, then what's the learning curve like, in comparison to other popular languages? Background I have the opportunity to build a website using whatever technologies I wish to use. It's a fairly simple website, for listing products, taking payments, managing customer data, providing a back-end portal for employees to manage data, possibly hooking in flight information (the products are travel related), possibly integrating a blog and all the social-networking goodies. Specific Problem I have to let the client know by tonight whether I'm interested in taking up this project, before he talks to other potential developers, but I'm on the fence. I already work a full-time C++ development job, so the money doesn't do it for me. It's the opportunity to (be paid to) learn some new technologies and to have a real, running product in the end. I've heard and read great things about Ruby, and am really intrigued. I zipped through some introductory Ruby tutorials, no sweat. However I found the Rails tutorials a little overwhelming, especially not being able to try it out anywhere. And researching Rails hosts like Heroku and EngineYard makes me think that maybe I don't know what I'm getting myself into. The ship's leaving port! I wish I had more time to learn, better yet play with the language, but I have to decide soon! Should I venture or pass? Additional Details My experiences are in C/C++/Tcl/Perl/PHP/jQuery, and basic knowledge of Java/C#. I didn't study C.S. formally so I wasn't exposed to design principles, programming paradigms, etc., which is my greatest concern. Will my lack of understanding in this realm make RoR frustrating to learn? Will it be so incompatible with a C++ "way" of thinking that I'll wish I never started? Am I putting my client at risk by attempting this? If it helps, I'm quick to learn new things (self-taught so far) and care a great deal about correctness, using things for their intended purposes, and so on. I've read numerous recommendations of Agile Development with Rails and would love to read it (though perhaps, while developing in parallel, for shortness of time). Worse comes to worst, I'd give up and do the standard LAMP gig, of course, not charging the client for wasted time. But I'm hoping to avoid the project altogether if it's gonna come down to that! Thanks in advance for any tips, insights, votes of confidence, votes of discouragement (for the better), and such.

    Read the article

  • Error Using 32 vs. 64 bit SharePoint 2007 DLLs with PowerShell

    - by Brian Jackett
    Next time you fire up PowerShell to work with the SharePoint API, make sure you launch the proper bit version of PowerShell. Last week I had an interesting error that led to this blog post. Travel back in time a little bit with me to see where this 32 vs. 64 bit debate started.

    History
    Ever since the first pre-beta bits of Office 2010 landed in my lap I have been questioning whether it's better to run 32 or 64 bit applications on a 64 bit host operating system. In relation to Office 2010 I heard a number of arguments for 32 bit, including this link from the Office 2010 Engineering team. Given my typical usage scenarios, 32 bit seemed the way to go since I wasn't a "super RAM hungry" Excel user or the like.

    The Problem
    Since I had chosen 32 bit Office 2010, I tried to stick with 32 bit versions of other programs that I run, assuming the same benefits and rules applied to other applications. This is where I was wrong. Last week I was attempting to use the 32 bit PowerShell ISE (Integrated Scripting Environment) on a 64 bit WSS 3.0 server. When trying to reference the 64 bit SharePoint DLLs I got errors about not being able to find the web application. I have run into these errors when I have hosts file issues or improper permissions to the farm / site collection, but these were not the case. After taking a quick spin around the interwebs I ran across the below forum post comment and another MSDN forum reply that explained the error. Turns out that sometimes it's not possible to run 32 bit applications against a 64 bit OS / farm / assembly / etc.

    …the problem could also be because your SharePoint is 64-Bit but your app is running in 32-bit mode

    I quickly exited the 32 bit PowerShell ISE and ran the same code under the 64 bit PowerShell ISE. All errors were gone and the script ran successfully.

    Conclusion
    The rules of 32 vs. 64 bit interoperability do not always apply evenly across all applications and scenarios. In my case I wasn't able to run 32 bit PowerShell against 64 bit SharePoint DLLs. I'm updating all of my links and shortcuts to use 64 bit PowerShell where appropriate (a quick way to check which one you are in is sketched after the links below). I'm quite surprised it has taken me this long to run into this error, but sometimes blind luck is all that keeps you from running into errors. Lesson learned and hopefully this can benefit you as well. Happy SharePointing all!
    -Frog Out

    Links
    http://blogs.technet.com/b/office2010/archive/2010/02/23/understanding-64-bit-office.aspx
    http://social.msdn.microsoft.com/Forums/en-US/sharepointdevelopment/thread/a732cb83-c2ef-4133-b04e-86477b72bbe3/
    http://stackoverflow.com/questions/266255/filenotfoundexception-with-the-spsite-constructor-whats-the-problem
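    A minimal sketch of that bitness check, plus the assembly load that only succeeds from the matching PowerShell host (the site URL is a placeholder):

# 8 means a 64-bit process, 4 means 32-bit (which cannot load the 64-bit SharePoint assemblies)
[IntPtr]::Size

# These calls only succeed from a 64-bit PowerShell on a 64-bit farm
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") | Out-Null
$site = New-Object Microsoft.SharePoint.SPSite("http://myserver")   # placeholder URL
$site.Dispose()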

    Read the article

  • GPL vs plugin interfaces not designed with a specific application in mind

    - by Kristóf Marussy
    I am not seeking or in need of legal advice, but an interesting thought experiment came to my mind. Imagine the following situation (I cannot really think of a concrete example and I am unsure if a real manifestation even exists): there is a free (libre) API A, licensed under some permissive license or even the LGPL. Non-free application B implements this API in order to host plugins, but there is other free software doing the same thing. Moreover, there is plugin C acting as a plugin under API A. It links to library D, which is under the GPL, so C is also under the GPL. Plugins using A are loaded into hosts via a dlopen-like mechanism and use complex data structures for host-plugin communication. Neither B nor C distributes any files that may be required for A to function properly (like headers containing the structure definitions of A, or dynamic libraries containing helper functions for A written by the authors of A), but such things may exist. Now some user installs application B and plugin C on his machine, along with anything that may be required for API A to function properly. Then he proceeds and loads C into B and creates some intellectual property with B which is not a piece of software. Did a GPL violation happen at some point, and if so, who violated the GPL and why?

    - The authors of C violate D's license by making it possible for C to be used in non-free host B? This is a possibility, because they can't grant an exception to the GPL (like one described in http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF or http://www.gnu.org/licenses/gpl-faq.html#LinkingOverControlledInterface) due to D's license terms.
    - The authors of B violate C's and D's licenses by making it possible for C to be loaded in B? This is a possibility, because http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins disallows the mechanisms A uses for communication between the free and non-free modules.
    - The authors of A, because the API may be used (and in this case, was used) for communication between GPL'd and non-free software? This would be extremely absurd.
    - The user, because at the moment of loading C into B, he made a derived work of C? I think this is impossible, because he does not distribute it. But would the situation change if he decided to release a configuration file of B which makes B load C as a plugin?
    - Nobody, because A counts as a 'system library', and both B and C directly interact only with A, not each other? In a sane world, this is what would happen...

    A concrete example of A could be some kind of audio (think LADSPA) or image processing API. However, I could find no such interface (that is free software, generic, and is also implemented by commercial tools). A real-world example could also be quite enlightening.

    Read the article

  • ArchBeat Link-o-Rama for July 3, 2013

    - by Bob Rhubart
    Industrial SOA Chapter 5: Enterprise Service Bus
    Enterprise Service Bus, the fifth and latest addition to the Industrial SOA article series, answers some of the most important questions surrounding the use of an ESB.

    Industrial SOA Chapter 4: SOA Maturity
    The fourth article in the Industrial SOA series, SOA Maturity offers "an exploration of the fundamentals of applying a factory approach to modern service-oriented software development."

    Using the Exalytics Summary Advisor and Oracle BI Apps 7.9.6.4 | Mark Rittman
    Oracle ACE Director Mark Rittman's post revisits "the use of the Summary Advisor, with my BI Apps installation bumped-up to version 7.9.6.4, and the Exalytics environment patched up to 11.1.1.6.9, the latest patch release we’ve applied to that environment."

    Part 1 - 12c Database and WLS - Overview | Steve Felts
    Steve Felts shares a handy table that "maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c Drivers, and 11g and 12c Databases."

    Developers WebCast: Deploy Highly-Available Custom Services on Your Data Grid Products - July 11
    Oracle Coherence Sr. Architect Brian Oliver hosts this free July 11 webcast for developers to show you how to "create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences."

    A checklist for OIM go live | Daniel Gralewski
    FMW A-Team solution architect Daniel Gralewski's list is intended to complement the Oracle Identity Manager documentation. His post "provides tips on a few topics that are not part of the documentation."

    How Many ODI Master Repositories Should We Have? | Christophe Dupupet
    FMW solution architect Christophe Dupupet provides a simple answer, along with best practices, for the architecture of ODI repositories in a corporate environment.

    Distinguish EA from enterprise wide solution architecture | John Wu
    My buddy Tony Meyer, who did a great presentation recently at the Cleveland-area Enterprise Architect / Solution Architect Meet-up, recommends this Toolbox article by John Wu.

    YouTube: Oracle Fusion Applications Developer Tips
    If you work with Fusion Applications you'll want to check out the tips and tricks for building extensions, customizations, and integrations now available on the new Oracle Fusion Middleware Developer Relations YouTube channel.

    The CX Factor: Wooing and wowing customers in the digital age
    "There was a time when 'customer experience' was limited to what happened to you when you walked into a store, restaurant, or other place of business or when you called a business on the telephone. But that was back when you could still smoke on airplanes."

    Thought for the Day
    "If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be 'meetings.' " — Dave Barry (Born July 3, 1947)
    Source: brainyquote.com

    Read the article

  • Windows Desktop Virtualization Gets Easier

    - by andrewbrust
    This past Thursday, Microsoft announced that Windows (7) Virtual PC (WVPC) and its XP Mode feature would no longer require hardware-assisted virtualization (HAV). That means any PC running Windows 7 Pro, or higher, can now run this software. And that's a great thing because, as I noted in a post almost five months ago, determining whether a given PC you might be planning to buy actually offers HAV can be extremely difficult. That meant even dedicated, sophisticated PC users, with a budget for new hardware, might be blocked from using this technology. And that was just plain silly. One of the features offered by WVPC, and utilized heavily by XP Mode, is the concept of virtual applications: apps within a guest VM that can actually run within the host's desktop environment. I find this feature so powerful that my February Redmond Review column entertained the notion of a future version of Windows that runs all applications in this manner. The elimination of the HAV requirement for XP Mode and WVPC was just one of many virtualization-related announcements Microsoft made on Thursday. And, interestingly, most of the others were also desktop-related, rather than server-related. This is a welcome change from the multi-year period in which Microsoft enhanced its server virtualization lineup (in Hyper-V) and let the desktop platform fester. Microsoft now seems to understand that desktop virtualization is in high demand and strengthens the Windows franchise. As I explained in the column, even cloud computing can have a desktop spin if desktop virtualization is part of the equation. One company that knows this well is Citrix, and a closer alliance between Microsoft and Citrix was one of the many announcements from Thursday. In fact, there's a whole Web site dedicated to the alliance at http://www.citrixandmicrosoft.com/. I'd love to see virtual applications and entire virtual desktops offered as Azure-branded services. This could allow me to run, for example, the full Office client on a variety of desktops I might use, and for large organizations it could easily reduce the expense, burden and duration of the deployment cycle for new versions of Office. Business Intelligence providers, including my own firm, twentysix New York, would find great relief in enabling their customers to run the newest version of Excel, with the latest BI capabilities, instead of having to wait the requisite two to three years it takes for many Fortune 500 customers to upgrade. Microsoft should do more, and faster. WVPC still does not support 64-bit guest images, even on 64-bit hosts. That needs to be fixed. File access from the guest to the host needs to be improved (right now, it's done through Terminal Services/Remote Desktop file sharing, and it's slow) and VM load times need to be significantly reduced before virtualized apps can become the norm. (I suppose the advance of solid state drive technology will help there.) I do think these improvements will come, because Microsoft is focused on the virtual desktop now. And that's a smart focus to have.

    Read the article

  • Networking Guidelines

    - by ACShorten
    One of the things I have noticed in my years in IT is the changes in networking. In the past, networking was pretty simple, with the host name and name resolution (via DNS) being straightforward. Some sites still use this simple networking setup. These days, more complex name resolution, proxies, firewalls, demarcation and virtualization can make networking more complex. This can cause issues when installing products with built-in networking that can frustrate even seasoned veterans. I have put together a few basic guidelines to hopefully help with product installation and with getting a product to operate in a somewhat complex network setup.

    All the components of the product (including the infrastructure) need to communicate via a network (even if it is within a local machine/host). Ensure any host names referred to within configuration files are accessible via your networking setup. This may mean defining the hosts to the machines, to the DNS for name resolution, and even to your firewall to allow machines to communicate within your network.

    Make sure the ports used for any of the infrastructure are accessible (even through your firewall) and are unique within the host. Port duplication can cause the product to fail on startup as the port is already in use.

    If there are still issues, consider using localhost as your host name. I have used this in so many situations that I tend to use it now as a default anytime I install anything myself. Most Oracle products suggest using localhost when using dynamic host names or dynamic IP addresses, and this is no different for the Oracle Utilities Application Framework. If you do use localhost, then installing a loopback adapter for the operating system is recommended to force networking to a minimum. Usually localhost resolves to 127.0.0.1 (a sample hosts entry is sketched at the end of this post).

    When using multiple network connections, especially in a virtualized environment, ensure the host and ports used are relevant for the network cards you have set up. One of the common issues is finding that the product is using a virtualized network card that is not set up for correct networking.

    If you are using the batch component, do not forget to ensure that the multicast protocol is enabled on your host and that the multicast address and port number specified are valid and accessible from all machines in the batch cluster (if clustering is used). The same advice applies if you are using unicast, where each host/port combination should be accessible.

    Hopefully these basic networking recommendations will help minimize any networking issues you might encounter.
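    For illustration, a minimal hosts-file entry pairing the loopback address with a local alias might look like this (the alias name is made up; the same alias would then appear in the product's configuration files):

127.0.0.1   localhost
127.0.0.1   myapphost   # hypothetical alias used by the product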

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: it handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances.

    Drivers
    Elastic compute and storage resources
    Cost avoidance

    Solution
    Here's a sketch of a solution using our Windows Azure HPC SDK (diagram omitted):

    Ingredients
    Web Role – this hosts an HPC Scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well.
    Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes.
    Database – stores state information about the job queue and resource configuration for the solution.
    Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed.

    Training
    Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    Windows Azure HPC Scheduler (3 labs)
    The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Five hours of Task Flow Overview Recordings Available

    - by Frank Nimphius
    In addition to the ADF Controller task flow documentation in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g Release 1
    http://download.oracle.com/docs/cd/E21764_01/web.1111/b31974/partpage3.htm#BABHIIAI
    the ADF Insider website
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/adfinsider-093342.html
    hosts five online videos that explain how to build and work with ADF Controller task flows in Oracle ADF.

    ADF Task Flow - Overview (Part 1)
    This 90-minute recording introduces the concept of ADF unbounded and bounded task flows, as well as other ADF Controller features. The session starts with an overview of unbounded task flows, bounded task flows and the different activities that exist for developers to build complex application flows. Exception handling and the train navigation model are also covered in this first part of a two-part series. By example of developing a sample application, the recording guides viewers through building unbounded and bounded task flows. This session is continued in a second part.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p1/taskflow-overview-p1.html

    ADF Task Flow - Overview (Part 2)
    This 75-minute session continues where part 1 ended and completes the sample application that guides viewers through different aspects of unbounded and bounded task flow development. In this recording, memory scopes, save for later, task flows opening in dialogs and remote task flow calls are explained and demonstrated. If you are new to ADF task flows, it is recommended to first watch part 1 of this series to be able to follow the explanation guided by the sample application.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p2/taskflow-overview-p2.html

    ADF Region Interaction - An Overview
    This session covers most of the options that exist for communicating between regions. It briefly discusses what it takes to build regions from bounded task flows before going into details using slides and samples. The following interactions are explained: contextual events, queue action in region, input parameters and PPR, drag and drop, shared Data Controls, parent action and region navigation listener.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-region-interaction/adf-region-interaction.html

    ADF Region Interaction - Contextual Events
    Contextual events are used as a communication channel between a parent view and its contained regions, as well as between regions. By example, this session explains how to set up contextual events, how to define producers and event listeners and how to define the payload message.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/AdfInsiderContextualEvents/AdfInsiderContextualEvents.html

    Read the article

  • Token based Authentication and Claims for Restful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP-based services (active/WS-Trust). While there is limited support for WCF WebServiceHost-based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that do authentication based on tokens. This is not an oversight by the WIF team; the REST services security world is currently rapidly changing – and that's by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already deprecated ones. So it didn't make sense to bake that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just like the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here's how.

    HTTP Services and Authentication
    Unlike SOAP services, in the REST world there is no (over)specified security framework like WS-Security. Instead, standard HTTP means are used to transmit credentials, and SSL is used to secure the transport and data in transit. In most cases the HTTP Authorization header is used to transmit the security token (this can be as simple as a username/password, up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme. The scheme is some string that gives the service a hint what type of credential was used (e.g. Basic for basic authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this, the WWW-Authenticate response header is used.

    So for token-based authentication, the service would simply need to read the incoming Authorization header, extract the token, then parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token. This is regardless of how, technology-wise, the actual service was built. In ASP.NET (MVC) you could use an HttpModule or an ActionFilter. In (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free.

    This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment:
    - Representation of identity based on claims. This is a very natural way of translating a security token (and again I mean this in the widest sense – it could also be a username/password) into something our applications can work with.
    - Infrastructure to convert tokens into claims (called security token handlers)
    - Claims transformation
    - Claims-based authorization

    So much for the theory. In the next post I will show you how to implement that for WCF – including full source code and samples. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)

    Read the article

  • SOASuite 11.1.1.4 : Error Logging into BPM11g Composer?

    - by angelo.santagata
    Hey all, I’ve just installed SOA Suite 11.1.1.4 and noticed a few funnies which people might hit; thankfully each of them has an easy solution.

    1. Some applications are installed but don't appear to work
    When you install SOA Suite you may notice that some applications don't appear to work, even though they do appear as deployments in the WebLogic Server Console, e.g. SOA Composer (composer), FMW Welcome Page Application (11.1.0.0.0) and some of the adapters. If they appear in the deployments list in state "Installed" rather than "Active", it is likely that they haven't been targeted to a specific server.
    The solution is to target the application to the desired managed server, e.g. AdminServer in a development environment. This is done by selecting the application, then the "Targets" tab, selecting all components, clicking the [Change Targets] button, and selecting the appropriate server. This change can be done without restarting the WebLogic Server.

    2. You might find that when you try to log into the BPM Composer at http://machine:7001/bpm/composer, the login screen will appear but you can't log in. The error log will show a corresponding error (log excerpt omitted).
    The solution to this is twofold:
    a) When creating the domain, avoid using 127.0.0.1 as the listener address, or "Any Addresses". If this is a development machine, create an alias in your /etc/hosts file and then use this alias in the domain creation wizard. E.g. my hosts file contains the entry

127.0.0.1    mypc

    and that alias goes into the Fusion Middleware Configuration Wizard. 'Twill then work!
    b) If it still doesn't, you can try setting the ServerURL attribute to http://mypc in the SoaInfraMBean instead of blank. This is accomplished using Enterprise Manager: use the System MBean Browser to navigate to Application Defined MBeans->oracle.as.soainfra.config->[server]->SoaInfraConfig->soa-infra, then change the value of 'ServerURL' to http://mypc.
    Failing that, give support a call…

    Read the article

  • Virtualized data centre&ndash;Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a "private cloud design" and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process-driven than just building a virtual infrastructure…

    The architecture of it all…
    So imagine for a minute that you have unlimited funds to build this lab of yours… You'd want redundancy on all levels and separation of each network where possible! Unfortunately we don't have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable!

    Networking
    From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding on my NICs and use LACP to improve the performance on that part (a rough sketch of the teaming command follows at the end of this post).

    Storage
    As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on board which can be bonded into one 2 Gbps interface, giving me a decent throughput and making the disks the most limiting factor in the storage design.

    Domain controllers and extra infrastructure
    Server 2012 completely supports running domain controllers virtualized and has no need to actually have a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I am undecided on whether to install those boxes separately or as virtual machines…

    Which amounts to… something like this pretty picture! (diagram omitted)

    Sources
    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx
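    As referenced above, a rough sketch of the Server 2012 NIC teaming this design leans on; the team and adapter names are hypothetical (check yours with Get-NetAdapter):

# Create an LACP team from two physical adapters (names are placeholders)
New-NetLbfoTeam -Name "LabTeam" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts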

    Read the article

  • SCVMM – Round 2 – How to create a Private Cloud using PowerShell

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/06/28/scvmm--round-2--how-to-create-a-private.aspx
    Have you ever seen the movie "A Bridge Too Far"? To avoid waking up a click too far, it is good to script some tasks. Yes, of course we can follow wizards, but some of us want to be warriors :). A small tip: take a look at the credentials and system GUID examples. I don't know about you, but for me they will be really useful in the future.

# credentials
$credential = Get-Credential
New-SCRunAsAccount -Name "TESTDOMAIN\Administrator" -Credential $credential

# storage
$opsMgrServerCredential = Get-SCRunAsAccount -Name "TESTDOMAIN\Administrator"
New-SCStorageClassification -Name "Bronze" -Description "" -RunAsynchronously
New-SCStorageClassification -Name "Silver" -Description "" -RunAsynchronously
New-SCStorageClassification -Name "Gold" -Description "" -RunAsynchronously

# add a shared storage
Find-SCComputer -ComputerName "dc.TESTDOMAIN.net"
Add-SCStorageProvider -AddWindowsNativeWmiProvider -Name "dc.TESTDOMAIN.net" -RunAsAccount $opsMgrServerCredential -ComputerName "dc.TESTDOMAIN.net"
$fileServer = Get-SCStorageFileServer "dc.TESTDOMAIN.net"
$fileShares = @()
$fileShares += Get-SCStorageFileShare -Name "VMMLibrary"
Set-SCStorageFileServer -StorageFileServer $fileServer -AddStorageFileShareToManagement $fileShares -RunAsynchronously

# fabric network
$logicalNetwork = New-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true -IsPVLAN $false
$allHostGroups = @()
$allHostGroups += Get-SCVMHostGroup -Name "All Hosts"
$allSubnetVlan = @()
$allSubnetVlan += New-SCSubnetVLan -Subnet "10.0.0.0/24" -VLanID 0
New-SCLogicalNetworkDefinition -Name "TESTDOMAIN-Service-Network_0" -LogicalNetwork $logicalNetwork -VMHostGroup $allHostGroups -SubnetVLan $allSubnetVlan

# IP pool
$logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
$logicalNetworkDefinition = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork -Name "TESTDOMAIN-Service-Network_0"
# Gateways
$allGateways = @()
$allGateways += New-SCDefaultGateway -IPAddress "10.0.0.1" -Automatic
# DNS servers
$allDnsServer = @("10.0.0.1")
# DNS suffixes
$allDnsSuffixes = @("TESTDOMAIN.net")
# WINS servers
$allWinsServers = @()
New-SCStaticIPAddressPool -Name "TESTDOMAIN-Service-Network" -LogicalNetworkDefinition $logicalNetworkDefinition -Subnet "10.0.0.0/24" -IPAddressRangeStart "10.0.0.51" -IPAddressRangeEnd "10.0.0.75" -DefaultGateway $allGateways -DNSServer $allDnsServer -DNSSuffix "" -DNSSearchSuffix $allDnsSuffixes -RunAsynchronously

# Hyper-V virtual networks
$logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
$vmNetwork = New-SCVMNetwork -Name "TESTDOMAIN-VMN" -LogicalNetwork $logicalNetwork -IsolationType "WindowsNetworkVirtualization" -CAIPAddressPoolType "IPV4" -PAIPAddressPoolType "IPV4"
Write-Output $vmNetwork
$subnet = New-SCSubnetVLan -Subnet "10.0.0.0/24"
New-SCVMSubnet -Name "Con-SN" -VMNetwork $vmNetwork -SubnetVLan $subnet

# bind VLAN with the network adapter
$vmHost = Get-SCVMHost -ComputerName "VMM01.TESTDOMAIN.net"
$vmHostNetworkAdapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost #-Name "Intel 21140-Based PCI Fast Ethernet Adapter (Emulated)"
Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmHostNetworkAdapter -Description "" -AvailableForPlacement $true -UsedForManagement $true
$logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmHostNetworkAdapter -AddOrSetLogicalNetwork $logicalNetwork
Set-SCVMHost -VMHost $vmHost -RunAsynchronously -NumaSpanningEnabled $true

# Create a private cloud
$Guid = [System.Guid]::NewGuid()
Set-SCCloudCapacity -JobGroup $Guid -UseCustomQuotaCountMaximum $false -UseMemoryMBMaximum $false -UseCPUCountMaximum $false -UseStorageGBMaximum $false -UseVMCountMaximum $false -CustomQuotaCount 10 -MemoryMB 10240 -CPUCount 10 -StorageGB 386 -VMCount 10
$resources = @()
$resources += Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
$resources += Get-SCLoadBalancer -Manufacturer "Microsoft"
$readonlyLibraryShares = @()
$readonlyLibraryShares += Get-SCLibraryShare | where { $_.LibraryServer.Name -eq "dc.TESTDOMAIN.net" -and $_.Name -eq "VMMLibrary" }
$addCapabilityProfiles = @()
$addCapabilityProfiles += Get-SCCapabilityProfile -Name "Hyper-V"
$Guid2 = [System.Guid]::NewGuid()
Set-SCCloud -JobGroup $Guid2 -RunAsynchronously -AddCloudResource $resources -AddReadOnlyLibraryShare $readonlyLibraryShares -AddCapabilityProfile $addCapabilityProfiles
$hostGroups = @()
$hostGroups += Get-SCVMHostGroup -Name "TESTDOMAIN"
New-SCCloud -VMHostGroup $hostGroups -Name "TESTDOMAIN-Cloud" -Description "" -RunAsynchronously

    Read the article

  • WAIT-VHUB? What's Going On?

    - by Neeraj Gupta
    I know many of you have been working on Oracle's Exalogic and other Engineered Systems. With partitions enabled now, things have gone multi-dimensional. But it's fun, isn't it? If you have some EoIB configurations together with InfiniBand partitions and the VNICs are not coming up, staying in the WAIT-VHUB state, chances are that you have forgotten to add the InfiniBand Gateway switches' Bridge-X port GUIDs to your partition. These must be added as FULL members for EoIB to work properly. VHUB means a virtual hub in EoIB. Bridge-X is the access point for hosts to work over EoIB, so that's why it must be a full member of the partition.

    Step 1: Find out the port GUIDs of your Bridge-X devices in the IB Gateway switch.

    # showgwports

    INTERNAL PORTS:
    ---------------
    Device   Port Portname  PeerPort PortGUID           LID    IBState  GWState
    ---------------------------------------------------------------------------
    Bridge-0  1   Bridge-0-1    4    0x0010e00c1b60c001 0x0002 Active   Up
    Bridge-0  2   Bridge-0-2    3    0x0010e00c1b60c002 0x0006 Active   Up
    Bridge-1  1   Bridge-1-1    2    0x0010e00c1b60c041 0x0026 Active   Up
    Bridge-1  2   Bridge-1-2    1    0x0010e00c1b60c042 0x002a Active   Up

    Step 2: Add these port GUIDs to the IB partition associated with EoIB. Log in to the master SM switch for this task.

    # smpartition start
    # smpartition add -pkey <PKey> -port <port GUID> -m full
    # smpartition commit

    Enjoy!
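    Putting the two steps together for the switch output above, the session on the master SM switch might look like the following sketch. Note that 0x8005 is a made-up PKey for illustration; substitute the partition key actually associated with your EoIB network.

    # smpartition start
    # smpartition add -pkey 0x8005 -port 0x0010e00c1b60c001 -m full
    # smpartition add -pkey 0x8005 -port 0x0010e00c1b60c002 -m full
    # smpartition add -pkey 0x8005 -port 0x0010e00c1b60c041 -m full
    # smpartition add -pkey 0x8005 -port 0x0010e00c1b60c042 -m full
    # smpartition commit
    (the four port GUIDs are the Bridge-0/Bridge-1 values from showgwports above; the PKey is illustrative)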

    Read the article

  • Newsletter sent with Drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible Duplicate: How could I prevent my mail from being recognized as spam?

    I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the Spam folder in Yahoo & GMail mailboxes, but not in the Spam folder in web.de, Hotmail and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example):

    Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90) by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
    Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de) by sXXXXXXXXX.online.de with esmtp (Exim 4.72) (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8 for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
    Received: from 83.136.130.41 (IP may be forged by CGI script) by infongdXXXXX.rtr.kundenserver.de with HTTP id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
    From: SENDER <[email protected]>
    To: "[email protected]" <[email protected]>
    Date: Fri, 15 Jun 2012 11:45:20 +0200
    Subject: This is the subject of the newsletter
    Thread-Topic: This is the subject of the newsletter
    Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
    List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
    X-MS-Has-Attach:
    X-Auto-Response-Suppress: All
    X-MS-TNEF-Correlator:
    x-originating-ip: [12.345.678.90]
    authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
    errors-to: "SENDER" <[email protected]>
    received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts)
    x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
    x-sender-info: <[email protected]>
    content-length: 13762
    Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_"
    MIME-Version: 1.0

    I cannot see any direct spam filter message in this, but I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After I searched a bit, it seems that this is a special 'feature' of 1und1 mail servers. Here are my questions: Is it possible that, if I get rid of the 'IP may be forged' part, the mail will no longer be regarded as spam? If so, does anyone know how I can get rid of it in Drupal?
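    One detail worth noticing in the headers themselves: the received-spf: none line shows that example.com publishes no SPF record at all, and that alone weighs against the message at some providers. It may or may not flip the spam decision by itself, but a minimal SPF TXT record for the sending domain is cheap to add. A sketch, assuming the (anonymized, placeholder) server address from the headers above:

    example.com.  IN  TXT  "v=spf1 a mx ip4:12.345.678.90 ~all"   ; 12.345.678.90 is the placeholder IP from the headers; use your server's real address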

    Read the article

  • Apache - create multiple aliases

    - by mc3mcintyre
    I'm trying to set up two websites on my Apache server. One is www.domain.com and the other is test.domain.com. Currently, my 000-default.conf file reads as follows:

    <VirtualHost www:80>
        # The ServerName directive sets the request scheme, hostname and port that
        # the server uses to identify itself. This is used when creating
        # redirection URLs. In the context of virtual hosts, the ServerName
        # specifies what hostname must appear in the request's Host: header to
        # match this virtual host. For the default virtual host (this file) this
        # value is not decisive as it is used as a last resort host regardless.
        # However, you must set it for any further virtual host explicitly.
        #ServerName www.domain.com
        #ServerAlias www
        ServerAdmin [email protected]
        DocumentRoot /var/www/domain.com/
        # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
        # error, crit, alert, emerg.
        # It is also possible to configure the loglevel for particular
        # modules, e.g.
        #LogLevel info ssl:warn
        ErrorLog ${APACHE_LOG_DIR}/domain.error.log
        CustomLog ${APACHE_LOG_DIR}/domain.access.log combined
        UseCanonicalName on
        allow from all
        Options +Indexes
        # For most configuration files from conf-available/, which are
        # enabled or disabled at a global level, it is possible to
        # include a line for only one particular virtual host. For example the
        # following line enables the CGI configuration for this host only
        # after it has been globally disabled with "a2disconf".
        #Include conf-available/serve-cgi-bin.conf
    </VirtualHost>

    <VirtualHost test:80>
        DocumentRoot "/var/www/domain.com/test/"
        ServerName test.domain.com
        ServerAdmin [email protected]
        ErrorLog ${APACHE_LOG_DIR}/test.domain.error.log
        CustomLog ${APACHE_LOG_DIR}/test.domain.access.log combined
        UseCanonicalName on
        allow from all
        Options +Indexes
    </VirtualHost>

    # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    As is, when I use a browser to go to the www location, it shows me a directory listing. However, if I remove the www:80 on Line 1 and replace it with *:80, it correctly displays the webpage. I don't understand why. Can anyone help me configure this 000-default.conf file so that www goes to "/var/www/domain.com" and test goes to "/var/www/domain.com/test"? Thank you.
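    For reference, a hostname written inside the <VirtualHost ...> tag is resolved to an IP address when the server starts; it is never matched against the incoming request. Name-based selection happens through ServerName/ServerAlias instead, which is why *:80 behaves differently. A minimal sketch of the conventional layout, assuming both names resolve to this server's address:

    <VirtualHost *:80>
        # name-based vhost: selected by ServerName, not by the address in the tag
        ServerName www.domain.com
        DocumentRoot /var/www/domain.com
    </VirtualHost>

    <VirtualHost *:80>
        ServerName test.domain.com
        DocumentRoot /var/www/domain.com/test
    </VirtualHost>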

    Read the article

  • How to deal with malicious domain redirections?

    - by user359650
    It is possible for anybody to buy a domain name containing negative terms and point it to someone's website in order to damage their reputation. For instance, someone could buy the domain child-pornography.com and point it to the address 64.34.119.12, which is the address behind stackoverflow.com, and people navigating to the domain in question would end up viewing content from Stack Exchange, which would be detrimental to Stack Exchange's image. To illustrate this, I added the entry 64.34.119.12 child-pornography.com to my /etc/hosts file and tested. Here is what I obtained: [screenshot omitted: the Stack Exchange page rendered under the offending domain]. I personally found this user experience terrible, as someone could think that Stack Exchange is in favor of child pornography and awaiting support from the community to create a Q&A site about it.

    I tested with other websites and experienced other behaviors that I would categorize as follows:

    1 - Useful 404 page (happens with stackoverflow.com): For me the worst way of handling this, as the image of the targeted website is directly associated with the offending domain. The more useful the 404 page, the stronger the impression that the targeted website would be willing to help with child pornography.

    2 - Redirection (happens with microsoft.com): For instance, when accessing child-pornography.com you get redirected to www.microsoft.com. It isn't as bad as the above, as the offending domain name never appears alongside the targeted website's content, but it is still bad in my opinion, as it gives the impression the targeted website bought the offending domain and redirected it to their own site to get more traffic.

    3 - Server error (happens with lemonde.fr): You get an error from the webserver whose page doesn't contain any content that can be associated with the targeted website (e.g. a default Apache 404 page, or a completely blank page). I believe that is good, as the identity of the targeted website isn't revealed.

    Above are the various behaviors I experienced, but I also thought about a fourth way of dealing with this, described below.

    4 - Disclaimer page (I haven't found any website implementing this technique): Display a message such as: "You ended here because someone bought and linked the child-pornography.com domain to our website. We do not own this domain and do not associate ourselves with it. This request has been logged by our servers and we will raise this issue with the competent authorities to have this domain taken down. If you want to access our website, please click here." The good thing about this method is that it can be implemented at the application layer (good if you don't have control over the web server, which happens with some hosting solutions), allows you to protect yourself from any liability, and offers to redirect the visitor to your own website.

    Which of the above options would you implement to deal with malicious domain linking? (IMO only options 3 and 4 are worth considering.)
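    Option 4 is straightforward to sketch at the application layer. A minimal WSGI middleware in Python, where OWNED_HOSTS, the disclaimer text, and the domain names are all illustrative placeholders, not anyone's real configuration:

    import logging

    OWNED_HOSTS = {"example-owned.com", "www.example-owned.com"}  # hypothetical: the domains you actually own

    DISCLAIMER_HTML = (
        b"<html><body><p>You ended up here because someone pointed a domain we do "
        b"not own at our servers. We do not associate ourselves with that domain "
        b"and will report it. To reach our real website, follow the link on our "
        b"homepage.</p></body></html>"
    )

    class HostCheckMiddleware:
        """WSGI middleware: serve a disclaimer when the Host header is not ours."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0].lower()
            if host not in OWNED_HOSTS:
                # log the offending domain so it can be raised with the authorities
                logging.warning("request for unowned host: %s", host)
                start_response("403 Forbidden", [("Content-Type", "text/html")])
                return [DISCLAIMER_HTML]
            return self.app(environ, start_response)

    # usage sketch: application = HostCheckMiddleware(application)

    Because the check only reads the Host header, it works regardless of how the offending domain was pointed at you (A record, CNAME, or an /etc/hosts entry as in the test above).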

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflective accesses to be optimized from a JNI call into Java bytecodes, which are then amenable to HotSpot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very large users of reflection code. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs Oracle JVMs.
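    As a sketch of where the flag goes (the exact file and variable depend on your installation; this assumes a standard WebLogic domain layout and is not taken from the article above):

    # in $DOMAIN_HOME/bin/setDomainEnv.sh -- path assumed, adjust for your domain
    # disable the DelegatingClassLoader inflation optimization described above
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dsun.reflect.inflationThreshold=0"
    export JAVA_OPTIONS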

    Read the article

  • Push-Based Events in a Services Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a services oriented architecture (on top of Thrift), where I need to expose events and allow listeners. My initial thought was to create an EventService to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts, which are determined using Zookeeper-based service discovery. So, I'd probably use JMS inside of EventService, mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners).

    When I started considering this, I began looking into the differences between queues and topics. Topics unfortunately won't work for me, because (at least for now) all listeners must receive the message, even if they were down at the time the event was pushed, or hadn't made a subscription yet because they hadn't completed startup (during deployment, for example); messages should be queued until the service is available.

    However, I don't want EventService to be responsible for handling all of the events; I don't think it should have the code to react to events inside of it. Each of the services should do what it needs with a given event. This would mean that each service would need a JMS connection, which calls into question the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather there be a single service that's responsible for determining how to distribute events).

    What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each copy back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event would become 3 events in JMS). Then, another thread in EventService (which is replicated, running on multiple hosts) would pull from the queue, attempt to make the service call to the "listener", and return the message to the queue (if the service is down) or discard the message (if the listener completed successfully).

    tl;dr If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
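    A minimal sketch of the fan-out step described above, in Python for brevity; the function name, the event/listener schemas, and the queue's put() method are illustrative stand-ins (a real EventService would hand the message to its JMS client here):

    import json
    import uuid

    def fan_out(event, listener_configs, queue):
        """Replicate one incoming event into one queued message per listener.

        event            -- dict with 'type' and 'payload' keys (illustrative schema)
        listener_configs -- listener endpoints pulled from the configuration source
        queue            -- anything with a put(str) method; stands in for JMS here
        """
        for listener in listener_configs:
            message = {
                "id": str(uuid.uuid4()),                    # unique per replica
                "event_type": event["type"],
                "payload": event["payload"],
                "listener_endpoint": listener["endpoint"],  # which service call this copy is for
                "attempts": 0,                              # bumped when delivery fails and the message is requeued
            }
            queue.put(json.dumps(message))

    # usage sketch: with 3 listeners configured, one published event becomes
    # 3 queued messages, each carrying the endpoint the delivery thread should call

    Stamping the listener endpoint onto each replica is what lets the delivery threads stay generic: they only need the envelope, not any listener-specific code, which keeps the event-reaction logic in the listening services where it belongs.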

    Read the article

  • At $20/month Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decently performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like Mobile Tracing and Vehicle Tracing, but due to the high cost of Windows hosting I developed those services using PHP (not an easy task for a .NET developer) and hosted them on Linux servers. With the recent evolution of Windows Azure, though, hosting ASP.NET websites on highly reliable servers is affordable. Today anyone can host a responsive and highly available ASP.NET website for just $20/month using Windows Azure.

    My website coziie.com is running on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are under 3 seconds. All I spend to run this website is around $20/month; if you translate it to Indian rupees, it's roughly Rs.1000. The web server of coziie.com is powered by a single Extra Small Web role instance and the backend is powered by a SQL Azure instance. Azure is quite impressive to provide 99.97% uptime. Response times during peaks are around 3 seconds, and under normal load around 1.5 seconds. Here is the report of uptime provided by Royal Pingdom over the last year.

    For just $20/month, Windows Azure takes care of the following apart from hosting:
    - Patches the Windows OS to the latest version
    - Upgrades ASP.NET to the latest version (coziie.com is running on ASP.NET MVC 3, and soon I'll upgrade it to ASP.NET MVC 4)
    - Hosts data on the latest and best version of the SQL Server database
    - SQL Azure maintains 3 copies of the database and automatically recovers from server failures and disasters; I never worry about database backups/restores
    - Provides a staging environment for deploying applications for testing before moving them to production (I upgrade twice a month on average)

    With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team, and I just focus on building my website. Wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance of this blog!! If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with; as your traffic grows, you can move to paid hosting.

    Read the article

  • Can see samba shares but not access them

    - by nitefrog
    For the life of me I cannot figure this one out. I have Samba installed and set up on the Ubuntu box, and from the Win7 box I CAN SEE all the shares I created. I created two users on Ubuntu that map to the users in Windows. On Ubuntu they are both admins (users A & B); on Windows, user A is an admin and user B is a power user. User A can see both shares and access them, but user B can see everything yet only access the homes directory; the other directory throws an error. I have two drives in Ubuntu, and this is the smb.conf file (I am new to Samba):

    [global]
       workgroup = WORKGROUP
       server string = %h server (Samba, Ubuntu)
       wins support = no
       dns proxy = yes
       name resolve order = lmhosts host wins bcast
       log file = /var/log/samba/log.%m
       max log size = 1000
       syslog = 0
       panic action = /usr/share/samba/panic-action %d
       security = user
       encrypt passwords = true
       passdb backend = tdbsam
       obey pam restrictions = yes
       unix password sync = yes
       passwd program = /usr/bin/passwd %u
       passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
       pam password change = yes
       map to guest = bad user
    ;  usershare max shares = 100
       usershare allow guests = yes

    And here is the share section. Both users A & B can access this from Windows with no problems:

    [homes]
       comment = Home Directories
       browseable = no
       writable = yes

    Both users A & B can see this share, but only user A can access it; user B gets an error thrown:

    [stuff]
       comment = Unixmen File Server
       path = /media/data/appinstall/
       browseable = yes
    ;  writable = no
       read only = yes
       hosts allow =

    The permissions for /media/data/appinstall/ are as follows:

    appInstall properties:
       share name: stuff
       Allow others to create and delete files in this folder: checked
       Guest access (for people without a user account): checked
    permissions:
       Owner: user A - Folder Access: Create and delete files, File Access: ---
       Group: user A - Folder Access: Create and delete files, File Access: ---
       Others - Folder Access: Create and delete files, File Access: ---

    I am at a loss and need to get this working. Any ideas? The goal is to have a setup like this: 3 users on Windows machines. Each user will have their own personal folder on the data drive that only they can access, plus another folder where 2 of the users have read-only access and one user has full access. I had this setup before on Windows, but after what happened I am NEVER going back to Windows, so Unix here I am to stay! I am really stuck. I am running Ubuntu 11. I could reformat again and put on version 10 if that would make life easier. I have been dealing with this since Wed. 3pm. Thanks.
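    One thing worth trying, as an assumption rather than a confirmed fix: the [stuff] share ends with a dangling hosts allow = line, and spelling out the allowed accounts explicitly with valid users is a common way to rule out access-control surprises. A hedged variant of the share (userA/userB are placeholders for the real Samba account names):

    [stuff]
       comment = Unixmen File Server
       path = /media/data/appinstall/
       browseable = yes
       read only = yes
       valid users = userA userB   ; placeholders: substitute the actual account names

    It is also worth checking from a shell whether user B can actually traverse the directory at the filesystem level (e.g. ls -ld /media/data/appinstall), since Samba access is gated by the Unix permissions as well.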

    Read the article

< Previous Page | 219 220 221 222 223 224 225 226 227 228 229 230  | Next Page >