Search Results

Search found 21939 results on 878 pages for 'victor may'.


  • PASS: Bylaw Change 2013

    - by Bill Graziano
    PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors. Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada. We’ve collected much of that discussion on our Global Growth site. You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings. We also addressed these plans at the Board Q&A during the 2012 Summit.

    One of the biggest changes coming out of this process is around how we elect Board members, and that requires a change to the bylaws. We published the proposed bylaw changes as a red-lined document so you can clearly see the changes. Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document, and address other minor issues in the document. There are numerous small wording changes throughout the document. For example, we replaced every reference to “The Corporation” with the word “PASS” so it now reads “PASS is organized…”.

    Board Composition
    The biggest change in these bylaw changes is how the Board is composed and elected. This discussion starts in section VI.2. This section now says that some elected directors will come from geographic regions. I think this is the best way to make sure we give all of our members a voice in the leadership of the organization. The key parts of this section are:

    The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors.

    Currently PASS has six At-Large Directors elected by the members. These changes allow for a Regional Director position that is elected by the members but must come from a particular region. They also stipulate that there must always be at least one Director-at-Large who can come from any region.

    We also understand that PASS is currently a very US-centric organization. Our Summit is held in America, roughly half our chapters are in the US and Canada, and most of the Board members over the last ten years have come from America. We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role, by ensuring that two Regional seats are reserved specifically for Canada and the US. Other than that, the bylaws don’t create any specific regional seats. These rules allow us to create Regional Director seats but don’t require it.
    We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it, or how many regions there will be. In our discussions we’ve broadly discussed regions for: the United States and Canada; Europe, the Middle East, and Africa (EMEA); Australia, New Zealand, and Asia (also known as Asia Pacific or APAC); and Mexico, South America, and Central America (LATAM). As you can see, our thinking is that there will be a few large regions. I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas. The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents.

    While the bylaws only require a single non-region-specific At-Large Director, I would expect we would always have two. That way we can have one in each election. I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest. The Board is required to have any regions defined prior to the start of the election process.

    Board Elections – Regional Seats
    We spent a lot of time discussing how the elections would work for these Regional Director seats. Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat. Section VIII.3 reads:

    Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board.

    We debated whether to have multiple ballots or one single ballot. Multiple-ballot elections get complicated quickly. Let’s say we have a ballot for US/Canada and one for Region 2. After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat, or have another election for the at-large position. We think the best way to do this is a single ballot, putting the highest vote-getters into the most restrictive seats. Let’s look at an example. There are seats open for Region 1, Region 2, and at-large. The election results are as follows:

    Candidate A (eligible for Region 1) – 550 votes
    Candidate B (eligible for Region 1) – 525 votes
    Candidate C (eligible for Region 1) – 475 votes
    Candidate D (eligible for Region 2) – 125 votes
    Candidate E (eligible for Region 2) – 75 votes

    In this case, Candidate A is the winner for Region 1 and is assigned that seat. Candidate D is the winner for Region 2 and is assigned that seat. The at-large seat is filled by the highest remaining vote-getter, which is Candidate B. The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded. This would be true whether we had multiple ballots or a single ballot.
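    To make the single-ballot counting concrete, here is a minimal sketch of the seat-assignment logic described above, run against the example election. It is an illustration only, not code used by PASS, and the data structures are assumptions.

        def assign_seats(candidates, open_seats):
            """Assign the highest vote-getters to the most restrictive seats first.

            candidates: list of (name, votes, region) tuples
            open_seats: seat labels, regional seats listed before "at-large"
            """
            remaining = sorted(candidates, key=lambda c: c[1], reverse=True)
            results = {}
            for seat in open_seats:
                for cand in remaining:
                    name, votes, region = cand
                    # A regional seat requires a candidate from that region;
                    # the at-large seat may be filled by anyone.
                    if seat == "at-large" or region == seat:
                        results[seat] = name
                        remaining.remove(cand)
                        break
            return results

        election = [("Candidate A", 550, "Region 1"), ("Candidate B", 525, "Region 1"),
                    ("Candidate C", 475, "Region 1"), ("Candidate D", 125, "Region 2"),
                    ("Candidate E", 75, "Region 2")]
        print(assign_seats(election, ["Region 1", "Region 2", "at-large"]))
        # {'Region 1': 'Candidate A', 'Region 2': 'Candidate D', 'at-large': 'Candidate B'}

    Note how Candidate B wins the at-large seat with 525 votes even though Candidate D took Region 2 with only 125, which is exactly the lower-total-wins-a-seat situation described above.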
    Board Elections – Vacant Seats
    The other change to the election process is for vacant Board seats. The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future. The most common case is when a Board member moves to an Officer role in the middle of their term. One of the key changes is to allow the number of votes members have to match the number of open seats. This allows each voter to express their preference on all open seats. This only applies when we know about the opening prior to the call for nominations. This all means that if a seat will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections. Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible.

    We discussed the option of changing the bylaws to just take the next highest vote-getter in all other cases. I think that’s wrong for the following reasons:

    All voters aren’t able to express an opinion on all candidates. If there are five people running for three seats, you can only vote for three. You have no way to express your preference between #4 and #5.
    Different candidates may have different information about the number of seats available. A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote-getters will end up on the Board while the rest of the members believe there are only three openings. This may affect someone’s decision to run. I don’t think this creates a transparent, fair election.
    Board members may use their knowledge of the election results to decide whether to remain on the Board or not. Admittedly this one is unlikely, but I don’t want to create a situation where this accusation can be leveled.

    I think the majority of vacancies in the future will be handled through elections. The bylaw section quoted above also indicates that partial-term vacancies will be filled after the full-term seats are filled.

    Removing Directors
    Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director. We also had a clause that allowed appointed directors to be removed. We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote. The updated text reads:

    Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws.

    The first paragraph is updated and the second and third are unchanged (except cleaning up language). If you scroll down and look at Addendum B of these bylaws you find the following:

    Cause for suspension or dismissal of a member of the Board of Directors may include:
    Inability to attend Board meetings on a regular basis.
    Inability or unwillingness to act in a capacity designated by the Board of Directors.
    Failure to fulfill the responsibilities of the office.
    Inability to represent the Region elected to represent.
    Failure to act in a manner consistent with PASS’s Bylaws and/or policies.
    Misrepresentation of responsibility and/or authority.
    Misrepresentation of PASS.
    Unresolved conflict of interests with Board responsibilities.
    Breach of confidentiality.

    The line about inability to represent your region is what we added to the bylaws in this revision. We also added a clause to section VII.3 allowing the Board to remove an officer. That clause is much less restrictive: it doesn’t require cause and only requires a simple majority.

    The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal.

    Other
    There are numerous other small changes throughout the document.

    Proxy voting. The laws around how members and Board members may vote by proxy are specific in Illinois law. PASS is an Illinois corporation and is subject to Illinois laws. We changed section IV.5 to come into compliance with those laws. Specifically, this says you can only vote through a proxy if you have a written proxy through your authorized attorney.

    English language proficiency. As we increase our global footprint we come across more members who aren’t native English speakers. The business of PASS is conducted in English and it’s important that our Board members speak English. If we get big enough to afford translators, we may be able to relax this, but right now we need English language skills for effective Board members.

    Committees. The language around committees in section IX is old and dated. Our lawyers advised us to clean it up. This section specifically applies to any committees that the Board may form outside of portfolios. We removed the term limits, quorum, and vacancies clauses. We don’t currently have any committees that this would apply to. The Nominating Committee is covered elsewhere in the bylaws.

    Electronic Votes. The change allows the Board to vote via email, but the results must be unanimous. This is to conform with Illinois state law.

    Immediate Past President. There was no mechanism to fill the IPP role if an outgoing President chose not to participate. We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote.

    Nominations Committee. We’ve opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee.

    Revocation of Charters. The language surrounding the revocation of charters for local groups was flagged by the lawyers. We have allowed for the local user group to make all necessary payments before the return of items to PASS is considered, if required.

    Bylaw notification. We’ve spent countless meetings working on these bylaws with the intent of not opening them again any time in the near future. Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved. I’m proud that the Board has remained committed to transparency and accountability to members. This clause will require that same level of commitment in the future, even when all the current Board members have rolled off.

    I think that covers everything. I’d encourage you to look through the red-line document and see the changes. It’s helpful to look at the language that’s being removed and the language that’s being added. I’m happy to answer any questions here, or you can email them to [email protected].

    Read the article

  • How to deal with configuration style warnings occurring during TexLive 2012 installation?

    - by JJD
    I followed the advice of izx on how to install TexLive 2012 using the texlive-backports PPA. Before I started, I removed all TexLive-related packages. The installation finished and everything seems to work fine. The only thing I noticed was some warnings in the output of the installer. Here is an excerpt of the output: Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg There are more of that kind in the rest of the output: $ sudo apt-get install texlive Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym Suggested packages: texlive-doc-en purifyeps chktex latexmk dvipng xindy dvidvi fragmaster lacheck latexdiff t1utils The following NEW packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym 0 upgraded, 29 newly installed, 0 to remove and 17 not upgraded. Need to get 0 B/274 MB of archives. After this operation, 450 MB of additional disk space will be used. Do you want to continue [Y/n]? Preconfiguring packages ... Selecting previously unselected package tex-common. (Reading database ... 290206 files and directories currently installed.) Unpacking tex-common (from .../tex-common_3.13~ubuntu12.04.1_all.deb) ... Selecting previously unselected package lmodern. Unpacking lmodern (from .../lmodern_2.004.1-5~precise1_all.deb) ... Selecting previously unselected package tex-gyre. Unpacking tex-gyre (from .../tex-gyre_2.004.1-4~precise1_all.deb) ... Selecting previously unselected package libgraphite3. Unpacking libgraphite3 (from .../libgraphite3_1%3a2.3.1-0.2build1_amd64.deb) ... Selecting previously unselected package libkpathsea6. Unpacking libkpathsea6 (from .../libkpathsea6_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package libptexenc1. Unpacking libptexenc1 (from .../libptexenc1_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-common. Unpacking texlive-common (from .../texlive-common_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-binaries. Unpacking texlive-binaries (from .../texlive-binaries_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-doc-base. Unpacking texlive-doc-base (from .../texlive-doc-base_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-base. 
Unpacking texlive-base (from .../texlive-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base. Unpacking texlive-latex-base (from .../texlive-latex-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended. Unpacking texlive-latex-recommended (from .../texlive-latex-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package latex-xcolor. Unpacking latex-xcolor (from .../latex-xcolor_2.11-1_all.deb) ... Selecting previously unselected package pgf. Unpacking pgf (from .../archives/pgf_2.10-1_all.deb) ... Selecting previously unselected package latex-beamer. Unpacking latex-beamer (from .../latex-beamer_3.10-1_all.deb) ... Selecting previously unselected package texlive-generic-recommended. Unpacking texlive-generic-recommended (from .../texlive-generic-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks. Unpacking texlive-pstricks (from .../texlive-pstricks_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package prosper. Unpacking prosper (from .../prosper_1.00.4+cvs.2007.05.01-4_all.deb) ... Selecting previously unselected package ps2eps. Unpacking ps2eps (from .../ps2eps_1.68-1_amd64.deb) ... Selecting previously unselected package ttf-marvosym. Unpacking ttf-marvosym (from .../ttf-marvosym_0.1+dfsg-2_all.deb) ... Selecting previously unselected package texlive-fonts-recommended. Unpacking texlive-fonts-recommended (from .../texlive-fonts-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive. Unpacking texlive (from .../texlive_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-extra-utils. Unpacking texlive-extra-utils (from .../texlive-extra-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-font-utils. Unpacking texlive-font-utils (from .../texlive-font-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-fonts-recommended-doc. Unpacking texlive-fonts-recommended-doc (from .../texlive-fonts-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base-doc. Unpacking texlive-latex-base-doc (from .../texlive-latex-base-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended-doc. Unpacking texlive-latex-recommended-doc (from .../texlive-latex-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks-doc. Unpacking texlive-pstricks-doc (from .../texlive-pstricks-doc_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package tipa. Unpacking tipa (from .../tipa_2%3a1.3-17~precise1_all.deb) ... Processing triggers for doc-base ... Processing 5 added doc-base files... Registering documents with scrollkeeper... Processing triggers for man-db ... Processing triggers for fontconfig ... Processing triggers for install-info ... Setting up tex-common (3.13~ubuntu12.04.1) ... Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call texlive-base is not ready, skipping fmtutil-sys --all call Setting up lmodern (2.004.1-5~precise1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tex-gyre (2.004.1-4~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up libgraphite3 (1:2.3.1-0.2build1) ... Setting up libkpathsea6 (2012.20120628-1~ubuntu12.04.1) ... Setting up libptexenc1 (2012.20120628-1~ubuntu12.04.1) ... Setting up texlive-common (2012.20120611-3~ubuntu12.04.1) ... Setting up texlive-binaries (2012.20120628-1~ubuntu12.04.1) ... update-alternatives: using /usr/bin/xdvi-xaw to provide /usr/bin/xdvi.bin (xdvi.bin) in auto mode. update-alternatives: using /usr/bin/bibtex.original to provide /usr/bin/bibtex (bibtex) in auto mode. mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Building format(s) --refresh. This may take some time... done. Setting up texlive-doc-base (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up ps2eps (1.68-1) ... Setting up ttf-marvosym (0.1+dfsg-2) ... Setting up texlive-fonts-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-pstricks-doc (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... 
Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call Setting up texlive-base (2012.20120611-3~ubuntu12.04.1) ... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. /usr/bin/tl-paper: setting paper size for dvips to a4. /usr/bin/tl-paper: setting paper size for dvipdfmx to a4. /usr/bin/tl-paper: setting paper size for xdvi to a4. /usr/bin/tl-paper: setting paper size for pdftex to a4. Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all. This may take some time... done. Processing triggers for tex-common ... Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-generic-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-fonts-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-extra-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-font-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all --cnffile /etc/texmf/fmt.d/10texlive-latex-base.cnf. This may take some time... done. Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-pstricks (2012.20120611-1~ubuntu12.04.1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tipa (2:1.3-17~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up prosper (1.00.4+cvs.2007.05.01-4) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Setting up texlive (2012.20120611-3~ubuntu12.04.1) ... Setting up latex-xcolor (2.11-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Setting up pgf (2.10-1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Setting up latex-beamer (3.10-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Processing triggers for libc-bin ... ldconfig deferred processing now taking place
    What exactly is 10lmodern.cfg good for? How can I prevent these warnings? Here is the output of sudo update-updmap: $ sudo update-updmap Regenerating '/var/lib/texmf/updmap.cfg-DEBIAN'... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg done. Regenerating '/var/lib/texmf/updmap.cfg-TEXLIVEDIST'... done. 
update-updmap has updated the following file(s): /var/lib/texmf/updmap.cfg-DEBIAN /var/lib/texmf/updmap.cfg-TEXLIVEDIST If you want to enable the map files with this new file, you should run updmap-sys or updmap.

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given it as much thought as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows.

    For some people local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

    Local Storage
    Pros
    You are in control of your data
    Your files are portable and can go with you when needed if using external or flash drives
    Files are accessible without an internet connection
    You can easily add more storage capacity as needed (additional drives, etc.)
    Cons
    You need to arrange room for your storage media (if you have multiple external drives, etc.)
    Possible hardware failure
    No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
    Theft and/or loss of home with all contents due to circumstances like fire

    If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen…unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure…expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

    Cloud Storage
    Pros
    No need to carry around flash or bulky external drives
    All of your files are accessible wherever there is an internet connection
    No need to deal with local storage media (or its upkeep)
    Your files are still safe if your home is broken into or other unfortunate circumstances occur
    Cons
    Your files and data are not 100% under your control
    Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
    No access to your files if you do not have an internet connection
    The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
    The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

    You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst?
    In the rush to cloud adoption some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may not work anymore on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.

    What’s the Problem?
    Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I heard many companies claim “it’s cheaper”, or “it allows us to scale”, without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as “we have a 500% increase in traffic twice a year, so we need to burst in the cloud to avoid doubling our network and server capacity”. Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability?
    I stopped counting the number of times I heard “the cloud doesn’t scale; our database runs faster on a laptop”. While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn’t scale. However, another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime?
    Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven’t done so. If you are expecting 99.99% uptime annually you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more) based on current overall conditions of the cloud service provider, most of which are outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users.
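    To make that retry guidance concrete, here is a minimal sketch of a retry loop with exponential backoff and jitter around a cloud call; the function names, limits, and the example call are illustrative assumptions, not part of the original post.

        import random
        import time

        def call_with_retries(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
            """Call `operation`, retrying transient failures with exponential backoff.

            `operation` is any callable that raises an exception on a transient
            failure (e.g. a dropped connection during a cloud failover).
            """
            for attempt in range(1, max_attempts + 1):
                try:
                    return operation()
                except Exception:
                    if attempt == max_attempts:
                        raise  # out of attempts; surface the failure to the caller
                    # Exponential backoff: 1s, 2s, 4s, ... capped at max_delay,
                    # plus random jitter so retries from many clients spread out.
                    delay = min(base_delay * 2 ** (attempt - 1), max_delay)
                    time.sleep(delay + random.uniform(0, delay / 2))

        # Illustrative usage with a hypothetical service call:
        # order = call_with_retries(lambda: cloud_service.get_order("12345"))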
    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration, and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

    Read the article

  • Migration of a .NET COM object to 64 bit

    - by Victor Ronin
    Hi, We have a C++ application which uses several COM objects. The COM objects are .NET-based (using COM Interop). I need to migrate the application to 64 bit; I specifically need the C++ application to be 64 bit. I don't want to recompile all of the .NET COM objects to 64 bit and deliver two sets of DLLs (32-bit and 64-bit). I was investigating and found that I can load 32-bit COM DLLs in a 32-bit surrogate process (using DllSurrogate in the registry). I know how to do that, but it means that all COM objects will become out of process. In the C++ I had the code: CoCreateInstance(CLSID_SomeClass, NULL, CLSCTX_INPROC_SERVER, IID_SomeInterface, (void**)&pobj); It worked fine, but as soon as I switch to CLSCTX_LOCAL_SERVER (and add the registry keys for DllSurrogate), it can't find interfaces (error 0x80004002, E_NOINTERFACE). I checked the registry and found out that when a .NET COM DLL is registered, it adds the CLSID registry keys but doesn't add the Interface and TypeLib registry keys. The question is: how do I create these registry keys for .NET COM? Regards, Victor

    Read the article

  • Take Advantage of Oracle's Ongoing Assurance Effort!

    - by eric.maurice
    Hi, this is Eric Maurice again! A few years ago, I posted a blog entry that discussed the psychology of patching. The point of that entry was that a natural tendency existed for systems and database administrators to be reluctant to apply patches, even security patches, because of the fear of "breaking" the system. Unfortunately, this belief in the principle "if it ain't broke, don't fix it!" creates significant risks for organizations. Running systems without applying the proper security patches can greatly compromise the security posture of the organization, because the security controls available in the affected system may be compromised as a result of the unfixed vulnerabilities. As a result, Oracle continues to strongly recommend that customers apply all security fixes as soon as possible.

    Most recently, I have had a number of conversations with customers who questioned the need to upgrade their highly stable but otherwise unsupported Oracle systems. These customers wanted to know more about the kind of security risks they were exposed to by running obsolete versions of Oracle software. As per Oracle Support Policies, Critical Patch Updates are produced for currently supported products. In other words, Critical Patch Updates are not created by Oracle for product versions that are no longer covered under the Premier Support or Extended Support phases of the Lifetime Support Policy. One statement used in each Critical Patch Update Advisory is particularly important: "We recommend that customers upgrade to a supported version of Oracle products in order to obtain patches. Unsupported products, releases and versions are not tested for the presence of vulnerabilities addressed by this Critical Patch Update. However, it is likely that earlier versions of affected releases are also affected by these vulnerabilities."

    The purpose of this warning is to inform Oracle customers that a number of the vulnerabilities fixed in each Critical Patch Update may affect older versions of a specific product line. In other words, each Critical Patch Update provides a number of fixes for currently supported versions of a given product line (this information is listed for each bug in the Risk Matrices of the Critical Patch Update Advisory), but the unsupported versions in the same product line, while they may be affected by the vulnerabilities, will not receive the fixes, and are therefore vulnerable to attacks. The risk assumed by organizations wishing to remain on unsupported versions is amplified by the behavior of malicious hackers, who typically will attempt to, and sometimes succeed in, reverse-engineering the content of vendors' security fixes. As a result, it is not uncommon for exploits to be published soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert.

    Let's consider now the nature of the vulnerabilities that may exist in obsolete versions of Oracle software. A number of severe vulnerabilities have been fixed by Oracle over the years. While Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update, it should be assumed that a number of the vulnerabilities fixed with the Critical Patch Update program do exist in unsupported versions (regardless of the product considered).
    The most severe vulnerabilities fixed in past Critical Patch Updates may result in full compromise of the targeted systems, down to the OS level, by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 10.0) or, almost as critically, may result in the compromise of the affected systems (without compromising the underlying OS) by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 7.5). Such vulnerabilities may result in complete takeover of the targeted machine (for the CVSS 10.0), or may allow the attacker to create a denial of service against the affected system or even hijack or steal all the data hosted by the compromised system (for the CVSS 7.5).

    The bottom line is that organizations should assume the worst case: that the most critical vulnerabilities are present in their unsupported version. Therefore, it is Oracle's recommendation that all organizations move to supported systems and apply security patches in a timely fashion. Organizations that currently run supported versions but may be behind in their security patch release level can quickly catch up because most Critical Patch Updates are cumulative. With a few exceptions noted in Oracle's Critical Patch Update Advisory, the application of the most recent Critical Patch Update will bring these products to the current security patch level and provide the organization with the best possible security posture for their patch level. Furthermore, organizations are encouraged to upgrade to the most recent versions, as this will greatly improve their security posture. At Oracle, our security fixing policies state that security fixes are produced for the main code line first, and as a result, our products benefit from the fixes for mistakes made in previous versions. Our ongoing assurance effort ensures that we work diligently to fix the vulnerabilities we find, and aim at constantly improving the security posture our products provide by default. Patch sets include numerous in-depth fixes in addition to those delivered through the Critical Patch Update and, in certain instances, important security fixes require major architectural changes that can only be included in new product releases (and cannot be backported through the Critical Patch Update program).

    For More Information:
    • Mary Ann Davidson is giving a webcast interview on Oracle Software Security Assurance on February 24th. The registration link for attending this webcast is located at http://event.on24.com/r.htm?e=280304&s=1&k=6A7152F62313CA09F77EBCEEA9B6294F&partnerref=EricMblog
    • A blog entry discussing Oracle's practices for ensuring the quality of Critical Patch Updates can be found at http://blogs.oracle.com/security/2009/07/ensuring_critical_patch_update_quality.html
    • The blog entry "To patch or not to patch" is located at http://blogs.oracle.com/security/2008/01/to_patch_or_not_to_patch.html
    • Oracle's Support Policies are located at http://www.oracle.com/us/support/policies/index.html
    • The Critical Patch Update & Security Alert page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; however, the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency.

    Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.

    Pattern: Pull from Cloud
    The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time.
    Pros: On-premise assets not exposed to the Internet; firewall issues avoided by only initiating outbound connections
    Cons: Polling mechanisms may affect performance; may not satisfy near real-time requirements
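    As a rough illustration of the pull pattern above, the sketch below polls a hypothetical SaaS endpoint on an hourly schedule from behind the firewall; the URL, token, and handler are placeholder assumptions, not a real Oracle or RightNow API.

        import time
        import urllib.request

        POLL_URL = "https://saas.example.com/api/events"  # placeholder endpoint
        POLL_INTERVAL_SECONDS = 3600  # e.g. an hourly batch pickup

        def poll_once():
            # The on-premise side initiates an outbound HTTPS call, so no
            # inbound firewall ports need to be opened.
            request = urllib.request.Request(
                POLL_URL, headers={"Authorization": "Bearer <token>"})
            with urllib.request.urlopen(request) as response:
                payload = response.read()
            process_locally(payload)

        def process_locally(payload):
            # Hand off to the on-premise application or database.
            print("picked up", len(payload), "bytes from the cloud")

        while True:
            poll_once()
            time.sleep(POLL_INTERVAL_SECONDS)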
    Pattern: Open Firewall Ports
    The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door allows the postman to drop small parcels, but there is still concern about cutting new holes for larger packages.
    Pros: Optimal pattern for near real-time needs; simpler administration once the service is provisioned
    Cons: Needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access

    Pattern: Virtual Private Networking
    The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home.
    Pros: Individual firewall ports don't need to be opened; more suited for high scalability needs; can support large-volume data integration; easier management of one connection vs a multitude of open ports
    Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing

    Pattern: Reverse Proxy / API Gateway
    The on-premise system uses reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels to your mailbox, from where you can securely retrieve them.
    Pros: Very secure; very flexible
    Cons: Introduces a new software component; needs DMZ deployment and management

    Pattern: On-Premise Agent (Tunneling)
    A lightweight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH tunneling, etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, but you still take precautions with chain locks and package inspections.
    Pros: Lightweight software; IT doesn't need to set up anything
    Cons: May bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software

    Conclusion
    The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, synchronous vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; however, the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) into on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.
Pattern: Pull from Cloud
The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. (A minimal polling sketch appears after this excerpt.)
Pros: On-premise assets not exposed to the Internet; firewall issues avoided by only initiating outbound connections.
Cons: Polling mechanisms may affect performance; may not satisfy near real-time requirements.
Pattern: Open Firewall Ports
The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door lets the postman drop off small parcels, but there is still concern about cutting new holes for larger packages.
Pros: Optimal pattern for near real-time needs; simpler administration once the service is provisioned.
Cons: Needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access.
Pattern: Virtual Private Networking
The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages and then drops them inside your home.
Pros: Individual firewall ports don't need to be opened; more suited for high-scalability needs; can support large-volume data integration; easier management of one connection vs. a multitude of open ports.
Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing.
Pattern: Reverse Proxy / API Gateway
The on-premise system uses a reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys: you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels into your mailbox, from where you can securely retrieve them.
Pros: Very secure; very flexible.
Cons: Introduces a new software component; needs DMZ deployment and management.
Pattern: On-Premise Agent (Tunneling)
A lightweight "agent" sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches, using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH tunneling, etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door; however, you still take precautions with chain locks and package inspections.
Pros: Lightweight software; IT doesn't need to set up anything.
Cons: May bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software.
Conclusion
The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, synchronous vs. asynchronous implementation, near real-time vs. batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
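Here is the minimal "Pull from Cloud" sketch promised above, in Python. It is illustrative only: the endpoint, credentials, and payload shape are hypothetical placeholders, not a real RightNow or Oracle Messaging Service API, and a production poller would add retry, paging, and checkpoint persistence.

import time
import requests

# The on-premise side initiates every connection, so no inbound
# firewall ports are required (hypothetical endpoint and auth).
CLOUD_ENDPOINT = "https://cloud.example.com/api/events"
POLL_INTERVAL_SECONDS = 3600  # e.g. an hourly batch pickup

def dispatch_to_on_premise(event):
    # Placeholder for the real handoff to EBS, a database, etc.
    print("processing", event["id"])

def poll_once(session, last_seen_id):
    # Fetch everything newer than the last event we handled.
    resp = session.get(CLOUD_ENDPOINT, params={"after": last_seen_id}, timeout=30)
    resp.raise_for_status()
    events = resp.json()
    for event in events:
        dispatch_to_on_premise(event)
    return events[-1]["id"] if events else last_seen_id

def main():
    last_seen_id = 0
    with requests.Session() as session:
        session.auth = ("api_user", "api_password")  # hypothetical credentials
        while True:
            last_seen_id = poll_once(session, last_seen_id)
            time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()

The trade-off named in the Cons is visible here: the polling interval bounds how stale the data can be, so near real-time requirements push you toward one of the push-style patterns.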

    Read the article

  • Solr error; Anybody know what this means?

    - by Camran
    I am installing Solr on my VPS (Ubuntu 9.10) via PuTTY. First, I thought about installing Solr with Tomcat, but after installing Tomcat I changed my mind and went with the Jetty that comes with Solr. Now that I have set up everything on my server, when I try to start the "start.jar" file I get some errors... Here is some text from the log file:
2010-05-29 00:22:42.074::INFO: jetty-6.1.3
2010-05-29 00:22:42.134::INFO: Extract jar:file:/var/www/webapps/solr.war!/ to /var/www/work/Jetty_0_0_0_0_8983_solr.war__solr__k1kf17/webapp
May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
INFO: Solr home set to 'solr/'
May 29, 2010 12:22:42 AM org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init()
May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
May 29, 2010 12:22:42 AM org.apache.solr.core.CoreContainer$Initializer initialize
INFO: looking for solr.xml: /var/www/solr/solr.xml
May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
INFO: Solr home set to 'solr/'
Anybody know what this is? Thanks

    Read the article

  • XAMPP - Apache service stops running after a few seconds.

    - by Fábio Antunes
    Hello, I have a big problem with my XAMPP server: for some reason the Apache service stops running a few seconds after it has been started, I have no idea what the problem is, and the error logs don't say much about it.
[Fri May 07 01:09:32 2010] [notice] Digest: generating secret for digest authentication ...
[Fri May 07 01:09:32 2010] [notice] Digest: done
[Fri May 07 01:09:33 2010] [notice] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
[Fri May 07 01:09:33 2010] [notice] Server built: Nov 11 2009 14:29:03
[Fri May 07 01:09:33 2010] [crit] (22)Invalid argument: Parent: Failed to create the child process.
[Fri May 07 01:09:33 2010] [crit] (OS 6)O identificador é inválido. : master_main: create child process failed. Exiting.
[Fri May 07 01:09:33 2010] [notice] Parent: Forcing termination of child process 36
("O identificador é inválido" (pt_PT) = "The identifier is invalid.") Note: No other application is using the Apache port. I have made some changes to the httpd.conf file, but it has worked well for a long time: I added some virtual hosts and enabled Xdebug. Has this happened to anyone who could tell me what the problem is? Thanks for your time.

    Read the article

  • How to change my W2k8 System Partition?

    - by Chris May
    On my Windows 2008 server, my C: is 1.5 TB, and the partition is marked as: Healthy (Boot, Page File, Active, Crash Dump, Primary Partition), and somehow I ended up with a 2GB D: that is marked as Healthy (System). On this D: drive are only a few MB worth of files (bootmgr, boot folder, bootsect.bak), but all Windows files are on the C:. I've done everything I can to remove the (System) mark. I tried using bcdedit, I tried marking the C: partition as "Active", and I tried using bootsect.exe to assign the C: drive as the boot partition. Maybe I didn't do one of those steps correctly, but I've tried everything I can. When I got my new Dell PowerEdge T710, I didn't bother removing its two small drives before I put in my two new large drives. So I think when I installed W2k8 Server, maybe Dell left some bootable partition on their drives to help me install the OS, but I never used it and just booted right from the install CD. Can anyone help me remove the (System) mark from the D: so I can remove the D: partition and still boot to the C:? I know I could remove the D: drives and reinstall Windows, but I'm trying to avoid a total reinstall.

    Read the article

  • (Amazon AWS) EBS mount error: Stale NFS file handle

    - by May
    I have an EC2 instance that just went offline (it cannot even be pinged) but is still reflected as operational. In an effort to retrieve data stored on an attached EBS volume, I did a forced detach of the mounted volume, launched a new instance, and tried attaching the EBS volume. However, I keep getting an error ("mount: Stale NFS file handle") whenever I do so. Did I just lose all my files?
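For what it's worth, the detach/reattach sequence described here can be scripted if it has to be repeated. A hedged sketch using boto3 (a modern AWS client; the region, volume, instance ID, and device name are placeholders, not values from the post):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

VOLUME_ID = "vol-0123456789abcdef0"    # placeholder
NEW_INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

# Force-detach the volume from the dead instance, wait until AWS
# reports it as available, then attach it to the fresh instance.
ec2.detach_volume(VolumeId=VOLUME_ID, Force=True)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=NEW_INSTANCE_ID, Device="/dev/sdf")

This only automates the steps the poster already performed; it does not by itself explain or clear the stale-handle error seen at mount time.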

    Read the article

  • Clear Fillable Online PDF Files

    - by May
    I filled in an online PDF form. After I finished, I closed the window, thinking that the form would clear itself since I didn't save it. When I went back to the website and clicked on the form again, it still had the information that I had entered. Other than manually deleting all the information on the form, is there another way of clearing it?

    Read the article

  • Real-time embeddable HTTP server library required

    - by Howard May
    Having looked at several available HTTP server libraries, I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements. I need a library which presents an API which is 'pipelined'. Pipelining describes the HTTP feature where multiple HTTP requests can be sent across a TCP link without waiting for the responses. I want a similar feature in the library API, where my application can receive all of those requests without having to send a response first (I will respond, but want the ability to process multiple requests at a time to reduce the impact of internal latency). So the web server library will need to support the following flow (a sketch of this flow follows the post):
1) HTTP Client transmits http request 1
2) HTTP Client transmits http request 2
...
3) Web Server Library receives request 1 and passes it to My Web Server App
4) My Web Server App receives request 1 and dispatches it to My System
5) Web Server receives request 2 and passes it to My Web Server App
6) My Web Server App receives request 2 and dispatches it to My System
7) My Web Server App receives response to request 1 from My System and passes it to Web Server
8) Web Server transmits HTTP response 1 to HTTP Client
9) My Web Server App receives response to request 2 from My System and passes it to Web Server
10) Web Server transmits HTTP response 2 to HTTP Client
Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding. Additional requirements are:
Embeddable into an existing 'C' application
Small footprint; I don't need all the functionality available in Apache etc.
Efficient; will need to support thousands of requests a second
Allows asynchronous responses to requests; there is a small latency to responses and, given the required request throughput, a synchronous architecture is not going to work for me
Support for persistent TCP connections
Support for use with Server-Push Comet connections
Open Source / GPL
Support for HTTPS
Portable across Linux and Windows; preferably more
I will be very grateful for any recommendation. Best Regards
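The flow above is straightforward to prototype. Here is a minimal sketch in Python's asyncio (the poster needs this embeddable in C, so this is only to make the sequencing concrete, not a library recommendation): requests are dispatched as soon as they are parsed, while a separate writer task sends the responses back in arrival order, as HTTP/1.1 pipelining requires. The request parsing and the backend are deliberately simplistic placeholders.

import asyncio

async def handle_connection(reader, writer):
    # Queue of response futures in arrival order: pipelining allows many
    # outstanding requests, but responses must go out in request order.
    pending = asyncio.Queue()

    async def writer_loop():
        while True:
            fut = await pending.get()
            if fut is None:  # sentinel: connection closed
                break
            body = await fut  # wait for "My System" to answer this request
            writer.write(
                b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                % (len(body), body)
            )
            await writer.drain()

    write_task = asyncio.create_task(writer_loop())
    try:
        while True:
            # Read one request head; a real parser would also consume bodies.
            head = await reader.readuntil(b"\r\n\r\n")
            fut = asyncio.ensure_future(dispatch_to_my_system(head))
            await pending.put(fut)  # don't wait for the response: keep reading
    except asyncio.IncompleteReadError:
        pass  # client closed the connection
    finally:
        await pending.put(None)
        await write_task
        writer.close()

async def dispatch_to_my_system(request_head):
    await asyncio.sleep(0.05)  # stand-in for internal latency
    return b"hello"

async def main():
    server = await asyncio.start_server(handle_connection, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())

The essential property is in handle_connection: the read loop never waits for a response before reading the next request, which is exactly the decoupling the requirements describe.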

    Read the article

  • Full-text search locks up database - error 0x8001010e

    - by Stewart May
    Hi, we have a full-text catalog that is populated via a job every 15 minutes, like so: ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] START INCREMENTAL POPULATION We have encountered a problem where the database containing this catalog locks up. There are a couple of scenarios: we either see the job execute and the process hang with a wait type of UNKNOWN TOKEN, or we see another process hang with a wait type of MSSEARCH. Once this happens, the job continues to run but informs us that the request to start a full-text index population is ignored because a population is currently active. Looking in the full-text log files, we see the following error each time these problems occur: 2010-04-21 08:15:00.76 spid21s The full-text catalog health monitor reported a failure for full-text catalog "XXXFullTextCatalog" (5) in database "YYY" (14). Reason code: 0. Error: 0x8001010e (The application called an interface that was marshalled for a different thread.). The system will restart any in-progress population from the previous checkpoint. If this message occurs frequently, consult SQL Server Books Online for troubleshooting assistance. This is an informational message only. No user action is required. The only solution is to restart the SQL Server service and then the full-text service. This is now occurring on a daily basis, so any help would be appreciated.
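Until the root cause is found, one mitigation is to have the job check whether a population is still active before issuing the ALTER, so requests don't stack up behind a hung one. A hedged sketch in Python via pyodbc (the connection string is a placeholder; the catalog and table names are taken from the post); note this works around the symptom rather than fixing the 0x8001010e fault:

import pyodbc

# FULLTEXTCATALOGPROPERTY(..., 'PopulateStatus') returns 0 when the
# catalog is idle; any other value means a population is in progress.
STATUS_QUERY = "SELECT FULLTEXTCATALOGPROPERTY('XXXFullTextCatalog', 'PopulateStatus')"

conn = pyodbc.connect("DSN=YYY;Trusted_Connection=yes")  # placeholder DSN
cursor = conn.cursor()
status = cursor.execute(STATUS_QUERY).fetchval()

if status == 0:
    cursor.execute(
        "ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] "
        "START INCREMENTAL POPULATION"
    )
    conn.commit()
else:
    print(f"Population already active (status {status}); skipping this cycle.")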

    Read the article

  • Unexpected media key behavior on new Acer Aspire

    - by Morgan May
    I'm having weird issues with the media keys (play/pause, previous, next, etc.) on a new Acer Aspire laptop. This is the first Acer I've owned and also my first Windows 7 computer, so I'm not sure whether the behavior is a result of some hidden Acer process that I haven't rooted out yet, or some Windows 7 option that I'm not aware of, or something else. I'm experiencing two issues that I suspect are related. Both problems are intermittent but happen more often than not. The media player I'm using is Winamp. I'm pretty sure I've had the same problem when using other media players, but when I tried to verify that before posting this, I only had the problems with Winamp. Because the problems are intermittent, I'm not sure if that's significant. 1) When I press the Play/Pause media key, in addition to playing or pausing the media player, it brings up a little menu in the center of the screen that lists my removable drives (CD/DVD, USB drives, etc.). To make the menu go away I have to either click away from it or hit Escape. Selecting a drive on the menu doesn't seem to do anything. 2) When I press the Previous or Next media keys, it skips 2+ tracks instead of just one (the exact number seems to vary). I've poked around all the control panel options that I can find, and looked through all the utilities that came with the computer with no luck. There's nothing that I can find in the (very slim) documentation, either. I have a hunch that the problem is caused by whatever utility manages global hotkeys, but I haven't found any way to configure that. Any guidance would be greatly appreciated. UPDATE: It looks like Winamp was the culprit. I did have the problem when using other media players, but when I uninstalled Winamp, the problem went away. I'd like to use Winamp, but I can survive with other players.

    Read the article

  • TGT validation fails, but only for one user

    - by wzzrd
    I'm seeing the weirdest thing here. I have a couple of RHEL3, 4 and 5 machines that validate user credentials through Kerberos with an Active Directory domain controller as their KDC. This works for all of my users, save one. There is one account that is unable to log into RHEL3 Linux machines and generates the following errors there:
May 31 13:53:19 mybox sshd(pam_unix)[7186]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=user
May 31 13:53:20 mybox sshd[7186]: pam_krb5: TGT verification failed for `user'
May 31 13:53:20 mybox sshd[7186]: pam_krb5: authentication fails for `user'
Other accounts, like my own, are fine:
May 31 17:25:30 mybox sshd(pam_unix)[12913]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=myuser
May 31 17:25:31 mybox sshd[12913]: pam_krb5: TGT for myuser successfully verified
May 31 17:25:31 mybox sshd[12913]: pam_krb5: authentication succeeds for `myuser'
May 31 17:25:31 mybox sshd(pam_unix)[12915]: session opened for user myuser by (uid=0)
As you can see, TGT validation fails. This only happens for this specific account, not for any other. The failing user account's password has been reset, and I inspected both user objects in Active Directory, but I see nothing out of the ordinary. If I have the failing user account log into a RHEL4 or 5 box, there is no problem, so it must be RHEL3-specific, but the fact that only one account suffers from this eludes me. Maybe someone has seen this before?

    Read the article

  • Silverlight Cream for March 25, 2010 -- #820

    - by Dave Campbell
    In this Issue: René Schulte, Jeremy Likness, Hassan, Victor Gaudioso, SilverLaw, Mike Taulty, Phani Raj, Tim Heuer, Christian Schormann, Brad Abrams, David Anson, Diptimaya Patra, and Daniel Vaughan. Shoutouts: Last week, Koen Zwikstra announced Silverlight Spy at MIX10 Anand Iyer announced this for students on the Windows Team Blog: Be a Windows Phone 7 “Rockstar” Justin Angel blogged that Silverlight Isn't Fully Cross-Platform ... let him know if you think it's a yawn or important. On behalf of SilverlightShow, Cigdem Patlak posted MIX10: Laurent Bugnion on Silverlight adoption, WP7 and the EcoContest From SilverlightCream.com: Coding4Fun - Silverlight Real Time Face Detection René Schulte has a Coding 4 Fun article posted on facial recognition. Who better to be manipulating graphics like this than René? Sequential Asynchronous Workflows Part 2: Simplified Jeremy Likness follows up his previous post with another one that is 'simplified'. Remember his previous post began with a post on the Silverlight.net forum and Rob Eisenberg's MVVM presentation from MIX10 Windows Phone 7 Video Tutorial Hassan has a new video up on his AfricanGeek site, and that's a continuation of his previous WP7 video tutorial, adding a listbox and databinding it to the selected index of another listbox. The Los Angeles Silverlight User Group will be Streaming its March Meeting LIVE in Silverlight – Tonight! Victor Gaudioso used his Live Streaming knowledge to stream his User Group meeting last night from LA where Michael Washington presented on MVVM followed by Victor himself. That was last night. Today he has a couple of the videos up to view. Shining 3D Font Design - Silverlight 3 SilverLaw has a "Shining 3D Font" tutorial up, and a video on it here: New Video: How to create a 3D effect on a Silverlight 3 Textblock ... this is also available in the Expression Gallery. Silverlight 4 RC – Signing trusted apps with home made certificates Mike Taulty has a post up about building a hand-rolled cert to test out the XAP signing features, and then gives a nod to John Papa with a link to the Silverlight White Paper I've posted about before, because this info is in there as well. Developing a Windows Phone 7 Application that consumes OData Phani Raj has a tutorial up on consuming the NetFlix OData catalog on the WP7 emulator ... now *that* is cool! Make your Silverlight applications Speak to you with Microsoft Translator Tim Heuer used Silverlight to demonstrate Microsoft Translator as a speech synthesis tool using the Speak API included ... pretty cool, Tim ... lots of external links and code. Blend 4: About Path Layout, Sidebar – More About ListBox Than You Ever Wanted To Know Christian Schormann has another outstanding tutorial up on the ListBox and PathLayout in Expression Blend ... just check out the screen shots and you'll wanna read it! Silverlight 4 + RIA Services: Ready for Business: Updating Data in the Client This is the continuation of Brad Abrams' series on WCF RIA Services and is a tutorial on setting up to deal with updating the data. Tip: The CLR wrapper for a DependencyProperty should do its job and nothing more David Anson is posting some "Development Tips", and this is the first ... discussing making sure your DependencyProperty CLR wrapper stays on point... Create and Apply Theme Silverlight Application Diptimaya Patra has a tutorial up on creating and using themes. He states that "Themes are nothing but some predefined styles" ...
check it out and see if it's really that easy :) Building a Windows Phone 7 Puzzle Game Daniel Vaughan has a great post up starting with installing all the tools and ending with a maze game for WP7 using XNA for sound... this is the first I've seen that integrates XNA (I think). Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • "Collection was modified; enumeration operation may not execute." on form disposal.

    - by cyclotis04
    "Collection was modified; enumeration operation may not execute." appears to be a common error with foreach loops, but I can't figure mine out. I have two classes of forms. One is begun on startup, and a button creates new instances of the second form, and displays them. When I close the secondary forms, I get an InvalidOperationException. FirstForm.cs public partial class FirstForm : Form { SecondForm frmSecond; ... private void button1_Click(object sender, EventArgs e) { frmSecond= new SecondForm (); frmSecond.Show(); } } SecondForm.designer.cs partial class SecondForm { ... protected override void Dispose(bool disposing) { if (disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); // InvalidOperationException thrown here. } }

    Read the article

  • Recommendations for books on career development

    - by Victor
    For a programmer who wishes to understand the principles of economics (how money flows and the basics of money changing hands), what are some good books/magazines/publications/journals? Please assume a beginner's level and a willingness to learn fast and hard. The objective is to understand why some jobs pay more than others. On the topic of books, are there any well-known autobiographies of leading CTOs who come from a technical background?

    Read the article

  • What to Do When Windows Won’t Boot

    - by Chris Hoffman
    You turn on your computer one day and Windows refuses to boot — what do you do? “Windows won’t boot” is a common symptom with a variety of causes, so you’ll need to perform some troubleshooting. Modern versions of Windows are better at recovering from this sort of thing. Where Windows XP might have stopped in its tracks when faced with this problem, modern versions of Windows will try to automatically run Startup Repair. First Things First Be sure to think about changes you’ve made recently — did you recently install a new hardware driver, connect a new hardware component to your computer, or open your computer’s case and do something? It’s possible the hardware driver is buggy, the new hardware is incompatible, or that you accidentally unplugged something while working inside your computer. The Computer Won’t Power On At All If your computer won’t power on at all, ensure it’s plugged into a power outlet and that the power connector isn’t loose. If it’s a desktop PC, ensure the power switch on the back of its case — on the power supply — is set to the On position. If it still won’t power on at all, it’s possible you disconnected a power cable inside its case. If you haven’t been messing around inside the case, it’s possible the power supply is dead. In this case, you’ll have to get your computer’s hardware fixed or get a new computer. Be sure to check your computer monitor — if your computer seems to power on but your screen stays black, ensure your monitor is powered on and that the cable connecting it to your computer’s case is plugged in securely at both ends. The Computer Powers On And Says No Bootable Device If your computer is powering on but you get a black screen that says something like “no bootable device” or another sort of “disk error” message, your computer can’t seem to boot from the hard drive that Windows was installed on. Enter your computer’s BIOS or UEFI firmware setup screen and check its boot order setting, ensuring that it’s set to boot from its hard drive. If the hard drive doesn’t appear in the list at all, it’s possible your hard drive has failed and can no longer be booted from. In this case, you may want to insert Windows installation or recovery media and run the Startup Repair operation. This will attempt to make Windows bootable again. For example, if something overwrote your Windows drive’s boot sector, this will repair the boot sector. If the recovery environment won’t load or doesn’t see your hard drive, you likely have a hardware problem. Be sure to check your BIOS or UEFI’s boot order first if the recovery environment won’t load. You can also attempt to manually fix Windows boot loader problems using the fixmbr and fixboot commands. Modern versions of Windows should be able to fix this problem for you with the Startup Repair wizard, so you shouldn’t actually have to run these commands yourself. Windows Freezes or Crashes During Boot If Windows seems to start booting but fails partway through, you may be facing either a software or hardware problem. If it’s a software problem, you may be able to fix it by performing a Startup Repair operation. If you can’t do this from the boot menu, insert a Windows installation disc or recovery disk and use the startup repair tool from there. If this doesn’t help at all, you may want to reinstall Windows or perform a Refresh or Reset on Windows 8. 
If the computer encounters errors while attempting to perform startup repair or reinstall Windows, or the reinstall process works properly and you encounter the same errors afterwards, you likely have a hardware problem. Windows Starts and Blue Screens or Freezes If Windows crashes or blue-screens on you every time it boots, you may be facing a hardware or software problem. For example, malware or a buggy driver may be loading at boot and causing the crash, or your computer’s hardware may be malfunctioning. To test this, boot your Windows computer in safe mode. In safe mode, Windows won’t load typical hardware drivers or any software that starts automatically at startup. If the computer is stable in safe mode, try uninstalling any recently installed hardware drivers, performing a system restore, and scanning for malware. If you’re lucky, one of these steps may fix your software problem and allow you to boot Windows normally. If your problem isn’t fixed, try reinstalling Windows or performing a Refresh or Reset on Windows 8. This will reset your computer back to its clean, factory-default state. If you’re still experiencing crashes, your computer likely has a hardware problem. Recover Files When Windows Won’t Boot If you have important files that will be lost and want to back them up before reinstalling Windows, you can use a Windows installer disc or Linux live media to recover the files. These run entirely from a CD, DVD, or USB drive and allow you to copy your files to another external media, such as another USB stick or an external hard drive. If you’re incapable of booting a Windows installer disc or Linux live CD, you may need to go into your BIOS or UEFI and change the boot order setting. If even this doesn’t work — or if you can boot from the devices and your computer freezes or you can’t access your hard drive — you likely have a hardware problem. You can try pulling the computer’s hard drive, inserting it into another computer, and recovering your files that way. Following these steps should fix the vast majority of Windows boot issues — at least the ones that are actually fixable. The dark cloud that always hangs over such issues is the possibility that the hard drive or another component in the computer may be failing. Image Credit: Karl-Ludwig G. Poggemann on Flickr, Tzuhsun Hsu on Flickr     

    Read the article

  • My laptop cannot access the internet after setting up a bridge with wired and wireless NICs

    - by Victor S
    I set up a bridge using a wired NIC and a wireless NIC in order to turn the wireless NIC into a Wi-Fi AP that shares the wired internet access. After a successful setup, the Wi-Fi AP works as I expected, but my laptop cannot access the internet. Please give me a hand. Thanks.
My hostapd.conf:
$ cat hostapd.conf
interface=wlan0
bridge=br0
driver=nl80211
ssid=myAP
hw_mode=g
channel=11
dtim_period=1
rts_threshold=2347
fragm_threshold=2346
macaddr_acl=0
auth_algs=3
ieee80211n=0
wpa=3
wpa_passphrase=PassWord
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
Setup steps:
$ sudo killall hostapd
hostapd: no process found
$ sudo hostapd -B hostapd.conf
Configuration file: hostapd.conf
Using interface wlan0 with hwaddr 00:26:5e:e8:4f:8e and ssid 'myAP'
$ sudo brctl addbr br0
device br0 already exists; can't create bridge with the same name
$ sudo ifconfig eth0 0.0.0.0 up
$ sudo ifconfig wlan0 0.0.0.0 up
$ sudo brctl addif br0 eth0
$ sudo brctl addif br0 wlan0
device wlan0 is already a member of a bridge; can't enslave it to bridge br0.
$ sudo ifconfig br0 192.168.1.110 netmask 255.255.255.0
$ sudo route add default gw 192.168.1.1

    Read the article

  • How to open a file or folder from Terminal

    - by Victor Hugo Souza
    I'd like to know if there is a way to open a file or a folder from the terminal. When I write a URL in the terminal, it lets me open that link in my default browser. I'd like to do the same with my files and folders. I know there is a way via the CLI using gnome-open or xdg-open, but I'd like a solution that uses the mouse, by clicking on the path or the URL. E.g., when I write "pwd", the printed path would let me click and open it in Nautilus. It's the inverse of what "nautilus-open-terminal" does.

    Read the article

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones
Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, to modify the active or any inactive BE, and to do so while the production workload continues to run.
Background
In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS), but there isn't a specific name for the feature set; the features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry.
Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry.
This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative?). It's important to note that the property is inherited from the original BE's file system by any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE whose name is stored in the activebe property.
Here is a quick summary of the actions you can use to manage these BEs:
To create a BE: Create a ZFS clone of the zone's root dataset.
To activate a BE: Set the ZFS property of the root dataset to indicate the BE.
To add a package or patch to an inactive BE: Mount the inactive BE, add packages or patches to it, then unmount it.
To list the available BEs: Use the "zfs list" command.
To destroy a BE: Use the "zfs destroy" command.
Preparation
Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer.
Create a flash archive on the Solaris 10 system:
s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar
Configure the Solaris 10 BZ on the Solaris 11 system:
s11# zonecfg -z s10z
Use 'create' to begin configuring a new zone.
zonecfg:s10z create -t SYSsolaris10
zonecfg:s10z set zonepath=/zones/s10z
zonecfg:s10z exit
s11# zoneadm list -cv
  ID NAME    STATUS      PATH          BRAND      IP
   0 global  running     /             solaris    shared
   - s10z    configured  /zones/s10z   solaris10  excl
Install the zone from the flash archive:
s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p
You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs.
New features in action
Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names.
Create
The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name.
s10z# zfs snapshot rpool/ROOT/zbe-0@snap
s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty
filesystem successfully created, but not mounted
You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory.
List the available BEs and active BE
Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones.
s10z# zfs list -r rpool/ROOT
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT        3.55G  42.9G    31K  legacy
rpool/ROOT/zbe-0     1K  42.9G  3.55G  /
rpool/ROOT/newBE  3.55G  42.9G  3.55G  /
The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted.
s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
NAME        PROPERTY                             VALUE  SOURCE
rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local
Change the active BE
When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset.
s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
NAME        PROPERTY                             VALUE  SOURCE
rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local
s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
NAME        PROPERTY                             VALUE  SOURCE
rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
s10z# shutdown -y -g0 -i6
After the zone has rebooted:
s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
s10z# zfs mount
rpool/ROOT/newBE   /
rpool/export       /export
rpool/export/home  /export/home
rpool              /rpool
Mount the original BE to see that it's still there.
s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
s10z# ls /mnt
Desktop    export                          platform
Documents  export.backup.20130607T214951Z  proc
S10Flar    home                            rpool
TT_DB      kernel                          sbin
bin        lib                             system
boot       lost+found                      tmp
cdrom      mnt                             usr
dev        net                             var
etc        opt
Patch an inactive BE
At this point, you can modify the original BE. If you would prefer to modify the new BE instead, you can restore the original value to the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.)
s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02
Note that the typical usage will be:
Create a BE
Mount the new (inactive) BE
Use the package and patch tools to update the new BE
Unmount the new BE
Reboot
Delete an inactive BE
ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is the parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain:
s10z# zfs promote rpool/ROOT/newBE
s10z# zfs destroy rpool/ROOT/zbe-0
s10z# zfs list -r rpool/ROOT
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT       3.56G   269G    31K  legacy
rpool/ROOT/newBE 3.56G   269G  3.55G  /
Documentation
This feature is so new that it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details.
Conclusion
With this new feature, you can add packages and patches to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.
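Because each action maps to one or two zfs commands, the create/activate lifecycle is easy to script. A hedged Python sketch that shells out to the same commands shown above (dataset names assume the rpool/ROOT layout from this post; run it inside the branded zone):

import subprocess

ROOT = "rpool/ROOT"

def zfs(*args, check=True):
    # Thin wrapper so failures surface immediately.
    return subprocess.run(["zfs", *args], check=check)

def create_be(new_be, source_be="zbe-0"):
    # Snapshot the source BE, then clone it into a new bootable dataset.
    zfs("snapshot", f"{ROOT}/{source_be}@{new_be}-snap")
    # The clone warns that it cannot mount on '/' (the post explains this
    # is expected), so a nonzero exit is not treated as fatal here.
    zfs("clone", "-o", "mountpoint=/", "-o", "canmount=noauto",
        f"{ROOT}/{source_be}@{new_be}-snap", f"{ROOT}/{new_be}",
        check=False)

def activate_be(be_name):
    # The zone boots whichever BE this property names, on the next reboot.
    zfs("set", f"com.oracle.zones.solaris10:activebe={be_name}", ROOT)

create_be("newBE")
activate_be("newBE")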

    Read the article

  • History of open source software

    - by Victor Sorokin
    I've always been interested, purely for self-amusement, in the history of the open source software used today:
who the people were who started it, and what their reasons were
what the design decisions were at the start
how the software evolved over time
Specifically, I'm interested in the following software: GCC, X, the Linux kernel, and Java. Of course, there is plenty of information on the Internet to google for, but I thought it would be nice to have a list of interesting resources on this site. I hope some visitors of this site have a similar interest and can share a link or two they found particularly amusing/interesting. To make this entry more question-like, here's a straight question: what are the most interesting/amusing links about the history of open source software?

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >