Search Results

Search found 22871 results on 915 pages for 'automated install'.


  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early, and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions.

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt

    I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
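    For reference, one way to make this kind of job self-reporting is to have cron run a small wrapper instead of the bare command, so a failed or truncated dump cannot pass silently. Below is a minimal sketch of such a wrapper (not the poster's actual script); the repository path, output filename pattern, and chunk size are placeholder assumptions:

        #!/usr/bin/env python
        # backup_svn.py - hypothetical wrapper for a dated, gzipped svnadmin dump.
        # The paths are placeholders; adjust for your repository and backup folder.
        import datetime
        import gzip
        import subprocess
        import sys

        REPO = "/var/svn/repos/myproject"
        OUTFILE = "/home/andrew/svn-dump-%s.gz" % datetime.date.today().isoformat()

        proc = subprocess.Popen(["svnadmin", "dump", REPO],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        with gzip.open(OUTFILE, "wb") as out:
            # Stream the dump in chunks so it is never held in memory all at once.
            for chunk in iter(lambda: proc.stdout.read(65536), b""):
                out.write(chunk)
        errors = proc.stderr.read()
        if proc.wait() != 0:
            # A non-zero exit status is exactly what a bare cron line hides.
            sys.stderr.write("svnadmin dump failed:\n" + errors.decode())
            sys.exit(1)

    Cron mails anything a job writes to stderr to the job's owner (where local mail is set up), so a failure like the one described would at least leave a trace.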

    Read the article

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of environment:

    - ASP.NET web application (source stored in svn)
    - SQL Server database (database schema, tables/sprocs, stored in svn)
    - db version is synced with the web application assembly version (stored in table 'CurrentVersion')
    - CI Hudson server that checks out the web app from the repo and runs a custom msbuild file to publish/package the app

    My msbuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number.

    I'd like to automate the collecting of migration scripts (updated sprocs etc.) to add to the zip package. I guess by comparing the svn revision of the database that has yet to be deployed to with the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling the svn diff -r REVNO:REVNO command to list the changed .sql files. These files would then have to be added to the package by hand. It would be great if this could be automated.

    Firstly, I imagine I'll have to write a custom task to check the version of the database that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestions on how this could be achieved through an msbuild task, either existing or custom? Finally, I'll have to autogenerate a script, added to the package, that updates the database version table so as to be in sync with the application.
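    As a rough illustration of the collecting step itself (independent of any MSBuild task), the changed .sql files between two revisions can be listed with svn diff --summarize and staged into the package folder. This is only a sketch under assumed names: the two revision numbers, the working-copy path, and the output directory would really come from the custom version-checking task described above:

        # collect_migrations.py - hypothetical sketch: stage the .sql files changed
        # between the deployed revision and the revision being deployed.
        import os
        import shutil
        import subprocess

        DEPLOYED_REV = "100"          # placeholder: revision on the target database
        TARGET_REV = "123"            # placeholder: revision being deployed
        SCHEMA_DIR = "db"             # placeholder: checked-out schema folder
        PACKAGE_DIR = "package/migrations"

        os.makedirs(PACKAGE_DIR, exist_ok=True)
        # 'svn diff --summarize' prints one status letter and one path per line.
        summary = subprocess.check_output(
            ["svn", "diff", "--summarize",
             "-r", "%s:%s" % (DEPLOYED_REV, TARGET_REV), SCHEMA_DIR],
            universal_newlines=True)
        for line in summary.splitlines():
            status, path = line[0], line[1:].strip()
            if status in ("A", "M") and path.endswith(".sql"):
                shutil.copy(path, PACKAGE_DIR)   # stage for the zip package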

    Read the article

  • Makefile - Dependency generation

    - by Profetylen
    I am trying to create a makefile that automatically compiles and links my .cpp files into an executable via .o files. What I can't get working is automated (or even manual) dependency generation. When I uncomment the commented code below, nothing is recompiled when I run make build. All I get is make: Nothing to be done for 'build'., even if x.h (or any .h file) has changed. I've been trying to learn from this question: Makefile, header dependencies (dmckee's answer, especially). Why isn't this makefile working?

    Clarification: I can compile everything, but when I modify any header file, the .cpp files that depend on it aren't recompiled. So if I, for instance, compile my entire source, then change a #define in a header file, and then run make build, I get Nothing to be done for 'build'. (when I have uncommented either of the commented chunks in the code below).

        CC=gcc
        CFLAGS=-O2 -Wall
        LDFLAGS=-lSDL -lstdc++
        SOURCES=$(wildcard *.cpp)
        OBJECTS=$(patsubst %.cpp, obj/%.o,$(SOURCES))
        TARGET=bin/test.bin

        # Nothing happens when I uncomment the following. (automated attempt)
        #depend: .depend
        #
        #.depend: $(SOURCES)
        #	rm -f ./.depend
        #	$(CC) $(CFLAGS) -MM $^ >> ./.depend;
        #
        #include .depend

        # And nothing happens when I uncomment the following.
        # x.cpp and x.h are files in my project. (manual attempt)
        #x.o: x.cpp x.h

        clean:
        	rm -f $(TARGET)
        	rm -f $(OBJECTS)

        run: build
        	./$(TARGET)

        debug: build
        	nm $(TARGET)
        	gdb $(TARGET)

        build: $(TARGET)

        $(TARGET): $(OBJECTS)
        	@mkdir -p $(@D)
        	$(CC) $(LDFLAGS) $(OBJECTS) -o $@

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) $< -o $@

        include $(DEPENDENCIES)

    Read the article

  • PHP arrays. There must be a simpler method to do this

    - by RisingSun
    I have this array in php returned from db:

        Array
        (
            [inv_templates] => Array
                (
                    [0] => Array
                        (
                            [inven_subgroup_template_id] => 1
                            [inven_group] => Wires
                            [inven_subgroup] => CopperWires
                            [inven_template_id] => 1
                            [inven_template_name] => CopperWires6G
                            [constrained] => 0
                            [value_constraints] =>
                            [accept_range] => 2 - 16
                            [information] => Measured Manual
                        )
                    [1] => Array
                        (
                            [inven_subgroup_template_id] => 1
                            [inven_group] => Wires
                            [inven_subgroup] => CopperWires
                            [inven_template_id] => 2
                            [inven_template_name] => CopperWires2G
                            [constrained] => 0
                            [value_constraints] =>
                            [accept_range] => 1 - 7
                            [information] => Measured by Automated Calipers
                        )
                )
        )

    I need to output this kind of multidimensional stuff:

        Array
        (
            [Wires] => Array
                (
                    [inv_group_name] => Wires
                    [inv_subgroups] => Array
                        (
                            [CopperWires] => Array
                                (
                                    [inv_subgroup_id] => 1
                                    [inv_subgroup_name] => CopperWires
                                    [inv_templates] => Array
                                        (
                                            [CopperWires6G] => Array
                                                (
                                                    [inv_name] => CopperWires6G
                                                    [inv_id] => 1
                                                )
                                            [CopperWires2G] => Array
                                                (
                                                    [inv_name] => CopperWires2G
                                                    [inv_id] => 2
                                                )
                                        )
                                )
                        )
                )
        )

    I currently do this stuff:

        foreach ($data['inv_templates'] as $key => $value) {
            $processeddata[$value['inven_group']]['inv_group_name'] = $value['inven_group'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_subgroup_id'] = $value['inven_subgroup_template_id'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_subgroup_name'] = $value['inven_subgroup'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_templates'][$value['inven_template_name']]['inv_name'] = $value['inven_template_name'];
            $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_templates'][$value['inven_template_name']]['inv_id'] = $value['inven_template_id'];
        }
        return $processeddata;

    EDIT: A var_export:

        array (
          'inv_templates' => array (
            0 => array (
              'inven_subgroup_template_id' => '1',
              'inven_group' => 'Wires',
              'inven_subgroup' => 'CopperWires',
              'inven_template_id' => '1',
              'inven_template_name' => 'CopperWires6G',
              'constrained' => '0',
              'value_constraints' => '',
              'accept_range' => '2 - 16',
              'information' => 'Measured Manual',
            ),
            1 => array (
              'inven_subgroup_template_id' => '1',
              'inven_group' => 'Wires',
              'inven_subgroup' => 'CopperWires',
              'inven_template_id' => '2',
              'inven_template_name' => 'CopperWires6G',
              'constrained' => '0',
              'value_constraints' => '',
              'accept_range' => '1 - 7',
              'information' => 'Measured by Automated Calipers',
            ),
          ),
        )

    The foreach is almost unreadable. There must be a simpler way.
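    Whether it gets simpler is a matter of taste, but the usual trick for taming this kind of nesting is to create each level once and keep a handle to it, rather than repeating the full index chain on every line; in PHP that handle would be a reference (e.g. $group =& $processeddata[$value['inven_group']]). Below is the same restructuring idea sketched in Python (with a trimmed sample row), purely to show the shape, not as a drop-in replacement:

        # Sketch: build the grouped structure one level at a time.
        data = {"inv_templates": [
            {"inven_subgroup_template_id": "1", "inven_group": "Wires",
             "inven_subgroup": "CopperWires", "inven_template_id": "1",
             "inven_template_name": "CopperWires6G"},
        ]}

        processeddata = {}
        for row in data["inv_templates"]:
            # setdefault creates the level on first sight and returns a handle.
            group = processeddata.setdefault(row["inven_group"], {
                "inv_group_name": row["inven_group"], "inv_subgroups": {}})
            subgroup = group["inv_subgroups"].setdefault(row["inven_subgroup"], {
                "inv_subgroup_id": row["inven_subgroup_template_id"],
                "inv_subgroup_name": row["inven_subgroup"],
                "inv_templates": {}})
            subgroup["inv_templates"][row["inven_template_name"]] = {
                "inv_name": row["inven_template_name"],
                "inv_id": row["inven_template_id"]}

        print(processeddata)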

    Read the article

  • Suggestions for opening the Rails toolbox to design a challenge game?

    - by keruilin
    How would you suggest designing a challenge system as part of a food-eating game so that it's as automated as possible? All RoR tools, design patterns and logic are at your disposal (e.g., admin consoles, crontab, arch, etc.). The prize goes to whoever can suggest the simplest and most-automated design! Here are the requirements:

    - User has many challenges.
    - Badge has many challenges. (A unique badge is awarded for each challenge won.)
    - Only one challenge can run at a time.
    - Each challenge runs for a limited number of days. For example, one challenge can run 3 days, while another runs 7 days.
    - Challenges can be seasonal. For example, "Eat 13 Pumpkins" only runs during the Fall.
    - New challenges are added to the game on an ongoing basis. For example, a new challenge every week.
    - Each challenge has a certain probability of being selected to run. For example, the "Eat 10 Pies" challenge has a 10% chance of being selected to run. As each new challenge is added to the database, I want the probabilities of running to change dynamically. I want to avoid the scenario where I'm manually updating a database field just to change the probability from 10% to 5%, for example. (A weighted-selection sketch follows this list.)
    - Challenges act like Easter eggs. Challenge icons pop up at different places on the webpage.
    - User is awarded a badge for successfully completing a challenge, but only while it's active.
    - There is some wait time between each challenge: between 1 and 7 days. The wait time is random, but the probability of a short wait time is high and the probability of a long wait time is low.
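    One common way to get the dynamically adjusting probabilities asked for above is to store a relative weight per challenge and normalize at pick time, so inserting a new row automatically rescales every other challenge's chance without touching existing fields. A minimal sketch of that idea, written in Python rather than Ruby and with invented names and weights:

        import random

        # Hypothetical (name, weight) rows; weights are relative, not percentages,
        # so adding a row dilutes every other probability automatically.
        challenges = [("Eat 10 Pies", 10), ("Eat 13 Pumpkins", 5), ("Eat 50 Grapes", 20)]

        def weighted_pick(rows):
            total = sum(weight for _, weight in rows)
            point = random.uniform(0, total)
            for name, weight in rows:
                point -= weight
                if point <= 0:
                    return name
            return rows[-1][0]  # guard against floating-point drift

        print(weighted_pick(challenges))

    The skewed 1-7 day wait time can reuse the same helper with weights that shrink as the number of days grows (e.g. weight 7 for a 1-day wait down to weight 1 for a 7-day wait).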

    Read the article

  • Creating a timesheet for work using PHP MySQL

    - by Justin
    I am trying to create a time-sheet for my work. I don't know if I'm getting myself into a lot of work by doing this, as I am quite new to PHP and MySQL, but I do have a good understanding/knowledge of the two. I want the below fields in my database:

    - Job
    - weekPeriod: a list of weeks, Monday to Sunday
    - dateWorked: a list of dates in the form, coming from a database, e.g. 1/1/2011
    - startTime: a list of times from 12:00am to 11:00pm in 30 min intervals, e.g. 11:30-12:30
    - endTime: a list of times from 12:00am to 11:00pm in 30 min intervals, e.g. 11:30-12:30
    - totalHours: automated amount
    - amount: automated, based on dateWorked
    - comments: any messages here

    I want to be able to fill in some drop-down boxes through a form that will then submit all the information to my database. I want the script to know that if the date worked is a weekday (Mon-Fri), my rate of pay is 30.00ph; on a Saturday it is 35.00ph; and on a Sunday it is 40.00ph. I then want to create a page where I select a particular week and see how many hours I worked, how much I earned, and so on. Please let me know if there is such a program already established, or if this is something that requires a bit of time and whether I could do it being new to PHP and MySQL.
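    The rate rule described above is just a lookup keyed on the day of the week plus an hour difference. A sketch of that calculation, in Python rather than PHP, assuming 24-hour times and day/month/year dates (both assumptions, since the post does not pin the formats down):

        from datetime import datetime

        RATES = {5: 35.00, 6: 40.00}   # Sat, Sun; Python's weekday() is Mon=0..Sun=6
        WEEKDAY_RATE = 30.00           # Mon-Fri

        def pay_for_shift(date_worked, start_time, end_time):
            """E.g. pay_for_shift('1/1/2011', '11:30', '15:00')."""
            day = datetime.strptime(date_worked, "%d/%m/%Y")
            hours = (datetime.strptime(end_time, "%H:%M")
                     - datetime.strptime(start_time, "%H:%M")).total_seconds() / 3600
            rate = RATES.get(day.weekday(), WEEKDAY_RATE)
            return hours, hours * rate

        # 1/1/2011 was a Saturday, so this prints (3.5, 122.5).
        print(pay_for_shift("1/1/2011", "11:30", "15:00"))

    In the real schema, totalHours and amount would be computed this way on insert (or in the SELECT for the weekly page) rather than typed in.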

    Read the article

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?

    What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test that the wiring was successful and the objects interact the way I want?

    An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids, and then iterates over the records and writes them as a string to an outfile. To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the http request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was regarded as more of an anti-pattern.
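    For what it's worth, an integration test in the usual sense wires a few real objects together and asserts on the externally observable outcome, replacing only the inconvenient boundaries (the HTTP layer, the production database); the assertion does not restate each class's internal logic, it just checks the end result. A minimal sketch of the Controller/Repository/OutfileWriter example, with all class shapes invented here for illustration:

        import os
        import tempfile
        import unittest

        # Invented stand-ins for the three classes described in the question.
        class Repository:
            def __init__(self, records): self._records = records
            def fetch(self, ids): return [self._records[i] for i in ids]

        class OutfileWriter:
            def __init__(self, path): self.path = path
            def write(self, records):
                with open(self.path, "w") as f:
                    f.write(",".join(records))

        class Controller:
            def __init__(self, repo, writer): self.repo, self.writer = repo, writer
            def handle(self, ids): self.writer.write(self.repo.fetch(ids))

        class WiringTest(unittest.TestCase):
            def test_requested_records_end_up_in_the_outfile(self):
                path = os.path.join(tempfile.mkdtemp(), "out.txt")
                # Real collaborators wired by hand; only the data is in-memory.
                Controller(Repository({1: "a", 2: "b"}), OutfileWriter(path)).handle([1, 2])
                with open(path) as f:
                    self.assertEqual(f.read(), "a,b")

        if __name__ == "__main__":
            unittest.main()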

    Read the article

  • Postgresql has broken apt-get on Ubuntu

    - by Raphie Palefsky-Smith
    On ubuntu 12.04, whenever I try to install a package using apt-get I'm greeted by:

        The following packages have unmet dependencies:
         postgresql-9.1 : Depends: postgresql-client-9.1 but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    apt-get install postgresql-client-9.1 generates:

        The following packages have unmet dependencies:
         postgresql-client-9.1 : Breaks: postgresql-9.1 (< 9.1.6-0ubuntu12.04.1) but 9.1.3-2 is to be installed

    apt-get -f install and apt-get remove postgresql-9.1 both give:

        Removing postgresql-9.1 ...
         * Stopping PostgreSQL 9.1 database server
         * Error: /var/lib/postgresql/9.1/main is not accessible or does not exist
           ...fail!
        invoke-rc.d: initscript postgresql, action "stop" failed.
        dpkg: error processing postgresql-9.1 (--remove):
         subprocess installed pre-removal script returned error exit status 1
        Errors were encountered while processing:
         postgresql-9.1
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    So, apt-get is crippled, and I can't find a way out. Is there any way to resolve this without a re-install?

    EDIT: apt-cache show postgresql-9.1 returns:

        Package: postgresql-9.1
        Priority: optional
        Section: database
        Installed-Size: 11164
        Maintainer: Ubuntu Developers <[email protected]>
        Original-Maintainer: Martin Pitt <[email protected]>
        Architecture: amd64
        Version: 9.1.6-0ubuntu12.04.1
        Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1)
        Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales
        Suggests: oidentd | ident-server, locales-all
        Conflicts: postgresql (<< 7.5)
        Breaks: postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1)
        Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.6-0ubuntu12.04.1_amd64.deb
        Size: 4298270
        MD5sum: 9ee2ab5f25f949121f736ad80d735d57
        SHA1: 5eac1cca8d00c4aec4fb55c46fc2a013bc401642
        SHA256: 4e6c24c251a01f1b6a340c96d24fdbb92b5e2f8a2f4a8b6b08a0df0fe4cf62ab
        Description-en: object-relational SQL database, version 9.1 server
         PostgreSQL is a fully featured object-relational database management system.
         It supports a large part of the SQL standard and is designed to be extensible
         by users in many aspects. Some of the features are: ACID transactions,
         foreign keys, views, sequences, subqueries, triggers, user-defined types and
         functions, outer joins, multiversion concurrency control. Graphical user
         interfaces and bindings for many programming languages are available as well.
         .
         This package provides the database server for PostgreSQL 9.1. Servers for
         other major release versions can be installed simultaneously and are
         coordinated by the postgresql-common package. A package providing
         ident-server is needed if you want to authenticate remote connections with
         identd.
        Homepage: http://www.postgresql.org/
        Description-md5: c487fe4e86f0eac09ed9847282436059
        Bugs: https://bugs.launchpad.net/ubuntu/+filebug
        Origin: Ubuntu
        Supported: 5y
        Task: postgresql-server

        Package: postgresql-9.1
        Priority: optional
        Section: database
        Installed-Size: 11164
        Maintainer: Ubuntu Developers <[email protected]>
        Original-Maintainer: Martin Pitt <[email protected]>
        Architecture: amd64
        Version: 9.1.5-0ubuntu12.04
        Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.5-0ubuntu12.04)
        Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales
        Suggests: oidentd | ident-server, locales-all
        Conflicts: postgresql (<< 7.5)
        Breaks: postgresql-plpython-9.1 (<< 9.1.5-0ubuntu12.04)
        Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.5-0ubuntu12.04_amd64.deb
        Size: 4298028
        MD5sum: 3797b030ca8558a67b58e62cc0a22646
        SHA1: ad340a9693341621b82b7f91725fda781781c0fb
        SHA256: 99aa892971976b85bcf6fb2e1bb8bf3e3fb860190679a225e7ceeb8f33f0e84b
        Description-en: object-relational SQL database, version 9.1 server
         PostgreSQL is a fully featured object-relational database management system.
         It supports a large part of the SQL standard and is designed to be extensible
         by users in many aspects. Some of the features are: ACID transactions,
         foreign keys, views, sequences, subqueries, triggers, user-defined types and
         functions, outer joins, multiversion concurrency control. Graphical user
         interfaces and bindings for many programming languages are available as well.
         .
         This package provides the database server for PostgreSQL 9.1. Servers for
         other major release versions can be installed simultaneously and are
         coordinated by the postgresql-common package. A package providing
         ident-server is needed if you want to authenticate remote connections with
         identd.
        Homepage: http://www.postgresql.org/
        Description-md5: c487fe4e86f0eac09ed9847282436059
        Bugs: https://bugs.launchpad.net/ubuntu/+filebug
        Origin: Ubuntu
        Supported: 5y
        Task: postgresql-server

        Package: postgresql-9.1
        Priority: optional
        Section: database
        Installed-Size: 11220
        Maintainer: Martin Pitt <[email protected]>
        Original-Maintainer: Martin Pitt <[email protected]>
        Architecture: amd64
        Version: 9.1.3-2
        Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.3-2)
        Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales
        Suggests: oidentd | ident-server, locales-all
        Conflicts: postgresql (<< 7.5)
        Breaks: postgresql-plpython-9.1 (<< 9.1.3-2)
        Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.3-2_amd64.deb
        Size: 4284744
        MD5sum: bad9aac349051fe86fd1c1f628797122
        SHA1: a3f5d6583cc6e2372a077d7c2fc7adfcfa0d504d
        SHA256: e885c32950f09db7498c90e12c4d1df0525038d6feb2f83e2e50f563fdde404a
        Description-en: object-relational SQL database, version 9.1 server
         PostgreSQL is a fully featured object-relational database management system.
         It supports a large part of the SQL standard and is designed to be extensible
         by users in many aspects. Some of the features are: ACID transactions,
         foreign keys, views, sequences, subqueries, triggers, user-defined types and
         functions, outer joins, multiversion concurrency control. Graphical user
         interfaces and bindings for many programming languages are available as well.
         .
         This package provides the database server for PostgreSQL 9.1. Servers for
         other major release versions can be installed simultaneously and are
         coordinated by the postgresql-common package. A package providing
         ident-server is needed if you want to authenticate remote connections with
         identd.
        Homepage: http://www.postgresql.org/
        Description-md5: c487fe4e86f0eac09ed9847282436059
        Bugs: https://bugs.launchpad.net/ubuntu/+filebug
        Origin: Ubuntu
        Supported: 5y
        Task: postgresql-server

    Read the article

  • Python CGI on Amazon AWS EC2 micro-instance -- a how-to!

    - by user595585
    How can you make an EC2 micro instance serve CGI scripts from lighttpd? For instance, Python CGI? Well, it took half a day, but I have gotten Python CGI running on a free Amazon AWS EC2 micro-instance, using the lighttpd server. I think it will help my fellow noobs to put all the steps in one place. Armed with the simple steps below, it will take you only 15 minutes to set things up! My question for the more experienced users reading this is: are there any security flaws in what I've done? (See file and directory permissions.)

    Step 1: Start your EC2 instance and ssh into it.

    [Obviously, you'll need to sign up for Amazon EC2 and save your key pairs to a *.pem file. I won't go over this, as Amazon tells you how to do it.]

    Sign into your AWS account and start your EC2 instance. The web has tutorials on doing this. Notice that the default instance size that Amazon presents to you is "small". This is not "micro", and so it will cost you money. Be sure to manually choose "micro". (Micro instances are free only for the first year...)

    Find the public DNS name for your running instance. To do this, click on the instance in the top pane of the dashboard and you'll eventually see the "Public DNS" field populated in the bottom pane. (You may need to fiddle a bit.) The Public DNS looks something like:

        ec2-174-129-110-23.compute-1.amazonaws.com

    Start your Unix console program. (On Mac OS X, it's called Terminal, and lives in the Applications/Utilities folder.) cd to the directory on your desktop system that has your *.pem file containing your AWS keypairs. ssh to your EC2 instance using a command like:

        ssh -i <your *.pem filename> ec2-user@<Public DNS address>

    So, for me, this was:

        ssh -i amzn_ec2_keypair.pem [email protected]

    Your EC2 instance should let you in.

    Step 2: Download lighttpd to your EC2 instance.

    To install lighttpd, you will need root access on your EC2 instance. The problem is: Amazon will not let you sign in as root. (Not straightforwardly, at least.) But there is a workaround. Type this command:

        sudo /bin/bash

    The system prompt-character will change from $ to #. We won't exit from "sudo" until the very last step in this whole process. Install the lighttpd application (version 1.4.28-1.3.amzn1 for me):

        yum install lighttpd

    Install the FastCGI libraries for lighttpd (not needed, but why not?):

        yum install lighttpd-fastcgi

    Test that your server is working:

        /etc/init.d/lighttpd start

    Step 3: Let the outside world see your server.

    If you now tried to hit your server from the browser on your desktop, it would fail. The reason: by default, Amazon AWS does not open any ports to your EC2 instance. So, you have to open the ports manually. Go to your EC2 dashboard in your desktop's browser. Click on "Security Groups" in the left pane. One or more security groups will appear in the upper right pane. Choose the one that was assigned to your EC2 instance when you launched your instance. A table called "Allowed Connections" will appear in the lower right pane. A pop-up menu will let you choose "HTTP" as the connection method. The other values in that line of the table should be: tcp, 80, 80, 0.0.0.0/0.

    Now hit your EC2 instance's server from the desktop in your browser. Use the Public DNS address that you used earlier to SSH in. You should see the lighttpd generic web page. If you don't, I can't help you because I am such a noob. :-(

    Step 4: Configure lighttpd to serve CGI.

    Back in the console program, cd to the configuration directory for lighttpd:

        cd /etc/lighttpd

    To enable CGI, you want to uncomment one line in the modules.conf file. (I could have enabled FastCGI, but baby steps are best!) You can do this with the "ed" editor as follows:

        ed modules.conf
        /include "conf.d\/cgi.conf"/
        s/#//
        w
        q

    Create the directory where CGI programs will live. (The /etc/lighttpd/lighttpd.conf file determines where this will be.) We'll create our directory in the default location, so we don't have to do any editing of configuration files:

        cd /var/www/lighttpd
        mkdir cgi-bin
        chmod 755 cgi-bin

    Almost there! Of course you need to put a test CGI program into the cgi-bin directory. Here is one:

        cd cgi-bin
        ed
        a
        #!/usr/bin/python
        print "Content-type: text/html\n\n"
        print "<html><body>Hello, pyworld.</body></html>"
        .
        w hellopyworld.py
        q
        chmod 655 hellopyworld.py

    Restart your lighttpd server:

        /etc/init.d/lighttpd restart

    Test your CGI program. In your desktop's browser, hit this URL, substituting your EC2 instance's public DNS address:

        http://<Public DNS>/cgi-bin/hellopyworld.py

    For me, this was:

        http://ec2-174-129-110-23.compute-1.amazonaws.com/cgi-bin/hellopyworld.py

    Step 5: That's it! Clean up, and give thanks!

    To exit from the "sudo /bin/bash" command given earlier, type:

        exit

    Acknowledgements: heaps of thanks to:

        wiki.vpslink.com/Install_and_Configure_lighttpd
        www.cyberciti.biz/tips/lighttpd-howto-setup-cgi-bin-access-for-perl-programs.html
        aws.typepad.com/aws/2010/06/building-three-tier-architectures-with-security-groups.html

    Good luck, amigos! I apologize for the non-traditional nature of this "question", but I have gotten so much help from Stackoverflow that I was eager to give something back.
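    As a small automated check of Step 4, the new endpoint can also be fetched from any machine with a few lines of Python 3; the hostname below is a placeholder for your instance's Public DNS:

        # Hypothetical smoke test for the hellopyworld.py CGI endpoint.
        from urllib.request import urlopen

        url = "http://<Public DNS>/cgi-bin/hellopyworld.py"  # substitute your hostname
        body = urlopen(url).read().decode()
        assert "Hello, pyworld." in body, "CGI not executed, got: %r" % body
        print("OK:", body)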

    Read the article

  • Problem compiling hive with ant

    - by conandor
    I am compiling on Solaris 10 SPARC, with JDK 1.6 from Sun and Ant 1.7.1 from OpenCSW. I have no problem running hadoop 0.17.2.1. However, I have a problem compiling/integrating hive, with the error 'cannot find symbol', although I followed the tutorial. I have the hive source code from SVN, exactly as in the tutorial. How can I tell which hive version I am compiling, and how can I compile it against hadoop 0.17.2.1? Please advise. Thank you.

        -bash-3.00$ export PATH=/usr/jdk/instances/jdk1.6.0/bin:/usr/bin:/opt/csw/bin:/opt/webstack/bin
        -bash-3.00$ export JAVA_HOME=/usr/jdk/instances/jdk1.6.0
        -bash-3.00$ export HADOOP=/export/home/mywork/hadoop-0.17.2.1/bin/hadoop
        -bash-3.00$ /opt/csw/bin/ant package -Dhadoop.version=0.17.2.1
        Buildfile: build.xml

        jar:
        create-dirs:
        compile-ant-tasks:
        create-dirs:
        init:
        compile:
             [echo] Compiling: anttasks
        deploy-ant-tasks:
        create-dirs:
        init:
        compile:
             [echo] Compiling: anttasks
        jar:
        init:
        compile:
        ivy-init-dirs:
        ivy-download:
              [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
              [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar
              [get] Not modified - so not downloaded
        ivy-probe-antlib:
        ivy-init-antlib:
        ivy-init:
        ivy-retrieve-hadoop-source:
        [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
        [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml
        [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  found hadoop#core;0.17.2.1 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.18.3 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.19.0 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.20.0 in hadoop-source
        [ivy:retrieve] :: resolution report :: resolve 25878ms :: artifacts dl 37ms
            ---------------------------------------------------------------------
            |                  |            modules            ||   artifacts   |
            |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
            ---------------------------------------------------------------------
            |      default     |   4   |   0   |   0   |   0   ||   4   |   0   |
            ---------------------------------------------------------------------
        [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  0 artifacts copied, 4 already retrieved (0kB/82ms)
        install-hadoopcore-internal:
        build_shims:
             [echo] Compiling shims against hadoop 0.17.2.1 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.17.2.1)
        ivy-init-dirs:
        ivy-download:
              [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
              [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar
              [get] Not modified - so not downloaded
        ivy-probe-antlib:
        ivy-init-antlib:
        ivy-init:
        ivy-retrieve-hadoop-source:
        [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
        [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml
        [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  found hadoop#core;0.17.2.1 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.18.3 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.19.0 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.20.0 in hadoop-source
        [ivy:retrieve] :: resolution report :: resolve 12041ms :: artifacts dl 30ms
            ---------------------------------------------------------------------
            |                  |            modules            ||   artifacts   |
            |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
            ---------------------------------------------------------------------
            |      default     |   4   |   0   |   0   |   0   ||   4   |   0   |
            ---------------------------------------------------------------------
        [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  0 artifacts copied, 4 already retrieved (0kB/39ms)
        install-hadoopcore-internal:
        build_shims:
             [echo] Compiling shims against hadoop 0.18.3 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.18.3)
        ivy-init-dirs:
        ivy-download:
              [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
              [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar
              [get] Not modified - so not downloaded
        ivy-probe-antlib:
        ivy-init-antlib:
        ivy-init:
        ivy-retrieve-hadoop-source:
        [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
        [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml
        [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  found hadoop#core;0.17.2.1 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.18.3 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.19.0 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.20.0 in hadoop-source
        [ivy:retrieve] :: resolution report :: resolve 11107ms :: artifacts dl 36ms
            ---------------------------------------------------------------------
            |                  |            modules            ||   artifacts   |
            |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
            ---------------------------------------------------------------------
            |      default     |   4   |   0   |   0   |   0   ||   4   |   0   |
            ---------------------------------------------------------------------
        [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  0 artifacts copied, 4 already retrieved (0kB/49ms)
        install-hadoopcore-internal:
        build_shims:
             [echo] Compiling shims against hadoop 0.19.0 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.19.0)
        ivy-init-dirs:
        ivy-download:
              [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
              [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar
              [get] Not modified - so not downloaded
        ivy-probe-antlib:
        ivy-init-antlib:
        ivy-init:
        ivy-retrieve-hadoop-source:
        [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
        [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml
        [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  found hadoop#core;0.17.2.1 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.18.3 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.19.0 in hadoop-source
        [ivy:retrieve]  found hadoop#core;0.20.0 in hadoop-source
        [ivy:retrieve] :: resolution report :: resolve 9969ms :: artifacts dl 33ms
            ---------------------------------------------------------------------
            |                  |            modules            ||   artifacts   |
            |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
            ---------------------------------------------------------------------
            |      default     |   4   |   0   |   0   |   0   ||   4   |   0   |
            ---------------------------------------------------------------------
        [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  0 artifacts copied, 4 already retrieved (0kB/57ms)
        install-hadoopcore-internal:
        build_shims:
             [echo] Compiling shims against hadoop 0.20.0 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.20.0)
        jar:
             [echo] Jar: shims
        create-dirs:
        compile-ant-tasks:
        create-dirs:
        init:
        compile:
             [echo] Compiling: anttasks
        deploy-ant-tasks:
        create-dirs:
        init:
        compile:
             [echo] Compiling: anttasks
        jar:
        init:
        install-hadoopcore:
        install-hadoopcore-default:
        ivy-init-dirs:
        ivy-download:
              [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
              [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar
              [get] Not modified - so not downloaded
        ivy-probe-antlib:
        ivy-init-antlib:
        ivy-init:
        ivy-retrieve-hadoop-source:
        [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
        [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml
        [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#common;working@kaili
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  found hadoop#core;0.20.0 in hadoop-source
        [ivy:retrieve] :: resolution report :: resolve 4864ms :: artifacts dl 13ms
            ---------------------------------------------------------------------
            |                  |            modules            ||   artifacts   |
            |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
            ---------------------------------------------------------------------
            |      default     |   1   |   0   |   0   |   0   ||   1   |   0   |
            ---------------------------------------------------------------------
        [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#common
        [ivy:retrieve]  confs: [default]
        [ivy:retrieve]  0 artifacts copied, 1 already retrieved (0kB/52ms)
        install-hadoopcore-internal:
        setup:
        compile:
             [echo] Compiling: common
        jar:
             [echo] Jar: common
        create-dirs:
        compile-ant-tasks:
        create-dirs:
        init:
        compile:
             [echo] Compiling: anttasks
        deploy-ant-tasks:
        create-dirs:
        init:
        compile:
             [echo] Compiling: anttasks
        jar:
        init:
        dynamic-serde:
        compile:
             [echo] Compiling: hive
            [javac] Compiling 167 source files to /export/home/mywork/hive/build/serde/classes
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java:30: cannot find symbol
            [javac] symbol  : class PrimitiveObjectInspectorFactory
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java:31: cannot find symbol
            [javac] symbol  : class PrimitiveObjectInspectorUtils
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/MetadataTypedColumnsetSerDe.java:31: cannot find symbol
            [javac] symbol  : class MetadataListStructObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.MetadataListStructObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:33: cannot find symbol
            [javac] symbol  : class BooleanObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:35: cannot find symbol
            [javac] symbol  : class DoubleObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.DoubleObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:36: cannot find symbol
            [javac] symbol  : class FloatObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.FloatObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:39: cannot find symbol
            [javac] symbol  : class ShortObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.ShortObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:40: cannot find symbol
            [javac] symbol  : class StringObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:44: cannot find symbol
            [javac] symbol  : class BooleanObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:46: cannot find symbol
            [javac] symbol  : class DoubleObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.DoubleObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:47: cannot find symbol
            [javac] symbol  : class FloatObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.FloatObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:50: cannot find symbol
            [javac] symbol  : class ShortObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.ShortObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:51: cannot find symbol
            [javac] symbol  : class StringObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java:43: cannot find symbol
            [javac] symbol  : class PrimitiveObjectInspectorFactory
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarSerDe.java:41: cannot find symbol
            [javac] symbol  : class PrimitiveObjectInspectorFactory
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java:26: cannot find symbol
            [javac] symbol  : class LazySimpleStructObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.lazy.objectinspector
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java:39: cannot find symbol
            [javac] symbol: class LazySimpleStructObjectInspector
            [javac] LazyNonPrimitive<LazySimpleStructObjectInspector> {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java:68: cannot find symbol
            [javac] symbol  : class LazySimpleStructObjectInspector
            [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyStruct
            [javac] public LazyStruct(LazySimpleStructObjectInspector oi) {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java:36: cannot find symbol
            [javac] symbol  : class PrimitiveObjectInspectorFactory
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java:37: cannot find symbol
            [javac] symbol  : class PrimitiveObjectInspectorUtils
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypeString.java:23: cannot find symbol
            [javac] symbol  : class StringObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypei16.java:23: cannot find symbol
            [javac] symbol  : class ShortObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.ShortObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypeDouble.java:23: cannot find symbol
            [javac] symbol  : class DoubleObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.DoubleObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypeBool.java:23: cannot find symbol
            [javac] symbol  : class BooleanObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive
            [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java:20: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyBooleanObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java:37: cannot find symbol
            [javac] symbol: class LazyBooleanObjectInspector
            [javac] LazyPrimitive<LazyBooleanObjectInspector, BooleanWritable> {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java:39: cannot find symbol
            [javac] symbol  : class LazyBooleanObjectInspector
            [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyBoolean
            [javac] public LazyBoolean(LazyBooleanObjectInspector oi) {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyByte.java:21: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyByteObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyByte.java:37: cannot find symbol
            [javac] symbol: class LazyByteObjectInspector
            [javac] LazyPrimitive<LazyByteObjectInspector, ByteWritable> {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyByte.java:39: cannot find symbol
            [javac] symbol  : class LazyByteObjectInspector
            [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyByte
            [javac] public LazyByte(LazyByteObjectInspector oi) {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDouble.java:23: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyDoubleObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDouble.java:31: cannot find symbol
            [javac] symbol: class LazyDoubleObjectInspector
            [javac] LazyPrimitive<LazyDoubleObjectInspector, DoubleWritable> {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDouble.java:33: cannot find symbol
            [javac] symbol  : class LazyDoubleObjectInspector
            [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyDouble
            [javac] public LazyDouble(LazyDoubleObjectInspector oi) {
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:25: cannot find symbol
            [javac] symbol  : class LazyObjectInspectorFactory
            [javac] location: package org.apache.hadoop.hive.serde2.lazy.objectinspector
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:26: cannot find symbol
            [javac] symbol  : class LazySimpleStructObjectInspector
            [javac] location: package org.apache.hadoop.hive.serde2.lazy.objectinspector
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:27: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyBooleanObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:28: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyByteObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:29: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyDoubleObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:30: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyFloatObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:31: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:32: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyLongObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:33: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:34: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyShortObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:35: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist
            [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyStringObjectInspector;
            [javac] ^
            [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFloat.java:

    Read the article

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing, which gave:

        ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ...

    Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed:

        [    0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae
        [    0.008118] PAE forced!

    On the live-CD session, I tried installing Lubuntu by double clicking the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen, again using forcepae. After a while, I receive the following error message:

        The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again.

    Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with the forcepae option; no errors were found.

    Now I am wondering how to find the error, or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back."

    Thanks for some hints! Perhaps some of the following can help. (A short flag-checking snippet follows the dumps below.)

    On Lubuntu 12.04:

        $ cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 13
        model name      : Intel(R) Pentium(R) M processor 1.50GHz
        stepping        : 6
        microcode       : 0x17
        cpu MHz         : 600.000
        cache size      : 2048 KB
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 2
        wp              : yes
        flags           : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2
        bogomips        : 1284.76
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 32 bits physical, 32 bits virtual
        power management:

        $ uname -a
        Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04.5 LTS
        Release:        12.04
        Codename:       precise

        $ cpuid
        eax in    eax      ebx      ecx      edx
        00000000 00000002 756e6547 6c65746e 49656e69
        00000001 000006d6 00000816 00000180 afe9f9bf
        00000002 02b3b001 000000f0 00000000 2c04307d
        80000000 80000004 00000000 00000000 00000000
        80000001 00000000 00000000 00000000 00000000
        80000002 20202020 20202020 65746e49 2952286c
        80000003 6e655020 6d756974 20295228 7270204d
        80000004 7365636f 20726f73 30352e31 007a4847

        Vendor ID: "GenuineIntel"; CPUID level 2

        Intel-specific functions:
        Version 000006d6:
        Type 0 - Original OEM
        Family 6 - Pentium Pro
        Model 13 - Stepping 6
        Reserved 0

        Brand index: 22 [not in table]
        Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz"
        CLFLUSH instruction cache line size: 8

        Feature flags afe9f9bf:
        FPU    Floating Point Unit
        VME    Virtual 8086 Mode Enhancements
        DE     Debugging Extensions
        PSE    Page Size Extensions
        TSC    Time Stamp Counter
        MSR    Model Specific Registers
        MCE    Machine Check Exception
        CX8    COMPXCHG8B Instruction
        SEP    Fast System Call
        MTRR   Memory Type Range Registers
        PGE    PTE Global Flag
        MCA    Machine Check Architecture
        CMOV   Conditional Move and Compare Instructions
        FGPAT  Page Attribute Table
        CLFSH  CFLUSH instruction
        DS     Debug store
        ACPI   Thermal Monitor and Clock Ctrl
        MMX    MMX instruction set
        FXSR   Fast FP/MMX Streaming SIMD Extensions save/restore
        SSE    Streaming SIMD Extensions instruction set
        SSE2   SSE2 extensions
        SS     Self Snoop
        TM     Thermal monitor
        31     reserved

        TLB and cache info:
        b0: unknown TLB/cache descriptor
        b3: unknown TLB/cache descriptor
        02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries
        f0: unknown TLB/cache descriptor
        7d: unknown TLB/cache descriptor
        30: unknown TLB/cache descriptor
        04: Data TLB: 4MB pages, 4-way set assoc, 8 entries
        2c: unknown TLB/cache descriptor

    On Lubuntu 14.04.1 live-CD with forcepae:

        $ cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 13
        model name      : Intel(R) Pentium(R) M processor 1.50GHz
        stepping        : 6
        microcode       : 0x17
        cpu MHz         : 600.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fdiv_bug        : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 2
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2
        bogomips        : 1284.68
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 32 bits virtual
        power management:

        $ uname -a
        Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04.1 LTS
        Release:        14.04
        Codename:       trusty

        $ cpuid
        CPU 0:
        vendor_id = "GenuineIntel"
        version information (1/eax):
           processor type  = primary processor (0)
           family          = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6)
           model           = 0xd (13)
           stepping id     = 0x6 (6)
           extended family = 0x0 (0)
           extended model  = 0x0 (0)
           (simple synth)  = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm
        miscellaneous (1/ebx):
           process local APIC physical ID = 0x0 (0)
           cpu count                      = 0x0 (0)
           CLFLUSH line size              = 0x8 (8)
           brand index                    = 0x16 (22)
        brand id = 0x16 (22): Intel Pentium M, .13um
        feature information (1/edx):
           x87 FPU on chip                        = true
           virtual-8086 mode enhancement          = true
           debugging extensions                   = true
           page size extensions                   = true
           time stamp counter                     = true
           RDMSR and WRMSR support                = true
           physical address extensions            = false
           machine check exception                = true
           CMPXCHG8B inst.                        = true
           APIC on chip                           = false
           SYSENTER and SYSEXIT                   = true
           memory type range registers            = true
           PTE global bit                         = true
           machine check architecture             = true
           conditional move/compare instruction   = true
           page attribute table                   = true
           page size extension                    = false
           processor serial number                = false
           CLFLUSH instruction                    = true
           debug store                            = true
           thermal monitor and clock ctrl         = true
           MMX Technology                         = true
           FXSAVE/FXRSTOR                         = true
           SSE extensions                         = true
           SSE2 extensions                        = true
           self snoop                             = true
           hyper-threading / multi-core supported = false
           therm. monitor                         = true
           IA64                                   = false
           pending break event                    = true
        feature information (1/ecx):
           PNI/SSE3: Prescott New Instructions     = false
           PCLMULDQ instruction                    = false
           64-bit debug store                      = false
           MONITOR/MWAIT                           = false
           CPL-qualified debug store               = false
           VMX: virtual machine extensions         = false
           SMX: safer mode extensions              = false
           Enhanced Intel SpeedStep Technology     = true
           thermal monitor 2                       = true
           SSSE3 extensions                        = false
           context ID: adaptive or shared L1 data  = false
           FMA instruction                         = false
           CMPXCHG16B instruction                  = false
           xTPR disable                            = false
           perfmon and debug                       = false
           process context identifiers             = false
           direct cache access                     = false
           SSE4.1 extensions                       = false
           SSE4.2 extensions                       = false
           extended xAPIC support                  = false
           MOVBE instruction                       = false
           POPCNT instruction                      = false
           time stamp counter deadline             = false
           AES instruction                         = false
           XSAVE/XSTOR states                      = false
           OS-enabled XSAVE/XSTOR                  = false
           AVX: advanced vector extensions         = false
           F16C half-precision convert instruction = false
           RDRAND instruction                      = false
           hypervisor guest status                 = false
        cache and TLB information (2):
           0xb0: instruction TLB: 4K, 4-way, 128 entries
           0xb3: data TLB: 4K, 4-way, 128 entries
           0x02: instruction TLB: 4M pages, 4-way, 2 entries
           0xf0: 64 byte prefetching
           0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines
           0x30: L1 cache: 32K, 8-way, 64 byte lines
           0x04: data TLB: 4M pages, 4-way, 8 entries
           0x2c: L1 data cache: 32K, 8-way, 64 byte lines
        extended feature flags (0x80000001/edx):
           SYSCALL and SYSRET instructions         = false
           execution disable                       = false
           1-GB large page support                 = false
           RDTSCP                                  = false
           64-bit extensions technology available  = false
        Intel feature flags (0x80000001/ecx):
           LAHF/SAHF supported in 64-bit mode      = false
           LZCNT advanced bit manipulation         = false
           3DNow! PREFETCH/PREFETCHW instructions  = false
        brand = " Intel(R) Pentium(R) M processor 1.50GHz"
        (multi-processing synth): none
        (multi-processing method): Intel leaf 1
        (synth) = Intel Pentium M (Dothan B1), 90nm
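    Since everything above hinges on whether the pae flag shows up in /proc/cpuinfo, a quick scripted check (equivalent to grep -w pae /proc/cpuinfo, shown here in Python) may save wading through the full dumps when comparing the two kernels:

        # Does this CPU, as reported by the running kernel, advertise PAE?
        with open("/proc/cpuinfo") as cpuinfo:
            flag_lines = [line for line in cpuinfo if line.startswith("flags")]
        has_pae = any("pae" in line.split(":", 1)[1].split() for line in flag_lines)
        print("PAE flag present" if has_pae else "no PAE flag reported")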

    Read the article

  • Our Look at Opera 10.50 Web Browser

    - by Asian Angel
    Everyone has been talking about the newest version of Opera recently but perhaps you have not looked at it too closely yet. Today we will take a look at 10.50 and let you see what this “new browser” is all about. The New Engines Carakan JavaScript Engine: Runs web applications up to 7 times faster than its predecessor Futhark Vega Graphics Library: Enables super fast and smooth graphics on everything from tab switching to webpage animation Presto 2.5: Provides support for HTML5, CSS2.1 and the latest CSS3 standards A Look at the Features Available If you have installed or used older versions of Opera before then the default look after a clean install will probably seem rather different. The main differences in appearance are mainly located within the “glass border” areas of the browser. The “Speed Dial” setup looks and works just as well as in previous versions. You can set a favorite wallpaper or image as your background and choose the number of “dials” using the “Configure Speed Dial Command”. One of the “standout” differences is the “O Button”. All of the menus have been condensed into this single access point but it only takes a few moments to find what you are looking for. If you have used the style before in earlier versions of Opera some of the items have been moved around. For those who prefer the “Menu Bar” that can be easily restored using the “Show Menu Bar Command”. If desired you can actually “extend” the “Tab Bar” downwards to display thumbnails of your open tabs. Just use your mouse to grab the bottom of the “Tab Bar” and adjust it to suit your personal needs. The only problem with this feature is that it will quickly use up a good sized portion of your available UI and browser window space. The “Password Manager” is ready to access when needed…the background for the button will turn a shiny metallic blue when you open a webpage that you have “Login Information” saved for. One of the new features is a small “Recycle Bin Button” in the upper right corner. Clicking on this will display a list of recently closed tabs letting you have easy access to any tabs that you may have accidentally closed. This is definitely a great feature to have as an easy access button. For those who were used to how the “Zoom Feature” looked before it has a new “look” to it. Instead of the pop-up menu-type listing of “view sizes” present before you now have a slider button that you can use to adjust the zooming level. For our default setup here the “Sidebar Panels” available were: “Bookmarks, Widgets, Unite, Notes, Downloads, History, & Panels”. Additional panels such as “Links, Windows, Search, Info, etc.” are available if you want and/or need them (accessible using the “Panels Plus Sign Button”). The “Opera Link Button” makes it easy for you to synchronize your “Speed Dial, Bookmarks, Personal Bar, Custom Searches, History & Notes”. Note: “Opera Link” requires an account and can be signed up for using the link provided below. Want to share files with your family and friends? “Unite” allows you to do that and more. With “Unite” you can: “Stream Music, Show Photo Galleries, Share Files and/or Folders, & host webpages directly from your browser”. We have a more in-depth look at “Unite” in our article here. Note: Use of “Unite” requires an Opera account. Got a slow internet connection? “Opera Turbo” can help with that by running the web traffic through their “compression servers” to speed up your web browsing. 
Keep in mind that “Opera Turbo” will not engage if you are accessing a secure website (i.e. your bank’s website), thus preserving your security. Note: “Opera Turbo” can be set up to automatically detect slow internet connections (i.e. crowded Wi-Fi in a cafe).

Opera now has a built-in “Private Browsing Mode” for those who prefer anonymous browsing and want to keep the “history records clean” on their computer. To access it, go to “Tabs and windows” and select “New private tab” or “New private window” as desired. When you open your new “Private Tab or Window” you will see the following message with details on how Opera will handle browsing information, and a large “door hanger symbol”. Notice that the one tab is locked into “Private Browsing Mode” while the others are still working in “Regular Browsing Mode”. Very nice! A miniature version of the “door hanger symbol” will be present on any tab that is locked into “Private Browsing Mode”.

If you are using Windows 7 then you will love how things look from your “Taskbar”. Here you can see four very nice looking thumbnails for the tabs that we had open. All that you have to do is click on the desired thumbnail… The “Context Menu” looks just as lovely as the thumbnails and definitely has some terrific functionality built into it.

Add Enhanced Aero Capability
If you love “Aero” and want more for your new Opera install then we have the perfect theme for you. The theme’s name is Z1-AV69 and once you have downloaded it you will need to place it in the “Skins Subfolder” in Opera’s “Program Files Folder”. Note: For our example we used version 1.10 but version 2.00 is now available (link provided below). Once you have restarted Opera, go to the “O Menu” and select “Appearance”. When the “Appearance Window” opens, click on “Z1-Glass Skin” and then click “OK”. All of a sudden you will have more “Aero Goodness” to enjoy. Compare this screenshot with the one at the top of this article… the only part that is not transparent now is the browser window area itself.

Want even more “Aero Goodness”? Right click on the “Tab Bar” and set “Tab Bar Placement” to “Left”. Note: You can achieve the same effect by setting the “Tab Bar Placement” to “Right”. With the “Speed Dial” visible you will be able to see your wallpaper with ease. While this is obviously not for everyone, it does make for a great visual trick.

Portable Versions
Perhaps you need this wonderful new version of Opera to go with you wherever you go during the day. Not a problem… just visit the Opera USB website to choose a version that works best for you. You can select from “Zip or Exe” setup files and, if needed, update an older portable version using a “Zipped Update Files Package”. If you are updating an older version, keep in mind that you will need to delete the old OperaUSB.exe file due to changes with the new setup files. During our tests, updating older portable versions went well for the most part, but we did experience a few “odd UI quirks” here and there… so we recommend setting up a clean install if possible.

Conclusion
The new 10.50 release is a pleasure to use and is a recommended install for your system. Whether you are considering trying Opera for the first time or have been using it for a bit, we think that you will be pleased with everything that the 10.50 release has to offer. For those who would like to add User Scripts to Opera, be certain to look at our how-to article here. 
Links
Download Opera 10.50 for your location (Windows)
Get the latest Snapshot versions for Linux & Mac
Sign up for an Opera Link account
View In-Depth detail on Opera 10.50’s features
Download the Z1-AV69 Aero Theme
Download Portable Opera 10.50

    Read the article

  • How to subscribe to the free Oracle Linux errata yum repositories

    - by Lenz Grimmer
Now that updates and errata for Oracle Linux are available for free (both as in beer and freedom), here's a quick HOWTO on how to subscribe your Oracle Linux system to the newly added yum repositories on our public yum server, assuming that you just installed Oracle Linux from scratch, e.g. by using the installation media (ISO images) available from the Oracle Software Delivery Cloud. You need to download the appropriate yum repository configuration file from the public yum server and install it in the yum repository directory. For Oracle Linux 6, the process would look as follows: as the root user, run the following command:

[root@oraclelinux62 ~]# wget http://public-yum.oracle.com/public-yum-ol6.repo \
 -P /etc/yum.repos.d/
--2012-03-23 00:18:25-- http://public-yum.oracle.com/public-yum-ol6.repo
Resolving public-yum.oracle.com... 141.146.44.34
Connecting to public-yum.oracle.com|141.146.44.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1461 (1.4K) [text/plain]
Saving to: “/etc/yum.repos.d/public-yum-ol6.repo”
100%[=================================================>] 1,461 --.-K/s in 0s
2012-03-23 00:18:26 (37.1 MB/s) - “/etc/yum.repos.d/public-yum-ol6.repo” saved [1461/1461]

For Oracle Linux 5, the file name would be public-yum-ol5.repo in the URL above instead. The "_latest" repositories that contain the errata packages are already enabled by default — you can simply pull in all available updates by running "yum update" next:

[root@oraclelinux62 ~]# yum update
Loaded plugins: refresh-packagekit, security
ol6_latest | 1.1 kB 00:00
ol6_latest/primary | 15 MB 00:42
ol6_latest 14643/14643
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package at.x86_64 0:3.1.10-43.el6 will be updated
---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be an update
---> Package autofs.x86_64 1:5.0.5-39.el6 will be updated
---> Package autofs.x86_64 1:5.0.5-39.el6_2.1 will be an update
---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6 will be updated
---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update
---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6 will be updated
---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update
---> Package cvs.x86_64 0:1.11.23-11.el6_0.1 will be updated
---> Package cvs.x86_64 0:1.11.23-11.el6_2.1 will be an update
[...]
---> Package yum.noarch 0:3.2.29-22.0.1.el6 will be updated
---> Package yum.noarch 0:3.2.29-22.0.2.el6_2.2 will be an update
---> Package yum-plugin-security.noarch 0:1.1.30-10.el6 will be updated
---> Package yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 will be an update
---> Package yum-utils.noarch 0:1.1.30-10.el6 will be updated
---> Package yum-utils.noarch 0:1.1.30-10.0.1.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================
Package Arch Version Repository Size
=====================================================================================
Installing:
kernel x86_64 2.6.32-220.7.1.el6 ol6_latest 24 M
kernel-uek x86_64 2.6.32-300.11.1.el6uek ol6_latest 21 M
kernel-uek-devel x86_64 2.6.32-300.11.1.el6uek ol6_latest 6.3 M
Updating:
at x86_64 3.1.10-43.el6_2.1 ol6_latest 60 k
autofs x86_64 1:5.0.5-39.el6_2.1 ol6_latest 470 k
bind-libs x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 839 k
bind-utils x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 178 k
cvs x86_64 1.11.23-11.el6_2.1 ol6_latest 711 k
[...] 
xulrunner x86_64 10.0.3-1.0.1.el6_2 ol6_latest 12 M
yelp x86_64 2.28.1-13.el6_2 ol6_latest 778 k
yum noarch 3.2.29-22.0.2.el6_2.2 ol6_latest 987 k
yum-plugin-security noarch 1.1.30-10.0.1.el6 ol6_latest 36 k
yum-utils noarch 1.1.30-10.0.1.el6 ol6_latest 94 k

Transaction Summary
=====================================================================================
Install 3 Package(s)
Upgrade 96 Package(s)

Total download size: 173 M
Is this ok [y/N]: y
Downloading Packages:
(1/99): at-3.1.10-43.el6_2.1.x86_64.rpm | 60 kB 00:00
(2/99): autofs-5.0.5-39.el6_2.1.x86_64.rpm | 470 kB 00:01
(3/99): bind-libs-9.7.3-8.P3.el6_2.2.x86_64.rpm | 839 kB 00:02
(4/99): bind-utils-9.7.3-8.P3.el6_2.2.x86_64.rpm | 178 kB 00:00
[...]
(96/99): yelp-2.28.1-13.el6_2.x86_64.rpm | 778 kB 00:02
(97/99): yum-3.2.29-22.0.2.el6_2.2.noarch.rpm | 987 kB 00:03
(98/99): yum-plugin-security-1.1.30-10.0.1.el6.noarch.rpm | 36 kB 00:00
(99/99): yum-utils-1.1.30-10.0.1.el6.noarch.rpm | 94 kB 00:00
-------------------------------------------------------------------------------------
Total 306 kB/s | 173 MB 09:38
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Retrieving key from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
Importing GPG key 0xEC551F03:
Userid: "Oracle OSS group (Open Source Software group) "
From : http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : yum-3.2.29-22.0.2.el6_2.2.noarch 1/195
Updating : xorg-x11-server-common-1.10.4-6.el6_2.3.x86_64 2/195
Updating : kernel-uek-headers-2.6.32-300.11.1.el6uek.x86_64 3/195
Updating : 12:dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 4/195
Updating : tzdata-java-2011n-2.el6.noarch 5/195
Updating : tzdata-2011n-2.el6.noarch 6/195
Updating : glibc-common-2.12-1.47.el6_2.9.x86_64 7/195
Updating : glibc-2.12-1.47.el6_2.9.x86_64 8/195
[...]
Cleanup : kernel-firmware-2.6.32-220.el6.noarch 191/195
Cleanup : kernel-uek-firmware-2.6.32-300.3.1.el6uek.noarch 192/195
Cleanup : glibc-common-2.12-1.47.el6.x86_64 193/195
Cleanup : glibc-2.12-1.47.el6.x86_64 194/195
Cleanup : tzdata-2011l-4.el6.noarch 195/195

Installed:
kernel.x86_64 0:2.6.32-220.7.1.el6
kernel-uek.x86_64 0:2.6.32-300.11.1.el6uek
kernel-uek-devel.x86_64 0:2.6.32-300.11.1.el6uek

Updated:
at.x86_64 0:3.1.10-43.el6_2.1
autofs.x86_64 1:5.0.5-39.el6_2.1
bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2
bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2
cvs.x86_64 0:1.11.23-11.el6_2.1
dhclient.x86_64 12:4.1.1-25.P1.el6_2.1
[...]
xorg-x11-server-common.x86_64 0:1.10.4-6.el6_2.3
xulrunner.x86_64 0:10.0.3-1.0.1.el6_2
yelp.x86_64 0:2.28.1-13.el6_2
yum.noarch 0:3.2.29-22.0.2.el6_2.2
yum-plugin-security.noarch 0:1.1.30-10.0.1.el6
yum-utils.noarch 0:1.1.30-10.0.1.el6

Complete!

At this point, your system is fully up to date. As the kernel was updated as well, a reboot is the recommended next action. If you want to install the latest release of the Unbreakable Enterprise Kernel Release 2 as well, you need to edit the .repo file and enable the respective yum repository (e.g. "ol6_UEK_latest" for Oracle Linux 6 and "ol5_UEK_latest" for Oracle Linux 5) manually, by setting enabled to "1". The next yum update run will download and install the second release of the Unbreakable Enterprise Kernel, which will be enabled after the next reboot. -Lenz
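For reference, here is roughly what the UEK stanza in /etc/yum.repos.d/public-yum-ol6.repo looks like after that edit. The name and baseurl values shown below are indicative and may differ slightly in your copy of the file; the change that matters is enabled=1:

[ol6_UEK_latest]
name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1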

    Read the article

  • Quick guide to Oracle IRM 11g: Creating your first sealed document

    - by Simon Thorpe
Quick guide to Oracle IRM 11g index
The previous articles in this guide have detailed how to install, configure and secure your Oracle IRM 11g service. This article walks you through the process of creating your first context and securing a document against it. I should mention that it would be worth reviewing the following to ensure your installation is ready for that all-important first document:
Ensure you have correctly configured the keystore for the IRM wrapper keys. If this is not correctly configured, creating the context below will fail.
Make sure the IRM server URL correctly resolves and uses the right protocol (HTTP or HTTPS).

Contents
Create the first context
Install the Oracle IRM Desktop
Seal your first document

Create the first context
In Oracle IRM 11g there is a built-in classification and rights system called the "standard rights model", which is based on 10 years of customer use cases and innovation. It is a system which enables IRM to scale massively whilst retaining the ability to balance security and usability, and it also separates duties by allowing contacts in the business to own classifications. The final article in this guide goes into detail on this built-in classification model, but for the purposes of this current article all we need to do is create at least one context to test our system out.

With a new IRM server there is a set of predefined context templates and roles which, again, are set up in a way which reflects the most common use we've learned from our customers. We will use these out-of-the-box configurations as they are to create the first context against which we will seal some content.

First, log in to your Oracle IRM Management Website located at https://irm.company.com/irm_rights/. Currently the system is only configured to use the built-in LDAP for users, so use the only account we have at the moment, which by default is weblogic. Once logged in, switch to the Contexts tab. Click on the New Context icon () in the menu bar on the left. In the resulting dialog select the Standard context template and enter a name for the context. Then just hit finish; the weblogic account will automatically be made the manager. You'll now see your brand new context ready for users to be assigned.

Now click on the Assign Role icon () in the menu bar and in the resulting dialog search for your only user account, weblogic, and add it to the list on the right. Now select a role for this user. Because we need to create a document with this user we must select contributor, as this is the only role which allows for the ability to seal. Finally hit next and then finish. We now have a context with a user that has the rights to create a document. The next step is to configure the IRM Desktop to get these rights from the server.

Install the Oracle IRM Desktop
Before we can seal a document we need the client software installed. Oracle IRM has a very small, lightweight client called the Oracle IRM Desktop which can be freely downloaded in 27 languages from here. Double click on the installer and click on next... Next again... And finally on install... Very easy. You may get a warning about closing Outlook, Word or another application, and most of the time no reboots are required. Once it is installed you will see the IRM Desktop icon running in your tool tray, at the bottom right of the desktop.

Seal your first document
Finally the prize is within reach: creating your first sealed document. 
The server is running, we've got a context ready and a user assigned a role in the context, but there is one simple and obvious hoop left to jump through. To seal a document we need to have the user's rights cached on the local machine. For this to take place, the IRM Desktop needs to know where the Oracle IRM server is on the network so it can synchronize these rights and then be able to seal a document. The usual way for the IRM Desktop to learn about the IRM server is automatic: it picks it up when you open an existing piece of content that someone has sent you... ack. Bit of a chicken-or-the-egg dilemma. The solution is to manually tell the IRM Desktop the location of the IRM server and then force a synchronization of rights.

Right click on the Oracle IRM Desktop icon in the system tray and select Options.... Then switch to the Servers tab in the resulting dialog. There are no servers in the list because you've never opened any content. This list is usually populated automatically, but we are going to add a server manually, so click on New....

Into the dialog enter the full URL to the IRM server. Note that this time you use the path /irm_desktop/ and not /irm_rights/. You can see an example from the image below. Click on the validate button and you'll be asked to authenticate. Enter your weblogic username and password and also check the Remember my password check box. Click OK and the IRM Desktop will confirm a successful connection to the server. OK all the dialogs and we are ready to synchronize this user's rights to the desktop.

Right click once more on the Oracle IRM Desktop icon in the system tray. Now the Synchronize menu option is available. Select this and the IRM Desktop will talk to the IRM server, authenticate using your weblogic account and get your rights to the context we created. Because this is the first time this user has communicated with the IRM server, the IRM Desktop presents a privacy policy dialog. This is a chance for the business to ask users to agree to any policy about the use of IRM before opening secured documents. In our guide we've not bothered to set up this URL, so just click on the check box and hit Accept. The IRM Desktop will then talk to the server, get your rights and display a success dialog.

Let's protect a document
Now we are ready to seal a piece of content. In my guide I'm going to protect a Microsoft Word document. This means I have to have a copy of Office installed; in this guide I'm using Microsoft Office 2007. You could also seal a PDF document, for which you'll need to download and install Adobe Acrobat Reader. A very simple test could be to seal a GIF/JPG/PNG or a piece of HTML, because these are rendered using Internet Explorer. But as I say, I'm going to protect a Word document. The following example demonstrates choosing a file in Windows Explorer; there are many ways to seal a file and you can watch a few in this video.

Open a copy of Windows Explorer and locate the file you wish to seal. Right click on the document and select Seal To -> Context. You are now presented with the Select Context dialog. You'll now have a sealed copy of the document sat in the same location. Double click on this document and it will open, again using the credentials you've already provided. That is it. Now you just need to add more users, more documents, more classifications, and start exploring the different roles and experimenting with different offline periods etc. 
You may wish to set up the server against an existing LDAP or Active Directory environment instead of using the built-in WebLogic LDAP store. You can read how to use your corporate directory here. But before we finish this guide, there is one more article, and arguably the most important article of all. Next I discuss the all-important decision making surrounding the actual implementation of Oracle IRM inside your business. Who has rights to what? How do you map contexts to your existing business practices? It is the next article which actually ensures you deploy a successful IRM solution, by looking at the business, understanding how they use your sensitive information, and then configuring Oracle IRM to reflect their use.

    Read the article

  • Change or Reset Windows Password from a Ubuntu Live CD

    - by Trevor Bekolay
If you can’t log in even after trying your twelve passwords, or you’ve inherited a computer complete with password-protected profiles, worry not – you don’t have to do a fresh install of Windows. We’ll show you how to change or reset your Windows password from a Ubuntu Live CD.

This method works for all of the NT-based versions of Windows – anything from Windows 2000 and later, basically. And yes, that includes Windows 7. You’ll need a Ubuntu 9.10 Live CD, or a bootable Ubuntu 9.10 Flash Drive. If you don’t have one, or have forgotten how to boot from the flash drive, check out our article on creating a bootable Ubuntu 9.10 flash drive. The program that lets us manipulate Windows passwords is called chntpw. The steps to install it are different in 32-bit and 64-bit versions of Ubuntu.

Installation: 32-bit
Open up Synaptic Package Manager by clicking on System at the top of the screen, expanding the Administration section, and clicking on Synaptic Package Manager.
chntpw is found in the universe repository. Repositories are a way for Ubuntu to group software together so that users are able to choose if they want to use only completely open source software maintained by Ubuntu developers, or branch out and use software with different licenses and maintainers. To enable software from the universe repository, click on Settings > Repositories in the Synaptic window. Add a checkmark beside the box labeled “Community-maintained Open Source software (universe)” and then click close.
When you change the repositories you are selecting software from, you have to reload the list of available software. In the main Synaptic window, click on the Reload button. The software lists will be downloaded. Once downloaded, Synaptic must rebuild its search index. The label over the text field by the Search button will read “Rebuilding search index.” When it reads “Quick search,” type chntpw in the text field. The package will show up in the list.
Click on the checkbox near the chntpw name. Click on Mark for Installation. chntpw won’t actually be installed until you apply the changes you’ve made, so click on the Apply button in the Synaptic window now. You will be prompted to accept the changes. Click Apply. The changes should be applied quickly. When they’re done, click Close. chntpw is now installed! You can close Synaptic Package Manager. Skip to the section titled Using chntpw to reset your password.

Installation: 64-bit
The version of chntpw available in Ubuntu’s universe repository will not work properly on a 64-bit machine. Fortunately, a patched version exists in Debian’s Unstable branch, so let’s download it from there and install it manually.
Open Firefox. Whether it’s your preferred browser or not, it’s very readily accessible in the Ubuntu Live CD environment, so it will be the easiest to use. There’s a shortcut to Firefox in the top panel. Navigate to http://packages.debian.org/sid/amd64/chntpw/download and download the latest version of chntpw for 64-bit machines. Note: In most cases it would be best to add the Debian Unstable branch to a package manager, but since the Live CD environment will revert to its original state once you reboot, it’ll be faster to just download the .deb file.
Save the .deb file to the default location. You can close Firefox if desired. Open a terminal window by clicking on Applications at the top-left of the screen, expanding the Accessories folder, and clicking on Terminal. 
In the terminal window, enter the following text, hitting enter after each line:
cd Downloads
sudo dpkg -i chntpw*
chntpw will now be installed.

Using chntpw to reset your password
Before running chntpw, you will have to mount the hard drive that contains your Windows installation. In most cases, Ubuntu 9.10 makes this simple. Click on Places at the top-left of the screen. If your Windows drive is easily identifiable – usually by its size – then left click on it. If it is not obvious, then click on Computer and check out each hard drive until you find the correct one. The correct hard drive will have the WINDOWS folder in it. When you find it, make a note of the drive’s label that appears in the menu bar of the file browser.
If you don’t already have one open, start a terminal window by going to Applications > Accessories > Terminal. In the terminal window, enter the commands
cd /media
ls
pressing enter after each line. You should see one or more strings of text appear; one of those strings should correspond with the drive label you noted in the file browser earlier. Change to that directory by entering the command
cd <hard drive label>
Since the hard drive label will be very annoying to type in, you can use a shortcut by typing in the first few letters or numbers of the drive label (capitalization matters) and pressing the Tab key. It will automatically complete the rest of the string (if those first few letters or numbers are unique).
We want to switch to a certain Windows directory. Enter the command:
cd WINDOWS/system32/config/
Again, you can use tab-completion to speed up entering this command.
To change or reset the administrator password, enter:
sudo chntpw SAM
SAM is the registry hive file that contains the Windows user account database. You will see some text appear, including a list of all of the users on your system. At the bottom of the terminal window, you should see a prompt that begins with “User Edit Menu:” and offers four choices. We recommend that you clear the password to blank (you can always set a new password in Windows once you log in). To do this, enter “1” and then “y” to confirm. If you would like to change the password instead, enter “2”, then your desired password, and finally “y” to confirm.
If you would like to reset or change the password of a user other than the administrator, enter:
sudo chntpw -u <username> SAM
From here, you can follow the same steps as before: enter “1” to reset the password to blank, or “2” to change it to a value you provide. And that’s it!

Conclusion
chntpw is a very useful utility provided for free by the open source community. It may make you think twice about how secure the Windows login system is, but knowing how to use chntpw can save your tail if your memory fails you two or eight times! 
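For quick reference, the whole terminal session from the steps above boils down to something like the following. The drive label WINDOWS_DRIVE is just an illustrative placeholder; use the label you noted in the file browser:
cd /media
ls
cd WINDOWS_DRIVE
cd WINDOWS/system32/config/
sudo chntpw SAM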

    Read the article

  • How to debug manifest errors?

    - by Rryk
I am creating an application that depends on a third-party library, which in turn depends on MSVCP90D.dll (it was compiled with debug symbols). The application fails to start and provides an error message:

I have found such a library in C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\amd64\Microsoft.VC90.DebugCRT and C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\x86\Microsoft.VC90.DebugCRT. As you can see, one of them is 64-bit, while the other is 32-bit. When I placed the 64-bit one into the directory of the application, the application silently crashed while loading (the log from the Visual Studio Output window is below). With the 32-bit one I get another error message:

If I press Abort, the program shuts down. Retry results in breaking into a debug session for the crt0msg.c file. This is a system file and I have no idea how to debug it. If I press Ignore I get yet another error message:

So the question is: how do you debug such errors? Please give me some links where I can read more about it, or point out what exactly I should do in such cases. I know this relates to manifest problems – please give me a good resource where I can read about manifests, since what I have found has confused me even more.

This is the log for the 64-bit version of the MSVCP90D.dll library:
'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.exe', Symbols loaded.
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ntdll.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\kernel32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\KernelBase.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\user32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\gdi32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\lpk.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\usp10.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\msvcrt.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\advapi32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\sechost.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\rpcrt4.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\sspicli.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\cryptbase.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\shell32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\shlwapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\winmm.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\version.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\psapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Symbols loaded (source information stripped). 
'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll', Symbols loaded.
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\opengl32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\glu32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ddraw.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\dciman32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\setupapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\dwmapi.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'C:\Windows\SysWOW64\secur32.dll', Symbols loaded (source information stripped).
'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll', Symbols loaded.
'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll'
'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\secur32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\opengl32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ddraw.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dwmapi.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dciman32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\glu32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleacc.dll'
'chrome.exe': Unloaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleaut32.dll'
'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ole32.dll'
'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped).
The program '[1152] chrome.exe: Native' has exited with code 9 (0x9).
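For context, the dependency the loader is trying to satisfy is declared in an embedded manifest. A typical fragment requesting the debug CRT looks roughly like the one below; the version and processorArchitecture values are illustrative and must match what your binaries were actually built against, so check the manifest embedded in your own DLLs:

<dependency>
  <dependentAssembly>
    <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT"
        version="9.0.21022.8" processorArchitecture="x86"
        publicKeyToken="1fc8b3b9a1e18e3b" />
  </dependentAssembly>
</dependency>

On Windows Vista and later, the sxstrace tool can log exactly what the side-by-side loader resolved: run "sxstrace trace -logfile:sxs.etl", reproduce the failure, then run "sxstrace parse -logfile:sxs.etl -outfile:sxs.txt" and read the resulting text file.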

    Read the article

  • Web Self Service installation on Windows

    - by Rajesh Sharma
Web Self Service (WSS) installation on Windows is pretty straightforward, but you might face some issues if deployed under Tomcat. Here's a step-by-step guide to install Oracle Utilities Web Self Service on Windows.

The installation steps below were done on:
Oracle Utilities Framework version 2.2.0
Oracle Utilities Application - Customer Care & Billing version 2.2.0
Application server - Apache Tomcat 6.0.13 on default port 6500
Other settings include:
SPLBASE = C:\spl\CCBDEMO22
SPLENVIRON = CCBV22
SPLWAS = TCAT

Follow these steps for a Web Self Service installation on Windows:
Download the Web Self Service application from edelivery.
Copy the delivery file Release-SelfService-V2.2.0.zip from the Oracle Utilities Customer Care and Billing version 2.2.0 Web Self Service folder on the installation media to a directory on your Windows box where you would like to install the application, in our case a temporary folder C:\wss_temp.
Set up the application environment: execute splenviron.cmd -e <ENVIRON_NAME>
Create a base folder for the Self Service application named SelfService under %SPLEBASE%\splapp\applications
Install Oracle Utilities Web Self Service:
C:\wss_temp\Release-SelfService-V2.2.0>install.cmd -d %SPLEBASE%\splapp\applications\SelfService
Web Self Service installation menu. Populate environment values for each item.
********************************************************
Pick your installation options:
********************************************************
1. Destination directory name for installation.             | C:\spl\CCBDEMO22\splapp\applications\SelfService
2. Web Server Host.                                         | CCBV22
3. Web Server Port Number.                                  | 6500
4. Mail SMTP Host.                                          | CCBV22
5. Top Product Installation directory.                      | C:\spl\CCBDEMO22
6.     Web Application Server Type.                         | TCAT
7.     When OAS: SPLWeb OC4J instance name is required.     | OC4J1
8.     When WAS: SPLWeb server instance name is required.   | server1
P. Process the installation.
Each item in the above list should be configured for a successful installation. Choose an option to configure, or (P) to process the installation: P
Option 7 and Option 8 can be ignored for TCAT.
The above step installs the SelfService.war file in the destination directory. We need to explode this war file. Change directory to the installation destination folder, and run:
C:\spl\CCBDEMO22\splapp\applications\SelfService>jar -xf SelfService.war
Review SelfServiceConfig.properties and CMSelfServiceConfig.properties. Change any property values within the files specific to your installation/site. Generally the default settings apply; this exercise assumes that the WEB user already exists in your application database.
For more information on property file customization, refer to the Oracle Utilities Web Self Service Configuration section in the Customer Care & Billing Installation Guide.
Add a context entry in the server.xml file located under the Tomcat base folder C:\spl\CCBDEMO22\product\tomcatBase\conf
...
<!-- SPL Context -->
<Context path="" docBase="C:/spl/CCBDEMO22/splapp/applications/root" debug="0" privileged="true"/>
<Context path="/appViewer" docBase="C:/spl/CCBDEMO22/splapp/applications/appViewer" debug="0" privileged="true"/>
<Context path="/help" docBase="C:/spl/CCBDEMO22/splapp/applications/help" debug="0" privileged="true"/>
<Context path="/XAIApp" docBase="C:/spl/CCBDEMO22/splapp/applications/XAIApp" debug="0" privileged="true"/>
<Context path="/SelfService" docBase="C:/spl/CCBDEMO22/splapp/applications/SelfService" debug="0" privileged="true"/>
...

Add a user in the tomcat-users.xml file located under the Tomcat base folder C:\spl\CCBDEMO22\product\tomcatBase\conf
<user username="WEB" password="selfservice" roles="cisusers"/>
Note the password is "selfservice"; this is the default password set within the SelfServiceConfig.properties file with base64 encoding.

Restart the application (spl.cmd stop | start)

Although Apache Tomcat version 6.0.13 does not come with the admin pack, you can verify whether the SelfService application is loaded and running: go to the URL http://server:port/manager/list, in our case http://ccbv22:6500/manager/list. The following output will be displayed:
OK - Listed applications for virtual host localhost
/admin:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/ROOT/admin
/XAIApp:running:0:C:/spl/CCBDEMO22/splapp/applications/XAIApp
/host-manager:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/host-manager
/SelfService:running:0:C:/spl/CCBDEMO22/splapp/applications/SelfService
/appViewer:running:0:C:/spl/CCBDEMO22/splapp/applications/appViewer
/manager:running:1:C:/tomcat/apache-tomcat-6.0.13/webapps/manager
/help:running:0:C:/spl/CCBDEMO22/splapp/applications/help
/:running:0:C:/spl/CCBDEMO22/splapp/applications/root
Also ensure that the XAIApp is running.

Run the Oracle Utilities Web Self Service application at http://server:port/SelfService, in our case http://ccbv22:6500/SelfService

Still doesn't work, and you get a '503 HTTP response' at the time of customer registration? This is because the XAI service is still unavailable: initialize.waittime is set to a default value of 90 seconds for the XAI application to come up. Remember, WSS uses XAI to perform actions/validations on the CC&B database.  

    Read the article

  • Installing XULrunner, adding mozilla daily ppa doesn't help

    - by Lester Peabody
    I need to install xulrunner so I can use Redcar, but am unable to do so. I followed the steps in this question verbatim Install XULRunner 2.0. Here's the output from adding the Mozilla PPA: lpeabody@ubuntu:/var/www/drupal-7.8$ sudo add-apt-repository ppa:ubuntu-mozilla-daily/ppa You are about to add the following PPA to your system: Official PPA for Firefox and Thunderbird daily builds daily (or even multiple builds per day) for various mozilla projects and branches. For questions and bugs with software in this archive, please contact <email address hidden> or visit #ubuntu-mozillateam on freenode. More info: https://launchpad.net/~ubuntu-mozilla-daily/+archive/ppa Press [ENTER] to continue or ctrl-c to cancel adding it Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.n2JGd679HN --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv B34505EA326FEAEA07E3618DEF4186FE247510BE gpg: requesting key 247510BE from hkp server keyserver.ubuntu.com gpg: key 247510BE: "Launchpad PPA for Ubuntu Mozilla Daily Build Team" not changed gpg: Total number processed: 1 gpg: unchanged: 1 lpeabody@ubuntu:/var/www/drupal-7.8$ sudo apt-get update Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://extras.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Ign http://us.archive.ubuntu.com oneiric-backports InRelease Get:1 http://extras.ubuntu.com oneiric Release.gpg [72 B] Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://extras.ubuntu.com oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Hit http://us.archive.ubuntu.com oneiric-backports Release.gpg Hit http://extras.ubuntu.com oneiric/main Sources Hit http://extras.ubuntu.com oneiric/main i386 Packages Ign http://extras.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://us.archive.ubuntu.com oneiric-backports Release Ign http://extras.ubuntu.com oneiric/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Ign http://extras.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://us.archive.ubuntu.com oneiric/main i386 Packages Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit 
http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/main Sources Hit http://us.archive.ubuntu.com oneiric-backports/restricted Sources Hit http://us.archive.ubuntu.com oneiric-backports/universe Sources Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-backports/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/main Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/universe Translation-en Ign http://security.ubuntu.com oneiric-security InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net oneiric InRelease Hit http://security.ubuntu.com oneiric-security Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://security.ubuntu.com oneiric-security Release Get:2 http://ppa.launchpad.net oneiric Release.gpg [316 B] Hit http://ppa.launchpad.net oneiric Release Hit http://security.ubuntu.com oneiric-security/main Sources Get:3 http://ppa.launchpad.net oneiric Release [9,788 B] Hit http://security.ubuntu.com oneiric-security/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe Sources Hit http://security.ubuntu.com oneiric-security/multiverse Sources Hit http://security.ubuntu.com oneiric-security/main i386 Packages Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages Hit http://security.ubuntu.com oneiric-security/universe i386 Packages Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://security.ubuntu.com oneiric-security/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://security.ubuntu.com 
oneiric-security/main Translation-en Get:4 http://ppa.launchpad.net oneiric/main Sources [1,650 B] Get:5 http://ppa.launchpad.net oneiric/main i386 Packages [12.6 kB] Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en Hit http://security.ubuntu.com oneiric-security/restricted Translation-en Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Fetched 24.5 kB in 5s (4,853 B/s) Reading package lists... Done

After running the update, I attempted to install xulrunner, but get the following back:

lpeabody@ubuntu:/var/www/drupal-7.8$ sudo apt-get install xulrunner-2.0 xulrunner-2.0-dev xulrunner-2.0-gnome-support xulrunner-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package xulrunner-2.0-dev is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or is only available from another source
Package xulrunner-2.0 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or is only available from another source
E: Package 'xulrunner-2.0' has no installation candidate
E: Package 'xulrunner-2.0-dev' has no installation candidate
E: Unable to locate package xulrunner-2.0-gnome-support
E: Couldn't find any package by regex 'xulrunner-2.0-gnome-support'
E: Unable to locate package xulrunner-dev

If you need to know anything else please specify; I will be prompt with my answers if asked during the next 2 hours... so 2am EDT.

    Read the article

  • Creating and maintaining Orchard translations

    - by Bertrand Le Roy
Many volunteers have already stepped up to provide translations for Orchard. There are many challenges to overcome with translating such a project. Orchard is a very modular CMS, so the translation mechanism needs to account for the core as well as first and third party modules and themes. Another issue is that every new version of Orchard or of a module changes some localizable strings and adds new ones as others enter obsolescence. In order to address those problems, I've built a small Orchard module that automates some of the most complex tasks that maintaining a translation implies. In this post, I'll walk you through the operations I had to do to update the French translation for Orchard 1.0.

In order to make sure you translate all the first party modules, I would recommend that you start from a full source code enlistment. The reason is that I'll show how you can extract the default en-US translation from any source code enlistment. That enables you to create a translation that is even more up-to-date than what is currently on the site. Alternatively, you could start by downloading the current en-US translation. If you decide to do so, just skip the relevant paragraphs.

First, let's install the Orchard Translation Manager. I'm starting from a vanilla clone of the latest in the code repository. After you've setup the site, go into the dashboard and click on Gallery. Locate the Orchard Translation Manager in the list of modules and click "Install". Once the module is installed, you need to enable its one feature by going into Configuration/Features and clicking "Enable" next to Vandelay.TranslationManager.

We're done with the setup that we need in order to start our translation work. We'll now switch to the command-line and to our favorite text editor. Open a command-line on the Orchard web site folder. I found the easiest way to do this is to do a SHIFT+right-click on the Orchard.Web folder in Windows Explorer and to click "Open command window here". Type bin\orchard to enter the Orchard command-line environment. If you do a "help commands" you should see four commands in the list that came from the module we just installed: extract default translation, install translation, package translation and sync translation.

First, we're going to generate the default translation. Note that it is possible to generate that default translation for a specific list of modules and themes by using the /Extensions: switch, which should facilitate the translation of third party extensions, but in this tutorial we're going to generate it for the whole of the Orchard source code.
extract default translation /Output:\temp
This should have created an Orchard.en-us.po.zip file in the temp directory. Extract that archive into an orchard.po folder under \temp. The next step depends on whether you have an existing translation that you want to update or not. 
If you do have an existing translation, just extract it into the same \temp\orchard.po directory. That should result in a file structure where you have the default en-US translation alongside your own. If you don't have an existing translation, just continue, the commands will be the same. We are now going to synchronize those translations (or generate the stub for a new one if you didn't start from an existing translation). sync translation /Input:\temp\orchard.po /Culture:fr-FR After this command (where you should of course substitute fr-FR with the culture you're working on), we now have updated files that contain a few useful flags. Open each of the .po files under the culture you are working on (there should be around 36) with your favorite text editor. For all the strings that are still valid in the latest version, nothing changes and you don't need to do anything. For all the strings that disappeared from the default culture, the old translation will still be there but they will be prefixed with the following comment: # Obsolete translation Conveniently, all the obsolete strings will be grouped at the end of the file. You can select all those and delete them. For all the new strings, you will see the following comment: # Untranslated string This is where the hard work begins. You'll need to translate each of those new strings by entering the translation between the quotes in: msgstr "" Don't introduce hard carriage returns in the strings, just stay on one line (your text editor should do some reasonable wrapping so this shouldn't be a big deal). Once you're done with a file, save it. Make sure, and this is very important, that your text editor is saving using the UTF-8 encoding. In Notepad, that setting can be found in the file saving dialog by doing a "Save As" rather than a plain "Save": When all the po files have been edited, you are ready to package the translation for submission (a.k.a. sending e-mail to the localization mailing list). package translation /Culture:fr-FR /Input:\temp\orchard.po /Output:\temp You should now see a Orchard.fr-FR.po.zip file in temp that is ready to be submitted. That is, once you've tested it, which can be done by deploying it into the site: install translation \temp\orchard.fr-fr.po.zip Once this is done you can go into the dashboard under Configuration/Settings and click on "Add or remove supported cultures for the site". Choose your culture and click "Add". You can go back to settings and set the default culture. Save. You may now take a tour of the application and verify that everything works as expected: And that's it really. Creating a translation for Orchard is a matter of a few hours. If you don't see a translation for your culture, please consider creating it.
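To make the editing step concrete, here is what one entry looks like before and after translation. The string below is just an illustrative example, not necessarily one of the actual Orchard strings. Before:

# Untranslated string
msgid "Save"
msgstr ""

After editing (and removing the comment if you like):

msgid "Save"
msgstr "Enregistrer"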

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
Data is growing exponentially, and every organization with growing data is thinking of the next big innovation in the world of Big Data. Big data is indeed in every organization's future at some point. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts:
SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database
Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process:
NuoDB Migrator generates a schema for a target NuoDB database
It dumps data from the source database
It loads data into the target NuoDB database
Let’s see how we can migrate our data from SQL Server to NuoDB using a simple three-step approach. But before we do that, we will create a sample database in MSSQL; later we will migrate the same database to NuoDB:

Setup Step 1: Build sample data
CREATE DATABASE [Test];
CREATE TABLE [Department](
  [DepartmentID] [smallint] NOT NULL,
  [Name] VARCHAR(100) NOT NULL,
  [GroupName] VARCHAR(100) NOT NULL,
  [ModifiedDate] [datetime] NOT NULL,
  CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ([DepartmentID] ASC)
) ON [PRIMARY];
INSERT INTO Department
SELECT * FROM AdventureWorks2012.HumanResources.Department;
Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build this sample table any way you prefer.

Setup Step 2: Install 64-bit Java
Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure that JAVA_HOME is set in your environment settings, or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values.
Variable Name: JAVA_HOME
Variable Value: C:\Program Files\Java\jre7
Make sure you enter your Java installation directory in the Variable Value field.

Setup Step 3: Install a JDBC driver for SQL Server. 
    Setup Step 3: Install a JDBC driver for SQL Server

    There are two JDBC drivers available for SQL Server. Select the one you prefer by following one of the two links below:

    - Microsoft JDBC Driver
    - jTDS JDBC Driver

    In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I moved the driver's JAR file into C:\Program Files\NuoDB\tools\migrator\jar, as C:\Program Files\NuoDB is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.

    Migration Step 1: NuoDB Schema Generation

    Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test', and my username and password are also 'test'. My SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'.

        nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

    The above command generates a schema for all my SQL Server tables and writes it to the file C:\tmp\schema.sql. You can open schema.sql and execute it directly in your NuoDB instance; you can follow the link here to see how to execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

    Migration Step 2: Generate the Dump File of the Data

    Once you have recreated your schema in NuoDB, the next step is very easy. Here we create a CSV-format dump file containing all the data from all the tables in the SQL Server database. The command is very similar to the one above. Be aware that this step may take some time, depending on the size of your database.

        nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

    Once the above command completes successfully, you will find your CSV file in the C:\tmp\ folder. You do not have to do anything with it manually; the third and final step completes the migration.

    Migration Step 3: Load the Data into NuoDB

    The final step takes the CSV dump and loads it into the NuoDB database.

        nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

    Please note that the above command now targets the NuoDB database we have already created, named "MyTest". If the database does not exist, create it manually before executing the command. I have kept the username and password as "test", but please choose a more secure password for your own database.

    Voila! You're Done

    That's it. You are done: it took three setup steps and three migration steps to move your SQL Server database to NuoDB. You can now start exploring the database and building excellent scale-out applications.
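    For repeated migrations, the three phases can be chained in a small batch file. This is a hedged sketch that only re-uses the exact commands and flags shown above; adjust paths and credentials to your environment, and remember to run the generated schema.sql against NuoDB between the schema and load phases:

        @echo off
        rem Run from the NuoDB migrator folder; nuodb-migrator is the .bat shown above.
        cd /d "C:\Program Files\NuoDB\tools\migrator\bin"

        rem Phase 1: generate the NuoDB schema from SQL Server
        call nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver ^
          --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test ^
          --source.password=test --source.catalog=test --source.schema=dbo ^
          --output.path=/tmp/schema.sql

        rem (Execute /tmp/schema.sql in your NuoDB instance here, as described above.)

        rem Phase 2: dump the SQL Server data to CSV
        call nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver ^
          --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test ^
          --source.password=test --source.catalog=test --source.schema=dbo ^
          --output.type=csv --output.path=/tmp/dump.cat

        rem Phase 3: load the CSV dump into the target NuoDB database
        call nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest ^
          --target.schema=dbo --target.username=test --target.password=test ^
          --input.path=/tmp/dump.cat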
    In this blog post, I have done my best to lay out a simple and easy process that you can follow to migrate your app from SQL Server to NuoDB.

    Download NuoDB

    I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor on the concept of Administrative Domains. NuoDB has the concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs (Transaction Engines) and SMs (Storage Managers), but all are managed within the Admin Console for that particular domain.

    http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
    http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the NVIDIA CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (the installation required that no X server instance be running) with sudo service lightdm stop. Re-starting lightdm with sudo service lightdm start did not work: a couple of * Starting [...] lines were displayed, but the process hung. (I do not remember at exactly which point, but I think it was * Starting System V runlevel compatibility.) I manually rebooted my laptop, and ever since, booting seems to hang, usually around the * Starting anac(h)ronistic cron [OK] log line (though not consistently at that point). From then on, I am only able to interact with my system through a tty session (Ctrl-Alt-F1).

    I have tried purging and reinstalling both lightdm and gdm, as well as selecting each as the default display manager (through sudo dpkg-reconfigure [lightdm / gdm] or by manually editing /etc/X11/default-display-manager), using both apt-get and aptitude (which shouldn't make a difference anyway) after updating the packages, but the problem persists. Some of the responses I am getting are the following:

    - After running sudo dpkg-reconfigure lightdm (but not ... gdm) I get the following messages:
        dpkg-maintscript-helper: warning: environment variable DPKG_MAINTSCRIPT_NAME missing
        dpkg-maintscript-helper: warning: environment variable DPKG_MAINTSCRIPT_PACKAGE missing
    - After trying sudo service lightdm start or sudo start lightdm, I see the boot loading screen again but nothing changes. If I go back to the tty shell I see lightdm start/running, process <num>, but ps -e | grep lightdm gives no output.
    - After trying sudo service gdm start or sudo start gdm, I get the gdm start/running, process <num> message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else.

    Other candidate solutions I had found on the web included running startx, but when I try that I get the error output [...] Fatal server error: no screens found [...]. Moreover, I made sure that lightdm-gtk-greeter is installed, but that did not help either.

    Please excuse my not including complete outputs/logs; I am writing this post from another computer and it is hard to copy the complete logs manually. Also, I have seen several posts about similar problems, but either there was no fix, or the suggested one did not work for me. In closing: please help! I very much hope to avoid re-installing Ubuntu from scratch! :)

    Alex

    @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I am on a Dell XPS 15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs the NVIDIA drivers during its installation, I believe). Issuing the mentioned commands I get the following:

        ~$ uname -r
        3.0.0-12-generic
        ~$ lsmod | grep -i nvidia
        nvidia  11713772  0
        ~$ dmesg | grep -i nvidia
        [    8.980041] nvidia: module license 'NVIDIA' taints kernel.
        [    9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0
        [    9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0
        [    9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007)
        [    9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [    9.354879] nvidia 0000:01:00.0: setting latency timer to 64
        [    9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  280.13  Wed Jul 27 16:53:56 PDT 2011

    Also, running aptitude search nvidia gives me the following:

        p   nvidia-173                 - NVIDIA binary Xorg driver, kernel module a
        p   nvidia-173-dev             - NVIDIA binary Xorg driver development file
        p   nvidia-173-updates         - NVIDIA binary Xorg driver, kernel module a
        p   nvidia-173-updates-dev     - NVIDIA binary Xorg driver development file
        p   nvidia-96                  - NVIDIA binary Xorg driver, kernel module a
        p   nvidia-96-dev              - NVIDIA binary Xorg driver development file
        p   nvidia-96-updates          - NVIDIA binary Xorg driver, kernel module a
        p   nvidia-96-updates-dev      - NVIDIA binary Xorg driver development file
        p   nvidia-cg-toolkit          - Cg Toolkit - GPU Shader Authoring Language
        p   nvidia-common              - Find obsolete NVIDIA drivers
        i   nvidia-current             - NVIDIA binary Xorg driver, kernel module a
        p   nvidia-current-dev         - NVIDIA binary Xorg driver development file
        c   nvidia-current-updates     - NVIDIA binary Xorg driver, kernel module a
        p   nvidia-current-updates-dev - NVIDIA binary Xorg driver development file
        i   nvidia-settings            - Tool of configuring the NVIDIA graphics dr
        p   nvidia-settings-updates    - Tool of configuring the NVIDIA graphics dr
        v   nvidia-va-driver           -
        v   nvidia-va-driver           -

    I have tried manually installing (sudo aptitude install <package>) the packages nvidia-common and nvidia-settings-updates, but to no avail. For example, sudo aptitude install nvidia-settings-updates returns the following log:

        Reading package lists...
        Building dependency tree...
        Reading state information...
        Reading extended state information...
        Initializing package states...
        Writing extended state information...
        No packages will be installed, upgraded, or removed.
        0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded.
        Need to get 0 B of archives. After unpacking 0 B will be used.
        Writing extended state information...
        Reading package lists...
        Building dependency tree...
        Reading state information...
        Reading extended state information...
        Initializing package states...
        Writing extended state information...

    The same happens with the Linux headers (i.e. I cannot seem to be able to install linux-headers-3.0.0-12-generic).
    The output of aptitude search linux-headers is as follows:

        v   linux-headers                   -
        v   linux-headers                   -
        v   linux-headers-2.6               -
        i   linux-headers-2.6.38-11         - Header files related to Linux kernel versi
        i   linux-headers-2.6.38-11-generic - Linux kernel headers for version 2.6.38 on
        i A linux-headers-2.6.38-8          - Header files related to Linux kernel versi
        i A linux-headers-2.6.38-8-generic  - Linux kernel headers for version 2.6.38 on
        v   linux-headers-3                 -
        v   linux-headers-3.0               -
        v   linux-headers-3.0               -
        i A linux-headers-3.0.0-12          - Header files related to Linux kernel versi
        p   linux-headers-3.0.0-12-generic  - Linux kernel headers for version 3.0.0 on
        p   linux-headers-3.0.0-12-generic- - Linux kernel headers for version 3.0.0 on
        p   linux-headers-3.0.0-12-server   - Linux kernel headers for version 3.0.0 on
        p   linux-headers-3.0.0-12-virtual  - Linux kernel headers for version 3.0.0 on
        p   linux-headers-generic           - Generic Linux kernel headers
        p   linux-headers-generic-pae       - Generic Linux kernel headers
        v   linux-headers-lbm               -
        v   linux-headers-lbm               -
        v   linux-headers-lbm-2.6           -
        v   linux-headers-lbm-2.6           -
        p   linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo
        p   linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo
        p   linux-headers-lbm-3.0.0-12-serv - Header files related to linux-backports-mo
        p   linux-headers-server            - Linux kernel headers on Server Equipment.
        p   linux-headers-virtual           - Linux kernel headers for virtual machines

    @heartsmagic I did try purging and reinstalling the nvidia driver packages, but it did not seem to make a difference. My xorg.conf file contains the following:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Unknown"
            HorizSync       28.0 - 33.0
            VertRefresh     43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Read the article

  • Help me solve my problem with NPR Media Player

    - by Calcipher
    First off, let me apologize for this getting a bit technical. Several weeks ago, I found that while using NPR's media player (e.g. click on 'Listen to the Show'; this is what I have been using as a test) the stream would suddenly halt after a minute or three. I could not get the stream to restart without reloading the page. I assumed this was an issue between NPR's player and Linux (or just a bug in their stuff in general), so I began to dig. The following is what I have tried to date (please note, the tl;dr option is to skip to the latest thing, as I think I know what is causing the problem).

    Note: For consistency, all testing has been done on a clean install of Chromium with no plugins running. My machine runs Ubuntu 10.10 x64.

    1. The first thing I always try: I disabled all firewall stuff on the system (UFW, default deny all, allow ssh). No change; the firewall stayed up for all additional tests unless otherwise noted. In any case, UFW is stateful, so connections it started on non-specified ports will continue to work.
    2. I deleted my ~/.macromedia and ~/.adobe folders, restarted (just to be sure) and tried again. The program still froze.
    3. I decided the problem might be with my install of Flash, so I purged the version I had (and the home folders again) and installed the x64 version of Flash from a PPA. This had no effect.
    4. I decided that the problem might be with the version of Flash, so I purged the x64 version and installed the standard x32 version that comes with Ubuntu. No luck. (Back to the x64 version for consistency.)
    5. I set up a 64-bit mini 'clone' of my system in VirtualBox. I was able to run the media player with no problem.
    6. I rsynced (in archive mode) my home directory from my real machine to the virtual machine (with bridged networking, so it was fully visible on the network). I also used a few tricks to install ALL of the same software (and repositories) from the real machine on the virtual machine. I was still able to listen to the player.
    7. I decided that the problem was with my install (after all, it had gone through two major version upgrades). As I have /home on a separate partition, it was easy to reinstall and use the same trick from #6 to have my system up and running again within about an hour. I continue to have issues with the NPR media player.
    8. By this point the weekend had come. At work I use a wired connection, while at home I use a wireless connection. For some reason I forgot that I was having problems and used the NPR media player over the weekend. Lo and behold, it worked just fine at home on wireless (note: for various reasons, I could not test this on wired at home).
    9. Following from #6, I decided that the problem was either something in the network at work or still something in my account. As the latter was easier to test, I created a new account on my system and used that at work. The media player worked.
    At a loss, I decided to watch the traffic with tshark (the text-based brother of wireshark). X's to protect the innocent; I am XXX.24.200.XXX:

        sudo tshark -i eth0 -p -t a -R "ip.addr == XXX.24.200.XXX && ip.addr == XXX.166.98.XXX"

    As you would expect, there were tons and tons of packets, but each and every time the player froze, this is what I got:

        08:42:20.679200 XXX.166.98.XXX -> XXX.24.200.XXX TCP macromedia-fcs > 56371 [PSH, ACK] Seq=817686 Ack=6 Win=65535 Len=1448 TSV=495713325 TSER=396467
        08:42:20.718602 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396475 TSER=495713325
        08:42:21.050183 XXX.166.98.XXX -> XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs > 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713362 TSER=396475
        08:42:21.050221 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396508 TSER=495713362
        08:42:21.680548 XXX.166.98.XXX -> XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs > 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713425 TSER=396508
        08:42:21.680605 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396571 TSER=495713425
        08:42:22.910354 XXX.166.98.XXX -> XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs > 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713548 TSER=396571
        08:42:22.910400 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396694 TSER=495713548
        08:42:25.340458 XXX.166.98.XXX -> XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs > 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495713791 TSER=396694
        08:42:25.340517 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=396937 TSER=495713791
        08:42:30.170698 XXX.166.98.XXX -> XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs > 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495714274 TSER=396937
        08:42:30.170746 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=397420 TSER=495714274
        08:42:39.801738 XXX.166.98.XXX -> XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs > 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495715237 TSER=397420
        08:42:39.801784 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=398383 TSER=495715237
        08:42:59.032648 XXX.166.98.XXX -> XXX.24.200.XXX TCP [TCP ZeroWindowProbe] macromedia-fcs > 56371 [ACK] Seq=819134 Ack=6 Win=65535 Len=1 TSV=495717160 TSER=398383
        08:42:59.032696 XXX.24.200.XXX -> XXX.166.98.XXX TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] 56371 > macromedia-fcs [ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=400306 TSER=495717160
        08:43:00.267721 XXX.24.200.XXX -> XXX.166.98.XXX TCP 56371 > macromedia-fcs [FIN, ACK] Seq=6 Ack=819134 Win=0 Len=0 TSV=400430 TSER=495717160
        08:43:00.267827 XXX.24.200.XXX -> XXX.166.98.XXX TCP 56371 > macromedia-fcs [RST, ACK] Seq=7 Ack=819134 Win=65535 Len=0 TSV=400430 TSER=495717160

    So, as you can see, my machine is sending out a ZeroWindow packet (which I think means some buffer or another filled up), which causes the media player to halt (unfortunately, terminally; no controls on it really do anything anymore).
    Any ideas at all as to what would cause this? And why only on eth0, under my main account?

    Read the article

  • Installing a DHCP Service On Win2k8 (Windows Server 2008)

    - by Akshay Deep Lamba
    Introduction

    Dynamic Host Configuration Protocol (DHCP) is a core infrastructure service on any network; it provides IP addressing and DNS server information to PC clients and any other devices. DHCP is used so that you do not have to statically assign IP addresses to every device on your network and manage the issues that static IP addressing can create. More and more, DHCP is being expanded to fit into new network services like the Windows Health Service and Network Access Protection (NAP). However, before you can use it for more advanced services, you first need to install it and configure the basics. Let's learn how to do that.

    Installing Windows Server 2008 DHCP Server

    Installing Windows Server 2008 DHCP Server is easy. DHCP Server is now a "role" of Windows Server 2008, not a Windows component as it was in the past. To install it, you will need a Windows Server 2008 system already installed and configured with a static IP address. You will need to know your network's IP address range, the range of IP addresses you want to hand out to your PC clients, your DNS server IP addresses, and your default gateway. Additionally, you will want to have a plan for all subnets involved, what scopes you want to define, and what exclusions you want to create.

    To start the DHCP installation process, you can click Add Roles from the Initial Configuration Tasks window or from Server Manager > Roles > Add Roles.

    Figure 1: Adding a new Role in Windows Server 2008

    When the Add Roles Wizard comes up, click Next on the first screen. Then select the DHCP Server Role and click Next.

    Figure 2: Selecting the DHCP Server Role

    If you do not have a static IP address assigned on your server, you will get a warning that you should not install DHCP with a dynamic IP address. At this point, you will be prompted for IP network information, scope information, and DNS information. If you only want to install the DHCP server with no configured scopes or settings, you can just click Next through these questions and proceed with the installation. On the other hand, you can optionally configure your DHCP Server during this part of the installation.

    In my case, I chose to take this opportunity to configure some basic IP settings and my first DHCP scope. I was shown my network connection binding and asked to verify it, like this:

    Figure 3: Network connection binding

    What the wizard is asking is, "what interface do you want to provide DHCP services on?" I took the default and clicked Next. Next, I entered my Parent Domain, Primary DNS Server, and Alternate DNS Server (as you see below) and clicked Next.

    Figure 4: Entering domain and DNS information

    I opted NOT to use WINS on my network and clicked Next. Then, I was prompted to configure a DHCP scope for the new DHCP Server. I opted to configure an IP address range of 192.168.1.50-100 to cover the 25+ PC clients on my local network. To do this, I clicked Add to add a new scope. As you see below, I named the scope WBC-Local, configured a starting and ending IP address of 192.168.1.50-192.168.1.100, a subnet mask of 255.255.255.0, a default gateway of 192.168.1.1, the type of subnet (wired), and activated the scope.

    Figure 5: Adding a new DHCP Scope

    Back in the Add Scope screen, I clicked Next to add the new scope (to take effect once the DHCP Server is installed). I chose to Disable DHCPv6 stateless mode for this server and clicked Next. Then, I confirmed my DHCP installation selections (on the screen below) and clicked Install.
    Figure 6: Confirm Installation Selections

    After only a few seconds, the DHCP Server was installed and I saw the window below:

    Figure 7: Windows Server 2008 DHCP Server Installation succeeded

    I clicked Close to close the installer window, then moved on to managing my new DHCP Server.

    How to Manage your new Windows Server 2008 DHCP Server

    Like the installation, managing Windows Server 2008 DHCP Server is also easy. Back in my Windows Server 2008 Server Manager, under Roles, I clicked on the new DHCP Server entry.

    Figure 8: DHCP Server management in Server Manager

    While I cannot manage the DHCP Server scopes and clients from here, what I can do is manage the events, services, and resources related to the DHCP Server installation. Thus, this is a good place to check the status of the DHCP Server and what events have happened around it. However, to really configure the DHCP Server and see which clients have obtained IP addresses, I need to go to the DHCP Server MMC. To do this, I went to Start > Administrative Tools > DHCP Server, like this:

    Figure 9: Starting the DHCP Server MMC

    When expanded out, the MMC offers a lot of features. Here is what it looks like:

    Figure 10: The Windows Server 2008 DHCP Server MMC

    The DHCP Server MMC offers IPv4 & IPv6 DHCP Server info including all scopes, pools, leases, reservations, scope options, and server options. If I go into the address pool and the scope options, I can see that the configuration we made when we installed the DHCP Server did, indeed, work. The scope IP address range is there, and so are the DNS Server and default gateway.

    Figure 11: DHCP Server Address Pool
    Figure 12: DHCP Server Scope Options

    So how do we know that this really works if we do not test it? We don't, so let's test it and make sure.

    How do we test our Windows Server 2008 DHCP Server?

    To test this, I have a Windows Vista PC client on the same network segment as the Windows Server 2008 DHCP server. To be safe, I have no other devices on this network segment. I did an IPCONFIG /RELEASE, then an IPCONFIG /RENEW, and verified that I received an IP address from the new DHCP server, as you can see below:

    Figure 13: Vista client received IP address from new DHCP Server

    Also, I went to my Windows 2008 Server and verified that the new Vista client was listed as a client on the DHCP server. This did indeed check out, as you can see below:

    Figure 14: Win 2008 DHCP Server has the Vista client listed under Address Leases

    With that, I knew that I had a working configuration and we are done!
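    If you prefer scripting to clicking through the wizard, the same result can be approximated from the command line. This is a hedged sketch using 2008-era tooling: ServerManagerCmd was that release's command-line role installer (later deprecated in favor of the Add-WindowsFeature PowerShell cmdlet), and the exact netsh syntax is worth verifying against your own server's built-in help:

        rem Install the DHCP Server role (elevated Command Prompt)
        servermanagercmd -install DHCP

        rem Confirm the DHCP Server service is running
        sc query DHCPServer

        rem List the scopes defined on the local server
        netsh dhcp server show scope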

    Read the article

  • 6 Ways to Free Up Hard Drive Space Used by Windows System Files

    - by Chris Hoffman
    We’ve previously covered the standard ways to free up space on Windows. But if you have a small solid-state drive and really need more space, there are geekier ways to reclaim it. Not all of these tips are recommended — in fact, if you have more than enough hard drive space, following these tips may actually be a bad idea. There’s a tradeoff to changing all of these settings.

    Erase Windows Update Uninstall Files

    Windows allows you to uninstall patches you install from Windows Update. This is helpful if an update ever causes a problem — but how often do you need to uninstall an update, anyway? And will you really ever need to uninstall updates you installed several years ago? These uninstall files are probably just wasting space on your hard drive. A recent update released for Windows 7 allows you to erase Windows Update files from the Windows Disk Cleanup tool. Open Disk Cleanup, click Clean up system files, check the Windows Update Cleanup option, and click OK. If you don’t see this option, run Windows Update and install the available updates.

    Remove the Recovery Partition

    Windows computers generally come with recovery partitions that allow you to reset your computer back to its factory default state without juggling discs. The recovery partition allows you to reinstall Windows or use the Refresh and Reset your PC features. These partitions take up a lot of space, as they need to contain a complete system image. On Microsoft’s Surface Pro, the recovery partition takes up about 8-10 GB; on other computers it may be even larger, as it needs to contain all the bloatware the manufacturer included. Windows 8 makes it easy to copy the recovery partition to removable media and remove it from your hard drive. If you do this, you’ll need to insert the removable media whenever you want to refresh or reset your PC. On older Windows 7 computers, you could delete the recovery partition using a partition manager — but ensure you have recovery media ready if you ever need to reinstall Windows. If you prefer to install Windows from scratch instead of using your manufacturer’s recovery partition, you can just insert a standard Windows disc whenever you want to reinstall.

    Disable the Hibernation File

    Windows creates a hidden hibernation file at C:\hiberfil.sys. Whenever you hibernate the computer, Windows saves the contents of your RAM to the hibernation file and shuts down the computer. When it boots up again, it reads the contents of the file into memory and restores your computer to the state it was in. As this file needs to contain much of the contents of your RAM, it’s 75% of the size of your installed RAM. If you have 12 GB of memory, that means this file takes about 9 GB of space. On a laptop, you probably don’t want to disable hibernation; however, if you have a desktop with a small solid-state drive, you may want to disable it to recover the space. When you disable hibernation, Windows will delete the hibernation file. You can’t move this file off the system drive, as it needs to be on C:\ so Windows can read it at boot. Note that this file and the paging file are marked as “protected operating system files” and aren’t visible by default.
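    For reference, hibernation can be switched off (which deletes C:\hiberfil.sys) and back on again from an elevated Command Prompt:

        rem Disable hibernation and delete C:\hiberfil.sys
        powercfg /hibernate off

        rem Re-enable it later if you change your mind
        powercfg /hibernate on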
    Shrink the Paging File

    The Windows paging file, also known as the page file, is a file Windows uses if your computer’s available RAM ever fills up. Windows will then “page out” data to disk, ensuring there’s always available memory for applications — even if there isn’t enough physical RAM. The paging file is located at C:\pagefile.sys by default. You can shrink it, or disable it if you’re really crunched for space, but we don’t recommend disabling it, as that can cause problems if your computer ever needs some paging space. On our computer with 12 GB of RAM, the paging file takes up 12 GB of hard drive space by default. If you have a lot of RAM, you can certainly decrease the size — we’d probably be fine with 2 GB or even less. However, this depends on the programs you use and how much memory they require. The paging file can also be moved to another drive — for example, you could move it from a small SSD to a slower, larger hard drive. It will be slower if Windows ever needs to use the paging file, but it won’t consume important SSD space.

    Configure System Restore

    Windows seems to use about 10 GB of hard drive space for “System Protection” by default. This space is used for System Restore snapshots, allowing you to restore previous versions of system files if you ever run into a system problem. If you need to free up space, you can reduce the amount of space allocated to System Restore or even disable it entirely. Of course, if you disable it entirely, you’ll be unable to use System Restore if you ever need it. You’d have to reinstall Windows, perform a Refresh or Reset, or fix any problems manually.

    Tweak Your Windows Installer Disc

    Want to really start stripping down Windows, ripping out components that are installed by default? You can do this with a tool designed for modifying Windows installer discs, such as WinReducer for Windows 8 or RT Se7en Lite for Windows 7. These tools allow you to create a customized installation disc, slipstreaming in updates and configuring default options. You can also use them to remove components from the Windows disc, shrinking the size of the resulting Windows installation. This isn’t recommended, as you could cause problems with your Windows installation by removing important features. But it’s certainly an option if you want to make Windows as tiny as possible.

    Most Windows users can benefit from removing Windows Update uninstallation files, so it’s good to see that Microsoft finally gave Windows 7 users the ability to quickly and easily erase these files. However, if you have more than enough hard drive space, you should probably leave well enough alone and let Windows manage the rest of these settings on its own.

    Image Credit: Yutaka Tsutano on Flickr
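    As a footnote to the System Restore and Disk Cleanup sections above, both can also be driven from an elevated Command Prompt. A hedged sketch follows; the 5 GB cap is an arbitrary example value, not a recommendation:

        rem Cap the shadow storage System Restore may use on C:
        vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=5GB

        rem Launch Disk Cleanup; click "Clean up system files" to reach Windows Update Cleanup
        cleanmgr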

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to Windows Azure “Web Roles,” I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process for rolling back deployments to a known-good source control commit. By configuring our JetBrains TeamCity CI server to use the Windows Azure PowerShell cmdlets to create automated deployments, I’ll show you how to take control of your Azure publish process.

    Step 0: Configuring your Azure Project in Visual Studio

    Before we start automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information on setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or with some quick Googling, but the basics are as follows:

    1. Install the prerequisite Windows Azure SDK
    2. Create an Azure project by right-clicking on your web project and choosing “Add Windows Azure Cloud Service Project” (or by manually adding that project type)
    3. Configure your Role and Service Configuration/Definition as desired
    4. Right-click on your Azure project, choose “Publish,” create a publish profile, and push to your web role

    You don’t actually have to do step #4 and create a publish profile, but it’s a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to look at the Azure publish process itself.

    Understanding the Azure Publish Process

    The actual Windows Azure project is fairly simple at its core: it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:

    - ServiceConfiguration.Cloud.cscfg: the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information.
    - [ProjectName].Azure.cspkg: the package file that contains the guts of your deployment, including all deployable files.

    When you package your Azure project, these two files are created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it’s as simple as calling MSBuild with the “Publish” target:

        msbuild.exe /target:Publish

    Windows Azure PowerShell Cmdlets

    The last piece of the puzzle that makes CI automation possible is the Azure PowerShell Cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what let us create deployments without Visual Studio or other user intervention.

    Preparing TeamCity for Azure Deployments

    Now we are ready to set up our TeamCity server so it can build and deploy Windows Azure projects, which, as we now know, requires the Azure SDK and the Windows Azure PowerShell Cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click “Install.”

    Once the SDK is installed, I recommend running a test build to make sure your project is building correctly. You’ll want to set up your build step using MSBuild with the “Publish” target against your solution file. Mine looks like this:

    [TeamCity build step screenshot]

    Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory. If the build was red (failed), take a look at the build logs and keep an eye out for “unsupported project type” or other build errors, which will need to be addressed before the CI deployment can be completed.
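    Expanding on the MSBuild call above, here is a hedged sketch of a fuller command line. MySolution.sln is a hypothetical name, and the TargetProfile property (which selects the service configuration profile, e.g. Cloud vs. Local) is assumed from the Azure SDK build targets; verify it against your SDK version:

        msbuild.exe MySolution.sln /target:Publish /p:Configuration=Release /p:TargetProfile=Cloud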
    With a successful build, we are now ready to install and configure the Windows Azure PowerShell Cmdlets:

    1. Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the Cmdlets and configure PowerShell.
    2. After installing the Cmdlets, get your Azure subscription info using the Get-AzurePublishSettingsFile command.
    3. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script.

    Scripting the CI Deploy Process

    Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity “PowerShell” build runner. Before we look at any code, here’s a breakdown of our deployment algorithm:

    1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/)
    2. Import the Windows Azure PowerShell Cmdlets
    3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file)
    4. Look for a current deployment; if you find one, upgrade it, else create a new deployment

    Pretty simple and straightforward. Now let’s look at the code (also available as a gist here: https://gist.github.com/3694398):

        $subscription = "[Your Subscription Name]"
        $service = "[Your Azure Service Name]"
        $slot = "staging" #staging or production
        $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
        $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
        $timeStampFormat = "g"
        $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

        Write-Output "Running Azure Imports"
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
        Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
        Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

        function Publish() {
            $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

            if ($a[0] -ne $null) {
                Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
            }

            if ($deployment.Name -ne $null) {
                # Update the deployment in place (usually faster, cheaper, won't destroy the VIP)
                Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
                UpgradeDeployment
            } else {
                CreateNewDeployment
            }
        }

        function CreateNewDeployment() {
            write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

            $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        function UpgradeDeployment() {
            write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

            # Perform the in-place upgrade
            $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        Write-Output "Create Azure Deployment"
        Publish

    Creating the TeamCity Build Step

    The only thing left is to create a second build step, after your MSBuild “Publish” step, with the build runner type “PowerShell.” Set your script to “Source Code,” set the script execution mode to “Put script into PowerShell stdin with ‘-Command’ arguments,” and then copy/paste in the above script (replacing the placeholder sections with your values). This should look like the following:

    [TeamCity PowerShell build step screenshot]

    Wrap Up

    After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step that uses the Azure PowerShell Cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you’d like using build triggers – for example, you could deploy whenever a new commit lands on the master branch and passes all required tests.

    In the script I’ve hardcoded that every deployment goes to the staging environment on Azure, but you could deploy straight to production if you want to, or even set up a deployment configuration variable and set it as desired. After your TeamCity build configuration is complete, you’ll see something that looks like this:

    [TeamCity build configuration screenshot]

    Whenever you click the “Run” button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to “Run.” This will bring up a dialog like the one below, where you can select the last change to use for your deployment. Since Azure Web Role deployments don’t have any rollback functionality, this is a critical feature.

    [Run custom build dialog screenshot]

    Enjoy!
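    A small follow-up to the wrap-up note about a deployment configuration variable: TeamCity substitutes %-delimited parameter references into source-code build steps before they run (the script above already relies on this for %build.number%), so the hardcoded slot could be replaced like this. A hedged sketch; deploy.slot is a hypothetical parameter name you would define in the build configuration:

        # Let TeamCity inject the target slot (staging or production)
        # via a configuration parameter named deploy.slot.
        $slot = "%deploy.slot%"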

    Read the article

< Previous Page | 305 306 307 308 309 310 311 312 313 314 315 316  | Next Page >