Search Results

Search found 30695 results on 1228 pages for 'old software'.


  • Detected Dependencies - Where is each dependency coming from?

    - by hawbsl
    I have two detected dependencies with the same name (but different paths) showing up in my setup project's detected dependencies. One is the "old" one, at an old path. How can I determine which portion of my primary project is causing the "old" one to show up? I can probably do it by trial and error, but is there a quick/direct way?

    Read the article

  • SVN Version Rollback Question

    - by phimuemue
    Hello, I'm using SVN (TortoiseSVN) and often run into the following situation: I want to discard all changes made since a specific (old) revision and turn every file back to that specific (old) version. Then I want to keep working as if that specific (old) revision were the newest one, i.e. I want to be able to commit the old revision as a new revision. I found several solutions for this problem (for example stackoverflow.com/questions/402159/roll-back-or-revert-entire-svn-repository-to-an-older-revision or rustyrazorblade.com/2007/04/how-to-roll-back-commits-to-an-earlier-version-of-a-repository-in-svn/). However, I wonder if there is a simpler way to roll back to a specific revision. I thought version control was made for exactly this kind of thing (or am I misunderstanding something?). Is there a simple command/button/etc. that updates my working copy to an old revision and declares it to be the newest one? Since I suppose there is no built-in function for this, I would also like to know what reasons led the developers to decide not to integrate such a feature. Does anybody know?
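
    A minimal sketch of the usual workaround, assuming r1234 stands in for the old revision to return to (the revision number is a placeholder, not one taken from the question): a reverse merge undoes the later changes in the working copy, and the follow-up commit makes the old content the newest revision without rewriting history.

        # make sure the working copy is up to date first
        svn update
        # re-apply everything between HEAD and r1234 in reverse
        svn merge -r HEAD:1234 .
        # committing turns the old content into the newest revision
        svn commit -m "Roll back to r1234"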

    Read the article

  • Shell script syntax error: unexpected end of line

    - by user1557674
    I wrote a simple shell script that checks for the existence of an XML file; if it exists, it renames an old XML file as a backup and then moves the new XML file to where the old one was stored.

        #!/bin/sh
        oldFile="/Documents/sampleFolder/sampleFile.xml"
        newFile="/Documents/sampleFile.xml"
        backupFileName="/Documents/sampleFolder/sampleFile2.backup"
        oldFileLocation="/Documents/sampleFolder"

        if [ -f "$newFile" ] ; then
            echo "File found"
            # Rename old file
            mv $oldFile $backupFileName
            # Move new file to the old file's location
            mv $newFile $oldFileLocation
        else
            echo "File not found, do nothing"
        fi

    However, every time I try to run the script I get four "command not found" messages and a "syntax error: unexpected end of file". Any suggestions on why I get these command not found errors or the unexpected end of file? I double-checked that I closed all my double quotes (I have code highlighting :). EDIT: output from running the script:

        : command not found: : command not found: : command not found1: : command not found6: replaceXML.sh: line 26: syntax error: unexpected end of file
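
    A hedged guess at the cause, for anyone hitting the same symptom: stray ": command not found" messages from a script that otherwise looks fine very often point to Windows-style line endings (carriage returns) in the file rather than a quoting problem. One way to check for and strip them (replaceXML.sh is the file name taken from the error output above):

        # show invisible characters; CRLF line endings appear as "^M" at the end of lines
        cat -v replaceXML.sh | head
        # strip carriage returns in place (GNU sed); dos2unix replaceXML.sh also works
        sed -i 's/\r$//' replaceXML.sh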

    Read the article

  • Transfer Data between databases with postgres

    - by user227932
    I need to transfer some data from another database. The old database is called paw1.moviesDB and the new database is paw1. The schema of each table is the following:

        Awards (table in the new DB)
            Id [PK]  Serial
            Award

        Nominations (table in the old DB)
            Id [PK]  Serial
            nominations

    I want to copy the data from the old DB to the new DB.
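
    A minimal sketch of one common approach, assuming both databases live on the same PostgreSQL server and treating the database and table names below as placeholders to adjust: dump only the data of the old table and replay it into the new database. If the target table has a different name or columns, the dump needs editing first (or an INSERT ... SELECT through dblink/postgres_fdw is used instead).

        # dump only the rows of the old table (no CREATE TABLE statements)
        pg_dump --data-only --table=nominations paw1_old > nominations.sql
        # load the dump into the new database
        psql -d paw1 -f nominations.sql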

    Read the article

  • Is the syntax written in the description correct for MySQL triggers? Will it work?

    - by Parth
    Is the syntax below correct for MySQL triggers? Will it work?

        CREATE OR REPLACE TRIGGER myTableAuditTrigger
        AFTER INSERT OR DELETE OR UPDATE ON myTable
        FOR EACH ROW
        BEGIN
            IF INSERTING THEN
                INSERT INTO myTableAudit (id, Operation, NewName, NewPhone)
                VALUES (1, 'Insert ', :NEW.Name, :NEW.PhoneNo);
            ELSIF DELETING THEN
                INSERT INTO myTableAudit (id, Operation, OldName, OldPhone)
                VALUES (1, 'Delete ', :OLD.Name, :OLD.PhoneNo);
            ELSIF UPDATING THEN
                INSERT INTO myTableAudit (id, Operation, OldName, OldPhone, NewName, NewPhone)
                VALUES (1, 'Update ', :OLD.Name, :OLD.PhoneNo, :NEW.Name, :NEW.PhoneNo);
            END IF;
        END;
        /
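
    For comparison, a hedged sketch of how the insert-auditing part is usually written for MySQL (not a drop-in replacement for the code above, which uses Oracle-style syntax): MySQL does not support CREATE OR REPLACE for triggers, does not allow a combined INSERT OR DELETE OR UPDATE event, and uses NEW.col / OLD.col rather than :NEW / :OLD, so each event gets its own trigger. Run from a shell, with placeholder credentials and database name and the table names assumed to match the question:

        # single-statement trigger body, so no DELIMITER change is needed
        mysql -u root -p mydb -e "
          CREATE TRIGGER myTableAuditInsert
          AFTER INSERT ON myTable
          FOR EACH ROW
          INSERT INTO myTableAudit (id, Operation, NewName, NewPhone)
          VALUES (1, 'Insert', NEW.Name, NEW.PhoneNo);"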

    Read the article

  • Returning string in java using 3 parameters

    - by user2905118
    I need to write a method describePerson() that takes 3 parameters: a String giving a person's name, a boolean indicating their gender (true for female, false for male), and an integer giving their age. The method should return a String formatted as in the following examples: "Lark is female. She is 2 years old." or "Jay is male. He is 1 year old." I am not sure how to write it correctly. My code:

        int describePerson(String name, boolean gender, int age) {
            String words="";
            if(gender==true)
                return (name + "is "+gender+". "+"She is"+age+ "years old.);
            else
                return (name + "is "+gender+". "+"She is"+age+ "years old.);
        }

    The output also has to switch between "year" and "years", but I don't know how to make that correct.

    Read the article

  • wordpress not properly functioning anymore after moving it to another domain

    - by Andy
    Hi, I followed the instructions on http://codex.wordpress.org/Moving_WordPress under 'Moving WordPress to a New Server' and 'If You Want Your Old Blog To Still Work'. So I made a copy of everything and marked it as old, then changed the domain under the WP settings and made a new copy. Now I have put the first copy back, and I can still reach the login page, but it loads without the usual markup; it's completely unstyled. After I log in, WordPress uses the new domain instead of the old domain that the old copy of WordPress used!! What went wrong?? Thanks.
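
    One hedged pointer that may explain the symptom: WordPress stores its own URL in the database (the siteurl and home rows of the options table), so a copied installation keeps serving whichever domain is stored there, regardless of where the files live. A quick way to inspect and, if needed, reset those values for the old copy (the database name, credentials and the default wp_ table prefix below are assumptions; adjust them to your install):

        # show which URL the old copy thinks it lives at
        mysql -u dbuser -p old_wp_db -e \
          "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl','home');"
        # point the old copy back at the old domain (placeholder URL)
        mysql -u dbuser -p old_wp_db -e \
          "UPDATE wp_options SET option_value='http://old-domain.example' WHERE option_name IN ('siteurl','home');"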

    Read the article

  • Windows Azure Evolution – Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, there are many new features and updates in the Windows Azure platform. In the coming posts I will try to cover some of them, and in this first post I would like to give a quick walkthrough of the new preview developer portal.

    History of the Developer Portal

    If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and I still remember the impression it left: the layout was not that attractive and the functionality was limited. In November 2010, along with the SDK 1.3 release, the developer portal took a big jump. In order to provide more usability and features it was rebuilt on Silverlight, so it ran like a desktop application with many windows, lists, commands and context menus. From 2010 until now many features have been added to this portal, such as remote desktop, co-admin, virtual connect, the VM role, etc., and the portal itself became more and more complicated. But using Silverlight brought some problems. The first one is browser compatibility: as you know, the browsers on most mobile and tablet devices do not allow rich-content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even though all they may need is to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience to the users, but it also needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

    Preview Portal vs. Silverlight Portal

    Before I start to talk about the new preview portal I'd better highlight that this is a PREVIEW version. That means that even though you can use almost all the features that exist in the old portal, along with some cool new features I will mention in the coming posts, some things are still being developed and migrated, so sometimes you will need to switch back to the old portal. For example, in the preview portal there is no co-admin management function, no remote desktop function, and the SQL database management function takes you back to the old SQL Azure management portal. But, as Microsoft said, these missing features will be brought into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to this preview one, to get back to the old portal you need to click the preview button at the top of the page and then click the "Take me to the previous portal" link.

    Overview

    There are four parts in the preview portal. At the top is the header, which shows the account you are currently logged in with. If you click on the header it shows the top menu of Windows Azure, where you can navigate to the Windows Azure home page, the pricing information page, community, account, etc. The navigation bar is on the left-hand side, with the following categories:

    ALL ITEMS - All items in your Windows Azure account, including web sites, services, databases, etc.
    WEB SITES - The web sites in your Windows Azure account. Only the web sites you have are shown; the linked resources appear when you drill down into a web site.
    VIRTUAL MACHINES - The virtual machines that you have deployed to Azure.
    CLOUD SERVICES - All Windows Azure hosted services in your account.
    SQL DATABASES - All SQL databases (SQL Azure) in your account.
    STORAGE - All Windows Azure storage services in your account.
    NETWORKS - The virtual networks (Windows Azure Connect) you have created.

    The available items are listed in the main part of the page based on which category is currently selected. If there is no item, a quick-create link is shown instead. At the bottom of the page there is the command and information bar. Based on what is selected and what the user is doing, it shows the related information and commands. For example, in the image below, while I was creating a new web site the information bar told me that my web site was being provisioned, and there were two commands in the command bar. Once it was ready, the command bar showed the commands I could run against my new web site. "Web Sites" is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a web site from scratch or from an existing library; I will cover it in more detail in the next post. Also, in the command bar you can create a service by clicking the NEW button, which slides the creation panel up.

    Where's My Hosted Services?

    Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; you only need to specify the service URL and the affinity/region. The service then shows up in the list, and if you click the item all of its information is shown in the main part. Since there is no package deployed to this service yet, we cannot see much information about it, but we can upload a package by using the command at the bottom. As you can see, we can manage the configuration, instances and certificates, and we can scale the service up and down (change the VM size) as well as in and out (increase and decrease the instance count). Assume I have created an ASP.NET MVC 3 web role project in Visual Studio and built the package. I can then click the UPLOAD button on this page to deploy it. In the pop-up window I just specify my deployment name, package file and configuration file. I can also check the box below so that it will NOT warn me if this deployment has only one instance. Once we click OK, our package is uploaded and provisioned by the platform. After a while the information bar tells us the service is ready. We can see the basic information about this service and deployment on the dashboard page, for example the usage overview diagram, status, URL, public IP address, etc. On the configure page we can view and change the CSCFG content, such as the monitoring setting, connection strings and OS family. On the scale page we can increase and decrease the instance count, and on the instances page we can view the status of all instances. And if your service uses SQL databases or storage accounts, they are shown under the linked resources page; you can manage the certificates of this service under the certificates page as well.

    How About My Storage Services?

    Storage services can be managed by clicking the STORAGE link in the navigation bar, and we can create a new storage service from the NEW button.
    After specifying the storage name and region it will be provisioned by the platform. If you want to copy or manage the storage keys you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration: click the storage item in the list and navigate to the configure page. As you can see on that page, you can enable monitoring for blob, table and queue, and you can also enable logging of the requests that come to the storage account. But, as the tooltip on the page explains, enabling monitoring and logging increases the usage of the storage account, which means a bigger bill, so make sure you enable them appropriately.

    And My SQL Databases (SQL Azure)

    The last thing I want to introduce quickly is SQL databases, formerly named SQL Azure. You can create a new SQL Database server and a new database by clicking the ADD button under the SQL Databases navigation item. In the pop-up window just specify the database name, edition, size, collation and server. You can select an existing SQL Database server if you have one, or create a new one. If you choose to create a new server there is one more step: specifying the server login, password and region. Once it is ready you can manage your databases as well as the servers in the portal, and for a particular server you can update the firewall settings on its Configure page.

    So, What Else?

    There are some other areas of the preview portal I didn't cover, such as virtual machines, virtual networks and web sites. I will talk about virtual machines and web sites in future, separate posts. The virtual network is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development and some features are not available yet. For example, you cannot manage the co-admins of your subscriptions, you cannot open a remote desktop session to your hosted services, and you cannot navigate directly to Windows Azure Service Bus, Access Control and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to go back to the old portal, so for the coming several months we may need to use both sites.

    Summary

    In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that already existed in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features of the old portal have been, are being, or will be migrated into this new portal, but some of them live in a different category or page that we need to figure out.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update although I have over 100 updates to do I get a error message like this: installArchives() failed: Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 189751 files and directories currently installed.) Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ... apport stop/waiting Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already apport start/running Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%%3a2.6.4-1ubuntu1.1_amd64.deb) ... Unpacking replacement libglade2-0 ... Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ... De-configuring libv4l-0:i386 ... Unpacking replacement libv4l-0 ... Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ... Unpacking replacement libv4l-0:i386 ... Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ... De-configuring libv4lconvert0 ... Unpacking replacement libv4lconvert0:i386 ... 
Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ... Unpacking replacement libv4lconvert0 ... Errors were encountered while processing: /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb /var/cache/apt/archives/oneconf_0.2.8.1_all.deb /var/cache/apt/archives/software-center_5.2.2.2_all.deb Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ... dpkg: error processing gnome-orca (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. dpkg: error processing python-problem-report (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4lconvert0 (0.8.6-1ubuntu2) ... Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ... dpkg: error processing python-piston-mini-client (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4l-0 (0.8.6-1ubuntu2) ... Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ... dpkg: dependency problems prevent configuration of python-apport: python-apport depends on python-problem-report (>= 0.94); however: Package python-problem-report is not configured yet. dpkg: error processing python-apport (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of software-center: software-center depends on python-piston-mini-client (>= 0.1+bzr29); however: Package python-piston-mini-client is not configured yet. dpkg: error processing software-center (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of oneconf: oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however: Package python-piston-mini-client is not configured yet. dpkg: error processing oneconf (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of apport: apport depends on python-apport (>= 2.0.1-0ubuntu7); however: Package python-apport is not configured yet. dpkg: error processing apport (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin ... ldconfig deferred processing now taking place This has been going on for two weeks now and I cannot get any updates. Any help would be great.
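
    A hedged note on first-aid steps, since the question is open-ended: the commands below are the usual starting point for a half-installed dpkg/apt state like this, not a guaranteed fix for the "bad marshal data" tracebacks shown above (those often point at stale or corrupted compiled Python files, which is a separate investigation).

        # finish configuring anything left half-installed
        sudo dpkg --configure -a
        # let apt try to repair broken or unmet dependencies
        sudo apt-get install -f
        # then retry the upgrade
        sudo apt-get update && sudo apt-get upgrade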

    Read the article

  • Super-Charge GIMP’s Image Editing Capabilities with G’MIC [Cross-Platform]

    - by Asian Angel
    Recently we showed you how to enhance GIMP's image editing power, and today we help you super-charge GIMP even more. G'MIC (GREYC's Magic Image Converter) will add an impressive array of filters and effects to your GIMP installation for image editing goodness. Note: We applied the Contrast Swiss Mask filter to the image shown in the screenshot above to create a nice, warm sunset effect. To add the new PPA open the Ubuntu Software Center, go to the Edit Menu, and select Software Sources. Access the Other Software Tab in the Software Sources Window and add the first of the PPAs shown below (outlined in red). The second PPA will be automatically added to your system. Once you have the new PPAs set up, go back to the Ubuntu Software Center and do a search for "G'MIC". You will find two listings available and can select either one to add G'MIC to your system (both work equally well). Click on More Info for the listing that you choose and scroll down to where Add-ons are listed. Make sure to select the Add-on listed, click Apply Changes when it appears, and then click Install. We have both shown here for your convenience… When you are ready to use G'MIC to enhance an image, go to the Filters Menu and select G'MIC. A new window will appear where you can select from an impressive array of filters available for your use. Have fun!

    Command Line Installation
    For those of you who prefer using the command line for installation, use the following commands:

        sudo add-apt-repository ppa:ferramroberto/gimp
        sudo apt-get update
        sudo apt-get install gmic gimp-gmic

    Links
    Note: G'MIC is available for Linux, Windows, and Mac.
    G'MIC PPA at Launchpad [via Web Upd8]
    G'MIC Homepage at Sourceforge (downloads for all three platforms available here)

    Bonus
    The anime wallpaper shown in the screenshots above can be found here: anime sport [DesktopNexus]

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. all of these requirements cost money to rent at a data-center or to build out at a local facility. In some cases, this can be a catalyst for evaluating options to remove this infrastructure requirement entirely by moving to a distributed computing environment. Implementation: There are three main options for moving to a distributed computing environment. Infrastructure as a Service (IaaS) The first option is simply to virtualize the current hardware and move the VM’s to a provider. You can do this with Microsoft’s Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned-applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes. Software as a Service (SaaS) If there is already software available that does what you need, it may make sense to simply purchase not only the software license but the use of it on the vendor’s servers. Microsoft’s Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice. Platform as a Service (PaaS) If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manager physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side-benefit to using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms. No one solution fits every situation. It’s common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that’s a great advantage to this form of computing - choice. References: 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0  Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article

  • Create Custom Sized Thumbnail Images with Simple Image Resizer [Cross-Platform]

    - by Asian Angel
    Are you looking for an easy way to create custom sized thumbnail images for use in blog posts, photo albums, and more? Whether it is a single image or a CD full, Simple Image Resizer is the right app to get the job done for you. To add the new PPA for Simple Image Resizer open the Ubuntu Software Center, go to the Edit Menu, and select Software Sources. Access the Other Software Tab in the Software Sources Window and add the first of the PPAs shown below (outlined in red). The second PPA will be automatically added to your system. Once you have the new PPAs set up, go back to the Ubuntu Software Center and click on the PPA listing for Rafael Sachetto on the left (highlighted with red in the image). The listing for Simple Image Resizer will be right at the top…click Install to add the program to your system. After the installation is complete you can find Simple Image Resizer listed as Sir in the Graphics sub-menu. When you open Simple Image Resizer you will need to browse for the directory containing the images you want to work with, select a destination folder, choose a target format and prefix, enter the desired pixel size for converted images, and set the quality level. Convert your image(s) when ready… Note: You will need to determine the image size that best suits your needs beforehand. For our example we chose to convert a single image. A quick check shows our new "thumbnailed" image looking very nice. Simple Image Resizer can convert to and from the following image formats: .jpeg, .png, .bmp, .gif, .xpm, .pgm, .pbm, and .ppm.

    Command Line Installation
    Note: For older Ubuntu systems (9.04 and previous) see the link provided below.

        sudo add-apt-repository ppa:rsachetto/ppa
        sudo apt-get update && sudo apt-get install sir

    Links
    Note: Simple Image Resizer is available for Ubuntu, Slackware Linux, and Windows.
    Simple Image Resizer PPA at Launchpad
    Simple Image Resizer Homepage
    Command Line Installation for Older Ubuntu Systems

    Bonus
    The anime wallpaper shown in the screenshots above can be found here: The end where it begins [DesktopNexus]

    Read the article

  • Win7 is not a tablet OS, no matter what the boys in Redmond think.

    - by John Conwell
    Despite what execs at Microsoft think, Windows 7 is NOT a tablet OS. Just because you can install some software (or an OS) on a device doesn't mean that device is meant to run that software. This seems to be the point that the non-engineer execs at Microsoft have not understood. In order to work seamlessly with a device, the software needs to be designed with that device in mind. That has been the problem with the Windows PDA platform, the Windows Mobile platform, and now with trying to force-fit Windows 7 onto a tablet. It's just not designed for that style of interaction. Windows is designed to be interacted with via a mouse and keyboard. In fact, it is brilliant at that. But it is NOT designed to be interacted with by your fingers. And that is why the Windows tablet failed 10 years ago, and why it will fail today. It's not the hardware's fault, as Microsoft claimed 10 years ago; it's the user interaction design that failed. And this is why the iPhone and Android OSs work wonderfully on a tablet: the user interaction was designed for small screens, navigated by big fat fingers. I love these OSs and how I interact with them. And when I play with a touch-screen Windows 7 device, I feel like I'm playing with a brittle wannabe. And it's not the hardware's fault. The touchscreen is very responsive. I actually like the hardware. But the OS and the software are just not designed to be interacted with by my big fat fingers. In order to be successful, Microsoft needs to start from scratch and build a platform AND SOFTWARE specifically for use by fingers. That's why everyone was so excited when they thought Microsoft was going to release the Courier tablet: because it looked like a totally different platform, something that might actually work. But Windows 7... I hate to burst your bubble, but you are not a touch platform.

    Read the article

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor’s pre-built computer, but those were more expensive – by a lot. As time moved on and the computing industry matured, I actually find that I can buy a vendor’s system as cheaply – and in some cases far more cheaply – than I can build it myself.   This paradigm holds true for almost any product, even clothing and furniture. And it’s also held true for software… Mostly. If you need an office productivity package, you simply buy one or use open-sourced software for that. There’s really no need to write your own Word Processor – it’s kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no “cloud solution in a box”.  Sure, if you’re after “Software as a Service” – type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure) you can simply use one of those, or if you just want to deploy a Virtual Machine (Windows Azure Virtual Machines) you can get that, but if you’re looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue. It’s all about starting from the problem-end first. We’ve become so accustomed to looking for a box of software that will solve the problem, that we often start with the solution and try to fit it to the problem, rather than the other way around.  When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It’s interesting to monitor the conversation and watch how many times we deviate from the problem into the solution. So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

    Read the article

  • How to interview a natural scientist for a dev position?

    - by Silas
    I already did some interviews for my company, mostly computer scientists for dev positions but also some testers and project managers. Now I have to fill a vacancy in our research group within the R&D department (side note: “research” means that we try to solve problems in our professional domain/market niche using software in research projects together with universities, other companies, research centres and end user organisations. It’s not computer science research; we’re not going to solve the P=NP problem). Now we invited a guy holding an MSc in chemistry (with a lot of physics in his CV, too), who never had any computer science lesson. I already talked with him about half an hour at a local university’s career days and there’s no doubt the guy is smart. Also his marks are excellent and he graduated with distinction. For his BSc he needed to teach himself programming in Mathematica and told me believably that he liked programming a lot. Also he solved some physical chemistry problem that I probably don’t understand using his own software, implemented in Mathematica, for his MSc thesis. It includes a GUI and a notable size of 8,000 LoC. He seems to be very attracted by what we’re doing in our research group and to be honest it’s quite difficult for an SME like us to get good people. I also am very interested in hiring him since he could assist me in writing project proposals, reports, doing presentations and so on. He would probably fit to our team, too. The only question left is: How can I check if he will get the programming skills he needs to do software implementation in our projects since this will be a significant part of the job? Of course I will ask him what it is, that is fascinating him about programming. I’ll also ask how he proceeded to write his natural science software and how he structured it. I’ll ask about how he managed to obtain the skills and information about software development he needed. But is there something more I could ask? Something more concrete perhaps? Should I ask him to explain his Mathematica solution? To be clear: I’m not looking for knowledge in a particular language or technology stack. We’re a .NET shop in product development but I want to have a free choice for our research projects. So I’m interested in the meta-competence being able to learn whatever is actually needed. I hope this question is answerable and not open-ended since I really like to know if there is a default way to check for the ability to get further programming skills on the job. If something is not clear to you please give me some comments and let me improve my question.

    Read the article

  • career planning advice [closed]

    - by JDB
    Possible Duplicate: Are certifications worth it? I am at the point in my career where people start to veer off into either management-type roles or they focus on solidifying their technical skills to stay in the development game for the long-haul. Here's my story: I've got a degree in economics, an MA in Political Science and an MBA in Finance and Management. In addition, I've done coursework in advanced math and software development (although no degree in math or software). All-in-all, I've got 13 years of post-secondary education under my belt. I, however, currently work as a software developer using C# for desktop, Silverlight, Flex and javascript for web, and objective c for mobile. I've been in software development for the past 3.3 years, and it seems like it comes pretty easy to me. I work in a field called "geospatial information systems," which just involves customization and manipulation of geospatial data. Right now I am looking at one of several certifications. Given this background, which of these certifications has the highest ceiling? CFA PMP various development/technological certifications from Microsoft, etc. Other? My academic and work experience are all heavy on the analytical/development side, esp. so given the MBA and the B.S. in Econ. The political science degree was really a lot of stats. So it seems that I would be good pursuing more of the CFA/analytical role. This is a difficult path, however, because I have no work experience in the financial sector, and the developers in finance are all "quants," which again, I am OK with, but I haven't done much statistical modeling in the past 3.3 years. The PMP would require knowledge of best practices as it pertains explicitly to software development. I also don't enjoy a lot of business travel, a common theme for most PMP jobs I've seen. If certifications is the route, which would you recommend? Anything else? I've thought about going back to try to knock out a B.S. in C.S., but I wasn't sure how long that would take, or what would be involved. Thoughts or recommendations? Thanks in advance! I turn 32 this weekend, which is what has forced me to think about these issues.

    Read the article

  • Problems extracting information from RSS feed description field

    - by Graeme
    Hi, I've built an iPhone application using the parsing code from the TopSongs sample iPhone application. I've hit a problem though - the feed I'm trying to parse data from doesn't have a separate field for every piece of information (i.e. if it was for a feed about dogs, all the information such as dog type, dog age and dog price is contained in the feed. However, the TopSongs app relies on information having its own tags, so instead of using it uses and . So my question is this. How do I extract this information from the description field so that it can be parsed using the TopSongs parser? Can you somehow extract the dog age, price and type information using Yahoo Pipes and use that RSS feed for the feed? Or is there code that I can add to do it in application? Update: To view the code of my application parser (based on the TopSongs Core Data Apple provided application, see below. Here's a sample of one item from the the actual RSS feed I'm using (the description is longer, and has status,size, and a couple of other fields, but they're all formatted the same.: <item> <title>MOE, MARGRET STREET</title> <description> <b>District/Region:</b>&nbsp;REGION 09</br><b>Location:</b>&nbsp;MOE</br><b>Name:</b>&nbsp;MARGRET STREET</br></description> <pubDate>Thu,11 Mar 2010 05:43:03 GMT</pubDate> <guid>1266148</guid> </item> /* File: iTunesRSSImporter.m Abstract: Downloads, parses, and imports the iTunes top songs RSS feed into Core Data. Version: 1.1 Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software. In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated. The Apple Software is provided by Apple on an "AS IS" basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS. 
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Copyright (C) 2009 Apple Inc. All Rights Reserved. */ #import "iTunesRSSImporter.h" #import "Song.h" #import "Category.h" #import "CategoryCache.h" #import <libxml/tree.h> // Function prototypes for SAX callbacks. This sample implements a minimal subset of SAX callbacks. // Depending on your application's needs, you might want to implement more callbacks. static void startElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes); static void endElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI); static void charactersFoundSAX(void *context, const xmlChar *characters, int length); static void errorEncounteredSAX(void *context, const char *errorMessage, ...); // Forward reference. The structure is defined in full at the end of the file. static xmlSAXHandler simpleSAXHandlerStruct; // Class extension for private properties and methods. @interface iTunesRSSImporter () @property BOOL storingCharacters; @property (nonatomic, retain) NSMutableData *characterBuffer; @property BOOL done; @property BOOL parsingASong; @property NSUInteger countForCurrentBatch; @property (nonatomic, retain) Song *currentSong; @property (nonatomic, retain) NSURLConnection *rssConnection; @property (nonatomic, retain) NSDateFormatter *dateFormatter; // The autorelease pool property is assign because autorelease pools cannot be retained. 
@property (nonatomic, assign) NSAutoreleasePool *importPool; @end static double lookuptime = 0; @implementation iTunesRSSImporter @synthesize iTunesURL, delegate, persistentStoreCoordinator; @synthesize rssConnection, done, parsingASong, storingCharacters, currentSong, countForCurrentBatch, characterBuffer, dateFormatter, importPool; - (void)dealloc { [iTunesURL release]; [characterBuffer release]; [currentSong release]; [rssConnection release]; [dateFormatter release]; [persistentStoreCoordinator release]; [insertionContext release]; [songEntityDescription release]; [theCache release]; [super dealloc]; } - (void)main { self.importPool = [[NSAutoreleasePool alloc] init]; if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] addObserver:delegate selector:@selector(importerDidSave:) name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } done = NO; self.dateFormatter = [[[NSDateFormatter alloc] init] autorelease]; [dateFormatter setDateStyle:NSDateFormatterLongStyle]; [dateFormatter setTimeStyle:NSDateFormatterNoStyle]; // necessary because iTunes RSS feed is not localized, so if the device region has been set to other than US // the date formatter must be set to US locale in order to parse the dates [dateFormatter setLocale:[[[NSLocale alloc] initWithLocaleIdentifier:@"US"] autorelease]]; self.characterBuffer = [NSMutableData data]; NSURLRequest *theRequest = [NSURLRequest requestWithURL:iTunesURL]; // create the connection with the request and start loading the data rssConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; // This creates a context for "push" parsing in which chunks of data that are not "well balanced" can be passed // to the context for streaming parsing. The handler structure defined above will be used for all the parsing. // The second argument, self, will be passed as user data to each of the SAX handlers. The last three arguments // are left blank to avoid creating a tree in memory. context = xmlCreatePushParserCtxt(&simpleSAXHandlerStruct, self, NULL, 0, NULL); if (rssConnection != nil) { do { [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]]; } while (!done); } // Display the total time spent finding a specific object for a relationship NSLog(@"lookup time %f", lookuptime); // Release resources used only in this thread. 
xmlFreeParserCtxt(context); self.characterBuffer = nil; self.dateFormatter = nil; self.rssConnection = nil; self.currentSong = nil; [theCache release]; theCache = nil; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] removeObserver:delegate name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importerDidFinishParsingData:)]) { [self.delegate importerDidFinishParsingData:self]; } [importPool release]; self.importPool = nil; } - (NSManagedObjectContext *)insertionContext { if (insertionContext == nil) { insertionContext = [[NSManagedObjectContext alloc] init]; [insertionContext setPersistentStoreCoordinator:self.persistentStoreCoordinator]; } return insertionContext; } - (void)forwardError:(NSError *)error { if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importer:didFailWithError:)]) { [self.delegate importer:self didFailWithError:error]; } } - (NSEntityDescription *)songEntityDescription { if (songEntityDescription == nil) { songEntityDescription = [[NSEntityDescription entityForName:@"Song" inManagedObjectContext:self.insertionContext] retain]; } return songEntityDescription; } - (CategoryCache *)theCache { if (theCache == nil) { theCache = [[CategoryCache alloc] init]; theCache.managedObjectContext = self.insertionContext; } return theCache; } - (Song *)currentSong { if (currentSong == nil) { currentSong = [[Song alloc] initWithEntity:self.songEntityDescription insertIntoManagedObjectContext:self.insertionContext]; } return currentSong; } #pragma mark NSURLConnection Delegate methods // Forward errors to the delegate. - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { [self performSelectorOnMainThread:@selector(forwardError:) withObject:error waitUntilDone:NO]; // Set the condition which ends the run loop. done = YES; } // Called when a chunk of data has been downloaded. - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { // Process the downloaded chunk of data. xmlParseChunk(context, (const char *)[data bytes], [data length], 0); } - (void)connectionDidFinishLoading:(NSURLConnection *)connection { // Signal the context that parsing is complete by passing "1" as the last parameter. xmlParseChunk(context, NULL, 0, 1); context = NULL; // Set the condition which ends the run loop. done = YES; } #pragma mark Parsing support methods static const NSUInteger kImportBatchSize = 20; - (void)finishedCurrentSong { parsingASong = NO; self.currentSong = nil; countForCurrentBatch++; // Periodically purge the autorelease pool and save the context. The frequency of this action may need to be tuned according to the // size of the objects being parsed. The goal is to keep the autorelease pool from growing too large, but // taking this action too frequently would be wasteful and reduce performance. 
if (countForCurrentBatch == kImportBatchSize) { [importPool release]; self.importPool = [[NSAutoreleasePool alloc] init]; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); countForCurrentBatch = 0; } } /* Character data is appended to a buffer until the current element ends. */ - (void)appendCharacters:(const char *)charactersFound length:(NSInteger)length { [characterBuffer appendBytes:charactersFound length:length]; } - (NSString *)currentString { // Create a string with the character data using UTF-8 encoding. UTF-8 is the default XML data encoding. NSString *currentString = [[[NSString alloc] initWithData:characterBuffer encoding:NSUTF8StringEncoding] autorelease]; [characterBuffer setLength:0]; return currentString; } @end #pragma mark SAX Parsing Callbacks // The following constants are the XML element names and their string lengths for parsing comparison. // The lengths include the null terminator, to ensure exact matches. static const char *kName_Item = "item"; static const NSUInteger kLength_Item = 5; static const char *kName_Title = "title"; static const NSUInteger kLength_Title = 6; static const char *kName_Category = "category"; static const NSUInteger kLength_Category = 9; static const char *kName_Itms = "itms"; static const NSUInteger kLength_Itms = 5; static const char *kName_Artist = "description"; static const NSUInteger kLength_Artist = 7; static const char *kName_Album = "description"; static const NSUInteger kLength_Album = 6; static const char *kName_ReleaseDate = "releasedate"; static const NSUInteger kLength_ReleaseDate = 12; /* This callback is invoked when the importer finds the beginning of a node in the XML. For this application, out parsing needs are relatively modest - we need only match the node name. An "item" node is a record of data about a song. In that case we create a new Song object. The other nodes of interest are several of the child nodes of the Song currently being parsed. For those nodes we want to accumulate the character data in a buffer. Some of the child nodes use a namespace prefix. */ static void startElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // The second parameter to strncmp is the name of the element, which we known from the XML schema of the feed. // The third parameter to strncmp is the number of characters in the element name, plus 1 for the null terminator. if (prefix == NULL && !strncmp((const char *)localname, kName_Item, kLength_Item)) { importer.parsingASong = YES; } else if (importer.parsingASong && ( (prefix == NULL && (!strncmp((const char *)localname, kName_Title, kLength_Title) || !strncmp((const char *)localname, kName_Category, kLength_Category))) || ((prefix != NULL && !strncmp((const char *)prefix, kName_Itms, kLength_Itms)) && (!strncmp((const char *)localname, kName_Artist, kLength_Artist) || !strncmp((const char *)localname, kName_Album, kLength_Album) || !strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate))) )) { importer.storingCharacters = YES; } } /* This callback is invoked when the parse reaches the end of a node. At that point we finish processing that node, if it is of interest to us. 
For "item" nodes, that means we have completed parsing a Song object. We pass the song to a method in the superclass which will eventually deliver it to the delegate. For the other nodes we care about, this means we have all the character data. The next step is to create an NSString using the buffer contents and store that with the current Song object. */ static void endElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; if (importer.parsingASong == NO) return; if (prefix == NULL) { if (!strncmp((const char *)localname, kName_Item, kLength_Item)) { [importer finishedCurrentSong]; } else if (!strncmp((const char *)localname, kName_Title, kLength_Title)) { importer.currentSong.title = importer.currentString; } else if (!strncmp((const char *)localname, kName_Category, kLength_Category)) { double before = [NSDate timeIntervalSinceReferenceDate]; Category *category = [importer.theCache categoryWithName:importer.currentString]; double delta = [NSDate timeIntervalSinceReferenceDate] - before; lookuptime += delta; importer.currentSong.category = category; } } else if (!strncmp((const char *)prefix, kName_Itms, kLength_Itms)) { if (!strncmp((const char *)localname, kName_Artist, kLength_Artist)) { NSString *string = importer.currentSong.artist; NSArray *strings = [string componentsSeparatedByString: @", "]; //importer.currentSong.artist = importer.currentString; } else if (!strncmp((const char *)localname, kName_Album, kLength_Album)) { importer.currentSong.album = importer.currentString; } else if (!strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate)) { NSString *dateString = importer.currentString; importer.currentSong.releaseDate = [importer.dateFormatter dateFromString:dateString]; } } importer.storingCharacters = NO; } /* This callback is invoked when the parser encounters character data inside a node. The importer class determines how to use the character data. */ static void charactersFoundSAX(void *parsingContext, const xmlChar *characterArray, int numberOfCharacters) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // A state variable, "storingCharacters", is set when nodes of interest begin and end. // This determines whether character data is handled or ignored. if (importer.storingCharacters == NO) return; [importer appendCharacters:(const char *)characterArray length:numberOfCharacters]; } /* A production application should include robust error handling as part of its parsing implementation. The specifics of how errors are handled depends on the application. */ static void errorEncounteredSAX(void *parsingContext, const char *errorMessage, ...) { // Handle errors as appropriate for your application. NSCAssert(NO, @"Unhandled error encountered during SAX parse."); } // The handler struct has positions for a large number of callback functions. If NULL is supplied at a given position, // that callback functionality won't be used. Refer to libxml documentation at http://www.xmlsoft.org for more information // about the SAX callbacks. 
static xmlSAXHandler simpleSAXHandlerStruct = { NULL, /* internalSubset */ NULL, /* isStandalone */ NULL, /* hasInternalSubset */ NULL, /* hasExternalSubset */ NULL, /* resolveEntity */ NULL, /* getEntity */ NULL, /* entityDecl */ NULL, /* notationDecl */ NULL, /* attributeDecl */ NULL, /* elementDecl */ NULL, /* unparsedEntityDecl */ NULL, /* setDocumentLocator */ NULL, /* startDocument */ NULL, /* endDocument */ NULL, /* startElement*/ NULL, /* endElement */ NULL, /* reference */ charactersFoundSAX, /* characters */ NULL, /* ignorableWhitespace */ NULL, /* processingInstruction */ NULL, /* comment */ NULL, /* warning */ errorEncounteredSAX, /* error */ NULL, /* fatalError //: unused error() get all the errors */ NULL, /* getParameterEntity */ NULL, /* cdataBlock */ NULL, /* externalSubset */ XML_SAX2_MAGIC, // NULL, startElementSAX, /* startElementNs */ endElementSAX, /* endElementNs */ NULL, /* serror */ }; Thanks.
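For what it's worth, if the intent behind the commented-out lines in endElementSAX is to split a single description value into artist and album, a guarded version of that split might look like the sketch below; the ", " separator and the artist-then-album ordering are assumptions about the feed, not something it guarantees.

    // Hypothetical replacement for the commented-out split above; assumes the
    // description text arrives as "<artist>, <album>" (an assumption, not a given).
    NSString *description = importer.currentString;
    NSArray *parts = [description componentsSeparatedByString:@", "];
    if ([parts count] >= 2) {
        importer.currentSong.artist = [parts objectAtIndex:0];
        importer.currentSong.album  = [parts objectAtIndex:1];
    } else {
        // Fall back to storing the whole string so nothing is silently dropped.
        importer.currentSong.artist = description;
    }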


  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
Recently I have been tasked with the daunting process of converting a setup of HVM-enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project were that: the operating system must be functionally identical after the migration; minimal modification to the operating system (with the exception of kernel / drive mapping); I was also allowed to change the bootloader (i.e. grub) in whatever way I see fit. I have attempted this, and first I would like to show the steps I took. This is CentOS 5.5 specific at the moment.
Steps: yum install kernel-xen This installed: 2.6.18-194.32.1.el5xen
Edited /boot/grub/menu.lst and changed my specs to match: title CentOS (2.6.18-194.32.1.el5xen) root (hd0,0) kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0 initrd /initrd-2.6.18-194.32.1.el5xen.img
Then I changed my XenServer parameters to match: xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img" xe vm-param-set uuid=[vm uuid] HVM-boot-policy="" xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true
Some things to note: I am running a VolGroup LVM ;)
Anyway, after all these steps (which aren't much!) I boot the VM and it boots the initial kernel just fine, however I am presented with this error:
Boot Screen: device-mapper: dm-raid45: initialized v0.2594l Waiting for driver initialization. Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Activating logical volumes Volume group "VolGroup00" not found Creating root device. Mounting root filesystem. mount: could not find filesystem '/dev/root' Setting up other filesystems. Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory
Now my hint is that it cannot detect / because when you change from HVM mode to PV, something not-so-obvious happens: when you attach an SR (storage) to an HVM guest, it is presented to the guest OS as /dev/hda. In PV mode, however, it presents itself as /dev/xvda... Could this be the answer? And if so, how the heck do I implement it?
Update: I have gotten a bit further in my quest, as it now detects the LVMs... To do this, I had to rebuild the xen-kernel initrd image. Command: mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen
Now when I boot I get this:
Boot Screen: Loading dm-raid45.ko module device-mapper: dm-raid45: initialized v0.2594l Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Found volume group "VolGroup00" using metadata type lvm2 Activating logical volumes 3 logical volume(s) in volume group "VolGroup00" now active Creating root device. Mounting root filesystem. mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy Setting up other filesystems.
Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory Kernel panic - not syncing: Attempted to kill init!
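One way to sanity-check the hda/xvda theory raised above before going further, assuming the CentOS 5.5 / LVM layout described (paths are illustrative, not taken from the actual VM), would be roughly:

    # Look for literal /dev/hda references that would break under PV; the LVM
    # paths (/dev/VolGroup00/...) are unaffected by the device rename.
    grep -n hda /etc/fstab /boot/grub/menu.lst
    # Point any hits at the paravirt device name instead, e.g. for fstab:
    sed -i.bak 's#/dev/hda#/dev/xvda#g' /etc/fstab
    # Rebuild the initrd so the paravirt block/network drivers are present at boot:
    mkinitrd -f -v --preload=xenblk --with=xennet /boot/initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen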


  • Hard drive after PCB swap strange stuff

    - by ramyy
I've done a PCB swap on my HDD. The HDD model is: WD6400AAKS-00A7B2. The original PCB part number matches the new one (first three letter groups), though the cache size differs (16MB original, 8MB new). The hardware store that made the swap told me it was hard to do; they had to perform a firmware adaptation. I can see that the firmware version does not match the original (01.03B01 original, 05.04E05 new). Still, the serial number and model of the drive are correct, the hard drive appears normal in the BIOS, all the partitions show up and everything appears normal. I have encountered three things, though.
First, I left the drive unpowered for 2-3 weeks after the swap, to avoid corrupting the data or anything else the new PCB might cause, until I could buy a new drive and back up the data. I got a new drive, and when I powered the old drive manually (I have a laptop, so I use a normal desktop power supply and a USB/SATA connector), I heard the motor start and could hear ticking, as if the motor were somehow struggling to start; then the motor sound started again, then the ticking, and so on. I tried powering it again and it happened again. The third time it started normally and I could see everything normally. I took the chance and copied all the data over to the new drive. When I was done, I powered off the drive (after more than 25 hours of continuous operation), tried to power it up again and it did so normally, and so it has each time I powered it up later; but I have become very suspicious now. What could be the problem here? And why did this start now, when it powered up normally right after the swap?
The second thing is that I found size differences with some files; these include movies, songs, (.iso) files for games, and programs. The size is the same, but size on disk is a little larger on the new drive for these files. I've tried some of those files (with size differences) and they worked fine. The differences are not large, but they still make you suspicious of the integrity of the copied data; one cannot test whether all files are working for about 580 GB worth of data. I tried copying these files onto the same partition they live on on the old drive; they end up the same size on disk as when copied to the new drive (so allocation unit size is not the issue). I took an image of a partition (sector by sector, including empty ones) and when I explore it, these file sizes are equal to the original (old drive); if I copy them anywhere else, their size on disk increases, i.e. becomes equal to what I get copying them from the old drive to anywhere else. Why the size difference, and can one trust the integrity of the data?
The third thing is that when I connect my new external USB HDD, the partitions of the old HDD unmount and then mount again. Connected are: (USB mouse + old HDD), then the external HDD. Why does that happen?
Considering the following: I compared the SMART reports from right after the swap and from after the copying; no error readings or reallocated sectors were reported. Here they are: http://www.image-share.com/ijpg-1939-219.html I later ran both WD Data Lifeguard tests and they came out passed. I'm worried about this drive since I must be sure the data is fine and safe on the new one, and I will consider it a backup for the new one, since you can't trust anything anymore. I hope you can forgive the length of the post, but I couldn't ignore any of the details; this hard drive contains very important data to me and I have to deal with the situation with great care.
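As a side note on the integrity question, the usual way to settle it for a copy this large is a full checksum sweep rather than spot-checking files; a rough sketch, assuming both drives can be mounted read-only at /mnt/old and /mnt/new from a Linux live environment (mount points are placeholders):

    # Hash every file on the old drive, then verify the copies on the new one.
    cd /mnt/old && find . -type f -print0 | xargs -0 md5sum > /tmp/old.md5
    cd /mnt/new && md5sum -c /tmp/old.md5 | grep -v ': OK$'
    # Any line printed by the second command is a file whose copy differs.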


  • Why didn't 12.04 install?

    - by Josephisscrewed
    Ok, so I've installed Ubuntu many times on my computer.. Normally on the same partition, and WIndows would always delete Ubuntu(I don't know how.. it just happens) if i go away from keyboard during boot and it chooses Windows automatically because I took to long. So i tried to reinstall again, but after the fifth time it wouldn't let me, and told me to check "wubi-12.04-rev266.log". It took a while to find, but when i found it, I had no idea what any of it meant, as I'm no programmer.I first tried this the day Precise Pangolin came out. SO skip ahead 2.5 months, when I finally found this file, and i then got the idea of making a new partition to install Ubuntu on, but I used wubi, like I always did. It didn't look like it would f anything up, so I did it. it went through all the downloads, extracting, etc. Which took about 40 minutes total, then ended with an error message saying to check "wubi-12.04-rev266.log". i did. Here's what it says: 07-10 23:33 INFO root: === wubi 12.04 rev266 === 07-10 23:33 DEBUG root: Logfile is c:\users\joseph\appdata\local\temp\wubi-12.04-rev266.log 07-10 23:33 DEBUG root: sys.argv = ['main.pyo', '--exefile="C:\\Users\\Joseph\\Downloads\\wubi.exe"'] 07-10 23:33 DEBUG CommonBackend: data_dir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data 07-10 23:33 DEBUG WindowsBackend: 7z=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\bin\7z.exe 07-10 23:33 DEBUG WindowsBackend: startup_folder=C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup 07-10 23:33 DEBUG CommonBackend: Fetching basic info... 07-10 23:33 DEBUG CommonBackend: original_exe=C:\Users\Joseph\Downloads\wubi.exe 07-10 23:33 DEBUG CommonBackend: platform=win32 07-10 23:33 DEBUG CommonBackend: osname=nt 07-10 23:33 DEBUG CommonBackend: language=en_US 07-10 23:33 DEBUG CommonBackend: encoding=cp1252 07-10 23:33 DEBUG WindowsBackend: arch=amd64 07-10 23:33 DEBUG CommonBackend: Parsing isolist=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data\isolist.ini 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-amd64 07-10 23:33 DEBUG WindowsBackend: Fetching host info... 
07-10 23:33 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 07-10 23:33 DEBUG WindowsBackend: windows version=vista 07-10 23:33 DEBUG WindowsBackend: windows_version2=Windows 7 Home Premium 07-10 23:33 DEBUG WindowsBackend: windows_sp=None 07-10 23:33 DEBUG WindowsBackend: windows_build=7600 07-10 23:33 DEBUG WindowsBackend: gmt=-8 07-10 23:33 DEBUG WindowsBackend: country=US 07-10 23:33 DEBUG WindowsBackend: timezone=America/Los_Angeles 07-10 23:33 DEBUG WindowsBackend: windows_username=Joseph 07-10 23:33 DEBUG WindowsBackend: user_full_name=Joseph 07-10 23:33 DEBUG WindowsBackend: user_directory=C:\Users\Joseph 07-10 23:33 DEBUG WindowsBackend: windows_language_code=1033 07-10 23:33 DEBUG WindowsBackend: windows_language=English 07-10 23:33 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 07-10 23:33 DEBUG WindowsBackend: bootloader=vista 07-10 23:33 DEBUG WindowsBackend: system_drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(D: hd 4303.48046875 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free udf) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(U: hd 79907.8320313 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: uninstaller_path=None 07-10 23:33 DEBUG WindowsBackend: previous_target_dir=None 07-10 23:33 DEBUG WindowsBackend: previous_distro_name=None 07-10 23:33 DEBUG WindowsBackend: keyboard_id=67699721 07-10 23:33 DEBUG WindowsBackend: keyboard_layout=us 07-10 23:33 DEBUG WindowsBackend: keyboard_variant= 07-10 23:33 DEBUG CommonBackend: python locale=('en_US', 'cp1252') 07-10 23:33 DEBUG CommonBackend: locale=en_US.UTF-8 07-10 23:33 DEBUG WindowsBackend: total_memory_mb=3893.859375 07-10 23:33 DEBUG CommonBackend: Searching ISOs on USB devices 07-10 23:33 DEBUG CommonBackend: Searching for local CDs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain 
C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid 
Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 INFO root: Running the installer... 07-10 23:33 DEBUG WindowsFrontend: __init__... 07-10 23:33 DEBUG WindowsFrontend: on_init... 
07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG WinuiInstallationPage: target_drive=U:, installation_size=30000MB, distro_name=Ubuntu, language=en_US, locale=en_US.UTF-8, username=joseph 07-10 23:35 INFO root: Received settings 07-10 23:35 DEBUG CommonBackend: Searching for local CD 07-10 23:35 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:35 DEBUG CommonBackend: Searching for local ISO 07-10 23:35 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG TaskList: # Running tasklist... 07-10 23:35 DEBUG TaskList: ## Running select_target_dir... 07-10 23:35 INFO WindowsBackend: Installing into U:\ubuntu 07-10 23:35 DEBUG TaskList: ## Finished select_target_dir 07-10 23:35 DEBUG TaskList: ## Running create_dir_structure... 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot\grub 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot\grub 07-10 23:35 DEBUG TaskList: ## Finished create_dir_structure 07-10 23:35 DEBUG TaskList: ## Running create_uninstaller... 
07-10 23:35 DEBUG WindowsBackend: Copying uninstaller C:\Users\Joseph\Downloads\wubi.exe -> U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir U:\ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon U:\ubuntu\Ubuntu.ico 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 12.04-rev266 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 07-10 23:35 DEBUG TaskList: ## Finished create_uninstaller 07-10 23:35 DEBUG TaskList: ## Running create_preseed_diskimage... 07-10 23:35 DEBUG TaskList: ## Finished create_preseed_diskimage 07-10 23:35 DEBUG TaskList: ## Running get_diskimage... 07-10 23:35 DEBUG TaskList: New task download 07-10 23:35 DEBUG TaskList: ### Running download... 07-10 23:35 DEBUG downloader: downloading http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz > U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz 07-10 23:35 DEBUG downloader: Download start filename=U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz, url=http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz, basename=ubuntu-12.04-wubi-amd64.tar.xz, length=512730488, text=None 07-11 00:00 DEBUG TaskList: ### Finished download 07-11 00:00 DEBUG downloader: download finished (read 512730488 bytes) 07-11 00:00 DEBUG TaskList: ## Finished get_diskimage 07-11 00:00 DEBUG TaskList: ## Running extract_diskimage... 07-11 00:03 DEBUG TaskList: ## Finished extract_diskimage 07-11 00:03 DEBUG TaskList: ## Running choose_disk_sizes... 07-11 00:03 DEBUG WindowsBackend: total size=30000 root=29744 swap=256 home=0 usr=0 07-11 00:03 DEBUG TaskList: ## Finished choose_disk_sizes 07-11 00:03 DEBUG TaskList: ## Running expand_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished expand_diskimage 07-11 00:05 DEBUG TaskList: ## Running create_swap_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished create_swap_diskimage 07-11 00:05 DEBUG TaskList: ## Running modify_bootloader... 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ### Running modify_bcd... 07-11 00:05 DEBUG WindowsBackend: modify_bcd Drive(C: hd 78696.8203125 mb free ntfs) 07-11 00:05 ERROR TaskList: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. 
>>stdout= Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: # Cancelling tasklist 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 ERROR root: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ## Finished modify_bootloader 07-11 00:05 DEBUG TaskList: # Finished tasklist What have I done wrong? What can I do? If I turn off my laptop, will I actually be able to turn it back on? If you want me to post the log from the first day it happened, i'd be glad to in the comments, in the main body it made it over 30000 characters.
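For reference, the step that fails in the log can be inspected and retried by hand from an elevated Command Prompt; the GUID and drive letter below are simply the ones that appear in the log above:

    REM List the existing boot entries and check whether the entry from the log is present:
    bcdedit /enum all
    REM This is the exact command Wubi ran when it failed:
    bcdedit /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: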


  • AIX Checklist for stable obiee deployment

    - by user554629
Common AIX configuration issues ( last updated 27 Aug 2012 )
OBIEE is a complicated system with many moving parts and connection points. The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators. The information in this article is time sensitive, and updated as I discover new issues or details.
What makes OBIEE different? When Tech Support suggests AIX component upgrades to a stable, locked-down production AIX environment, it is common to get "push back". "Why is this necessary? We aren't seeing issues with other software." It's a fair question that I have often struggled to answer; here are the talking points:
OBIEE is memory intensive. It is the entire purpose of the software to trade memory for repetitive, more expensive database requests across a network.
OBIEE is implemented in C++ and is very dependent on the C++ runtime to behave correctly.
OBIEE is aggressively thread efficient; if atomic operations on a particular architecture do not work correctly, the software crashes.
OBIEE dynamically loads third-party database client libraries directly into the nqsserver process. If the library is not thread-safe, or corrupts process memory, the OBIEE crash happens in an unrelated part of the code. These are extremely difficult bugs to find.
OBIEE software uses 99% common source across multiple platforms: Windows, Linux, AIX, Solaris and HPUX. If a crash happens on only one platform, we begin to suspect other factors: load intensity, system differences, configuration choices, hardware failures. It is rare to have a single product require so many diverse technical skills. My role in support is to understand system configurations, performance issues, and crashes. An analyst trained in Business Analytics can't be expected to know AIX internals in the depth required to make configuration choices. Here are some guidelines.
1. AIX C++ Runtime must be at version 11.1.0.4
$ lslpp -L | grep xlC.aix
obiee software will crash if xlC.aix.rte is downlevel; this is not a "try it" suggestion. The Nov 2011 11.1.0.4 version is appropriate for all AIX versions ( 5, 6, 7 ). Download from here: https://www-304.ibm.com/support/docview.wss?uid=swg24031426 No reboot is necessary to install; it can even be installed while applications are using the current version. Restart the apps, and they will pick up the latest version.
2. AIX 5.3 Technology Level 12 is required when running on Power5,6,7 processors
AIX 6.1 was introduced with the newer Power chips, and we have seen no issues with 6.1 or 7.1 versions. Customers with an unstable deployment, dozens of unexplained crashes, became stable after the upgrade. If your AIX system is 5.3, the minimum TL level should be at or higher than this:
$ oslevel -s  5300-12-03-1107
IBM typically supports only the two latest versions of AIX ( 6.1 and 7.1, for example ). AIX 5.3 is still supported and popular running in an LPAR.
3. obiee userid limits
$ ulimit -Ha  ( hard limits )
$ ulimit -a   ( default limits )
core file size (blocks)     unlimited
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
max memory size (kbytes)    unlimited
open files                  10240
cpu time (seconds)          unlimited
virtual memory (kbytes)     unlimited
It is best to establish the values in /etc/security/limits. The root user is needed to observe and modify this file. If you modify a limit, you will need to re-login to change it again.
For example,$ ulimit -c 0$ ulimit -c 2097151cannot modify limit: Operation not permitted$ ulimit -c unlimited$ ulimit -c0There are only two meaningful values for ulimit -c ; zero or unlimited.Anything else is likely to produce a truncated core file that cannot be analyzed. Deploy 32-bit or 64-bit ?Early versions of OBIEE offered 32-bit or 64-bit choice to AIX customers.The 32-bit choice was needed if a database vendor did not supply a 64-bit client library.That's no longer an issue and beginning with OBIEE 11, 32-bit code is no longer shipped.A common error that leads to "out of memory" conditions to to accept the 32-bit memory configuration choices on 64-bit deployments.  The significant configuration choices are: Maximum process data (heap) size is in an AIX environment variableLDR_CNTRL=IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB@MAXDATA=0x... Two thread stack sizes are made in obiee NQSConfig.INI[ SERVER ]SERVER_THREAD_STACK_SIZE = 0;DB_GATEWAY_THREAD_STACK_SIZE = 0; Sort memory in NQSConfig.INI[ GENERAL ]SORT_MEMORY_SIZE = 4 MB ;SORT_BUFFER_INCREMENT_SIZE = 256 KB ; Choosing a value for MAXDATA:0x080000000  2GB Default maximum 32-bit heap size ( 8 with 7 zeros )0x100000000  4GB 64-bit breaking even with 32-bit ( 1 with 8 zeros )0x200000000  8GB 64-bit double 32-bit max0x400000000 16GB 64-bit safetyUsing 2GB heap size for a 64-bit process will almost certainly lead to an out-of-memory situation.Registers are twice as big ... consume twice as much memory in the heap.Upgrading to a 4GB heap for a 64-bit process is just "breaking even" with 32-bit.A 32-bit process is constrained by the 32-bit virtual addressing limits.  Heap memory is used for dynamic requirements of obiee software, thread stacks for each of the configured threads, and sometimes for shared libraries. 64-bit processes are not constrained in this way;  extra heap space can be configured for safety against a query that might create a sudden requirement for excessive storage.  If the storage is not available, this query might crash the whole server and disrupt existing users.There is no performance penalty on AIX for configuring more memory than required;  extra memory can be configured for safety.  If there are no other considerations, start with 8GB.Choosing a value for Thread Stack size:zero is the value documented to select an appropriate default for thread stack size.  My preference is to change this to an absolute value, even if you intend to use the documented default;  it provides better documentation and removes the "surprise" factor.There are two thread types that can be configured. GATEWAY is used by a thread pool to call a database client library to establish a DB connection.The default size is 256KB;  many customers raise this to 512KB ( no performance penalty for over-configuring ). This value must be set to 1 MB if Teradata connections are used. SERVER threads are used to run queries.  OBIEE uses recursive algorithms during the analysis of query structures which can consume significant thread stack storage.  It's difficult to provide guidance on a value that depends on data and complexity.  The general notion is to provide more space than you think you need,  "double down" and increase the value if you run out, otherwise inspect the query to understand why it is too complex for the thread stack.  There are protections built into the software to abort a single user query that is too complex, but the algorithms don't cover all situations.256 KB  The default 32-bit stack size.  
Many customers increased this to 512KB on 32-bit.  A 64-bit server is very likely to crash with this value;  the stack contains mostly register values, which are twice as big.512 KB  The documented 64-bit default.  Some early releases of obiee didn't set this correctly, resulting in 256KB stacks.1 MB  The recommended 64-bit setting.  If your system only ever uses 512KB of stack space, there is no performance penalty for using 1MB stack size.2 MB  Many large customers use this value for safety.  No performance penalty.nqscheduler does not use the NQSConfig.INI file to set thread stack size.If this process crashes because the thread stack is too small, use this to set 2MB:export OBI_BACKGROUND_STACK_SIZE=2048 Shared libraries are not (shared) When application libraries are loaded at run-time, AIX makes a decision on whether to load the libraries in a "public" memory segment.  If the filesystem library permissions do not have the "Read-Other" permission bit, AIX loads the library into private process memory with two significant side-effects:* The libraries reduce the heap storage available.      Might be significant in 32-bit processes;  irrelevant in 64-bit processes.* Library code is loaded into multiple real pages for execution;  one copy for each process.Multiple execution images is a significant issue for both 32- and 64-bit processes.The "real memory pages" saved by using public memory segments is a minor concern.  Today's machines typically have plenty of real memory.The real problem with private copies of libraries is that they consume processor cache blocks, which are limited.   The same library instructions executing in different real pages will cause memory delays as the i-cache ( instruction cache 128KB blocks) are refreshed from real memory.   Performance loss because instructions are delayed is something that is difficult to measure without access to low-level cache fault data.   The machine just appears to be running slowly for no observable reason.This is an easy problem to detect, and an easy problem to correct.Detection:  "genld -l" AIX command produces a list of the libraries used by each process and the AIX memory address where they are loaded.32-bit public segment is 13 ( "dxxxxxxx" ).   private segments are 2-a.64-bit public segment is 9 ( "9xxxxxxxxxxxxxxx") ; private segment is 8.genld -l | grep -v ' d| 9' | sort +2provides a list of privately loaded libraries. Repair: chmod o+r <libname>AIX shared libraries will have a suffix of ".so" or ".a".Another technique is to change all libraries in a selected directory to repair those that might not be currently loaded.   The usual directories that need repair are obiee code, httpd code and plugins, database client libraries and java.chmod o+r /shr/dir/*.a /shr/dir/*.so Configure your system for diagnosticsProduction systems shouldn't crash, and yet bad things happen to good software.If obiee software crashes and produces a core, you should configure your system for reliable transfer of the failing conditions to Oracle Tech Support.  Here's what we need to be able to diagnose a core file from your system.* fullcore enabled. chdev -lsys0 -a fullcore=true* core naming enabled. chcore -n on -d* ulimit must not truncate core. see item 3.* pstack.sh is used to capture core documentation.* obidoc is used to capture current AIX configuration.* snapcore  AIX utility captures core and libraries. Use the proper syntax. $ snapcore -r corename executable-fullpath   /tmp/snapcore will contain the .pax.Z output file.  
It is compressed.* If cores are directed to a common directory, ensure obiee userid can write to the directory.  ( chcore -p /cores -d ; chmod 777 /cores )The filesystem must have sufficient space to hold a crashing obiee application.Use:  df -k  Check the "Free" column ( not "% Used" )  8388608 is 8GB. Disable Oracle Client Library signal handlingThe Oracle DB Client Library is frequently distributed with the sqlplus development kit.By default, the library enables a signal handler, which will document a call stack if the application crashes.   The signal handler is not needed, and definitely disruptive to obiee diagnostics.   It needs to be disabled.   sqlnet.ora is typically located at:   $ORACLE_HOME/network/admin/sqlnet.oraAdd this line at the top of the file:   DIAG_SIGHANDLER_ENABLED=FALSE Disable async query in the RPD connection pool.This might be an obiee 10.1.3.4 issue only ( still checking  )."async query" must be disabled in the connection pools.It was designed to enable query cancellation to a database, and turned out to have too many edge conditions in normal communication that produced random corruption of data and crashes.  Please ensure it is turned off in the RPD. Check AIX error report (errpt).Errors external to obiee applications can trigger crashes.  $ /bin/errpt -aHardware errors ( firmware, adapters, disks ) should be reported to IBM support.All application core files are recorded by AIX;  the most recent ones are listed first. Reserved for something important to say.
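The individual checks above lend themselves to a small pre-flight script run as the obiee userid; a rough sketch built only from the commands already listed in this checklist (the library directory at the end is a placeholder for your own install paths):

    #!/bin/sh
    # Runtime and OS levels (items 1 and 2)
    lslpp -L | grep xlC.aix
    oslevel -s
    # Core file and open file limits (item 3)
    ulimit -c
    ulimit -n
    # Full core and core naming settings used for diagnostics
    lsattr -El sys0 -a fullcore
    lscore
    # Libraries loaded into private segments instead of shared ones
    genld -l | grep -v ' d| 9' | sort +2
    # Repair example: make a library world-readable so AIX can share it
    # chmod o+r /your/lib/dir/*.a /your/lib/dir/*.so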


  • Oracle VM and JRockit Virtual Edition: Oracle Introduces Java Virtualization Solution for Oracle(R)

    - by adam.hawley
    Since the beginning, we've been talking to customers about how our approach to virtualization is different and more powerful than any other company because Oracle has the "full-stack" of software (and even hardware these days!) to work with to create more comprehensive, more powerful solutions. Having the virtualization layer, two enterprise class operating systems in Solaris and Enterprise Linux, and the leading enterprise software in nearly every layer of the data center stack, allows us to not just do virtualization for virtualization's sake but rather to provide complete virtualization solutions focused on making enterprise software easier to deploy, easier to manage, and easier to support through integration up and down the stack. Today, we announced the availability of a significant demonstration of that capability by announcing a WebLogic Suite option that permits the Oracle WebLogic Server 11g to run on a Java JVM (JRockit Virtual Edition) that itself runs directly on the Oracle VM Server for x86 / x64 without needing any operating system. Why would you want that? Better performance and better consolidation density, not to mention great security due to a lower "attack surface area". Oracle also announced the Oracle Virtual Assembly Builder product. Oracle Virtual Assembly Builder provides a framework for automatically capturing the configuration of existing software components and packaging them as self-contained building blocks known as appliances. So you know that complex application you've tweaked on your physical servers (or on other virtual environments for that matter)?  Virtual Assembly Builder will allow the automated collection of all the configuration data for the various application components that make up that multi-tier application and then use the information to create and package each component as a virtual machine so that the application can be deployed in your Oracle VM virtualization environment quickly and easily and just as it was configured it in your original environment. A slick, drag-and-drop GUI also serves as a powerful, intuitive interface for viewing and editing your assembly as needed.No one else can do complete virtualization solutions the way Oracle can and I think these offerings show what's possible when you have the right resources for elegantly solving the larger problems in the data center rather than just having to make-do with tools that are only operating at one layer of the stack. For more information, read the press release including the links to more information on various Oracle websites.


  • "Mac" vs "OS X" vs "Mac OS X"

    - by Brian Campbell
I am writing server software that gives administrators options that apply only to Mac OS X clients. When naming those options, I need to decide how I am going to refer to such clients. I can see three choices: "Mac", "OS X", or "Mac OS X". In context, I feel that "Mac OS X" might be a little clumsy: "Default protocol for Mac OS X clients"; "Change Mac OS X protocol". In discussing this with my boss, he suggests "OS X", as that's the name for the OS itself, while I think that "Mac" is more recognizable. While "OS X" is technically correct and is what Apple recommends, I feel that it has a lot less name recognition than "Mac"; in particular, administrators who work in a Windows-only environment may not even recognize "OS X" and may wonder what the option is about, while I think everyone knows what "Mac" refers to. In looking at what several popular pieces of software choose, I see that Microsoft has "Office for Mac". Adobe calls it "Macintosh" (which sounds very outdated; I believe Apple stopped using the name "Macintosh" years ago). Firefox uses "Mac OS X". Google has "Google Software Downloads for Mac". I don't see many popular pieces of software that refer to it solely as OS X; it seems that either "Mac OS X" or "Mac" is used most often. Apple does refer to it as OS X, but I think the fact that it's coming from Apple provides the disambiguation that you need, so it's not confusing coming from them, while it may be in another context. Is there any good solution for this? Should I just use the somewhat clumsy "Mac OS X" everywhere (or at least, the first time I refer to it in any given screen)? Obviously, my boss has final say, but I'd like to be able to provide a coherent argument.


  • Partner Webcast – More out of Database Appliance with DB Options - 13 September 2012

    - by Thanos
The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity; saving time and money by simplifying deployment, maintenance, and support of database workloads. But that is not all, with the support for all Oracle Database Options, Oracle Database Appliance can be the ideal solution for many use cases.
Feature: Simplifies deployment, maintenance, and support of high-availability database workloads. Benefit: Saves significant time and effort throughout the database administration lifecycle.
Feature: An engineered system of software, server, storage, and networking. Benefit: High availability for a wide range of custom and packaged OLTP and data warehousing application databases.
Feature: Simple one-button installation, full-stack integrated patching and diagnostics. Benefit: Reduces planned and unplanned downtime by automatically monitoring and logging service requests with Oracle Support.
Feature: Built using the world's #1 database. Benefit: Protects databases from server and storage failures with Oracle Real Application Clusters and Automatic Storage Management.
Feature: Unique Pay-As-You-Grow software licensing. Benefit: Reduces cost with flexibility to adjust your software spend as your business grows without the need for any hardware upgrades.
Discover the Oracle Database Appliance Value Proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. This webcast is repeated once again for your benefit.
Agenda: Oracle Database & Engineered Systems Innovation. What's the Oracle Database Appliance? Oracle Database Appliance Value Proposition. Oracle Database Appliance with Database Options. Oracle Database Appliance Partners Business.
Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now!
Oracle Database Appliance is available for purchase at the Oracle Store under Engineered Systems. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Visit regularly our ISV Migration Center blog Or Follow us @oracleimc to learn more on Oracle Technologies as well as upcoming partner webcasts and events.


  • Are programmers bad testers?

    - by jhsowter
I know this sounds a lot like other questions which have already been asked, but it is actually slightly different. It seems to be generally considered that programmers are not good at performing the role of testing an application. For example: Joel on Software - Top Five (Wrong) Reasons You Don't Have Testers (emphasis mine) Don't even think of trying to tell college CS graduates that they can come work for you, but "everyone has to do a stint in QA for a while before moving on to code". I've seen a lot of this. Programmers do not make good testers, and you'll lose a good programmer, who is a lot harder to replace. And in this question, one of the most popular answers says (again, my emphasis): Developers can be testers, but they shouldn't be testers. Developers tend to unintentionally/unconsciously avoid using the application in a way that might break it. That's because they wrote it and mostly test it in the way it should be used. So the question is: are programmers bad at testing? What evidence or arguments are there to support this conclusion? Are programmers only bad at testing their own code? Is there any evidence to suggest that programmers are actually good at testing? What do I mean by "testing"? I do not mean unit testing or anything that is considered part of the methodology used by the software team to write software. I mean some kind of quality assurance method that is used after the code has been built and deployed to whatever that software team would call the "test environment."

