Search Results

Search found 7629 results on 306 pages for 'tum bin'.


  • How to Restore the Real Internet Explorer Desktop Icon in Windows 7

    - by The Geek
    Remember how previous versions of Windows had an Internet Explorer icon on the desktop, and you could right-click it to quickly access the Internet Options screen? It’s completely gone in Windows 7, but a geeky hack can bring it back. Microsoft removed this feature to comply with all those murky legal battles they’ve had, and their alternate suggestion is to create a standard shortcut to iexplore.exe on the Desktop, but it’s not the same thing. We’ve got a registry hack to bring it back. This guest article was written by Ramesh from the WinHelpOnline blog, where he’s got loads of really geeky registry hacks. Bring Back the Internet Explorer Namespace Icon in Windows 7 the Easy Way If you just want the IE icon back, all you need to do is download the RealInternetExplorerIcon.zip file, extract the contents, and then double-click on the w7_ie_icon_restore.reg file. That’s all you have to do. There’s also an undo registry file there if you want to get rid of it. Download the Real Internet Explorer Icon Registry Hack Manual Registry Hack If you prefer doing things the manual way, or just really want to understand how this hack works, you can follow the manual steps below to learn how it was done, but we’ll have to warn you that it’s a lot of steps. Launch Regedit.exe using the Start Menu search box, and then navigate to the following location: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30309D} Right-click on the key in the left-hand pane, choose Export, and save it to a .REG file (say, ie-guid.reg). Open up the REG file using Notepad. From the Edit menu, click Replace, and replace every occurrence of the following GUID string {871C5380-42A0-1069-A2EA-08002B30309D} with a custom GUID string, such as: {871C5380-42A0-1069-A2EA-08002B30301D} Save the REG file and close Notepad, and then double-click on the file to merge the contents into the registry. Either re-open the registry editor, or use the F5 key to reload everything with the new changes (this step is important). Now you can navigate down to the following registry key: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ Shellex \ ContextMenuHandlers \ ieframe Double-click on the (default) value in the right-hand pane and set its data to: {871C5380-42A0-1069-A2EA-08002B30309D} With this done, press F5 on the desktop and you’ll see the Internet Explorer icon appear. The icon is incomplete without the Properties command in its right-click menu, so keep reading. Final Registry Hack Adjustments Click on the following key, which should still be visible in your Registry editor window from the last step: HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D} Double-click LocalizedString in the right-hand pane and type the following data to rename the icon.
Internet Explorer Select the following key: HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell Add a subkey named Properties, then select the Properties key, double-click its (default) value and type the following: P&roperties Create a String value named Position, and give it the following data: bottom Under Properties, create a subkey named Command, and then set its (default) value as follows: control.exe inetcpl.cpl Navigate down to the following key, and then delete the value named LegacyDisable: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ shell \ OpenHomePage Now head to this key: HKEY_LOCAL_MACHINE \ SOFTWARE \ Microsoft \ Windows \ CurrentVersion \ Explorer \ Desktop \ NameSpace Create a subkey named {871C5380-42A0-1069-A2EA-08002B30301D} (which is the custom GUID that we used earlier in this article). Press F5 to refresh the Desktop, and the Internet Explorer icon finally appears as it should. That’s it! It only took 24 steps, but you made it through to the end—of course, you could just download the registry hack and get the icon back with a double-click.

    Read the article

  • Deploying BAM Data Control Application to WLS server

    - by [email protected]
    Typically we would test our ADF pages that use the BAM Data Control using the integrated WLS server (ADRS). If we have to deploy this same application to a standalone WLS, we have to make sure we have the BAM server connection created in WLS; unless we do that we may face runtime errors. In Development mode of WLS (Reference): For development-mode WebLogic Server, you can set the mode to OVERWRITE to test user names and passwords. You can set the mode by running setDomainEnv.cmd or setDomainEnv.sh with the following option added to the command. Add the following to the JAVA_PROPERTIES entry in the <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh file: -Djps.app.credential.overwrite.allowed=true In Production mode of WLS: Enable MDS. Create and/or register your MDS repository (for more details refer to this). Edit adf-config.xml from your application and add the following tag: <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config"> <mds-config version="11.1.1.000"> <persistence-config> <metadata-store-usages> <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos"> </metadata-store-usage> </metadata-store-usages> </persistence-config> </mds-config> </adf-mds-config> Deploy the application to the WLS server after picking the appropriate repository during deployment from the MDS Repository dialog that pops up. Enterprise Manager (use these steps if using a version prior to the 11gR1 PS1 release of JDeveloper): Go to EM (http://<host>:<port>/EM). In the left pane, under Deployments, select Application1 (your application). In the right pane, in the top dropdown select "System Mbean Browser->oracle.adf.share.connections->Server: AdminServer->Server: AdminServer->Application:<Appname>->ADFConnections". In the right pane click "Operations->CreateConnection". Enter Connection Type as "BAMConnection". Enter the connection name same as the one defined in JDev. Click "Invoke". Click "Return". Click on Operation->Save. Now in the ADFConnections in the navigator, select the connection just created and enter all the configuration details. Save and run the page. Enterprise Manager (use these steps or the steps above if using 11gR1 PS1 or newer): Go to EM (http://<host>:<port>/EM). In the left pane, under Deployments, select Application1 (your application). In the right pane, click on "Application Deployment" to invoke the dropdown, and in that select "ADF -> Configure ADF Connections". Select Connection Type as "BAM" from the drop down. Enter the connection name to be the same as the one defined in JDev. Click on "Create Connection"; this should add a new row below under "BAM Connections". Select the new connection and click on the "Edit" icon; this should bring up a dialog. Specify appropriate values for all connection parameters - Username, Password, BAM Server Host, BAM Server Port, Webtier Server Host, Webtier Server Port and BAM Webtier Protocol - and then click on OK to dismiss the dialog. Click on "Apply". Run the page.
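    A minimal sketch of the development-mode step described above, expressed as shell commands (the FMW_HOME value and the domain name "mydomain" are placeholders, not taken from the article):

      # Append the credential-overwrite flag to JAVA_PROPERTIES in the domain's
      # setDomainEnv.sh (development-mode WebLogic Server only)
      DOMAIN_BIN="$FMW_HOME/user_projects/domains/mydomain/bin"
      echo 'JAVA_PROPERTIES="${JAVA_PROPERTIES} -Djps.app.credential.overwrite.allowed=true"' >> "$DOMAIN_BIN/setDomainEnv.sh"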

    Read the article

  • ASP.NET MVC, Web API, Razor and Open Source

    - by ScottGu
    Microsoft has made the source code of ASP.NET MVC available under an open source license since the first V1 release. We’ve also integrated a number of great open source technologies into the product, and now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js and JSON.NET as part of it. I’m very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (aka Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week). Doing so will enable a more open development model where everyone in the community will be able to engage and provide feedback on code checkins, bug-fixes, new feature development, and build and test the products on a daily basis using the most up-to-date version of the source code and tests. We will also for the first time allow developers outside of Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similar open development approach with the Windows Azure SDK last December, and have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result. Very importantly - ASP.NET MVC, Web API and Razor will continue to be fully supported Microsoft products that ship both standalone as well as part of Visual Studio (the same as they do today). They will also continue to be staffed by the same Microsoft developers that build them today (in fact, we have more Microsoft developers working on the ASP.NET team now than ever before). Our goal with today’s announcement is to increase the feedback loop on the products even more, and allow us to deliver even better products.  We are really excited about the improvements this will bring. Learn More You can now browse, sync and build the source tree of ASP.NET MVC, Web API, and Razor on the http://aspnetwebstack.codeplex.com web-site.  The Git repository on the site is the live RC milestone development tree that the team has been working on the last several weeks, and the tree contains both the runtime sources + tests, and is buildable and testable by anyone.  Because the binaries produced are bin-deployable, this allows you to compile your own builds and try product updates out as soon as they are checked-in. You can also now contribute directly to the development of the products by reviewing and sending feedback on code checkins, submitting bugs and helping us verify fixes as they are checked in, suggesting and giving feedback on new features as they are implemented, as well as by submitting code fixes or code contributions of your own. Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC Team, and only those that meet an extremely high bar for both quality and design/roadmap appropriateness will be merged into the source. Summary All of us on the team are really excited about today’s announcement – it has been something we’ve been working toward for many years.  The tighter feedback loop is going to enable us to build even better products, and take ASP.NET to the next level in terms of innovation and customer focus. Thanks, Scott P.S. In addition to blogging, I use Twitter to-do quick posts and share links. My Twitter handle is: @scottgu
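    For readers who want to try the "sync and build" workflow described above, a rough sketch of the first step follows; the exact Git clone URL is an assumption here, since the post only gives the project page:

      # Clone the ASP.NET web stack sources (URL assumed; the project page at
      # http://aspnetwebstack.codeplex.com lists the authoritative Git URL)
      git clone https://git01.codeplex.com/aspnetwebstack
      cd aspnetwebstack
      # Building and running the tests is then done with MSBuild/Visual Studio
      # on a Windows machine, using the build scripts included in the repository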

    Read the article

  • Converting an Oracle VM VirtualBox VM into an Oracle VM Server image

    - by wim.coekaerts
    As we are working on tighter seemless moving of VM's between the 2 products, here are a few simple steps to convert an existing Oracle VM VirtualBox image over. Steps involved to make it easy/straightforward : (1) When creating a VM in Virtualbox, using Oracle Linux as an example, make sure that /etc/fstab only uses labels. Do not use hardcoded device names. instead of an entry /dev/sda1 /u01 ext3 defaults 1 1 use LABEL=foo /u01 ext3 defaults 1 1 for more info on labels : man e2label or use a logical volume /dev/VolGroup00/LVfoo /u01 ext3 defaults 1 1 Doing so will make it easier to have an OS boot up on a different hypervisor with potentially different device names. For instance, the VirtualBox VM might expose a scsi driver while in Oracle VM Server you might end up with an ide disk, this then changes /dev/sda to /dev/hda. (2) If you have a VM created that you want to convert, then shut down the VM in VirtualBox and convert the image files : go the the directory that contains your HardDisk image files (.VirtualBox/HardDisks/* as an example) for each of the virtual disks run the following command : VBoxManage clonehd virtualdiskfilename.vdi system.img --format raw where virtualdiskfilename.vdi is the original VBox VM file (this can also be a vmdk file) and system.img is the name of the virtualdisk for Oracle VM. this can be any filename as well, I typically use system.img to specify the boot disk (as is common for Oracle VM template creation) (3) create a vm.cfg To run a VM converted from VirtualBox, you have to create a vm.cfg for Oracle VM server that creates an HVM guest. The easiest is to use a simple hvm vm.cfg and change it for your vm. I have an example here : acpi = 1 apic = 1 builder = 'hvm' device_model = '/usr/lib/xen/bin/qemu-dm' disk = ['file:system.img,hda,w', 'file:oracle.img,hdb,w',',hdc:cdrom,r',] kernel = '/usr/lib/xen/boot/hvmloader' memory = '1024' name = 'vmname' on_crash = 'restart' on_reboot = 'restart' pae = 1 serial = 'pty' timer_mode = '0' usbdevice = 'tablet' vcpus = 1 vif = ['bridge=xenbr0,type=ioemu'] vif_other_config = [] vnc = 1 vncconsole = 1 vnclisten = '0.0.0.0' vncpasswd = '' vncunused = 1 If you take the above vm.cfg, all you need to do - modify disk = (add your virtual disks in there) - modify memory = (amount of memory your VM needs) - modify name = (enter a name for your VM here) - modify vif = (might want to replace bridge=xenbr0 to the bridge you want to use) if you want more than 1 vcpu or other changes of course you have to make those as well. (4) copy this set of files onto your Oracle VM server or onto a webserver in a subdirectory and import the template through Oracle VM Manager. You can also just start the vm using xm create vm.cfg if you like. And that's it. As I said, we are working on automation around all this but it is relatively trivial to convert VM's over as long as you take the basic issues into account. Primarily the set up of the filesystems and the use of labels in /etc/fstab. There are other potential things to look at, such as network config. If you want to make that part clean then prior to shutting down the VM change /etc/modprobe.conf and/or add the mac address of the VM into the vm.cfg in the vifs line. The good thing, at least with Linux, is that even tho the virtual hardware changes, Linux will deal with it just fine (e1000 vs 8139 realtek, ide vs scsi etc). hope this helps.
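    If you have several virtual disks, step (2) can be wrapped in a small shell loop; this is just a convenience sketch built from the VBoxManage command quoted above, and the HardDisks path is the example location mentioned in the article:

      # Convert every VirtualBox .vdi in the default HardDisks folder to a raw
      # image usable by Oracle VM Server
      cd ~/.VirtualBox/HardDisks
      for vdi in *.vdi; do
          VBoxManage clonehd "$vdi" "${vdi%.vdi}.img" --format raw
      done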

    Read the article

  • How can I back up my ubuntu system?

    - by Eloff
    I'm sure there's a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space - unless compressed somehow.) Preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running ubuntu as my main development machine and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it. So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to backup your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, ides, compilers, graphics drivers, etc.) So at the very least, if I do not back up the operating system itself I need to grab all config files, all program binaries, all created but required files, etc. I'd rather backup too much than too little - an ubuntu install is tiny anyway. Also this drastically reduces the restore time, which would cost me more in my time than the extra storage space. I tried using Deja Dup to backup the root partition, excluding some things like /mnt /media /dev /proc etc. Although many websites assured me you can backup a running linux system this way - that seems to be false as it complained that it could not backup the following files: /boot/System.map-3.0.0-17-generic /boot/System.map-3.2.0-22-generic /boot/vmcoreinfo-3.0.0-17-generic /boot/vmlinuz-3.0.0-17-generic /boot/vmlinuz-3.2.0-22-generic /etc/.pwd.lock /etc/NetworkManager/system-connections/LAN Connection /etc/apparmor.d/cache/lightdm-guest-session /etc/apparmor.d/cache/sbin.dhclient /etc/apparmor.d/cache/usr.bin.evince /etc/apparmor.d/cache/usr.lib.telepathy /etc/apparmor.d/cache/usr.sbin.cupsd /etc/apparmor.d/cache/usr.sbin.tcpdump /etc/apt/trustdb.gpg /etc/at.deny /etc/ati/inst_path_default /etc/ati/inst_path_override /etc/chatscripts /etc/cups/ssl /etc/cups/subscriptions.conf /etc/cups/subscriptions.conf.O /etc/default/cacerts /etc/fuse.conf /etc/group- /etc/gshadow /etc/gshadow- /etc/mtab.fuselock /etc/passwd- /etc/ppp/chap-secrets /etc/ppp/pap-secrets /etc/ppp/peers /etc/security/opasswd /etc/shadow /etc/shadow- /etc/ssl/private /etc/sudoers /etc/sudoers.d/README /etc/ufw/after.rules /etc/ufw/after6.rules /etc/ufw/before.rules /etc/ufw/before6.rules /lib/ufw/user.rules /lib/ufw/user6.rules /lost+found /root /run/crond.reboot /run/cups/certs /run/lightdm /run/lock/whoopsie/lock /run/udisks /var/backups/group.bak /var/backups/gshadow.bak /var/backups/passwd.bak /var/backups/shadow.bak /var/cache/apt/archives/lock /var/cache/cups/job.cache /var/cache/cups/job.cache.O /var/cache/cups/ppds.dat /var/cache/debconf/passwords.dat /var/cache/ldconfig /var/cache/lightdm/dmrc /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash /var/lib/apt/lists/lock /var/lib/dpkg/lock /var/lib/dpkg/triggers/Lock /var/lib/lightdm /var/lib/mlocate/mlocate.db /var/lib/polkit-1 /var/lib/sudo /var/lib/urandom/random-seed /var/lib/ureadahead/pack /var/lib/ureadahead/run.pack /var/log/btmp /var/log/installer/casper.log /var/log/installer/debug /var/log/installer/partman 
/var/log/installer/syslog /var/log/installer/version /var/log/lightdm/lightdm.log /var/log/lightdm/x-0-greeter.log /var/log/lightdm/x-0.log /var/log/speech-dispatcher /var/log/upstart/alsa-restore.log /var/log/upstart/alsa-restore.log.1.gz /var/log/upstart/console-setup.log /var/log/upstart/console-setup.log.1.gz /var/log/upstart/container-detect.log /var/log/upstart/container-detect.log.1.gz /var/log/upstart/hybrid-gfx.log /var/log/upstart/hybrid-gfx.log.1.gz /var/log/upstart/modemmanager.log /var/log/upstart/modemmanager.log.1.gz /var/log/upstart/module-init-tools.log /var/log/upstart/module-init-tools.log.1.gz /var/log/upstart/procps-static-network-up.log /var/log/upstart/procps-static-network-up.log.1.gz /var/log/upstart/procps-virtual-filesystems.log /var/log/upstart/procps-virtual-filesystems.log.1.gz /var/log/upstart/rsyslog.log /var/log/upstart/rsyslog.log.1.gz /var/log/upstart/ureadahead.log /var/log/upstart/ureadahead.log.1.gz /var/spool/anacron/cron.daily /var/spool/anacron/cron.monthly /var/spool/anacron/cron.weekly /var/spool/cron/atjobs /var/spool/cron/atspool /var/spool/cron/crontabs /var/spool/cups
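    Not part of the original question: one widely used pattern that matches the stated requirements (nightly, incremental, easy restore) is a hard-linked rsync snapshot that excludes the same pseudo-filesystems listed above; the destination paths are illustrative:

      # Incremental snapshot of / using hard links against the previous run
      rsync -aAXv --delete \
        --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
        --link-dest=/backup/latest / /backup/$(date +%F)/
      # afterwards repoint /backup/latest at the new snapshot for the next night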

    Read the article

  • GCC 4.2.1 Compiling on Cygwin(Win7 64bit) for iPhone [closed]

    - by Kenneth Noland
    Hey This is going to take a long while to explain, but the short version is that I am currently attempting to compile the LLVM GCC frontend for ARMv7 to compile apps for the Cortex-A8(iPhone 3GS). I'm running into an error from LD when compiling libgcc(part of the gcc compilation process) that has been driving me mad! The command is this: /usr/llvm-gcc-4.2-2.8.source/build/./gcc/xgcc \ -B/usr/llvm-gcc-4.2_2.8.source/build/./gcc \ -B/usr/local/arm-apple-darwin/bin \ -B/usr/local/arm-apple-darwin/lib \ -isystem /usr/local/arm-apple-darwin/include \ -isystem /usr/local/arm-apple-darwin/sys-include \ -O2 -g -W -Wall -Wwrite-strings -wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-inline -dynamiclib -nodefaultlibs -W1,-dead_strip \ -marm \ -install_name /usr/local/arm-apple-darwin/lib/libgcc_s.1.dylib \ -single_module -o ./libgcc_s.1.dylib.tmp \ -W1,-exported_symbols_list,libgcc/./libgcc.map -compatibility_version 1 -current_version 1.0 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -Dinhibit_libc \ ... long list of .o files ... \ -lc And the result is typically a lot of undefined references to malloc, free, exit, etc. which typically indicate that libc is not getting compiled in. After going through the list of errors that ld is throwing, I see at the top that it is attempting to pull in /usr/lib/libc.a and complains that it is not the correct platform. Okay, that makes sense, so I spent 5 minutes on google and found an answer. Turns out that if I copy the libSystem.dylib and rename it to libc.dylib, that should solve the problem, but it doesn't. I couldn't find a copy of that file on my phone, so I pulled it directly from the SDK. I then get this strange error: ld64: in /usr/local/arm-apple-darwin/lib/libc.dylib, can't re-map file, errno=22 At this point, I did everything I could think of. I grabbed a fresh copy of my /usr/lib folder from my iphone and confirmed that libSystem.dylib(and libSystem.B.dylib) wasn't there. I unpacked the raw .ipsw package for iOS 4.2.1 and once again, I could not find a copy of libSystem.dylib there either. I unpacked the iPhoneSDK and MacOS SDK and I managed to find a copy of it in both, but that error just kept persisting. I copied libSystem.dylib, libSystem.B.dylib, tried all sorts of combinations of renaming to libc.dylib and still nothing but errors. I can't find a way to get it to recognize the file and link against it. I also tried linking against the libc.a located in the iphone SDK and that didn't work either. I checked what ./xgcc was firing off, and it was my freshly built copy of arm-apple-darwin-ld64 which should be fine. A little bit of background here. I built LLVM+Clang 2.8 with no errors, and I rebuilt the ODCCTools with some light modifications to get it to compile on Cygwin(I'll post my changes in a patch along with a tutorial if I can get this to work). I also grabbed the iphone-dev "includes" and "csu" project and those completed successfully, although there really is no point to them since I can't get it to link against crt0.a. I'm running out of ideas here. Can anyone help me out on this?

    Read the article

  • FFmpeg Video Hosting for Linux and Windows Server

    - by Aditi
    FFmpeg hosting is a special type of web hosting where the host servers have video transcoding software loaded on them, which allows the automatic conversion of videos from one format to another. FFmpeg is a cross-platform solution for recording, converting, transcoding and streaming audio and video. It includes libavcodec – the leading audio/video codec library. FFmpeg hosting gets its name from a set of server side programs (modules) called FFmpeg. There are a number of applications or web scripts available which allow webmasters to create their own video sharing websites. Video hosting typically requires: PHP 4.3 and above (including support of CLI), Mencoder and also Mplayer, FFMpeg-PHP, MySQL database server, LAME MP3 Encoder, Libogg + Libvorbis, GD Library 2 or higher, CGI-BIN. There are a number of web service providers who provide FFmpeg hosting service. Following is a list of some of the best FFmpeg hosting providers for both Linux and Windows Server. Dream Host Dreamhost provides web based email access, mail filtering, spam filtering, unlimited email ids, vacation autoresponder, python support, full CGI access and many more services. Price: $7.95 Micfo It offers unlimited disk space and bandwidth. Other services include a free domain for life and free website transfer, with many more services. All in all one of the best options to consider. Price: $5 Host Upon HostUpon offers FFMpeg hosting on all their hosting packages, with readily installed modules to start a video website or social network with video uploading. Scripts such as Boonex Dolphin / PHPMotion / Social Engine / ABKsoft Scripts / Joomla Video Plugin / Clipshare / ClipBucket / Social Media / Rayzz / Vidi Script work with their FFmpeg. Their FFMPEG hosting plan offers 24/7/365 support with a typical response time of 15 minutes or less. Price: $5.95 DownTown Host DownTown Host provides full and exceptional support by live chat and telephone. It has high-power, modern servers and the finest web server technology. It offers free search engine submission and continuous data backup protection with free email forwarding and site move. There are many more services too. Site5 This FFmpeg service provider offers an uptime guarantee, real time stats on each server and many more attractive services. Price: $4.95 Cirtex Hosting Cirtex Hosting allows you to host 7 websites & domains and provides unlimited storage space and monthly bandwidth. It also offers FTP and email accounts and many more services. Price: $2.49 FLV Hosting FLV Hosting supplies RTMP server streaming for large size video streaming and server side recording. It is flexible and costs less. They customize to the client's requirements. Price: $9.95 AptHost This hosting service provides 24x7x365 premium support and fully FFmpeg enabled services. Price: $4.95 HostMDS Great support, priced low. It provides SSH access, CGI, Ruby on Rails, Perl, PHP, MySQL, FrontPage extensions, 24/7 support, free domain transfer and spam filtering. It offers instant account setup, low latency fast bandwidth & much more! They were formerly known as Vistapages. Price: $4.95
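    For context, this is the kind of conversion an FFmpeg-enabled host automates behind the scenes; the file names and bitrate below are purely illustrative:

      # Transcode an uploaded video to web-friendly FLV
      ffmpeg -i upload.avi -b:v 500k -ar 22050 output.flv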

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How If you have heard of Oracle Configuration Manager (OCM), but haven’t installed it, I’m guessing this is for one of two reasons. Either you don’t know how it helps you or you don’t know how to install it. I’ll address both of those reasons today. First, let’s take a quick look at how My Oracle Support and the Oracle Configuration Manager work together to gain a good understanding of what their differences and roles are before we tackle the install. Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system, and OCM uploads that data to Oracle’s customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support’s user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn’t need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or monitor the system health. If you make the configuration data available to Oracle Support Engineers, when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too. Quick Installation Process Overview Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don’t see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command line tool to start the installation process. The installation process requires the My Oracle Support credential (Support Identifier, username, and password) and the proxy specification (host IP address, port number, username and password). Installation Step-by-Step Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME. Unzip the zip file you downloaded from My Oracle Support – this will create a directory named CCR with several subdirectories. Using the command line, go to “$ORACLE_HOME/CCR/bin” and run the following command: “setupCCR”. Provide your My Oracle Support credential: login, password, and Support Identifier. The installer will start deploying the collector application. You have installed the Collector. Post Installation Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation.
If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager. Related documents available on My Oracle Support Oracle Configuration Manager Installation and Administration Guide [ID 728989.5] Oracle Configuration Manager Prerequisites [ID 728473.5] Oracle Configuration Manager Network Connectivity Test [ID 728970.5] Oracle Configuration Manager Collection Overview [ID 728985.5] Oracle Configuration Manager Security Overview [ID 728982.5] Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]
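    Pulling the command-line portion of the installation together, a condensed sketch looks like the following; the ORACLE_HOME path and the zip file name are examples, not values from the article:

      export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
      cd "$ORACLE_HOME"
      unzip ocm_collector.zip   # the collector zip downloaded from My Oracle Support; creates the CCR directory
      cd CCR/bin
      ./setupCCR                # prompts for the My Oracle Support login, password and Support Identifier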

    Read the article

  • Browser History ASP.Net AJAX: Microsoft.Web.Preview

    - by Narendra Tiwari
    I remember in 2006 we were working on a portal for our client Venetian, Las Vegas and the portal is full of AJAX features. One of my friend facing a challange to retain browser history with all AJAX operation. In terms of user experience it is an important aspect which could not be avoided in that scenario. Well that time we have made some workarounds to achieve the same but that may not be the perfect solution. Ok.. Now with Microsoft AJAX there are a lot of such features can be achieved with optimum efficiency. Microsoft AJAX has grown its features over the past few years. Microsoft.Web.Preview.dll is an addon in conjunction with ASP.Net AJAX. It contains a control named "History" for that purpose. Source code:- http://download.microsoft.com/download/8/3/1/831ffcd7-c571-4075-b8fa-6ff678794f60/CS-ASP-ASPBrowserHistoryinAJAX_cs.zip Below is a small sample to demonstrate the control. 1/ Get dll from the above source code bin, and add reference to your web application. 2/ Rightclick on toolbox panel and Choose Item, browse assembly. now you will be able to see History control. 3/ Add below section group in web.config under <configSections> <sectionGroup name="microsoft.web.preview" type="Microsoft.Web.Preview.Configuration.PreviewSectionGroup, Microsoft.Web.Preview"> <section name="search" type="Microsoft.Web.Preview.Configuration.SearchSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="searchSiteMap" type="Microsoft.Web.Preview.Configuration.SearchSiteMapSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="diagnostics" type="Microsoft.Web.Preview.Configuration.DiagnosticsSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/> </sectionGroup> 4/ Now create a simple webpage a textbox (txt1), button (btn1)  in an updatePanel with History control (History1). We will fill in text box and post the fom by clicking button a few times then verify if the browse history is retained. Remember button and textbox must be inside UpdatePanel and History control outside the UpdatePanel. <%@Page Language="C#" AutoEventWireup="true" CodeFile="History.aspx.cs" Inherits="History" %> <%@ Register Assembly="Microsoft.Web.Preview" Namespace="Microsoft.Web.Preview.UI.Controls" TagPrefix="cc1" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true"></asp:ScriptManager> <div> <cc1:History ID="History1" runat="server" OnNavigate="History1_Navigate"> </cc1:History> <asp:UpdatePanel ID="up1" runat="server"> <ContentTemplate> <asp:TextBox ID="txt1" runat="server"></asp:TextBox><br /> <asp:Button ID="btn1" runat="server" Text="Test" OnClick="btn1_Click" /> </ContentTemplate> <Triggers> <asp:AsyncPostBackTrigger ControlID="History1" /> </Triggers> </asp:UpdatePanel> </div> </form> </body> </html> 5/ Below code to add the textbox value in history everytime we post back using btn1 click.  
protected void btn1_Click(object sender, EventArgs e) { History1.AddHistoryPoint("txtState",txt1.Text); } 6/ and finally Navigate event of History control protected void History1_Navigate(object sender, Microsoft.Web.Preview.UI.Controls.HistoryEventArgs args) { string strState = string.Empty; if (args.State.ContainsKey("txtState")) { strState = (string)args.State["txtState"]; } txt1.Text = strState; } Now all set to go :) Reference: http://www.dotnetglobe.com/2008/08/using-asp.html

    Read the article

  • Dual Screen will only mirror after 12.04 upgrade

    - by Ne0
    I have been using Ubuntu with a dual screen for years now, after upgrading to 12.04 LTS i cannot get my dual screen working properly Graphics: 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] 01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] (Secondary) I noticed i was using open source drivers and attempted to install official binaries using the methods in this thread. Output: liam@liam-desktop:~$ sudo apt-get install fglrx fglrx-amdcccle Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be upgraded: fglrx fglrx-amdcccle 2 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. Need to get 45.1 MB of archives. After this operation, 739 kB of additional disk space will be used. Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx i386 2:8.960-0ubuntu1 [39.2 MB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx-amdcccle i386 2:8.960-0ubuntu1 [5,883 kB] Fetched 45.1 MB in 1min 33s (484 kB/s) (Reading database ... 328081 files and directories currently installed.) Preparing to replace fglrx 2:8.951-0ubuntu1 (using .../fglrx_2%3a8.960-0ubuntu1_i386.deb) ... Removing all DKMS Modules Error! There are no instances of module: fglrx 8.951 located in the DKMS tree. Done. Unpacking replacement fglrx ... Preparing to replace fglrx-amdcccle 2:8.951-0ubuntu1 (using .../fglrx-amdcccle_2%3a8.960-0ubuntu1_i386.deb) ... Unpacking replacement fglrx-amdcccle ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Setting up fglrx (2:8.960-0ubuntu1) ... update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-initramfs: deferring update (trigger activated) update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Loading new fglrx-8.960 DKMS files... Building only for 3.2.0-25-generic-pae Building for architecture i686 Building initial module for 3.2.0-25-generic-pae Done. fglrx: Running module version sanity check. 
- Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-25-generic-pae/updates/dkms/ depmod....... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle (2:8.960-0ubuntu1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Processing triggers for libc-bin ... ldconfig deferred processing now taking place liam@liam-desktop:~$ sudo aticonfig --initial -f aticonfig: No supported adapters detected When i attempt to get my settings back to what they were before upgrading i get this message requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680) and GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._gnome_2drr_2derror_2dquark.Code3: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680) Any idea's on what i need to do to fix this issue?
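    Not part of the original question: since aticonfig reports "No supported adapters detected", one commonly suggested recovery path is to remove the proprietary driver again and fall back to the open-source radeon driver before retrying the dual-head setup; treat this as a sketch, not a verified fix:

      sudo apt-get remove --purge fglrx fglrx-amdcccle
      sudo rm -f /etc/X11/xorg.conf    # only if a previous aticonfig run created one
      sudo reboot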

    Read the article

  • Silabs cp2102 driver problem

    - by Zxy
    I downloaded appropriate driver from its own site, unzipped it and then tried to install it. But: root@ghostrider:/home/zero/Downloads# tar xvf cp210x-3.1.0.tar.gz cp210x-3.1.0/ cp210x-3.1.0/COPYING cp210x-3.1.0/cp210x/ cp210x-3.1.0/cp210x-3.1.0.spec cp210x-3.1.0/cp210x/.rpmmacros cp210x-3.1.0/cp210x/configure cp210x-3.1.0/cp210x/cp210x.c cp210x-3.1.0/cp210x/cp210x.h cp210x-3.1.0/cp210x/cp210xuniversal.c cp210x-3.1.0/cp210x/cp210xuniversal.h cp210x-3.1.0/cp210x/installmod cp210x-3.1.0/cp210x/Makefile24 cp210x-3.1.0/cp210x/Makefile26 cp210x-3.1.0/cp210x/rpmmacros24 cp210x-3.1.0/cp210x/rpmmacros26 cp210x-3.1.0/cp210x/Rules.make cp210x-3.1.0/INSTALL cp210x-3.1.0/makerpm cp210x-3.1.0/PACKAGE-LIST cp210x-3.1.0/README cp210x-3.1.0/RELEASE-NOTES cp210x-3.1.0/REPORTING-BUGS cp210x-3.1.0/rpm/ cp210x-3.1.0/rpm/brp-java-repack-jars cp210x-3.1.0/rpm/brp-python-bytecompile cp210x-3.1.0/rpm/check-rpaths cp210x-3.1.0/rpm/check-rpaths-worker root@ghostrider:/home/zero/Downloads# cd cp210x-3.1.0 root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# ls COPYING cp210x-3.1.0.spec makerpm README REPORTING-BUGS cp210x INSTALL PACKAGE-LIST RELEASE-NOTES rpm root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# run ./makerpm No command 'run' found, did you mean: Command 'zrun' from package 'moreutils' (universe) Command 'runq' from package 'exim4-daemon-heavy' (main) Command 'runq' from package 'exim4-daemon-light' (main) Command 'runq' from package 'sendmail-bin' (universe) Command 'grun' from package 'grun' (universe) Command 'qrun' from package 'torque-client' (universe) Command 'qrun' from package 'torque-client-x11' (universe) Command 'lrun' from package 'lustre-utils' (universe) Command 'rn' from package 'trn' (multiverse) Command 'rn' from package 'trn4' (multiverse) Command 'rup' from package 'rstat-client' (universe) Command 'srun' from package 'slurm-llnl' (universe) run: command not found root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# sudo ./makerpm + uname -r + kernel_release=3.2.0-25-generic-pae + pwd + current_dir=/home/zero/Downloads/cp210x-3.1.0 + export current_dir + uname -r + KVER=3.2.0-25-generic-pae + echo 3.2.0-25-generic-pae + awk -F . -- { print $1 } + KVER1=3 + echo 3.2.0-25-generic-pae + awk -F . -- { print $2 } + KVER2=2 + sed -e s/3\.2\.//g + echo 3.2.0-25-generic-pae + KVER3=0-25-generic-pae + [ -f /root/.rpmmacros ] + echo 2 2 + [ 2 == 4 ] ./makerpm: 25: [: 2: unexpected operator + echo 0-25-generic-pae 0-25-generic-pae + [ 0-25-generic-pae -gt 15 ] ./makerpm: 29: [: Illegal number: 0-25-generic-pae + cp /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros24 /root/.rpmmacros + d=/var/tmp/silabs + [ ! 
-d /var/tmp/silabs ] + mkdir /var/tmp/silabs + cd /var/tmp/silabs + r=/var/tmp/silabs/rpmbuild + o=cp210x-3.1.0 + s=/var/tmp/silabs/rpmbuild/SOURCES + spec=cp210x-3.1.0.spec + rm -rf /var/tmp/silabs/rpmbuild + mkdir rpmbuild + mkdir rpmbuild/SOURCES + mkdir rpmbuild/SRPMS + mkdir rpmbuild/SPECS + mkdir rpmbuild/BUILD + mkdir rpmbuild/RPMS + cd /var/tmp/silabs/rpmbuild/SOURCES + rm -rf cp210x-3.1.0 + mkdir cp210x-3.1.0 + cp -r /home/zero/Downloads/cp210x-3.1.0/cp210x/Makefile24 /home/zero/Downloads/cp210x-3.1.0/cp210x/Makefile26 /home/zero/Downloads/cp210x- 3.1.0/cp210x/Rules.make /home/zero/Downloads/cp210x-3.1.0/cp210x/configure /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210x.c /home/zero/Downloads/cp210x- 3.1.0/cp210x/cp210x.h /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210xuniversal.c /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210xuniversal.h /home/zero/Downloads/cp210x- 3.1.0/cp210x/installmod /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros24 /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros26 cp210x-3.1.0 + echo 2 2 + [ 2 == 4 ] ./makerpm: 64: [: 2: unexpected operator + echo 0-25-generic-pae 0-25-generic-pae + [ 0-25-generic-pae -gt 15 ] ./makerpm: 68: [: Illegal number: 0-25-generic-pae + cp /home/zero/Downloads/cp210x-3.1.0/cp210x/.rpmmacros24 cp210x-3.1.0/.rpmmacros cp: cannot stat `/home/zero/Downloads/cp210x-3.1.0/cp210x/.rpmmacros24': No such file or directory + MyCopy=0 + rm -f cp210x-3.1.0.tar + rm -f cp210x-3.1.0.tar.gz + tar -cf cp210x-3.1.0.tar cp210x-3.1.0 + gzip cp210x-3.1.0.tar + cp /home/zero/Downloads/cp210x-3.1.0/cp210x-3.1.0.spec /var/tmp/silabs/rpmbuild/SPECS + rpmbuild -ba /var/tmp/silabs/rpmbuild/SPECS/cp210x-3.1.0.spec ./makerpm: 121: ./makerpm: rpmbuild: not found + [ -f /root/.rpmmacros.cp210x ] How may I solve my problem? Thanks
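    Not from the original post: the trace ends with "rpmbuild: not found", so an obvious first step on Ubuntu would be to install the rpm tooling before re-running makerpm; this is a guess based only on the error shown, not a confirmed fix:

      sudo apt-get install rpm   # should provide /usr/bin/rpmbuild on Ubuntu
      sudo ./makerpm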

    Read the article

  • Can't build gcc anymore since upgrade to 11.10

    - by Raphael R.
    On Monday I've upgraded to from Ubuntu 11.04 (my initial installation) to 11.10 and now I can't build gcc from source anymore. Since I forgot to uninstall the gcc package before the upgrade, Ubuntu replaced my 4.7.0 compiler with it's stable 4.6.1. So I tried to build the SVN sources again, but it fails. I've most recently tried it with SVN revision 180193. After some time, the build fails with the following message: /home/raphael/devel/gcc/build/./gcc/xgcc -B/home/raphael/devel/gcc/build/./gcc/ -B/usr/i686-pc-linux-gnu/bin/ -B/usr/i686-pc-linux-gnu/lib/ -isystem /usr/i686-pc-linux-gnu/include -isystem /usr/i686-pc-linux-gnu/sys-include -g -O2 -O2 -I. -I. -I../../src/gcc -I../../src/gcc/. -I../../src/gcc/../include -I../../src/gcc/../libdecnumber -I../../src/gcc/../libdecnumber/bid -I../libdecnumber -I../../src/gcc/../libgcc -g -O2 -DIN_GCC -W -Wall -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -I. -I. -I../.././gcc -I../../../src/libgcc -I../../../src/libgcc/. -I../../../src/libgcc/../gcc -I../../../src/libgcc/../include -I../../../src/libgcc/config/libbid -DENABLE_DECIMAL_BID_FORMAT -DHAVE_CC_TLS -DUSE_TLS -o _ashldi3.o -MT _ashldi3.o -MD -MP -MF _ashldi3.dep -DL_ashldi3 -c ../../../src/libgcc/../gcc/libgcc2.c \ -fvisibility=hidden -DHIDE_EXPORTS In file included from /usr/include/stdio.h:28:0, from ../../../src/libgcc/../gcc/tsystem.h:88, from ../../../src/libgcc/../gcc/libgcc2.c:29: /usr/include/features.h:323:26: fatal error: bits/predefs.h: File or directory not found. I've cofigured it with: ~/devel/gcc/build$ ../src/configure --prefix=/usr --enable-languages=c++ And make it with: ~/devel/gcc/build$ make -j4 Just to be sure, I did a rm -rf * in the build directory in case there's some broken stuff inside. Didn't help, though. That's the backstory. I tried to fix it and searched for the bits/predefs.h. It's inside /usr/include/i386-linux-gnu. I temporarily fixed the problem by doing ~/devel/gcc/build$ C_INCLUDE_PATH=/usr/include/i386-linux-gnu make -j4 Which is only temporary because now gcc complains that it can't find crti.o. Which i can find in /usr/lib/i386-linux-gnu. Now i could also set C_LIBRARY_PATH - actually it doesn't work - but I feel like I'm fighting the system here. Also, even if it succeeds, my newly built compiler would also not know about the i386-linux-gnu stuff. So I would have to set C_LIBRARY_PATH and C_INCLUDE_PATH before every build of every project I have. I could add it to my .bashrc but that subverts the system even more. So, how do I tell the build process: That there are additional include/lib directories, and That it should build a gcc which respects them too? Edit: I forgot to include the command which causes the above error message. Also I can think of another solution: Copy the stuff from /usr/include/i386-linux-gnu to /usr/include (same thing for /usr/lib/i386-linux-gnu to /usr/lib). But that doesn't feel right, either. Finally, the system's gcc 4.6.1 can compile other applications just fine, except mine, which use C++11 features not present in the 4.6 series.
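    Not from the original question: gcc reads LIBRARY_PATH (not C_LIBRARY_PATH) for its link-time search, so the per-build workaround sketched above would look roughly like this, using the multiarch directories named in the question:

      export C_INCLUDE_PATH=/usr/include/i386-linux-gnu
      export LIBRARY_PATH=/usr/lib/i386-linux-gnu
      make -j4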

    Read the article

  • Dual boot Ubuntu and Windows 7, I Can only boot ubuntu through recovery mode

    - by Alec
    I want to become a new user of Ubuntu, however this problem is preventing me. I have/had Window 7 professional on my computer. I recently looked into getting linux. I discovered dual-booting and decided to give it a try. First I created a bootable flash drive with ubuntu 12.10 64 bit. I then followed the instructions on: https://help.ubuntu.com/community/WindowsDualBoot after I finished going through the setup, my computer rebooted. After the reboot I was able to select Ubuntu, advanced options for Ubuntu, 2 memory tests, and windows 7 (loader). So I chose Windows ( honestly i was more concerned that i still had everything on windows at this point). I then rebooted again and selected Ubuntu. When i selected Ubuntu, the background screen of Grub (the crimson/burgandy color) stayed for a few seconds then the screen went black: video here http://www.youtube.com/watch?v=6kKcG4sT7Lg&feature=plcp I tried again with the same results. so i redid the ubuntu install differently using http://www.liberiangeek.net/2012/10/dual-booting-windows-7-and-ubuntu-12-10-quantal-quetzal/. After rebooting the same thing happened. After that i was stumped, so i figured it could hurt to experiment. after all i backed up my windows 7 stuff, and i have the software disk. I tried booting in recovery mode under "advanced options for Ubuntu" and sure enough, after selecting continue to normal reboot it worked. So i updated and everything but when i rebooted it still wouldn't boot under Ubuntu. It would always boot after recovery mode. So i try installing 12.10 32 bit Ubuntu. the same problem keeps happening. I can still get to Ubuntu through recovery mode. so i went online and tried using the terminal (in ubuntu that i booted through recovery mode) when i was using it i discovered that "Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected" kept showing up. also i noticed a notification in the top right corner that looked like a do not enter sign. it said "an error occured, please run package manager from the right-click menu or apt-get in a terminal to see what is wrong. the error message was: 'ror in sitecustomize;set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected traceback (most recent called last): File "/usr/bin/lsb_release EOFError: EOF read where not expected 39;0' this usually means that your installed packages have unmet dependencies" Naturally i assumed this was what was causing my boot problems. I downloaded synaptic and updated everything and the error went away. but my boot problem was still a problem. so i go online find some things that have worked for others, like this Try to do this (in your terminal: sudo nano /etc/default/grub Look for: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" Change it too : GRUB_CMDLINE_LINUX_DEFAULT="quiet" And update Grub: sudo update-grub This should fix stuff.) I did this and i still have the problem. sorry for the excessive explanation, please help.
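    Not from the original post: a commonly suggested variant of the GRUB edit quoted above is to add nomodeset to the same line, which disables early kernel mode-setting on graphics hardware that blanks the screen right after the GRUB menu; treat it as an experiment rather than a confirmed fix:

      sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"/' /etc/default/grub
      sudo update-grub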

    Read the article

  • Hidden exceptions

    - by user12617285
    Occasionally you may find yourself in a Java application environment where exceptions in your code are being caught by the application framework and either silently swallowed or converted into a generic exception. Either way, the potentially useful details of your original exception are inaccessible. Wouldn't it be nice if there was a VM option that showed the stack trace for every exception thrown, whether or not it's caught? In fact, HotSpot includes such an option: -XX:+TraceExceptions. However, this option is only available in a debug build of HotSpot (search globals.hpp for TraceExceptions). And based on a quick skim of the HotSpot source code, this option only prints the exception class and message. A more useful capability would be to have the complete stack trace printed as well as the code location catching the exception. This is what the various TraceException* options in in Maxine do (and more). That said, there is a way to achieve a limited version of the same thing with a stock standard JVM. It involves the use of the -Xbootclasspath/p non-standard option. The trick is to modify the source of java.lang.Exception by inserting the following: private static final boolean logging = System.getProperty("TraceExceptions") != null; private void log() { if (logging && sun.misc.VM.isBooted()) { printStackTrace(); } } Then every constructor simply needs to be modified to call log() just before returning: public Exception(String message) { super(message); log(); } public Exception(String message, Throwable cause) { super(message, cause); log(); } // etc... You now need to compile the modified Exception.java source and prepend the resulting class to the boot class path as well as add -DTraceExceptions to your java command line. Here's a console session showing these steps: % mkdir boot % javac -d boot Exception.java % java -DTraceExceptions -Xbootclasspath/p:boot -cp com.oracle.max.vm/bin test.output.HelloWorld java.util.zip.ZipException: error in opening zip file at java.util.zip.ZipFile.open(Native Method) at java.util.zip.ZipFile.(ZipFile.java:127) at java.util.jar.JarFile.(JarFile.java:135) at java.util.jar.JarFile.(JarFile.java:72) at sun.misc.URLClassPath$JarLoader.getJarFile(URLClassPath.java:646) at sun.misc.URLClassPath$JarLoader.access$600(URLClassPath.java:540) at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:607) at java.security.AccessController.doPrivileged(Native Method) at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:599) at sun.misc.URLClassPath$JarLoader.(URLClassPath.java:583) at sun.misc.URLClassPath$3.run(URLClassPath.java:333) at java.security.AccessController.doPrivileged(Native Method) at sun.misc.URLClassPath.getLoader(URLClassPath.java:322) at sun.misc.URLClassPath.getLoader(URLClassPath.java:299) at sun.misc.URLClassPath.getResource(URLClassPath.java:168) at java.net.URLClassLoader$1.run(URLClassLoader.java:194) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at sun.misc.Launcher$ExtClassLoader.findClass(Launcher.java:229) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at java.lang.ClassLoader.loadClass(ClassLoader.java:295) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) java.security.PrivilegedActionException at java.security.AccessController.doPrivileged(Native Method) at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:599) at 
sun.misc.URLClassPath$JarLoader.(URLClassPath.java:583) at sun.misc.URLClassPath$3.run(URLClassPath.java:333) at java.security.AccessController.doPrivileged(Native Method) at sun.misc.URLClassPath.getLoader(URLClassPath.java:322) ... It's worth pointing out that this is not as useful as direct VM support for tracing exceptions. It has (at least) the following limitations: The trace is shown for every exception, whether it is thrown or not. It only applies to subclasses of java.lang.Exception as there appears to be bootstrap issues when the modification is applied to Throwable.java. It does not show you where the exception was caught. It involves overriding a class in rt.jar, something should never be done in a non-development environment.

    Read the article

  • Can't add repos after upgrading to 12.04 LTS

    - by joao
    I'm a complete Linux newbie. I've just upgraded from 10.04 to 12.04 LTS and all sorts of things have started to go wrong. One main problem is the fact that I can't add repos. Example: sudo add-apt-repository ppa:team-xbmc outputs: Traceback (most recent call last): File "/usr/bin/add-apt-repository", line 8, in <module> from softwareproperties.SoftwareProperties import SoftwareProperties File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 53, in <module> from ppa import AddPPASigningKeyThread, expand_ppa_line File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 27, in <module> import pycurl ImportError: librtmp.so.0: cannot open shared object file: No such file or directory /etc/apt/sources.list # deb cdrom:[Ubuntu 10.04.1 LTS _Lucid Lynx_ - Release i386 (20100816.1)]/ lucid main restricted # deb cdrom:[Ubuntu 10.04.1 LTS _Lucid Lynx_ - Release i386 (20100816.1)]/ maverick main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://archive.ubuntu.com/ubuntu precise main restricted deb-src http://archive.ubuntu.com/ubuntu precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://archive.ubuntu.com/ubuntu precise-updates main restricted deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu precise universe deb-src http://archive.ubuntu.com/ubuntu precise universe deb http://archive.ubuntu.com/ubuntu precise-updates universe deb-src http://archive.ubuntu.com/ubuntu precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://archive.ubuntu.com/ubuntu precise multiverse deb-src http://archive.ubuntu.com/ubuntu precise multiverse deb http://archive.ubuntu.com/ubuntu precise-updates multiverse deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb-src http://pt.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. 
# deb http://archive.canonical.com/ubuntu lucid partner # deb-src http://archive.canonical.com/ubuntu lucid partner deb http://archive.ubuntu.com/ubuntu precise-security main restricted deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted deb http://archive.ubuntu.com/ubuntu precise-security universe deb-src http://archive.ubuntu.com/ubuntu precise-security universe deb http://archive.ubuntu.com/ubuntu precise-security multiverse deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse # deb http://ppa.launchpad.net/stebbins/handbrake-snapshots/ubuntu precise main # disabled on upgrade to precise I have no clue what to do next. Should I just scrap this installation and start from scratch or is this fixable? librtmp.so.0 also shows up in error logs I've started to get from XBMC (I'm not sure if this is relevant info). Thanks in advance for any help you can give me!
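    A hedged diagnostic sketch for an ImportError like this one: check whether the dynamic loader can find the library pycurl is asking for, then reinstall the packages that ship it and the Python binding. The package names below are the usual ones on 12.04 but should be treated as assumptions.

        # Is librtmp known to the dynamic loader at all?
        ldconfig -p | grep librtmp

        # Reinstall the library and the Python cURL binding (package names assumed)
        sudo apt-get install --reinstall librtmp0 python-pycurl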

    Read the article

  • What packages do I need to compile .tex documents using XeLaTeX?

    - by maria
    Hi I'm aware of the existence of similar threads on this forum. But any of replies mach to my problem. I'm using Ubuntu 10.4 and I hadn't problems with fonts till I've decided to use XeLaTeX instead of LaTeX (cf http://tex.stackexchange.com/questions/12347/typesetting-a-document-using-arabic-script/12358#12358). The problem is that I'm not able to compile any .tex document using XeLaTeX, as well as properly display XeLaTeX documentation. As I've learn thanks to mentioned thread, XeLaTeX uses the fonts availables in general in the system. I was trying yo read fontspec documentation, but it opens in pdf with a lot of white gaps and terminal output (quite long) consist mostly of errors. This are just few lines of it: Error: Missing language pack for 'Adobe-Japan1' mapping Error: Unknown font tag 'F5.1' Error (24124): No font in show Error: Unknown font tag 'F5.1' I was trying to compile simple XeLaTeX file: \documentclass{article} \usepackage{fontspec} \setmainfont{Linux Libertine O} \begin{document} Hello World! \end{document} without succes. This is terminal output of compilation: This is XeTeX, Version 3.1415926-2.2-0.9995.2 (TeX Live 2009/Debian) restricted \write18 enabled. entering extended mode (./ex.tex LaTeX2e <2009/09/24> Babel <v3.8l> and hyphenation patterns for english, usenglishmax, dumylang, noh yphenation, polish, loaded. (/usr/share/texmf-texlive/tex/latex/base/article.cls Document Class: article 2007/10/19 v1.4h Standard LaTeX document class (/usr/share/texmf-texlive/tex/latex/base/size10.clo)) (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.sty (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty) (/usr/share/texmf-texlive/tex/latex/tools/calc.sty) (/usr/share/texmf-texlive/tex/latex/xkeyval/xkeyval.sty (/usr/share/texmf-texlive/tex/generic/xkeyval/xkeyval.tex (/usr/share/texmf-texlive/tex/generic/xkeyval/keyval.tex))) (/usr/share/texmf-texlive/tex/latex/base/fontenc.sty (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1enc.def) (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1lmr.fd)) fontspec.cfg loaded. (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.cfg))kpathsea: Invalid fontname `Linux Libertine O', contains ' ' ! Font \zf@basefont="Linux Libertine O" at 10.0pt not loadable: Metric (TFM) fi le or installed font not found. \zf@fontspec ...ntname \zf@suffix " at \f@size pt \unless \ifzf@icu \zf@set@... l.3 \setmainfont{Linux Libertine O} ? I can't find Linux Libertine O. 
Searching for otf- by aptitude gives as result: maria@maria-laptop:/etc/fonts$ aptitude search otf p emdebian-rootfs - emdebian root filesystem support p libotf-bin - A Library for handling OpenType Font - utilities p libotf-dev - A Library for handling OpenType Font - development i libotf0 - A Library for handling OpenType Font - runtime p libotf0-dbg - The libotf libraries and debugging symbols p libpam-dotfile - A PAM module which allows users to have more than one password p livecd-rootfs - construction script for the livecd rootfs p makebootfat - Utility to create a bootable FAT filesystem p otf-ipaexfont - Japanese OpenType font, IPAexFont (IPAexGothic/Mincho) p otf-ipaexfont-gothic - Japanese OpenType font, IPAexFont (IPAexGothic) p otf-ipaexfont-mincho - Japanese OpenType font, IPAexFont (IPAexMincho) p otf-ipafont - Japanese OpenType font set, IPAfont p otf-ipafont-gothic - Japanese OpenType font set, IPA Gothic font p otf-ipafont-mincho - Japanese OpenType font set, IPA Mincho font p otf-stix - the Scientific and Technical Information eXchange fonts p otf-thai-tlwg - Thai fonts in OpenType format p otf-yozvox-yozfont - Japanese proportional Handwriting OpenType font p otf2bdf - generate BDF bitmap fonts from OpenType outline fonts p robotfindskitten - Zen Simulation of robot finding kitten So font in question is not just uninstalled, but not available, if I'm not wrong. Does it mean that I lack some repositoires? I was trying also to apply solution from the thread How do I reinstall default fonts?, but the result is: maria@maria-laptop:~$ sudo apt-get install msttcorefonts [sudo] password for maria: Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting ttf-mscorefonts-installer instead of msttcorefonts ttf-mscorefonts-installer is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. maria@maria-laptop:~$ It seems that is not a usual problem for use of XeLaTeX; nobody in the mentioned thread suggested instalation of anything else than TeX Live. Thanks in advance
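    For what it's worth, the Linux Libertine family is packaged separately from TeX Live on Ubuntu; the package name below is the usual one on 10.04-era releases (fonts-linuxlibertine on later releases), but it is an assumption, so verify it first:

        # Confirm the package name, install the font, then check that fontconfig sees it
        apt-cache search libertine
        sudo apt-get install ttf-linux-libertine
        fc-list | grep -i libertine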

    Read the article

  • How do I install Revenge of the Titans?

    - by Akash
    I've downloaded the .deb file of Revenge of the Titans, and installed it using Ubuntu Software Center. Now, when I try to launch it using the software launcher nothing happens. Any ideas? The .deb file was downloaded from the Humble Indie Bundle. I am unable to launch it from the terminal (the command revenge-of-the-titans says command not found). I also tried the .tar.gz. When I extract it and run ./revenge.sh, nothing happens. No output on the terminal or anything at all. I have set chmod 777 revenge.sh as well. The command /opt/revengeofthetitans/revenge.sh does not give any output. If I run gedit /opt/revengeofthetitans/revenge.sh in the terminal, this is what it contains:

    #!/bin/bash
    #
    # revenge.sh
    #
    ###############################################################################

    SCRIPT="`basename $0`"
    GAMEDIR="${HOME}/.revenge_of_the_titans_1.80"
    LOGFILE="${GAMEDIR}/${SCRIPT}.log"
    INSTDIR="`dirname $0`" ; cd "${INSTDIR}" ; INSTDIR="`pwd`"

    [[ ! -d "${GAMEDIR}" ]] && mkdir -m 0755 "${GAMEDIR}"

    JARPATH="patch.jar:RevengeOfTheTitans.jar:data-hib.jar:gfx.jar:fonts.jar:images.jar:music.jar:fx-mono.jar:fx-stereo.jar:gamecommerce.jar:common.jar:spgl-lite.jar:lwjgl.jar:lwjgl_util.jar:jorbis.jar:jinput.jar"

    # XMODIFIERS is cleared here to prevent SCIM screwing up keyboard input
    XMODIFIERS= java \
        -noverify \
        -Djava.library.path="${INSTDIR}" \
        -Dorg.lwjgl.util.NoChecks=true \
        -Dorg.lwjgl.librarypath="${INSTDIR}" \
        -Dnet.puppygames.applet.Launcher.resources=/resources-hib.dat \
        -Dnet.puppygames.applet.Game.gameResource=game.hib \
        -XX:MaxGCPauseMillis=3 \
        -Xms64m \
        -Xmx375m \
        -Xincgc \
        -cp "${JARPATH}" \
        net.puppygames.applet.Launcher \
        "$@" \
        >"${LOGFILE}" 2>&1

    exit 0

    #
    # EOF
    #
    ###############################################################################
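    One detail worth noting from the script itself: stdout and stderr are redirected into a log file under the game directory, so the lack of terminal output is expected. Checking that log (the path follows from the GAMEDIR and LOGFILE variables above) may reveal the actual startup error:

        cat "${HOME}/.revenge_of_the_titans_1.80/revenge.sh.log"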

    Read the article

  • MVC Portable Areas &ndash; Deploying Static Files

    - by Steve Michelotti
    This is the second post in a series related to build and deployment considerations as I’ve been exploring MVC Portable Areas: #1 – Using Web Application Project to build portable areas #2 – Conventions for deploying portable area static files #3 – Portable area static files as embedded resources As I’ve been digging more into portable areas, one of the things I’ve liked best is the deployment story which enables my *.aspx, *.ascx pages to be compiled into the assembly as embedded resources rather than having to maintain all those files separately. In traditional web forms, that was always the thing to prevented developers from utilizing *.ascx user controls across projects (see this post for using portable areas in web forms).  However, though the aspx pages are embedded, the supporting static files (e.g., images, css, javascript) are *not*. Most of the demos available online today tend to brush over this issue and focus solely on the aspx side of things. But to create truly robust portable areas, it’s important to have a good story for these supporting files as well.  I’ve been working with two different approaches so far (of course I’d really like to hear if other people are using alternatives). Scenario For the approaches below, the scenario really isn’t that important. It could be something as trivial as this partial view: 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %> 2: <img src="<%: Url.Content("~/images/arrow.gif") %>" /> Hello World! The point is that there needs to be careful consideration for *any* scenario that links to an external file such as an image, *.css, *.js, etc. In the example shown above, it uses the Url.Content() method to convert to a relative path. But this method won’t necessary work depending on how you deploy your portable area. One approach to address this issue is to build your portable area project with MSDeploy/WebDeploy so that it is packaged properly before incorporating into the host application. All of the *.cs files are removed and the project is ready for xcopy deployment – however, I do *not* need the “Views” folder since all of the mark up has been compiled into the assembly as embedded resources. Now in the host application we create a folder called “Modules” and deploy any portable areas as sub-folders under that: At this point we can add a simple assembly reference to the Widget1.dll sitting in the Modules\Widget1\bin folder. I can now render the portable image in my view like any other portable area. However, the problem with that is that the view results in this:   It couldn’t find arrow.gif because it looked for /images/arrow.gif and it was *actually* located at /images/Modules/Widget1/images/arrow.gif. One solution is to make the physical location of the portable configurable from the perspective of the host like this: 1: <appSettings> 2: <add key="Widget1" value="Modules\Widget1"/> 3: </appSettings> Using the <appSettings> section is a little cheesy but it could be better formalized into its own section. In fact, if were you willing to rely on conventions (e.g., “Modules\{areaName}”) then then config could be eliminated completely. 
With this config in place, we could create our own Html helper method called Url.AreaContent() that “wraps” the OOTB Url.Content() method while simply pre-pending the area location path: 1: public static string AreaContent(this UrlHelper urlHelper, string contentPath) 2: { 3: var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"]; 4: var areaPath = (string)ConfigurationManager.AppSettings[areaName]; 5: 6: return urlHelper.Content("~/" + areaPath + "/" + contentPath); 7: } With these two items in place, we just change our Url.Content() call to Url.AreaContent() like this: 1: <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World! and the arrow.gif now renders correctly. Since we’re just using our own Url.AreaContent() rather than the built-in Url.Content(), this solution works for images, *.css, *.js, or any externally referenced files. Additionally, any images referenced inside a css file will work provided it’s a relative reference and not an absolute reference. An alternative to this approach is to build the static files into the assembly as embedded resources themselves. I’ll explore this in another post (linked at the top).
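    As a side note, the convention-based variant hinted at above (assuming portable areas always deploy under ~/Modules/{areaName}, so no <appSettings> entry is needed) could look roughly like this sketch; it is not the article's actual helper:

        // Sketch: resolve portable area content by convention rather than configuration.
        public static string AreaContent(this UrlHelper urlHelper, string contentPath)
        {
            var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
            return urlHelper.Content("~/Modules/" + areaName + "/" + contentPath.TrimStart('~', '/'));
        }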

    Read the article

  • SharePoint 2010 Hosting :: Setting Default Column Values on a Folder Programmatically

    - by mbridge
    The reason I write this post today is because my initial searches on the Internet provided me with nothing on the topic.  I was hoping to find a reference to the SDK but I didn’t have any luck.  What I want to do is set a default column value on an existing folder so that new items in that folder automatically inherit that value.  It’s actually pretty easy to do once you know what the class is called in the API.  I did some digging and discovered that class is MetadataDefaults. It can be found in Microsoft.Office.DocumentManagement.dll.  Note: if you can’t find it in the GAC, this DLL is in the 14/CONFIG/BIN folder and not the 14/ISAPI folder.  Add a reference to this DLL in your project.  In my case, I am building a console application, but you might put this in an event receiver or workflow. In my example today, I have simple custom folder and document content types.  I have one shared site column called DocumentType.  I have a document library which each of these content types registered.  In my document library, I have a folder named Test and I want to set its default column values using code.  Here is what it looks like.  Start by getting a reference to the list in question.  This assumes you already have a SPWeb object.  In my case I have created it and it is called site. SPList customDocumentLibrary = site.Lists["CustomDocuments"]; You then pass the SPList object to the MetadataDefaults constructor. MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary); Now I just need to get my SPFolder object in question and pass it to the meethod SetFieldDefault.  This takes a SPFolder object, a string with the name of the SPField to set the default on, and finally the value of the default (in my case “Memo”). SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"]; columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo"); You can set multiple defaults here.  When you’re done, you will need to call .Update(). columnDefaults.Update(); Here is what it all looks like together. using (SPSite siteCollection = new SPSite("http://sp2010/sites/ECMSource")) {     using (SPWeb site = siteCollection.OpenWeb())     {         SPList customDocumentLibrary = site.Lists["CustomDocuments"];         MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);          SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];         columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");         columnDefaults.Update();     } } You can verify that your property was set correctly on the Change Default Column Values page in your list This is something that I could see used a lot on an ItemEventReceiver attached to a folder to do metadata inheritance.  Whenever, the user changed the value of the folder’s property, you could have it update the default.  Your code might look something columnDefaults.SetFieldDefault(properties.ListItem.Folder, "MyField", properties.ListItem[" This is a great way to keep the child items updated any time the value a folder’s property changes.  I’m also wondering if this can be done via CAML.  I tried saving a site template, but after importing I got an error on the default values page.  I’ll keep looking and let you know what I find out.
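    To make the event-receiver idea at the end concrete, a rough sketch (the class name is hypothetical and the DocumentType field is just the example used above) might look like this:

        // Sketch only: when a folder's own column value changes, push it down as the
        // default for new items created inside that folder.
        public class FolderDefaultsReceiver : SPItemEventReceiver
        {
            public override void ItemUpdated(SPItemEventProperties properties)
            {
                if (properties.ListItem == null || properties.ListItem.Folder == null)
                    return;

                MetadataDefaults columnDefaults = new MetadataDefaults(properties.List);
                columnDefaults.SetFieldDefault(properties.ListItem.Folder, "DocumentType",
                    (string)properties.ListItem["DocumentType"]);
                columnDefaults.Update();
            }
        }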

    Read the article

  • Ubuntu 12.04 lightdm dumps to tty. Cannot start GUI interface when booting off harddrive, but can when booting off usb

    - by user72681
    When booting, lightdm dumps to tty. No GUI interface works- this is after a fresh install of Ubuntu 12.04 where the GUI interface works when running off the USB. I have an NVIDIA Corporation G98 [Quadro NVS 420] graphics card. After I call startx from the terminal it still doesn't work. I get the following in the Xorg.0.log: [ 327.718] (--) NVIDIA(0): Memory: 262144 kBytes [ 327.718] (--) NVIDIA(0): VideoBIOS: 62.98.6f.00.07 [ 327.718] (II) NVIDIA(0): Detected PCI Express Link width: 16X [ 327.718] (--) NVIDIA(0): Interlaced video modes are supported on this GPU [ 327.756] (--) NVIDIA(0): Connected display device(s) on Quadro NVS 420 at PCI:3:0:0 [ 327.756] (--) NVIDIA(0): none [ 327.756] (EE) NVIDIA(0): No display devices found for this X screen. [ 328.010] (II) UnloadModule: "nvidia" [ 328.010] (II) Unloading nvidia [ 328.010] (II) UnloadModule: "wfb" [ 328.010] (II) Unloading wfb [ 328.010] (II) UnloadModule: "fb" [ 328.010] (II) Unloading fb [ 328.011] (EE) Screen(s) found, but none have a usable configuration. [ 328.011] Fatal server error: [ 328.011] no screens found /var/log/lightdm/lightdm.log [+0.00s] DEBUG: Starting local X display [+0.00s] DEBUG: X server :0 will replace Plymouth [+0.02s] DEBUG: Using VT 7 [+0.02s] DEBUG: Activating VT 7 [+0.02s] DEBUG: Logging to /var/log/lightdm/x-0.log [+0.02s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+0.02s] DEBUG: Launching X Server [+0.02s] DEBUG: Launching process 1074: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none [+0.02s] DEBUG: Waiting for ready signal from X server :0 [+0.02s] DEBUG: Acquired bus name org.freedesktop.DisplayManager [+0.02s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0 [+1.38s] DEBUG: Process 1074 exited with return value 1 [+1.38s] DEBUG: X server stopped [+1.38s] DEBUG: Removing X server authority /var/run/lightdm/root/:0 [+1.38s] DEBUG: Releasing VT 7 [+1.38s] DEBUG: Stopping Plymouth, X server failed to start [+1.39s] DEBUG: Display server stopped [+1.39s] DEBUG: Stopping display [+1.39s] DEBUG: Display stopped [+1.39s] DEBUG: Stopping X local seat, failed to start a display [+1.39s] DEBUG: Stopping seat [+1.39s] DEBUG: Seat stopped [+1.39s] DEBUG: Required seat has stopped [+1.39s] DEBUG: Stopping display manager [+1.39s] DEBUG: Display manager stopped [+1.39s] DEBUG: Stopping daemon [+1.39s] DEBUG: Exiting with return value 1 /var/log/lightdm/x-0.log X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-31-server x86_64 Ubuntu Current Operating System: Linux oorn 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-23-generic root=UUID=b25ab072-077d-40f1-95a4-c7fd66acd2f0 ro reboot=pci quiet splash vt.handoff=7 Build Date: 07 May 2012 11:43:21PM xorg-server 2:1.11.4-0ubuntu10.2 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.24.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Wed Jun 27 12:51:45 2012 (==) Using config file: "/etc/X11/xorg.conf" (==) Using system config directory "/usr/share/X11/xorg.conf.d" (EE) NVIDIA(0): No display devices found for this X screen. 
(EE) Screen(s) found, but none have a usable configuration. Fatal server error: no screens found Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log Server terminated with error (1). Closing log file.
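    Not an answer from the thread, but given that the log shows /etc/X11/xorg.conf being loaded and then no display devices being found, one common diagnostic step (an assumption, not a confirmed fix) is to move that file aside so the server autodetects the outputs, then restart the display manager:

        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
        sudo service lightdm restart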

    Read the article

  • Do your filesystems have un-owned files ?

    - by darrenm
    As part of our work for integrated compliance reporting in Solaris we plan to provide a check for determining if the system has "un-owned files", i.e. those which are owned by a uid that does not exist in our configured nameservice. Tests such as this already exist in the Solaris CIS Benchmark (9.24 Find Un-owned Files and Directories) and other security benchmarks. The obvious method of doing this would be using find(1) with the -nouser flag. However that requires we bring into memory the metadata for every single file and directory in every local file system we have mounted. That is probably not an acceptable thing to do on a production system that has a large amount of storage and it is potentially going to take a long time. Just as I went to bed last night an idea for a much faster way of listing file systems that have un-owned files came to me. I've now implemented it and I'm happy to report it works very well and performs many orders of magnitude better than using find(1) ever will. ZFS (since pool version 15) has per-user space accounting and quotas. We can report very quickly, and without actually reading any files at all, how much space any given user id is using on a ZFS filesystem. Using that information we can implement a check to very quickly list which filesystems contain un-owned files. First a few caveats, because the output data won't be exactly the same as what you get with find but it answers the same basic question. This only works for ZFS and it will only tell you which filesystems have files owned by unknown users, not the actual files. If you really want to know what the files are (i.e. to give them an owner) you still have to run find(1). However it has the huge advantage that it doesn't use find(1) so it won't be dragging the metadata for every single file and directory on the system into memory. It also has the advantage that it can check filesystems that are not currently mounted (which find(1) can't do). It ran in about 4 seconds on a system with 300 ZFS datasets from 2 pools totalling about 3.2T of allocated space, and that includes the uid lookups and output.

    #!/bin/sh
    for fs in $(zfs list -H -o name -t filesystem -r rpool) ; do
        unknowns=""
        for uid in $(zfs userspace -Hipn -o name,used $fs | cut -f1); do
            if [ -z "$(getent passwd $uid)" ]; then
                unknowns="$unknowns$uid "
            fi
        done
        if [ ! -z "$unknowns" ]; then
            mountpoint=$(zfs list -H -o mountpoint $fs)
            mounted=$(zfs list -H -o mounted $fs)
            echo "ZFS File system $fs mounted ($mounted) on $mountpoint \c"
            echo "has files owned by unknown user ids: $unknowns"
        fi
    done

    Sample output:

    ZFS File system rpool/ROOT/solaris-30/var mounted (no) on /var has files owned by unknown user ids: 6435 33667 101
    ZFS File system rpool/ROOT/solaris-32/var mounted (yes) on /var has files owned by unknown user ids: 6435 33667
    ZFS File system builds/bob mounted (yes) on /builds/bob has files owned by unknown user ids: 101

    Note that the above might not actually appear exactly like that in any future Solaris product or feature; it is provided just as an example of what you can do with ZFS user space accounting to answer questions like the above.
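    Once the script has narrowed things down to a particular dataset, the follow-up the author mentions, actually locating the files in order to give them an owner, is an ordinary find(1) restricted to that one mountpoint, for example:

        # Stay within this one filesystem (-xdev) and list files whose uid has no passwd entry
        find /var -xdev -nouser -print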

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a fresh started system, free reports about 1.5G used RAM (8G RAM alltogether, Ubuntu 12.04 with lightdm and plasma desktop, one konsole window started). Having the apps running I use, it still consumes not more than 2G. However, having the system running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% being available), everything else says differently. free -m for example reports on about day 7: total used free shared buffers cached Mem: 7459 7013 446 0 178 997 -/+ buffers/cache: 5836 1623 Swap: 9536 296 9240 (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the windows manager being gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports: kernel: [856738.020829] VFS: file-max limit 752838 reached So I closed those applications I was able to close, and killed X using Ctrl-Alt-backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all information I could think of (vmstat, free, smem, proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this: total used free shared buffers cached Mem: 7459 2677 4781 0 62 419 -/+ buffers/cache: 2195 5263 Swap: 9536 59 9477 (~3.5GB being freed) -- to compare with the output after a fresh start: total used free shared buffers cached Mem: 7459 1483 5975 0 63 730 -/+ buffers/cache: 689 6769 Swap: 9536 0 9536 Two more helpful outputs are provided by memstat -u. Shortly before the crash: User Count Swap USS PSS RSS mail 1 0 200 207 616 whoopsie 1 764 740 817 2300 colord 1 3200 836 894 2156 root 62 70404 352996 382260 569920 izzy 80 177508 1465416 1519266 1851840 After having X killed: User Count Swap USS PSS RSS mail 1 0 184 188 356 izzy 1 1400 708 739 1080 whoopsie 1 848 668 826 1772 colord 1 3204 804 888 1728 root 62 54876 131708 149950 267860 And after a restart, back in X: User Count Swap USS PSS RSS mail 1 0 212 217 628 whoopsie 1 0 1536 1880 5096 colord 1 0 3740 4217 7936 root 54 0 148668 180911 345132 izzy 47 0 370928 437562 915056 Edit: Just added two graphs from my monitoring system. Interesting to see: everytime when there's a "jump" in memory consumption, CPU peaks as well. Just found this right now -- and it reminds me of another indicator pointing to X itself: Often when returning to my machine and unlocking the screen, I found something doing heavvy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions: What could be the possible causes? How can I better identify involved processes/applications? What steps could be taken to avoid this behaviour -- short from rebooting the machine all X days? 
I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days uptime, before rebooting for e.g. kernel updates). This is now a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I'm already using for almost 20 years now, in a version which did fine on Hardy).
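    On the question of identifying the processes involved, a couple of hedged starting points (standard tools, nothing specific to this machine): rank processes by resident memory, and check whether the missing RAM is sitting in kernel slab caches rather than in any process at all:

        # Top resident-memory consumers
        ps -eo pid,user,rss,comm --sort=-rss | head -n 15

        # Kernel-side memory that will not show up in any per-process listing
        grep -iE 'slab|sreclaim' /proc/meminfo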

    Read the article

  • Getting Started with NASM

    - by MarkPearl
    Today I got to play with NASM. This is an assembler and disassembler that can be used to write 16-bit, 32-bit & 64-bit programs. Let me say upfront that the last time I looked at assembly code at any depth was when I was studying Computer Science in Pietermaritzburg – ten years ago – and we never ever got to touch any real assembly code so a lot of what I am looking at today is very new to me. The first thing I did was download NASM compiler. This turned out to be a bit more complicated than I thought. Originally I went to http://www.nasm.us/ and downloaded the nasm-2.09.04.zip file which I thought had all I needed. No luck! It seemed to just have the uncompiled code, and from what I could tell I would need to recompile and build it – possibly in c++? Well, I wasn’t going to waste my time with that, so a bit more searching and I found the Win32 (http://www.nasm.us/pub/nasm/releasebuilds/2.09.04/win32/) folder Nasm.exe which I downloaded. Choosing an IDE So, I have NASM compiler but to compile anything you need to pass a string of special characters in the command prompt. That’s fine if I was going to just do one program once every couple of years, but since I am aiming to do quite a bit more exploration of NASM I began searching for an IDE. There were a few options, even apparently Visual Studio with a bit of tweeking could do the job, but from past experience I wanted to avoid the VS route as it can sometimes get confusing. I eventually settled on TextPad which I had used a few years ago for a similar project and it had been simple enough yet powerful enough to do the job. A bit of searching and I found a syntax file for NASM and everything seemed hunky dory. Configuring TextPad to run the NASM Compiler Next was to get TextPad to run the NASM compiler. TextPad has this external tools option that allows one to configure special commands. To simplify the process I first created a bat file in the NASM directory that allowed me to simply compile asm files. The bat file was called as.bat and had just one line of code… nasm -f bin %1.asm -o %1.com -l %1.lst Once I had created as.bat I just needed to go into TextPad and create a tool. I have made a quick video of that just showing you where the various settings are which is viewable below. The 64Bit Problem So I now have an ‘IDE’ linked to my NASM compiler so everything should be fine right? No! Whenever I tried to compile an asm program it compiles fine, but when I try and run it I get an error – “This version of the file is not compatible with the version Windows you’re running. Check your computer’s system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher." Well.. it turns out there are a few complications with having a 64 bit OS! So after searching google and coming to any real solution that I could find other than perhaps attempting to build the code for nasm, I eventually resorted to running a VM with Windows XP on it and putting NASM there… My first hello world program So I attempt my first hello world program as per an example I found… the code was quite simple and is shown below… bits16 org 0x100 jmp main message: db 'Hello World',0ah,0dh,'$' main: mov dx,message mov ah,09 int 21h int 20h Running the build tool from TextPad and everything compiles fine and I now have a console app with helllo world shown. Conclusion It’s very early days with NASM. 
I have been spoilt by Visual Studio and high-level languages, so I assume it will be a painful ride getting into the basics of assembly programming, but I am hoping that at the end of it I will at least have a bit more exposure to a language closer to the metal.
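    For reference, the one-line as.bat above expands to a call like the following for a source file named hello.asm (the file name is just an example):

        nasm -f bin hello.asm -o hello.com -l hello.lst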

    Read the article

  • How to run RCU from the command line

    - by Kevin Smith
    When I was trying to figure out how to run RCU on 64-bit Linux I found this post. It shows how to run RCU from the command line. It didn't actually work for me, so you can see my post on how to run RCU on 64-bit Linux. But, seeing how to run RCU from the command got me started thinking about running RCU from the command line to create the schema for WebCenter Content. That post got me part of the way there since it shows how run RCU silently from the command line, but to do this you need to know the name of the RCU component for WebCenter Content. I poked around in the RCU files and found the component name for WCC is CONTENTSERVER11. There is a contentserver11 directory in rcuHome/rcu/integration and when you look at the contentserver11.xml file you will see <RepositoryConfig COMP_ID="CONTENTSERVER11"> With the component name for WCC in hand I was able to use this command line to run RCU and create the schema for WCC. .../rcuHome/bin/rcu -silent -createRepository -databaseType ORACLE -connectString localhost:1521:orcl1 -dbUser sys -dbRole sysdba -schemaPrefix TEST -component CONTENTSERVER11 -f <rcu_passwords.txt To make the silent part work and not have it prompt you for the passwords needed (sys password and password for each schema) you use the -f option and specify a file containing the passwords, one per line, in the order the components are listed on the -component argument. Here is the output from rcu when I ran the above command. Processing command line ....Repository Creation Utility - Checking PrerequisitesChecking Global PrerequisitesRepository Creation Utility - Checking PrerequisitesChecking Component PrerequisitesRepository Creation Utility - Creating TablespacesValidating and Creating TablespacesRepository Creation Utility - CreateRepository Create in progress.Percent Complete: 0...Percent Complete: 100Repository Creation Utility: Create - Completion SummaryDatabase details:Host Name              : localhostPort                   : 1521Service Name           : ORCL1Connected As           : sysPrefix for (prefixable) Schema Owners : TESTRCU Logfile            : /u01/app/oracle/logdir.2012-09-26_07-53/rcu.logComponent schemas created:Component                            Status  LogfileOracle Content Server 11g - Complete Success /u01/app/oracle/logdir.2012-09-26_07-53/contentserver11.logRepository Creation Utility - Create : Operation Completed This works fine if you want to use the default tablespace sizes and options, but there does not seem to be a way to specify the tablespace options on the command line. You can specify the name of the tablespace and temp tablespace, but they must already exist in the database before running RCU. I guess you can always create the tablespaces first using your desired sizes and options and then run RCU and specify the tablespaces you created. When looking up the command line options in the RCU doc I found it has the list of components for each product that it supports. See Appendix B in the RCU User's Guide.
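    To make the -f part concrete: the passwords file is plain text with one password per line, in the order RCU would otherwise prompt for them, i.e. the sys password followed by one line per schema in -component order. With the single CONTENTSERVER11 component it would contain just two lines (the values below are placeholders):

        MySysPassword
        MyContentSchemaPassword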

    Read the article

  • It's like I'm in recovery mode after update, but I'm not

    - by mawburn
    I used the Ubuntu software updater and updated to the most recent packages. After the last update today, it's like I have gone into recovery mode, but I haven't. I am running UbuntuGNOME First, everything looks like this: Switching to dark mode does nothing. Also, default applications do not work. Such as Startup and the default screenshot application. Everything was working fine before the latest software update. System Info Ubuntu 14.04 LTS Gnome-Shell 3.10.4 Kernel 3.13.0-29 I can't figure out how to get an update history, but this is almost a fresh install. It's about a week old install and this is the 3rd time I've used the Ubuntu Software Update. I am running AMD ATI HD6700 with the proprietary Catalyst drivers. I tried to provide all information that I thought would be useful, if you need any more please let me know. Edit - I believe something went wrong within these updates: Update Log: Start-Date: 2014-06-09 19:07:07 Commandline: aptdaemon role='role-commit-packages' sender=':1.68' Install: libgnome-desktop-3-10:amd64 (3.12.0-0~eugenesan~trusty2) Upgrade: gnome-session-common:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gnome-session-bin:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gir1.2-gnomedesktop-3.0:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), gnome-session:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), python-libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libspice-server1:amd64 (0.12.4-0nocelt2, 0.12.4-0nocelt2.02~eugenesan~trusty1), gir1.2-mutter-3.0:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), xserver-xorg-video-qxl:amd64 (0.1.1-0ubuntu3, 0.1.1-0ubuntu3.01), libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libxml2:i386 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), gnome-desktop3-data:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), mutter:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), mutter-common:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), libxml2-utils:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libmutter0c:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1) End-Date: 2014-06-09 19:07:12 I also installed Citrix Receiver today, following the tutorial here: Citrix Receiver 12.1 on Ubuntu 14.04 64-bit Log Start-Date: 2014-06-09 18:59:06 Commandline: apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386 Install: libmotif-common:amd64 (2.3.4-5, automatic), libatk1.0-0:i386 (2.10.0-2ubuntu2, automatic), libxft2:i386 (2.3.1-2, automatic), libgraphite2-3:i386 (1.2.4-1ubuntu1, automatic), nspluginviewer:i386 (1.4.4-0ubuntu5, automatic), libpango-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcursor1:i386 (1.1.14-1, automatic), libmotif4:i386 (2.3.4-5), libxm4:amd64 (2.3.4-5, automatic), libxm4:i386 (2.3.4-5, automatic), libxp6:i386 (1.0.2-1ubuntu1), libpangocairo-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcb-render0:i386 (1.10-2ubuntu1, automatic), libthai0:i386 (0.1.20-3, automatic), libharfbuzz0b:i386 (0.9.27-1, automatic), libpixman-1-0:i386 (0.30.2-2ubuntu1, automatic), libpangoft2-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libcairo2:i386 (1.13.0~20140204-0ubuntu1, automatic), lib32z1:amd64 (1.2.8.dfsg-1ubuntu1), libjasper1:i386 (1.900.1-14ubuntu3, automatic), libgtk2.0-0:i386 (2.24.23-0ubuntu1.1, automatic), nspluginwrapper:amd64 (1.4.4-0ubuntu5), libuil4:amd64 (2.3.4-5, automatic), libuil4:i386 (2.3.4-5, automatic), libxcb-shm0:i386 (1.10-2ubuntu1, automatic), libxmu6:i386 (1.1.1-1, automatic), libc6-i386:amd64 (2.19-0ubuntu6), libxinerama1:i386 
(1.1.3-1, automatic), libgdk-pixbuf2.0-0:i386 (2.30.7-0ubuntu1, automatic), libxcomposite1:i386 (0.4.4-1, automatic), libmrm4:amd64 (2.3.4-5, automatic), libmrm4:i386 (2.3.4-5, automatic), libdatrie1:i386 (0.2.8-1, automatic), libxrandr2:i386 (1.4.2-1, automatic), libxpm4:i386 (3.5.10-1) End-Date: 2014-06-09 18:59:11
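    On the point about not being able to find an update history: the apt history log usually has it (the stock path on Ubuntu, stated here as an assumption), and it is where entries like the ones quoted above come from:

        less /var/log/apt/history.log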

    Read the article

< Previous Page | 271 272 273 274 275 276 277 278 279 280 281 282  | Next Page >