Search Results

Search found 9318 results on 373 pages for 'rescue disk'.


  • Microsoft Azure News: Capturing VM Images

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2014/05/21/microsoft-azure-news-capturing-vm-images.aspx

    If you have a Virtual Machine (VM) in Microsoft Azure with a specific configuration, it used to be difficult to clone that VM: you had to sysprep the VM and clone the data disks, which was slow, prone to errors, and stopped you from being productive. No more! A new option, called Capture, allows you to easily select a VM, running or not. The capture will copy the OS disk and data disks and create a new image out of them automatically for you. This means you can now easily clone an entire VM without affecting productivity.

    To capture a VM, simply browse to your Virtual Machines in the Microsoft Azure management website, select the VM you want to clone, and click the Capture button at the bottom. A window will come up asking you to name your image. It took less than 1 minute for me to build a clone of my server. And because it is stored as an image, I can easily create a new VM with it. So that's what I did, and that took about 5 minutes total. That's amazing.

    To create a new VM from your image, click the NEW icon (bottom left), select Compute/Virtual Machine/From Gallery, and select My Images from the left menu when selecting an image. You will find your newly created image there. Because this is a clone, you will not be prompted for a new login; the user id/password is the same.

    About Herve Roggero: Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.
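    For readers who prefer scripting the same operation, a minimal sketch using the classic Azure cross-platform CLI of that era is shown below. The VM and image names are hypothetical placeholders, and the exact command form and flags may differ by CLI version, so treat this as an illustration rather than the documented procedure from the post (the post itself uses the management website).

        # Capture a VM into a reusable image ("myvm" and "myvm-image" are placeholders).
        # Depending on CLI version, the VM may need to be shut down and generalized first.
        azure vm capture myvm myvm-image --delete

        # Create a new VM from the captured image.
        azure vm create mynewvm myvm-image myuser --location "East US"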

    Read the article

  • How to Add a Taskbar to the Desktop in Ubuntu 14.04

    - by Lori Kaufman
    If you've switched to Ubuntu from Windows, it may take some time to get used to the new and different interface. However, you can easily incorporate a familiar Windows feature, the Taskbar, into Ubuntu to make the transition easier. A tool called Tint2 provides a bar at the bottom of the Ubuntu Desktop that resembles the Windows Taskbar. We will show you how to install it and make it start every time you log into Ubuntu.

    NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.

    Press Ctrl + Alt + T to open a Terminal window. To install Tint2, type the following line at the prompt and press Enter:

        sudo apt-get install tint2

    Type your password at the prompt and press Enter. The progress of the installation displays, and then a message displays saying how much disk space will be used. When asked if you want to continue, type "y" and press Enter. When the installation has finished, close the Terminal window by typing "exit" at the prompt and pressing Enter.

    Click the Search button at the top of the Unity bar and start typing "startup applications" in the Search box. Items that match what you type start displaying below the Search box. When the Startup Applications tool displays, click the icon to open it. On the Startup Applications Preferences window, click Add. On the Add Startup Program dialog box, enter a name for the startup application; this name displays in the list on the Startup Applications Preferences window. Type "tint2" in the Command edit box, enter a description in the Comment edit box if desired, and click Add. Tint2 is added as a startup program and will start every time you log into Ubuntu. Click Close to close the Startup Applications Preferences window.

    Log out and log back in to make the Taskbar available on the desktop. You do not need to reboot the computer for this change to take effect. Now, when you minimize a program, an icon for it displays on the Taskbar at the bottom of the screen, just like the Taskbar in Windows. If you decide that you don't want the Taskbar to display every time you log into Ubuntu, you can uncheck the Tint2 startup program on the Startup Applications Preferences window; you don't need to delete it from the list.
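    For readers who would rather skip the Startup Applications GUI, the same result can be achieved from the terminal by dropping a .desktop entry into the standard freedesktop autostart directory. This is a hedged sketch of that mechanism, not part of the original article:

        # Install tint2, then register it to start on login for the current user.
        sudo apt-get install tint2
        mkdir -p ~/.config/autostart
        printf '%s\n' '[Desktop Entry]' 'Type=Application' 'Name=Tint2 taskbar' \
            'Exec=tint2' 'Comment=Windows-like taskbar' \
            > ~/.config/autostart/tint2.desktop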

    Read the article

  • How do I set image position in conky

    - by realitygenerator
    I copied and modified an existing .conkyrc file from the Ubuntu forum and I'm trying to place the LinuxMint logo in a specific position. Below are my conkyrc file and the screenshot:

        # UBUNTU-CONKY
        # A comprehensive conky script, configured for use on
        # Ubuntu / Debian Gnome, without the need for any external scripts.
        #
        # Based on conky-jc and the default .conkyrc.
        # INCLUDES:
        # - tail of /var/log/messages
        # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
        #
        # -- Pengo
        #

        # Create own window instead of using desktop (required in nautilus)
        own_window yes
        own_window_type desktop
        own_window_transparent yes
        own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager

        # Use double buffering (reduces flicker, may not work for everyone)
        double_buffer yes

        # fiddle with window
        use_spacer right

        # Use Xft?
        use_xft yes
        xftfont URW Gothic:size=8
        xftalpha 0.8
        text_buffer_size 2048

        # Update interval in seconds
        update_interval 3.0

        # Minimum size of text area
        # minimum_size 250 5

        # Draw shades?
        draw_shades no

        # Text stuff
        draw_outline no # amplifies text if yes
        draw_borders no
        uppercase no # set to yes if you want all text to be in uppercase

        # Stippled borders?
        stippled_borders 3

        # border margins
        border_margin 9

        # border width
        border_width 10

        # Default colors and also border colors, grey90 == #e5e5e5
        default_color grey
        own_window_colour brown
        own_window_transparent yes

        # Text alignment, other possible values are commented
        #alignment top_left
        #alignment top_right
        #alignment bottom_left
        #alignment bottom_right.
        alignment top_middle

        # Gap between borders of screen and text
        gap_x 10
        gap_y 10

        #Display temp in fahrenheit
        temperature_unit fahrenheit

        #Choose which screen on which to display
        # stuff after 'TEXT' will be formatted on screen
        TEXT
        $color
        ${color green}SYSTEM ${hr 2}$color
        $nodename $sysname $kernel on $machine
        LinuxMint 11 "Katya" (Oneric)
        ${image ~/Conky/Logo_Linux_Mint.png -s 80x60 -f 86400}

        ${color green}CPU ${hr 2}$color
        ${freq}MHz Load: ${loadavg} Temp: ${hwmon temp 1}
        $cpubar
        ${cpugraph 000000 ffffff}
        NAME PID CPU% MEM%
        ${top name 1} ${top pid 1} ${top cpu 1} ${top mem 1}
        ${top name 2} ${top pid 2} ${top cpu 2} ${top mem 2}
        ${top name 3} ${top pid 3} ${top cpu 3} ${top mem 3}
        ${top name 4} ${top pid 4} ${top cpu 4} ${top mem 4}

        ${color green}MEMORY / DISK ${hr 2}$color
        RAM: $memperc% ${membar 6}$color
        Swap: $swapperc% ${swapbar 6}$color
        Root: ${fs_free_perc /}% ${fs_bar 6 /}$color
        hda1: ${fs_free_perc /media/sda1}% ${fs_bar 6 /media/sda1}$color

        ${color green}NETWORK (${addr eth1}) ${hr 2}$color
        Down: $color${downspeed eth1} k/s ${alignr}Up: ${upspeed eth1} k/s
        ${downspeedgraph eth1 25,140 000000 ff0000} ${alignr}${upspeedgraph eth1 25,140 000000 00ff00}$color
        Total: ${totaldown eth1} ${alignr}Total: ${totalup eth1}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

        ${color green}LOGGING ${hr 2}$color
        ${execi 30 tail -n3 /var/log/messages | awk '{print " ",$5,$6,$7,$8,$9,$10}' | fold -w50}

        ${color green}FORTUNE ${hr 2}$color
        ${execi 120 fortune -s | fold -w50}

    I want to put the Mint logo right after the word "(Oneric)". Any help would be greatly appreciated.
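    As an aside for readers with the same problem: conky's ${image} object accepts a position flag, so the logo can be pinned to explicit x,y coordinates instead of flowing with the text. A hedged one-line sketch (the coordinates are made up and would need tuning to land next to the distribution name):

        ${image ~/Conky/Logo_Linux_Mint.png -p 120,40 -s 80x60 -f 86400}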

    Read the article

  • I need help installing Ubuntu 11.10 to multi-drive system

    - by CookyMonzta
    I have a machine with 3 hard drives: the primary, which is 750GB (drive 0), and 2 others, each of which is 640GB (drives 1 and 2). On the last screen before the actual installation begins, this is how my hard drive configuration looks:

        /dev/sda [DISK0, 750GB]
          /dev/sda1   ntfs  104MB      [Win7 System Reserved]
          /dev/sda2   ntfs  499,997MB  [Windows 7 Pro]
          free space        250,052MB  [This space intended for Windows 8]
        /dev/sdb [DISK1, 640GB]
          /dev/sdb1   ntfs  400,085MB  [Windows XP Pro]
          free space        240,049MB  [This space intended for Ubuntu]
        /dev/sdc [DISK2, 640GB] [This drive intended for various backups]
          free space        160,033MB
          /dev/sdc5   ntfs  480,101MB  [Acronis Secure Zone]

    As you can see, I have 3 drives, all SATA. I have Win7 on my first drive (0), WinXP on my second drive (1) and a secure zone for daily backups on my third drive (2). I want to put Ubuntu 11.10 Oneiric Ocelot on the drive that also has XP. I've already used 400GB for XP and I have 240GB remaining, in which my intention was to create a 4GB swap partition and use the rest for Ubuntu itself. This is what my second hard drive looked like, for my intended setup before installation:

        /dev/sdb
          /dev/sdb5   swap  4,095MB    [Linux swap]
          /dev/sdb6   ext4  235,951MB  [Ubuntu 11.10]

    Needless to say, this is only the second time I have attempted to install Linux; I managed to get 7.10 Gutsy Gibbon working on an old machine. I have two problems with this installation:

    1. Ubuntu asks for a location to install the boot loader (i.e., "Device for Boot Loader Installation"). I already have a boot loader, namely Acronis OS Selector (from Acronis Disk Director 11), so I decided to put the Ubuntu boot loader in /dev/sdb6 (where I intend to install Ubuntu) to keep it from interfering with my Acronis OS Selector.

    2. Once I hit "Install now", I ended up with the following error: "No root file system is defined. Please correct this from the partitioning menu."

    What am I missing? Did I attempt to put the boot loader in the wrong place? I assume I did, because as I am writing this entry, I am looking at LinuxIdentity.com's Ubuntu 11.04 Natty Narwhal magazine, and I see a screenshot (Figure 7 on Page 13) that implies that the boot loader can be installed anywhere, including the first hard drive (in the MBR, which would obviously force me to reinstall the Acronis OS Selector) or even on a floppy. But why do I get an undefined root file system error? I thought /dev/sdb6 was the root file system. Obviously I'm missing something in the installation procedure. Should I try installing it in Windows using the WUBI installer? I assume that, if I attempt to install Ubuntu from WinXP (on the second drive), it will automatically install Ubuntu on the empty partition alongside XP. But will I have the option of creating a swap partition? And what if the WUBI installer searches all of my drives and decides to install Ubuntu on my first drive's empty partition (which I have left empty for Win8 upon its release)?

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk.

    The CloudTran product lets developers easily use an IMDG for fully ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and backup data centers, again ensuring ACID properties.

    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap the logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss in ACID properties.

    Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services, but it introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly.

    For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected].

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Technorati Tags: Coherence, cloudtran, cache, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Help identify the pattern for reacting on updates

    - by Mike
    There's an entity that gets updated from external sources. Update events arrive at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, the most current state of the entity needs to be processed. There's a point of no return during processing, where the current state of the entity (and the state is consistent, i.e. no partial update is made) is saved somewhere else, and processing goes on independently of any arriving updates.

    Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one running processing instance (before the point of no return), i.e. the entity state should not be processed more than once.

    So what I'm looking for is a pattern to cancel current processing before the point of no return, or to abandon the processing results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database with some files on disk, and the system is in .NET with web services and message queues.

    What comes to my mind is a database queue-like table. An arriving update inserts a row in that table and the processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no return.

    Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I would need to access the queue API at the point of no return to check for the existence of new messages, and this approach also lacks elegance. Is there a name for this pattern and an existing solution?

    Read the article

  • Building a Solaris 11 repository without network connection

    - by user12611852
    Solaris 11 has been released and is a fantastic new iteration of Oracle's rock solid, enterprise operating system. One of the great new features is the repository-based Image Packaging System. IPS not only introduces new cloud-based package installation services, it is also integrated with our zones, boot environment and ZFS file systems to provide a safe, easy and fast way to perform system updates.

    My customers typically don't have network access and, in fact, can't connect to any network until they have "Authority to connect." It's useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and a locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository, with full explanations of what the commands do; I'm simply providing the quick and dirty steps.

    The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive. Then, as root:

        pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris

    Now you can install software using the GUI package manager or the pkg commands. If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work:

    1. After installing Solaris 11, download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site.
    2. Sneaker-net the files to your Solaris 11 system.
    3. Unzip and cat the two files together to create one large ISO image. The file is about 6.9 GB in size.
    4. Create a ZFS file system for it and mount the image:

        zfs create rpool/export/repoSolaris11
        zfs set atime=off rpool/export/repoSolaris11
        zfs set compression=on rpool/export/repoSolaris11   # save some space
        lofiadm -a sol-11-1111-repo-full.iso /dev/lofi/1
        mount -F hsfs /dev/lofi/1 /mnt

    You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not be persistent across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location:

        rsync -aP /mnt/repo /export/repoSolaris11
        pkgrepo -s /export/repoSolaris11 refresh
        pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris

    You now have a locally installed repository for adding additional software packages for Solaris 11. The documentation also takes you through publishing your repository on the network so that others can access it.
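    As a hedged follow-on (this is the part the post defers to the documentation, so the property names below should be checked against your release), serving the same on-disk repository over HTTP with the bundled pkg depot service looks roughly like this:

        svccfg -s application/pkg/server setprop pkg/inst_root=/export/repoSolaris11/repo
        svccfg -s application/pkg/server setprop pkg/readonly=true
        svcadm refresh application/pkg/server
        svcadm enable application/pkg/server

        # Clients would then point at the depot, e.g. (server name is a placeholder):
        pkg set-publisher -G '*' -g http://<server>/ solaris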

    Read the article

  • Live-Ubuntu 12.04 ran fine, now stopped booting!

    - by user89743
    I've seen similar problems to this several times in the forum, but mine is a bit different, so the other posts I saw were no help to me. When I boot Ubuntu 12.04 64-bit from a live SD card (3GB persistence) I suddenly get this error:

        (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: Invalid argument
        Can not mount /dev/loop/0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    (It says I can type "help" for commands, but I don't know anything about how to go on from there; I'm totally new to Linux.)

    The reason I say my case is different is because my Ubuntu worked fine for over a week, even pretty fast, and now this problem has happened. Before that I used to run my live Ubuntu from USB sticks, but that was slower (especially booting, which took 15 minutes from a USB stick!). Also, I kept getting the same problem as above after a while when booting, and had to re-create a live USB Linux several times. Installing on the hard drive is not an option because my hard drive has physical damage and getting a replacement will take a while; therefore I can only use live-USB or live-SD-card Ubuntu.

    As I said, I used Ubuntu without problems for more than a week, and before that for several weeks on USB sticks, but the above problem occurred sooner or later. This time I paid attention to when it happened: I was rebooting my computer (HP 620 laptop, 4 GB RAM, 64-bit system) from the SD card, and when I was booting I selected F6 and then the first option, "no acpi" or something like that... I had used it before and noticed it slowed down the time it took Linux to boot. This time it caused this error. Now even when I boot normally/default I get this error.

    Now I'm accessing Ubuntu from my USB stick without a persistence file. When I check my SD card, all the files mentioned in the error message are there, and filesystem.squashfs is 691.2 MB, so nothing seems to have been deleted by accident. (I have already made many changes/downloaded programs to my SD-card-persistent Ubuntu and would hate to lose them, since downloading is expensive for me, and since the problem seems to re-occur...)

    Can anyone help me, preferably without having to create another startup disk on my SD card? I'm totally new to this. Sorry for the long post, I just didn't know what info is relevant and what isn't! Kon
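    A hedged first diagnostic for anyone hitting the same message (my addition, not from the original post): the "Invalid argument" mount failure usually means the squashfs image itself is damaged, which can be checked from the working live USB session before rebuilding the SD card.

        # Verify the files on the SD card against the checksums shipped on the media
        # (the mount path is a placeholder for wherever the SD card is mounted).
        cd /media/sdcard && md5sum -c md5sum.txt 2>/dev/null | grep -v ': OK'

        # Or try loop-mounting the squashfs directly; an error here confirms corruption.
        sudo mount -o loop,ro /media/sdcard/casper/filesystem.squashfs /mnt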

    Read the article

  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once the write() completes, the data is in the file, whereas for fprintf() it may take a while for the file to get updated to reflect the output. This results in a significant performance difference - the write works at disk speed. The following is a program to test this:

        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <sys/stat.h>

        static double s_time;

        void starttime()
        {
          s_time = 1.0 * gethrtime();
        }

        void endtime(long its)
        {
          double e_time = 1.0 * gethrtime();
          printf("Time per iteration %5.2f MB/s\n",
                 (1.0 * its) / (e_time - s_time * 1.0) * 1000);
          s_time = 1.0 * gethrtime();
        }

        #define SIZE 10*1024*1024

        void test_write()
        {
          starttime();
          int file = open("./test.dat", O_WRONLY | O_CREAT, S_IWGRP | S_IWOTH | S_IWUSR);
          for (int i = 0; i < SIZE; i++)
          {
            write(file, "a", 1);
          }
          close(file);
          endtime(SIZE);
        }

        void test_fprintf()
        {
          starttime();
          FILE *file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++)
          {
            fprintf(file, "a");
          }
          fclose(file);
          endtime(SIZE);
        }

        void test_flush()
        {
          starttime();
          FILE *file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++)
          {
            fprintf(file, "a");
            fflush(file);
          }
          fclose(file);
          endtime(SIZE);
        }

        int main()
        {
          test_write();
          test_fprintf();
          test_flush();
        }

    Compiling and running, I get 0.2MB/s for write() and 6MB/s for fprintf() - a large difference. There are three tests in this example; the third test uses fprintf() and fflush(). This is equivalent to write() both in performance and in functionality. Which leads to the suggestion that fprintf() (and other buffering I/O functions) are the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents.

    Read the article

  • Update Errors in Xubuntu 12.10

    - by wil
    I updated my computer from 12.04 to 12.10, and after I finished updating, when I turned on my computer I was unable to update it further. I tried to install a new copy of 13.04, but my CPU doesn't support PAE. I have an IBM ThinkPad T42 with a 1.7 GHz CPU. This is the output when updating through the terminal:

        $ sudo apt-get upgrade
        [sudo] password for wil:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         linux-image-extra-3.5.0-34-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
         linux-image-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
        E: Unmet dependencies. Try using -f.

        $ sudo apt-get upgrade -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following NEW packages will be installed:
          linux-image-3.5.0-34-generic
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/11.8 MB of archives.
        After this operation, 25.9 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 191530 files and directories currently installed.)
        Unpacking linux-image-3.5.0-34-generic (from .../linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb) ...
        This kernel does not support a non-PAE CPU.
        dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        wil@wil-ThinkPad-T42:~/Desktop$
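    A hedged workaround sketch for readers in the same position (my suggestion, not from the original post, and worth double-checking before running): since the 12.10 generic kernel is PAE-only, the failing upgrade can be taken out of the loop by removing the PAE kernel packages and keeping the older, still-installed non-PAE kernel.

        # Remove the PAE-only kernel that cannot be installed on this CPU,
        # along with the metapackages that keep pulling it in.
        sudo apt-get remove linux-image-generic linux-image-extra-3.5.0-34-generic
        sudo dpkg --purge linux-image-3.5.0-34-generic
        sudo apt-get -f install

        # Confirm which kernel is running and which remain installed.
        uname -r
        dpkg -l 'linux-image*' | grep '^ii'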

    Read the article

  • Ubuntu 12.04 updates fail recently - Please help

    - by user74152
    I upgraded Ubuntu 11.10 to 12.04 LTS immediately after its release (April 2012). Since then, updates (new kernels and others) succeeded regularly, but recently updates have suddenly started failing consistently. What causes the problem, and how can it be solved? Terminal information after the last update attempt:

        ariel@ariel-MS-7592:~$ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up linux-image-3.2.0-26-generic (3.2.0-26.41) ...
        Running depmod.
        update-initramfs: deferring update (hook will be called later)
        Examining /etc/kernel/postinst.d.
        run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
        run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
        update-initramfs: Generating /boot/initrd.img-3.2.0-26-generic
        run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
        run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
        run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic
        /usr/sbin/grub-mkconfig: 11: /etc/default/grub: splash”: not found
        run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-26-generic.postinst line 1010.
        dpkg: error processing linux-image-3.2.0-26-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of linux-image-generic:
         linux-image-generic depends on linux-image-3.2.0-26-generic; however:
          Package linux-image-3.2.0-26-generic is not configured yet.
        dpkg: error processing linux-image-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-generic:
         linux-generic depends on linux-image-generic (= 3.2.0.26.28); however:
          Package linux-image-generic is not configured yet.
        dpkg: error processing linux-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         linux-image-3.2.0-26-generic
         linux-image-generic
         linux-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)
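    A hedged reading of the log (my diagnosis, not the poster's): the failing step is not the kernel itself but grub-mkconfig choking on line 11 of /etc/default/grub, where a typographic closing quote (splash”) has replaced a plain double quote. A sketch of the fix:

        # Inspect the offending line; it should use straight quotes.
        sed -n '11p' /etc/default/grub

        # Expected form (straight quotes):
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        sudo nano /etc/default/grub    # fix the quotes on line 11

        # Then let the interrupted kernel configuration finish.
        sudo dpkg --configure -a
        sudo update-grub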

    Read the article

  • AWS EC2 Oracle RDB - Storing and managing my data

    - by llaszews
    When you create an Oracle database on the Amazon cloud, you will need to store your database files somewhere on the EC2 cloud. There are basically three places where database files can be stored:

    1. Local drive - the local drive that is part of the virtual server EC2 instance.
    2. Elastic Block Storage (EBS) - network-attached storage that appears as a local drive.
    3. Simple Storage Service (S3) - 'Storage for the Internet'. S3 is not high speed and is intended for storing static, document-type files. S3 can also be used for storing static web page files.

    Local drives are ephemeral, so they are not appropriate to be used as a database storage device. That leaves EBS, which is the best place to store database files. EBS volumes appear as local disk drives; they are actually network-attached to an Amazon EC2 instance. In addition, EBS persists independently of the running life of a single Amazon EC2 instance. If you use an EBS-backed instance for your database data, it will remain available after a reboot but not after a terminate. In many cases you would not need to terminate your instance but only stop it, which is the equivalent of a shutdown. In order to save your database data before you terminate an instance, you can snapshot the EBS volume to S3.

    Using EBS as a data store, you can move your Oracle data files from one instance to another. This allows you to move your database from one region or zone to another. Unfortunately, to scale out your Oracle RDS on AWS you cannot have read-only replicas; this is only possible with the other Oracle relational database, MySQL. The free micro instances use EBS as their storage.

    This is a very good white paper that has more details: AWS Storage Options. The white paper also discusses SQS, SimpleDB, and Amazon RDS in the context of storage devices; however, these are not storage devices you would use to store an Oracle database. This slide deck covers much of the information that is in the white paper: AWS Storage Options slideshow.
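    For concreteness, a minimal sketch of the snapshot-and-move workflow using the modern AWS CLI (the volume, snapshot, and instance IDs are placeholders, and the article may predate this tool, so treat it as illustrative only):

        # Snapshot an EBS volume (stored durably in S3 behind the scenes).
        aws ec2 create-snapshot --volume-id vol-0abc123 --description "oracle datafiles backup"

        # Recreate a volume from the snapshot in another availability zone,
        # then attach it to a different instance.
        aws ec2 create-volume --snapshot-id snap-0def456 --availability-zone us-east-1b
        aws ec2 attach-volume --volume-id vol-0new789 --instance-id i-0123abcd --device /dev/sdf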

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block level using secure hashing (for a negligible chance of collisions) such as SHA256 or TTH. Duplicate blocks need not even touch the disk.

    The idea is that I should be able to just copy /home/<user> to an external HDD with the same such filesystem to do a backup. Simple. No messing around with incremental backups, where corruption to any of the snapshots will nearly always break all later snapshots, and no need to use a specific tool to delete or 'checkout' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing up again!

    I don't mind a performance hit; reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication, as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some filesharing software may also be able to benefit by integrating with the FS.)

    So what's the best way of doing this? I've looked at some options:

    - lessfs - looks unmaintained. Any good?
    - Opendedup/SDFS (http://www.opendedup.org/) - Java? Could I use this on Android?! What does SDFS stand for?
    - Btrfs (https://en.wikipedia.org/wiki/Btrfs#Features) - some patches floating around on mailing list archives, but no real support.
    - ZFS (https://en.wikipedia.org/wiki/ZFS#Linux) - hopefully they'll one day relicense under a true Free/Opensource GPL-compatible licence.

    Also, 2 years ago I had a go at an attempt in Python using FUSE at the file level, to be used over the top of a typical solid FS such as ext4, but I found FUSE for Python underdocumented and didn't manage to implement all of the system calls.
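    Of the options listed, ZFS is the one where block-level inline dedup with a secure hash is a shipping feature, so here is a hedged sketch of what that looks like (pool and disk names are placeholders, and note that the dedup table wants to live entirely in RAM, which is ZFS dedup's well-known cost):

        # Create a pool on the external drive and enable SHA256-based dedup.
        sudo zpool create backup /dev/sdc
        sudo zfs set dedup=sha256 backup
        sudo zfs set compression=on backup

        # Copies into the pool now dedup identical blocks before they hit disk.
        rsync -aP /home/<user>/ /backup/home-backup/

        # Inspect the achieved dedup ratio.
        zpool list backup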

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to eventually succumb. I was also a little surprised at the failure of various tools to remove it. The virus would redirect the browser to websites including 'licosearch', 'hugosearch' and 'facebook', and the disk would be thrashing away, infecting dlls in some way.

    I had the full, up-to-date version of McAfee installed. It identified that there was an issue in some dlls on the system and was able to 'fix' them, but they kept getting re-infected. So I installed Microsoft Security Essentials, and this too was able to identify and 'fix' the infected dlls. The system scans take forever, and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos, to no avail.

    Eventually I thought I'd investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe, which I'm guessing would do bad things, including infecting as many dlls on the system as possible. I removed Firefox, and the virus cleverly then launched 3 instances of Chrome! So I uninstalled Chrome, and yes, it then started to launch 3 instances of iexplore.exe. If I'm honest, by this stage I was just seeing if it would be able to use any of the browsers!

    As it was starting these on reboot, I looked in my User Startup folder, and there was a <randomly named>.exe and several log files. I deleted these and rebooted; when I looked, they had been recreated. So I then looked in the registry Run and RunOnce entries under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough, there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted these and rebooted, and it was fixed.

    I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe, so I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon, and this entry had also been changed. Finally, I ran a full system scan to clean up any infected dlls. I hope it's gone for good!

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up package processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question, and second I will give some answers I have already found.

    Question: How do I find out which packages I have never used at all? For example, I always use VLC, so I could remove the totem package (which I could have used some day, yes). Of course, package dependencies could force me to have programs installed which I will never use.

    Notes:

    - Find the packages which consume much space via Synaptic: select "Status" in the lower left, select "Installed" in the upper left, and sort the column on "Size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use Ubuntu Tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use Thunderbird), aptitude wants to also remove "ubuntu-desktop" and 756 other packages, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens shows me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.

    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that covers only a few aspects. One answer there suggests a program called unusedpkg, but the link seems down. There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile in 11.10. I hacked it to compile, but the results are unusable: for example, the g++ package was marked as not used for 203 days, but I had actually used it seconds ago to compile Kleen itself ;) So don't use this tool.

    On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately, I didn't enable the popularity contest, so I can't find this log file.
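    One more hedged idea along the same lines (my addition, with the usual caveats): if the filesystem keeps access times (relatime, the Ubuntu default, still refreshes atime at least daily), the binaries a package ships can be checked for recent reads, giving a rough "last used" signal per package.

        # Binaries not read in the last 180 days (useless on noatime mounts).
        find /usr/bin -type f -atime +180

        # Map a stale binary back to its owning package.
        dpkg -S /usr/bin/totem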

    Read the article

  • Exalytics OBI11g Partner Training 3-day hands-on Workshops

    - by Mike.Hallett(at)Oracle-BI&EPM
    These hands-on workshops, FREE to OPN Partners, highlight both the hardware and software components that are engineered to work together to deliver Oracle Exalytics: an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation (OBI 11g v11.1.1.6 and Essbase) with enhanced visualization capabilities and performance optimizations. Priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam, and to Partners who have purchased an Exalytics for their own data centres to demonstrate it to their clients.

    Topics covered will include:

    - Exalytics Architectural Overview
    - Upgrade and Lifecycle Management
    - TimesTen for Exalytics
    - Summary Advisor Utility
    - Essbase and EPM System on Exalytics
    - Dashboard and Analysis Interactions
    - OBIEE 11.1.1.6 Features and Advanced Topics

    After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics.

    Prerequisites:

    - Experience with and understanding of OBIEE 11g is required.
    - Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended, and priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam.
    - Good understanding of data warehousing and data modelling for reporting and analysis purposes.
    - Strong experience with database technologies preferred.

    Attendees must provide their own laptops, which must meet the following minimum hardware/software requirements.

    Hardware:

    - Minimum 8GB RAM
    - 60 GB free disk space (includes staging)
    - USB 2.0 port (at least one available)
    - It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily.

    Software:

    - One of the following operating systems: a 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.)
    - Internet Explorer 7.x/8.x or Firefox 3.5.x
    - WinRAR or 7-Zip utility to unzip workshop files, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/
    - Oracle VirtualBox 4.0.2 or higher, downloadable from http://www.virtualbox.org/wiki/Downloads
    - CPU virtualization mode needs to be enabled; we will provide guidance on the day of the workshop.

    Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.

    Register here for the 3-day workshops:

    - 11-Dec-12 Birmingham, UK
    - 29-Jan-13 Utrecht, NL
    - 12-Feb-13 Frankfurt, Germany
    - 12-Mar-13 Moscow, Russia

    Read the article

  • Draw images with warped triangles on a web server [migrated]

    - by epologee
    The scenario: the Flash front end of my current project produces images that a web server needs to combine into a video. Both the frame rate and frame resolution are sizeable enough that sending an image sequence to the back end is not feasible (in both time and client bandwidth). Instead, we're trying to recreate the image drawing on the back end as well.

    Correct and slow, or incorrect and fast: the problem is that this involves quite a bit of drawing of textured triangles, and two solutions we found in Python (here and there) are so inefficient that the drawing takes about 60 seconds per frame, resulting in a whopping 7.5 hours of processing time for a 30-second clip. Unacceptable. When using a PHP module to send commands to ImageMagick for image manipulation, the whole process is super fast (tenths of a second per frame), but ImageMagick seems to be unable to draw triangles the way we do it in the front end, so the final results do not match. Unacceptable. What I'm asking here is whether someone knows a way to solve this issue, by any means necessary, that would run on a web server.

    Warping an image - let me explain the process of the front end:

    1. Perform a Delaunay calculation on points in an image to get an evenly distributed mesh of triangles.
    2. Offset the points/vertices in the mesh, distorting or warping the image.
    3. Draw the warped triangles on a new bitmap.

    We can send the results (coordinates) of steps 1 and 2 to the back end, to then draw the warped triangles and save the result to an image on disk (or append it as a frame to the video). But that last step is what I need help with.

    The question: is there an alternative to ImageMagick that can draw triangles in a bitmap? Is there some other library, like a C library, that would allow us to do this? Or could we achieve this effect more easily by switching back-end technologies, like Ruby? (.NET and Java are, unfortunately, not really options right now.)

    Many thanks. EP.

    P.S. I'd appreciate re-tagging efforts; I don't quite know what labels to put on this question. Thanks!
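    One hedged avenue that stays inside ImageMagick (my suggestion, not something from the question): a textured-triangle warp can be expressed as a 3-point affine distortion per triangle, masked to the triangle's destination shape and composited onto the output frame. Paths, sizes, and coordinates below are placeholders.

        # Warp the source by the affine map taking (10,10)->(15,12) etc. for one triangle.
        convert src.png -alpha set -virtual-pixel transparent \
          -distort Affine '10,10 15,12  200,30 210,40  60,180 55,190' warped.png

        # Keep only the destination triangle, then composite it onto the frame.
        convert -size 640x480 xc:none -fill white \
          -draw 'polygon 15,12 210,40 55,190' mask.png
        convert warped.png mask.png -compose DstIn -composite tri.png
        convert frame.png tri.png -compose Over -composite frame.png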

    Read the article

  • Why is everything crashing?

    - by Kopkins
    I've been using Ubuntu for a while now and I love it; I wouldn't think of using another OS unless I can't fix this issue. The install I'm on is only around a month and a half old. I'm running 12.04 64-bit on an 8,1 MBP. Up until around 2 weeks ago, everything was running smoothly. Then applications started crashing and weird things started happening.

    At first I thought it was just certain applications. The first thing to start giving me trouble was Compiz: occasionally Compiz will stop decorating windows and lose many other functionalities. Running compiz --replace fixes this, but I don't feel like doing that once a day or more. The other thing is that after running compiz --replace, my conky window gets lost somewhere, and so I run killall conky && conky -c .conkyrc.

    But this isn't just a couple of applications; it seems to be proliferating through my system. Last week, fontforge started crashing while doing whatever task, so I ended up unable to finish what I was working on; I didn't find a fix. Today, Rhythmbox started crashing: whenever I try to play anything, Rhythmbox becomes unresponsive and needs to be forced to close. When I try to do certain things with the Disk Utility, it crashes. I get the "Ubuntu has experienced an internal error" message much more often than I would like. Frequently, applications stop appearing in the launcher; Wine almost never does anymore. After not being active for a little while, Thunderbird can only fetch my mail after I restart wireless (sudo rmmod b43 && sudo modprobe b43). Occasionally some of my startup apps don't start.

    What is my best option here? Could these be bugs? I don't want to submit a ton of vague bug reports. Reinstall? Switch OS? Thank you to anyone who responds. Kopkins
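    A hedged triage sketch for this kind of system-wide flakiness (my addition, not from the thread): when unrelated applications all start crashing, failing RAM or a failing disk is a common root cause, so it is worth ruling out hardware before reinstalling.

        # Crash reports collected so far.
        ls -l /var/crash

        # Kernel-level complaints (I/O errors, segfaults, OOM kills).
        dmesg | grep -iE 'error|segfault|oom'

        # SMART health of the disk (package: smartmontools).
        sudo smartctl -H /dev/sda

        # Memory test: reboot and pick "Memory test (memtest86+)" from the GRUB menu.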

    Read the article

  • How do you turn on the customizable gnome-panel features (like gnome-applets) in Precise?

    - by chriv
    I resurrected a broken laptop today. I took out the HDD, put it in a USB 3.0 enclosure, and created a VM that would use it. It was running Lucid. I took a screenshot of the desktop before I started "do-release-upgrade", because from experience, I will never have my GUI back the way I want it again. I know how to install gnome-panel to get back the "GNOME Classic" session option. I know how to put my minimize, maximize, and close buttons back in the upper-right hand corner of windows (where they belong). I know how to use gdm instead of lightdm. Unity gets worse in every version (and the other desktop OS is going to be even worse with Metro).

    Here's what I don't know (in order of importance):

    1. How do you make the panels in GNOME (gnome-panel, to be precise) customizable again (like they were in older versions of Ubuntu)?
    2. How do you install applets in the panels now (right-click is now ignored)?
    3. How can you customize all of the window elements (like you could in older versions of Ubuntu)?

    I can't remember much about Maverick, Natty, or Oneiric (except their names), so I don't know exactly when I lost these capabilities.

    Edit (no screenshot): my StackExchange reputation (on other StackExchange sites) doesn't carry over to this site, so I can't post the screenshot inline. Take a look at the panels in the screenshot: they are nice, compact, and VERY functional (disk mounter applet, frequently used shortcuts, workspaces, show desktop, kill window, and trash icons, etc.). Notice how small the fonts are (and how little real estate they waste). You can't see the compact title bars, fonts, and window icons in this screenshot (since I redacted the rest of the desktop), but it's the same story there. Please help. I don't want to learn another distro, but Ubuntu gets less customizable with every "upgrade." Screenshot (not an inline image, since I don't have the reputation yet): i.stack.imgur.com/puoUT.png

    Read the article

  • Weird Ubuntu Desktop Boot Partition On External Hard Drive

    - by Magnitus
    I have a Thinkpad with Windows 7. Last time I installed an Ubuntu/Windows dual boot, Windows was never same after and regularly got corrupted so this time, I installed Ubuntu on a separate external hard drive. I took a 500 GB external hard drive and used Windows to shrink the partition on it to 400 GB, freeing 100 GB to install Ubuntu. Then I modified the booting priority of my computer to boot from the external hard drive if present. Then, I installed Ubuntu desktop on the external hard drive using a DVD, picked the most simplistic partitioning scheme I could get away with (didn't go auto as it didn't include the external hard drive as a choice) and voilà. Fast forward some time and I'm trying to refresh my understanding of Linux partitions to install a bunch of servers, so I'm looking at the current partitioning scheme on my external hard drive and find the boot partition puzzling... sda is my integrated hard drive with Windows 7. sdb is my Ubuntu desktop external hard drive. Running parted on sdb, I get this: (parted) print Model: WD My Passport 0740 (scsi) Disk /dev/sdb: 500GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 393GB 393GB primary ntfs boot 2 393GB 500GB 107GB extended 5 393GB 425GB 32.8GB logical linux-swap(v1) 6 425GB 500GB 74.6GB logical ext4 At this point, I'm wondering why the ntfs partition is flagged as "boot" and not my ext4 partition which is the partition that contains / (and by extension, /boot since it's not on its own separate partition). Looking at mtab only confirms what I already know: eric@eric-ThinkPad-W530:~$ sudo cat /etc/mtab /dev/sdb6 / ext4 rw,errors=remount-ro 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 none /sys/fs/cgroup tmpfs rw 0 0 none /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 udev /dev devtmpfs rw,mode=0755 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0 none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0 none /run/shm tmpfs rw,nosuid,nodev 0 0 none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0 none /sys/fs/pstore pstore rw 0 0 systemd /sys/fs/cgroup/systemd cgroup rw,noexec,nosuid,nodev,none,name=systemd 0 0 gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,user=eric 0 0 /dev/sdb1 /media/eric/My\040Passport fuseblk rw,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0 My lack of understanding concerning this is not vital to anything (this is only my development desktop partition), but somehow annoys me. Any insight that could shed some light on this would be welcome.

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    If you have been living under a rock for the past year, you wouldn't have heard about cloud computing. Cloud computing is a loose term that describes anything that is hosted in data centers and accessed via the internet. It is normally associated with developers who draw clouds in diagrams indicating where services live or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS), and more recently Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktop and on-premises servers to services and resources that are hosted in the cloud.

    Benefits of cloud computing: there are clearly benefits in building applications using cloud computing, some of which are listed here.

    Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth based on the requirements of the software team; often the hardware team has a separate budget that requires approval. Although hardware and software management are two separate disciplines, sometimes developers are given the task of estimating CPU cycles, disk space, and so forth, which ends up in underutilized servers.

    Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry.

    Potential for shrinking processing time: If processes are split over multiple machines, parallel processing is performed, which decreases processing time.

    More office space: Walk into most offices, and guaranteed you will find a medium-sized room dedicated to servers.

    Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator, who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would otherwise have to regularly monitor these servers.

    Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, provisioning delays can otherwise cause a slow-performing service. Cloud computing solves this because you can add more resources at any time.

    Lower environmental impact: If servers are centralized, an environmental initiative is more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, then why not use these resources to power those servers?

    Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administrating your e-mail server and network along with support staff doing other cloud-based tasks, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article

  • SSMS Tools Pack 3.0 is out. Full SSMS 2014 support and improved features.

    - by Mladen Prajdic
    With version 3.0, SSMS 2014 is fully supported. Since this is a new major version, you'll eventually need a new license; please check the EULA to see when. As a thank-you for your patience with this release, everyone who bought the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade. You won't have to do anything for this to take effect.

    The first thing you'll notice is that the UI has been completely changed: it's more in line with SSMS and looks less web-like. The core has also been updated and rewritten in some places to be better suited for future features. Major improvements for this release are:

    Window Connection Coloring: Something a lot of people have asked me over the last 2 years is whether there's a way to color the tab of the window itself. I'm very glad to say that now there is. In SSMS 2012 and higher, the actual query window tab is also colored at the top border with the same color as the already existing strip, making it much easier to see which server your query window is connected to, even when a window is not focused. To make it even better, you can now also specify the desired color based on the database name and not just the server name. This makes it useful for production environments where you need to be careful which database you run your queries in.

    Format SQL: The Format SQL core was rewritten so it'll be easier to improve in future versions. A new improvement is the ability to terminate SQL statements with semicolons. This is available only in SSMS 2012 and up.

    Execution Plan Analyzer: A big request was to implement the Problems and Solutions tooltip as a window that you can copy the text from. This is now available: you can move the window around and copy text from it. It's a small improvement, but better stuff will come.

    SQL History: Current Window History has been improved with faster search, and it now also shows the color of the server/database it was run against. This is very helpful if you change your connection in the same query window, making it clear which server/database you ran the query on. The option to force-save the history has been added; this is a menu item that flushes the execution and tab content history save buffers to disk.

    SQL Snippets: Added an option to generate a snippet from selected SQL text via the right-click menu.

    Run script on multiple databases: Configurable database groups that you can save and reuse have been added. You can create groups of preselected databases to choose from for each server. This makes repetitive tasks much easier.

    New small-team licensing option: A lot of requests came in for a "1 computer, unlimited VMs" option, so now it's here. Hope it serves you well.

    Read the article

  • Help recovering broken OS (permissions issue)

    - by Guandalino
    (At the bottom there is an important update.)

    I was doing experiments in order to back up a remote account to my local system, Ubuntu 12.04 LTS. I'm not confident with duplicity, and probably, due to wrong syntax, some local files have been replaced with remote files. This is just a supposition; I'm not sure this is the real cause of the OS corruption. The corruption happened after experimenting with backups, so I think I did something wrong in that regard. I became aware there was a problem when I tried to run a command using sudo:

        $ sudo ls
        sudo: unable to open /etc/sudoers: Permission denied
        sudo: no valid sudoers sources found, quitting
        sudo: unable to initialize policy plugin

    This is what /etc/sudoers looks like:

        $ ls -ald /etc/sudoers
        -r--r----- 1 root root 788 Oct 2 18:30 /etc/sudoers

    At this point I tried to reboot, and now this is the message I get:

        The system is running in low graphics mode. Your screen, graphics card and
        input device settings could not be detected correctly. You will need to
        configure these yourself.

    I tried to follow the wizard to configure these settings, but without luck (the system prevents me from going on when I press "Next"). The thing that makes me a bit less worried is that all the data on the disk seems readable and I'm able to access it using a live CD. I ran memtest and the RAM seems to be OK. Do you have any idea how to recover my system? I'm very glad to provide further information, just let me know what info could be helpful.

    UPDATE. The issue is about wrong permissions, and this is how I discovered it: I mounted the root partition of the broken OS on /mnt/broken/ (live CD) and did ls /mnt/broken/. I got a permission denied error, while I expected to get the directory listing. I had to do sudo ls /mnt/broken/ and this worked. Thus, without root permission via sudo, it's impossible to access the root of the broken OS. The current output of ls -ld /mnt/broken/ is:

        drwxr-x--- 29 1000 812 4096 2012-12-08 21:58 /mnt/broken

    Any thoughts on how to restore the old (working) set of permissions?
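    A hedged sketch of the repair the update points to (my reading of the listing, so verify before running): / on the broken install is owned by uid 1000 with mode 750, while a bootable system needs a root-owned / with mode 755. From the live CD, with the partition still mounted at /mnt/broken:

        # Restore ownership and permissions of the root directory itself.
        sudo chown root:root /mnt/broken
        sudo chmod 755 /mnt/broken

        # /etc/sudoers already shows the correct 0440 root:root in the post;
        # if it had been touched too, this is the canonical state:
        #   sudo chown root:root /mnt/broken/etc/sudoers
        #   sudo chmod 440 /mnt/broken/etc/sudoers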

    Read the article

  • Problem with DualBooting Ubuntu 13.10 and Win7

    - by VinArrow
    This is my first post here on AskUbuntu, though it's not my first time using it. I wanted to install Ubuntu 13.10 on my PC to have all my work stuff there and leave Win7 for gaming. So I did my research on how to dual boot when you already have Win7 installed. Here are the steps I took:

    1. Used Disk Management on Win7 and shrunk that partition, leaving 80GB free for Ubuntu.
    2. Made a bootable pendrive following the instructions on Ubuntu's website.
    3. During the installation steps there was supposed to be an "Install alongside Win7" option, but there wasn't, so I chose "Something else".
    4. Everything was fine and I was able to install Ubuntu without problems on my unallocated 80GB partition (76GB Ubuntu + 4GB swap).
    5. There was a prompt for me to restart my PC, and so I did, expecting to see the dual boot screen (GRUB, right?).

    Now, when I restarted my PC, GRUB never showed up and it booted straight to Windows. Then I did some more research and found out that that could happen. I then tried three things:

    1. Plugged in the bootable pendrive again and selected "Try Ubuntu without installing". Then I followed some instructions found here (How can I repair grub? (How to get Ubuntu back after installing Windows?)) and I could chroot into my Ubuntu install just fine. I repaired GRUB as instructed in that link, restarted the PC, and it booted straight into Win7 again.
    2. Again used the bootable pendrive to "Try Ubuntu..." and used the Boot-Repair tool (recommended repair). Again, it booted straight into Win7.
    3. Lastly, I installed EasyBCD on my Win7 and made a new entry for Ubuntu (Linux/BSD). When I rebooted the PC, there was the option to choose between Win7 and Linux. I chose Linux and it didn't work, taking me straight to a command-line-like environment that read "Minimal BASH-like line editing is supported" or something like that, as if I didn't have a Linux OS installed.

    So I thought I'd try and repair my Ubuntu install, and during the "Installation method" step there was the choice to install alongside Ubuntu 13.10! That right there drove me crazy. Here is a screenshot of GParted showing how things are set up now: http://imageshack.us/f/801/77u3.png/

    Notice on the left-hand side how I can access my installation files just fine. sdb1 = Win7 reserved space, sdb2 = Win7 OS, sdb3 = 76GB Ubuntu install, sdb5 = 4GB swap area. Does anyone know why my Ubuntu 13.10 is not being recognized? And what should I do to get it working? Thanks, and sorry for the long read and bad English! (BIOS = legacy)
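    A hedged sketch of the usual fix in this situation (my suggestion, not from the post): with a legacy BIOS, the machine boots whichever disk's MBR the firmware picks first, so GRUB must be installed to the MBR of that disk (here likely /dev/sdb, the disk holding both Windows and the Ubuntu partitions), not to a partition. From the live session:

        # Mount the Ubuntu root partition and chroot into it.
        sudo mount /dev/sdb3 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt

        # Install GRUB to the disk's MBR (not a partition) and regenerate the menu.
        grub-install /dev/sdb
        update-grub
        exit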

    Read the article

  • IPv6, isn't it just a few extra bits?

    - by rclewis
    It's always an interesting task to try and explain what you do to family and friends. I have described IPv6 as the "Next Generation Internet" or "Second Internet", but the hollow expressions on my kids' faces scream for the instant relief of the latest video game. Never one to give up easily, I have formulated a new example - the Post Office...

    Similar to the Post Office, the Internet delivers mail and packages based on addresses. As the number of residences, businesses, and delivery locations increased, the 5-digit ZIP Code (Washington, DC 20005) was expanded to ZIP+4, allowing for more precise delivery points (Postmaster General, Washington, DC 20260-3100). Ah, if only computers were as simple.

    IPv6 isn't an add-on or expansion of the existing IPv4 addressing; it is a new addressing model which will allow the internet to grow from a single computer in the basement of a university or your parents' kitchen table to support the multitude of smartphones, smart TVs, tablets, DVRs, and disc players, all clamoring to connect for information.

    Unfortunately, there are only a finite number of IPv4 public addresses left, and those are being consumed at an ever-increasing rate. Few people could have predicted the explosive growth of the internet or the shortage of IPv4 addresses we now face - but there is a "Plan B", and that is the vastly larger address space of IPv6. Many in the industry have labeled this a "business continuity" problem, when in fact most companies will be able to continue conducting business once they run out of existing IPv4 addresses. The problem is really a customer continuity problem: how will businesses communicate with existing customers and reach new customers online whose only option is to adopt IPv6 when IPv4 is depleted?

    Perhaps a first step is publishing a blog that is also accessible via IPv6. It's just a few extra bits.

    Join us for the Oracle OpenWorld 2012 session "Navigating IPv6 @ Oracle", Thursday, Oct. 4th, 2:15PM - 3:15PM, Palace Hotel - Concert. Learn more about IPv6 technologies at Oracle.

    Read the article
