Search Results

Search found 2310 results on 93 pages for 'solaris containers'.

Page 78/93 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • Ubuntu 10.04 preseed unattended install results in faulty partition table

    - by joschi
    I'm currently trying to set up an unattended installation of Ubuntu 10.04 (Lucid Lynx) through preseeding. But whenever I try to create a custom partition scheme, the Debian installer (which Ubuntu is using) produces a faulty partition table. I've taken the partition scheme described in the example preseed file:

        d-i partman-auto/expert_recipe string            \
              boot-root ::                               \
                40 50 100 ext3                           \
                    $primary{ } $bootable{ }             \
                    method{ format } format{ }           \
                    use_filesystem{ } filesystem{ ext3 } \
                    mountpoint{ /boot }                  \
                .                                        \
                500 10000 1000000000 ext3                \
                    method{ format } format{ }           \
                    use_filesystem{ } filesystem{ ext3 } \
                    mountpoint{ / }                      \
                .                                        \
                64 512 300% linux-swap                   \
                    method{ swap } format{ }             \
                .

    Unfortunately it also produces an incorrect partition table on the disk. The installation process itself is working and the installed system eventually boots and is working, as far as I can tell. But fdisk and cfdisk are still complaining:

        # fdisk -l /dev/sda

        Disk /dev/sda: 17.2 GB, 17179869184 bytes
        255 heads, 63 sectors/track, 2088 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000a1cdd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1           5       37888   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2               5        2089    16736257    5  Extended
        /dev/sda5               5        2013    16121856   83  Linux
        /dev/sda6            2013        2089      613376   82  Linux swap / Solaris

    cfdisk even refuses to start at all:

        # cfdisk /dev/sda
        FATAL ERROR: Bad primary partition 1: Partition ends in the final partial cylinder

    parted on the other hand does not complain about the cylinder boundary of /dev/sda1:

        # parted /dev/sda p
        Model: VMware Virtual disk (scsi)
        Disk /dev/sda: 17.2GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  39.8MB  38.8MB  primary   ext4            boot
         2      40.9MB  17.2GB  17.1GB  extended
         5      40.9MB  16.5GB  16.5GB  logical   ext4
         6      16.6GB  17.2GB  628MB   logical   linux-swap(v1)

    Since the installed system is working, it shouldn't be a big problem, but I'm afraid that this will mean trouble in the future.

    Read the article

  • Custom GTK widget to bypass GTK layout engine?

    - by fret
    I have an application layer that I'd like to port to Gtk. It has all its own layout code, and I don't really want to spend 'n' months rewriting it to work with the Gtk layout system; I'd rather just use the existing internal layout code and have Gtk render the resulting widgets. I've started by writing my own widget after trying several of the built-in containers. Basically I'm looking for something like the GtkFixed container that doesn't have a minimum size, i.e. Gtk will fit the first widget to the entire window, and all the child widgets will lay themselves out so that they fill the area. If I use GtkFixed for that, the window is always limited to the size of the initial layout, as that's the "requested" space. I can't resize it smaller than that using the edges of the window decor. Maybe I need schooling in allocation vs. requesting. My googling so far hasn't found the information I need to make this work. I did try. I'm using the C API at the moment, and I'm targeting Win32 and Linux. So far I have a shell app working on Win32 that puts up an empty window, but the first child widget is limiting the resizing to its initial size.

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory. In asking the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM. System is Solaris 10 x86 with 72GB RAM. JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".

    Read the article

  • How do I get my custom WPF textbox to fill correctly?

    - by Dan Ryan
    I am trying to create a custom WPF textbox control that extends the standard textbox control, but the extended textbox behaves differently when placed in control containers. Within my Window I have a StackPanel with a standard textbox and my extended textbox:

        <StackPanel Margin="10">
            <TextBox Height="21" />
            <l:SearchTextBox Search="SearchTextBox_Search" Height="21" Margin="0, 10, 0, 0"
                             SearchMode="Delayed" HorizontalAlignment="Left" />
        </StackPanel>

    The standard textbox stretches the length of the StackPanel whereas the custom textbox does not. How can I get the controls to behave the same? The styling for the custom textbox is shown below:

        <Style x:Key="{x:Type UIControls:SearchTextBox}" BasedOn="{StaticResource {x:Type TextBox}}"
               TargetType="{x:Type UIControls:SearchTextBox}">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="{x:Type UIControls:SearchTextBox}">
                        <TextBox />
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>
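    As an aside: in a vertically oriented StackPanel a child only fills the panel's width when its HorizontalAlignment is left at the default (Stretch), so the explicit HorizontalAlignment="Left" on the SearchTextBox above is probably what stops it from stretching. Below is a minimal C# sketch of that idea only; it is illustrative, not the actual SearchTextBox from the question (which presumably also defines the Search event and SearchMode property).

        using System.Windows;
        using System.Windows.Controls;

        namespace UIControls
        {
            // Illustrative sketch: a TextBox subclass that keeps the default Stretch
            // alignment so it fills a vertical StackPanel like the built-in TextBox does.
            public class SearchTextBox : TextBox
            {
                public SearchTextBox()
                {
                    // This is already the WPF default; it is set explicitly here to show
                    // what the HorizontalAlignment="Left" attribute in the XAML overrides.
                    this.HorizontalAlignment = System.Windows.HorizontalAlignment.Stretch;
                }
            }
        }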

    Read the article

  • Tomato OS: "memory exhausted" running vi .... how to solve?

    - by Sam Jones
    I have set up Tomato (Shibby) on an Asus RT-N66U router. It works great. I loaded up a few pieces, like transmission and optware. I can run vi, but when I run vi with a file name it fails with a "memory exhausted" error, and the terminal session hangs. For reference: if I simply start "vi" it runs fine, but if I give vi a file to open I get the memory exhausted error, even if the file I am opening is just a couple of hundred bytes in size (like fstab). I discovered that my swap partition was not properly set up, so I fixed that. The swapon command now indicates I really do have swap:

        [root@MyRouter samba]$ swapon -s
        Filename    Type       Size      Used  Priority
        /dev/sda1   partition  32900860  0     1

    How can I get vi to work? Thanks!

    System setup reference information: Asus RT-N66U router, 2TB USB hard drive. Partitions on the hard drive:

        Disk /dev/sda: 2000.4 GB, 2000398839808 bytes
        255 heads, 63 sectors/track, 30400 cylinders
        Units = cylinders of 16065 * 4096 = 65802240 bytes
        Disk identifier: 0xfacbc8ab

           Device Boot   Start      End        Blocks   Id  System
        /dev/sda1            1      512      32900868   82  Linux swap / Solaris
        /dev/sda2          513    29000    1830638880   83  Linux

    Running samba. Memory:

        $ cat /proc/meminfo
        MemTotal:       255840 kB
        MemFree:        210980 kB
        Buffers:          5264 kB
        Cached:          22768 kB
        SwapCached:          0 kB
        Active:          20272 kB
        Inactive:        11448 kB
        HighTotal:      131072 kB
        HighFree:        99868 kB
        LowTotal:       124768 kB
        LowFree:        111112 kB
        SwapTotal:    32900860 kB
        SwapFree:     32900860 kB
        Dirty:               0 kB
        Writeback:           0 kB

    TIA!

    Read the article

  • Is Berkeley DB XML a viable database backend?

    - by w00t
    Apparently, BDB-XML has been around since at least 2003 but I only recently stumbled upon it on Oracle's website: Berkeley DB XML. Here's the blurb: Oracle Berkeley DB XML is an open source, embeddable XML database with XQuery-based access to documents stored in containers and indexed based on their content. Oracle Berkeley DB XML is built on top of Oracle Berkeley DB and inherits its rich features and attributes. Like Oracle Berkeley DB, it runs in process with the application with no need for human administration. Oracle Berkeley DB XML adds a document parser, XML indexer and XQuery engine on top of Oracle Berkeley DB to enable the fastest, most efficient retrieval of data. To me it seems that the underlying ideas are technically sound and probably more mature than the newer document-based DBs like CouchDB or MongoDB. It has support for C, C++, Ruby and Perl, as far as I can determine. It even has HA-capabilities like automatic replication using a master/slave model with automatic election. However, I can't seem to find any projects that use it. Is there something fundamentally wrong with it? Is the license too onerous? Is it too complicated? Why is it not being used?

    Read the article

  • how to correctly mount fat32 partition in Ubuntu in order to preserve case

    - by Dean
    I've found a couple of problems that might be related to how my FAT32 partition was mounted. I hope you can help me solve them. I've also included the commands I used, to help others when they find this post; sorry to those who might feel I should use less space. I have the following file structure on my disk:

        dean@notebook:~$ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x08860886

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              13        5737    45978624    7  HPFS/NTFS
        /dev/sda3            5738       10600    39062047+  83  Linux
        /dev/sda4           10601       19457    71143852+   5  Extended
        /dev/sda5           10601       11208     4883728+  82  Linux swap / Solaris
        /dev/sda6           11209       15033    30720000    b  W95 FAT32
        /dev/sda7           15033       19457    35537920    7  HPFS/NTFS

    In /etc/fstab I've got:

        UUID=91c57a65-dc53-476b-b219-28dac3682d31  /             ext4     defaults                                       0  1
        UUID=BEA2A8AFA2A86D99                      /media/NTFS   ntfs-3g  quiet,defaults,locale=en_US.utf8,umask=0       0  0
        UUID=0C0C-9BB3                             /media/FAT32  vfat     user,auto,utf8,fmask=0111,dmask=0000,uid=1000  0  0
        /dev/sda5                                  swap          swap     sw                                             0  0
        /dev/sda1                                  /media/sda1   ntfs     nls=iso8859-1,ro,noauto,umask=000              0  0
        /dev/sda2                                  /media/sda2   ntfs     nls=iso8859-1,ro,noauto,umask=000              0  0

    I checked my id using id and I've got:

        dean@notebook:~$ id
        uid=1000(dean) gid=1000(dean) groups=4(adm),20(dialout),24(cdrom),46(plugdev),103(fuse),104(lpadmin),115(admin),120(sambashare),1000(dean)

    I don't know why, with these settings, I still have problems using svn, like in this one. Thank you for your help!

    Read the article

  • Sendmail in Nexenta core 3.0.1

    - by maximdim
    I'm trying to set up sendmail on Nexenta Core 3.0.1 (a Solaris-based OS). All I want is to be able to send emails from that host - notifications about failures, cron job output, etc. Nexenta Core doesn't come with sendmail, so here is what I've done:

        apt-get install sunwsndmu

    Now there is a sendmail in /usr/sbin/sendmail. When I try to send email from the command line:

        $ mail maxim
        test
        .

    it doesn't give me any error, but in the log file I see:

        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: from=maxim, size=107, class=0, nrcpts=1, msgid=<201012201741.oBKHf8u7012295@nas>, relay=maxim@localhost
        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: to=maxim, ctladdr=maxim (1000/10), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30107, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]

    So I guess I need to have an SMTP service running. How do I do that on Nexenta?

        svcs -a | grep sendmail

    doesn't return anything, and:

        # svcadm enable sendmail
        svcadm: Pattern 'sendmail' doesn't match any instances

    I'm not married to sendmail, so if there are easier ways to achieve my goal I'm open to suggestions as well. Thanks,

    Read the article

  • Dual booting Linux/Win7, Grub refuses to load Win7

    - by JohnB
    Decided to give Linux Mint a try (Ubuntu's interface annoys me), so I installed it with the intention of dual booting with Windows 7. Installation went fine, but now I can only boot into Linux Mint. GRUB lists two Windows 7 menu options, but selecting either of them causes an "unknown file system" error and dumps me into a GRUB recovery prompt. There, I have to manually reset the root and prefix options, as they reset to hd0,msdos6 when they should be hd0,msdos5. I ran Boot Repair twice, once to fix GRUB errors, once to rebuild the MBR, but it didn't fix anything. Here is the log: http://paste.ubuntu.com/1029675/

    fdisk output:

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *         2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2           206848  1486249145   743021149    7  HPFS/NTFS/exFAT
        /dev/sda3       1486249982  1953523711   233636865    5  Extended
        /dev/sda5       1486249984  1945141247   229445632   83  Linux
        /dev/sda6       1945143296  1953523711     4190208   82  Linux swap / Solaris

    grub.cfg:

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos1)'
            search --no-floppy --fs-uuid --set=root 86184D18184D091F
            chainloader +1
        }
        menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos2)'
            search --no-floppy --fs-uuid --set=root 56D84F84D84F60FB
            chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

    I have found a few similar troubleshooting guides so far, but so far no amount of updating/configuring GRUB has been successful. Last resort is, I suppose, to use the W7 recovery disc and start over. Thanks in advance!

    Linux Mint 13 Maya, 64-bit
    Windows 7 Home Edition, 64-bit

    Read the article

  • IoC - Dynamic Composition of object instances

    - by Joshua Starner
    Is there a way, using IoC, MEF [Imports], or another DI solution, to compose dependencies on the fly at object creation time instead of during composition time? Here's my current thought. If you have an instance of an object that raises events, but you are not creating the object once and saving it in memory, you have to register the event handlers every time the object is created. As far as I can tell, most IoC containers require you to register all of the classes used in composition and call Compose() to make it hook up all the dependencies. I think it may be horrible design to do this (I'm dealing with a legacy system here) due to the overhead of object creation, dependency injection, etc., but I was wondering if it is possible using one of the emergent IoC technologies. Maybe I have some terminology mixed up, but my goal is to avoid writing a framework to "hook up all the events" on an instance of an object, and use something like MEF to [Export] handlers (dependencies) that adhere to a very specific interface and [ImportMany] them into an object instance so my exports get called if the assemblies are there when the application starts. So maybe all of the objects could still be composed when the application starts, but I want the system to find and call all of them as the object is created and destroyed.
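    To illustrate the MEF side of that idea, here is a minimal sketch: handler classes [Export] a shared interface, and an object satisfies its own [ImportMany] at construction time so whatever handler exports are present get wired to its events. The interface and class names (IChangeHandler, LegacyRecord, and so on) are made up for the example, not taken from the question.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.Reflection;

        public interface IChangeHandler
        {
            void Handle(object sender, EventArgs e);
        }

        [Export(typeof(IChangeHandler))]
        public class AuditChangeHandler : IChangeHandler
        {
            public void Handle(object sender, EventArgs e) { Console.WriteLine("audited"); }
        }

        public class LegacyRecord
        {
            [ImportMany]
            public IEnumerable<IChangeHandler> Handlers { get; set; }

            public event EventHandler Changed;

            public LegacyRecord(CompositionContainer container)
            {
                // Satisfy [ImportMany] on this instance at creation time; every
                // IChangeHandler export found in the container's catalogs is injected.
                container.ComposeParts(this);
                foreach (var handler in Handlers)
                    Changed += handler.Handle;
            }

            public void Touch()
            {
                var handler = Changed;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
                using (var container = new CompositionContainer(catalog))
                {
                    new LegacyRecord(container).Touch();   // prints "audited"
                }
            }
        }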

    Read the article

  • Masking FLV video in AS3 with PNG alpha channel.

    - by James Roberts
    Hey there, I'm trying to mask an FLV with a PNG alpha channel. I'm using BitmapData (from a PNG) but it's not working. Is there anything I'm missing? Cut up code below:

        var musclesLoader:Loader = new Loader();
        var musclesContainer:Sprite = new Sprite();
        var musclesImage:Bitmap = new Bitmap();
        var musclesBitmapData:BitmapData;
        var musclesVideo:Video = new Video(752, 451.2);
        var connection:NetConnection = new NetConnection();
        var stream:NetStream;

        function loadMuscles():void {
            musclesLoader.load(new URLRequest('img/muscles.png'));
            musclesLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, musclesComplete);
        }

        function musclesComplete():void {
            musclesBitmapData = new BitmapData(musclesLoader.content.width, musclesLoader.content.height, true, 0x000000);
            musclesImage.bitmapData = musclesBitmapData;
            musclesImage.smoothing = true;
            musclesContainer.addChild(musclesImage);
            contentContainer.addChild(musclesContainer);
        }

        function loadMusclesVideo():void {
            connection.connect(null);
            stream = new NetStream(connection);
            stream.client = this;
            musclesVideo.mask = musclesBitmapData;
            stage.addChild(musclesVideo);
            musclesVideo.attachNetStream(stream);
            stream.bufferTime = 1;
            stream.receiveAudio(true);
            stream.receiveVideo(true);
            stream.play("vid/muscles.flv");
        }

    Outside this code I have a function that adds the containers to the stage, etc. and places the objects in the appropriate spots. It sort of works - the mask applies, but in a square (the size of the boundaries of musclesBitmapData) rather than with the shape of the alpha channel. Is this the right way to go about this?

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator >
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate(); // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for( int i = 0 ; i < this->capacity()-1; i++ ) {
                BufferNode* p = this->allocator->allocate(); // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method, would it work? If not, what's a better way to keep the nodes - creating a linked list?

    My test harness is the following:

        // Define an STL compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a vector that uses the previous STL-like allocator so that it allocates
        // its values from the segment
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[])
        {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize shared memory STL-compatible allocator
            const ShmemAllocator alloc_inst (segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?

    Read the article

  • Password Manager that allows syncing across platforms

    - by lexu
    I use OS X, Linux, Solaris and Windows for work and from home. There are good tools that allow me to manage the many logins/passwords required, platform independently. But mostly they expect me to carry a thumb drive around or require direct access to a central location (a sky drive in the cloud). The thumb drive is too easily lost (= synchronized backup needed), the central location not always reachable/mountable. Besides, company policy rightly prevents this often. Is there a tool that allows me to add passwords locally and then syncs its DB with the "mother-ship" later? Or is there another approach that you use that solves my problem?

    EDIT: My question is more about "synchronize" than cross platform. I've evaluated (= read feature list) some good cross platform tools, but need one that does the synchronizing for me. By synchronize I mean "merge two versions", not "replace (hopefully) old file with new". I'm not sure I'm always disciplined/awake enough to prevent data loss.

    UPDATE: Lifehacker just posted that AgileSolutions now have a beta version of 1Password for Windows.

    Read the article

  • How to get child container reference in View Model

    - by niels-verkaart
    Hello, I'm trying to share a Data Service (Entity Manager) wrapped in a Repository from a ViewModel (called 'AVM') in Module A with a ViewModel (called 'BVM') in Module B, and I can't get this working. We use PRISM/Unity 2.0. This is my scenario: a user may open multiple Customer screens (composite view as mini shell), each with another customer (unit of work). We realize this using child containers. Each child container resolves its own repository with its own Entity Manager (the repository is a singleton within the child container). This is done in Module A. The main shell has a main region manager, and each Customer screen with its child container creates a scoped region. In each Customer screen there is a View 'AV' (connected to ViewModel 'AVM') with a SubRegion (tab control) registered as 'SubRegion'. We create this with a 'Screen Factory'. In Module B we have the Customer Orders in View 'BV' and ViewModel 'BVM'. In the constructor of Module B we get the main container by injection. In the Initialize method we resolve the (main) region manager and register View 'BV' with it. In the constructor of View 'BV' a ViewModel 'BVM' is injected/created. Now this works, but the ViewModel 'BVM' cannot get the child container; it only gets the main container. Is this doable, or do I have to do this another way? Thanks, Niels
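    For illustration, one pattern sometimes used for this kind of per-screen scope is sketched below with plain Unity 2.0 calls (nothing PRISM-specific, and the repository names are made up): register the child container into itself so that anything resolved from that scope - including a ViewModel from another module - receives the child rather than the root container.

        using Microsoft.Practices.Unity;

        public interface ICustomerRepository { }                    // illustrative stand-in
        public class CustomerRepository : ICustomerRepository { }

        public class CustomerScreenFactory
        {
            private readonly IUnityContainer _root;

            public CustomerScreenFactory(IUnityContainer root)
            {
                _root = root;
            }

            public IUnityContainer CreateCustomerScope()
            {
                IUnityContainer child = _root.CreateChildContainer();

                // Make sure anything resolved from this scope that asks for IUnityContainer
                // gets the child (explicit here for clarity; some Unity versions already
                // map a container to itself by default).
                child.RegisterInstance<IUnityContainer>(child);

                // One repository (and thus one Entity Manager) per customer screen:
                // a singleton within this child container only.
                child.RegisterType<ICustomerRepository, CustomerRepository>(
                    new ContainerControlledLifetimeManager());

                return child;
            }
        }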

    Read the article

  • Talk on multiple IRC channels at once?

    - by TwoPixelGrid
    I seem to remember, back in '91 or so, that the console-based IRCII implementation on the Solaris box that first got me on the net would let me /Join multiple channels on a given network such that, as new channels were joined, they would all scroll in the single console view. Let's call it the 'interleaved conversation' chat paradigm. Am I remembering this correctly? More importantly, is there a modern way of doing this in any of the GUI-based clients? I'm surprised this isn't a common desire/feature because I think it would greatly improve the experience, especially on channels with high SNR. For example, if I'm working on a project I may connect to Freenode and join #Qt, #OpenGL and #C++. As it is now, with mIRC or XChat, I have to manually flip between pages just to see what's being said and to reply. What I envision would go more like this (using only 2 channels for simplicity):

        /join #QT #OpenGL
        < [QT] QtChannelUser: Hello TwoPixelGrid.
        < [OpenGL] OpenGLChannelUser: Hi there TwoPixelGrid.
        @QT: Hi QtChannelUser
        @OpenGL: Hello again OpenGLChannelUser

    And this message is going out to all my channels. Do I have to write a new client or is this already out there?

    Read the article

  • What GPT partition type to use for protecting DRBD metadata?

    - by Carsten Scholtes
    I'm planning to install a DRBD device on a (replicated) disk with two GPT partitions. DRBD requires some space for (preferably "internal") metadata at the end of the underlying device. I'm hesitant to leave this space unpartitioned (or unformatted in a normal partition). I'd like to reserve an extra partition at the end of the underlying disk device for the metadata. (If I understand correctly, DRBD would not care about the partition or its type and could then use that space exclusively.) My question is: which would be a suitable GPT partition type for such a metadata partition?

    - It should not be a type interpreted while booting (such as EF00 EFI System).
    - It should not be a type prone to be modified accidentally by the booted OS (such as 8200 Linux swap, 8e00 Linux LVM, fd00 Linux raid). (The booted OS will be Ubuntu Linux 12.04.3.)
    - It should not be a type indicating a normal filesystem (such as 0c01 or 8301), prone to be formatted correspondingly.
    - It should not be a type requiring any special content in the partition (since the content is to be handled exclusively by DRBD).
    - It should express the purpose of being reserved for something special (namely DRBD).

    (The types I listed are as provided by gdisk. I'm thinking about using some type unlikely to be used by the OS (maybe bf0a Solaris Reserved 4) or an invented(?) type such as fd01 (close to fd00 Linux raid…). Would something like this be suitable, too dangerous or even possible?)

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed [migrated]

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
        Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct; no errors are reported and it passes all 5 checks. If I run fdisk -l, I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *         2048   608456703   304227328   83  Linux
        /dev/sda2        608458750   625141759     8341505    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5        608458752   625141759     8341504   82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1             8192   625139711   312565760    7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then just attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?

    Read the article

  • Positioning / Scrolling problem with Flex popup.

    - by user284163
    Hi all, I'm trying to work out a specific problem I'm having with positioning in Flex using the PopUpManager. Basically I'm wanting to create a popup which will scroll with the parent container - this is necessary because the parent container is large, and if the user's browser window isn't large enough (this will be the case the majority of the time) they will have to use the scrollbar of the container to scroll down. The problem is that the popup is positioned relative to another component, and it needs to stay by that component.

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:Script>
                <![CDATA[
                    import mx.core.UITextField;
                    import mx.containers.TitleWindow;
                    import mx.managers.PopUpManager;

                    private function clickeroo(event:MouseEvent):void
                    {
                        var popup:TitleWindow = new TitleWindow();
                        popup.width = 250;
                        popup.height = 300;
                        popup.title = "Example";

                        var tf:UITextField = new UITextField();
                        tf.wordWrap = true;
                        tf.width = popup.width - 30;
                        tf.text = "This window stays put and doesn't scroll when the hbox is scrolled (even with using the hbox as parent in the addPopUp method), I need the popup to be local to the HBox.";
                        popup.addChild(tf);

                        PopUpManager.addPopUp(popup, hbox, false);
                        PopUpManager.centerPopUp(popup);
                    }
                ]]>
            </mx:Script>

            <mx:HBox width="100%" height="2000" id="hbox">
                <mx:Button label="Click Me" click="clickeroo(event)"/>
            </mx:HBox>
        </mx:Application>

    Could anyone give me any pointers in the right direction? Thanks.

    Read the article

  • Trouble understanding the whole OSGi web ecosystem

    - by Jens
    Hello, I am pretty new to the whole Java and OSGi world and I have trouble understanding the ecosystem of an OSGi web application. To be more precise, I am at the moment trying to understand how all the parts of the ecosystem are related to each other:

    - OSGi Framework (e.g. Apache Felix, Equinox, Knopflerfish)
    - OSGi Runtime (e.g. Spring DM Server, Pax Runner, Apache Karaf)
    - Web Extender (e.g. Pax Web Extender, Spring Web Extender)
    - Web Container (e.g. Apache Tomcat, Jetty)

    To give you a visual representation of my current understanding of their relationship, check out this image:

    As far as I know, the OSGi Framework is an implementation of the OSGi specification. The Runtime is a distribution which adds additional functionality on top of the OSGi specification, like logging for instance. Since there seem to be some differences in the classpath mechanism of OSGi and web containers like Tomcat, you need some kind of translator. This part is handled by the "Web Extender". Would you please clarify this whole thing for me? Am I understanding everything correctly?

    Read the article

  • How can I scale an OSMF player in ActionScript 3/Flex

    - by Greg Hinch
    I am trying to create a simple video player SWF using the Open Source Media Framework in Flex 4. I want to make it dynamically scale based on the dimensions of the video, input by the user. I am following the directions on the Adobe help site, but the video does not seem to scale properly. Depending on the size, sometimes videos play larger than the space allotted on the webpage, and sometimes smaller. The only way I have been able to get it to work properly is by including a SWF metadata tag hardcoding the width and height, but I can't use that if I want to make the player dynamically sized. My code is:

        package
        {
            import flash.display.Sprite;
            import flash.events.Event;

            import org.osmf.media.MediaElement;
            import org.osmf.media.MediaPlayer;
            import org.osmf.media.URLResource;
            import org.osmf.containers.MediaContainer;
            import org.osmf.elements.VideoElement;
            import org.osmf.layout.LayoutMetadata;

            public class GalleryVideoPlayer extends Sprite
            {
                private var videoElement:VideoElement;
                private var mediaPlayer:MediaPlayer;
                private var mediaContainer:MediaContainer;
                private var flashVars:Object;

                public function GalleryVideoPlayer()
                {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void
                {
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    flashVars = loaderInfo.parameters;
                    mediaPlayer = new MediaPlayer();
                    videoElement = new VideoElement(new URLResource(flashVars.file));
                    mediaContainer = new MediaContainer();

                    var layoutMetadata:LayoutMetadata = new LayoutMetadata();
                    layoutMetadata.width = Number(flashVars.width);
                    layoutMetadata.height = Number(flashVars.height);
                    videoElement.addMetadata(LayoutMetadata.LAYOUT_NAMESPACE, layoutMetadata);

                    mediaPlayer.media = videoElement;
                    mediaContainer.addMediaElement(videoElement);
                    addChild(mediaContainer);
                }
            }
        }

    Read the article

  • CentOS OpenVZ fail to boot after kernel update

    - by SkechBoy
    After upgrading to the latest OpenVZ kernel, the CentOS server won't boot. When I try to boot the latest kernel, the server is stuck at this point (note that the images are taken via virtual KVM): http://i.stack.imgur.com/4lusz.jpg

    Then I try to start the server on some old kernels and I get the error message "kernel panic - not syncing - attempted to kill init", better shown in this image: http://i.stack.imgur.com/2SReF.jpg

    Here is some useful information.

    fdisk -l:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 2995.7 GB, 2995739688960 bytes
        255 heads, 63 sectors/track, 364211 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004c4e4

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1                1         523     4199044+  82  Linux swap / Solaris
        /dev/sda2              524         785     2104515   83  Linux
        /dev/sda3              786      261869  2097157230   83  Linux
        /dev/sda4           261870      364211   822062115   83  Linux

    /etc/fstab:

        proc       /proc     proc    defaults        0 0
        none       /dev/pts  devpts  gid=5,mode=620  0 0
        /dev/sda1  none      swap    sw              0 0
        /dev/sda2  /boot     ext3    defaults        0 0
        /dev/sda3  /         ext3    defaults        0 0
        /dev/sda4  /home     ext3    defaults        0 0

    and the grub config file:

        title OpenVZ (2.6.18-274.18.1.el5.028stab098.1)
                root (hd0,1)
                kernel /vmlinuz-2.6.18-274.18.1.el5.028stab098.1 ro root=/dev/sda3 vga=0x317 selinux=0
                initrd /initrd-2.6.18-274.18.1.el5.028stab098.1.img
        title OpenVZ (2.6.18-274.7.1.el5.028stab095.1)
                root (hd0,1)
                kernel /vmlinuz-2.6.18-274.7.1.el5.028stab095.1 ro root=/dev/sda3 vga=0x317 selinux=0
                initrd /initrd-2.6.18-274.7.1.el5.028stab095.1.img
        title OpenVZ (2.6.18-194.8.1.el5.028stab070.4)
                root (hd0,1)
                kernel /vmlinuz-2.6.18-194.8.1.el5.028stab070.4 ro root=/dev/sda3 vga=0x317
                initrd /initrd-2.6.18-194.8.1.el5.028stab070.4.img

    Any help is greatly appreciated. Thanks.

    Read the article

  • a disk read error occurred

    - by kellogs
    Hi, "a disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB.

        [root@localhost linux]# fdisk -lu

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x48424841

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               63   204214271   102107104+   7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2        204214272   255606783    25696256   af  HFS / HFS+
        Partition 2 does not end on cylinder boundary.
        /dev/sda3        255606784   276488191    10440704    c  W95 FAT32 (LBA)
        Partition 3 does not end on cylinder boundary.
        /dev/sda4        276490179   312576704    18043263    5  Extended
        /dev/sda5   *    276490240   286709759     5109760   83  Linux
        /dev/sda6        286712118   310488254    11888068+   b  W95 FAT32
        /dev/sda7        310488318   312576704     1044193+  82  Linux swap / Solaris

    sda is a 160GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I cannot recall exactly how I used testdisk, but it once said something like "The harddisk /dev/sda (160GB / 149 GB) seems too small! (< 172GB / 157GB)". So far I have tried "fixboot" and "chkdsk" from a recovery console on the affected Windows partition (/dev/sda1), the pull-the-power-cord-for-15-seconds trick, reinstalling GRUB, and repairing the MFT and boot sector of the affected partition via testdisk. What next, please? Thank you!

    Read the article

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a response (2.1 quoted the charset value, otherwise same response). Apparently I need to dictate the exact Content-type header for Exchange to be happy. Is there a way for me to do this without forcing me to manually rebuild the dependency? I currently rely on published maven artifacts, and would like to continue doing this if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but can not add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code.

    Read the article

  • Can Castle.Windsor do automatic resolution of concrete types

    - by Anthony
    We are evaluating IoC containers for C# projects, and both Unity and Castle.Windsor are standing out. One thing that I like about Unity (NInject and StructureMap also do this) is that types where it is obvious how to construct them do not have to be registered with the IoC container. Is there a way to do this in Castle.Windsor? Am I being fair to Castle.Windsor to say that it does not do this? Is there a design reason to deliberately not do this, or is it an oversight, or just not seen as important or useful? I am aware of container.Register(AllTypes... in Windsor, but that's not quite the same thing. It's not entirely automatic, and it's very broad. To illustrate the point, here are two NUnit tests doing the same thing via Unity and Castle.Windsor. The Castle.Windsor one fails:

        namespace SimpleIocDemo
        {
            using NUnit.Framework;
            using Castle.Windsor;
            using Microsoft.Practices.Unity;

            public interface ISomeService
            {
                string DoSomething();
            }

            public class ServiceImplementation : ISomeService
            {
                public string DoSomething() { return "Hello"; }
            }

            public class RootObject
            {
                public ISomeService SomeService { get; private set; }

                public RootObject(ISomeService service)
                {
                    SomeService = service;
                }
            }

            [TestFixture]
            public class IocTests
            {
                [Test]
                public void UnityResolveTest()
                {
                    UnityContainer container = new UnityContainer();
                    container.RegisterType<ISomeService, ServiceImplementation>();

                    // Root object needs no registration in Unity
                    RootObject rootObject = container.Resolve<RootObject>();
                    Assert.AreEqual("Hello", rootObject.SomeService.DoSomething());
                }

                [Test]
                public void WindsorResolveTest()
                {
                    WindsorContainer container = new WindsorContainer();
                    container.AddComponent<ISomeService, ServiceImplementation>();

                    // fails with exception "Castle.MicroKernel.ComponentNotFoundException:
                    // No component for supporting the service SimpleIocDemo.RootObject was found"
                    // I could add
                    //     container.AddComponent<RootObject>();
                    // but that approach does not scale
                    RootObject rootObject = container.Resolve<RootObject>();
                    Assert.AreEqual("Hello", rootObject.SomeService.DoSomething());
                }
            }
        }
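    For the record, later Windsor releases (2.1 and up) expose an extension point that can cover this: an ILazyComponentLoader is consulted whenever an unregistered component is requested and may supply a registration on the fly. A hedged sketch of the idea is below; the exact Load signature has varied between Windsor versions (this follows the 3.x shape), so treat it as an illustration rather than drop-in code.

        using System;
        using System.Collections;
        using Castle.MicroKernel.Registration;
        using Castle.MicroKernel.Resolvers;
        using Castle.Windsor;

        // Registers any concrete, non-abstract class on demand the first time it is resolved.
        public class ConcreteTypeLoader : ILazyComponentLoader
        {
            // NOTE: signature follows Windsor 3.x; earlier/later versions differ slightly.
            public IRegistration Load(string name, Type service, IDictionary arguments)
            {
                if (service == null || service.IsAbstract || service.IsInterface)
                    return null;                   // let Windsor fail as it normally would
                return Component.For(service);     // register the concrete type on the fly
            }
        }

        public static class WindsorDemo
        {
            public static void Run()
            {
                var container = new WindsorContainer();
                container.Register(
                    Component.For<ILazyComponentLoader>().ImplementedBy<ConcreteTypeLoader>());
                // With the loader in place, a concrete root type whose constructor
                // dependencies are resolvable can be resolved without prior registration.
            }
        }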

    Read the article
