Search Results

Search found 5904 results on 237 pages for 'hybrid storage'.

Page 9/237 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Image storage social network (Host plan)

    - by Samir
    I'm wondering what the best way is to host images for a social network site. Let's say I expect my social network to reach 500,000 users in two years' time. That would mean that if every user uploaded about 100 images, and every image is 1 MB, I will need: 500,000 × 100 × 1 MB = 50,000,000 MB, which is 50 terabytes. I'm not sure how best to set up my hosting plan to have a solid basis for storing my images, and eventually video files as well. Which hosting plan would you recommend I start with, and how can I grow the plan later?
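
    For rough capacity planning, that arithmetic generalizes into a few lines of Python; a minimal sketch, where the replica count and growth headroom are illustrative assumptions rather than anything from the question:

      # Back-of-the-envelope sizing for user-uploaded media.
      users = 500_000          # projected user base in two years
      images_per_user = 100
      avg_image_mb = 1
      replicas = 2             # assumption: keep one redundant copy of everything
      headroom = 1.3           # assumption: 30% margin for growth and overhead

      raw_tb = users * images_per_user * avg_image_mb / 1_000_000
      provisioned_tb = raw_tb * replicas * headroom
      print(f"raw: {raw_tb:.0f} TB, provisioned: {provisioned_tb:.0f} TB")
      # raw: 50 TB, provisioned: 130 TB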

    Read the article

  • media storage social network (Host plan)

    - by Samir
    I'm wondering what the best way is to host media for a social network site. Let's say I expect my social network to reach 500,000 users in two years' time. I'm not sure how best to set up my hosting plan to have a solid basis for storing media files. Which hosting plan would you recommend I start with, and how can I grow the plan later?

    Read the article

  • Page allocation failures on iSCSI storage

    - by Dave
    We have a CentOS 6.3 iSCSI server (16 GB RAM) running on an InfiniBand bus (IPoIB). When the load is high I can see multiple errors:

    Sep 3 23:22:20 stor4 kernel: tgtd: page allocation failure. order:2, mode:0x20
    Sep 3 23:22:20 stor4 kernel: Pid: 3637, comm: tgtd Not tainted 2.6.32 #1
    Sep 3 23:22:20 stor4 kernel: Call Trace:
    Sep 3 23:22:20 stor4 kernel: [] ? __alloc_pages_nodemask+0x77f/0x940
    Sep 3 23:22:20 stor4 kernel: [] ? kmem_getpages+0x62/0x170
    Sep 3 23:22:20 stor4 kernel: [] ? fallback_alloc+0x1ba/0x270
    Sep 3 23:22:20 stor4 kernel: [] ? cache_grow+0x2cf/0x320
    Sep 3 23:22:20 stor4 kernel: [] ? ____cache_alloc_node+0x99/0x160
    Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270
    Sep 3 23:22:20 stor4 kernel: [] ? __kmalloc+0x189/0x220
    Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270
    Sep 3 23:22:20 stor4 kernel: [] ? __pskb_pull_tail+0x2aa/0x360
    Sep 3 23:22:20 stor4 kernel: [] ? tcp_init_tso_segs+0x37/0x50
    Sep 3 23:22:20 stor4 kernel: [] ? dev_queue_xmit+0x4bb/0x6f0
    Sep 3 23:22:20 stor4 kernel: [] ? neigh_connected_output+0xbd/0x100
    Sep 3 23:22:20 stor4 kernel: [] ? ip_finish_output+0x237/0x310
    Sep 3 23:22:20 stor4 kernel: [] ? ip_output+0xb8/0xc0
    Sep 3 23:22:20 stor4 kernel: [] ? __ip_local_out+0x9f/0xb0
    Sep 3 23:22:20 stor4 kernel: [] ? ip_local_out+0x25/0x30
    Sep 3 23:22:20 stor4 kernel: [] ? ip_queue_xmit+0x190/0x420
    Sep 3 23:22:20 stor4 kernel: [] ? sock_aio_write+0x167/0x180
    Sep 3 23:22:20 stor4 kernel: [] ? tcp_transmit_skb+0x3fe/0x7b0
    Sep 3 23:22:20 stor4 kernel: [] ? tcp_write_xmit+0x1fb/0xa20
    Sep 3 23:22:20 stor4 kernel: [] ? __tcp_push_pending_frames+0x30/0xe0
    Sep 3 23:22:20 stor4 kernel: [] ? tcp_push_pending_frames+0x33/0x40
    Sep 3 23:22:20 stor4 kernel: [] ? do_tcp_setsockopt+0x3d6/0x480
    Sep 3 23:22:20 stor4 kernel: [] ? tcp_setsockopt+0x2a/0x30
    Sep 3 23:22:20 stor4 kernel: [] ? sock_common_setsockopt+0x14/0x20
    Sep 3 23:22:20 stor4 kernel: [] ? sys_setsockopt+0x7f/0xe0
    Sep 3 23:22:20 stor4 kernel: [] ? system_call_fastpath+0x16/0x1b
    Sep 3 23:22:20 stor4 kernel: Mem-Info:
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA per-cpu:
    Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 0, btch: 1 usd: 0
    Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 0, btch: 1 usd: 0
    Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 0, btch: 1 usd: 0
    Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 0, btch: 1 usd: 0
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 per-cpu:
    Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 183
    Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 23
    Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 183
    Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 181
    Sep 3 23:22:20 stor4 kernel: Node 0 Normal per-cpu:
    Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 171
    Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 29
    Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 32
    Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 32
    Sep 3 23:22:20 stor4 kernel: active_anon:1875 inactive_anon:2473 isolated_anon:0
    Sep 3 23:22:20 stor4 kernel: active_file:1243637 inactive_file:2505055 isolated_file:0
    Sep 3 23:22:20 stor4 kernel: unevictable:0 dirty:268338 writeback:0 unstable:0
    Sep 3 23:22:20 stor4 kernel: free:86050 slab_reclaimable:132377 slab_unreclaimable:23744
    Sep 3 23:22:20 stor4 kernel: mapped:1293 shmem:222 pagetables:720 bounce:0
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA free:15732kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15332kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
    Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 2172 16060 16060
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 free:107544kB min:18268kB low:22832kB high:27400kB active_anon:468kB inactive_anon:2364kB active_file:566208kB inactive_file:976112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2224900kB mlocked:0kB dirty:96816kB writeback:0kB mapped:908kB shmem:12kB slab_reclaimable:176940kB slab_unreclaimable:968kB kernel_stack:64kB pagetables:192kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
    Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 13887 13887
    Sep 3 23:22:20 stor4 kernel: Node 0 Normal free:220924kB min:116772kB low:145964kB high:175156kB active_anon:7032kB inactive_anon:7528kB active_file:4408340kB inactive_file:9044108kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14220800kB mlocked:0kB dirty:976536kB writeback:0kB mapped:4264kB shmem:876kB slab_reclaimable:352568kB slab_unreclaimable:94008kB kernel_stack:2048kB pagetables:2688kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
    Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 0 0
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA: 1*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15732kB
    Sep 3 23:22:20 stor4 kernel: Node 0 DMA32: 16305*4kB 4381*8kB 353*16kB 8*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 107900kB
    Sep 3 23:22:20 stor4 kernel: Node 0 Normal: 14548*4kB 14808*8kB 2420*16kB 31*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 220784kB
    Sep 3 23:22:20 stor4 kernel: 3748822 total pagecache pages
    Sep 3 23:22:20 stor4 kernel: 0 pages in swap cache
    Sep 3 23:22:20 stor4 kernel: Swap cache stats: add 0, delete 0, find 0/0
    Sep 3 23:22:20 stor4 kernel: Free swap = 975864kB
    Sep 3 23:22:20 stor4 kernel: Total swap = 975864kB
    Sep 3 23:22:20 stor4 kernel: 4194303 pages RAM
    Sep 3 23:22:20 stor4 kernel: 126915 pages reserved
    Sep 3 23:22:20 stor4 kernel: 3753534 pages shared
    Sep 3 23:22:20 stor4 kernel: 213500 pages non-shared

    TCP stack and VM config:

    net.core.rmem_max = 83886080
    net.core.wmem_max = 83886080
    net.core.rmem_default = 65536
    net.core.wmem_default = 65536
    net.ipv4.tcp_rmem = 40960 1048560 4194304
    net.ipv4.tcp_wmem = 40960 196608 4194304
    net.ipv4.tcp_mem = 16388608 16388608 16388608
    vm.min_free_kbytes = 135168

    Additional tweaks:

    /sbin/blockdev --setra 16384 /dev/sdb
    echo 2048 > /sys/block/sdb/queue/nr_requests

    Where might the problem be? Thank you.
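
    An order:2 failure means the kernel could not find four physically contiguous free pages (16 KB) at that moment, which points at memory fragmentation under heavy page-cache load rather than at running out of memory outright. One way to watch for this is /proc/buddyinfo, whose columns count free blocks of order 0, 1, 2, and so on; a minimal monitoring sketch in Python (my own, not from the question):

      # Report free blocks of order >= 2 per zone from /proc/buddyinfo.
      import time

      def order2_capable():
          zones = {}
          with open("/proc/buddyinfo") as f:
              for line in f:
                  parts = line.split()
                  # e.g. ['Node', '0,', 'zone', 'Normal', '14548', '14808', ...]
                  zone = parts[1].rstrip(",") + ":" + parts[3]
                  counts = [int(n) for n in parts[4:]]
                  # Any free block of order >= 2 can satisfy an order-2 request.
                  zones[zone] = sum(counts[2:])
          return zones

      while True:
          print(order2_capable())
          time.sleep(5)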

    Read the article

  • Can I get redundancy with a JBOD storage subsystem

    - by Dat Chu
    I have a Promise Technology J610s, which is a JBOD subsystem. Is it possible for me to buy a SAS hardware RAID controller and provide some type of redundancy for these drives? I am unsure whether I will use Linux or Windows yet, so an answer covering both would be highly appreciated. One solution I thought of: if my J610s can export each drive as a target, my server will simply see 16 drives, and the RAID controller can then perform the RAID5/RAID6 if I want.

    Read the article

  • What are some solutions for cloud storage [closed]

    - by jaja
    I don't want to move my HDDs much, as that would sooner or later end with one of them being dropped and failing. They are also not portable (there are many) and inconvenient to use with a laptop. So this is all I need: 1) the ability to access files as if the HDDs were directly connected to the computer, meaning I don't have to "transfer" files before using them (more like streaming) and access is easy; 2) low cost.

    Read the article

  • Remotely Managing Storage on Hyper-V 2012 Core

    - by Vazgen
    I have a core Hyper-V Server 2012 that I am remotely managing from a Windows 8 client. I can connect in Hyper-V Manager, Server Manager, and MMC. However, I don't understand how I can manage the physical hard drive (for example, deleting vhdx files, creating folders, etc.) from my Windows 8 client. I tried to attach the remote share as follows: q: \\MyServer\c$ It said the command completed successfully, but I don't see the drive in Explorer on my client. I can get to it in cmd.exe on the client, but how can I manage it in a GUI? explorer q: throws an error:

    Read the article

  • Creating bootable Fedora USB with persistent storage

    - by dooffas
    I am attempting to write the full Fedora 19 x86_64 DVD ISO to a USB drive and have a separate partition on it for a kickstart file / other media that will be installed in the kickstart process. With the Ubuntu Server 12 ISO, you can simply dd the ISO to the USB drive:

    dd if=/path/to/iso of=/dev/sdb

    Once the ISO has been written, open GParted and create an ext2 partition in the unallocated space. However, this does not seem to work with the Fedora ISO. When loading the USB drive in GParted I get a warning and an error:

    Warning: The driver descriptor says the physical block size is 2048 bytes, but Linux says it is 512 bytes.
    Error: The partition's data region doesn't occupy the entire partition.

    Ignoring both of these allows GParted to load the USB drive, but it shows a blank drive with no partition table. Has anyone come across this before? From what I have found, it may have something to do with the fact that Fedora uses isohybrid.

    Read the article

  • What are the best ways to store Graphs in persistent storage

    - by nicoslepicos
    I am wondering what the best ways are to store graphs in persistent storage, for later analysis, search, clustering, etc. I see Neo4j as one option; I am curious whether there are other graph databases available. Does anyone have any insight into how the larger social networks store their graph-based data (or other sites that require the storage of graph-like models, e.g. RDF)? What about options like Cassandra or MySQL?
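
    As a baseline against which to judge a dedicated graph database, a plain relational edge list is often enough; a minimal sketch using Python's built-in sqlite3, with illustrative table names:

      import sqlite3

      conn = sqlite3.connect("graph.db")
      conn.executescript("""
      CREATE TABLE IF NOT EXISTS nodes (id INTEGER PRIMARY KEY, label TEXT);
      CREATE TABLE IF NOT EXISTS edges (
          src INTEGER REFERENCES nodes(id),
          dst INTEGER REFERENCES nodes(id),
          PRIMARY KEY (src, dst)
      );
      CREATE INDEX IF NOT EXISTS edges_dst ON edges(dst);
      """)

      conn.execute("INSERT OR IGNORE INTO nodes VALUES (1, 'alice'), (2, 'bob')")
      conn.execute("INSERT OR IGNORE INTO edges VALUES (1, 2)")
      conn.commit()

      # Neighbours of node 1: one indexed lookup per hop. Deep traversals --
      # the case where graph databases shine -- need recursive queries instead.
      print(conn.execute("SELECT dst FROM edges WHERE src = 1").fetchall())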

    Read the article

  • Efficient storage in C#.net App

    - by Tommy
    I'm looking for the fastest, least memory-consuming, stand-alone storage method available for large amounts of data in my C# app. My initial thoughts: SQL: no, not stand-alone. XML in a flat file: no, it takes too long to parse large amounts of data. Other options? Basically, what I'm looking for is a way to load everything when my application loads, keep all the data in my app, and when the data in my app changes, just update the storage location.
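
    The pattern being described (load once at startup, hold everything in memory, write back only what changes) maps naturally onto an embedded database such as SQLite, which does run in-process on .NET via System.Data.SQLite. A language-neutral sketch of the idea, in Python for brevity:

      import sqlite3

      class Store:
          """In-memory dict, persisted write-through to an embedded DB file."""
          def __init__(self, path="app.db"):
              self.db = sqlite3.connect(path)
              self.db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
              # Load everything once at startup; all reads are then in-memory.
              self.data = dict(self.db.execute("SELECT k, v FROM kv"))

          def __getitem__(self, k):
              return self.data[k]

          def __setitem__(self, k, v):
              self.data[k] = v
              # Only the changed row touches disk, not the whole data set.
              self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
              self.db.commit()

      store = Store()
      store["setting"] = "value"
      print(store["setting"])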

    Read the article

  • android internal phone storage

    - by John
    How can you retrieve the phone's internal storage from an app? I found MemoryInfo, but that seems to return information about how much memory your currently running tasks are using. I am trying to get my app to retrieve how much internal phone storage is available.
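
    On Android the usual answer is android.os.StatFs pointed at Environment.getDataDirectory(), multiplying available blocks by block size. The same statvfs arithmetic, sketched in Python for illustration (the /data path as the internal-storage mount is an assumption here):

      import os

      st = os.statvfs("/data")                 # internal storage mount (assumption)
      free_bytes = st.f_bavail * st.f_frsize   # available blocks x fragment size
      print("available: %.1f MiB" % (free_bytes / 2**20))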

    Read the article

  • Efficient storage in .Net App

    - by Tommy
    I'm looking for the fastest, least memory-consuming, stand-alone storage method available for large amounts of data in my C# app. My initial thoughts: SQL: no, not stand-alone. XML in a flat file: no, it takes too long to parse large amounts of data. Other options? Basically, what I'm looking for is a way to load everything when my application loads, keep all the data in my app, and when the data in my app changes, just update the storage location.

    Read the article

  • Hybrid USB Install Method - netboot and iso

    - by Samus Arin
    I was following the steps here ("Preparing Files for USB Memory Stick Booting") https://help.ubuntu.com/10.04/installation-guide/i386/boot-usb-files.html to create an installation USB drive for 12.10. The very first paragraph of the article states: "The second is to also copy a CD image onto the USB stick and use that as a source for packages, possibly in combination with a mirror." However, the only instruction regarding an ISO image is to simply copy one somewhere onto the drive (after it has been made bootable and syslinux, vmlinuz, and initrd.gz installed/copied): "you should now copy an Ubuntu ISO image onto the stick." I thought it strange there were no configuration steps for pointing the kernel to the ISO (like a line in syslinux.cfg, a boot: option, or something), but went ahead with the install anyway. I don't think the ISO was used at all; it appeared that all the OS files were downloaded during the install process. I was therefore wondering if anyone knows how to use this local ISO image in this particular installation technique (I know the image can be written with dd, but that's a different technique), because I need to reinstall (I installed Unity, but it's way too much for my little Atom-based netbook). Thank you.

    Read the article

  • Hybrid wireless network repeating

    - by Oli
    Summary: I'd like to use two Ubuntu computers to extend/complement an existing wireless access point. I have a network which currently looks a bit like this: What the diagram doesn't show is the interference caused by our house. It's a wifi-blocking robot sent here from the past. The two wired computers are in areas where the signal is most blocked (not by design, just a happy coincidence). Both wired computers have fairly good network cards. They're both Ubuntu machines and I would like to turn them into additional base stations. I know I could throw more networking hardware at this (network extenders, or cabling in additional pure wireless access points), but I've got two Linux machines sitting in ideal places and I feel like they should be able to help me out. I've tried ad-hoc networks, but I need something that is a lot more transparent (e.g. you can migrate from base to base without a connection dropping); it should look like one network to clients.

    Read the article

  • Partition Table and Exadata Hybrid Columnar Compression (EHCC)

    - by Bandari Huang
    Create an EHCC table:

    CREATE TABLE ... COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH];

    select owner, table_name, compress_for from dba_tables where compression = 'ENABLED';

    Convert a table/partition/subpartition to EHCC:

    ALTER TABLE table_name MOVE COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];
    ALTER TABLE table_name MOVE PARTITION partition_name COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];
    ALTER TABLE table_name MOVE SUBPARTITION subpartition_name COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];

    select owner, table_name, compress_for from dba_tables where compression = 'ENABLED';
    select table_owner, table_name, partition_name, compress_for from dba_tab_partitions where compression = 'ENABLED';
    select table_owner, table_name, subpartition_name, compress_for from dba_tab_subpartitions where compression = 'ENABLED';

    Rebuild unusable indexes:

    select index_name from dba_indexes where status = 'UNUSABLE';
    select index_name, partition_name from dba_ind_partitions where status = 'UNUSABLE';
    select index_name, subpartition_name from dba_ind_subpartitions where status = 'UNUSABLE';

    ALTER INDEX index_name REBUILD [PARALLEL <dop>];
    ALTER INDEX index_name REBUILD PARTITION partition_name [PARALLEL <dop>];
    ALTER INDEX index_name REBUILD SUBPARTITION subpartition_name [PARALLEL <dop>];

    Convert a table/partition/subpartition from EHCC to OLTP compression or an uncompressed format:

    ALTER TABLE table_name MOVE [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];
    ALTER TABLE table_name MOVE PARTITION partition_name [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];
    ALTER TABLE table_name MOVE SUBPARTITION subpartition_name [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];

    select owner, table_name, compress_for from dba_tables where compression = 'DISABLED';
    select table_owner, table_name, partition_name, compress_for from dba_tab_partitions where compression = 'DISABLED';
    select table_owner, table_name, subpartition_name, compress_for from dba_tab_subpartitions where compression = 'DISABLED';

    Then rebuild any unusable indexes, using the same queries and ALTER INDEX ... REBUILD statements as above.

    Read the article

  • How to install Ubuntu 13.10 on Hybrid Disk alongside Windows 8.1

    - by user205691
    I am having trouble installing Ubuntu 13.10 on an HP Envy 4-1046tx ultrabook. When I bought this, it came with Windows 7 pre-installed, but I upgraded it to 8 and recently to 8.1. Somehow I feel 8.1 is slower, or something went wrong with the upgrade and made my system slow, so I want to try dual-booting Ubuntu 13.10 with Windows 8.1. The system recovery drive has the Windows 7 recovery files. The SSD has 4 GB allocated to Windows 8 (I think for hibernation/Rapid Start). 25 GB of the SSD is free and I want to install Ubuntu on this SSD, pointing it to "/". I will also shrink the Windows partition (the only other partition available apart from recovery & SSD) to free up 100 GB and allocate this space to "/home" during the Ubuntu installation. I tried the above steps while on Windows 8, but was not successful: the Ubuntu installation went fine, but GRUB was not loaded. I tried to deploy Linux via EasyBCD, but after that too, selecting Linux in the boot menu would load GRUB at a command prompt and do nothing. During the Ubuntu installation I also deleted the RAID drivers with sudo dmraid -rE, but Ubuntu still didn't recognize my Windows. I think I am missing some steps, so this time I want to do it right, with proper information, before starting the process. My requirements:

    - dual boot Ubuntu with Windows 8.1
    - c:\ shrunk; Windows with 300 GB on sda1, 100 GB for /home on sda1, and Ubuntu installed on the 25 GB SSD volume sda2 (this is mSATA, I think)
    - GRUB or EFI that lets me load both OSes properly without breaking anything
    - a swap partition can be added on sda1 if needed (4 GB?)

    I have backed up my drive and have a 16 GB USB 3.0 stick with Ubuntu loaded. I hope I have mentioned everything I need and know. All I need now is some guidance on doing this right so that the installation goes as planned :)

    Read the article

  • How In-Memory Database Objects Affect Database Design: Hybrid Code

    - by drsql
    In my first attempts at building my code, I went strictly with either native or on-disk code. I specifically wrote the on-disk code to use only features that worked in-memory. This led to one majorly silly bit of code, used to create system-assigned key values. How would I create a customer number that was unique? We can't use the Max(value) + 1 approach, because it will be very hideous with MVCC isolation levels, since 100 connections might see the same value, leading to lots of duplication. You...(read more)
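
    The duplication problem is easy to reproduce outside the database; a toy Python sketch (mine, purely illustrative) of why "read MAX, add 1" races under concurrency while an atomic allocator, the role a SEQUENCE plays in SQL, does not:

      import itertools
      import threading
      import time

      rows = []                      # pretend table
      counter = itertools.count(1)   # atomic allocator in CPython, like a SEQUENCE

      def insert_max_plus_one():
          nxt = (max(rows) if rows else 0) + 1   # many threads may see the same MAX
          time.sleep(0.001)                      # widen the read-then-write window
          rows.append(nxt)

      def insert_from_sequence():
          rows.append(next(counter))             # allocation is a single atomic step

      for target in (insert_max_plus_one, insert_from_sequence):
          rows.clear()
          threads = [threading.Thread(target=target) for _ in range(100)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          print(target.__name__, "duplicates:", len(rows) - len(set(rows)))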

    Read the article

  • Extremely Hybrid Game requirements

    - by tugrul büyükisik
    What system specifications would a game need if it had:

    - Total players per planet: ~20,000
    - Total players per team: ~1M
    - Total players per map (a small volume of space or a small surface on a planet): ~2,000
    - Total players: ~10M (the world has more players than this, I think)
    - Two of the players are commanders of opposite quadrants (with the HUD of a strategy game).
    - Lots of players captain spacecraft (like 3D FPS and RTS).
    - Many, many players control consoles in those spacecraft under the command of the captains (FPS).
    - Some players are still in the stone age, trying to reinvent the wheel on some planet.
    - Players design and construct any vehicles they have, with a good physics engine.
    - Has puzzles inside.
    - Everyone gains experience by doing things (RPG).
    - Commerce, income, or a totally different resource-based group (like StarCraft).
    - Player classes (primitive: cunning and strong; wrapped: healthy, wealthy).
    - Arcade top-down style firing with ships when people get bored.
    - A very low chance of miraculous things (midichlorians, wormholes, bugs).
    - Different game modes: persistent (living world), reset periodically (a new chance for noobs), instant (pre-built space + hack & slash).

    I suspect this would need 128 GB of RAM and 2048 cores.

    Read the article

  • Optimization of a Hybrid Pagination Scheme

    - by Kaustubh Karkare
    I'm working on a web application using node.js in which I'm building a partial copy of the database on the client side to decrease the load on my server. Right now, I have a function like this (expressed as Python-style pseudocode, but implemented in JavaScript):

    get(table_name, primary_key):
        if primary_key in cache[table_name]:
            return cache[table_name][primary_key]
        else:
            x = get_data_from_server(table_name, primary_key)  # socket.io
            cache[table_name][primary_key] = x
            return x

    While this scheme works perfectly well for caching individual rows, I'd like to extend it to support the creation of paginated tables ordered according to the primary_key, loading additional data via the above function only for the current and possibly the adjacent pages. Now, I don't want to keep the list of primary keys on the server, to be retrieved every time I need to change the page (which, for reasons beyond the scope here, will be very frequent), and keeping it on the client side, subject to real-time create/delete events from the server, doesn't seem that good an idea, even after compression (using ranges instead of individual values). What is the best way to calculate which items are to be displayed on a random page, minimizing the space requirements and the need for communication with the server?
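
    For scale, the client-side key list the question is reluctant about is the baseline any alternative has to beat, and its bookkeeping is modest; a minimal sketch of maintaining it with a sorted list and binary search (mine, in Python to match the pseudocode above; PAGE_SIZE and the event names are assumptions):

      import bisect

      keys = []          # sorted primary keys, maintained from server events
      PAGE_SIZE = 50

      def on_create(pk):
          bisect.insort(keys, pk)            # O(n) insert, O(log n) search

      def on_delete(pk):
          i = bisect.bisect_left(keys, pk)
          if i < len(keys) and keys[i] == pk:
              del keys[i]

      def page(n):
          """Primary keys for page n; fetch any uncached rows with get()."""
          return keys[n * PAGE_SIZE : (n + 1) * PAGE_SIZE]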

    Read the article

  • Ubuntu 13.10, kernel 3.11 blank screen issue with hybrid graphics

    - by Lagerbaer
    On my HP Envy, which has both an Intel on-chip graphics card and an Nvidia GeForce:

    *-display UNCLAIMED
         description: 3D controller
         product: GK208M [GeForce GT 740M]
         vendor: NVIDIA Corporation
         physical id: 0
         bus info: pci@0000:01:00.0
         version: a1
         width: 64 bits
         clock: 33MHz
         capabilities: cap_list
         configuration: latency=0
         resources: memory:d2000000-d2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:5000(size=128) memory:b2000000-b207ffff
    *-display
         description: VGA compatible controller
         product: 4th Gen Core Processor Integrated Graphics Controller
         vendor: Intel Corporation
         physical id: 2
         bus info: pci@0000:00:02.0
         version: 06
         width: 64 bits
         clock: 33MHz
         capabilities: vga_controller bus_master cap_list rom
         configuration: driver=i915 latency=0
         resources: irq:46 memory:d3000000-d33fffff memory:c0000000-cfffffff ioport:6000(size=64)

    I have trouble with all newer kernels. I basically had to install 12.04 LTS and use its 3.5 kernel family to get the system to boot. The 3.8 from 12.10 or the newest 3.11 from Ubuntu 13.10 leave me with a black screen upon boot. On one occasion I did hear the "log in" sound, but the screen did not display anything. I have purged all Nvidia drivers, so I guess it should just use the Intel driver, but apparently this is all messed up in newer kernel versions. This is different from the other "Nvidia boots into blank screen" bug in that I don't rely solely on an Nvidia card: surely the Intel on-chip card should be supported and leave me with something other than a blank screen? Again, it only works with kernel version 3.5.0-41-generic, not with the 3.11.0-12 one that ships with Ubuntu 13.10. When I go into the GRUB menu and change the boot options from 'quiet splash' to 'nomodeset', I am able to boot the system, but then I don't get any graphics, and trying 'sudo service lightdm start' doesn't succeed (I get 100% CPU for apport, which doesn't do anything either, so I kill it). Help, I'm all out of ideas. EDIT: Let me add that I'm using the EFI boot system and have a dual-boot installation with Windows 8.

    Read the article

  • Distributed Computing - Hybrid Systems Considerations

    When the Cloud was new, it was often presented as an 'all or nothing' solution. Nowadays, the canny Systems Architect will exploit the best advantages of 'cloud' distributed computing in the right place, and use in-house services where most appropriate. So what are the issues that govern these architectural decisions?

    Read the article

  • ZFS Storage Appliance: LDAP configuration

    - by user13138569
    Configuring OpenLDAP as the directory service for a ZFS Storage Appliance. Here, OpenLDAP runs on Solaris 11, set up with the slapd.conf and LDIF below to create a user01 account.

    slapd.conf:

    #
    # See slapd.conf(5) for details on configuration options.
    # This file should NOT be world readable.
    #
    include /etc/openldap/schema/core.schema
    include /etc/openldap/schema/cosine.schema
    include /etc/openldap/schema/nis.schema

    # Define global ACLs to disable default read access.

    # Do not enable referrals until AFTER you have a working directory
    # service AND an understanding of referrals.
    #referral ldap://root.openldap.org

    pidfile /var/openldap/run/slapd.pid
    argsfile /var/openldap/run/slapd.args

    # Load dynamic backend modules:
    modulepath /usr/lib/openldap
    moduleload back_bdb.la
    # moduleload back_hdb.la
    # moduleload back_ldap.la

    # Sample security restrictions
    #   Require integrity protection (prevent hijacking)
    #   Require 112-bit (3DES or better) encryption for updates
    #   Require 63-bit encryption for simple bind
    # security ssf=1 update_ssf=112 simple_bind=64

    # Sample access control policy:
    #   Root DSE: allow anyone to read it
    #   Subschema (sub)entry DSE: allow anyone to read it
    #   Other DSEs:
    #     Allow self write access
    #     Allow authenticated users read access
    #     Allow anonymous users to authenticate
    #   Directives needed to implement policy:
    # access to dn.base="" by * read
    # access to dn.base="cn=Subschema" by * read
    # access to *
    #   by self write
    #   by users read
    #   by anonymous auth
    #
    # if no access controls are present, the default policy
    # allows anyone and everyone to read anything but restricts
    # updates to rootdn. (e.g., "access to * by * read")
    #
    # rootdn can always read and write EVERYTHING!

    #######################################################################
    # BDB database definitions
    #######################################################################

    database bdb
    suffix "dc=oracle,dc=com"
    rootdn "cn=Manager,dc=oracle,dc=com"
    # Cleartext passwords, especially for the rootdn, should
    # be avoid. See slappasswd(8) and slapd.conf(5) for details.
    # Use of strong authentication encouraged.
    rootpw secret
    # The database directory MUST exist prior to running slapd AND
    # should only be accessible by the slapd and slap tools.
    # Mode 700 recommended.
    directory /var/openldap/openldap-data
    # Indices to maintain
    index objectClass eq

    LDIF:

    dn: dc=oracle,dc=com
    objectClass: dcObject
    objectClass: organization
    dc: oracle
    o: oracle

    dn: cn=Manager,dc=oracle,dc=com
    objectClass: organizationalRole
    cn: Manager

    dn: ou=People,dc=oracle,dc=com
    objectClass: organizationalUnit
    ou: People

    dn: ou=Group,dc=oracle,dc=com
    objectClass: organizationalUnit
    ou: Group

    dn: uid=user01,ou=People,dc=oracle,dc=com
    uid: user01
    objectClass: top
    objectClass: account
    objectClass: posixAccount
    objectClass: shadowAccount
    cn: user01
    uidNumber: 10001
    gidNumber: 10000
    homeDirectory: /home/user01
    userPassword: secret
    loginShell: /bin/bash
    shadowLastChange: 10000
    shadowMin: 0
    shadowMax: 99999
    shadowWarning: 14
    shadowInactive: 99999
    shadowExpire: -1

    With the LDAP server running, point the ZFS Storage Appliance at it: under Configuration > SERVICES > LDAP, set the Base search DN to the directory's base DN. The appliance can then resolve the user01 account registered in LDAP; if the lookup fails, the appliance reports "Unknown or invalid user". To verify the directory from the Solaris 11 side, enable the LDAP client and check with getent:

    # svcadm enable svc:/network/nis/domain:default
    # svcadm enable ldap/client
    # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
    System successfully configured
    # getent passwd user01
    user01:x:10001:10000::/home/user01:/bin/bash

    Finally, mount a share from the appliance and confirm that files created by user01 get the correct ownership:

    # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
    # su user01
    bash-4.1$ cd /mnt
    bash-4.1$ touch aaa
    bash-4.1$ ls -l
    total 1
    -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    With that, LDAP-backed users work against the ZFS Storage Appliance!

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN, or installing GlusterFS/Lustre/HDFS/RBD

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be a file-sharing/downloading project (like RapidShare); I would need large storage sizes and good scalability, and I would add new storage nodes as the project grows. I have come up with candidate solutions built around Lustre, GlusterFS, HDFS, or RBD. To start, I would have two servers: one running the GlusterFS client + web server + DB server + a streaming server, and the other acting as a Gluster storage node. (After some time I would add more node servers and client servers; I don't yet know how many client servers to add, I will see later.) So I am thinking of working with GlusterFS. But I really wonder whether I need high-performance servers with large storage sizes, or whether average/slow servers with large storage sizes will do. Or are NAS/DAS/SAN solutions better for GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server properties (for both clients and nodes); I really don't know if I need a large amount of RAM and good CPUs for the nodes, though I am sure I need them for the client servers. The files would be streamed as well, so automatic file replication is important; my system should work like a cloud: under high traffic, the storage nodes should copy the most-demanded files for streaming when needed, which would help me avoid scalability problems and let my visitors stream/download those files. Also, I am open to your experiences/thoughts about any good solution; Lustre, HDFS, and RBD are the other options, and I would be happy to hear your thoughts on them too. Thanks.

    Read the article

  • Which is the fastest way to move 1Petabyte from one storage to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see, it is a bit difficult. A short description:

    Now: storage = 1 PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10GigE network (Lustre 1.6); 100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports.

    After: storage = the previous storage plus another 1 PB using DDN S2A9900 or LSI E5400 (still to be decided) (Lustre 2.0); 8 OSS, 10GigE network; 100 compute nodes with 2x InfiniBand.

    Previous experience: transferred 120 TB in less than 3 days using the following command:

    tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, the big problem: using big mathematical equations, I conclude that we are going to need one month to transfer the data from one side to the other. During this time the researchers would have to step back, and I'm personally not happy with that. I mention that we have InfiniBand connections because I think there may be a chance to use them to transfer the data, using 18 compute nodes (18 × 2 IB = 36 ports) to move data from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, it will still go faster than 10GigE. Also, Lustre 1.6 and 2.0 agents work quite well on the same server, so there is no need to go via 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks.

    Note 1: Zoredache, we can divide it into two blocks: (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then format where (A) was with Lustre 2.0 and move (B) to this Lustre 2.0 block, extending it with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1 PB each.
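
    For the "big mathematical equations", a quick sanity check in Python, using assumed sustained throughputs rather than measured ones:

      # Rough transfer-time estimates for 1 PB at various sustained rates.
      petabyte = 10**15  # bytes

      for label, gbit in [("10GigE at ~40% efficiency (assumption)", 4),
                          ("10GigE at line rate", 10),
                          ("multi-node IB aggregate (assumed 100 Gbit/s)", 100)]:
          seconds = petabyte / (gbit * 1e9 / 8)   # bits/s -> bytes/s
          print(f"{label}: {seconds / 86400:.1f} days")
      # 10GigE at ~40% efficiency (assumption): 23.1 days
      # 10GigE at line rate: 9.3 days
      # multi-node IB aggregate (assumed 100 Gbit/s): 0.9 days

    This is consistent with the one-month estimate over a single, realistically loaded 10GigE path, and shows why spreading the copy across the InfiniBand fabric is attractive.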

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi. What I've tried: I have two email storage architectures, old and new.

    Old:
    - courier-imapd on several (18+) 1 TB-storage servers
    - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    - the servers don't have replicas; no backups either

    New:
    - dovecot 2 on a single huge server with 16 TB (SATA) storage and a few SSDs
    - we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
    - there is an identical server which has a max-15-minutes-old rsync backup from the primary server
    - higher-ups/management wanted to pack as much storage as possible into each server to minimise the cost of SSDs per server
    - the rsyncing is done because GlusterFS wasn't replicating well under that much small/random IO
    - scaling out was expected to be done by provisioning another pair of such huge servers
    - on facing disk-crunch issues like in the old architecture, accounts would again be moved manually

    Concerns/doubts: I'm not convinced the synchronously-replicated-filesystem idea works well for heavy random/small IO; GlusterFS isn't working for us yet, and I'm not sure there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions:
    - What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
    - Does traditional software that stores mail on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
    - Does one have to re-write (parts of) these to work with an object storage of some sort, such as OpenStack object storage?
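
    One typical method, short of re-architecting onto object storage, is to make account placement deterministic rather than manual: hash the account name to a shard, and have the delivery/IMAP front end resolve the mapping (in Dovecot setups, its director/proxy layer plays this lookup role). A toy sketch, with the shard names and hash choice as my own assumptions:

      import hashlib

      SHARDS = ["store1", "store2", "store3"]   # illustrative backend names

      def shard_for(account: str) -> str:
          # Stable hash: the same account always lands on the same backend.
          h = int(hashlib.md5(account.encode()).hexdigest(), 16)
          return SHARDS[h % len(SHARDS)]

      print(shard_for("alice@example.com"))
      # Caveat: plain modulo reshuffles most accounts when SHARDS grows;
      # consistent hashing or an explicit account->shard table avoids that.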

    Read the article
