Search Results

Search found 13808 results on 553 pages for 'remote storage'.


  • Will a larger hard drive affect performance?

    - by user273010
    My laptop came with a 500 GB hard drive. I use it for storing my digital photographs and only have about 14 GB of space left on the original drive. I have a 750 GB external hard drive, but I am leery of relying on it for primary storage: I tend to knock things over, and it has already crashed once, costing me a lot of files. I am looking at a 1 TB internal hard drive, but am concerned that storing so much data will affect the computer's performance. Should I also increase RAM from 4 to 8 GB (the limit for my 64-bit, Windows 7, Asus A54C laptop)?

    Read the article

  • Best Solution For My Requirements

    - by Eray
    Hello, I'm a web developer. I have a few small online web applications and a few WordPress blogs, but I don't have much experience installing or configuring web servers. One of my web applications needs cron jobs: it will check the availability of a lot of web sites, and it will consume a lot of RAM, so I think shared hosting isn't suitable for it. I think 1 GB of storage is enough, though; I don't need much storage for my web sites. What do you think? Which hosting solution is more suitable for my requirements: reseller, VPS, cloud server, etc.?

    Read the article

  • How to create a bash script to check the SSH connection?

    - by chutsu
    I am in the process of creating a bash script that would log into remote machines and create private and public keys. My problem is that the remote machines are not very reliable and are not always up, so I need the script to check whether the SSH connection is up before actually creating the keys for future use.
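
    A minimal sketch of such a pre-check, probing port 22 with nc before any key work is attempted (host names and the 5-second timeout are placeholders):

        #!/bin/bash
        # Skip hosts whose sshd is not reachable right now; retry them later.
        for host in host1 host2 host3; do
            if nc -z -w 5 "$host" 22 2>/dev/null; then
                echo "$host: sshd reachable, proceeding with key setup"
                # ssh-keygen / key-distribution steps would go here
            else
                echo "$host: unreachable, skipping" >&2
            fi
        done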

    Read the article

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but I am having great difficulty extending it to a remote data center. We currently do a full backup on Friday and differentials Monday through Thursday, then rotate disks offsite on Friday morning... rinse and repeat week after week. (We use disks, by the way, and have been very happy with this approach.) We could buy a large storage server and back everything up to it, but that doesn't give us an offsite copy. We could encrypt and upload to Amazon or some other online storage, but that would take a long time given the amount of data and would be rather expensive once we pay for the bandwidth leaving the data center and arriving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now, but that just seems old-fashioned. What am I missing? Are there better options?
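
    For scale, the encrypt-and-upload option could be sketched with duplicity (bucket name and paths are hypothetical); it encrypts with GPG before anything leaves the data center and, after the first full run, ships only changed blocks, which softens the bandwidth cost:

        #!/bin/bash
        # Assumes AWS credentials in AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY.
        export PASSPHRASE='use-a-real-secret'
        # Incremental most days, a fresh full backup once a week.
        duplicity --full-if-older-than 7D /srv/data s3+http://example-backup-bucket
        unset PASSPHRASE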

    Read the article

  • Need a hard disk recommendation for a Linux home server

    - by neotracker
    Hello, I'm planning to build a little Linux home server. It will mainly be used for storage and maybe as a media PC. I plan to build a software RAID 5 with four 1.5 TB or 2 TB hard drives. I had already decided on the Western Digital Caviar Green 1.5 TB drive, but then I read about problems with the WD Green series: many drives failing, and the line not being recommended for RAID anyway. Of course, I couldn't find many facts on the issues, so I thought I'd just ask here ;-) What hard drives would you recommend for a software RAID 5 setup? As I only need it for storage, the whole thing doesn't have to be fast, so I prefer a cheap price and silence over great performance.

    Read the article

  • Start a ZFS RAIDZ zpool with two discs then add a third?

    - by Doug S.
    Let's say I have two 2TB HDDs and I want to start my first ZFS zpool. Is it possible to create a RAIDZ with just those two discs, giving me 2TB of usable storage (if I understand it right) and then later add another 2TB HDD bringing the total to 4TB of usable storage. Am I correct or does there need to be three HDDs to start with? The reason I ask is I already have one 2TB drive I'm using that's full of files. I want to transition to a zpool but I'd rather only buy two more 2TB drives if I can. From what I understand, RAIDZ behaves similarly to RAID5 (with some major differences, I know, but in terms of capacity). However, RAID5 requires 3+ drives. I was wondering if RAIDZ has the same requirement. If I have to, I can buy the three drives and just start there, later adding the fourth, but if I could start with two and move to three that would save me $80.
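
    Whether RAIDZ accepts a two-disc vdev is exactly the question, so for comparison here is a sketch of the mirror-based route, which definitely works with two discs and grows by whole vdevs (device names hypothetical); note it yields a pool of mirrors rather than RAIDZ:

        #!/bin/bash
        # Two 2 TB discs in a mirror: 2 TB usable, survives one disc failure.
        zpool create tank mirror /dev/sdb /dev/sdc
        # Later, grow the pool by adding a second mirrored pair (4 TB usable total).
        zpool add tank mirror /dev/sdd /dev/sde
        zpool status tank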

    Read the article

  • vsftpd with pam_winbind.so

    - by David
    I'm trying to set up vsftpd to use logins from our domain. I want FTP users to be able to log in with their Active Directory username/password and have full access to /media/storage/ftp/username. I set up PPTP using winbind and it works fine, so I believe the issue is with vsftpd and PAM. The FTP server runs but answers 530 on login. I turned on debug for the PAM module, yet I see nothing in syslog; vsftpd only records the failed login in its own log.

    /etc/pam.d/vsftpd:

        auth required pam_winbind.so debug

    /etc/vsftpd.conf:

        listen=YES
        listen_ipv6=NO
        connect_from_port_20=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        xferlog_enable=YES
        idle_session_timeout=600
        data_connection_timeout=120
        nopriv_user=ftp
        ftpd_banner=Welcome to Scantiva! Authorized access only!
        local_umask=022
        local_root=/media/storage/ftp/$USER
        user_sub_token=$USER
        chroot_local_user=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        guest_enable=YES
        guest_username=ftp
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=NO
        force_local_logins_ssl=NO
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
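
    One hedged guess, since the stack above only has an auth line: vsftpd's PAM service usually needs an account entry too, and a missing one can produce a silent 530. A quick test (as root, with a backup first) might be:

        #!/bin/bash
        # Assumption, not a confirmed fix: let pam_winbind authorize the account
        # as well as authenticate it.
        cp /etc/pam.d/vsftpd /etc/pam.d/vsftpd.bak
        echo 'account required pam_winbind.so debug' >> /etc/pam.d/vsftpd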

    Read the article

  • SVN: Checking out a large project over slow connection

    - by far
    Hello, I am new to SVN. I want to check out a very large project over a slow connection, which takes ages. I have identical zipped copies of the project on both the remote server and my local machine. Is there an easy, quick way to sync my local copy with the remote server without a full checkout? Thanks
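
    One workaround sketch, assuming shell access to the server: build the working copy (including its .svn metadata) where the link to the repository is fast, move the archive out-of-band, and let a plain update fetch only what changed since the archive was made. The repository URL below is hypothetical:

        #!/bin/bash
        # On the server, next to the repository:
        svn checkout http://svn.example.com/repo/trunk project
        tar czf project.tgz project      # the tarball keeps the .svn directories
        # Transfer project.tgz by any convenient means, then locally:
        tar xzf project.tgz
        cd project && svn update         # pulls only changes made since the tarball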

    Read the article

  • How to import an SVN repository underneath a Git repository?

    - by Thiago Moreira
    Hi there, I have an SVN repository that I migrated to Git using the tool svn2git. Now I would like to push this Git layout to a remote repository underneath an existing directory, but I would like to keep the SVN history (tags and branches). For instance, the Git remote repository layout would be:

        git-repository/dirA
        git-repository/dirB
        git-repository/dirC/svn-repository-migrated-to-git

    Does that make sense? Is it possible? Thanks
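
    One possible route, sketched with git subtree (the local path and branch name are assumptions); the subtree merge grafts the migrated commits under dirC/, and the svn2git tags remain reachable once fetched:

        #!/bin/bash
        cd git-repository
        git remote add migrated /path/to/svn-repository-migrated-to-git
        git fetch migrated --tags
        # Join the migrated repository's master branch under the target directory.
        git subtree add --prefix=dirC/svn-repository-migrated-to-git migrated master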

    Read the article

  • iMac boot from linux partition on external drive

    - by user74757
    I have the following setup:

        iMac (no internal drive / dead)
          |-- (FireWire) --> [ MAC OS X ]
          |-- (USB) -------> [ MISC STORAGE PARTITION | MISC STORAGE PARTITION | EXT2 UBUNTU PARTITION ]

    I routinely use the FireWire drive to boot Mac OS X. However, I would like to boot from the Linux partition of the USB drive. That Linux partition had Linux installed on it from a live CD, and during that process I told the installer to install GRUB on the USB drive (which happened to be /dev/sdd). My question is: how do I get this disk to show up during the iMac option-boot? Currently, only the FireWire Mac OS X option shows up. I have read about rEFIt, but that appears to install to the Mac OS X disk (would that still work?). Also mentioned was installing rEFIt to the internal EFI system partition, but I don't know if that is wise.

    Read the article

  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets...

        Found primary backup chain with matching signature chain:
        -------------------------
        Chain start time: Tue Jun 21 11:27:26 2011
        Chain end time: Tue Jun 21 11:27:59 2011
        Number of contained backup sets: 2
        Total number of contained volumes: 2
        Type of backup set:          Time:                       Num volumes:
                       Full          Tue Jun 21 11:27:26 2011               1
                Incremental          Tue Jun 21 11:27:59 2011               1

    If I run the following command, it works (1308655646 is Tue Jun 21 11:27:26 2011 converted to a Unix timestamp):

        duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
            file:///storage/test/ restored-file.txt

    However, if I run the following command, it restores from the latest set:

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
            ORIG_FILE file:///storage/test/ restored-file.txt

    What am I doing wrong with the time? I prefer the second form only because I don't want to do the conversion manually.
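
    One hedged guess at the cause: duplicity's w3 datetime format expects an explicit UTC offset, so a bare 2011-06-21T11:27:26 may not be parsed as intended. Adding the zone that produced the chain's timestamps (the -04:00 below is a placeholder) would look like:

        #!/bin/bash
        duplicity --no-encryption --restore-time "2011-06-21T11:27:26-04:00" \
            --file-to-restore ORIG_FILE file:///storage/test/ restored-file.txt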

    Read the article

  • SQL server environment

    - by Olegas D
    Hello, I'm considering some changes to our current sales environment and trying to weigh all the pros and cons. Current situation: an SQL Server machine (a quite decent HP server, server1) plus a backup server (a smaller Dell server, server2). All SQL files and SQL Server itself are on server1; if something goes wrong with server1, I will have to move to server2 manually. Connecting to the SQL server: one HQ (where the server is located) plus four sites through VPN. Now I'm considering two scenarios:

        1. Buy a storage system, upgrade the existing servers (add RAM, upgrade processors), and go for VMware ESXi.
        2. Rent a server at a datacenter, plus a virtual server in case the real server goes down, and rent space at a data storage provider to keep the SQL files there.

    Has anyone weighed these options and maybe found some good pros/cons? ;) Thanks

    Read the article

  • Many-to-many mapping with LINQ

    - by Alexander
    I would like to perform LINQ to SQL mapping in C# for a many-to-many relationship where the related data is not mandatory. To be clear: I have a news site/blog with a table called Posts. A post can relate to many categories at once, so there is a table called CategoriesPosts whose foreign keys link to the Posts table and the Categories table. Each table has an identity primary key, an id field, if it matters in this case. In C# I defined a class for each table and defined each field as explicitly as possible. The Post class, as well as the Category class, has an EntitySet to link to CategoryPost objects, and the CategoryPost class has two EntityRef members to link to objects of each of the other types. The problem is that a Post may or may not relate to any category, just as a category may or may not have posts in it. I didn't find a way to make an EntitySet<CategoryPost?> or something like that. So when I added the first post, all went well with not a single SQL statement, and this post was present in the output. When I tried to add the second post I got an exception, "Object reference not set to an instance of an object", pointing to the CategoryPost member.

    Post:

        [Table(Name="tm_posts")]
        public class Post : IDataErrorInfo
        {
            public Post()
            {
                // Initialization of NOT NULL fields with their default values
            }

            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            private EntitySet<CategoryPost> _categoryRef = new EntitySet<CategoryPost>();

            [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_categoryRef",
                         ThisKey = "ID", OtherKey = "PostID")]
            public EntitySet<CategoryPost> CategoryRef
            {
                get { return _categoryRef; }
                set { _categoryRef.Assign(value); }
            }
        }

    CategoryPost:

        [Table(Name = "tm_rel_categories_posts")]
        public class CategoryPost
        {
            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            [Column(Name = "fk_post", DbType = "int", CanBeNull = false)]
            public int PostID { get; set; }

            [Column(Name = "fk_category", DbType = "int", CanBeNull = false)]
            public int CategoryID { get; set; }

            private EntityRef<Post> _post = new EntityRef<Post>();

            [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_post",
                         ThisKey = "PostID", OtherKey = "ID")]
            public Post Post
            {
                get { return _post.Entity; }
                set { _post.Entity = value; }
            }

            private EntityRef<Category> _category = new EntityRef<Category>();

            [Association(Name = "tm_rel_categories_posts_fk", IsForeignKey = true, Storage = "_category",
                         ThisKey = "CategoryID", OtherKey = "ID")]
            public Category Category
            {
                get { return _category.Entity; }
                set { _category.Entity = value; }
            }
        }

    Category:

        [Table(Name="tm_categories")]
        public class Category
        {
            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            [Column(Name = "fk_parent", DbType = "int", CanBeNull = true)]
            public int ParentID { get; set; }

            private EntityRef<Category> _parent = new EntityRef<Category>();

            [Association(Name = "tm_posts_fk2", IsForeignKey = true, Storage = "_parent",
                         ThisKey = "ParentID", OtherKey = "ID")]
            public Category Parent
            {
                get { return _parent.Entity; }
                set { _parent.Entity = value; }
            }

            [Column(Name = "name", DbType = "varchar(100)", CanBeNull = false)]
            public string Name { get; set; }
        }

    So what am I doing wrong? How do I make it possible to insert a post that doesn't belong to any category? How do I insert categories with no posts?

    Read the article

  • Not able to safely remove external disk after having mounted and unmounted a VHD on it

    - by Agnel Kurian
    I am using Windows 7 SP1. I have an external hard disk (Seagate, 500 GB) which I am able to use without problems most of the time: I can plug it in, use it, and then safely unmount it via the "Eject USB Mass Storage Device" option in the taskbar tray. However, if I attach a VHD file located on this disk using Disk Management, then detach the VHD and finally try to safely disconnect the disk via the system tray, I get an error: "Problem Ejecting USB Mass Storage Device: Windows can't stop your 'Generic volume' device because a program is still using it. Close any programs that might be using the device, and then try again later." How do I avoid this problem? Which process could still be accessing the device, even after I have closed the Disk Management application?

    Read the article

  • How to do two-way integrations across different Perforce depots?

    - by Sorin Sbarnea
    I would like to know how we are supposed to do integration between different Perforce servers/depots; I'm looking for a solution that would allow two-way integrations. So far I have found the Using Remote Depots article, which explains how to map a remote depot as read-only. Is the only solution to set up mappings on both servers? That would mean I could not use a single branch spec for integrations in both directions.

    Read the article

  • Standalone server setup for compute capacity

    - by mikera
    I'm developing an application for my company that will require a lot of compute capacity (running some very big mathematical calculations), and I am looking for some form of server setup to do this. For various reasons, we want to run this on-site in our office rather than hosting it externally. It's been a while since I last had to set up my own servers, so I thought I would tap into the collective wisdom of serverfault! My broad requirements are:

        - Budget $30-50k, with an aim to get as much compute capacity as possible for that budget
        - 64-bit servers suitable to run Ubuntu Linux + Java
        - Some relatively standalone rack that can be installed in secure office space
        - Fast/low-latency network connections between the servers, but we don't really care about connectivity to the outside world
        - Storage capacity shared between the servers; they don't necessarily need their own storage providing they can be booted from a common image
        - Downtime can be tolerated (since the calculations are run in batch mode)
        - The software itself is fault-tolerant, so there is no need for extra resiliency in the server setup (cheap replaceable commodity parts will be fine in general)

    Given these requirements, what kind of setup would you recommend and why?

    Read the article

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running phpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2, connecting to a MySQL database on RDS. Everything works perfectly except actions that return a MySQL error message: whenever I perform any kind of action that returns a MySQL error, phpMyAdmin hangs with the yellow "Loading" box forever without displaying anything. For example, if I run the following command in the MySQL CLI:

        select * from 123;

    it instantly returns the following error, which is completely normal because table 123 doesn't exist:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near '123' at line 1

    However, if I execute the exact same command in the SQL box in phpMyAdmin, after I click "Go" it displays "Loading" and stops there forever. Has anyone encountered this kind of issue with phpMyAdmin? Is this a bug, or is there something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my Apache error logs:

        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open

    Below are my config.inc.php settings:

        <?php
        /* vim: set expandtab sw=4 ts=4 sts=4: */
        /**
         * phpMyAdmin sample configuration, you can use it as base for
         * manual configuration. For easier setup you can use setup/
         *
         * All directives are explained in documentation in the doc/ folder
         * or at <http://docs.phpmyadmin.net/>.
         *
         * @package PhpMyAdmin
         */

        /*
         * This is needed for cookie based authentication to encrypt password in
         * cookie
         */
        $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */

        /*
         * Servers configuration
         */
        $i = 0;

        /*
         * First server
         */
        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = true;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;
        $cfg['LoginCookieValidity'] = '3600';

        /*
         * phpMyAdmin configuration storage settings.
         */
        /* User used to manipulate with storage */
        $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['controluser'] = 'pma';
        $cfg['Servers'][$i]['controlpass'] = 'password';
        /* Storage database and tables */
        $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin';
        $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
        $cfg['Servers'][$i]['relation'] = 'pma__relation';
        $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
        $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
        $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
        $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
        $cfg['Servers'][$i]['history'] = 'pma__history';
        $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
        $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
        $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords';
        $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
        $cfg['Servers'][$i]['recent'] = 'pma__recent';
        /* Contrib / Swekey authentication */
        // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf';

        /*
         * End of servers configuration
         */

        /*
         * Directories for saving/loading files from server
         */
        $cfg['UploadDir'] = '';
        $cfg['SaveDir'] = '';

        /**
         * Defines whether a user should be displayed a "show all (records)"
         * button in browse mode or not.
         * default = false
         */
        //$cfg['ShowAll'] = true;

        /**
         * Number of rows displayed when browsing a result set. If the result
         * set contains more rows, "Previous" and "Next".
         * default = 30
         */
        $cfg['MaxRows'] = 50;

        /**
         * disallow editing of binary fields
         * valid values are:
         *   false    allow editing
         *   'blob'   allow editing except for BLOB fields
         *   'noblob' disallow editing except for BLOB fields
         *   'all'    disallow editing
         * default = blob
         */
        //$cfg['ProtectBinary'] = 'false';

        /**
         * Default language to use, if not browser-defined or user-defined
         * (you find all languages in the locale folder)
         * uncomment the desired line:
         * default = 'en'
         */
        //$cfg['DefaultLang'] = 'en';
        //$cfg['DefaultLang'] = 'de';

        /**
         * default display direction (horizontal|vertical|horizontalflipped)
         */
        //$cfg['DefaultDisplay'] = 'vertical';

        /**
         * How many columns should be used for table display of a database?
         * (a value larger than 1 results in some information being hidden)
         * default = 1
         */
        //$cfg['PropertiesNumColumns'] = 2;

        /**
         * Set to true if you want DB-based query history. If false, this utilizes
         * JS-routines to display query history (lost by window close)
         *
         * This requires configuration storage enabled, see above.
         * default = false
         */
        //$cfg['QueryHistoryDB'] = true;

        /**
         * When using DB-based query history, how many entries should be kept?
         *
         * default = 25
         */
        //$cfg['QueryHistoryMax'] = 100;

        /*
         * You can find more configuration options in the documentation
         * in the doc/ folder or at <http://docs.phpmyadmin.net/>.
         */
        ?>

    Read the article

  • 5 x 3GB drives and 4 x 1500GB drives: best RAID setup?

    - by Zen_Silence
    Hello, I am building a file server. My plan is to have the operating system on one RAID partition and the data storage on another. I currently have five 3 GB IDE drives that I would like to put the operating system on; these drives are old, but that doesn't matter to me at the moment since I have a ton of them, so for this RAID array I would want to be able to pull out a dead drive and rebuild the array. My file partition is going to consist of four 1.5 TB SATA drives, where I would like maximum storage with some redundancy. Any suggestions as to which RAID level I should use would be greatly appreciated, and it would also help if you could suggest a PCI or PCI-e RAID controller to handle these arrays. Thanks in advance, Zen_Silence

    Read the article

  • How do I tell cmake to do these two steps to use winpcap?

    - by Gtker
    Quoted from here: "If your program uses Win32 specific functions of WinPcap, remember to include WPCAP among the preprocessor definitions. If your program uses the remote capture capabilities of WinPcap, add HAVE_REMOTE among the preprocessor definitions. Do not include remote-ext.h directly in your source files." Has anyone managed to use WinPcap with CMake?
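
    A sketch of the two usual ways to add those preprocessor definitions, untested against WinPcap itself: either put add_definitions(-DWPCAP -DHAVE_REMOTE) in CMakeLists.txt, or pass the flags once when configuring from the shell:

        #!/bin/bash
        # The defines end up on every compile line for C and C++ targets.
        cmake -DCMAKE_C_FLAGS="-DWPCAP -DHAVE_REMOTE" \
              -DCMAKE_CXX_FLAGS="-DWPCAP -DHAVE_REMOTE" ..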

    Read the article

  • listing network shares with python

    - by Gearoid Murphy
    Hello. If I explicitly attempt to list the contents of a shared directory on a remote host using Python on a Windows machine, the operation succeeds; for example, the following snippet works fine:

        os.listdir("\\\\remotehost\\share")

    However, if I attempt to list the network drives/directories available on the remote host itself, Python fails, as in:

        os.listdir("\\\\remotehost")

    Does anyone know why this doesn't work? Any help or workaround is appreciated.

    Read the article

  • remsh/rsh error redirect problem

    - by soField
    I am using the following command on HP-UX:

        remsh opera -l myuser crontab -l > /opt1/exp_opera_crontab 2> /opt/a.log

    When I echo $? I get 0, because crontab -l executed successfully on the remote machine. But I don't have an /opt1 directory, so the export is not copied to my local machine in /opt1/exp_opera_crontab, and I don't get any error about this. When I run this remsh or rsh command, is there any way to identify errors on both the remote and the local machine and redirect them to my local machine?
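
    A hedged sketch of one way to surface both sides; the key point is that remsh's exit status only reflects whether the connection worked, so the remote command's own status has to be shipped back explicitly (paths are illustrative):

        #!/bin/bash
        # Quote the remote part so it runs remotely; append the remote status.
        remsh opera -l myuser 'crontab -l; echo RC=$?' > /tmp/crontab.raw 2> /opt/a.log
        if grep -q '^RC=0$' /tmp/crontab.raw; then
            sed '/^RC=/d' /tmp/crontab.raw > /opt1/exp_opera_crontab
        else
            echo 'remote crontab -l failed (details in /opt/a.log)' >&2
        fi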

    Read the article

  • Partition falsely recognized as RAW

    - by Paul Hiemstra
    On my 2 TB data disk I have two primary partitions: one of 1.6 TB for data storage in Linux (ext3) and one of 300 GB for some additional data storage for Windows. I run a dual-boot Windows 7 / Ubuntu 12.04 install. The issue I have is that if I start my computer into Windows 7, the two partitions on my 2 TB data drive are not recognized; instead, Windows 7 sees one 1 TB partition of type RAW. However, if I reboot into Linux and then back into Windows 7, the partitions are correctly recognized. Two screenshots (taken before and after the reboot into Linux) illustrate my situation. I have two questions: what could cause this behavior, and how can I solve it?

    Read the article
