Search Results

Search found 5474 results on 219 pages for 'tiered storage'.

  • Is keeping the primary hard disk as disk C: still relevant?

    - by Jeremy French
    Back in the day, floppy disks were A: (and B: if you were lucky), so when permanent storage came along, C: became the default for hard disks (as I remember it). Now that many computers no longer have floppy drives, is it possible to have your primary hard disk as A:? Is the convention outdated? Removable drives (like DVDs and flash readers) now seem to take lower precedence than permanent storage, so it is a bit of an oddity that floppy disks should keep the earlier letters.

    Read the article

  • Relationship between RAM & processor speed

    - by deostroll
    RAM is just used for temporary storage, but since this storage is the machine's main memory it is fast, and programs can easily read/write values in it. I've noticed that the more RAM there is, the less time applications take to load/execute. But doesn't this actually depend on the processor speed (the MHz or GHz value)? I am wondering what the science/relationship between processor speed and RAM is.

    Read the article

  • Windows DFS File System Clustering

    - by tearman
    We've attempted to set up a high-availability network for our file servers, and we want to do a DFS file-system cluster using the same back-end storage (our back-end storage has its own clustering mechanisms that it manages itself). The questions being: A. how would one go about setting up DFS clustering, and B. how can we get Windows to cooperate with multiple servers accessing the same SAN volumes?

    Read the article

  • Utility for easily disabling/enabling extra hard drives?

    - by SkippyFire
    I just got an Asus G60 laptop and will be installing an SSD as the primary drive, using the existing HDD as a storage drive. Is there a utility I can use to turn off/disconnect the storage drive when I'm not using it? Mainly, I want to be able to conserve power when I'm mobile, since the battery life of this laptop is pretty weak. Thanks in advance!

    Read the article

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4 500 GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images in KVM for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (backed by Rackspace) sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that allows me to instantly take the difference between two points in time (say, what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?
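
    The zfs send/receive idea the poster mentions would look roughly like this. This is only a sketch, assuming a pool named tank, a dataset vmstore, and an SSH-reachable host named backup that also runs ZFS; all of those names are made up, and with JungleDisk/Rackspace one would instead store the compressed stream as a file:

      # take point-in-time snapshots, then ship only the delta between them
      zfs snapshot tank/vmstore@0600
      zfs snapshot tank/vmstore@0700

      # incremental send of everything written between 06:00 and 07:00,
      # compressed in flight and applied on the remote side
      zfs send -i tank/vmstore@0600 tank/vmstore@0700 | bzip2 -c \
          | ssh backup "bzcat | zfs receive tank/vmstore-backup"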

    Read the article

  • Amazon EC2 root defaults on EBS

    - by CodeShining
    I'm trying to understand why, when launching a new instance, Amazon defaults to an 8 GB EBS root volume instead of instance storage. Why do they sell instance storage if it isn't even used to boot the base system? Is it safe to uncheck delete-on-termination, make the volume bigger (~50 GiB), and keep all my files on that EBS volume instead of creating a new one, so the data will persist and will also be usable by another instance?
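
    For what it's worth, the delete-on-termination flag and the root volume size can both be set at launch time rather than changed afterwards. A sketch with the AWS CLI; the AMI ID is a placeholder, and /dev/sda1 is only the typical root device name on older AMIs:

      aws ec2 run-instances --image-id ami-12345678 --instance-type m1.small \
          --block-device-mappings \
          '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":50,"DeleteOnTermination":false}}]'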

    Read the article

  • USB Flash not recognised by Windows and BIOS, but works fine in Linux

    - by bbalegere
    I have a Transcend JetFlash 2GB USB drive. It was working fine and I had been using it occasionally. All of a sudden it stopped working in all versions of Windows. The USB drive is also not recognised by the BIOS; it does not show in the list of bootable devices (it used to show up in that list earlier). However, the USB drive works fine in my Linux Mint 11 OS. Running dmesg gives this:

      [ 941.812192] usb 1-2: new high speed USB device using ehci_hcd and address 4
      [ 941.936178] usb 1-2: device descriptor read/64, error -71
      [ 942.164188] usb 1-2: device descriptor read/64, error -71
      [ 942.380189] usb 1-2: new high speed USB device using ehci_hcd and address 5
      [ 942.504138] usb 1-2: device descriptor read/64, error -71
      [ 942.732179] usb 1-2: device descriptor read/64, error -71
      [ 942.948154] usb 1-2: new high speed USB device using ehci_hcd and address 6
      [ 943.364134] usb 1-2: device not accepting address 6, error -71
      [ 943.476172] usb 1-2: new high speed USB device using ehci_hcd and address 7
      [ 943.892140] usb 1-2: device not accepting address 7, error -71
      [ 943.892191] hub 1-0:1.0: unable to enumerate USB device on port 2
      [ 944.296190] usb 2-2: new full speed USB device using uhci_hcd and address 3
      [ 944.438251] usb 2-2: not running at top speed; connect to a high speed hub
      [ 944.709928] usbcore: registered new interface driver uas
      [ 944.729999] Initializing USB Mass Storage driver...
      [ 944.730509] scsi6 : usb-storage 2-2:1.0
      [ 944.730908] usbcore: registered new interface driver usb-storage
      [ 944.730917] USB Mass Storage support registered.
      [ 945.736320] scsi 6:0:0:0: Direct-Access JetFlash Transcend 2GB 8.07 PQ: 0 ANSI: 2
      [ 945.744547] sd 6:0:0:0: Attached scsi generic sg1 type 0
      [ 945.753316] sd 6:0:0:0: [sdb] 3944448 512-byte logical blocks: (2.01 GB/1.88 GiB)
      [ 945.758274] sd 6:0:0:0: [sdb] Write Protect is off
      [ 945.758288] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
      [ 945.765167] sd 6:0:0:0: [sdb] No Caching mode page present
      [ 945.765181] sd 6:0:0:0: [sdb] Assuming drive cache: write through
      [ 945.784309] sd 6:0:0:0: [sdb] No Caching mode page present
      [ 945.784323] sd 6:0:0:0: [sdb] Assuming drive cache: write through
      [ 946.239512] sdb: sdb1
      [ 946.257279] sd 6:0:0:0: [sdb] No Caching mode page present
      [ 946.257292] sd 6:0:0:0: [sdb] Assuming drive cache: write through
      [ 946.257302] sd 6:0:0:0: [sdb] Attached SCSI removable disk

    Looks like there is something wrong with the USB drive; it is not recognised by any computer running Windows. Is there any way to fix this? Any idea why this problem occurred?

    Read the article

  • Optiplex can't find SATA III Controller

    - by Joel Rodgers
    I just purchased a HighPoint Rocket 620 storage controller (Serial ATA 600, 600 MB/s, OEM version) and an OWC SSD. For some reason, my Dell Optiplex 755 BIOS sees this card as a storage device installed in the x1 PCI Express slot, but I can't get it to boot from it. In fact, I don't even see the boot screen mentioned in the manual. Any help would be greatly appreciated. FYI, I tried every imaginable BIOS setting, including using legacy mode instead of AHCI.

    Read the article

  • How do I enable InnoDB on Ubuntu Server 10.04?

    - by Matt
    Here is my entire my.cnf:

      [client]
      port = 3306
      socket = /var/run/mysqld/mysqld.sock

      # Here is entries for some specific programs
      # The following values assume you have at least 32M ram
      # This was formally known as [safe_mysqld]. Both versions are currently parsed.
      [mysqld_safe]
      socket = /var/run/mysqld/mysqld.sock
      nice = 0

      [mysqld]
      key_buffer = 224M
      sort_buffer_size = 4M
      read_buffer_size = 4M
      read_rnd_buffer_size = 4M
      myisam_sort_buffer_size = 12M
      query_cache_size = 44M
      #
      # * Basic Settings
      #
      #
      # * IMPORTANT
      #   If you make changes to these settings and your system uses apparmor, you may
      #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
      #
      user = mysql
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /var/lib/mysql
      tmpdir = /tmp
      skip-external-locking
      #
      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      bind-address = 127.0.0.1
      #
      # * Fine Tuning
      #
      #key_buffer = 16M
      max_allowed_packet = 16M
      thread_stack = 192K
      thread_cache_size = 8
      # This replaces the startup script and checks MyISAM tables if needed
      # the first time they are touched
      myisam-recover = BACKUP
      #max_connections = 100
      #table_cache = 64
      #thread_concurrency = 10
      #
      # * Query Cache Configuration
      #
      query_cache_limit = 1M
      #query_cache_size = 16M
      #
      # * Logging and Replication
      #
      # Both location gets rotated by the cronjob.
      # Be aware that this log type is a performance killer.
      # As of 5.1 you can enable the log at runtime!
      #general_log_file = /var/log/mysql/mysql.log
      #general_log = 1
      log_error = /var/log/mysql/error.log
      # Here you can see queries with especially long duration
      #log_slow_queries = /var/log/mysql/mysql-slow.log
      #long_query_time = 2
      #log-queries-not-using-indexes
      #
      # The following can be used as easy to replay backup logs or for replication.
      # note: if you are setting up a replication slave, see README.Debian about
      # other settings you may need to change.
      #server-id = 1
      #log_bin = /var/log/mysql/mysql-bin.log
      expire_logs_days = 10
      max_binlog_size = 100M
      #binlog_do_db = include_database_name
      #binlog_ignore_db = include_database_name
      #
      # * InnoDB
      #
      # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
      # Read the manual for more InnoDB related options. There are many!
      #
      # * Security Features
      #
      # Read the manual, too, if you want chroot!
      # chroot = /var/lib/mysql/
      #
      # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
      #
      # ssl-ca=/etc/mysql/cacert.pem
      # ssl-cert=/etc/mysql/server-cert.pem
      # ssl-key=/etc/mysql/server-key.pem

      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 16M

      [mysql]
      #no-auto-rehash # faster start of mysql but no tab completition

      [isamchk]
      key_buffer = 16M

      #
      # * IMPORTANT: Additional settings that can override those from this file!
      #   The files must end with '.cnf', otherwise they'll be ignored.
      #
      !includedir /etc/mysql/conf.d/

    And here is my SHOW ENGINES output. I have no idea what I need to do to enable InnoDB:

      show engines;
      +------------+---------+----------------------------------------------------------------+--------------+------+------------+
      | Engine     | Support | Comment                                                        | Transactions | XA   | Savepoints |
      +------------+---------+----------------------------------------------------------------+--------------+------+------------+
      | MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         |
      | MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
      | BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
      | CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         |
      | MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
      | FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
      | ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         |
      +------------+---------+----------------------------------------------------------------+--------------+------+------------+
      7 rows in set (0.00 sec)
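
    Nothing in this my.cnf disables InnoDB, so on Ubuntu 10.04's MySQL 5.1 the usual suspects are a skip-innodb line pulled in from /etc/mysql/conf.d/, or InnoDB failing to initialize at startup (often because the ib_logfile* files no longer match a changed log-file size). A sketch of the checks, assuming the default Debian/Ubuntu paths:

      # look for anything that disables InnoDB
      grep -ri innodb /etc/mysql/

      # see why InnoDB failed to start, if it tried
      grep -i innodb /var/log/mysql/error.log

      # if the log files are the problem: stop mysql, move them aside, restart
      sudo service mysql stop
      sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
      sudo mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
      sudo service mysql start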

    Read the article

  • Can I find the web hosting company from an IP address?

    - by ufk
    Hiya. I really hope this question suits Server Fault; if not, my apologies! I have an IP address; is there a way to find the web hosting service that this IP address belongs to? I tried using whois and traceroute, but no luck so far. The case is that my friend bought a domain and storage several years ago and he can't remember where he bought the storage from. Thanks!
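
    The usual lookups go something like this (sketched with a documentation IP as a stand-in; the real IP goes in its place). Reverse DNS often reveals the host's naming scheme, and the registry whois shows who the netblock is allocated to, which is frequently the hosting company:

      dig -x 203.0.113.10 +short
      whois 203.0.113.10 | grep -iE 'orgname|org-name|netname|descr'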

    Read the article

  • Freeware (preferably open-source) tool for creating multi-file spanning archives as a self-merging SFX

    - by Lockszmith
    I have a large file I want to transfer using either Internet storage hosting, DVD-Rs or USB storage, which is sometimes limited to FAT file systems (for example: mobile phones). What I'm basically looking for is a tool that creates multiple files/volumes (less than 2 GB each, FAT's file-size limit) which are packed with a self-extracting executable. Currently the only tool I've found that does this is WinRAR, but that's shareware, not free. Is there any free, preferably open-source, tool that does this? Thanks in advance

    Read the article

  • How to attach files to an email in Windows Phone 7.5 Mango

    - by Vaibhav Garg
    In the default email client in Windows Phone 7.5 Mango, how can arbitrary files (.zip, .mp3, .txt, .pdf, etc.) be attached? As the storage is sandboxed, a file handler can implement hooks into the email client, as MS Office does and Adobe Reader doesn't, but the email client cannot access files in the phone's storage. Is there a way, or a workaround? In my usage pattern I tend to send a lot of PDFs, and I am unable to do that!

    Read the article

  • Interview question: Develop an application that can display that the trial period has expired after 30 days, without external storage

    - by Algorist
    Hi, I saw this question in a forum about how an application can be developed that keeps track of the installation date and shows the trial period as expired after 30 days of usage. The only constraint is not to use external storage of any kind. Question: how to achieve this? Thanks, Bala. -- Edit: Anyway, I will state the question clearly: "external storage" means don't use any kind of storage like a file, the registry, the network or anything else. You only have your program.
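
    The classic answer is that the program's only writable storage is its own file, so it stamps the first-run date into itself. A sketch in Python, purely illustrative: the marker is built by concatenation so the literal bytes don't already occur in the source, and anyone who edits the file defeats the scheme.

      import os, sys, time

      MARKER = b"#INSTALL" + b":"   # concatenated so it isn't in the source text
      TRIAL_SECONDS = 30 * 24 * 60 * 60

      def first_run_time(path):
          with open(path, "rb") as f:
              data = f.read()
          idx = data.rfind(MARKER)
          if idx == -1:
              now = int(time.time())
              # first run: append the date to our own file as a trailing comment
              with open(path, "ab") as f:
                  f.write(b"\n" + MARKER + str(now).encode())
              return now
          return int(data[idx + len(MARKER):].split()[0].decode())

      if __name__ == "__main__":
          started = first_run_time(os.path.abspath(__file__))
          if time.time() - started > TRIAL_SECONDS:
              sys.exit("Trial period has expired")
          print("Trial is active")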

    Read the article

  • Splitting assemblies - finding the balance (avoiding overkill)

    - by M.A. Hanin
    I'm writing a wide component infrastructure to be used in my projects. Since not all projects will require every component created, I've been thinking of splitting the components into discrete assemblies, so that every application developed is only deployed with the required assemblies. I assume that creating an assembly has some storage overhead (the assembly's code, wrapping whatever is inside). Therefore, there must be some limit to the advantage gained by splitting an assembly: a certain point where splitting the assembly is worse than keeping it united (storage-wise and performance-wise). Now, here is the question: how do I know when splitting an assembly is overkill? P.S. I guess there are other overheads to assembly splitting aside from the storage overhead. If anyone can point out these overheads, it would be much appreciated.

    Read the article

  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data usage I have created a 'database' class that, in the final application, uses SQLite storage (this is based on Xcode template code). For unit tests I have created a separate init method for the 'database' that uses in-memory storage (the store URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit test target, the tests fail in the expected place with the message:

      executeFetchRequest:error: A fetch request must have an entity.

    If I add the model file to the test target, the tests execute as expected (the Core Data part looks OK), but after the tests run I get:

      RunIPhoneUnitTest.sh: line 123: 9487 Segmentation fault "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents
      Command /bin/sh failed with exit code 139

    This problem does not look directly related to Core Data, but it only happens if the model file is added to the target. Any pointers on resolving this issue would be appreciated!

    Read the article

  • Python optparse: how to include additional info in the usage output?

    - by CarpeNoctem
    Using Python's optparse module, I would like to add extra example lines below the regular usage output. My current print_help() output looks like this:

      usage: check_dell.py [options]

      options:
        -h, --help     show this help message and exit
        -s, --storage  checks virtual and physical disks
        -c, --chassis  checks specified chassis components

    I would like it to include usage examples for the less *nix-literate users at my work, something like this:

      usage: check_dell.py [options]

      options:
        -h, --help     show this help message and exit
        -s, --storage  checks virtual and physical disks
        -c, --chassis  checks specified chassis components

      Examples:
        check_dell -c all
        check_dell -c fans memory voltage
        check_dell -s

    How would I accomplish this? Which optparse options allow for that? Current code:

      import optparse

      def main():
          parser = optparse.OptionParser()
          parser.add_option('-s', '--storage', action='store_true', default=False,
                            help='checks virtual and physical disks')
          parser.add_option('-c', '--chassis', action='store_true', default=False,
                            help='checks specified chassis components')
          (opts, args) = parser.parse_args()
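
    The epilog argument to OptionParser is the intended hook for this, but the default formatter re-wraps the epilog into a single paragraph, which mangles example lines. A small formatter subclass preserves the line breaks; a sketch:

      import optparse

      class RawEpilogFormatter(optparse.IndentedHelpFormatter):
          # keep the epilog exactly as written instead of re-wrapping it
          def format_epilog(self, epilog):
              return "\n" + epilog if epilog else ""

      EXAMPLES = """Examples:
        check_dell -c all
        check_dell -c fans memory voltage
        check_dell -s
      """

      parser = optparse.OptionParser(epilog=EXAMPLES,
                                     formatter=RawEpilogFormatter())
      parser.add_option('-s', '--storage', action='store_true', default=False,
                        help='checks virtual and physical disks')
      parser.add_option('-c', '--chassis', action='store_true', default=False,
                        help='checks specified chassis components')
      parser.print_help()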

    Read the article

  • Remote DocumentRoot in Apache gives a 404

    - by kshouler
    I have the following specified in my httpd.conf, but I get a 404 when attempting to connect to the server from another machine. If I set the DocumentRoot to the default htdocs directory, everything works fine. (Note: I've also tried replacing the "//storage/data1" part of the path with the network drive letter, "U:".)

      ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
      DocumentRoot "//storage/data1/Engineering/Product Development"

      <Directory "//storage/data1/Engineering/Product Development">
          Options Indexes FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>

    Read the article

  • ClearCase option for moving a view from one host path to another

    - by wrapperm
    Hi all, I have created a ClearCase dynamic view for my development, named "view1". I mistakenly selected a local PC on my network (made sharable by the PC's owner) as the view storage location, when I was supposed to select a server. Now the issue is that I have done a lot of development with this view and it holds plenty of derived objects and view-private files, so I'm ruling out the option of deleting the view from the PC's local storage (host path) and creating another view on the server with the same config spec. Please let me know if there is any method of editing the view properties (or doing something else) by which I could move the view to the server, with all the derived objects and view-private files retained. Thanks in advance, Rahamath

    Read the article

  • Why SELECT N + 1 with no foreign keys and LINQ?

    - by Daniel Flöijer
    I have a database that unfortunately has no real foreign keys (I plan to add them later, but prefer not to do it right now, to make migration easier). I have manually written domain objects that map to the database to set up relationships (following this tutorial: http://www.codeproject.com/Articles/43025/A-LINQ-Tutorial-Mapping-Tables-to-Objects), and I've finally gotten the code to run properly. However, I've noticed I now have the SELECT N+1 problem. Instead of selecting all Products, they're selected one by one with this SQL:

      SELECT [t0].[id] AS [ProductID], [t0].[Name], [t0].[info] AS [Description]
      FROM [products] AS [t0]
      WHERE [t0].[id] = @p0
      -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [65]

    Controller:

      public ViewResult List(string category, int page = 1)
      {
          var cat = categoriesRepository.Categories
              .SelectMany(c => c.LocalizedCategories)
              .Where(lc => lc.CountryID == 1)
              .First(lc => lc.Name == category)
              .Category;
          var productsToShow = cat.Products;
          var viewModel = new ProductsListViewModel
          {
              Products = productsToShow.Skip((page - 1) * PageSize).Take(PageSize).ToList(),
              PagingInfo = new PagingInfo
              {
                  CurrentPage = page,
                  ItemsPerPage = PageSize,
                  TotalItems = productsToShow.Count()
              },
              CurrentCategory = cat
          };
          return View("List", viewModel);
      }

    Since I wasn't sure if my LINQ expression was correct, I tried just using this, but I still got N+1:

      var cat = categoriesRepository.Categories.First();

    Domain objects:

      [Table(Name = "products")]
      public class Product
      {
          [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
          public int ProductID { get; set; }

          [Column]
          public string Name { get; set; }

          [Column(Name = "info")]
          public string Description { get; set; }

          private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>();

          [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "productId", ThisKey = "ProductID")]
          private ICollection<ProductCategory> ProductCategories
          {
              get { return _productCategories; }
              set { _productCategories.Assign(value); }
          }

          public ICollection<Category> Categories
          {
              get { return (from pc in ProductCategories select pc.Category).ToList(); }
          }
      }

      [Table(Name = "products_menu")]
      class ProductCategory
      {
          [Column(IsPrimaryKey = true, Name = "products_id")]
          private int productId;

          private EntityRef<Product> _product = new EntityRef<Product>();

          [System.Data.Linq.Mapping.Association(Storage = "_product", ThisKey = "productId")]
          public Product Product
          {
              get { return _product.Entity; }
              set { _product.Entity = value; }
          }

          [Column(IsPrimaryKey = true, Name = "products_types_id")]
          private int categoryId;

          private EntityRef<Category> _category = new EntityRef<Category>();

          [System.Data.Linq.Mapping.Association(Storage = "_category", ThisKey = "categoryId")]
          public Category Category
          {
              get { return _category.Entity; }
              set { _category.Entity = value; }
          }
      }

      [Table(Name = "products_types")]
      public class Category
      {
          [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
          public int CategoryID { get; set; }

          private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>();

          [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "categoryId", ThisKey = "CategoryID")]
          private ICollection<ProductCategory> ProductCategories
          {
              get { return _productCategories; }
              set { _productCategories.Assign(value); }
          }

          public ICollection<Product> Products
          {
              get { return (from pc in ProductCategories select pc.Product).ToList(); }
          }

          private EntitySet<LocalizedCategory> _LocalizedCategories = new EntitySet<LocalizedCategory>();

          [System.Data.Linq.Mapping.Association(Storage = "_LocalizedCategories", OtherKey = "CategoryID")]
          public ICollection<LocalizedCategory> LocalizedCategories
          {
              get { return _LocalizedCategories; }
              set { _LocalizedCategories.Assign(value); }
          }
      }

      [Table(Name = "products_types_localized")]
      public class LocalizedCategory
      {
          [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
          public int LocalizedCategoryID { get; set; }

          [Column(Name = "products_types_id")]
          private int CategoryID;

          private EntityRef<Category> _Category = new EntityRef<Category>();

          [System.Data.Linq.Mapping.Association(Storage = "_Category", ThisKey = "CategoryID")]
          public Category Category
          {
              get { return _Category.Entity; }
              set { _Category.Entity = value; }
          }

          [Column(Name = "country_id")]
          public int CountryID { get; set; }

          [Column]
          public string Name { get; set; }
      }

    I've tried commenting out everything in my View, so nothing there seems to influence this. The ViewModel is as simple as it looks, so it shouldn't be anything there. When reading this (http://www.hookedonlinq.com/LinqToSQL5MinuteOVerview.ashx) I started suspecting it might be because I have no real foreign keys in the database, and that I might need to use manual joins in my code. Is that correct? How would I go about it? Should I remove the mapping code from my domain model, or is it something I need to add/change? Note: I've stripped out parts of the code that I don't think are relevant, to make this question cleaner. Please let me know if something is missing.
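
    One standard first step against N+1 in LINQ to SQL (and it works whether or not the database itself has foreign keys, since the associations here are declared in the mapping attributes) is to hand the DataContext a DataLoadOptions so the associations are eager-loaded. A sketch against the model above; the DataContext name is made up, and the private ProductCategories associations would need to be made accessible for this to compile:

      using System.Data.Linq;

      var db = new ProductsDataContext();
      var options = new DataLoadOptions();

      // fetch the join rows and their Products together with each Category,
      // instead of issuing one query per row on first access
      options.LoadWith<Category>(c => c.ProductCategories);
      options.LoadWith<ProductCategory>(pc => pc.Product);

      db.LoadOptions = options;  // must be set before the first query runs

    A separate multiplier worth noting: the Products and Categories helper properties run their query (and a ToList()) on every access, so the associations are re-enumerated each time they are touched.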

    Read the article

  • Can I write to different jetty databases using JPA with the same "entity class"?

    - by Per
    I am using Java persistence and its EntityManager class, and have it set up to store a class object that shall be written to the database. My problem is that I want to write to different databases using the same storage class. My solution was to write a StorageManagerFactory that holds a Map of all the EntityManagers. The solution looked good until I looked at the databases and realised that all information (regardless of the Map, which returns the correct value) was written to the same database (one of those initialised in the Map). So my question is: can I write to different databases using JPA with the same storage class (the class holding the structure of my database)? Thanks
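
    The usual culprit here is every EntityManager in the Map having been created from the same persistence unit. Each EntityManagerFactory is bound to exactly one unit in persistence.xml, so writing to two databases needs two units with different JDBC URLs. A sketch; the unit names "dbA" and "dbB" would be hypothetical entries in persistence.xml:

      import java.util.HashMap;
      import java.util.Map;
      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      public class StorageManagerFactory {
          private final Map<String, EntityManagerFactory> factories = new HashMap<>();

          // one factory per persistence unit ("dbA", "dbB", ...), created lazily;
          // entities persisted through the returned manager land in that unit's database
          public EntityManager managerFor(String unitName) {
              return factories
                  .computeIfAbsent(unitName, Persistence::createEntityManagerFactory)
                  .createEntityManager();
          }
      }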

    Read the article

  • Apache redirect when a user's home directory is completely empty

    - by Scott M
    I work for an ISP, and I have a server with thousands of users, each with 10 MB of free storage. They get this free storage with every e-mail account they have with us. An example of a user's storage address: http://users.example.com/~username/ One problem I can see is someone scanning the server for usernames to see what accounts are available, basically getting a list of all our customers' valid e-mail addresses. This would be very, very bad. So I want to redirect to our homepage if someone comes across a user's account that is empty (I'd say 90% of them are completely empty). I also do not want to simply -Indexes them and use a custom 403, because the few customers that do use them want +Indexes. I know I can always just tell the customers to put an .htaccess file in their directory with Options +Indexes if they want directory listing, but that's a last resort. How can I make it pretty much impossible to tell which accounts are on the server but not in use at all?
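
    One hedged approach relies on mod_dir's documented fallback: DirectoryIndex entries are tried in order, and a server-relative URL as the last entry is served when no index file exists in the directory. Listings stay off by default, an empty home directory serves a page that redirects to the homepage, and the customers who want listings can still switch them on per directory. The paths here are illustrative:

      <Directory /home/*/public_html>
          Options -Indexes
          # let users re-enable listings with "Options +Indexes" in .htaccess
          AllowOverride Options Indexes
          # no index file in the user's directory? fall through to a site-wide
          # page that redirects to the homepage
          DirectoryIndex index.html index.htm /goto-homepage.html
      </Directory>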

    Read the article

  • Payment Processors - What do I need to know if I want to accept credit cards on my website?

    - by Michael Pryor
    This question talks about different payment processors and what they cost, but I'm looking for the answer to: what do I need to do if I want to accept credit card payments? Assume I need to store credit card numbers for customers, so the obvious solution of relying on the credit card processor to do the heavy lifting is not available.

    PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them? And what about the vendors, like Visa, who have their own best practices? Do I need keyfob access to the machine? What about physically protecting it from hackers in the building? Or even, what if someone got their hands on the backup files with the SQL Server data files on them? What about backups? Are there other physical copies of that data around?

    Tip: if you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used, i.e. they charge you more for cards with big rewards attached to them. Interchange-plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range, and Discover could be as low as 1%. Visa/MC is in the 2% range.) This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed).

    This blog post gives a complete rundown of handling credit cards (specifically for the UK).

    Read the article

  • What algorithms compute directions from point A to point B on a map?

    - by A. Rex
    How do map providers (such as Google or Yahoo! Maps) suggest directions? I mean, they probably have real-world data in some form, certainly including distances but perhaps also things like driving speeds, presence of sidewalks, train schedules, etc. But suppose the data were in a simpler format, say a very large directed graph with edge weights reflecting distances. I want to be able to quickly compute directions from one arbitrary point to another. Sometimes these points will be close together (within one city), while sometimes they will be far apart (cross-country).

    Graph algorithms like Dijkstra's algorithm will not work because the graph is enormous. Luckily, heuristic algorithms like A* will probably work. However, our data is very structured, and perhaps some kind of tiered approach might work? (For example, store precomputed directions between certain "key" points far apart, as well as some local directions. Then directions between two far-away points will involve local directions to a key point, global directions to another key point, and then local directions again.) What algorithms are actually used in practice?

    PS. This question was motivated by finding quirks in online mapping directions. Contrary to the triangle inequality, sometimes Google Maps thinks that X-Z takes longer and is farther than using an intermediate point, as in X-Y-Z. But maybe their walking directions optimise for another parameter, too?

    PPS. Here's another violation of the triangle inequality that suggests (to me) that they use some kind of tiered approach: X-Z versus X-Y-Z. The former seems to use the prominent Boulevard de Sebastopol even though it's slightly out of the way. (Edit: this example doesn't work anymore, but did at the time of the original post. The one above still works as of early November 2009.)
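
    For the graph-with-edge-weights version of the question, A* itself is short enough to sketch. This is a generic implementation with straight-line distance as the admissible heuristic; the graph layout and coordinate scheme are made up:

      import heapq, math

      def a_star(graph, coords, start, goal):
          """graph: {node: [(neighbor, edge_cost), ...]}; coords: {node: (x, y)}."""
          def h(n):  # straight-line distance never overestimates road distance
              (x1, y1), (x2, y2) = coords[n], coords[goal]
              return math.hypot(x2 - x1, y2 - y1)

          frontier = [(h(start), 0.0, start, [start])]
          best_g = {start: 0.0}
          while frontier:
              f, g, node, path = heapq.heappop(frontier)
              if node == goal:
                  return g, path
              for neighbor, cost in graph.get(node, ()):
                  ng = g + cost
                  if ng < best_g.get(neighbor, float("inf")):
                      best_g[neighbor] = ng
                      heapq.heappush(frontier,
                                     (ng + h(neighbor), ng, neighbor, path + [neighbor]))
          return None  # goal unreachable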

    Read the article

  • The Oracle Enterprise Linux Software and Hardware Ecosystem

    - by sergio.leunissen
    It's been nearly four years since we launched the Unbreakable Linux support program and, with it, the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux. As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing.

    Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities.

    The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via [email protected]

    Chip Manufacturers: Intel, Intel Enabled Server Acceleration Alliance, AMD

    Server vendors: Cisco Unified Computing System, Dawning, Dell, Egenera, Fujitsu, HP, Huawei, IBM, NEC, Sun/Oracle

    Storage Systems, Volume Management and File Systems: 3Par, Compellent, EMC VPLEX, FalconStor, Fusion-io, Hitachi Data Systems, HP Storage Array Systems, Lustre, Network Appliance, OCFS2, PillarData, Symantec Veritas Storage Foundation

    Networking (Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand): Brocade, Emulex, Mellanox, QLogic, Voltaire

    SOA and Middleware: ActiveState ActivePerl and ActivePython, Tibco, Zend

    Backup, Recovery & Replication: Arkeia Network Backup Suite, BakBone NetVault, CommVault Simpana 8, EMC Networker and Replication Manager, FalconStor Continuous Data Protector, HP Data Protector, NetApp Snapmanager, Quest LiteSpeed Engine, Steeleye Data Replication and Disaster Recovery, Symantec NetBackup, Veritas Volume Replicator and Backup Exec, Zmanda Amanda Enterprise

    Data Center Automation: BMC, CA Unicenter, HP Server Automation (formerly Opsware) and System Management Homepage, Oracle Enterprise Manager Ops Center, Quest Vizioncore vFoglight Pro, TeamQuest Manager

    Clustering & High Availability: FUJITSU x10sure, NEC Express Cluster X, Steeleye Lifekeeper, Symantec Cluster Server, Univa UniCluster

    Virtualization Platforms and Cloud Providers: Amazon EC2, Citrix XenServer, Rackspace Cloud, VirtualBox, VMWare ESX

    Security Management: ArcSight Enterprise Security Manager and Logger, CA Access Control, Centrify Suite, Ecora Auditor, FoxT Manager, Likewise Unix Account Management, Lumension Endpoint Management and Security Suite, QualysGuard Suite, Quest Privilege Manager, McAfee Application Control, Change Control, Integrity Monitor, Integrity Control and PCI Pro, Solidcore S3, Symantec Enterprise Security Manager (ESM), Tripwire, Trusted Computer Solutions

    Read the article
