Search Results

Search found 8770 results on 351 pages for 'sun zfs storage 7000 appl'.

  • Template compilation error in Sun Studio 12

    - by Jagannath
    We are migrating to Sun Studio 12.1, and with the new compiler [ CC: Sun C++ 5.10 SunOS_sparc 2009/06/03 ] I get a compilation error on code that compiled fine with the earlier Sun compiler [ CC: Sun WorkShop 6 update 2 C++ 5.3 2001/05/15 ]. This is the compilation error:

        "Sample.cc": Error: Could not find a match for LoopThrough(int[2]) needed in main().
        1 Error(s) detected.
        * Error code 1

    CODE:

        #include <iostream>

        #define PRINT_TRACE(STR) \
            std::cout << __FILE__ << ":" << __LINE__ << ":" << STR << "\n";

        template<size_t SZ>
        void LoopThrough(const int (&Item)[SZ])
        {
            PRINT_TRACE("Specialized version");
            for (size_t index = 0; index < SZ; ++index)
            {
                std::cout << Item[index] << "\n";
            }
        }

        /*
        template<typename Type, size_t SZ>
        void LoopThrough(const Type (&Item)[SZ])
        {
            PRINT_TRACE("Generic version");
        }
        */

        int main()
        {
            {
                int arr[] = { 1, 2 };
                LoopThrough(arr);
            }
        }

    If I uncomment the generic version, the code compiles fine and the generic version is called. I don't see this problem with MSVC 2010 with extensions disabled, nor in the same test case on ideone: there, the specialized version of the function is called. So the question is: is this a bug in the Sun compiler? If yes, how can we file a bug report?
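    If the deduction itself is what the Sun front end rejects, one possible workaround (a minimal sketch, untested on Sun Studio, so treat it as an assumption) is to supply the template argument explicitly so the compiler never has to deduce SZ from the array reference:

        // Hypothetical workaround: name the array size explicitly instead of
        // relying on template argument deduction from the array reference.
        int arr[] = { 1, 2 };
        LoopThrough<sizeof arr / sizeof arr[0]>(arr);  // equivalent to LoopThrough<2>(arr)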

    Read the article

  • Azure Blobs - ArgumentNullException when calling UploadFile()

    - by Ariel
    I’m getting the following exception when trying to upload a file with this code:

        string encodedUrl = "videos/Sample.mp4";
        CloudBlockBlob encodedVideoBlob = blobClient.GetBlockBlobReference(encodedUrl);
        Log(string.Format("Got blob reference for {0}", encodedUrl), EventLogEntryType.Information);
        encodedVideoBlob.Properties.ContentType = contentType;
        encodedVideoBlob.Metadata[BlobProperty.Description] = description;
        encodedVideoBlob.UploadFile(localEncodedBlobPath);

    I see the "Got blob reference" message, so I assume the reference resolves correctly.

        Void Run() C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs (40)
        System.ArgumentNullException: Value cannot be null.
        Parameter name: value
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFile(String fileName, BlobRequestOptions options)
           at EncoderWorkerRole.WorkerRole.ProcessJobOutput(IJob job, String videoBlobToEncodeUrl) in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 144
           at EncoderWorkerRole.WorkerRole.Run() in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 40

    Interestingly, the same snippet works correctly when I run it from an on-premises server, i.e., outside of Azure. Ideas welcome, thanks!
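    Since the same code works on-premises, one thing worth ruling out (an assumption, not a confirmed diagnosis) is that contentType or description is only null when running in Azure; the storage client turns these values into request headers, and a null value there is a plausible source of an ArgumentNullException whose parameter name is "value". A defensive sketch:

        // Hypothetical guard: skip header-backed properties when the value is null,
        // to test whether a null contentType/description triggers the exception.
        if (!string.IsNullOrEmpty(contentType))
            encodedVideoBlob.Properties.ContentType = contentType;
        if (!string.IsNullOrEmpty(description))
            encodedVideoBlob.Metadata[BlobProperty.Description] = description;
        encodedVideoBlob.UploadFile(localEncodedBlobPath);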

    Read the article

  • How do you create large, growable, shared filesystems on Linux at AWS?

    - by Reece
    What are acceptable/reasonable/best ways to provide large, growable, shared storage at AWS, exposed as a single filesystem? We're currently making 1 TB EBS volumes roughly biweekly and NFS-exporting them with no_subtree_check and nohide. In this setup, distinct exports appear under a single mount on the client. This arrangement does not scale well. The options we've considered:

    - LVM2 with ext4: resize2fs is too slow (the grow workflow this implies is sketched below).
    - Btrfs on Linux: not obviously ready for prime time yet.
    - ZFS on Linux: not obviously ready for prime time yet (although LLNL uses it).
    - ZFS on Solaris: the future of this combo is uncertain (to me), and it adds a new OS to the mix.
    - glusterfs: heard mostly good things, but two scary (and maybe old?) stories.

    The ideal solution would provide sharing, a single fs view, easy expandability, snapshots, and replication. Thanks for sharing ideas and experience.
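    For context, the LVM2+ext4 grow workflow amounts to the following sketch (the volume group and LV names are hypothetical); the resize2fs step is the one the poster found too slow on multi-terabyte filesystems:

        # grow the logical volume, then grow the filesystem into the new space
        lvextend -L +1T /dev/vg_data/lv_shared
        resize2fs /dev/vg_data/lv_shared    # online ext4 grow; slow at this scale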

    Read the article

  • Solaris Administration Web GUI?

    - by Robert C
    I recently installed the Solaris 11 x86 text install (http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html?ssSourceSiteId=ocomen) to be used as a file server running ZFS. I noticed that I'm given the bare minimum in terms of packages. Is there an official Oracle web GUI for managing ZFS? I ran netstat, and it doesn't appear that any webserver is installed and listening. I saw something from a couple of years ago, but apparently it's not packaged or maintained anymore (https://blogs.oracle.com/talley/entry/manage_zfs_from_your_browser). I tried pkg install network-console, but it says that the package isn't available for my platform. Any ideas? I'd like to stick with Oracle Solaris instead of the open-source alternatives, if possible.

    Read the article

  • Which is faster for read access on EC2; local drive or EBS?

    - by Phillip Oldham
    Which is faster for read access on an EC2 instance: the "local" drive or an attached EBS volume? I have some data that needs to be persisted, so I have placed it on an EBS volume. I'm using OpenSolaris, so the volume has been attached as a ZFS pool. However, I have a large chunk of EC2 disk space that's going to go unused, so I'm considering repurposing it as a ZFS cache volume. But I don't want to do this if that disk access would be slower than the EBS volume's, as it would potentially have a detrimental effect.
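    For reference, attaching the unused local storage as a ZFS read cache (L2ARC) is a one-liner; the pool and device names in this sketch are hypothetical:

        # add the ephemeral device to the pool as a read cache
        zpool add tank cache c7d0
        zpool status tank    # the device now appears under a "cache" section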

    Read the article

  • recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to as many as 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do a live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with M$ SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store them on an external hard drive on the network. I then use the JungleDisk service (backed by Rackspace) to sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only moves a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would, of course, be something that lets me instantly take the difference between two points in time (say, what was written from 6 am to 7 am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with a pipe of the data through bzip2 or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e., NOT having separate storage servers) if at all possible.

    I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?
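    For reference, the ZFS send/receive pattern described above looks roughly like this sketch (pool, dataset, snapshot, and host names are all hypothetical):

        # snapshot now, then ship only the delta since the previous snapshot
        zfs snapshot tank/vms@0700
        zfs send -i tank/vms@0600 tank/vms@0700 | bzip2 | \
            ssh backuphost "bzcat | zfs receive backup/vms"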

    Read the article

  • eclipse IDE won't start in ubuntu 10.04 64bit with sun-java

    - by aeischeid
    I get this error when I try to run Eclipse from the CLI:

        ** (Eclipse:2318): CRITICAL **: menu_proxy_module_load: assertion `dbusproxy != NULL' failed
        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGSEGV (0xb) at pc=0x00007f34511a4e74, pid=2318, tid=139863603410704
        #
        # JRE version: 6.0_18-b18
        # Java VM: OpenJDK 64-Bit Server VM (14.0-b16 mixed mode linux-amd64 )
        # Derivative: IcedTea6 1.8
        # Distribution: Ubuntu lucid (development branch), package 6b18-1.8-0ubuntu1
        # Problematic frame:
        # C  [libglib-2.0.so.0+0x41e74]  g_main_context_prepare+0x164
        #
        # An error report file with more information is saved as:
        # /home/aaron/opt/eclipse/hs_err_pid2318.log
        #
        # If you would like to submit a bug report, please include instructions how to
        # reproduce the bug and visit: https://bugs.launchpad.net/ubuntu/+source/openjdk-6/
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
        #

    I have sun-java installed and set in my PATH:

        ~: echo $JAVA_HOME
        /usr/lib/jvm/java-6-sun/
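    Worth noting: the crash banner reports an OpenJDK/IcedTea VM even though JAVA_HOME points at sun-java, and Eclipse does not consult JAVA_HOME when picking its VM. A hedged fix is to pin the VM in eclipse.ini (the path below assumes the Ubuntu sun-java6 layout implied by the JAVA_HOME above); -vm and its value must be on separate lines and must appear before -vmargs:

        -vm
        /usr/lib/jvm/java-6-sun/bin/java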

    Read the article

  • Why did my zpool replace never finish and what should I do now?

    - by Josh
    I have a ZFS zpool with two disks in a mirror configuration, da0 and da1. da1 failed, so I replaced it with da2 using:

        zpool replace BearCow da1 da2

    This ran for a few hours, during which zpool status showed that the array was being resilvered. When that finished, zpool status showed that the resilver was completed, but the array was still degraded. I tried a zpool scrub and a zpool clear, but the array still shows as degraded:

        [root@chef] ~# zpool status BearCow
          pool: BearCow
         state: DEGRADED
         scrub: scrub completed after 0h20m with 0 errors on Tue Oct 9 16:13:27 2012
        config:

            NAME           STATE     READ WRITE CKSUM
            BearCow        DEGRADED     0     0     0
              mirror       DEGRADED     0     0     0
                da0        ONLINE       0     0     0
                replacing  DEGRADED     0     0     0
                  da1      OFFLINE      0     0     0
                  da2      ONLINE       0     0     0

        errors: No known data errors

    I can't zpool replace BearCow da1 da2 anymore, because da2 is already a member of BearCow. This is FreeBSD (FreeNAS) running ZFS pool version 15. How do I get my array to show as healthy again?
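    The usual way to finish a replace stuck in this state (hedged: this is the standard remedy when the resilver has completed but the old disk is still attached, and it is non-destructive) is to detach the old disk explicitly, which collapses the "replacing" group and leaves the plain mirror behind:

        zpool detach BearCow da1
        zpool status BearCow    # the mirror should now show only da0 and da2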

    Read the article

  • Nexenta under KVM?

    - by Nick
    I have an Ubuntu server running KVM. I'd like to get the benefits of ZFS, so I was thinking of installing a virtual machine under KVM running Nexenta (or NexentaStor), allowing that virtual machine raw access to a couple of physical hard disks, and then having it share its filesystem over NFS so that Ubuntu can access it. I've never set up KVM so that the virtual machine has access to physical drives. Does this sound feasible, and is there anything I need to watch out for? Has someone already documented something like this? Do Nexenta/ZFS function basically as well in a virtual environment as they would running on bare metal? I can take a small performance hit, but I don't want it to be less reliable because of the virtualization. Thanks.
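    For the raw-access part, libvirt can pass a whole block device through to a guest; a minimal sketch (the domain name and disk path are hypothetical), using a stable /dev/disk/by-id path so device renumbering on the host can't swap disks underneath ZFS:

        # attach a physical disk to the Nexenta guest as a raw virtio block device
        virsh attach-disk nexenta-vm \
            /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL vdb \
            --targetbus virtio --persistent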

    Read the article

  • How to configure HA iSCSI for Solaris 10

    - by Noah
    BACKGROUND: We have a StarWind NAS that we are currently using for high-availability storage on our Windows network. StarWind has mirrored drives and multiple IP paths, which the Windows server combines into one HA disk store.

    QUESTION: How do I accomplish the same thing under Solaris 10? I've looked at ZFS, but the documentation seems to indicate that ZFS wants to do its own RAID/mirroring. I can also attach via iSCSI from Solaris, and am presented with both drives being served by the StarWind NAS. So, how do I configure Solaris so that disks M1 and M2 are treated as a single fault-tolerant drive?
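    If the two iSCSI targets are really two paths to the same mirrored LUN, the Solaris 10 counterpart to what Windows is doing is MPxIO (scsi_vhci) multipathing; a hedged sketch, with the portal addresses being hypothetical:

        # point the initiator at both portals, then enable multipathing
        iscsiadm add discovery-address 192.168.1.10:3260
        iscsiadm add discovery-address 192.168.2.10:3260
        iscsiadm modify discovery --sendtargets enable
        devfsadm -i iscsi    # create the iSCSI device nodes
        stmsboot -e          # enable MPxIO so both paths collapse into one device (reboot required)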

    Read the article

  • scsi and ata entries for same hard drive under /dev/disk/by-id

    - by John Dibling
    I am trying to set up a ZFS pool using four bare drives which I have attached to my Ubuntu system via a SATA hot-swap backplane. These are Hitachi SATA drives. When I list the contents of /dev/disk/by-id, I see two entries for each drive:

        root@scorpius:/dev/disk/by-id# ls | grep Hitachi
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG0ZJ7C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1064C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG190AC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1DGPC

    I know these are the same drives because I wrote down the serial numbers, and all the other drives in this system are either Seagate or WD. The serial number of the first one, for example, is YNG0ZJ7C. Why are there two entries here for each drive? More to the point, when I create my ZFS pool, which one should I use: the scsi- one or the ata- one?
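    Both names point at the same disk: udev generates one alias from the ATA layer (ata-*) and one from the SCSI/libata translation (scsi-SATA_*, which also truncates the model string). Either works; a hedged sketch of pool creation using the full by-id paths (the raidz layout is an arbitrary choice here, not a recommendation):

        zpool create tank raidz \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC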

    Read the article

  • Create Hello World with RESTful web service and Jersey

    - by Harry Pham
    I'm following the tutorial here on how to create a web service using RESTful web services and Jersey, and I've gotten stuck. The code is from HelloWorld3 in the tutorial I linked above. I use NetBeans 6.8 + GlassFish v3.

    RESTGreeting.java, created using JAXB; this class represents the HTML message in Java:

        package com.sun.rest;

        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.bind.annotation.XmlElement;

        @XmlRootElement(name = "restgreeting")
        public class RESTGreeting {

            private String message;
            private String name;

            /** Creates a new instance of Greeting. */
            public RESTGreeting() {
            }

            /** Creates a new instance of Greeting with parameters message and name. */
            public RESTGreeting(String message, String name) {
                this.message = message;
                this.name = name;
            }

            /** Getter for message; returns the value of message. */
            @XmlElement
            public String getMessage() {
                return message;
            }

            public void setMessage(String message) {
                this.message = message;
            }

            /** Getter for name; returns name. */
            @XmlElement
            public String getName() {
                return name;
            }

            public void setName(String name) {
                this.name = name;
            }
        }

    HelloGreetingService.java creates a RESTful web service that returns an HTML message:

        package com.sun.rest;

        import javax.ws.rs.core.Context;
        import javax.ws.rs.core.UriInfo;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.PUT;
        import javax.ws.rs.Path;
        import javax.ws.rs.GET;
        import javax.ws.rs.Produces;
        import javax.ws.rs.QueryParam;

        @Path("helloGreeting")
        public class HelloGreetingService {

            @Context
            private UriInfo context;

            /** Creates a new instance of HelloGreetingService. */
            public HelloGreetingService() {
            }

            /**
             * Retrieves a representation of an instance of
             * com.sun.rest.HelloGreetingService.
             */
            @GET
            @Produces("text/html")
            public RESTGreeting getHtml(@QueryParam("name") String name) {
                return new RESTGreeting(getGreeting(), name);
            }

            private String getGreeting() {
                return "Hello ";
            }

            /**
             * PUT method for updating or creating an instance of HelloGreetingService.
             * @param content representation for the resource
             */
            @PUT
            @Consumes("text/html")
            public void putHtml(String content) {
            }
        }

    However, when I deploy it on GlassFish and run it, it generates an exception. Debugging in NetBeans 6.8 shows that the line

        return new RESTGreeting(getGreeting(), name);

    in HelloGreetingService.java causes the exception, but I'm not sure why. Here is the stack trace:

        javax.ws.rs.WebApplicationException
            at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:268)
            at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1029)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:941)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:932)
            at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:384)
            at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:451)
            at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:632)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
            at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
            at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
            at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
            at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233)
            at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
            at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
            at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
            at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
            at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
            at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
            at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
            at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
            at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
            at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
            at java.lang.Thread.run(Thread.java:637)
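    A hedged reading of that trace: the WebApplicationException is thrown from ContainerResponse.write, i.e., Jersey fails while writing the response, not while constructing RESTGreeting; with @Produces("text/html") there is no MessageBodyWriter that can serialize a JAXB bean as HTML. Producing an XML media type, as in this sketch, is the usual fix:

        // Sketch: let the built-in JAXB writer serialize the bean as XML.
        @GET
        @Produces("application/xml")
        public RESTGreeting getHtml(@QueryParam("name") String name) {
            return new RESTGreeting(getGreeting(), name);
        }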

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 storage, I've received quite a few emails asking how to secure the deployment. At the time of this post, I regret to say that there is no way to secure your ClickOnce deployment hosted with Amazon S3. S3 storage is secured by ACLs, meaning that a username and password have to be provided before access. Amazon CloudFront, which sits on top of S3, allows you to apply security settings to your CloudFront distribution by:

    - applying encryption to the URL, or
    - restricting by IP.

    The problem with CloudFront is that the encryption of the URL is mandatory. ClickOnce does not provide a way to pass the "Amazon Public Key" to the CloudFront URL (you probably can if you start editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon allowed users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into... I won't hold my breath, though.

    Alternative: I suggest you look at Rackspace Cloud hosting (http://www.rackspacecloud.com); they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications. You can then use IIS security settings to restrict which IPs/blocks can access your ClickOnce payloads.

    Note: You don't really need Windows Server to host ClickOnce; any web server will do. If you are familiar with Linux, you can run that VM with Rackspace for half the price of Windows.

    I hope you found this information helpful.
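    For the IIS alternative, the IP restriction is plain IIS 7 configuration; a hypothetical web.config sketch that denies everyone except one client block (the addresses are placeholders, and the IP and Domain Restrictions feature must be installed):

        <system.webServer>
          <security>
            <ipSecurity allowUnlisted="false">
              <!-- hypothetical client network allowed to fetch the ClickOnce payload -->
              <add ipAddress="203.0.113.0" subnetMask="255.255.255.0" allowed="true" />
            </ipSecurity>
          </security>
        </system.webServer>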

    Read the article

  • Using XML as data storage

    - by Kian Mayne
    I was thinking about the XML format and the following quote:

        "XML is not a database. It was never meant to be a database. It is never going to be a database. Relational databases are proven technology with more than 20 years of implementation experience. They are solid, stable, useful products. They are not going away. XML is a very useful technology for moving data between different databases or between databases and other programs. However, it is not itself a database. Don't use it like one."
        (Effective XML: 50 Specific Ways to Improve Your XML by Elliotte Rusty Harold, page 230, Part 4, Item 41, 2nd paragraph)

    This seems to really stress that XML should not be used for data storage and should only be used for program-to-program interoperability. Personally, I disagree: .NET's app.config file, used to store a program's settings, is an example of data storage in an XML file. For databases, though (rather than configuration, etc.), XML should not be used.

    To develop my point, I will use two examples:

    A) Data about customers, with fields that are all on one level, i.e., a number of fields all relating to one customer, with no children.

    B) Data about the configuration of an application, where nested fields and properties make a lot of sense.

    So my question is: is this still a valid statement, and is it now acceptable to store data using XML?

    EDIT: I've sent an email to the author of that quote to ask for his input/extra context.
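    To make case B concrete, a hypothetical configuration fragment of the kind that maps naturally onto XML's nesting (all element names here are invented for illustration):

        <configuration>
          <logging level="warn">
            <sinks>
              <file path="logs/app.log" rollDaily="true" />
              <console />
            </sinks>
          </logging>
        </configuration>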

    Read the article

  • Making document storage in Sharepoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use SharePoint for document storage in order to make documents available to several people, have them version controlled, etc. Doing this through the web UI can be a real headache, especially when you have multiple documents to modify or upload, or when IE isn't your default browser. Luckily, we can access a SharePoint library like a regular network drive if we like.

    Open SharePoint in Internet Explorer (other browsers don't support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This opens the document storage in Explorer, and you can interact with the documents just as if they were on any other network drive. This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste), and modifying your files nice and easy.

    As an added bonus, you can drag and drop that location from the address bar in Explorer to the Favorites menu so that it's always easily accessible, and you can leave the SharePoint web UI behind completely for modifying your documents. Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want it to show up as another drive (e.g., an N: drive).

    I hope you found this as useful as I did.
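    Mapping the library as a drive letter works with the ordinary net use command against the library URL (the site URL below is hypothetical, and the WebClient service must be running for WebDAV access):

        net use N: "https://sharepoint.example.com/sites/team/Shared Documents" /persistent:yes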

    Read the article

  • What is a good book for administration & configuration of storage logical arrays?

    - by unknown (yahoo)
    I am looking for a book which explains the pros and cons of different combinations of configurations/policies for storage arrays, and which may also suggest best practices for certain scenarios, e.g., when data availability & security are very important. There are a lot of "books for dummies", but they don't go into depth. I am more of a developer, so I would like to understand how and why it actually works beneath the policies & configuration settings. I am working with an EMC CLARiiON logical array, but I will have to work with EMC Symmetrix, NetApp, or other types of disk arrays.

    Read the article

  • What is the most suitable way to manage iSCSI storage for Virtual Environments?

    - by Gabriel Talavera
    We are planning to place an HP MSA P2000 with two FC/iSCSI controllers on our network. We have two options for providing more storage to virtual machines (we are running Hyper-V):

    A) Add iSCSI targets to the virtual hosts, create VHDs there, and add those to each guest server.

    B) Directly add iSCSI targets in each guest server.

    Just wondering if one of these options is better than the other, and which is the common practice in a virtualized environment. Thanks in advance for any input!

    Read the article

  • Storage server architecture for backup: what is the best way? (pics inside)

    - by Kirzilla
    Hello. What is the best architecture for a storage server array? Needs:

    a) an easy way to add one more server to the array;
    b) we don't have a single backup server;
    c) we need one backup for each "web" part of each server.

    Group #1 is a cross-server backup scheme; the main disadvantage is that we can't add just one server, we have to add two servers at a time. Group #2 is Group #1 with three or more servers. It also has a disadvantage: to add one more server, we have to move an existing backup onto it. Any suggestions? Thank you.

    Read the article

  • Are Blu-ray discs the cheapest storage medium per GB?

    - by oshirowanen
    The question is as simple as that, really: are Blu-ray discs the cheapest storage medium per GB? I am recording video that uses about 32 GB per day, so a month of that would be almost 1 TB, and a year around 12 TB. I want to store at least a year's worth, with the possibility of more if needed. To me it seems that cheap Blu-ray discs would be the cheapest solution, but I wanted to get this confirmed.
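    The capacity arithmetic, using only the numbers above (a quick sketch in Python):

        # capacity estimate from 32 GB of video per day
        per_day_gb = 32
        per_month_tb = per_day_gb * 30 / 1000.0    # ~0.96 TB: "almost 1 TB" a month
        per_year_tb = per_day_gb * 365 / 1000.0    # ~11.7 TB: "around 12 TB" a year
        print(per_month_tb, per_year_tb)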

    Read the article

  • File storage service that allows clients to upload large files to my account?

    - by deceze
    Can anyone recommend an online file storage service which fulfills these requirements?

    - I can create an account.
    - I can invite clients to upload files into my account.
    - Clients do not need to register to be able to upload.
    - Clients must not be able to see anything but their own files, or they must not see any files at all; they get only a dropbox.
    - Only I can access the uploaded files; everything is non-public.
    - The service is multi-lingual.

    I just need clients to be able to send me potentially large files in a dead simple manner online, that's all. No registration step to go through, no software to download, no syncing or sharing. No setting up of individual folders and permissions for each individual client. No copying and pasting of links (a la Mediafire, Rapidshare, etc.).

    Read the article

  • Any cloud storage service that lets us authenticate the file when we serve it to our visitors?

    - by TORr0t
    Let's say I want to restrict a file to my visitors. I mean, I have an xx.avi file to be streamed/downloaded, and the visitor paid me for the bandwidth and the size of the file. In Amazon S3, I can't control the file at all (there is a very basic control mechanism, which is not OK for me). The only way is for my server to proxy the file: it fetches the file from the Amazon S3 storage node and sends it to the owner, with authentication approval by a PHP script. But this way I would double the bandwidth usage, and there would again be a latency problem, since my server needs to get the file from Amazon S3. So I was wondering if there is a better solution, or any cloud storage service that lets us control file restrictions for our visitors. Thanks

    Read the article

  • Java getInputStream SocketTimeoutException instead of NoRouteToHostException

    - by Jon
    I have an odd issue happening when trying to open multiple input streams (in separate threads) on Linux (RHEL). The behaviour works as expected on Windows.

    I am kicking off three threads to open HTTPS connections to three different servers. All three are invalid IP addresses (in this test case), so I expect a NoRouteToHostException for each of them. The first two return these as expected, and quite quickly (see the first stack trace below). However, the third (and the fourth, when I tested it that way) does NOT give a no-route exception. It waits for ages and then gives a SocketTimeoutException (see the second stack trace below). This takes ages to come back and does not accurately express the connection issue.

    The offending line of code is:

        reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));

    Has anyone seen something like this before? Are there multi-threading issues with sockets on RHEL, or some limit somewhere to how many can connect at once... or... something?

    Expected stack trace, as received for the first two:

        java.net.NoRouteToHostException: No route to host
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:529)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
            at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:272)
            at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)

    Unexpected stack trace, as received on the third:

        java.net.SocketTimeoutException: connect timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:529)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
            at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:272)
            at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)
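    One hedged mitigation, independent of the root cause: set an explicit connect timeout so the slow thread fails fast instead of waiting out the OS default (the values below are arbitrary placeholders):

        // Sketch: bound the connect phase so an unreachable host surfaces quickly.
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);   // ms; hypothetical value
        conn.setReadTimeout(15000);     // ms; hypothetical value
        reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));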

    Read the article
