Search Results

Search found 8523 results on 341 pages for 'sun storage 7000 scriptin'.

Page 54/341 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • DD-WRT/openwrt question

    - by Shiki
    Can I squeeze more speed out of my router (when it comes to the USB-attached storage device on it) with OpenWrt/DD-WRT? (Sorry, I don't really know these firmwares.) (I guess it works with ntfs-3g? I don't know.) Feel free to make this a real question. Basically the question: is the change worth it in terms of speed? (My router is a TP-Link WR1043N. I edited that out of the question since it would make it too specific.)

    Read the article

  • What lasts longer: Data stored on non-volatile flash RAM, optical media, or magnetic disk?

    - by Chris W. Rea
    What lasts longer: data stored on non-volatile flash RAM (USB sticks or SD cards?), optical media (CD, DVD, or Blu-ray?), or magnetic disk (floppies, hard drives?)? My gut tells me optical media, but I'm not sure. Furthermore, which of those digital media would be most suitable for long-term data storage where environmental conditions are unknown, such as low/high temperature or humidity? For example, what digital media could be stored in a basement, attic, or time capsule, and be expected to survive a reasonably long time? e.g., a lifetime, and then some. Update: Looks like optical media and magnetic tape each have one vote below. Does anybody else have an opinion or know of a study comparing the two?

    Read the article

  • Way to auto resize photos before uploaded to cloud service?

    - by AndroidHustle
    I love using auto-syncing services to have the photos taken with my smartphone stored on a cloud storage service. One problem, though, is that the photos are uploaded at full resolution and take up a lot of space on the drive. Does anyone know of a service/strategy to have the auto-uploaded photos resized so they occupy less space when stored? That is, without me having to take the photos at lower quality; I still want to shoot at the highest quality, since I may take a photo I really like.
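
    If no service does this automatically, one low-tech strategy is a small script that shrinks copies of new photos into a separate folder and lets the sync client watch that folder instead of the camera roll. Below is a minimal sketch using the Pillow library; the folder names and the 1600 px longest-edge target are assumptions, not anything the question specifies.

        # Sketch: downscale new photos into a separate sync folder (assumed paths/size).
        from pathlib import Path

        from PIL import Image  # pip install Pillow

        SOURCE = Path("camera_roll")    # hypothetical folder holding the originals
        TARGET = Path("cloud_sync")     # hypothetical folder the cloud client watches
        MAX_SIDE = 1600                 # assumed longest-edge limit for uploads

        TARGET.mkdir(exist_ok=True)
        for photo in SOURCE.glob("*.jpg"):
            out = TARGET / photo.name
            if out.exists():            # skip photos already processed
                continue
            with Image.open(photo) as img:
                img.thumbnail((MAX_SIDE, MAX_SIDE))  # keeps aspect ratio, only shrinks
                img.save(out, "JPEG", quality=85)    # moderate recompression

    The originals stay untouched at full quality; only the smaller copies in the sync folder get uploaded.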

    Read the article

  • Dell EqualLogic vs. EMC VNXe [closed]

    - by Untalented
    We've been looking into SMB SANs, and based on the competitive pricing I've been getting we're really liking these two arrays. There are some pros to both solutions, but I've been unable to really decide which to choose. The EMC offers better expandability, since you can buy an additional shelf (roughly $1200) and then add drives to the array. However, the Dell unit is still very nice. Can anyone comment on their experiences with the two and thoughts on this? Also, to get the VMware Storage API support you need VMware Enterprise. How much additional performance does this provide? It's roughly $15k more than the Essentials Plus bundle we're looking at (this is a small environment [3 hosts, 1 array]).

    Read the article

  • XenServer/Center: Shared SRs for hosts not in same pool?

    - by 3molo
    I would like to use the same SRs on XenServer hosts that are not able to be part of the same pool (because they don't have the exact same CPU feature set, if I understand it correctly) in order to share templates, be able to (manually) start a VM on another node, back up running VMs on other hardware, etc. The technology for the SR can be any of iSCSI, NFS or CIFS; iSCSI would obviously be preferred. Trying to add an iSCSI volume renders a "This LUN is already in use as SR iSCSI - Shared Storage on pool xxxxxx." Adding an NFS share on one XS host, creating a template there and then checking another XS host reveals they don't agree on used space etc. Coming from a vSphere world this is quite baffling, but if these are limitations then I will have to rethink some of the concepts for this low-budget project.

    Read the article

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a Django site with Apache mod_wsgi, with nginx as the front end reverse-proxying into Apache.

    In my Apache ports.conf file:

        NameVirtualHost 192.168.0.1:7000
        Listen 192.168.0.1:7000

        <VirtualHost 192.168.0.1:7000>
            DocumentRoot /var/apps/example/
            ServerName example.com
            WSGIDaemonProcess example
            WSGIProcessGroup example
            Alias /m/ /var/apps/example/forum/skins/
            Alias /upfiles/ /var/apps/example/forum/upfiles/
            <Directory /var/apps/example/forum/skins>
                Order deny,allow
                Allow from all
            </Directory>
            WSGIScriptAlias / /var/apps/example/django.wsgi
        </VirtualHost>

    In my nginx config:

        server {
            listen 80;
            server_name example.com;
            location / {
                include /usr/local/nginx/conf/proxy.conf;
                proxy_pass http://192.168.0.1:7000;
                proxy_redirect default;
                root /var/apps/example/forum/skins/;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    After restarting both Apache and nginx, nothing works; example.com simply hangs or serves the index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction. I've tried several tutorials online to no avail.

    Read the article

  • Intel SASWT4I SAS/SATA Controller Question

    - by Joe Hopfgartner
    Hey there! I want to assemble a cheap storage system based on the Norco RPC-4020 case. When searching for controllers I found this one: Intel® RAID Controller SASWT4I. This is a quote from the spec sheet: "Scalability. Supports up to 122 physical devices in SAS mode which is ideal for employing JBODs (Just a Bunch Of Disks) or up to 14 devices in RAID 0, 1, 1E/10E mode through direct connect device attachment or through expander backplane support." Does that mean I can attach 14 SATA drives directly to the controller using SFF-8087 to 4x SATA breakout cables? That would be nice, because then I can choose a mainboard with 6 onboard SATA ports and connect all 20 bays while only spending $155 on the controller and maybe another $100 on cables. Would that work? And why is it 14 and not 16 when there are 4 ports? I am really confused about all the breakout/fanout/(edge-)expanding/multiplying/channel stuff...

    Read the article

  • Mapping skydrive as network drive in macos

    - by vittore
    As you probably know, if you have a Windows Live account you can use the free 25 GB SkyDrive storage. Even more, a lot of people know that if you go to your SkyDrive in a browser and copy the cid query parameter value (https://...live.com/...&cid=xxxxxxxx), you can map SkyDrive as a network drive in Windows using the network path \\[cid].docs.live.net\[cid]\. I do know that if I have a network share like \\server\folder I can map it in macOS too, as smb://server/folder. However, that doesn't seem to be the case with SkyDrive: when I try to map it as smb://[cid].docs.live.net/[cid], Finder says it can't connect. Does anyone know how to map it?

    Read the article

  • Areca 1880ix RAID hangs

    - by Dave
    Areca RAID controller ARC-1880ix-12 (firmware 1.50) hangs under high load. My setup is:

    - Chenbro 3U chassis
    - Intel S5500BC mainboard
    - Xeon 5603 CPU
    - 16 GB of RAM
    - 12 Seagate SAS drives ST32000645SS (2 of them as hot spares, 10 as RAID10)
    - Mellanox InfiniBand HBA card

    This server is working as external InfiniBand storage for Xen VMs. When the load is quite heavy, Areca's firmware hangs: it becomes unreachable even from Areca's Ethernet adapter. After resetting the server power it returns to normal operation. While the Areca is hung I can confirm that it is powered (the Ethernet link is active) and the InfiniBand HBA works OK. Thanks in advance for any idea or suggestion where the problem might be!

    Read the article

  • Feedback on available mid-to-enterprise level desktop backup solutions [closed]

    - by user85610
    I am involved in the creation of a new backup solution to replace our current Retrospect setup, which has become a significant time sink to administer. We have almost 200 desktop and some laptop clients, both Windows and OS X. We're only interested in products oriented around disk-to-disk that would integrate well with our current set of nine NAS devices as target storage. I'd just like some feedback from anyone out there, as it's sometimes difficult otherwise to find objective reviews of software at this level. Both data and time are important enough that we need a reliable solution which won't be prone to self-destruction as often as Retrospect. Bonus points for de-duplication, which might help squeeze more service time out of our NAS setup in terms of capacity. Currently considering Commvault and NetBackup. Many other products I've seen don't have an OS X client. Any thoughts?

    Read the article

  • What is the best filesystem for storing thousands of files in one dictionary-like id-blob structure?

    - by Ivan
    What filesystem best suits my needs? Thousands or even millions of files in one directory. Good (ext4 & NTFS level, or close) reliability (incl. fault tolerance) and access speed. No directories actually needed, nor descriptive names; just a dictionary-like structure of id-blob pairs is all I need. No links, attributes, or access control features needed. The purpose is a file storage where all the metadata (data describing all the facts about what the file actually contains and who can access it) is stored in a MySQL database. As far as I know, common filesystems like NTFS and ext3/4 can go dead-slow if there are too many files placed in one directory - that's why I ask.
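
    If it turns out that no filesystem handles a single flat directory gracefully at that scale, one common workaround (offered purely as an illustration, not something the question asks for) is to fan the id-blob pairs out over a fixed set of subdirectories derived from the id, so no directory ever holds more than a few thousand entries while the application still sees a plain id -> blob dictionary. A rough sketch, with the two-level 256x256 fan-out being an assumption:

        # Sketch: map a numeric id to a sharded path so no single directory gets huge.
        import hashlib
        from pathlib import Path

        ROOT = Path("/srv/blobstore")   # hypothetical storage root

        def blob_path(item_id: int) -> Path:
            digest = hashlib.md5(str(item_id).encode()).hexdigest()
            return ROOT / digest[:2] / digest[2:4] / str(item_id)

        def put(item_id: int, data: bytes) -> None:
            path = blob_path(item_id)
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(data)

        def get(item_id: int) -> bytes:
            return blob_path(item_id).read_bytes()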

    Read the article

  • What is the value/cost of enabling "spread spectrum clocking" on my hard drives?

    - by Stu Thompson
    I'm building up a biggish NAS box (10x WD RE4 2TB SATA RAID10) and ran into some problems. During the course of my research, debugging, investigations, etc., I discovered a jumper on the physical drives labeled "spread spectrum clocking". After some googling about this on teh internets, it seems to be a feature that some suggest (without reference or explanation) enabling in 'a storage configuration', and that it makes the drive less susceptible to EMI. But why? I've got three core questions: Why is this feature not enabled by default? What are the actual benefits? Are there any costs?

    Read the article

  • Swap files in Cloud Infrastructures

    - by ffeldhaus
    At our company we set up an OpenStack cloud and are currently creating internal guidelines for the creation of OS templates/images. One controversial topic was whether we should provide swap inside the VM templates. Therefore I'd like to ask the following questions:

    - From an elastic cloud provider's point of view, does it make sense to offer swap partitions/files in the VM templates, or is swap not needed when a VM can be resized?
    - Which scenarios necessarily demand a swap file to be present?
    - What kind of storage should be used for swap files (e.g. local/central, FC/iSCSI/NFS)?
    - Are there any best practices for offering swap files in a performant way in cloud infrastructures?

    Read the article

  • Should you archive documents before backing up to the cloud?

    - by gabbsmo
    I'm planning to add cloud storage to my personal backup strategy. But now I wonder if it really is worth the trouble of compressing my documents and photos. The Open XML format already has zip compression and JPEG is a lossy image format, so there really isn't much benefit in compressing: 20 MB of documents become about 17 MB at the ULTRA preset of 7-Zip. One benefit I can imagine is that you can shorten upload time by archiving the folders, since it minimizes the number of requests that need to be sent to the server on upload and download. So what are your thoughts and experience on this issue? Should you archive your documents before backing them up to the cloud?
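
    For what it's worth, the "one archive per folder" idea is easy to test before committing to it. A minimal sketch is below; the folder names are placeholders, and since .docx and .jpg are already compressed, the point is mainly to cut the number of upload requests rather than to save space.

        # Sketch: bundle each top-level folder into one zip before upload (assumed paths).
        import shutil
        from pathlib import Path

        DOCUMENTS = Path("Documents")     # hypothetical source folder
        STAGING = Path("cloud_staging")   # hypothetical folder the backup tool uploads

        STAGING.mkdir(exist_ok=True)
        for folder in (p for p in DOCUMENTS.iterdir() if p.is_dir()):
            # make_archive appends .zip itself; deflate barely shrinks docx/jpg,
            # so the win is fewer files to transfer, not a smaller total size.
            shutil.make_archive(str(STAGING / folder.name), "zip", root_dir=folder)

    The obvious trade-off is restore granularity: a single changed file forces the whole archive to be re-uploaded.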

    Read the article

  • Safest snapshot of a failing harddrive?

    - by ironfroggy
    I have a headless machine that stopped booting, so I pulled it out for diagnostics and got a message that one of the hard drives was about to fail. I pulled them all out, and I need to get everything off before figuring out which one I need to get rid of. I wasn't sure which drive was failing, because the message only said "Harddrive 1" and I don't know which drive that refers to. I'm wondering about the best way to get everything off. I'm worried that if I just copy everything, I could get corrupt data and not realize some files are wrong until the drive is completely out of commission. What are my best options to get everything off in a way I can safely move to new storage?

    Read the article

  • Can different drive speeds and sizes be used in a hardware RAID configuration w/o affecting performance?

    - by R. Dill
    Specifically, I have a RAID 1 array configuration with two 500gb 7200rpm SATA drives mirrored as logical drive 1 (a) and two of the same mirrored as logical drive 2 (b). I'd like to add two 1tb 5400rpm drives in the same mirrored fashion as logical drive 3 (c). These drives will only serve as file storage with occasional but necessary access, and therefore, space is more important than speed. In researching whether this configuration is doable, I've been told and have read that the array will only see the smallest drive size and slowest speed. However, my understanding is that as long as the pairs themselves aren't mixed (and in this case, they aren't) that the array should view and use all drives at their actual speed and size. I'd like to be sure before purchasing the additional drives. Insight anyone?

    Read the article

  • UnsatisfiedLinkError on xawt when running HEC-HMS.sh

    - by G.Oxsen
    I am a recent adopter of Linux and this problem has got me stumped. I use HEC-HMS and HEC-DSSVue for work on a regular basis. I have been using the Windows versions in Wine, but they are really buggy, so I decided to try out the Linux versions. The links below will take you to the download pages for these two programs. They are free programs for hydrology and data management. Once I install them and attempt to run the shell file (HEC-HMS.sh, for example) I get a ton of Java errors that I do not understand. If I had to guess, I would say that the Java files in question cannot be found. When I check to see if Java is installed, it is. Here is the output from the terminal from trying to run HEC-HMS.sh:

        Exception in thread "Thread-1" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
            at sun.awt.DebugHelper.<clinit>(Unknown Source)
            at java.awt.Component.<clinit>(Unknown Source)
            at javax.swing.ImageIcon.<clinit>(Unknown Source)
            at hms.i.c(Unknown Source)
            at hms.i.b(Unknown Source)
            at hms.K.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Exception in thread "Thread-4" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.awt.Toolkit.loadLibraries(Unknown Source)
            at java.awt.Toolkit.<clinit>(Unknown Source)
            at sun.print.CUPSPrinter.<clinit>(Unknown Source)
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at sun.print.UnixPrintServiceLookup.refreshServices(Unknown Source)
            at sun.print.UnixPrintServiceLookup$PrinterChangeListener.run(Unknown Source)
        Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class java.awt.Toolkit
            at java.awt.Color.<clinit>(Unknown Source)
            at hms.model.l.<init>(Unknown Source)
            at hms.model.ProjectManager.<init>(Unknown Source)
            at hms.Hms.<init>(Unknown Source)
            at hms.Hms.main(Unknown Source)
        Exception in thread "Thread-2" java.lang.NoClassDefFoundError: Could not initialize class sun.print.CUPSPrinter
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at javax.print.PrintServiceLookup.lookupDefaultPrintService(Unknown Source)
            at hms.util.f.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)

    I get similar output when I try to run HEC-DSSVue.sh. If anyone could shed some light on a solution I would really appreciate it. The problem turned out to be that the program needed 32-bit versions of the particular dependencies.

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

        public void AddItemSecurity(int itemId, int[] userIds)
        public int[] GetValidItemIds(int userId)

    Initially I'm thinking of storage of the form

        itemId -> userId, userId, userId
        userId -> itemId, itemId, itemId

    AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2,000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be sub-second. Also, if there is an update on an existing itemId, I need to remove that itemId for users no longer in the list.

    I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The itemId -> userId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.

    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)

    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:

    - Table with two columns (userid, itemid), both int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users - a total of 144 million rows
    - Allocated 4 GB of RAM for SQL Server
    - Dual-core 2.66 GHz laptop
    - SSD disk
    - Used a SqlDataReader to read all itemids into a List
    - Looped over all users

    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results get worse: adding a third thread brings a lot of the queries up to 2 seconds; a fourth thread, up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even with one thread. My test app takes some of the CPU due to the tight loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well, at least not on the hardware I tested. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But that makes it harder to remove items.
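
    The per-user bitmap idea in the question translates almost directly into code. Purely as an illustration (a Python stand-in for the .NET byte-array / memory-mapped-file approach, not the author's actual implementation, and without the persistence or removal logic), a year-bucketed bit array might look like this:

        # Sketch of the idea described above: one bit per item per user, bucketed by
        # the year prefix of the 10-digit item id. In-memory only; persistence
        # (memory-mapped files) and diffing on updates are left out.
        from collections import defaultdict

        ITEMS_PER_YEAR = 1_000_000          # assumed bound on the 6-digit suffix

        class ItemSecurity:
            def __init__(self):
                # (user_id, year) -> bytearray with one bit per possible item suffix
                self.bits = defaultdict(lambda: bytearray(ITEMS_PER_YEAR // 8))

            @staticmethod
            def split(item_id):
                return divmod(item_id, ITEMS_PER_YEAR)   # (year, suffix)

            def add_item_security(self, item_id, user_ids):
                year, suffix = self.split(item_id)
                for uid in user_ids:
                    self.bits[(uid, year)][suffix // 8] |= 1 << (suffix % 8)

            def get_valid_item_ids(self, user_id):
                valid = []
                for (uid, year), arr in self.bits.items():
                    if uid != user_id:
                        continue
                    for suffix in range(ITEMS_PER_YEAR):
                        if arr[suffix // 8] & (1 << (suffix % 8)):
                            valid.append(year * ITEMS_PER_YEAR + suffix)
                return valid

    Each (user, year) bucket is 125 KB, which lines up with the "little over 1 MB per user" estimate once roughly ten years of items are present.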

    Read the article

  • SFTP - Unable to Overwrite File - "EOF received from remote side"

    - by NateReid
    I am working with a customer to troubleshoot an error they get when trying to overwrite/"PUT" a file to our SFTP site. When the root directory is empty and they try to upload the file there is no problem, but the error occurs when they try to overwrite an existing file. The error they receive when issuing a put command from Java CAPS is:

        Batch SFTP eWay error when doing data transfer operation in [PUT()], message=[EOF received from remote side [Unknown cause]].|#]

    When they use WinSCP or FileZilla to put the file, it overwrites fine with no errors. We have tried:

    - Multiple different files
    - Checking their SFTP user permissions
    - Giving full access permissions to "everyone" on the user's root directory in Windows
    - Recreating their user account
    - Ensuring no other processes are using/locking the files that are being overwritten

    We are using Cerberus Professional FTP server software. Any ideas of what else we could try?
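
    One way to narrow down whether the problem lies with the Java CAPS eWay or with the server is to reproduce the overwrite from a third, scriptable client. A small sketch using the paramiko library is below; the host, credentials and file names are placeholders, not real values.

        # Sketch: overwrite an existing remote file over SFTP to see whether the
        # server itself rejects the second PUT (placeholder host/credentials/paths).
        import paramiko

        HOST, PORT = "sftp.example.com", 22
        USER, PASSWORD = "testuser", "secret"

        transport = paramiko.Transport((HOST, PORT))
        transport.connect(username=USER, password=PASSWORD)
        sftp = paramiko.SFTPClient.from_transport(transport)
        try:
            sftp.put("local.dat", "/inbox/test.dat")   # first upload into an empty directory
            sftp.put("local.dat", "/inbox/test.dat")   # second upload overwrites the same file
            print("overwrite succeeded")
        finally:
            sftp.close()
            transport.close()

    If this overwrite also fails with an early EOF, the server (or something locking files on it) is the more likely culprit; if it succeeds, the eWay's handling of existing files is worth a closer look.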

    Read the article

  • "Attach to native process failed" with Apache 2.0 Agent 2.202 for RHEL5 Linux 64bit

    - by Richard
    In trying to install Apache 2.0 Agent 2.202 for RHEL5 Linux 64-bit, the dialogue appears as follows.

        # export JAVAHOME=/usr/java/jdk1.6.0_24/; echo $JAVAHOME
        /usr/java/jdk1.6.0_24/
        # ./setup
        Launching installer...

        Attach to native process failed

    On the server we have the following JREs and I've tried both.

        $ sudo rpm -qa | egrep "(openjdk|icedtea)"
        java-1.6.0-openjdk-1.6.0.0-1.27.1.10.8.el5_8

    And SELinux appears to be off:

        # cat /etc/sysconfig/selinux
        SELINUX=disabled
        SELINUXTYPE=targeted

    Read the article

  • VirtualBox guest network lost after host disconnects

    - by webjunk
    I am running VirtualBox both on a Snow Leopard OS X host machine and on a Windows Vista host machine. Whenever my host machines lose their internet connection, the guest machines seem to lose internet connectivity permanently, even after the host's connection to the Internet is reestablished. Resetting guest networking in the guest OS, disconnecting the cable via the host VirtualBox settings, and even restarting the guest OS do not help at all. The guest can no longer access the Internet. The only solution is restarting VirtualBox itself while the host is connected to the Internet. This really gets to be a pain when the host goes into sleep mode or I disconnect my laptop at work and then reconnect at home. Guests are set up with NAT networking. It affects guest machines with both Ubuntu and Windows XP OSes. Is this expected behavior? Does anyone know of a fix? Or am I set up incorrectly?

    Read the article

  • SunOne case-insensitive URLs

    - by RoToRa
    Is it possible to configure a SunOne web server to automatically redirect all URLs with capital letters to the corresponding lowercase URLs? For example, redirect /Example, /eXamPle and /EXAMPLE all to /example. This would have to be for all URLs (or at least a subset excluding a specific prefix). I normally have nothing to do with web server configuration (especially not SunOne). I just need to know if it is generally possible, and to be pointed in the right direction on how to do it. Thanks.

    Read the article

  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then I proceeded to install the Android SDK Starter Package: installer_r08-windows.exe. But... upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this?

    Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the Installed JREs in Eclipse, as suggested below. None of these solved the problem. It appears that I am not the only one experiencing the problem, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit I wonder whether there is a 64-bit version of the Android SDK.

    Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), not before having to check: 'Contact all update sites during install to find required software'. We'll see how this goes...

    Update (3): It worked! (going to accept @bubu's answer shortly) -- but why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?

    Read the article

  • Java issues with Apache 2.0 Agent 2.202 for RHEL5 Linux 64bit

    - by Richard
    In trying to install Apache 2.0 Agent 2.202 for RHEL5 Linux 64-bit, the dialogue appears as follows.

        $ ./setup
        Error : java is not present in path.
        Please enter JAVAHOME path to pick up java:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/
        Launching installer...

        Attach to native process failed

        $ ./setup
        Error : java is not present in path.
        Please enter JAVAHOME path to pick up java:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib
        ./setup: line 80: [: 107:: integer expression expected
        ./setup: line 83: [: 107:: integer expression expected
        Error : Incorrect java version (1.2.2 or above is needed).
        Please enter JAVAHOME path to pick up java:

    On the server we have the following JREs and I've tried both.

        $ sudo rpm -qa | egrep "(openjdk|icedtea)"
        java-1.6.0-openjdk-1.6.0.0-1.27.1.10.8.el5_8
        $ find 2>/dev/null | grep -i '/jre/'
        ./usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre/bin/
        ...
        ./usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/

    Any suggestions? I know I'm overlooking something. In previous searches I've only found one other posting that comes close, but it has no responses (http://forum.parallels.com/showthread.php?t=76556).

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >