Search Results

Search found 8770 results on 351 pages for 'sun zfs storage 7000 appl'.

  • Signature of Collections.min/max method

    - by Marco
    In Java, the Collections class contains the following method:

        public static <T extends Object & Comparable<? super T>> T min(Collection<? extends T> c)

    Its signature is well known for its advanced use of generics, so much so that it is mentioned in the Java in a Nutshell book and in the official Sun Generics Tutorial. However, I could not find a convincing answer to the following question: why is the formal parameter of type Collection<? extends T>, rather than Collection<T>? What's the added benefit?
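
    For what it's worth, the difference can be demonstrated with a short sketch (the Fruit/Apple classes and the minExact helper below are hypothetical, written only for comparison): with the wildcard signature, an explicit type argument still accepts a collection of a subtype, while a plain Collection<T> parameter would not.

        import java.util.*;

        class Fruit implements Comparable<Fruit> {
            final int weight;
            Fruit(int weight) { this.weight = weight; }
            public int compareTo(Fruit other) { return Integer.compare(weight, other.weight); }
        }

        class Apple extends Fruit {
            Apple(int weight) { super(weight); }
        }

        public class MinWildcardDemo {
            // Hypothetical variant with the narrower parameter type, for comparison only.
            static <T extends Object & Comparable<? super T>> T minExact(Collection<T> c) {
                return Collections.min(c);
            }

            public static void main(String[] args) {
                List<Apple> apples = Arrays.asList(new Apple(150), new Apple(120));

                // Both signatures accept this call; T is inferred as Apple.
                Apple lightest = Collections.min(apples);

                // With an explicit type argument, only the wildcard version accepts a
                // collection of a subtype: a List<Apple> is a Collection<? extends Fruit>.
                Fruit f = Collections.<Fruit>min(apples);              // compiles
                // Fruit g = MinWildcardDemo.<Fruit>minExact(apples);  // error: not a Collection<Fruit>

                System.out.println(lightest.weight + " " + f.weight);
            }
        }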

    Read the article

  • Optimal Configuration for five 300 GB 15K SAS Drives

    - by Bob
    I recently acquired an HP Z800 workstation that has five 300 GB 15K SAS Drives. This system will be dedicated to running multiple virtual machines under VMware Workstation (Note: I'm not using ESXi because I do plan to use the system for other purposes.). For the host OS, I plan to install RHEL 5. My number one concern is guest performance. For example, should I create a RAID 10 array for the OS and virtual machine storage with four of the drives and reserve the 5th? Or, is there a solution that will provide better performance?

    Read the article

  • uWSGI loggin format unification

    - by Mediocre Gopher
    I'm attempting to unify the log format of my uwsgi instance. Currently there are three different types of log items:

        Sun Sep 2 17:31:00 2012 - spawned uWSGI worker 10 (pid: 2958, cores: 8)

        (DEBUG) 2012-09-02 17:31:01,526 - getFileKeys_rpc called

        Traceback (most recent call last):
          File "src/dispatch.py", line 13, in application
            obj = discovery(env)
          File "src/dispatch.py", line 23, in discovery
            ret_obj = {"return":dispatch(method,env)}
          File "src/dispatch.py", line 32, in dispatch
            raise Exception("test")
        Exception: test

    The first is a message logged by uWSGI internally (I have the --log-date option set). The second is from the logging module, which has logging.basicConfig(format='(%(levelname)s) %(asctime)s - %(message)s') set. The final one is an uncaught exception. I understand that the uncaught exception probably can't be formatted, but is there some way of having uwsgi use the logging module for its internal logs? Or the other way around?

    Read the article

  • Win 7 Explorer backup and long paths

    - by user53299
    I use Explorer to do backups because Win 7's backup program asks me to take previously made backups and put them back in the drive. I am opposed to that idea, since I believe backups should remain in storage. With Explorer backups (burn and burn to disc) I have encountered the "destination path too long" error message, and it shows the name of a folder "Debug" three times. I have hundreds of folders named "Debug", thanks to Visual Studio. At this moment I'm too angry at Microsoft to write a program to determine my 3 longest paths. (Aside: this is all after coincidentally reading two articles about path junctions earlier this evening, which already made me kind of unhappy.) Please, is there an easy way to continue to make backups with Explorer? Edit: I should add that renaming paths wrecks Visual Studio projects, so I really need to isolate the small number of problem paths or find a cleaner solution.
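
    The program in question is small enough to sketch here (the LongestPaths class is hypothetical and not part of the original question; it assumes Java 8's java.nio.file API): it walks a directory tree and prints the longest absolute paths, which should isolate the few problem folders.

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.Comparator;

        public class LongestPaths {
            public static void main(String[] args) throws IOException {
                // Walk the tree rooted at the first argument (or the current directory).
                Path root = Paths.get(args.length > 0 ? args[0] : ".");
                try (java.util.stream.Stream<Path> paths = Files.walk(root)) {
                    paths.map(p -> p.toAbsolutePath().toString())
                         .sorted(Comparator.comparingInt(String::length).reversed())
                         .limit(3)  // the three longest paths
                         .forEach(s -> System.out.println(s.length() + "  " + s));
                }
            }
        }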

    Read the article

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?
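
    One way to test the write-caching theory is a micro-benchmark that writes the same data as many small unbuffered writes versus one buffered stream, run against both a local path and the SMB share. The sketch below (a hypothetical SmallWriteBench harness, not from the original question) does exactly that; if the unbuffered case collapses only on the share, per-write network roundtrips are the likely culprit.

        import java.io.*;

        public class SmallWriteBench {
            // Writes `chunks` blocks of `chunkSize` bytes to `path`, optionally buffered.
            static long run(String path, int chunks, int chunkSize, boolean buffered) throws IOException {
                byte[] block = new byte[chunkSize];
                long start = System.nanoTime();
                OutputStream out = new FileOutputStream(path);
                if (buffered) {
                    out = new BufferedOutputStream(out, 1 << 20);  // 1 MiB buffer coalesces small writes
                }
                try {
                    for (int i = 0; i < chunks; i++) {
                        out.write(block);
                    }
                } finally {
                    out.close();  // also flushes the buffer
                }
                return (System.nanoTime() - start) / 1_000_000;
            }

            public static void main(String[] args) throws IOException {
                String target = args[0];  // e.g. a file on the SMB share vs. a local path
                System.out.println("unbuffered 4 KiB writes: " + run(target, 25_000, 4096, false) + " ms");
                System.out.println("buffered   4 KiB writes: " + run(target, 25_000, 4096, true) + " ms");
            }
        }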

    Read the article

  • Ways to increase my Ubuntu partition space

    - by Andreas Grech
    I am currently running Ubuntu and Windows 7 as dual-boot on a single HD. The problem is that when I installed Ubuntu, I didn't allocate as much space as I thought I would need, and now I need to 'reinstall' Ubuntu so that I can increase the amount of storage space. Now there are two ways to go about this. Either I use gparted to increase my partition space (but I read that it's not really that safe as regards data loss), or I create the new partition with more space and reinstall Ubuntu there. But if I want to reinstall Ubuntu, is there a way I can somehow "save" my current Ubuntu and install that one? What I mean is that I don't want to lose my currently installed packages and the files that I have on this partition. Is there a way to kind of 'streamline' my current Ubuntu so that I can install it on the new partition? If not, what are your opinions as regards gparted?

    Read the article

  • Parity Initialization after putting in two new disks

    - by lbanz
    All my firmware is up to date on the server and the controllers. Storage crashed over the weekend. I rebooted it and it detected that I put in two new disks last week (I did check that both disks completed the rebuilding process last week). After it booted into the OS, I saw that it gave me an informational message:

        785 Background parity initialization is currently queued or in progress on Logical Drive 1 (15.0 TB, RAID 5). If background parity initialization is queued, it will start when I/O is performed on the drive. When background parity initialization completes, the performance of the logical drive will improve.

    After 18 hours it is at 54%, so it is looking healthy. But I need to replace 5 more disks in the MSA. Should I wait for this process to finish before replacing more disks?

    Read the article

  • java.io in debian

    - by Stig
    Hello, I am trying to compile a Java program, but it fails in the import section of the code:

        import java.net.*;
        import java.io.*;
        import java.util.*;
        import java.text.*;
        import java.awt.*;
        //import java.awt.image.*;
        import java.awt.event.*;
        //import java.awt.image.renderable.*;
        import javax.swing.*;
        import javax.swing.border.*;
        //import javax.swing.border.EtchedBorder;
        //import javax.media.jai.*;
        //import javax.media.jai.operator.*;
        //import com.sun.media.jai.codec.*;
        //import java.lang.reflect.*;

    How can I fix the problem on a Debian Linux machine? Thanks

    Read the article

  • Transfer many Gigabytes between two servers

    - by Bernhard
    Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. The new root server is accessible by SSH, of course :-) I need to move all the data from the old space, but the amount is just huge. Is there a way to move all the files directly from the old FTP to the new storage, and not via a third station (my local machine)? I've tried it with ftp but it didn't work. I think I've used the wrong commands. Is there a way to do this? Thank you in advance, Bernhard

    Read the article

  • Reliable 1tb or larger hard drive?

    - by jasondavis
    I am in the market for 2-3 new drives; I would like each to be at least 1 TB to 2 TB in size. I have been reading all the reviews on newegg.com for 1 TB and larger drives, and they all have one thing in common: almost all the ones I read about have complaints of them being DOA or dying within a few weeks of use. I am hoping to find some drives in this storage range that have a reputation for lasting a long time instead of having a short life. Please help me if you have any experience with these sorts of drives. Most of the ones I read about were Western Digital brand. I realize some might complain that this question's answer would be based upon a timeframe, so if a user searches and finds this answer a year from now it will be outdated, but I would appreciate any help based on the current hard drives available as of April 10th, 2010 on newegg.com.

    Read the article

  • PowerShell create new Azure VM from uploaded disk (not image)

    - by MikeBaz
    I have a VHD in Azure storage. That VHD is configured as an OS disk through a command like the following:

        Add-AzureDisk -DiskName $newCode `
            -MediaLocation "http://$script:accountName.blob.core.windows.net/$newCode/$sourceVhdName.vhd" `
            -Label $newCode -OS "Windows"

    I would like to create a new VM pointing at that disk. From what I can tell, if I were doing this with an image I would do something like:

        New-AzureVMConfig -Name $newCode -InstanceSize $instanceSize `
            -MediaLocation "http://$script:accountName.blob.core.windows.net/$newCode/$sourceVhdName.vhd" `
            -ImageName $newCode `
            | Add-AzureProvisioningConfig -Windows -Password $adminPassword `
            | New-AzureVM -ServiceName $newCode

    However, this is wrong for me because I don't have an image - I have a configured VHD that is not sysprepped and can't be. How can I create the VM in PowerShell to point at the existing disk, like I can through the portal?

    Read the article

  • Server location moved and how can I move the files

    - by Bernhard
    Hello everyone, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server which is accessible by SSH, of course :-) Now I need to move all the data from the old space, but there are a lot of GB of files. Is there a way to fetch all the files directly from the old FTP to the new storage, and not via a third station (my local machine)? I've tried it with ftp but without success. I think I've used the wrong commands. Is there a way to establish something like this, including all files and directories? Thank you in advance, Bernhard

    Read the article

  • How to Disable secondary drive from booting upon restart - Windows

    - by DevCompany
    I had a Windows 2003 hard drive in my server and it went bad, so I installed a new clean hard drive and installed Windows 2008 R2 on it. I moved the old 2003 drive to be used only for general storage on the same computer. It usually boots into Windows 2008 upon a restart, but sometimes it starts trying to boot the old 2003 drive and causes boot issues (NTLDR bootloader and other errors), even though the boot order preference is set to boot 2008, NOT 2003. I need to know how to remove any old code that keeps this old drive as a bootable drive. I still want to use it as a secondary drive, just without any boot code on it. Hopefully my situation is clear enough for everyone to give a good response. Thank you...

    Read the article

  • Using Folder Redirection GPO and Offline Files and Folders

    - by user132844
    I want to use Folder Redirection to redirect user's My Documents to a network share. First question is: What is best practices for mapping the drive? Should I use the profile tab in AD with the %username% variable, or a net use logon script, or something else? Second question is: How do I deal with laptops and syncing the network with the local storage? I want to have 2-way syncing so if they manually map their networked home drive and edit it from a different computer, it will sync the newer version to their My Documents folder the next time they connect their normal work computer. I also want to be sure that if they edit a file offline on their laptop while away from the office, that the network version syncs the changes the next time they connect that laptop. Please advise best practices for this scenario in a 2008 R2/Win7 environment. I am also interested in Mac clients for this environment, and while I am very Mac savvy, I would like to hear what others consider to be best practices for Mac network homedirs in a Win environment.

    Read the article

  • how to debug java mail

    - by voipp
    My goal is to debug my program, which uses the JavaMail library (including javax.mail and com.sun.mail). So I decided first to download the JavaMail sources and compile them with the -g option. I went to the JavaMail sources and binaries, and downloaded them. Somehow the sources are stored in a jar, not just a zip. OK. Then I decided to decompile the jar into a zip with the JAD plugin in Eclipse. After decompiling I received an empty directory. I downloaded jad.exe and ran it, but it threw a message: JavaClassFileParseException: Not a class file. It says it decompiles only classes, but what about jars? Is it so hard to just store sources in a plain zip?!
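
    Two notes that may help here. First, a jar is itself a zip archive, so no decompiler is needed: any unzip tool (or jar xf) can unpack the sources. Second, for the underlying goal of debugging JavaMail traffic there may be no need to recompile at all, since JavaMail has a built-in debug mode. A minimal sketch, assuming javax.mail is on the classpath (the class name MailDebug and the commented-out host are placeholders):

        import java.util.Properties;
        import javax.mail.Session;

        public class MailDebug {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("mail.debug", "true");  // enable protocol traces via a property
                // props.put("mail.smtp.host", "smtp.example.com");  // hypothetical host

                Session session = Session.getInstance(props);
                session.setDebug(true);  // or enable the traces programmatically
                // session.setDebugOut(new java.io.PrintStream(System.err));  // redirect trace output

                // ...create and send messages as usual; protocol exchanges are now logged
            }
        }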

    Read the article

  • RHEL raw device (over VMware RDM) performance issues

    - by jifa
    I'm running RHEL 5.3 over vSphere 4.0U1. I configured multiple LUNs on my NetApp (Fibre) storage, and added the RDMs on two (Linux) VMs, using the Paravirtual SCSI adapter. One LUN is 100GB in size, successfully mapped to /dev/sdb on both VMs; 5 more are 500MB in size (mapped to /dev/sd{c-g}). I also created one partition per device. I have encountered two issues. First, writing directly to /dev/sdb1 gives me ~50MB/s, while any of the /dev/sd{c-g}1 gives me ~9MB/s. There is no difference in the configuration of the LUNs apart from their size. I am wondering what causes this, but it is not my main problem, as I would settle for 9MB/s. I created raw devices using udev pretty straightforwardly, with one rule per device:

        ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"

    Writing to any of the new raw devices dramatically slows down performance to just over 900KB/s. Can anyone point me in a helpful direction? Thanks in advance, -- jifa

    Read the article

  • Command-line access for Apple Time Machine?

    - by Stefan Lasiewski
    We use Apple's Time Machine to back up our workstations at the office. If I want to restore a file, I need to open up the Time Machine GUI and browse files there. The GUI is ugly eye-candy and gets in my way. Is there a way to browse the Time Machine archive using the Mac's command line? I'm used to NetApps and other storage appliances. I use backintime for my Ubuntu workstation. To restore a file with one of those systems, you can restore a file with a simple command like:

        cp .snapshot/daily.0/filename.txt .

    or:

        cp /backup/backintime/20100611-000002/backup/etc/shadow /etc/shadow

    Is there an equivalent for Apple's Time Machine?

    Read the article

  • jquery calendarpicker callback pass querystring

    - by user577318
    Trying to use this CalendarPicker; source and docs here: http://bugsvoice.com/applications/bugsVoice/site/test/calendarPickerDemo.jsp

    I need to be able to pass the selected date as a query string variable "searchdate" and reload the page, also updating the calendarPicker's current date from the query string date on page reload. This is what I have so far:

        jQuery(document).ready(function() {
            var calendarPicker = jQuery("#calendarpicker").calendarPicker({
                monthNames: ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"],
                dayNames: ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"],
                years: 0,
                months: 6,
                days: 5,
                showDayArrows: true,
                callback: function(cal) {
                    // Simple output to test calendar date change
                    jQuery("#output").html("Selected date: " + cal.currentDate.getFullYear() + "-" +
                        cal.currentDate.getMonth() + "-" + cal.currentDate.getDate());
                    // Not working well since it also includes arrows from datepicker as selectors
                    jQuery(".calDay").children().click(function() {
                        window.location.href = "mysite.com?searchdate=" + cal.currentDate.getFullYear() + "-" +
                            cal.currentDate.getMonth() + "-" + cal.currentDate.getDate();
                    });
                }
            });
        });

    Any help greatly appreciated. Can this be done with ajax? I am attempting to update a table of events by datepicker.

    Read the article

  • Dell Powervault MD3000 - Not sharing Files between servers

    - by Kevin
    I'm a developer who has to set up a Dell Powervault MD3000 due to lack of resources. I have connected the Powervault to 2 Dell 2950 servers via the SAS cables. I performed the setup using Dell's MD Storage Manager software (4 disks, RAID 5 with hot spare). Then I added the disks using Windows 2003 disk management (Basic, not dynamic disk and formatted with NTFS). When I add files to the array from one server, they are not visible on the other server (and vice-versa). Is the error in the windows disk management configuration?

    Read the article

  • Network adapters reliability

    - by casey_miller
    Can you help me understand the reliability of network adapters? Most of the time servers have at least 2 NICs bonded to provide a sort of HA: in case one NIC fails, the second would still do the job. I wonder which factors matter for network adapters. I know that the most important and weakest part of any computer system is storage (i.e. the HDD), but how reliable are network adapters, actually? There are more expensive adapters and cheaper ones. In which cases do they actually fail? Under what circumstances - might it be intensive usage, or the time they have been powered on? In your experience, how often have you found yourself changing NICs due to failure? Or, just what's the typical lifetime of commodity NICs? Thanks.

    Read the article

  • User accounts in FTP

    - by Brad
    I have an FTP server (proftpd on Debian) that I'm going to allow a couple of friends access to, and I want some safety nets in place, just in case. These are some of the things I'd like to do:

    - Jail the accounts to their home directories and impose a cap on the amount of data they can upload
    - Allow them access to a shared folder (via symlink or something) where they have full access (also with a storage cap, but larger)
    - Allow my own account full access to the system (using groups, I guess)
    - Not allow anonymous access, or allow it with its own folder, separate from the shared user folder

    Currently, I've got the accounts set up and jailed, but it seems like the symlink that I put in is not allowing them to visit the shared folder. I suppose this has to do with them not having read permissions anywhere but their own home directories, or maybe it's something else; I'll continue to look into it and provide any information that is requested. Is what I'm trying to do possible? Any tips or resources that you can share are appreciated. Thanks.

    Read the article

  • What are the cheap CDN for Origin Pull?

    - by DucDigital
    I've read several threads around ServerFault about this, but I'm still not satisfied with the answers, so I'm posting a question here. I need an Origin Pull CDN that supports big files (more than 200MB). I don't need a storage service (the offered storage is too small anyway); I just need the CDN to relay my origin server. Also, the price should be affordable: of course not more than $150 a month for their smallest plan. I also need to pay by credit card, since I do not work or stay in the US, so it's hard for me to do a bank wire. Thank you very much

    Read the article

  • java.awt.HeadlessException thrown from HeadlessGraphicsEnvironment.getDefaultScreenDevice

    - by Omry
    I need to do some image processing on a Java server (Debian with java version "1.6.0_12"), and I am receiving java.awt.HeadlessException from my code:

        java.awt.HeadlessException
            at sun.java2d.HeadlessGraphicsEnvironment.getDefaultScreenDevice(HeadlessGraphicsEnvironment.java:64)
            at WaxOn.getDefaultConfiguration(WaxOn.java:341)

    This happens even when java.awt.headless is set to true (as evident by this code printing so):

        if (!java.awt.GraphicsEnvironment.isHeadless()) {
            logger.warn("Headless mode is not enabled");
        } else {
            logger.info("Headless mode");
        }

    This is the code that throws the exception:

        public static GraphicsConfiguration getDefaultConfiguration() {
            GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
            GraphicsDevice gd = ge.getDefaultScreenDevice();
            return gd.getDefaultConfiguration();
        }

    Any idea how to solve this?
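
    Worth noting: getDefaultScreenDevice() is documented to throw HeadlessException when headless mode is on, so the flag is working as intended; the exception comes from asking for a screen that does not exist. For server-side image processing, a common workaround is to skip the screen device entirely and draw into an off-screen BufferedImage. A minimal sketch (the HeadlessSafe class is hypothetical):

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public class HeadlessSafe {
            // Works in headless mode: no GraphicsDevice/GraphicsConfiguration needed.
            public static BufferedImage render(int w, int h) {
                BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = img.createGraphics();
                try {
                    g.drawLine(0, 0, w - 1, h - 1);  // any Java2D drawing works here
                } finally {
                    g.dispose();
                }
                return img;
            }
        }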

    Read the article

  • Local sites not displaying in VirtualBox when using Django's local development server?

    - by littlejim84
    Hello. I develop web applications using Django on Mac OSX 10.6. I use Django's built in local development server which I run on my computer's IP (such as: http://192.168.0.11:8001/). I test my applications in Firefox, Safari and Chrome and all display fine. I use Sun's VirtualBox with 3 different instances of Windows XP that have IE6, IE7 and IE8 on them. For whatever reason, these sometimes just don't display the Django sites. They come up with 'The page cannot be displayed'. Eight times out of ten, they display fine and function normally but for no reason at all they won't display. Sometimes restarting Django's local development server from the Terminal will fix the problem, sometimes it won't. Is there some sort of VirtualBox settings or Django settings that I need to set to ensure smooth operation of this? Am I overlooking something? Has anyone else had these problems?

    Read the article
