Search Results

Search found 5420 results on 217 pages for 'auxilliary storage'.

  • Does Juniper Networks provide keyloggers with their software?

    - by orokusaki
    I noticed that I had a "USB Mass Storage Device" plugged in when there wasn't in fact anything plugged into any USB port. I turned it off via Windows (XP), but it's quite concerning. This happened after installing Juniper Networks' software for VPN access to an IT guy's stuff. I also noticed there is a service called "dsNcService.exe" which apparently sends information over the internet, even when I'm not connected to the VPN, and the process restarts itself when I end it. Should I be worried that this software is tracking my keystrokes and broadcasting them to my IT guy?

  • Skipping hardlinks when using TSM Backup

    - by Lars Haugseth
    We need to back up a filesystem with lots of hardlinks. Since there are several hardlinks for each "true" file, we would like to skip all the hardlinks when backing up the filesystem, to avoid storing n exact copies of each file. The backup is done using Tivoli Storage Manager Backup, and we've been unable to get it to treat hardlinks as anything other than separate files to be backed up alongside each other. In case it's relevant for possible solutions, I'd like to note that it's possible to tell a hardlink from a proper file by the filename:

        foobarbaz-123.ext    # file
        foobarbaz-123-1.ext  # hardlink
        foobarbaz-123-2.ext  # hardlink
        barbazfoo-456.ext    # file
        barbazfoo-456-1.ext  # hardlink
        barbazfoo-456-2.ext  # hardlink
        barbazfoo-456-3.ext  # hardlink

    That is, all hardlinks have two hyphens in the filename, whereas proper files have just the one. The server is running Ubuntu Linux, and the files live on a GFS volume on our SAN.
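
    Since the hardlinks are identifiable by name, one option is an include-exclude rule in the TSM client options that skips the two-hyphen pattern. A minimal sketch, assuming the volume is mounted under /san/gfs (the path is an assumption) and that no proper file ever matches the pattern; TSM's "..." wildcard spans subdirectories:

        * dsm.sys include-exclude sketch -- the mount path is an assumption
        exclude /san/gfs/.../*-*-*.ext

    Running find /san/gfs -name '*-*-*.ext' first is a cheap way to verify exactly which files the pattern would skip.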

  • NFS-Root not working when booting over PXE

    - by Randy
    I am desperately trying to get a diskless client running over PXE boot, using an NFS share as the root file system. I did this some years ago, but for some reason I have been stuck on this for days. The TFTP server itself is running fine, and booting a net installer also works. The kernel and initrd are loaded too, but the boot process stops with this (screenshot) kernel panic. I'm using the standard Squeeze i386 kernel, and I have prepared the initrd with this config:

        MODULES=most
        BUSYBOX=y
        KEYMAP=n
        COMPRESS=gzip
        BOOT=nfs
        DEVICE=
        NFSROOT=auto

    I also tried MODULES=netboot with the same outcome. My PXE configuration looks like this:

        LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/vmlinuz-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw

    Furthermore, I have captured the client's network traffic with tcpdump and learned that the client isn't even trying to connect to the NFS share. Does anybody have an idea what is going wrong here?
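
    One detail stands out in the quoted configuration: initrd= points at the kernel image (vmlinuz-...) rather than at an initrd, which would explain a panic before the NFS mount is even attempted. A typical working entry looks roughly like this sketch (the initrd filename is an assumption):

        LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/initrd.img-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw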

  • How to change default permission for uploaded files in apache with mounted webroot?

    - by faridv
    I have an Ubuntu Server 11.10 box with Apache 2.2.20, PHP 5.3.6 and an installation of the Joomla CMS. I have used an extra hard disk as my web server storage and mounted it at /data/www/ (I hope that's not where my problem is!). I've set permissions of all files and folders in my web root to 755, and their user/group is set to [default ubuntu user (in my case radio)]:www-data. In recent days I have had serious problems with Joomla not showing newly uploaded images and other files, and I also can't install any extensions. After hours of searching I found out that uploaded files don't get appropriate permissions (they are -rw-------), so the Joomla application cannot read, copy or move them after upload. I'm wondering how I can set a default permission so that all uploaded files use it. PS: I've tested umask but it did nothing; I think it has nothing to do with my problem.
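
    For what it's worth, PHP writes uploaded temp files as 0600, which matches the -rw------- seen above, and move_uploaded_file can preserve that mode — which may be why umask appeared to do nothing. Still, relaxing the umask Apache starts with is the usual first step; a sketch, assuming Debian/Ubuntu's stock Apache layout:

        # /etc/apache2/envvars -- make files the web server creates group/world-readable
        umask 022

        # restart so the new umask takes effect
        sudo service apache2 restart

    If the 0600 files persist after that, the fix usually has to happen at the application layer (an explicit chmod after the upload is moved into place).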

  • Can I enlarge the OS C: drive of my Windows 8? [migrated]

    - by Sorgatz
    Last year I got a new Western Digital WD Blue 500GB HDD to replace my old drive. The first thing I did was install the latest Windows 8. While installing Windows 8 I created 3 partitions: the C: drive for the OS, and the others for storage. The OS partition is 120GB (which at the time I thought would be plenty big), but I'm now realizing it's too small! I wonder if it's possible to resize an HDD partition without reformatting and reinstalling my Windows 8. So that is my question: can I enlarge the OS C: drive of my Windows 8 without having to reformat? I've used Norton Partition Magic and Disk Management to try to make this happen, but there doesn't seem to be any option for it. Thanks for any help you guys can give regarding my question. I've worked hard to optimize my current install of Windows 8 and would hate to start all over again.
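
    The built-in tools can only extend C: into unallocated space that sits immediately after it, which normally means shrinking or moving the storage partition that follows it first. Once that space is freed, a minimal diskpart sketch (the volume number is hypothetical):

        diskpart
        rem list volumes and pick the one holding C: (number will differ)
        list volume
        select volume 1
        rem grow C: into the adjacent unallocated space
        extend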

  • How can I configure a Linksys EA4500 + USB printer for network printing (without Connect Cloud)?

    - by Larry Kyrala
    The documentation and classic firmware (2.0.37) for Cisco's Linksys EA4500 are a bit sparse on setup details. The docs say I can connect a USB printer, but then go on to try to sell the "Connect Cloud" remote management software. I don't want that; I just want to know how to set this up with the existing advanced firmware. Is it possible? AFAIK, to set up an IPP or LPD printer there is usually some kind of queue configuration on the server (i.e. the EA4500 in this case), but I can't find it in the firmware. I have also been unable to find the printer via any of the existing protocols from Windows 7 or Mac OS X (Windows network share, IPP/LPD, etc.). I'm curious whether I need to have the "Storage" accounts active and connect to my router via either the local IP or the router name. There are a lot of unknowns here; it would help to know how this particular router actually works.

  • Virtual Fibre Channel HBA in Solaris

    - by Phil
    We are trying to set up some virtual Fibre Channel HBAs in Solaris. This seems to be possible with NPIV. Creating NPIVs in a Solaris global zone works fine, but passing that NPIV to a zone didn't work at all. We tried to pass the NPIV as follows:

        # zonecfg -z zone1 'info'
        zonename: studentz1
        [...]
        device:
            match: /devices/pci@0,0/pci8086,25f9@6/pci8086,350c@0,3/pci1077,140@4/fp@1,0:devctl
            allow-partition not specified
            allow-raw-io not specified

    What we want to do is set up an environment for SAN exercises. We don't have a physical host per student, so we are trying to virtualise that in some way (Solaris zones or VMware). It should be possible to display the WWN of the virtual HBA and mount the storage presented by the disk subsystem. Any ideas on how to pass the NPIV to a Solaris zone, or how to virtualise this with VMware?
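
    For reference, the global-zone half of this is normally done with fcadm; a sketch, where all the WWNs are placeholders (the real ones come from the physical port listing):

        # list physical HBA ports and their WWNs
        fcadm hba-port
        # create a virtual NPIV port on a physical port (placeholder WWNs)
        fcadm create-npiv-port -p 2100000000000001 -n 2000000000000001 210000e08b05f3c4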

  • Windows 2008 DHCP service fails - "...failed to see a directory server for authorization."

    - by ewwhite
    I have a small environment running Windows 2008 R2 where the DHCP service on the domain controller fails every two weeks. The most visible error is Event ID 1059, and the Event Viewer message is: "The DHCP service failed to see a directory server for authorization." The setup features two domain controllers and the usual services and roles (file, print, Exchange). Restarting the service fails for a variety of reasons; I've had the following messages at different times:

        "Not enough storage is available to complete this operation."
        "Unable to determine the DHCP Server version for the Server 192.168.x.x"
        "The DHCP service has detected that it is running on a DC and has no credentials configured for use with Dynamic DNS registrations initiated by the DHCP service."

    A reboot of the domain controller resolves the issue for ~2 weeks. The systems are virtualized and there are no network connectivity issues. Any ideas what's happening here?
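
    The third message points at a documented requirement for DHCP running on a domain controller: dedicated credentials for dynamic DNS registrations. A sketch, with the account, domain, and password as placeholders:

        rem give the DHCP service its own account for dynamic DNS updates
        netsh dhcp server set dnscredentials dhcp-dns-svc example.local P@ssw0rd
        rem restart the service so the change takes effect
        net stop dhcpserver
        net start dhcpserver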

  • Advanced file compression software for Mac OSX

    - by Steven Roose
    Back when I used Windows, I always used WinRAR for file compression and decompression. It had a fair number of options, like 'just storage' vs 'hard compression', password protection, and archive type. Now that I use Mac OSX, the only compression option I have is the default Finder Compress to Zip. I downloaded the most popular decompression app, "The Unarchiver", but it can't compress to other archive types either. I went searching, but there seem to be hardly any good advanced compression tools that work nicely on OSX and have the options WinRAR has. (WinRAR works on OSX, but command line only; I'm looking for something with a GUI.) Any ideas? I strongly prefer freeware. I found Archiver and StuffIt, but they are both commercial.
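
    As a stopgap while hunting for a GUI, the zip tool bundled with OS X already covers a couple of the WinRAR-style options from the Terminal; a quick sketch:

        # recurse into a folder and prompt for a password (-e encrypts)
        zip -er archive.zip some-folder/
        # trade speed for maximum compression
        zip -9r archive.zip some-folder/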

  • How do I know if my SSD Drive supports TRIM?

    - by Omar Shahine
    Windows 7 has support for the TRIM command, which should help ensure that the performance of an SSD drive remains good throughout its life. How can you tell if a given SSD drive supports TRIM? See here for a description of TRIM. Also, the following is from a Microsoft presentation:

        Microsoft implementation of “Trim” feature is supported in Windows 7
          - NTFS will send down delete notification to the device supporting “trim”
          - File system operations: Format, Delete, Truncate, Compression
          - OS internal processes: e.g., Snapshot, Volume Manager
        Three optimization opportunities for the device
          - Enhancing device wear leveling by eliminating merge operation for all deleted data blocks
          - Making early garbage collection possible for fast write
          - Keeping device’s unused storage area as much as possible; more room for device wear leveling
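
    As a practical check on the Windows side, fsutil reports whether the OS will issue TRIM at all. Note this reflects Windows' delete-notification setting, not the drive's own capability; for the latter, the vendor's SSD toolbox or a tool such as CrystalDiskInfo is the usual route:

        rem 0 = delete notifications (TRIM) enabled, 1 = disabled
        fsutil behavior query DisableDeleteNotify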

  • Slight network lag on Small Business Server 2008 R2

    - by Sir.Nathan Stassen
    I recently upgraded a network from an SBS 2003 server to an SBS 2008 R2 server. Both I and the users have noticed a slight delay in network applications and when browsing network drives on the new server. It is minor, maybe a second or two at most, but I am wondering if anyone knows of any way to optimize the networking so requests are served to the workstations sooner. The network runs at 1 Gbit with some 100 Mbit devices (mainly network printers). All workstations are XP SP3. The network software runs out of a shared folder mapped as a network drive; no SQL databases. The server is a Dell PowerEdge T610 with plenty of RAM, CPU power, and storage.
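
    One commonly suggested tweak for sluggish SMB browsing between Server 2008-era machines and XP clients is TCP receive-window auto-tuning; it is cheap to test and easy to revert, so treat this as a sketch to try rather than a guaranteed fix:

        rem show the current global TCP settings
        netsh interface tcp show global
        rem try disabling auto-tuning, then retest drive browsing from a workstation
        netsh interface tcp set global autotuninglevel=disabled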

  • WD My Passport detected but not assigned a drive letter

    - by abel
    I have a WD My Passport external drive which used to work fine on my Win7 x64 box until recently. For the past few days, Windows would not detect the external drive. However, Storage under Disk Management lists the drive (which means Windows can see it), but without a drive letter, and I am not able to assign one (the option is grayed out). I installed EASEUS Partition Manager, which allows me to assign a drive letter, but it fails when I apply the change. What do I do to assign a drive letter to the drive, or to make Windows detect it as before? The drive works well on the 32-bit Win7 (dual boot, same system) and on Win7 x64 on another box. Update (can't comment from Opera Mini 6): @zeke the drive works absolutely fine on other Win7 boxes and on the dual-boot Win7 on the same system.
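
    When the option is grayed out in Disk Management, assigning the letter from an elevated diskpart prompt sometimes still works; a sketch (the volume number is hypothetical):

        diskpart
        rem find the Passport volume in the list (number will differ)
        list volume
        select volume 3
        assign letter=E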

  • Backup to disk, encrypted, without any installed local software

    - by user30064
    Hi, OK, this is a tough one, and it might not even be possible, but no harm in asking I guess. I have a Buffalo TeraStation file server that I use for network-attached storage. After a couple of phone calls to customer services, I realised that there is no way to back up to disk encrypted. In effect, I would be carrying unencrypted company data off-site daily, which is obviously unacceptable. I had a go at TrueCrypt, EncFS, and a few others, and as far as I could see all of them require that you install some software on the machine that is to use the file system, which makes sense. Unfortunately the firmware on the TeraStation is closed and I cannot install any software (and I can't build from source either, since Buffalo didn't include a compiler). Is there any way to copy files to the disk such that, as soon as they are written, they are transparently encrypted, without having to install additional software? I'm not sure it matters too much, but the TeraStation firmware is Linux-based, although, as I mentioned, closed. Many thanks, Andreas

  • Do you have any additions or alterations to this list of popular audio formats?

    - by roja
    All, I am trying to compile a list of common audio file formats used in both personal storage and peer transmission. I have compiled the following list; do you think there are any significant formats missing? Are any of them not actually common formats? Any advice/alterations are highly useful.

        advanced audio coding, apple lossless audio file, atrac3 audio file, atrac audio file, audio interchange file format, core audio file, free lossless audio codec file, mpeg 1 audio layer 3, mpeg 2 audio, mpeg 4 audio book file, musical instrument digital interface, ogg vorbis compressed audio file, open media framework file, real audio, real audio media, waveform audio file format, windows media audio

    Kind regards, Roja

  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to boot and host its rootfs on an SSD. We are currently looking at using Intel X18-M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point in trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?
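
    Whatever the partition split ends up being, a couple of mount options are routinely tuned for flash. A sketch of the relevant /etc/fstab lines, with device names and layout as assumptions (discard enables online TRIM and needs support in both the kernel and the drive):

        # /etc/fstab sketch for an SSD-hosted root
        /dev/sda1  /     ext4  noatime,discard  0  1
        /dev/sda2  /var  ext4  noatime,discard  0  2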

  • EC2 out of space on root disk, moving it to ephemeral

    - by Joseph Misiti
    I am spawning a few test servers on EC2 that happen to be m1.larges; I am using these test servers for load-balancing testing. Most of the servers I have used before have been backed by EBS, but these instances (Ubuntu 11.04) come with a lot of ephemeral space mounted at /mnt. What I have noticed is that I am running out of space on the root disk. I am trying out this tutorial, http://www.turnkeylinux.org/docs/using-instance-storage, moving my /home and /usr directories to /mnt and then remounting them. This works, except it does not survive a reboot. Am I missing something here, or is this tutorial not completely correct? How do I make space on my / drive in a way that survives reboots?
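
    Hand-made mounts vanish at reboot unless they are recorded in /etc/fstab; bind mounts are the usual way to persist this relocation. A sketch, assuming the directories have already been copied to /mnt (and keep in mind that ephemeral storage survives a reboot but is wiped by a stop/start):

        # /etc/fstab -- bind the relocated directories back into place at boot
        /mnt/home  /home  none  bind  0  0
        /mnt/usr   /usr   none  bind  0  0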

  • NFS or GFS for LVS 10 Server Setup

    - by Michael Robinson
    Currently we have a 10-server LVS hosting setup. The people we hired to set it up did not know anything about GFS, which was our preferred central-storage file system solution. As we had a tight time constraint, we just told them to use whatever they were familiar with, which is NFS. I have since done some research, and it seems that NFS is not ideal for the type of high-traffic site we are hoping to build, but I couldn't find much online about the significant differences between the two. As we are setting up all the servers again right now, should we stick with NFS, or find someone who knows how to set up GFS and go with that? We need a setup that is highly reliable and scalable, as we expect big increases in traffic and load once the initial setup is done.

  • Which way should we choose to shorten backup time?

    - by facebook-100005613813158
    A company performs a full backup of its data on a daily basis for disaster recovery purposes. However, their backup process cannot be completed within the assigned backup time window. What would you recommend to this company about how to restructure its backup environment in order to minimize the backup time? We have 4 candidates:

        1. Perform LAN-based backup
        2. Weekly full backup and daily incremental
        3. Weekly full backup and daily cumulative
        4. Add more ISLs to increase bandwidth

    Comparing incremental with cumulative backup, the incremental backup time is surely shorter than the cumulative backup time. But I don't know whether adding more ISLs is even allowed in an existing storage system, or whether that operation can really shorten the backup time.

  • How do I USB-tether my Cyanogen-modded G1's internet connection to my Toshiba Tecra 8000 running Xubuntu?

    - by atticus
    I have USB tethering enabled on my phone, and it works fine with Vista. When I plug my phone into my Tecra 8000 laptop running Xubuntu, dmesg shows: "usb 1-1: new full speed USB device using uhci_hcd and address 8". I see that the OS has detected it as a storage device, but I can't get it to function correctly as a network device. /dev/us* shows no usb0, but it does show:

        /dev/usbdev1.1_ep00
        /dev/usbdev1.1_ep81
        /dev/usbdev1.8_ep00
        ...
        /dev/usbdev1.8_ep83
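
    Android USB tethering usually presents itself as an RNDIS network device, and usb0 only appears once the matching usbnet driver is loaded. Whether these modules exist depends on the kernel build, so treat this as a sketch to try:

        # load the usual tethering drivers, then check for a new interface
        sudo modprobe rndis_host
        sudo modprobe cdc_ether
        dmesg | tail
        # if usb0 showed up, request an address from the phone
        sudo dhclient usb0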

  • Can unexpected power loss harm a Linux install?

    - by Johan Elmander
    I am developing an application on Linux embedded boards (running Debian), e.g. Raspberry Pi, BeagleBoard/Bone, or Olimex. The boards operate in an environment where the electricity is cut unexpectedly (it is far too complicated to install a PSU, etc.), and this happens a couple of times every day. I wonder whether the unexpected power cuts could cause crashes or corruption in the Linux operating system. If this is something I should worry about, what would you suggest to protect the OS against the unexpected power cuts? PS: The application needs to write some data to the storage medium (SD card), so I think it would not be suitable to mount it read-only.
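
    A pattern that holds up well in this situation is to keep the OS partitions read-only and confine all writes to a small journaled data partition, so a power cut can at worst lose the last seconds of application data. A sketch of the relevant /etc/fstab lines (device names and the partition split are assumptions):

        # root stays read-only, so power loss cannot corrupt the OS
        /dev/mmcblk0p2  /      ext4  ro,noatime                     0  1
        # application data on a fully journaled partition, synced every second
        /dev/mmcblk0p3  /data  ext4  noatime,data=journal,commit=1  0  2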

  • Please demystify the 10Gb Ethernet interfaces and cables

    - by maruti
    This really is a Dell question, but I'm tempted to ask the experts at Server Fault. I have chosen a Dell PowerConnect 8024 10GbE switch. Per the spec sheet this has 10GBASE-T ports: "24x 10GBASE-T (10Gb/1Gb/100Mb) with 4x Combo Ports of SFP+ (10Gb/1Gb) or 10GBASE-T". The HBA on my storage server has 10G CX4 copper ports. Dell does not sell any cables, and this adds to my confusion. From the picture, the Dell 8024 seems to have RJ-45-type ports on the front panel. My question: do I need an RJ-45-to-CX4 cable, or a CX4-to-CX4 cable?

  • Rough estimate for the speed advantage of SAN-via-Fibre-Channel over SAN-via-iSCSI when using VMware vSphere

    - by Dirk Paessler
    We are in the process of setting up two virtualization servers (Dell R710, dual quad-core Xeon CPUs at 2.3 GHz, 48 GB RAM) for VMware vSphere, with storage on a SAN (Dell PowerVault MD3000i, 10x 500 GB SAS drives, RAID 5) which will be attached via iSCSI through a Gbit Ethernet switch (Dell PowerConnect 5424; they call it "iSCSI-optimized"). Can anyone give an estimate of how much faster a Fibre Channel based solution would be (or rather, "feel")? I don't mean the nominal speed advantage; I mean how much faster virtual machines will effectively work. Are we talking twice the speed, five times, ten times faster? Does it justify the price? PS: We are not talking about heavily used database servers or Exchange servers. Most of the virtualized servers run below 3-5% average CPU load.

  • Can SATA be used to connect computers?

    - by André
    Can SATA be used to connect two computers together, just like a crossover Ethernet cable would? I know SATA has no "networking" features, and even though a controller may have multiple ports the drives don't "see" each other; in SATA, one device acts as the host (the computer) and the other device is a kind of "client" (the storage drive). But still, has anyone attempted to write a kernel module that would make one computer appear as a "client" (so that the host's SATA controller detects it as a standard hard drive) and then set up a pseudo-Ethernet link, or a very high-speed serial link (and then run pppd on it and do networking)? Note: I know this is an unprofessional and totally stupid idea; I'm just asking out of curiosity.

  • Allow image upload - most efficient way?

    - by K-P
    Hey everyone, on my site I currently only allow users to import images from other sites rather than uploading them themselves. The main reason for this is that I don't have much storage space on my host (relatively speaking), and the host charges quite a bit for additional space. What are the alternatives for hosting images that users upload (max 1 MB in size)? Would it be a good idea to purchase separate cheap hosting with "unlimited space" (I know that's not true, but I'm guessing it's more than 1 GB)? Or are there caveats with this approach (e.g. security, since the site should not be browsable but accessed via another server)? Are there alternative ideas that I could employ? Thanks for any suggestions

  • Alternative to Windows Home Server (WHS) backups

    - by Adam Tegen
    Since Microsoft announced the end of life for WHS, are there any alternatives? Specifically, I am interested in recovering from a catastrophic disk failure the way WHS does it. For example, this is my ideal scenario when a desktop hard drive fails (has a bad virus, etc.):

        1. Install a disk of the same size or greater
        2. Boot the desktop with the recovery disc
        3. Point the recovery application at the WHS
        4. Pick the machine, the drive(s), and the date of the backup
        5. Have a couple of beers
        6. Reboot to a working machine as if nothing happened

    I would need to slap multiple disks in the machine without RAID; it sounds like LVM will work here (see the sketch below). It would be nice, but not required, to have de-duplication of files when multiple machines are backed up (Single Instance Storage).
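
    On the "multiple disks without RAID" point, LVM does handle pooling whole disks into one growable volume; a sketch with hypothetical device names:

        # pool three whole disks into a single volume group
        pvcreate /dev/sdb /dev/sdc /dev/sdd
        vgcreate backup_pool /dev/sdb /dev/sdc /dev/sdd
        # one big logical volume spanning all free space
        lvcreate -l 100%FREE -n backups backup_pool
        mkfs.ext4 /dev/backup_pool/backups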
