Search Results

Search found 5420 results on 217 pages for 'auxilliary storage'.


  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful:

    http://www.straitmac.com/jforum/posts/list/600.page
    http://discussions.apple.com/thread.jspa?threadID=725692

    Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after a reboot, the "MaxBack Engine" process is back. I did notice, though, that it only appears when logging in with my usual user account; with another account it isn't launched. So, dear Mac gurus, what can I do about this pest? I guess I could fall back to some Unix hackery and write a cron job that kills any process with that name, but obviously it'd be nicer to clean everything left behind by Maxtor's piece of software off my computer.
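
    Since the process follows one particular user account, a per-user login or launchd item is the likely culprit. Below is a minimal shell sketch for hunting it down; the .plist name in the commented lines is purely an assumption for illustration.

      # List launchd jobs mentioning MaxBack for the current user (if any)
      launchctl list | grep -i maxback

      # Search the usual per-user and system startup locations for leftovers
      find ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons \
           /Library/StartupItems -iname '*max*' 2>/dev/null

      # If a matching .plist turns up, unload it and delete it, e.g.:
      #   launchctl unload ~/Library/LaunchAgents/com.maxtor.maxback.plist
      #   rm ~/Library/LaunchAgents/com.maxtor.maxback.plist

    Also worth checking: System Preferences > Accounts > Login Items for the affected account, which would explain why only that account launches it.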


  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows:

    - A high-horsepower front-end + DB server that also does encoding for files that need encoding.
    - A fresh-file server, which stores newly uploaded content that's probably (and usually) rapidly accessed. It has 500GB of RAIDed SSD storage and can push over 3Gbit of traffic.
    - 3 cheap node servers, each containing 2 x 750GB SATA drives in RAID 1, to which files older than 2 weeks are archived from the SSD server mentioned above.

    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc). Here is where I have the problem. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script with a different pass phrase. That all works fine and dandy... except that the cheap node servers are all I/O bound, so accessing the files on them via a different, unsaturated network makes no difference, since the drives cannot be read fast enough. What would be a good way to make files on the site accessible via 2 different network routes, one of which (the "free" network) will be saturated, while the "premium" network stays unsaturated?


  • Mysqld increases the CPU load, which drops after flush-tables

    - by mirage
    Please advise me on the following issue. Normal CPU load is 20-30% (us + sy). After restoring the database files from the slave server (same version), a periodic problem appeared: mysqld starts to load the CPU at 100% (us + sy grow proportionally), the queue grows, and everything slows down. Running mysqladmin flush-tables brings things back to normal for a few hours. This is a dedicated Linux server running MySQL: 2 x E5506, 24GB RAM, database size 50GB. It handles 4000-5500 rps.

      [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics --------------------------------------
      [-] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
      [-] Data in MyISAM tables: 33G (Tables: 1474)
      [-] Data in InnoDB tables: 1G (Tables: 4)
      [-] Data in MEMORY tables: 120K (Tables: 3)
      [-] Reads / Writes: 91% / 9%
      [-] Total buffers: 12.8M per thread and 7.1G global
      [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    Relevant my.cnf settings:

      key_buffer = 1536M
      max_allowed_packet = 2M
      table_cache = 4096
      sort_buffer_size = 409584
      read_buffer_size = 128K
      read_rnd_buffer_size = 8M
      myisam_sort_buffer_size = 64M
      thread_cache_size = 500
      query_cache_size = 100M
      thread_concurrency = 24
      max_connections = 700
      tmp_table_size = 4096M
      join_buffer_size = 4M
      max_heap_table_size = 4096M
      query_cache_limit = 1M
      low_priority_updates = 1
      concurrent_insert = 2
      wait_timeout = 30
      server-id = 1
      log_bin = /var/log/mysql/mysql-bin.log
      expire_logs_days = 10
      max_binlog_size = 100M
      innodb_buffer_pool_size = 1536M
      innodb_log_buffer_size = 4M
      innodb_flush_log_at_trx_commit = 2

    How can I solve this problem?
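
    Since flush-tables reliably restores performance for a few hours, a scheduled flush can serve as a stopgap while the root cause is investigated. A minimal sketch as an /etc/cron.d entry; it assumes credentials are available via /root/.my.cnf, and the two-hour interval is an arbitrary starting point:

      # /etc/cron.d/mysql-flush-tables (stopgap, not a fix)
      # Flush the table cache every 2 hours; reads credentials from /root/.my.cnf
      0 */2 * * * root /usr/bin/mysqladmin flush-tables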


  • Hosting a server for websites, ftp and random use at home?

    - by Zolomon
    I'm wondering what the best option is if I want to move all my websites from a hosting company to a server at my own home. Basically, my needs are:

    - hosting websites using PHP/ASP.NET (I haven't really decided yet; both would be preferred!)
    - FTP, so I can create accounts for my family members to access the server for file handling
    - SSH
    - SSL for secure connections (this is something you have to buy/apply for per domain; I'm not sure if there are any server-side settings that have to be made)
    - streaming video
    - remote desktop
    - hosting home-brew applications that can run as services
    - MySQL/SQLite/SQL for relational database storage

    What should I think about before I buy a server? What hardware will I need, and what will limit my server? I basically want to learn networking better, as I'm a software and web developer but haven't had the resources to acquire any serious toys until now. At the time of writing, most of my websites get about 60 visits/day, so I don't expect them to be very demanding. Is there something I haven't thought of that I should have? What OS would you suggest I run? FreeBSD vs Windows Server vs ...?


  • SBD killing both cluster nodes even when there are only small SAN network problems

    - by Wieslaw Herr
    I am having problems with stonith SBD in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2. They are configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1MB disks available to the hosts via a multipath fibre-channel network. The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable, because a) there were paths left, and b) even if the switch were out for only 10-20 seconds, a reboot cycle of both nodes takes 5-10 minutes and all NFS locks would be lost. I tried increasing the SBD timeout values (to 10sec+ values; dump attached at the end), however a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to. Here is what I would like to know:

    a) Is SBD working as it should, killing nodes while 2 paths are still available?
    b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)?
    c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
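
    For reference, the on-disk SBD timeouts can be inspected and rewritten from the shell. A sketch, with the device path as a placeholder; note that re-creating the header wipes the messaging slots, so only do it with the cluster stopped:

      # Show the current watchdog/msgwait timeouts stored in the SBD header
      sbd -d /dev/mapper/sbd_disk1 dump

      # Rewrite the header with a larger watchdog (-1) and msgwait (-4) timeout.
      # msgwait should comfortably exceed your multipath failover time, and the
      # usual convention is msgwait >= 2 * watchdog.
      sbd -d /dev/mapper/sbd_disk1 -1 30 -4 60 create

    If you raise msgwait, remember to raise the cluster's stonith-timeout above it as well, or fencing operations will be declared failed prematurely.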


  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon EC2 c1.xlarge instances, over the Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server's spec is:

    - 7 GB of memory
    - 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    - 1690 GB of instance storage
    - 64-bit platform
    - I/O Performance: High
    - API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Since this upgrade, the upgraded server has shown a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

    # proper server: w
      00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
      USER TTY   FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
           pts/1 82.80.137.29  00:28   14:05  0.01s  0.01s -bash
           pts/2 82.80.137.29  00:38   0.00s  0.02s  0.00s w

    # proper server: iostat
      Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
      avg-cpu: %user %nice %system %iowait %steal %idle
                9.03  0.02    4.26    0.17   0.13 86.39
      Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      xvdap1   1.63        1.50       55.00    367236  13444008
      xvdfp1   4.41       45.93       70.48  11227226  17228552
      xvdfp2   2.61        2.01       59.81    491890  14620104
      xvdfp3   8.16       14.47       94.23   3536522  23034376
      xvdfp4   0.98        0.79       45.86    192818  11209784

    # problematic server: w
      00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
      USER TTY   FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
           pts/0 82.80.137.29  00:28   15:04  0.02s  0.02s -bash
           pts/1 82.80.137.29  00:38   0.00s  0.05s  0.00s w

    # problematic server: iostat
      Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
      avg-cpu: %user %nice %system %iowait %steal %idle
                7.97  0.04    3.43    0.19   0.07 88.30
      Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      xvdap1   2.10        1.49       76.54    374660  19253592
      xvdfp1   5.64       40.98       85.92  10308946  21612112
      xvdfp2   3.97        4.32       93.18   1087090  23439488
      xvdfp3  10.87       30.30      115.14   7622474  28961720
      xvdfp4   1.12        0.28       65.54     71034  16487112
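
    One thing top's CPU view misses: on Linux, the load average also counts tasks in uninterruptible sleep (state D, usually waiting on I/O or a kernel path), so a process-state comparison between the two servers can expose load that consumes no CPU. A small sketch using standard procps tools; nothing here is specific to these servers:

      # Count processes by state; a surplus of 'D' on the problematic server
      # would explain a higher load average with similar CPU and iostat numbers
      ps -eo state= | sort | uniq -c

      # Show which processes are in D state and what kernel function they wait in
      ps -eo state,pid,wchan:30,cmd | awk '$1 == "D"'

      # Watch runnable (r) and blocked (b) task counts over time
      vmstat 5 5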


  • White Screen, No Errors.

    - by GruffTech
    So... an interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step.

    Server and application environment:

      CentOS release 5.3 (Final)
      Apache 2.2.3-22
        EnableSendfile off
        EnableMMAP off
        ErrorLog logs/error_log
        LogLevel debug
      PHP-5.2.6-2
        error_reporting = E_ALL
        display_errors = on
        log_errors = on
        max_execution_time=300
        max_input_time=60
        memory_limit=512mb
      Kohana 2.3 (PHP environment)
      HAProxy 1.3.15.6-2
      MemCacheD 1.2.6-1

    Our application is split between 3 web servers mounting an NFS storage server, with sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, the application just shows a pure white page. Not a 404 error or a 500 server error, but a clean white page. And it returns instantly, so it's not an execution-time error. There's nothing in the error log or server-error log, the proxy log shows a standard proxied connection, and the access log shows just a standard 200 status with 256 bytes transferred. To me, this says the application itself is having a problem: a rare, unexplainable, seemingly random problem that causes what we've now called the "White Screen of Death." Our developers all say that since there is nothing going to our error logs, it must be a server problem. But I say the same thing: there's nothing going to ANY of our logs (relevant to this, anyway), and we're not having httpd children crash from what I can tell. Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or, if it is somehow a bug, identify it?
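
    To catch one of these in the act, an external watchdog can poll the app and snapshot the full response whenever a (near-)empty 200 comes back, which at least proves whether PHP is returning an empty body or something upstream is eating the output. A rough sketch; the URL, interval, and size threshold are all placeholders:

      # Log timestamp + headers whenever the body comes back suspiciously small
      while sleep 5; do
        body=$(curl -s -D /tmp/wsod.headers http://localhost/some/page)
        if [ "${#body}" -lt 32 ]; then
          { date; cat /tmp/wsod.headers; echo; } >> /var/log/wsod.log
        fi
      done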


  • Why is it bad to map network drives in Windows?

    - by Beeblebrox
    There has been some spirited discussion within our IT department about mapping network drives. In particular, it has been said that mapping network drives is A Bad Thing, and that adding DFS paths or network shares to your (Windows Explorer/Libraries) Favourites is a far better solution. Why is this the case? Personally I find the convenience of z:\folder better than \\server\path\folder, particularly on the cmd line and in scripting (of course I'm not talking about hard-coded links, naturally!). I have tried searching for the pros and cons of mapped network drives, but I haven't seen anything other than "should the network go down, the drive will be unavailable". But this is a limitation of any network-accessed storage... I have also been told that mapped network drives poll the network when the network resource is unavailable; however, I haven't found more information on this. Wouldn't this still be an issue with other network access mechanisms (that is, mapped Favourites) whenever Windows tries to enumerate the file system (for example, when a file/folder picker dialog is opened)? In short: do network drives poll the network any more than a Windows Explorer library/favourite?


  • VMware setup with 2 servers and a DAS (Dell MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory each) and a DAS (Dell MD3220 with 24 x 900GB disks). Half of the virtual machines will run MS SQL databases (application, SharePoint, BI); the other half will be file services and IIS. To extend the storage capacity, we'll be adding an MD1220 enclosure with another 24 x 900GB disks to the MD3220. Both DAS units will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (the peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS units with RAID 10 only and the other with RAID 5; that will allow us to put each application on whichever DAS best supports its performance needs. The question is how best to partition the two DAS units to get the best possible IOPS/MBps, given that each DAS will have to have 2 hot spares.

    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to the one disk group, or to have 2 disk groups of 11 disks each, one assigned to each of the two controllers?

    The same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares, and 20 disks for RAID 10.
    Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other.
    Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group.

    I would assume that there is no right or wrong, and that it all depends very much on the specific application behaviour, so I am looking for some general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
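
    For a first sanity check of the layouts against the measured 1000/7000 IOPS figures, a back-of-the-envelope estimate is useful. The sketch below assumes roughly 140 IOPS per 10k spindle, the classic write penalties (RAID 5 = 4, RAID 10 = 2), and a 70% read workload; all three numbers are assumptions to be replaced with your own measurements:

      #!/bin/bash
      DISK_IOPS=140   # assumed per-spindle IOPS (10k SAS)
      READ_PCT=70     # assumed read share of the workload

      estimate() {    # estimate <disks> <write_penalty> <label>
        local raw=$(( $1 * DISK_IOPS ))
        local eff=$(( raw * 100 / (READ_PCT + (100 - READ_PCT) * $2) ))
        echo "$3: raw ${raw} IOPS, ~${eff} effective front-end IOPS"
      }

      estimate 22 4 "RAID 5, 22 data disks"
      estimate 20 2 "RAID 10, 20 disks"

    On those assumptions, both layouts cover the 1000 IOPS average comfortably, but neither reaches the 7000 IOPS peaks from spindles alone, so controller cache behaviour during the bursts will matter more than the grouping.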


  • Configuring ZenOSS to monitor CPU load

    - by Tom
    I'm trying to test out monitoring tools for a network at work with a coworker, but neither of us has ever used any sort of monitoring tool before. Currently we are experimenting with ZenOSS and having some difficulties. We want to populate our CPU load graphs, because that is one of the primary features we are looking for in a monitoring tool, but we have been unable to populate the graphs with data. So far we have installed the wmiperformance, sqldatasource, wmidatasource, and snmpperformance (simple) ZenPacks, and the machine we are trying to monitor is running Windows XP. We have tried to model the device and everything seems to run, and we've tried to add data points to graphs, but the only options we receive for graphs are CPU and Memory. We are able to monitor services; ZenOSS recognizes the make and model of the processor, RAM, and hard drive, and is even giving us metrics on available storage. But again, we are looking for performance metrics such as CPU load and memory utilization. I realize I probably didn't provide a lot of information, but that is because we don't have a very good idea of what we are doing and can't find instructions, either on the ZenOSS homepage or in the forums, for monitoring CPU load. If someone could give us step-by-step instructions on how to set up CPU load monitoring, that would probably be more beneficial to us than a diagnostic of our current setup. But regardless, if I left any important information out and you need it to answer the question, please let me know. Thank you.
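
    If you end up on the SNMP path (the snmpperformance ZenPack), it's worth confirming from the ZenOSS host that the XP machine's SNMP service actually answers for the CPU OIDs before debugging ZenOSS itself. A quick sketch; the IP address and community string are examples:

      # hrProcessorLoad: per-core CPU load average (HOST-RESOURCES-MIB)
      snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.2.1.25.3.3.1.2

      # hrStorage: memory and disk utilisation from the same MIB
      snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.2.1.25.2.3

    If these walks return nothing, the problem is the XP SNMP service or its community/ACL settings rather than the ZenOSS template.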


  • Free software for backing up an attached network drive

    - by Richard
    My wireless router has a USB connector which allows me to plug in an external hard drive, which then acts as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer, so that if the NAS drive fails I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up. I can't fool it by mapping the NAS to a drive letter, either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question, but the top answers have one or more of the following issues:

    - They aren't free.
    - The free version cannot back up a NAS.
    - They cannot do incremental backups.
    - They're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.).


  • Cloning a NAS drive which hosts a SQL Server DB

    - by Adrian Hand
    We have a system in the field running a server application which is suffering major performance issues. The system in question has 2 onboard 300GB SAS drives in RAID 5, from which it boots Windows Server 2003, and a 6TB Buffalo TeraStation NAS unit (also RAID 5) to which the server app does all of its reading and writing. I believe the TeraStation is the source of all our woes. While under load, reads and writes tick by at something of the order of 1MB/sec, though the network in question is hardly utilised. The TeraStation contains various data, but crucially it hosts a full instance worth of SQL Server .mdf and .ldf files (master etc., the whole shooting match). I wish to stop all the services on the server, then take everything on the TeraStation and essentially clone it to some alternative onboard storage, so as to eliminate the TeraStation from the equation as far as poor performance is concerned. That is, the TeraStation is currently drive D:; I want to copy everything off and then have the duplicate assume the drive letter, so that as far as the software is aware nothing is different. This is tricky because of the .mdf and .ldf files; everything else will work with a straight-up file copy. Can anyone suggest a means to achieve what I am describing? Many thanks!


  • EMC VNX iSCSI setup - unsure about SP/port assignment

    - by pauska
    We have a new VNX5300 waiting to be configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4 x 1Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most out of the performance until we jump over to 10Gbit iSCSI. From what I can read in the docs, the recommendation is to use only two ports per SP, with 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend not using more than two of them. Also, I'm a bit unsure about the zoning. The best practices guide states that you should separate each port on each SP from the others on different logical networks. Does this mean that I have to create 4 logical networks to be able to use all 8 ports? It also gives an example (figure not reproduced here). Does this mean that A0 and B0 should sit on the same physical switch as well? Won't this make all traffic go through one switch (if both A1 and B1 are passive)?

    Edit: Another brain-puzzle I don't get: each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If serverA has 1Gbit and serverB has 100Mbit, then the resulting bandwidth between them is 100Mbit. How can this result in some kind of oversubscription?

    Edit 4: Wait, what? Active and passive ports? The VNX runs in an ALUA configuration with asymmetrical active/active; there shouldn't be any passive ports, only preferred ones.


  • What is the recommended GlusterFS configuration for a growing website?

    - by montana
    Hello, I have a website that is tracking towards 50 million hits per day average, and within the next 3 months should be over 100 million hits per day. We are trying to use GlusterFS v3.0.0 (with the latest patches as of 2010-01-17). Currently, we've just upgraded to a load-balanced environment that has 3 physical hosts with 6 XenServer 5.5u1 VMs (2 on each host) to serve webpage traffic. Each machine has 6 local storage drives (7200RPM SATA) in RAID 6. The old machine we came from had 1 mirrored 10k SAS drive. We also set up GlusterFS with 3 bricks, one on each host, serving the 6 VMs as clients. In testing, everything seemed fine. However, when we went to production, it seemed that there just weren't enough I/Os available to serve traffic even upwards of 15 million hits. Weeks prior, our old server was able to handle traffic, maxed out, at 20 million. Are there any recommended configurations for such an application, or things to be aware of that aren't apparent in the documentation at gluster.org, for a site our size?
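
    As a general observation, GlusterFS turns every small web read into network round-trips, so the performance translators (read-ahead, io-cache, io-threads) matter a great deal for many-small-file web workloads. If an upgrade is on the table, GlusterFS 3.1+ exposes these as one-line tunables; a sketch with a placeholder volume name and example values (these CLI commands do not exist in 3.0, where the same settings live in the volfiles):

      # GlusterFS 3.1+ CLI; 'webvol' and the values are examples
      gluster volume set webvol performance.cache-size 512MB
      gluster volume set webvol performance.io-thread-count 32
      gluster volume info webvol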


  • Calendar booking issue - Exchange 2003 and 2010

    - by NaOH
    In our organization we are running Exchange 2003 and 2010 simultaneously, with the hope of migrating everyone to Exchange 2010 sometime within the next few months. Everyone is using Outlook 2010. Recently, we had an issue with transaction log storage on the Exchange 2003 server. This was resolved, but for some reason no meeting room on the Exchange 2003 server will automatically book meetings any longer. I have played around with this for a while, changing calendar permissions, turning resource scheduling off and back on, etc. No dice. My next step was to try migrating a resource to the Exchange 2010 server. After doing so, setting it up as a Room, enabling Auto-Accept, and removing the EnableDirectBooking registry entry on my PC, I can book a meeting with this room. If EnableDirectBooking is enabled, I get an error message stating:

      "Meeting Room" declined your meeting because it is recurring. You must book each meeting separately with this resource.

    This is despite the fact that the meeting I'm attempting to create has no recurrence. Now, I have also created a new test Room from scratch on the Exchange 2010 server, and I can book a meeting with this Room regardless of whether or not I have the EnableDirectBooking registry entry in place. All users here have this registry entry, and I'd rather not have to figure out how to push something out to remove it from every PC. Rather, I'd like to figure out what's different between the configurations of these two meeting rooms, so that I can book a meeting room regardless of whether EnableDirectBooking is enabled or not. Any ideas, anyone? Thanks!


  • Thecus N5200, disk has dropped out of RAID5

    - by Anders Ekdahl
    We have a Thecus N5200 NAS here at work with five WD Caviar Black 2TB disks in a RAID 5 array. Yesterday, disk 4 dropped out of the array, and in the NAS web interface there's a warning about the RAID array being "degraded". When I go into Storage > Disks, disks 1 and 4 have a warning next to them. When I click on the warnings, this information about the disks is displayed:

      Tray Number: 4
      Model: WD2001FASS-00W2B
      Power On Hours: 2403
      Temperature (Celsius): 34
      Reallocated Sector Count: 66
      Current Pending Sector: 1447
      Raw Read Error Rate: 61
      Seek Error Rate: 0
      Hardware ECC Recovered: N/A

      Tray Number: 1
      Model: WD2001FASS-00W2B
      Power On Hours: 2403
      Temperature (Celsius): 32
      Reallocated Sector Count: 0
      Current Pending Sector: 1465
      Raw Read Error Rate: 0
      Seek Error Rate: 0
      Hardware ECC Recovered: N/A

    I'm not really an expert on either disks or RAID arrays. Does this indicate that the fourth disk is damaged and needs to be replaced? And what about disk number 1? It has a warning, but it's still in the array. Is it safe to add the fourth disk back into the array as a spare? I can't find any way to add it back as it was before.
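
    If you can attach a pulled disk to a Linux box (or get shell access to the NAS), smartctl reports the same counters together with the vendor thresholds and lets you run a self-test, which makes the replace/keep decision less of a guess. A sketch; the device path is an example:

      # Overall health verdict plus the full SMART attribute table
      smartctl -H -A /dev/sdd

      # Start a long (surface) self-test, then read the result when it finishes
      smartctl -t long /dev/sdd
      smartctl -l selftest /dev/sdd

    For what it's worth, Reallocated Sector Count and Current Pending Sector are the two attributes to watch in the output you already have; non-zero pending sectors on both disks means backing up before touching the array is the safest first step.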


  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 machine which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to the disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

      176.9.116.102:/mydir  /nfs4mounts/mydir  nfs4  soft  0  0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes around 4 files per second. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files, i.e. generally work asynchronously, but honour fsync and O_SYNC?
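
    For experimenting with the trade-off, the export options can be switched and verified without a restart. A sketch; the exports line is only an example of the async variant, and the usual caveat applies: with async the server acknowledges writes before they reach disk, so a server crash can silently lose data the client believes is committed.

      # /etc/exports, async variant (example):
      #   /nfs4exports/mydir 176.9.116.102(rw,async,fsid=0,no_subtree_check)

      # Re-export without restarting the NFS server, then verify active options
      exportfs -ra
      exportfs -v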


  • Explorer.exe keeps crashing during login

    - by asif
    I have got a weird problem. My Windows 7 machine has two user accounts (both administrators). I can log in to one account and do all sorts of work, but whenever I try to log in to the other account, it shows a blank screen and a message box pops up with "Windows Explorer has stopped working". The options available are:

    - Close the program
    - Check online for a solution and close the program

    The problem signature is as follows:

      Problem Event Name: InPageError
      Error Status Code: c000009c
      Faulting Media Type: 00000003
      OS Version: 6.1.7601.2.1.0.256.1
      Locale ID: 1033
      Additional Information 1: 0a9e
      Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
      Additional Information 3: 0a9e
      Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and then select Start Task Manager, it also crashes. I cannot run any program using the runas command (from the good profile) either; Task Manager and runas all show the same problem signature. I read the similar question and followed all the steps, but no luck. Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location, but the file is there. The actual message is:

      Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error.

    The question is: how can I resolve this issue? Should I just delete the file, or replace it with another one, to stop explorer.exe from crashing?

    Off-topic: what is the content of this file, and why is it necessary?


  • NetBackup's bplist doesn't get user/group info for Windows files

    - by Gnustavo
    I'm trying to get information about storage consumption from NetBackup's bplist output. I'm running NBU 6.0MP5 on a RHEL 3 server. The server is backing up several Solaris, Linux, and Windows machines. When I use bplist to get information about files backed up on any UNIX machine, I get something like this:

      # bplist -C unixclient -R 99 -l -s 01/28/2006 -e 01/29/2006 /
      drwxr-xr-x test ccase     0 Nov 16 09:28 /l/home2/test/
      -rw------- test ccase  4737 Jan 06 17:54 /l/home2/test/.bash_history
      -rw-rw-r-- test ccase   104 Nov 11  2004 /l/home2/test/.bashrc

    However, when I use it to list files backed up on any Windows client, I can't get the user and group information; they both always appear as 'root'. Like this:

      # bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 /
      drwx------ root root  0 Feb 20 14:26 /C/temp/
      -rwx------ root root 41 Feb 20 14:26 /C/temp/asdf.txt
      drwx------ root root  0 May 25  2004 /C/temp/CTRMNGR/

    Does anyone know why bplist doesn't show the correct user/group for Windows files? If it can't, is there a way to get that information using another command? Thanks. Gustavo.
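
    For the underlying goal (storage consumption), the size column is still usable even where the ownership columns are not. A small sketch that sums it, reusing the same bplist flags as in the question:

      # Total size, in MB, of the listed Windows backups (size is field 4)
      bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 / |
        awk '{ sum += $4 } END { printf "%.1f MB\n", sum / (1024 * 1024) }'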


  • Where is my problem? The P6X58D Premium Mobo, Windows 7, or other?

    - by Dylan Yaga
    I was having problems with my USB devices for an hour last night, and I am unable to determine the root cause. The two symptoms are:

    - At seemingly random times (not consistently spaced by time, or caused by any detectable event), my USB devices become "detached". Windows plays the USB disconnect sound and then the reconnect sound; the devices disconnect and then reconnect.
    - My USB keyboard "sticks" on one key for several seconds before processing any other keystroke, and the mouse does not respond to clicks. I do not lose mouse movement or USB device connectivity. After a moment of this, several beeps are emitted from the speakers.

    Hardware specs:

      GFX card: EVGA GeForce GTX 470 Superclocked 1280MB DDR5 PCIe
      Motherboard: ASUS P6X58D Premium Intel X58 Socket LGA1366
      Processor: Intel Core i7-920 2.66GHz 8M LGA1366
      Memory: Corsair Dominator 6144MB PC12800 DDR3
      Storage: Hitachi 1TB Serial ATA HD 1600MHz 7200/32MB/SATA-3G
      Cooling: Corsair Hydro H50 CPU liquid cooler
      Case: Corsair Obsidian 800D full tower
      Power supply: Corsair HX1000W 1000W modular

    Steps I have taken to narrow down the problem:

    - Restarted the computer. No change.
    - Changed the USB port the hub was connected to on the computer. No change.
    - Removed all devices from the USB hub and connected them directly to the computer. No change.
    - Used a different USB keyboard, both in the USB hub and connected directly. No change.
    - Disconnected and reconnected all cables. No change.
    - Disassembled the tower and checked that the USB headers were firmly connected. No change.
    - Checked Device Manager for errors, on all USB devices. Nothing flagged.

    After an hour of frustration trying to narrow down the problem, it appeared to disappear. But I am torn between it being a motherboard problem and an OS problem. Is there anything else I can do to narrow down the problem before a reformat and then, eventually, exchanging the motherboard?


  • Network speed between a VM and another machine not residing on the same host is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine which is not residing on the same host is 11MB/s at most.

    Topology: (diagram not reproduced here)

    Facts:

    - ESXi 5 version is 5.0.0.504890
    - The VM has the latest VMware Tools installed
    - The VM is using the E1000 network driver
    - The physical box runs Windows Server 2008 R2
    - CrystalDiskMark says the drive on the physical box can read/write 100MB/s
    - vCenter is another VM on the ESX host
    - Both the VM and the physical box show a 1Gbps link speed
    - Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far:

    Test 1: The VM runs FileZilla FTP Server (default settings, one user account); the physical box runs FileZilla FTP Client (default settings). The physical box uploads a big file to the FTP server: transfer speed, as observed by Windows Task Manager on both machines, is ~11MB/s (bad). The physical box then downloads that file from the FTP server: still ~11MB/s (bad). Could it be a disk performance issue?

    Test 2: The physical box runs ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS; the VM runs ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS. Transfer speed on both machines: ~11MB/s (bad). Could it be a switch performance issue?

    Test 3: The physical box runs the vSphere Client. I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed, as observed on the physical box: ~26-36MB/s (good). Could it be a VM-specific issue?

    Test 4: I installed ntttcp on another VM on the same ESX server and measured network performance between VMs on the same host with NTttcp: ~90-120MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. The two ESX servers each have 2 NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the test VMs off to the other ESX host and measured network performance between the VMs on different ESX servers with NTttcp: ~11MB/s (bad).

    While I'm aware of these, they did not help (or I must have missed something):

    - ESXi 4.1 slow file transfer
    - ESXi 5 network performance is slow
    - Debian Etch and ESXi slow network speeds
    - VMWare ESXi slow file copy to guest
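
    Since the slow cases are exactly the ones that traverse a physical uplink, it's worth ruling out a speed/duplex mismatch on the uplinks from the ESXi shell before digging deeper (11MB/s is suspiciously close to 100Mbit wire speed). A sketch; the vmnic name is an example:

      # Negotiated speed/duplex for every physical uplink
      esxcli network nic list

      # Driver and link details for a single uplink
      esxcli network nic get -n vmnic0

    If the uplinks all report 1000/Full, the next usual suspect is the E1000 guest driver; trying a vmxnet3 adapter on one test VM would separate guest-driver effects from the physical path.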


  • Need advice on choosing software/hardware for virtualization

    - by Anatoly
    Currently we have these servers:

    - Windows SBS 2003 Premium on an IBM X266 F43 with dual Xeons and 2GB RAM. Roles: DC, Exchange (70 users), MSSQL.
    - Windows 2003 R2 32-bit on an IBM x3400 with dual Xeon E5310s and 4GB RAM. Roles: terminal server (40+ users), an ERP application based on the uniPaaS platform from Magic Software, and Pervasive SQL.
    - Ubuntu 8.04 (a simple PC box) with a Squid proxy, a GLPI system, and a phpBB3 forum for internal use.

    Recently the number of concurrent users on the terminal server has passed 40 in rush hours, and it gets stuck frequently, so we need an upgrade. I am thinking about moving all the physical servers to virtual servers based on a cluster of 2 physical servers, to reduce downtime. I expect we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Windows XP/7 workstations (office, ERP, etc.), and there is a small chance of adding Asterisk/HylaFAX for 100 users (if that is possible on the same VM). We also need 2-3TB of NAS storage. What hardware upgrades or purchases do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose: Acronis or something else? Thank you in advance.


  • Virus cleanup + dying drive = XP Automatic Updates crashing in esent.dll

    - by quack quixote
    Background I'm doing system recovery on an old WinXP SP1 system brought to me on suspicion of virus infection. After taking preliminary backups, I used MalwareBytes to detect and clean the infection. I might've even gotten it all. In the process, I've discovered (a) the system drive is showing signs of impending failure, and (b) the owner has been using the system's old crusty IE-6 instead of the up-to-date Firefox I've provided for him. So naturally, thinking I had a relatively stable system, I tried to hit the Windows Update site to install IE-8, in case further training doesn't stick. The update site told me it needed to update the installer, and I started that process. Soon after, wuauclt.exe started crashing, reporting addresses in module esent.dll. There's a Microsoft KB (910437) on a problem with that DLL, so I downloaded the hotfix and installed. The crashing did not stop. I attempted to install SP3 from the offline installer, but that didn't fix the issue either. The system is reporting a few hard drive / IDE controller errors, but they don't correlate to the crashes, so they aren't the direct cause. I've also attempted to rollback to the time between the infection removal and the first crashes, but that doesn't help. Question The hotfix I tried to install dealt with problem in transaction logs of the Extensible Storage Engine (ESE) database. I suspect this issue is similar, but that the database itself (whatever the ESE database is) is corrupted. Is there a way to clean or clear this database so that system operation returns to normal? Can someone enlighten me as to what the ESE database actually is, and where it resides? Can I just locate some files and delete them to bring this under control?


  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted...

    - for best write performance
    - for use only with embedded Linux
    - for better reliability (random power failures may occur)
    - using a 64kB cluster size

    I'm using an 8GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to handle random power failures during writes better. However, I keep noticing that my write performance is always best with the FAT32 that came pre-installed from Kingston. If I reformat the card with FAT32 myself, the performance still suffers. Browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64kB cluster size.

      Risks of reformatting: Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device. In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient.
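
    Both goals can be approximated from a Linux host. A sketch; the device path is an example, so double-check it before running anything destructive:

      # FAT32 with 64 KiB clusters: 128 sectors/cluster x 512-byte sectors
      mkfs.vfat -F 32 -s 128 -v /dev/mmcblk0p1

      # ext3 block size tops out at 4 KiB on most architectures, so a true 64 KiB
      # cluster isn't possible; the closest analogue is aligning the allocator to
      # 64 KiB boundaries with RAID-style stride hints (16 x 4 KiB = 64 KiB)
      mkfs.ext3 -b 4096 -E stride=16,stripe-width=16 /dev/mmcblk0p1

    Keeping the partition itself aligned to the card's erase-block boundary (as the Kingston factory layout presumably is) matters as much as the cluster size; the 1MiB partition alignment that recent fdisk versions default to usually takes care of that.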


  • External SATA drive does not work without the optional USB cable *also* connected

    - by Software Monkey
    I have a Vantec NST-260SU external eSATA/USB drive enclosure (which came with an optional separate power supply) connected to a relatively new Windows 7 computer. The drive should work as a SATA drive with either the separate power supply or with a USB cable used solely for power. I would prefer to use the external power supply, because I have used up all my rear USB ports. Now, if I connect both the eSATA and the USB cable, then:

    - The drive shows in the BIOS list of AHCI drives (and not in the list of attached USB devices).
    - Everything I can see about it in Computer Management shows it as a SATA drive. For example, it shows as "Location 0 (Channel 5, Target 0, Lun 0)", like my other SATA drives, and not "on USB Mass Storage Device", like my USB flash drives.
    - It seems very fast, very much faster than my USB flash drives.

    However, if I disconnect the USB cable and attach the power adapter instead, the drive does not show in the BIOS list and cannot be seen by Windows. The power LED on the enclosure is lit, and the enclosure becomes warm after running for a bit, so I am sure it is receiving power. Does anyone know if this device requires both the USB and the eSATA cable, and if so, why? Or is there possibly something I need to do to reset the enclosure to not need USB? The install instructions are pretty clear that you must connect the SATA cable before connecting the USB cable in order for the drive to function as SATA, which I am sure I did.

    PS: I have reviewed the small manual which came with it, and it has not been of help.

