Search Results

Search found 11618 results on 465 pages for 'shared storage'.

Page 372/465

  • Access Denied on Some Subfolders/Files Within a Share

    - by Tim
    First thing this morning, I find that users on one of our share drives are all getting "access denied". I tried the same drive and also received "access denied" as a Domain Admin. Before this, all specified users and admins had access. I checked the share permissions, checked the NTFS permissions, and temporarily set both types of permissions to read/write for "Everyone" -- this worked for one user. It turns out this is occurring for only some files/folders. When I try to manually alter the sharing on one of those folders, it can't be shared: access denied. xcacls also gets access denied. I rebooted the server (not a big deal - this is a smallish company). Does anybody have any insight? My google-fu is coming up blank. Thanks. EDIT: More info - I just ran AccessEnum. There were a lot of "access denied" entries, and I noticed that every one of them had a parent whose owner showed as "???". When I look at the properties, the box shows "Unable to display owner" and I can only make my own user account the owner. I can then share the individual file/folder, but it doesn't seem to propagate down to subfolders/files.
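
    One approach worth trying (a sketch, not from the original post; the share path below is a placeholder): when the owner shows up as "???", taking ownership from an elevated prompt and then resetting the ACLs to the inherited defaults usually lets permissions flow back down to the subfolders. icacls ships with Server 2003 SP2 and later; it covers what xcacls was being used for here.

      takeown /F D:\Shares\ProblemFolder /R /D Y
      icacls D:\Shares\ProblemFolder /reset /T /C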

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the Payment Gateway/PG website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from Hostgator/HG. Options: 1) I get an SSL certificate for www.my_domain.com. According to HG, I would need to change the IP of the main site, as the current IP is not really dedicated (it is shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours of downtime (which is OK). 2) I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I have a few doubts in this case: a) Would I need to re-register with existing PGs (PayPal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please suggest.
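
    Regarding (b), a sketch of what the subdomain approach involves on the DNS side (the IP below is just a placeholder): only a new A record for the subdomain is needed, pointing at the dedicated IP. The existing record for www.my_domain.com is left untouched, so the main site itself sees no DNS change or downtime.

      secure.my_domain.com.    IN  A    203.0.113.10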

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit: I'm not as familiar with Fedora as I'd like (I run Ubuntu on my home box) and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache would see them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user that FTP saves files as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything, as Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
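
    A sketch of the group/SELinux side of that "logical fix" (user, group, and path names here are assumptions): put Apache and the web users in a shared group, make the docroots setgid so new files inherit that group, and relabel the directories so SELinux lets httpd read them. In vsftpd, setting local_umask=002 makes uploads group-writable so Apache can modify them.

      groupadd webusers
      usermod -aG webusers apache
      usermod -aG webusers someuser
      chgrp -R webusers /home/someuser/public_html
      chmod -R g+rwX /home/someuser/public_html
      find /home/someuser/public_html -type d -exec chmod g+s {} \;
      chcon -R -t httpd_sys_content_t /home/someuser/public_html
      setsebool -P httpd_enable_homedirs on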

    Read the article

  • What is the sysadmin's dream network printer? 6-8k pg/mo. Xerox, OkiData, Lexmark and HP are all fail

    - by Jacob
    How do I find out which printer brand and/or type doesn't suck? This information is hard to find, and manufacturers' websites won't reveal any issues with certain printers. After 10 years of dealing with network shared printers, I can't say that I have been impressed with any of the printer brands I've seen. Brother's little laser MFPs have been close to ideal for low volume, but that is it, period. OkiData, Lexmark, HP, Xerox solid ink printers - they all sucked in one way or another. Currently I'm looking to replace a Xerox ColorQube 8570 because it fails to print on a regular basis; sometimes it doesn't even boot VxWorks fully - it just hangs at 2% or whatever. I've used Xerox 8860MFPs and they sucked just as badly. I won't talk about inkjets here; they're most likely not what I'm looking for. We currently spend about $4k on paper and ink per year for this printer, at up to 6-8k pages per month, letter size, mostly black and white, low color usage. I want the printer to feed paper correctly, not crash and burn when a PDF isn't to its taste (my favorite Xerox problem here), and to come with decent drivers for Windows and OS X. Print quality is not of the utmost importance, but paper does get sent to customers.

    Read the article

  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the 'www-data' user to have the ability to launch php-cgi as another user. I just want to make sure that I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Since PHP scripts can be malicious and read/write others' files, I'd like to ensure that each user's PHP scripts run with that user's own permissions (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

      www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like this (without a password prompt), where some_user is a user in the group www-data:

      sudo -u some_user /usr/bin/php-cgi

    What are the security implications of this? It should then allow me to modify my Lighttpd configuration like this, letting me spawn a separate FastCGI server instance for each user:

      fastcgi.server += ( ".php" =>
        (( "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
           "socket" => "/tmp/php.socket",
           "max-procs" => 1,
           "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ),
           "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
           "broken-scriptfilename" => "enable"
        ))
      )

    Read the article

  • Updating Samba From RPMs

    - by KnickerKicker
    My Red Hat Enterprise Linux 4 install comes with Samba version 3.0.10, which does not have support for the "inherit owner" attribute that is essential in implementing a deny-delete, write-once-read-many share (for examples, search Google for a-shared-drop-box-using-samba). (BTW, if anybody knows an alternative way to do it without updating Samba, I'm all ears!) I am not all that comfortable building from source, and after hours of googling (no, I do not have a Red Hat subscription, so I cannot just run the up2date command), I found a whole bunch of RPMs on http://ftp.sernet.de/pub/samba/tested/rhel/4/i386/ (Samba 3.2.15 for RHEL 4). Next, I tried updating them with the rpm -U --nodeps command, but I got file conflict errors. So I went ahead and overwrote everything (or so I thought) by using rpm's --force option. But no good has come of all that: /usr/sbin/smbd -V still returns the old version. As of now, rpm -qa | grep samba returns:

      samba3-client-3.2.15-40.el4
      samba-3.0.10-1.4E.2
      samba-client-3.0.10-1.4E.2
      system-config-samba-1.2.21-1
      samba3-3.2.15-40.el4
      samba-common-3.0.10-1.4E.2
      samba3-winbind-3.2.15-40.el4

    I cannot remove the older ones because:

      samba-common >= 3.0.8-0.pre1.3 is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64
      libsmbclient.so.0()(64bit) is needed by (installed) kdebase-3.3.1-5.8.x86_64
      libsmbclient.so.0()(64bit) is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64

    Now that's a whole bunch of dependencies that I dare not touch :) Any and all pointers are welcome at this stage. Thanks in advance!
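
    A sketch of one way out (package names are taken from the rpm -qa output above; back up /etc/samba first): the old 3.0.10 packages are still installed alongside the Sernet samba3 ones and still own /usr/sbin/smbd, so the idea is to force the old ones out without touching their dependents, reinstall the samba3 RPMs so their files win, and then confirm the running binary. Afterwards, verify that libsmbclient.so.0 still resolves (ldconfig -p | grep smbclient), since gnome-vfs2-smb and kdebase depend on it.

      rpm -e --nodeps samba-3.0.10-1.4E.2 samba-client-3.0.10-1.4E.2 samba-common-3.0.10-1.4E.2
      rpm -Uvh --force samba3-*.rpm samba3-client-*.rpm samba3-winbind-*.rpm
      /usr/sbin/smbd -V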

    Read the article

  • Server 2008R2 in Extra Small Windows Azure Instance?

    - by Shawn Eary
    Windows Azure hosting for an Extra Small (XS) Windows VM seems to come out to about $10 a month right now. I think this XS instance gives you the equivalent of a 1 GHz CPU with 768 MB of RAM. I think the minimum requirements for Server 2008 are a 1 GHz CPU with 512 MB of RAM. Also, I think the minimum requirements for SQL Server Express are a 1 GHz CPU with 256 MB of RAM, and that the minimum requirements for Team Foundation Server Express 11 Beta are a 2.2 GHz CPU with 1 GB of RAM (the 2.2 GHz part could be a problem for my 1 GHz XS VM...). Given the performance of the XS Azure instance, would I be able to install a very basic MVC web site, a free instance of SQL Server Express, and a free single-user instance of Team Foundation Server Express 11 Beta, and run the XS VM instance without serious crashing? I know there are other shared web-host providers that can provide these features for me, but those hosting providers have the following disadvantages: they sometimes cost a lot of money after all of the "addons" are in place; they probably don't provide the level of security and employee integrity that Microsoft can; and they don't provide the total control that an Azure VM seems to provide.

    Read the article

  • ffmpeg conversion problem

    - by user33126
    I installed ffmpeg and it shows the version and everything correctly, but even the info command ffmpeg -i Alice_In_Wonderland.mp4 just gives a message like:

      FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
      configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --extra-cflags=-fPIC --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab
      libavutil 49.15. 0 / 49.15. 0
      libavcodec 52.20. 0 / 52.20. 0
      libavformat 52.31. 0 / 52.31. 0
      libavdevice 52. 1. 0 / 52. 1. 0
      libswscale 0. 7. 1 / 0. 7. 1
      libpostproc 51. 2. 0 / 51. 2. 0
      built on Nov 6 2009 19:11:04, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46)
      Seems stream 1 codec frame rate differs from container frame rate: 49.93 (9986/200) -> 49.92 (599/12)
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Alice_In_Wonderland.mp4':
        Duration: 00:01:39.65, start: 0.000000, bitrate: 542 kb/s
        Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16
        Stream #0.1(und): Video: h264, yuv420p, 480x270, 49.92 tbr, 24.96 tbn, 49.93 tbc
      At least one output file must be specified

    Please tell me what the problem is.
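
    A sketch of the likely fix (output file names are placeholders): the last line of that output is the actual error -- ffmpeg was given an input but no output file, so it only printed the stream info. Adding an output file makes it convert; the second line just remuxes the existing streams without re-encoding.

      ffmpeg -i Alice_In_Wonderland.mp4 output.avi
      ffmpeg -i Alice_In_Wonderland.mp4 -acodec copy -vcodec copy copy.mp4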

    Read the article

  • Virus cleanup + dying drive = XP Automatic Updates crashing in esent.dll

    - by quack quixote
    Background: I'm doing system recovery on an old WinXP SP1 system brought to me on suspicion of virus infection. After taking preliminary backups, I used MalwareBytes to detect and clean the infection. I might've even gotten it all. In the process, I've discovered (a) the system drive is showing signs of impending failure, and (b) the owner has been using the system's old crusty IE 6 instead of the up-to-date Firefox I've provided for him. So naturally, thinking I had a relatively stable system, I tried to hit the Windows Update site to install IE 8, in case further training doesn't stick. The update site told me it needed to update the installer, and I started that process. Soon after, wuauclt.exe started crashing, reporting addresses in the module esent.dll. There's a Microsoft KB (910437) on a problem with that DLL, so I downloaded the hotfix and installed it. The crashing did not stop. I attempted to install SP3 from the offline installer, but that didn't fix the issue either. The system is reporting a few hard drive / IDE controller errors, but they don't correlate with the crashes, so they aren't the direct cause. I've also attempted to roll back to a restore point between the infection removal and the first crashes, but that doesn't help. Question: The hotfix I tried to install dealt with a problem in the transaction logs of the Extensible Storage Engine (ESE) database. I suspect this issue is similar, but that the database itself (whatever the ESE database is) is corrupted. Is there a way to clean or clear this database so that system operation returns to normal? Can someone enlighten me as to what the ESE database actually is, and where it resides? Can I just locate some files and delete them to bring this under control?
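
    For the "where does it reside" part: Automatic Updates keeps its ESE database at %windir%\SoftwareDistribution\DataStore\DataStore.edb, and it is safe to throw away - Windows rebuilds it on the next update check. A sketch of clearing it (check the disk first, since an InPageError like this on a failing drive usually means a page could not be read):

      net stop wuauserv
      ren %windir%\SoftwareDistribution SoftwareDistribution.old
      net start wuauserv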

    Read the article

  • will heavy network traffic affect other connections on HP ProCurve V1810-48G?

    - by nn4l
    I have an HP ProCurve V1810-48G switch with a few servers connected to it (everything in one rack). The switch is practically in its default configuration. While copying a few hundred GB of data from server_a to server_b (using tar cf - data | ssh server_b 'cd myhome; tar xf -'), essentially saturating the network capacity between those two servers, I noticed network-related error messages on the console of server_c - as if server_c were no longer able to send/receive traffic to server_d. After cancelling the copy command everything was normal again. I would understand this if the connections shared a resource - for example, if server_a and server_c were in one datacenter, server_b and server_d in another, and the two datacenters were connected by a 100 Mbit line. But all of the mentioned servers are connected to the same switch and are located in the same IP network. I always thought that a connection between two servers on one switch would not affect any other server connected to that switch. It is also possible that the network-related error messages are caused by something else - but I can't risk a network problem for any other system on this switch. Please advise.
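
    If the copy itself turns out to be the trigger, a sketch of throttling it so it cannot saturate the link (the 50 MB/s figure is arbitrary; rsync's --bwlimit is in KB/s, and pv has to be installed for the second form):

      rsync -a --bwlimit=50000 data/ server_b:myhome/data/
      tar cf - data | pv -L 50m | ssh server_b 'cd myhome; tar xf -'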

    Read the article

  • What ways are there to set permissions on an Exchange 2003 mailbox?

    - by HopelessN00b
    I'm having a difficult/impossible time tracking down a permissions issue on an Exchange 2003 mailbox, and I was wondering if I'm missing any technical possibilities here. The basic question is: what ways are there to set a user's permissions to access a mailbox in Exchange 2003? I know of two - permissions on the mailbox itself (Mailbox Rights) and having delegated rights. And then, if it's possible, how would one view all the permissions (including delegated permissions) on the mailbox? The situation is that a new user who's been set up "exactly like all the others" in his department (pretty sure he was copied via the right-click option in ADUC, in fact) can't access a specific shared mailbox, which I've been assured about a dozen other people do have access to and access on a regular basis. As to how they got permissions to the mailbox, no one knows, so it must have been granted by a white wizard whose spell has since worn off, so now IT has to handle it instead. Anyway... this mailbox is a normal AD user, created as a service account, for which no one knows the password (of course), so it's probably not the case that this service account was being used to delegate permissions. Upon examining the Mailbox Rights directly, here are the permissions I see: This leads me to believe that one of two things is happening - the managers have been delegating full mailbox permissions to the rest of the department, or everyone's logging in using... not their own account. But, before I get too excited about the prospect of busting out the LART and strolling over to that department, I want to make sure I'm not missing another possible explanation. Like most of the rest of the world, I ditched Exchange 2003 at the earliest possible opportunity and had been looking forward to never seeing it again, so I'm a bit rusty on the intricacies of how it [mostly, sort of] works. Anyone see any other possibilities, or things I may have missed, or does the LART get to come out and play?

    Read the article

  • Including email, IMs, configs, etc. in documentation or notes

    - by Jason Antman
    The shop I work in is pretty laid-back. We're on a documentation kick, only because historically we've been very bad at it. We do a lot of our brainstorming in face-to-face meetings, and also communicate a lot via IM in addition to email. While I'm usually pretty good about documentation and keeping copious lab notes, I just finished a build of a host and spent hours searching through IMs, emails, files on my workstation, etc. to pull out anything I had missed in my lab notes, which formed a large part of the basis for the internal documentation. Does anyone have any thoughts on managing various data sources (especially email and IM) and tracking them on a per-project basis, aside from manually saving things to a project directory? Ideally, I'd like an easy way to put copies of emails, IM logs, etc. into a project-specific directory on my workstation and then just have a cron job that syncs that up with a shared folder. This isn't really a candidate for anything more advanced, as the bulk of the data will be copies of configs, code, etc. Here are the big restrictions: email is via a centralized Zimbra install, so nothing can happen server-side, and my workstation is Linux. Aside from writing Pidgin and Thunderbird plugins that let me tag chats and emails as belonging to a project and then copy them to the appropriate place... any thoughts? Suggestions? Thanks, Jason
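
    The sync half of that is easy to sketch (paths are assumptions); the tagging and copying of emails and IM logs still has to happen by hand or via plugins:

      rsync -a --delete ~/projects/ /mnt/shared/projects/
      # crontab entry to push it up hourly:
      0 * * * * rsync -a --delete "$HOME/projects/" /mnt/shared/projects/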

    Read the article

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with a sensible classification (by similarity, need, read/write access, etc.), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", and so on. At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial and error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc. It could be used to display directory information in the standard file lists served by web servers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
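
    As a rough illustration of the idea for terminal users (a sketch, not an existing standard): a small shell wrapper can surface the per-directory note the same way the proposed file-browser pane would.

      # print the directory's .filing note, if any, every time you cd into it
      cd() { builtin cd "$@" || return; [ -f .filing ] && cat .filing; return 0; }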

    Read the article

  • NetBackup's bplist doesn't get user/group info for Windows files

    - by Gnustavo
    I'm trying to get information about storage consumption from NetBackup's bplist output. I'm running NBU 6.0MP5 on a RHEL 3 server. The server is backing up several Solaris, Linux, and Windows machines. When I use bplist to get information about files backed up on any UNIX machine I get something like this:

      # bplist -C unixclient -R 99 -l -s 01/28/2006 -e 01/29/2006 /
      drwxr-xr-x test ccase 0 Nov 16 09:28 /l/home2/test/
      -rw------- test ccase 4737 Jan 06 17:54 /l/home2/test/.bash_history
      -rw-rw-r-- test ccase 104 Nov 11 2004 /l/home2/test/.bashrc

    However, when I use it to list files backed up on any Windows client I can't get the user and group information. They both always appear as 'root'. Like this:

      # bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 /
      drwx------ root root 0 Feb 20 14:26 /C/temp/
      -rwx------ root root 41 Feb 20 14:26 /C/temp/asdf.txt
      drwx------ root root 0 May 25 2004 /C/temp/CTRMNGR/

    Does anyone know why bplist doesn't show the correct user/group for Windows files? If it can't, is there a way to get that information using another command? Thanks. Gustavo.

    Read the article

  • White Screen, No Errors.

    - by GruffTech
    So... interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step.

    Server & application environment:
      CentOS release 5.3 (Final)
      Apache 2.2.3-22 (EnableSendfile off, EnableMMAP off, ErrorLog logs/error_log, LogLevel debug)
      PHP 5.2.6-2 (error_reporting = E_ALL, display_errors = on, log_errors = on, max_execution_time=300, max_input_time=60, memory_limit=512mb)
      Kohana 2.3 PHP environment
      HAProxy 1.3.15.6-2
      MemCacheD 1.2.6-1

    Our application is split between 3 web servers, mounting an NFS storage server, with sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, the application just shows a pure white page. Not a 404 error or a 500 server error - a clean white page. And it returns instantly, so it's not an execution-time error. Nothing in the error log or server-error log, the proxy log shows a standard proxied connection, and there's just the standard 200 status in the access log, with 256 bytes transferred. To me, this leads to the conclusion that the application itself is having a problem - a rare, unexplainable, seemingly random problem that causes what we've now called the "White Screen of Death." Our developers all say that since there is nothing going to our error logs, it must be a server problem. But I say the same thing: there's nothing going to ANY of our logs (relevant to this, anyway), and we're not having httpd children crash from what I can tell. Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or, if it is somehow a bug, identify it?
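
    A couple of low-effort checks worth sketching (log paths are CentOS defaults and may differ): segfaulting httpd children and PHP fatal errors do not always reach the vhost error log, so grep the main logs on all three web servers before concluding the logs are truly empty.

      grep -iE 'segfault|exit signal|child pid' /var/log/httpd/error_log
      grep -i 'php fatal' /var/log/httpd/*error_log*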

    Read the article

  • Configuring ZenOSS to monitor CPU load

    - by Tom
    I'm trying to test out monitoring tools for a network at work with a coworker, but neither of us has ever used any sort of monitoring tool before. Currently we are experimenting with ZenOSS and having some difficulties. We want to populate our CPU load graphs, because that is one of the primary features we are looking for in a monitoring tool, but we have been unable to populate the graphs with data. So far we have installed the wmipreformance, sqldatasource, wmidatasource, and snmpperformance (simple) ZenPacks, and the machine we are trying to monitor is running Windows XP. We have tried to model the device and everything seems to run, and we've tried to add data points to graphs, but the only options we receive for graphs are CPU and Memory. We are able to monitor services; ZenOSS recognizes the make and model of the processor, RAM, and hard drive, and even gives us metrics on available storage, but again, we are looking for performance metrics such as CPU load and memory utilization. I realize I probably didn't provide a lot of information, but that is because we don't have a very good idea of what we are doing and can't find instructions either on the ZenOSS homepage or forums for monitoring CPU load. If someone could give us step-by-step instructions on how to set up CPU load monitoring, that would probably be more beneficial to us than a diagnostic of our current setup; regardless, if I left any important information out and you need it to answer the question, please let me know. Thank you.

    Read the article

  • Free software for backing up an attached network drive

    - by Richard
    My wireless router comes with a USB connector which allows me to plug in an external hard drive and have it act as Network Attached Storage. The problem is that I want to back up this hard drive to an external drive on another computer, so that if the NAS drive fails I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up, and I can't fool it by mapping it to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question, but the top answers have one or more of the following issues: they aren't free; the free version cannot back up a NAS; they cannot do incremental backups; or they're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.)
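
    One free, built-in option worth a sketch (share name and drive letter are placeholders): robocopy, scheduled via Task Scheduler, copies only changed files on each run. Note that /MIR keeps an exact mirror (it deletes files removed from the NAS), so drop it if you want deleted files retained.

      robocopy \\NAS\share E:\NAS-Backup /MIR /Z /R:2 /W:5 /LOG:E:\nas-backup.log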

    Read the article

  • EMC VNX iSCSI setup - unsure about SP/port assignment

    - by pauska
    We have a new VNX5300 waiting to be configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4x 1 Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most out of the performance until we jump over to 10 Gig iSCSI. From what I can read in the docs, the recommendation is to use only two ports per SP, with 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend not using more than two of them. Also, I'm a bit unsure about the zoning. The best practices guide states that you should separate each port on each SP from the others on different logical networks. Does this mean that I have to create 4 logical networks to be able to use all 8 ports? It also gives the following example: does this mean that A0 and B0 should sit on the same physical switch as well? Won't this make all traffic go through one switch (if both A1 and B1 are passive)? Edit: Another brain-puzzle I don't get: each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If serverA has 1 Gbit and serverB has 100 Mbit, then the resulting bandwidth between them is 100 Mbit. How can this result in some kind of oversubscription? Edit 4: Wait, what - active and passive ports? The VNX runs in an ALUA configuration with asymmetrical active/active... there shouldn't be any passive ports, only preferred ones.

    Read the article

  • Explorer.exe keeps crashing during log in

    - by asif
    I have got a weird problem. My Windows 7 machine has two user accounts (both administrators). I can log in to one account and do all sorts of work. But whenever I try to log in to the other account, it shows a blank screen and a message box pops up with "Windows Explorer has stopped working". The options available are: Close the program; Check online for a solution and close the program. The problem signature is as follows:

      Problem Event Name: InPageError
      Error Status Code: c000009c
      Faulting Media Type: 00000003
      OS Version: 6.1.7601.2.1.0.256.1
      Locale ID: 1033
      Additional Information 1: 0a9e
      Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
      Additional Information 3: 0a9e
      Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and then select Start Task Manager, it also crashes. I cannot run any program using the runas command (from the good profile) either; Task Manager and runas show the same problem signature. I read the similar question and followed all the steps, but no luck. Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location but the file is there. The actual message is:

      Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error.

    The question is: how can I resolve this issue? Should I just delete the file or replace it with another one to stop explorer.exe from crashing? Off-topic: what is the content of this file, and why is it necessary?
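
    A sketch of the usual next steps (run from the account that still works; the path comes from the event-log message above): an InPageError with status c000009c means the disk failed to read that file's data, so check the disk first, then clear the shell cache files - these are generally safe to delete and are recreated at the next logon for that profile.

      chkdsk C: /r
      del /a "C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\*.db"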

    Read the article

  • Why is it bad to map network drives in Windows?

    - by Beeblebrox
    There has been some spirited discussion within our IT department about mapping network drives. In particular, it has been said that mapping network drives is A Bad Thing and that adding DFS paths or network shares to your (Windows Explorer/Libraries) Favourites is a far better solution. Why is this the case? Personally I find the convenience of z:\folder to be better than \\server\path\folder, particularly on the command line and in scripting (of course I'm not talking about hard-coded links, naturally!). I have tried searching for the pros and cons of mapped network drives, but I haven't seen anything other than "should the network go down, the drive will be unavailable" - and this is a limitation of any network-accessed storage. I have also been told that mapped network drives poll the network when the network resource is unavailable, but I haven't found more information on this. Wouldn't this still be an issue with other network access mechanisms (that is, mapped Favourites) whenever Windows tries to enumerate the file system (for example, when a file/folder picker dialog is opened)? Do network drives poll the network any more than a Windows Explorer library/favourite?

    Read the article

  • Where is my problem? The P6X58D Premium Mobo, Windows 7, or other?

    - by Dylan Yaga
    I was having problems with my USB devices for an hour last night, and I am unable to determine the root cause. The two symptoms are: 1) At seemingly random times (not consistently spaced by time or caused by any detectable event) my USB devices become "detached" - Windows plays the USB disconnect sound and then the reconnect sound, and the devices disconnect and then reconnect. 2) My USB keyboard "sticks" on one key for several seconds before processing any other keystroke. The mouse also does not respond to clicks, although I do not lose mouse movement or USB device connectivity, and after a moment of this several beeps are emitted from the speakers.

    Hardware specs:
      GFX card: EVGA GeForce GTX 470 Superclocked 1280MB DDR5 PCIe
      Motherboard: ASUS P6X58D Premium Intel X58 Socket LGA1366
      Processor: Intel Core i7-920 2.66GHz 8M LGA1366
      Memory: Corsair Dominator 6144MB PC12800 DDR3
      Storage: Hitachi 1TB Serial ATA HD 1600MHz 7200/32MB/SATA-3G
      Cooling: Corsair Hydro H50 CPU liquid cooler
      Case: Corsair Obsidian 800D full tower
      Power supply: Corsair HX1000W 1000W modular

    Steps I have taken to narrow down the problem: restarted the computer - no change; changed the USB port the hub was connected to on the PC - no change; removed all devices from the USB hub and connected them directly to the PC - no change; used a different USB keyboard, both in the USB hub and directly on the PC - no change; disconnected and reconnected all cables - no change; disassembled the tower and checked that the USB headers were firmly connected - no change; checked Device Manager for errors on all USB devices - nothing flagged.

    After an hour of frustration trying to narrow down the problem, it appeared to disappear. But I am torn between it being a mobo problem or an OS problem. Is there anything else I can do to narrow down the problem before a reformat and then eventually exchanging the mobo?

    Read the article

  • Thecus N5200, disk has dropped out of RAID5

    - by Anders Ekdahl
    We have a Thecus N5200 NAS here at work with five WD Caviar Black 2TB disks in a RAID 5 array. Yesterday, disk 4 dropped out of the array, and in the NAS web interface there's a warning about the RAID array being "degraded". When I go into Storage > Disks, disks 1 and 4 have a warning next to them. When I click on the warnings, this information about the disks is displayed:

      Tray Number: 4
      Model: WD2001FASS-00W2B
      Power On Hours: 2403 Hours
      Temperature Celsius: 34
      Reallocated Sector Count: 66
      Current Pending Sector: 1447
      Raw Read Error Rate: 61
      Seek Error Rate: 0
      Hardware ECC Recovered: N/A

      Tray Number: 1
      Model: WD2001FASS-00W2B
      Power On Hours: 2403 Hours
      Temperature Celsius: 32
      Reallocated Sector Count: 0
      Current Pending Sector: 1465
      Raw Read Error Rate: 0
      Seek Error Rate: 0
      Hardware ECC Recovered: N/A

    I'm not really an expert on either disks or RAID arrays. Does this indicate that the fourth disk is damaged and needs to be replaced? And what about disk number 1? It has a warning, but it's still in the array. Is it safe to add the fourth disk back into the array as a spare? I can't find any way to add it back as it was before.

    Read the article

  • Network speed between a VM and another machine which is not residing on the same host, is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine which is not residing on the same host is 11 MB/s at most.

    Topology / facts:
      ESXi 5 version is 5.0.0.504890
      The VM has the latest VMware Tools installed
      The VM is using the E1000 network driver
      The physical box runs Windows Server 2008 R2
      CrystalDiskMark says the drive on the physical box can read/write 100 MB/s
      vCenter is another VM on the ESX host
      Both the VM and the physical box show a 1 Gbps link speed
      Configuration > Networking shows vmnic0 as 1000 Full
      NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far:

    Test 1: The VM runs Filezilla FTP Server (default settings, one user account created) and the physical box runs the Filezilla FTP client (default settings). Uploading a big file from the physical box to the FTP server: ~11 MB/s as observed by Windows Task Manager on both machines (bad). Downloading the same file: still ~11 MB/s (bad). Could it be a disk performance issue?

    Test 2: The physical box runs ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS and the VM runs ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS. Transfer speed as observed by Windows Task Manager on both machines: ~11 MB/s (bad). Could it be a switch performance issue?

    Test 3: From the physical box, running the vSphere Client, I open Summary > Storage > datastore > Browse Datastore... and upload a file to the datastore. Transfer speed as observed by Windows Task Manager on the physical box: ~26-36 MB/s (good). Could it be a VM-specific issue?

    Test 4: I installed ntttcp on another VM on the same ESX server and measured network performance between the two VMs on the same host. Transfer speed as observed by Windows Task Manager: ~90-120 MB/s (excellent :).

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. Both ESX servers have 2 NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the test VMs off to the other ESX host and measured network performance between the VMs on different ESX servers with NTttcp. Transfer speed as observed by Windows Task Manager on the physical box: ~11 MB/s (bad).

    While I'm aware of these - "ESXi 4.1 slow file transfer", "ESXi 5 network performance is slow", "Debian Etch and ESXi slow network speeds", "VMWare ESXi slow file copy to guest" - they did not help (or I must have missed something).
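
    Two quick checks worth sketching before digging deeper (the command is available on ESXi 5; the adapter swap assumes the guest OS supports it): confirm the physical uplinks really negotiated 1000/Full, and re-run the NTttcp test after switching the VM's virtual NIC from E1000 to VMXNET3 (requires VMware Tools), which usually removes the E1000 emulation overhead.

      esxcli network nic list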

    Read the article

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256 GB memory each) and a DAS (Dell MD3220 with 24x 900 GB disks). Half of the virtual machines will run MS SQL databases (application, SharePoint, BI) and the other half will run file services and IIS. To extend the capacity of the storage, we'll be adding an MD1220 enclosure with another 24x 900 GB disks to the MD3220. Both DAS units will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS units with RAID 10 only and the other with RAID 5; that will allow us to put each application on the DAS that best suits its performance needs. The question is how best to partition the two DAS units to get the best possible IOPS/MBps; each DAS will have to have 2 hot spares. For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to that one disk group, or is it better to have 2 disk groups of 11 disks each, one assigned to each controller? Same question for the RAID 10 setup. The plan there is: 2 disks for logs (RAID 1), 2 hot spares, and 20 disks for RAID 10. Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group. I assume there is no right or wrong answer and that it all depends very much on the specific application behaviour, so I am looking for some general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
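
    A rough back-of-the-envelope check (the ~150 IOPS per 10k SAS spindle figure is a rule-of-thumb assumption, and this ignores controller cache): with the standard RAID write penalties (2 for RAID 10, 4 for RAID 5), either layout covers the 1000 IOPS average comfortably, but the 7000 IOPS peaks would have to be absorbed largely by the controller cache.

      RAID 10, 20 data disks:  reads ~ 20 x 150 = 3000 IOPS;  pure writes ~ 3000 / 2 = 1500 IOPS
      RAID 5,  22 disks:       reads ~ 22 x 150 = 3300 IOPS;  pure writes ~ 3300 / 4 =  825 IOPS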

    Read the article
