Search Results

Search found 1650 results on 66 pages for 'sas san'.


  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID 5 + hot spare (8x500 GB spindles) on a PERC 6/i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x2 TB + 1x800 GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2x50 GB, but growing at 10 GB/month) database server... Our application that lives on the database VM is CPU and I/O intensive; it's a database churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on)...

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD setup would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions: What should I do to make an intelligent decision about NAS vs. RAID vs. SAN vs. DASD? Are there sweet spots/ugly spots in the storage setup? Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea? Is there any way to "share" an image in a copy-on-write fashion? Most of the backup-copy-restore is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x50 GB) would only need to be done once per week instead of once per dev per week. (Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.)

    Read the article

  • R plot- SGAM plot counts vs. time - how do I get dates on the x-axis?

    - by Nate
    I'd like to plot this vs. time, with the actual dates (years actually: 1997, 1998, ..., 2010). The dates are in a raw format, à la SAS, days since 1960 (hence the as.Date conversion). If I convert the dates using as.Date to variable x and do the GAM plot, I get an error. It works fine with the raw day numbers, but I want the plot to display the years (the data are not equally spaced).

      structure(list(site = c(928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L, 928L), date = c(13493L, 13534L, 13566L, 13611L, 13723L, 13752L, 13804L, 13837L, 13927L, 14028L, 14082L, 14122L, 14150L, 14182L, 14199L, 16198L, 16279L, 16607L, 16945L, 17545L, 17650L, 17743L, 17868L, 17941L, 18017L, 18092L), y = c(7L, 7L, 17L, 18L, 17L, 17L, 10L, 3L, 17L, 24L, 11L, 5L, 5L, 3L, 5L, 14L, 2L, 9L, 9L, 4L, 7L, 6L, 1L, 0L, 5L, 0L)), .Names = c("site", "date", "y"), class = "data.frame", row.names = c(NA, -26L))

      sgam1 <- gam(sites$y ~ s(sites$date))
      sgam <- predict(sgam1, se=TRUE)
      plot(sites$date,sites$y,xaxt="n", xlab='Time', ylab='Counts')
      x<-as.Date(sites$date, origin="1960-01-01")
      axis(1, at=1:26,labels=x)
      lines(sites$date,sgam$fit, lty = 1)
      lines(sites$date,sgam$fit + 1.96* sgam$se, lty = 2)
      lines(sites$date,sgam$fit - 1.96* sgam$se, lty = 2)

    ggplot2 has a solution (it doesn't mind the as.Date thing), but it gives me other problems...
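
    For reference, a minimal sketch of one common way to get calendar years on the axis (this assumes the data frame is named sites as in the code above and that gam() comes from mgcv; the smooth is still fit on the raw day counts, and the Date conversion is only used for the axis labels):

      library(mgcv)

      sites$dateD <- as.Date(sites$date, origin = "1960-01-01")   # SAS-style day counts -> Date
      sgam1 <- gam(y ~ s(date), data = sites)
      sgam  <- predict(sgam1, se = TRUE)

      plot(sites$dateD, sites$y, xaxt = "n", xlab = "Time", ylab = "Counts")
      axis.Date(1, sites$dateD, format = "%Y")                    # year labels on the x-axis
      lines(sites$dateD, sgam$fit, lty = 1)
      lines(sites$dateD, sgam$fit + 1.96 * sgam$se.fit, lty = 2)
      lines(sites$dateD, sgam$fit - 1.96 * sgam$se.fit, lty = 2)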

    Read the article

  • How can I track down the cause of ext3 filesystem corruption?

    - by Jon Buys
    We have a VMware vSphere 5 environment running CentOS 5.8 virtual machines. In the past two weeks we have had five incidents of virtual machines having a filesystem become corrupt, requiring an fsck to repair. Here is what we see in the logs:

      Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392098: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
      Nov 14 14:39:28 hostname kernel: Aborting journal on device dm-2.
      Nov 14 14:39:28 hostname kernel: __journal_remove_journal_head: freeing b_committed_data
      Nov 14 14:39:28 hostname last message repeated 4 times
      Nov 14 14:39:28 hostname kernel: ext3_abort called.
      Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
      Nov 14 14:39:28 hostname kernel: Remounting filesystem read-only
      Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392099: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
      Nov 14 14:31:17 hostname ntpd[3041]: synchronized to 194.238.48.2, stratum 2
      Nov 14 15:00:40 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2162743: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
      Nov 14 15:13:17 hostname kernel: __journal_remove_journal_head: freeing b_committed_data

    The problem seems to happen while we are rsyncing application data from another server. So far we have been unable to reproduce the problem or identify a root cause. After a few servers had this problem, we assumed there was an issue with the template, so we scrapped all VMs cloned off of the template, destroyed the template, and built a new template from scratch, installed from a newly downloaded CentOS ISO. We use HP EVA SANs for datastores, and moved from a 4400 to a 6300 after the first problem. Since the move and rebuilding new virtual machines we have seen the issue twice. On one VM we shut down the server, removed two virtual CPUs, and booted it back up again; the problem presented itself almost immediately. On the other VM, we rebooted it, and the problem happened half an hour later. Any tips or pointers in the right direction would be appreciated.

    Read the article

  • Distributed File Systems.

    - by GruffTech
    So, I've been reading several articles around Server Fault as well as Google (for example, this link). My requirements are very similar to the link above, but I'd also like to have dynamic, or at least resizable, file volumes, so if necessary I can add 4-5 servers to the pool and then expand the volume. Any distributed file systems that support that, to save me some time? Thanks!

    LustreFS will be my next test cluster to build.

    GlusterFS: I've built a 3-machine test GlusterFS cluster, but I quickly became aware of several of its limitations that it doesn't seem to make clearly public. For one, I can't seem to resize a volume. Once a volume is created, it's done, which seems pointless - why have a fully scalable file system if I can't scale a volume? So maybe I'm doing something wrong; I'm not sure.

    AmazonS3, while it gives the cheapest startup, adds too much cost when broken down per client per month, so it's out. Building my own system, when prorated over several years with no bandwidth costs, makes it significantly cheaper.

    MogileFS isn't an option, as we'd like this server to be a SAN replacement for storing tons of media from a multitude of systems, which for us means it needs to be POSIX compliant so it can be remotely mounted via NFS or CIFS.
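
    On the GlusterFS resize point above: a minimal sketch of online expansion, assuming a release new enough to ship the gluster CLI (3.1 or later) and a distributed volume named media-vol (both names are examples); older releases configured purely through vol files may genuinely lack this:

      # add a brick from a new server to grow the volume, then rebalance existing data onto it
      gluster volume add-brick media-vol server4:/export/brick1
      gluster volume rebalance media-vol start
      gluster volume rebalance media-vol status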

    Read the article

  • _Painless_ integration of Eclipse with Vim?

    - by Adnan
    Hi, has anyone managed to get Vim integrated into Eclipse painlessly? I just want to use Vim for the editor while retaining the general Eclipse interface. I have tried using the Eclim plugin, but the editor seemed to crash more often than work (the site said that the editor replacement functionality is still beta). On the flip side, is there any IDE which matches Eclipse's functionality - mainly the integration with SVN, Ant, etc. - and is also able to use Vim? I mostly use Eclipse for SAS SCL, Java and JavaScript programming and find the Eclipse editor too "mouse-y". I'd also like, in a perfect world, to use vimdiff as a diff viewer for SVN (we use TortoiseSVN) while checking for diffs or conflicts during merges. I admit I haven't spent a lot of time trying to get these things to work; I feel guilty about spending too much time on potential wild-goose chases while my other team members are working away at their code, perfectly content with all that Eclipse has to offer.

    Edit: Just found this while desperately browsing around: Vim plugin. Any experience using this? It's Saturday afternoon in Kiwi land, so I'll try it later and update if no one has an opinion. From the claims on the site, it sounds perfect.

    Read the article

  • SQL Full-Text indexing not populating

    - by Sam
    We installed a clustered SQL 2005 installation on Windows 2008, reattached our SAN drives from another machine, and restored, to do a migration to new hardware. There have been a few minor issues, but this one has me stuck: trying to populate full-text indexes is not working. I created a basic table with some simple text in a new database and get the same results as the old indexes.

      2010-09-27 10:30:46.85 spid19s Informational: Full-text Full population initialized for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'). Population sub-tasks: 1.
      2010-09-27 10:31:15.36 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001DF. Attempt will be made to reindex it.
      2010-09-27 10:31:15.37 spid19s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'.
      2010-09-27 10:31:15.37 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001E0. Attempt will be made to reindex it.

    The rebuild/repopulate procedure finishes, but I get zero rows in the index. The .dll in the message is present and the service accounts have access to it. My FTData folder also has data in it, so there shouldn't be a permission issue on that folder. The application throws this error:

      PHP Warning: mssql_query() [function.mssql-query]: message: Full-text catalog 'ikm_PageIndex_FText' is in an unusable state. Drop and re-create this full-text catalog. (severity 16) in E:\Inetpub\knowledgebase_insidemesa\lib\database\mssql.php on line 154

    A Microsoft discussion is the only post I found that claimed to fix this - it said it was registry related, but then didn't post the fix.
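
    For reference, the drop/re-create suggested by the application error looks roughly like this in SQL 2005; every name below (database, table, column, key index) is a placeholder except the catalog name from the PHP warning, and this does not necessarily cure the underlying 0x80070003 path error MSFTE is reporting:

      USE MyDatabase;
      GO
      -- the full-text index must be dropped before the catalog that holds it
      DROP FULLTEXT INDEX ON dbo.MyTable;
      DROP FULLTEXT CATALOG ikm_PageIndex_FText;
      GO
      CREATE FULLTEXT CATALOG ikm_PageIndex_FText;
      CREATE FULLTEXT INDEX ON dbo.MyTable (MyTextColumn)
          KEY INDEX PK_MyTable
          ON ikm_PageIndex_FText
          WITH CHANGE_TRACKING AUTO;
      GO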

    Read the article

  • 10GbE SFP+ crossover cable required? Is there such a thing?

    - by dc-patos
    To preface, this is my first experience with 10GbE networking and I have encountered an issue which research does not seem to document a solution for... I have two servers (an older DL580 G5 and a DL380 G5), each with an HP NC522SFP 10GbE dual SFP+ port adapter. I have purchased copper "passive" direct attach cables (which look like twinax), which seem to work well when I connect them to the SFP+ ports on my Dell 5524 switch. However, if I directly connect the two servers with the same cable, the link doesn't come up. I am running WS2012 Standard on each server. My intention is to use one of these servers as a home-brew SAN and I would like to enable multiple 10GbE paths for iSCSI traffic. My questions: Can I connect the two adapters to each other, as I would with other, less speedy generations of Ethernet? If I can, do I require a crossover cable, or some other type of SFP+ cable, to do this? My 10GbE SFP+ switch ports are at a premium, but server-to-server connections are doable in small numbers for me and I would really like the multiple paths this would give me. Is there a simple solution?

    Read the article

  • Can I format a Veritas cluster shared volume from windows?

    - by spaghettidba
    We have a Microsoft failover cluster with dynamic disks managed by Veritas Storage Foundation. Today the sysadmins added a new disk for SQL Server, but the cluster size on the volume was wrong, so I issued a quick format to change it. The disk volume failed, the SQL Server group failed as well, and the cluster became unresponsive. After some minutes I managed to fail over to a passive node. The SAN admins say it's my fault because I shouldn't have formatted the disk from the Windows format applet; I should have used Veritas Enterprise Administrator instead. Can a format operation bring a whole cluster group offline this way?

    Relevant error messages. From the event log:

      The cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually due to a problem in a resource DLL. Please determine which resource DLL is causing the issue and report the problem to the resource vendor.

    From the cluster.log:

      ERR [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Control(STORAGE_GET_DISK_INFO_EX) to resource 'NameOfTheDiskGroup' timed out.'

    Excerpt from Symantec's documentation: "Note: Before manually creating the resource, you must format the cluster-shared volume with NTFS using the VEA GUI and mount it on the node where you are trying to create the resource." Does this mean the disk cannot be formatted from Windows? I don't read it that way. For the record, I have formatted many disks using the Windows applet in the past and nothing bad happened.

    Read the article

  • PC shuts down automatically after 10-20 seconds. No POST screen, no beeps

    - by emzero
    I have this not-so-old computer that hasn't been used for a year or so. Specs:

      Motherboard: ASUS P5N-E SLI
      CPU: Intel Core 2 Duo E4300
      RAM: 2x2 GB SuperTalent DDR2-800
      VGA: Zogis GeForce 7950GT
      PSU: Vitsuba San-55-S 550 W
      HD: No hard drives yet

    When I power on the computer, everything seems to start, but right away the whole system shuts down. I've removed and changed the RAM sticks, taken out the VGA, everything I could think of. So what could be causing this? The PSU? Is the motherboard dead? The CPU? Any help to isolate the problem would be useful. Thanks.

    PS: Please don't close the question; this could be helpful to anybody having a similar problem, even with different hardware.

    UPDATE: I've removed the old thermal paste and applied brand new paste. I also cleaned off all the dust using a high-pressure gas duster. Checked for bad capacitors; all of them seem OK. Opened the PSU, removed giant dust balls, cleaned it with the duster. Still the same problem, but now it stays powered on for almost 20 seconds, maybe. But no POST screen, no beeps at all, nothing. So I suspect a motherboard or PSU failure. Unfortunately I don't have a power supply tester to test the PSU... Don't know what else to try. I don't have another socket 775 motherboard to test the CPU, RAM and VGA with.

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN attached and zoned to the NetApp, and I have written a batch file to do the following (a rough sketch of the return-code handling is below):

      1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
      2. Perform a TSM-based incremental backup
      3. Dismount the LUN

    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
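
    A minimal sketch of the return-code part (the mount/dismount calls are placeholders for whatever SnapDrive commands the existing batch file already uses; the idea is just to capture dsmc's errorlevel and hand it back to the TSM scheduler with exit /b):

      @echo off
      rem Step 1: mount the __RECENT snapshot LUN (placeholder for the existing SnapDrive call)
      call mount_recent_snapshot.cmd
      if errorlevel 1 exit /b 10

      rem Step 2: incremental backup of the mounted drive letter (X: is an example)
      dsmc incremental X:\ -subdir=yes
      set BACKUP_RC=%ERRORLEVEL%

      rem Step 3: dismount the LUN (placeholder), then report the backup result to the scheduler
      call dismount_snapshot.cmd
      exit /b %BACKUP_RC%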

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than others. These are virtual machines on a VMware vSphere cluster. The disk images are stored on a SAN system. I'm trying to find a solution that uses disk space sparingly, while still allowing individual machines to grow easily.

    In theory, I would just create big disks for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to, e.g., 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. By the way, there isn't even a thin-provisioning tag on serverfault.com.)

    Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them - for example, a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online (a sketch of that growing step is below). Now for the actual question: are there better ways to do this? (That is, to grow data size as needed without downtime.) Possible solutions include:

      • Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size
      • Finding an easy method of reclaiming free space on the partition (re-thinning?)
      • Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs. pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
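
    A minimal sketch of the online grow step the plan relies on (device and volume group names are examples; ext3 on these distributions can normally be grown online with resize2fs once the logical volume has been extended, but test on a throwaway VM first):

      # grow the logical volume by 20 GB, then grow the ext3 filesystem to fill it, online
      lvextend -L +20G /dev/vg_data/lv_customer
      resize2fs /dev/vg_data/lv_customer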

    Read the article

  • How does real world login process happen in web application in Java?

    - by Nitesh Panchal
    Hello, I am very confused regarding the login process that happens in a Java web application. I read many tutorials regarding jdbcRealm and JAAS, but one thing I don't understand is why I should use them. Can't I simply check directly against my database of users, and once they successfully log in to the site, store some variable in the session as a flag? I'd then check that session variable on all restricted pages (I mean, keep a filter for the restricted resources' URL pattern); if the flag doesn't exist, simply redirect the user to the login page. Does this approach sound correct? (A rough sketch is below.) If yes, then why did all this JAAS and jdbcRealm stuff come into existence?

    Secondly, I am trying to completely implement SaaS (Software as a Service) in my web application, meaning everything is done through web services. If I use web services, is it possible to use jdbcRealm? If not, is it possible to use JAAS? If yes, please show me some example which uses MySQL as a database and then authenticates and authorizes. I have even heard about Spring Security, but I am confused about that too, in the sense that I don't know how to use web services with Spring Security. Please help me; I am really very confused. I read Sun's tutorials, but they only keep talking about theory. Before programmers can understand a simple concept, they show 100 pages of theory before they finally come to one example.
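
    A minimal sketch of the session-flag filter described above (class, attribute and path names are made up for illustration; it would be mapped to the restricted URL pattern in web.xml):

      import java.io.IOException;
      import javax.servlet.*;
      import javax.servlet.http.*;

      public class LoginCheckFilter implements Filter {
          public void init(FilterConfig config) {}

          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              HttpServletRequest request = (HttpServletRequest) req;
              HttpServletResponse response = (HttpServletResponse) res;
              HttpSession session = request.getSession(false);

              // the login page would set this attribute after checking the user table
              if (session != null && session.getAttribute("loggedInUser") != null) {
                  chain.doFilter(req, res);   // flag present: let the request through
              } else {
                  response.sendRedirect(request.getContextPath() + "/login.jsp");
              }
          }

          public void destroy() {}
      }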

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would make them break, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed to create a network share on a SAN and map this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and need some DBA advice. We are starting to get performance problems with an MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of issues is for sure in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60 GB in one file. The main table (Order) has 2.1 million rows with 215 columns (but none is huge). We have an integer as PK, so it should be OK to define a partition function (a rough sketch is below). Will we gain something with partitioning? Will partitioned indexes buy us something? Here are some more facts about the DB and the table:

      database_name  database_size  unallocated space
      My_base        57173.06 MB    79.74 MB

      reserved       data           index_size    unused
      29 444 808 KB  26 577 320 KB  2 845 232 KB  22 256 KB

      name   rows       reserved      data          index_size    unused
      Order  2 097 626  4 403 832 KB  2 756 064 KB  1 646 080 KB  1688 KB

    Thanks for any advice. Dom
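
    A minimal sketch of what range-partitioning on the integer PK could look like in SQL 2005 (the boundary values, filegroup choice and column/index names are placeholders; whether this actually helps depends on whether queries and maintenance can be aligned to the partition key):

      -- partition function and scheme on the integer order key
      CREATE PARTITION FUNCTION pf_OrderId (int)
          AS RANGE RIGHT FOR VALUES (500000, 1000000, 1500000, 2000000);

      CREATE PARTITION SCHEME ps_OrderId
          AS PARTITION pf_OrderId ALL TO ([PRIMARY]);

      -- rebuilding the clustered index on the scheme moves the table onto the partitions
      -- (same key columns must be kept if this index backs the primary key constraint)
      CREATE UNIQUE CLUSTERED INDEX PK_Order
          ON dbo.[Order] (OrderId)
          WITH (DROP_EXISTING = ON)
          ON ps_OrderId (OrderId);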

    Read the article

  • Copy/paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got this folder which contains 15,000 tiny images (around 400 bytes each). If I copy and paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD) it takes about 30 seconds (yes, for 7 megs!!!); the average transfer rate is 400 KBytes/second, which is so slow. I mean, my usual transfer rate is more like hundreds of MBytes per second!!! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 FS), which runs on the same SAN as all the Windows servers I've tested, it's nearly instantaneous!!! I'm pretty sure the size/number of the files may stress one filesystem more than another, but such differences!? Why is that? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (one second or so)? (I mean, this was not a hard link that I created; it was a true copy.) Is this normal behaviour or something unusual?

    Read the article

  • Mac Mavericks, ngircd localhost works, private IP doesn't

    - by user221945
    I have configured ngircd to listen on my private IP address. It doesn't. Localhost works fine. Configuration test:

      ngIRCd 21-IDENT+IPv6+IRCPLUS+SSL+SYSLOG+TCPWRAP+ZLIB-x86_64/apple/darwin13.2.0
      Copyright (c)2001-2013 Alexander Barton () and Contributors. Homepage: http://ngircd.barton.de/
      This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
      Reading configuration from "/opt/local/etc/ngircd.conf" ...
      OK, press enter to see a dump of your server configuration ...

      [GLOBAL]
        Name = irc.bellbookandpistol.com
        AdminInfo1 = Jaedreth
        AdminInfo2 = San Diego County CA, US
        AdminEMail = [email protected]
        HelpFile = /opt/local/share/doc/ngircd/Commands.txt
        Info = Server Info Text
        Listen = 10.0.1.5,127.0.0.1
        MotdFile =
        MotdPhrase = "Welcome to irc.bellbookandpistol.com"
        Password =
        PidFile =
        Ports = 6667
        ServerGID = wheel
        ServerUID = root
      [LIMITS]
        ConnectRetry = 60
        IdleTimeout = 0
        MaxConnections = 0
        MaxConnectionsIP = 6
        MaxJoins = -1
        MaxNickLength = 9
        MaxListSize = 0
        PingTimeout = 120
        PongTimeout = 20
      [OPTIONS]
        AllowedChannelTypes = #&+
        AllowRemoteOper = no
        ChrootDir =
        CloakHost =
        CloakHostModeX =
        CloakHostSalt = kBih5mu\kVI!DC6eifT(hd4m/0'zb/=:
        CloakUserToNick = no
        ConnectIPv4 = yes
        ConnectIPv6 = no
        DefaultUserModes =
        DNS = yes
        IncludeDir = /opt/local/etc/ngircd.conf.d
        MorePrivacy = no
        NoticeAuth = no
        OperCanUseMode = no
        OperChanPAutoOp = yes
        OperServerMode = no
        RequireAuthPing = no
        ScrubCTCP = no
        SyslogFacility = local5
        WebircPassword =
      [SSL]
        CertFile =
        CipherList = HIGH:!aNULL:@STRENGTH
        DHFile =
        KeyFile =
        KeyFilePassword =
        Ports =
      [OPERATOR]
        Name = [REDACTED]
        Password = [REDACTED]
        Mask =
      [CHANNEL]
        Name = #BBP
        Modes = tnk
        Key =
        MaxUsers = 0
        Topic = Welcome to the Bell, Book and Pistol IRC Server!
        KeyFile =

    As you can see, it should be listening on 10.0.1.5, but it isn't. After turning on Apache manually, port 80 works on 10.0.1.5, but port 6667 doesn't; it only works on localhost. Is there some terminal command I could use, or some config file I could edit, to get this to work?

    Read the article

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server, configured like this:

      NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc.)
      NIC03: connected to iSCSI VLAN #1
      NIC04: connected to iSCSI VLAN #2
      NIC05: dedicated to one virtual switch for VMs
      NIC06: dedicated to another virtual switch for VMs

    The iSCSI NICs are used, obviously, for storage to host the VMs. I put half the VMs on the host on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 & NIC06 are connected to are trunked, and we then tag the NIC on the VM for the appropriate VLAN. No clustering on this host. Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have two options:

      1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
      2. Create two additional virtual switches on the host. Assign one to NIC03 and one to NIC04. Add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host.

    I'm wondering how much overhead the VLAN tagging in Hyper-V has, and I haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs or cause some other problem that could threaten storage access for the entire host, which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?

    Read the article

  • SQL Server performance on vSphere 4.0

    - by Charles
    We are having a performance issue that we cannot explain with our VMware environment, and I am hoping someone here may be able to help. We have a web application that uses a database backend. We have an SQL 2005 cluster set up on Windows 2003 R2 between a physical node and a virtual node. Both physical servers are identical 2950s with 2x Xeon X5460 quad-core CPUs and 64 GB of memory, 16 GB allocated to the OS. We are utilizing an iSCSI SAN for all cluster disks. The problem is this: when exercising the application under repeated stress testing that adds CPUs to the cluster nodes, the physical node scales from 1 pCPU to 8 pCPUs, meaning we see continued performance increases. When testing the node running vSphere, we have the expected 12% performance hit for being virtual, and we still scale from 1 vCPU to 4 vCPUs like the physical node, but beyond this performance drops off; by the time we get to 8 vCPUs we are seeing performance numbers worse than at 4 vCPUs. Again, both nodes are configured identically in terms of hardware, guest OS, SQL configuration, etc., and there is no traffic other than the testing on the system. There are no other VMs on the virtual server, so there should be no competition for resources. We have contacted VMware for help, but they have not really been any, suggesting things like setting SQL processor affinity which, while helpful, would have the same net effect on each box and should not change our results in the least. We have looked at all of VMware's SQL tuning guides with regard to vSphere with no benefit. Please help!

    Read the article

  • Salary Survey Entry Level network position [closed]

    - by will
    Hello, I started interning with a company about 5 months ago and for the past 7 months I have been a normal part-time employee. This week I have a review, where I am hoping to get a raise. I started at $8 interning, and now I'm up to $13. What I am trying to figure out is how to survey what others are making in similar positions, so I can take it to the review as a base number. Here are my thoughts on my position right now.

    Review thoughts. My qualifications:
      • Associate of Applied Science - IT Network Specialist
      • CompTIA A+ certification
      • CompTIA Server+ certification
      • CompTIA Network+ certification
      • Currently pursuing Cisco certifications
      • Junior status at Insert College, pursuing a Bachelor's in Information Technology with an emphasis in networking; 3.4 GPA
      • 1 year of working at Insert Company

    My contributions to Insert Company:
      • Offering near-fulltime work through the semester and fulltime through the summer
      • Ability to work after hours and on weekends
      • Developed and support the helpdesk system
      • Set up and maintain an update server to keep desktop clients up to date
      • Deployed and maintain an antivirus solution for end users
      • Assist with main projects such as SAN, virtualization, and network survey
      • Migrating end-user stations to Windows 7 (current project)
      • Developing an imaging solution for desktop PCs (current project)

    Any tips on determining an asking number would help. I was thinking $17-$18; am I way off here?

    Read the article

  • Installing Ubuntu guest crashes Hyper-V host

    - by Grant
    I have a weird problem that I don't even know where to begin diagnosing: trying to install Ubuntu in a VM locks up the host system! My setup is:

      Dell R715 server, dual 16-core AMD Opteron processors, 96 GB RAM
      Dell MD3600f SAN
      Server 2008 R2 Datacenter
      System Center VMM 2012

    There are 5 Windows virtual machines running that have had no problems. This is the first Linux VM I've tried to create. I set up a VM through Virtual Machine Manager, set the CD drive to an Ubuntu 12.04 Server x64 ISO, and started it up. It boots to the normal Ubuntu install menu, but the second I hit Enter on "Install Ubuntu Server", I get disconnected. The HOST machine stops responding to pings, and so do all virtual machines on it. It locks up entirely - the keyboard on the host won't work, the mouse won't move, the Num Lock light won't change. There's no blue screen - the host is sitting at the login screen completely unresponsive. I can't find any relevant logs in Event Viewer after rebooting. What could cause the host machine to freeze like that? It's not a one-time occurrence - it happens every time at the exact same point. Thank god this server isn't in production yet!

    Read the article

  • overwrite parameters passed by querystring

    - by opensas
    I have the following problem: I have a web framework built with classic ASP that saves the page state in hidden textboxes, and then issues a submit to itself. Before submitting, we have a JavaScript function that saves the action in a hidden "action" input, and then performs the submit. The page loads the state from those hidden texts, reads the action issued, reads extra parameters (like the id of the record to edit), and then builds the page accordingly. I'd like to make a URL link that automatically starts the page with the "edit" action on id "x". So I was thinking about building the following URL, for example: http://myapp/user?action=edit&id=23. The problem is that when the page auto-submits, the URL string keeps the parameters. I'd like to achieve the following: when the user clicks on http://myapp/user?action=edit&id=23, my page should receive the posted values action=edit and id=23, but the URL should be just http://myapp/user, and both parameters should be kept in the hidden texts... (I wonder if I make myself clear...) Thanks a lot, saludos, sas. PS: I have a couple of ideas about how to solve it, but I'll post them as answers...
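
    A minimal sketch of one common classic ASP pattern for this (the field names match the description above, but the page name and the exact hidden-field names the framework expects are assumptions): if the page arrives with querystring parameters, echo them into hidden inputs and immediately re-POST to the clean URL, so they arrive as form values instead of staying in the address bar.

      <% If Request.QueryString("action") <> "" Then %>
        <form id="bounce" method="post" action="user">
          <input type="hidden" name="action" value="<%= Server.HTMLEncode(Request.QueryString("action")) %>">
          <input type="hidden" name="id" value="<%= Server.HTMLEncode(Request.QueryString("id")) %>">
        </form>
        <script type="text/javascript">document.getElementById("bounce").submit();</script>
      <% End If %>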

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10 Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by the mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:

      rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?

    Read the article

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing, because it provides a true hardware failover solution. The DB would be stored in a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or with RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual-processor boxes running these systems? If we build out the Oracle boxes with special hardware - hardware iSCSI cards, TOE NICs - will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem. Our architecture is a number of load-balanced web servers which have their own local webapp drives which are kept in sync, with one server acting as the master copy. There is then a separate DFS share provided to all web servers from our Pillar SAN. Usually a Drupal install under our LAMP cluster would have the main Drupal web app in a local HTDOCS mount for each server, and the SITES directory within Drupal would then be symlinked out to the DFS or NFS share so that there is a common FILES and TMP directory. The problem I'm having is that there seems to be no equivalent of symlinks on Windows Server 2008; shortcuts have a .lnk at the end, making Apache see them as a distinct file. So I've tried using an Alias in the vhost file like this:

      <Location /drupal-626/sites>
        Order deny, allow
        Allow from all
      </Location>
      Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is:

      http://main-domain-url/drupal-626/

    Unfortunately this isn't working, so I'm wondering if any of you have a solution that would work? Many thanks for taking the time to read this.
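
    One thing that may be worth testing (an assumption on my part, not something from the question): Windows Server 2008 does have NTFS directory symbolic links via mklink, and Apache generally treats them as ordinary directories; whether such a link may point at a mapped drive or UNC path in this environment, and whether the Apache service account can traverse it, would need to be verified.

      rem create a directory symbolic link where Drupal expects its sites folder (paths are examples)
      mklink /D "C:\htdocs\drupal-626\sites" "Z:\Path to alternate sites directory"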

    Read the article

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O performance, and tons of unused disk space. I guess it's a universal issue, since vendors offer 300 GB as their smallest drive capacity but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that has 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time: the redo logs get their own 300 GB mirror, but take 30 GB - 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so the sysadmin team would be reminded about the risk of hindering the performance of the main db/app? Or, even better, be protected from that risk? The typical situation that comes to my mind is: "Oh, I have this very large zip file, where do I uncompress it? Umm, let's see the df -h and we'll figure something out in no time..." I don't put emphasis on strictness of the security (sysadmins are trusted to act in good faith), but on the overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?
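
    On that last point, a minimal sketch of one way it can be done on newer kernels (2.6.37 and later), using the cgroup blkio throttling controller rather than a special filesystem; the device numbers, paths and limits are examples, and anything launched from inside the cgroup inherits the cap:

      # create a low-priority cgroup and cap reads/writes on /dev/sdb (major:minor 8:16) to 50 IOPS
      mkdir -p /sys/fs/cgroup/blkio/scratch
      echo "8:16 50" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_iops_device
      echo "8:16 50" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_iops_device

      # run the space-hungry task from inside the throttled group
      echo $$ > /sys/fs/cgroup/blkio/scratch/tasks
      unzip /data/scratch/big-archive.zip -d /data/scratch/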

    Read the article
