Search Results

Search found 10482 results on 420 pages for 'virtual lab'.


  • Bouncing between a 502 and 503 error

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client who purchased a domain at JustHost. We built him a website and host it on our own server space.

    I'm used to dealing with GoDaddy, where it is simple enough to manage the DNS records and point the A record to our server IP, where Apache on our end handles the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name I get either a 502 or 503 error or "webpage does not exist". I know that the basic functionality of the site must be working, because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I should also mention that we have several other sites up and running and completely accessible.

    I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..." I've checked my logs and nothing seems to be coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks.
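
    Since nothing is reaching the logs, a quick way to tell whether the A record ever left JustHost is to compare what DNS returns with a direct request to your own Apache. A minimal sketch (the domain and IP below are placeholders):

        # What does the A record actually resolve to right now?
        dig +short clientdomain.com

        # Ask your own server for the site directly, bypassing DNS
        curl -I -H "Host: clientdomain.com" http://203.0.113.10/

    If the curl returns your site but dig shows a JustHost address, the 502/503 pages are being generated by JustHost's own infrastructure and the A record was never actually changed.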

    Read the article

  • How to replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that if it fails I have an immediate backup. Any ideas?

    NOTE 1: I forgot to add that this server is on the Amazon EC2 cloud.

    NOTE 2: The main situation we have is recreating the configuration settings like IIS, FTP server, SQL Server and SVN server.

    NOTE 3: So far I have been given three options as answers to my original question:

    AppAssure -- after talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but not EC2.

    Acronis -- which works as a backup in Ghost style. This will work for other types of scenarios.

    Use the Amazon EC2 API -- this option is ideal, but only works if you are developing a cloud application rather than hosting a regular application in a cloud scenario.

    This means that I am still looking for the answer. Any other ideas?
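
    On the EC2 API option: even for a regular hosted application (rather than a cloud-native one), a periodic machine image of the instance captures IIS and the rest of the configured state in one restorable unit. A hedged sketch with the AWS CLI, which postdates this question's ec2-api-tools era; the IDs are placeholders:

        # Snapshot the running server into an AMI (drop --no-reboot for consistency)
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "prod-web-backup" --no-reboot

        # If the server dies, launch a replacement from the image
        aws ec2 run-instances --image-id ami-0abc1234 --instance-type m1.large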

    Read the article

  • Postfix "mail-to-script" pipe only delivers empty messages

    - by user68202
    I have a problem here. I want an incoming email to be piped to a PHP script on the system through Postfix. My system runs ISPConfig 3, Postfix and Dovecot (virtual mailbox users are saved in MySQL). I already looked into this one: How to configure postfix to pipe all incoming email to a script? The script is executed, but no message is delivered to it.

    My setup so far: in ISPConfig 3 I have set up the following email route:

        Active  Server       Domain            Transport  Sort by
        Yes     example.com  pipe.example.com  piper:     5

    Excerpt from my Postfix master.cf:

        piper unix - n n - - pipe
          user=piper:piper directory=/home/piper
          argv=php -q /home/piper/mail.php

    So far this is working; for mail sent to [email protected], mail.log shows:

        Jun 21 16:07:11 example postfix/pipe[10948]: 235CF7613E2: to=<[email protected]>, relay=piper, delay=0.04, delays=0.01/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via piper service)

    ...and there are no errors in mail.err. The mail.php script is successfully executed (it is chmod 777 and chown'ed to piper), but it creates an empty .txt file where it should normally save the email message:

        -rw------- 1 piper piper 0 Jun 21 16:07 mailtext_1340287631.txt

    The mail.php script I used is the one from http://www.email2php.com/HowItWorks. If I use their (commercial) service to pipe an email to the same mail.php (in an Apache2 environment) through a provided "pipe email" address, the message is saved successfully and completely:

        -rw-r--r-- 1 web2 client0 1959 Jun 21 16:19 mailtext_1340288377.txt

    But as you can see, I don't want to use external services. So, what's wrong here? I think it has something to do with the delivery configuration on my system...
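
    For reference, Postfix's pipe(8) daemon writes the complete message (headers and body) to the command's standard input, so whatever mail.php does internally it must read php://stdin when invoked this way. A minimal sketch of a pipe target to test the delivery side in isolation (the filename scheme is just an example):

        #!/usr/bin/php
        <?php
        // pipe(8) hands us the full message, headers + body, on stdin
        $message = file_get_contents('php://stdin');
        file_put_contents('/home/piper/mailtext_' . time() . '.txt', $message);

    If a stripped-down script like this also produces an empty file, the problem is in the transport definition rather than in mail.php itself.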

    Read the article

  • Cannot Access Shared Folder From IIS

    - by Tim Scott
    From IIS I need to access a folder on another computer. Both servers are Windows Server 2008 SP2, and they live in a Virtual Private Cloud on Amazon EC2. They reach one another by private IP -- they are in WORKGROUP, not a domain. I can access the shared folder manually when logged in to the client as Administrator, but IIS gets "access denied". Here's what I have done:

    Set File Sharing = ON
    Set Password Protected Sharing = OFF
    Set Public Folder Sharing = ON
    Shared the folder
    Added permission to the share: Everyone, Full Control
    Added permission to the share: NETWORK SERVICE, Full Control
    Verified that File & Printer Sharing is checked in Windows Firewall
    Opened port 445 to inbound traffic from local sources

    I tried adding <remote-machine-name>\NETWORK SERVICE to the share, but it says it does not recognize the machine, which makes sense, I guess. As I said, from the other computer I have no trouble accessing the shared folder from my user account, but IIS is shut out. How does the file server even know the difference? I would assume that with Everyone given Full Control and password-protected sharing turned off, it would not matter what the client user account is. In any case, how do I solve this?

    UPDATE: To clarify, I am not trying to serve up files on the share directly through IIS. Rather, I am writing files to the share from my code (System.IO).
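
    Part of the answer to "how does the file server know": NETWORK SERVICE authenticates over the network as the machine account, which means nothing to a workgroup file server, and "Everyone" still excludes logons that fail outright. A common workaround is mirrored local accounts: create the same username and password on both machines and run the application pool as that user. A sketch (the account name and password are placeholders):

        :: On BOTH machines, create an identical local account
        net user svcshare Str0ngP@ss1 /add

        :: On the file server, grant it rights on the shared folder
        icacls D:\Share /grant svcshare:(OI)(CI)M

    Then set the IIS application pool identity to .\svcshare (Application Pools > Advanced Settings > Identity) so the share sees a logon it can actually match.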

    Read the article

  • External component has thrown an exception (ASP.NET page postback)

    - by Brandon
    I have an .aspx page that communicates with a web service I have. It connects to a SQL Server database on my virtual dedicated server. With just a little usage, I get this error:

        External component has thrown an exception.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Runtime.InteropServices.SEHException: External component has thrown an exception.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SEHException (0x80004005): External component has thrown an exception.]
           Luxand.FSDK.Initialize(String DataFilesPath) +0
           WebService.onLoad() +70
           WebService..ctor() +91
           facematch.btn_submit_Click(Object sender, EventArgs e) +218
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +105
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +107
           System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +7
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +11
           System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1746
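
    The trace shows Luxand.FSDK.Initialize being called from the web service constructor, i.e. on every postback, and native SDKs like this typically tolerate only one initialization per process. A hedged sketch of a once-per-process guard (the Initialize signature is taken from the stack trace; everything else here is a hypothetical wrapper, and error handling depends on your FSDK version):

        // Hypothetical bootstrap: run the native initialization exactly once
        public static class FaceSdkBootstrap
        {
            private static readonly object Gate = new object();
            private static bool initialized;

            public static void EnsureInitialized(string dataFilesPath)
            {
                lock (Gate)
                {
                    if (initialized) return;
                    Luxand.FSDK.Initialize(dataFilesPath); // signature per the trace
                    initialized = true;
                }
            }
        }

    Calling FaceSdkBootstrap.EnsureInitialized(...) from Application_Start (or from the constructor) instead of calling Initialize directly should stop repeated native initialization under load.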

    Read the article

  • When I try to access a website without www I get access denied.

    - by madphp
    I have an Apache web server on a Debian machine. I'm using Virtualmin to administer virtual hosts. I have two sites on this server right now; when I try to access one site without the www in the URL I get an access denied, while the other site is fine. The site with the problem is a CakePHP app and has the following .htaccess file in the public_html folder:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule    ^$    app/webroot/     [L]
            RewriteRule    (.*)  app/webroot/$1   [L]
        </IfModule>

    Below are the directives for the problem domain:

        SuexecUserGroup "#1001" "#1001"
        ServerName mydomain.net
        ServerAlias www.mydomain.net
        ServerAlias webmail.mydomain.net
        ServerAlias admin.mydomain.net
        DocumentRoot /home/mydomain/public_html
        ErrorLog /var/log/virtualmin/mydomain.net_error_log
        CustomLog /var/log/virtualmin/mydomain.net_access_log combined
        ScriptAlias /cgi-bin/ /home/mydomain/cgi-bin/
        ScriptAlias /awstats/ /home/mydomain/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/mydomain/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/mydomain/cgi-bin>
            allow from all
        </Directory>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} =webmail.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:20000/ [R]
        RewriteCond %{HTTP_HOST} =admin.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:10000/ [R]
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 31
        <Files awstats.pl>
            AuthName "mydomain.net statistics"
            AuthType Basic
            AuthUserFile /home/mydomain/.awstats-htpasswd
            require valid-user
        </Files>
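
    Since ServerName mydomain.net is present, Apache should match the bare domain, which shifts suspicion to DNS or to the request landing in a different virtual host. Two quick checks to run on the server:

        # Does the bare name resolve to this server at all?
        dig +short mydomain.net www.mydomain.net

        # Which vhost does Apache think owns each name?
        apache2ctl -S

    If apache2ctl -S shows the bare name unlisted or served by the other site's vhost (the first vhost loaded wins for unmatched names), the "access denied" is coming from that configuration rather than from this one.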

    Read the article

  • Computer always freezing after random periods of time. No errors listed.

    - by Wesley
    Hi, all. Here are my specs beforehand: AMD Athlon XP 2400+ @ 2.00 GHz, 160GB IDE HDD, 128MB GeForce 6200 AGP, 2 x 512MB PC3200 DDR RAM, CoolMax 350W PSU, 1 x CD-RW drive, 1 x DVD-ROM drive, FIC AM37 mobo, Windows XP Pro SP3.

    My desktop freezes after random periods of time, but there are no errors listed in the Event Viewer after a forced shutdown and restart. A couple of months ago I found that when it froze, the floppy was being accessed at the same time, so I disconnected the floppy (since I never used it anyway) from the power supply and motherboard. Everything was working fine and the computer never froze.

    This past Christmas break I left for a conference, and when I got home the computer kept freezing again. So this time I disconnected the DVD reader (from mobo and power) and started it up. Still, it froze almost right away. Then I found some older sticks of RAM (2 x 256MB PC2100 DDR) and swapped them in, and everything worked fine again. I even swapped the 2 x 512MB PC3200 DDR RAM back in and everything worked okay. Then it started freezing again, and I tried all possible RAM combos, still freezing within 5-10 minutes of startup.

    One thing I've realized is that the floppy drive is still listed in My Computer, even though I already uninstalled it from Device Manager. There were no software errors, and I uninstalled the most recent software, with no effect. Still, I have no idea what is wrong because everything ran fine before that. Any suggestions?

    EDIT: Still have yet to buy some blank CD media to use Memtest86+. However, would a lack of virtual memory cause the computer to freeze?

    EDIT 2: After a long time, I ran Memtest86+ and all is well. It turns out that after removing my original DVD reader and replacing it with a DVD-ROM drive, there is no freezing whatsoever! Thanks for all your suggestions!

    Read the article

  • Destination NAT Onto the Same Network from internal clients

    - by mivi
    I have a DSL router which acts as NAT (SNAT & DNAT). I have set up a server on the internal network (10.0.0.2 at port 43201), and the DSL router was configured to "port forward" (DNAT) all incoming connections to 10.0.0.2:43201. I created a virtual server for port forwarding on the DSL router and also added the following iptables rules:

        iptables -t nat -A PREROUTING -p tcp -i ppp_0_1_32_1 --dport 43201 -j DNAT --to-destination 10.0.0.2:43201
        iptables -I FORWARD 1 -p tcp -m state --state NEW,ESTABLISHED,RELATED -d 10.0.0.2 --dport 43201 -j ACCEPT

        # ppp_0_1_32_1 is the router's external interface.
        # The router's internal IP address is 10.0.0.1 and the server is at 10.0.0.2:43201.

    The problem is that connections coming from external IP addresses can reach the internal server using the external IP address, but internal clients (behind the NAT) cannot. For example, http://<external_address>:43201 works from external clients, but internal clients are not able to access it.

    This seems to be the problem described in http://www.netfilter.org/documentation/HOWTO/NAT-HOWTO-10.html (NAT HOWTO: Destination NAT Onto the Same Network). Firstly, I do not understand why this is a problem for internal clients. Secondly, what iptables rule will enable internal clients to reach the server using the external IP address? Please advise.
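
    What goes wrong for internal clients: the router DNATs the LAN client's packet to 10.0.0.2, but the server then sees a source address on its own subnet and replies to the client directly; the client drops that reply because it expects one from the external address. The usual fix (hairpin NAT) is to also rewrite the source of LAN-to-LAN forwarded connections so replies flow back through the router. A sketch under those assumptions, where br0 stands in for the router's LAN interface:

        # DNAT requests arriving from the LAN side as well
        iptables -t nat -A PREROUTING -i br0 -d <external_address> -p tcp --dport 43201 \
                 -j DNAT --to-destination 10.0.0.2:43201

        # Masquerade LAN->server traffic so the server replies via the router
        iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -d 10.0.0.2 -p tcp --dport 43201 \
                 -j MASQUERADE

    The tradeoff is that for these connections the server's logs show the router's address instead of the real LAN client.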

    Read the article

  • Need advice on choosing software/hardware for virtualization

    - by Anatoly
    Currently we have these servers:

    1. Windows SBS 2003 Premium on an IBM X266 (dual Xeon F43) with 2GB RAM: DC, Exchange (70 users), MSSQL.
    2. Windows 2003 R2 32-bit on an IBM x3400 (dual Xeon E5310) with 4GB RAM: terminal server (40+ users), an ERP application based on the uniPaaS platform from Magic Software, and Pervasive SQL.
    3. Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system and a phpBB3 forum for internal use.

    Recently the number of concurrent users on the terminal server passed 40 in rush hours and it gets stuck frequently, so we need an upgrade. I am thinking about moving all physical servers to virtual servers on a cluster of 2 physical hosts, to reduce downtime. I expect we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Windows XP/7 workstations (office, ERP etc.), and there is a small chance we will need Asterisk/HylaFAX for 100 users (if that is possible on the same VM host). We also need NAS storage of 2-3TB.

    What hardware upgrade/purchase do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose, Acronis or something else? Thank you in advance.

    Read the article

  • Trying to limit IMAP folders/mailboxes my iPhone/iPad sees

    - by QuantumMechanic
    (Note: I am using Dovecot 1.0.10 on Ubuntu 8.04.4 LTS. Yes, I know I need to upgrade before next year.)

    (Note: The SMTP/IMAP server in question only serves my family, so there are only a very few users. Certainly what I propose below, even if it works, would be a logistical nightmare with any significant number of users.)

    I have noticed (and confirmed via Google) that the iOS Mail app is terrible in its handling of IMAP subscriptions, namespaces, etc. For example, my iPhone and iPad will see EVERYTHING (all mailboxes, folders, etc.), whereas clients like Thunderbird, Alpine, etc. only see what I tell them to see. This makes it an incredible pain to move mail between mailboxes, because I have to scroll through a gazillion things. The mail_location in dovecot.conf is:

        mail_location = mbox:%h/Mail/:INBOX=/var/mail/%u

    To get around this, I've been considering doing the following for user foo:

        # Create a dovecot userdb with a foo-ios virtual user in it, whose UID
        # is identical to that of the real (in /etc/passwd) foo user and with
        # a homedir of /home/foo-ios; then:
        ln -s /var/mail/foo /var/mail/foo-ios
        mkdir -p /home/foo-ios/Mail
        cd /home/foo-ios/Mail
        ln -s /home/foo/Mail/mailbox-i-want-visible mailbox-i-want-visible
        # ...make symlinks for the rest of the limited set of mailboxes and
        # folders I want visible to the iOS mail app, and finally:
        chown -R foo:foo /home/foo-ios

    Then I change the iOS Mail app settings to log in as user foo-ios instead of user foo. Will this work, or will there be some index/file corruption hell because there will be two sets of indexes (one set living in /home/foo/Mail/.imap and the other in /home/foo-ios/Mail/.imap) indexing the same underlying mbox files? And I'd be more than happy to hear of a better way to do this with Dovecot! (Or to hear that Dovecot 2.x works better with iOS devices.)
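
    On the index question: two sets of indexes over the same mbox files is something Dovecot's mail_location syntax can express explicitly with the INDEX parameter, which keeps each login's indexes private without relying on the homedir layout alone. A hedged sketch (same syntax family as the mail_location above; verify against the 1.0.x documentation):

        # %h differs between foo and foo-ios, so each gets its own index tree
        mail_location = mbox:%h/Mail/:INBOX=/var/mail/%u:INDEX=%h/Mail/.imap

    The mbox files themselves should stay consistent because both "users" go through the same Dovecot locking; the indexes are only caches that each side rebuilds independently.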

    Read the article

  • My internet drops and will only reconnect if I troubleshoot it or restart my computer.

    - by Paul
    My internet loses its connection, and at the same time my speaker seems broken and my mouse is not as smooth as it was before it happens. This has happened before and I didn't mind it, but now it's happening literally all the time.

    After startup my connection seems okay, but after a few minutes, even without me doing anything, my laptop will just lose its connection. It says that it is not connected and connections are available, but if I check for connections to join, it shows nothing -- even though there actually is a Wi-Fi network that my phone and other laptops connect to. They don't seem to have a problem with it; just my laptop.

    When that happens I usually restart my laptop, but I discovered that running the troubleshooter will sometimes (not always) repair the problem: it says that restarting my wireless adapter did the job. Is there a long-term fix, something that could last or totally erase the problem?

    I have an Atheros AR5B97 Wireless Network Adapter and a Microsoft Virtual WiFi Miniport Adapter, both enabled. In Services, I have Wired AutoConfig and WLAN AutoConfig set to Automatic. Aspire 4750, Windows 7 (x86).
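
    Since the troubleshooter's "restart the wireless adapter" step is what fixes it, the same reset can be scripted and run on demand while hunting the root cause (often a power-management or driver issue on Atheros chips). A sketch from an elevated command prompt; the interface name must match what netsh reports on your machine:

        :: Find the exact interface name first
        netsh interface show interface

        :: Bounce the wireless adapter (the name below is an example)
        netsh interface set interface name="Wireless Network Connection" admin=disabled
        netsh interface set interface name="Wireless Network Connection" admin=enabled

    It is also worth unchecking "Allow the computer to turn off this device to save power" on the Atheros adapter in Device Manager, and updating that driver, before trying anything more drastic.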

    Read the article

  • Windows 95 virtualization and CPU idle

    - by brtlvrs
    We are in the situation that we need to migrate our Windows 95 systems (I know, that is so last century). There is no hardware available for a reasonable price to run our Windows 95 systems, so we are extending their overdue lifetime by making them virtual.

    Now we see that VMware reports 100% CPU utilization for a Windows 95 VM. This is because Windows 95 never idles the CPU. For this reason software like Rain, Waterfall or CPUCool was introduced: these send a HLT instruction to the CPU, which causes it to halt and wait for new work. I've tested the mentioned programs in the VM, but they don't work; they only generate errors. Does anyone have a valid workaround or solution?

    By the way, I know that the best solution is to replace Windows 95 with Windows XP, but in our situation that will take at least 5 years. Our Windows 95 systems run factory process control software.
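
    If no in-guest idler works, the damage can at least be contained from the host side by capping the VM's CPU allocation: the guest still spins, but only up to the limit. A hedged sketch with PowerCLI (the VM name and limit are examples; the same setting is reachable in the vSphere Client under the VM's resource allocation):

        Get-VM "win95-vm" | Get-VMResourceConfiguration |
            Set-VMResourceConfiguration -CpuLimitMhz 500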

    Read the article

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated web server running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program, so we installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with sudo -u username -g groupname command and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets.

    Problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: marked traffic should never be allowed to go directly out through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that these rules didn't work because the packets are first marked, then put into tun0 and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that marked packets may pass eth0 if they were in tun0 before or if they are going to the gateway of my VPN provider. Does someone have any idea for a solution?

    Some config info:

        # iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target    prot opt in out source    destination
        1     591K   50M MARK      all  --  *  *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK  all  --  *  *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 CONNMARK save

        # iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target    prot opt in out  source    destination
        1       15  1052 SNAT      all  --  *  tun0 0.0.0.0/0 0.0.0.0/0   mark match 0x2a to:VPN_IP

        # ip rule add from all fwmark 42 lookup 42
        # ip route show table 42
        default via VPN_IP dev tun0
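
    An alternative that sidesteps the filtering problem is to make routing table 42 the only way out for marked packets: if its default route through tun0 disappears, a blackhole route catches them instead of letting the lookup fall through to the main table and out via eth0. A sketch under that assumption:

        # Low-priority catch-all in table 42: only used when the tun0 default
        # route is gone, so marked traffic is dropped rather than leaked
        ip route add blackhole default metric 1000 table 42

    When tun0 goes down the kernel flushes its routes, the blackhole becomes table 42's best default, and the marked program simply loses connectivity until the VPN returns.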

    Read the article

  • VirtualBox VM running web server not accessible via external IP

    - by mwigdahl
    I have a Windows 7 machine running VirtualBox with an Ubuntu guest. The guest has a Bitnami LAMP stack installed. I have the guest configured for bridged networking, and I can access the guest web server just fine from other machines on my LAN using the guest's IP.

    I'm trying to configure port forwarding so that I can access the web server from outside my LAN. (The router is a 2Wire model, as I'm on AT&T's U-verse.) I've set up port forwarding for ports 80 and 443 to the guest's IP, in a similar manner to how I had them set up for my previous, physical web server, which worked just fine. However, I cannot seem to access the new, virtual web server using my external IP on the forwarded port. I suspected Windows Firewall issues on the host, but disabling it didn't solve the issue. Anyone have advice on what I should try next?

    EDIT: I've now attempted disabling the firewall on the guest with sudo ufw disable -- that doesn't seem to help either. However, after checking the router's port forwarding in more detail I may see the problem. My VM is named "linux" and in the router's configuration pages it shows up inconsistently: sometimes it reports a valid LAN IP and other times it doesn't show up with any IP at all. Even when it shows the correct IP, the router indicates that it is disconnected. Could this be an indication that the 2Wire router doesn't play well with VirtualBox's bridged networking mode?
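
    The flapping entry in the 2Wire's device list suggests the router keeps losing track of the guest's MAC-to-IP binding. Pinning a static address inside the guest, and forwarding to that address rather than to the tracked device, takes the router's device tracking out of the loop. A sketch for an interfaces-era Ubuntu guest (/etc/network/interfaces; the addresses are examples):

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1

    Also test the forward from outside the LAN (e.g. a phone off Wi-Fi): many home gateways lack NAT loopback, so hitting the external IP from inside can fail even when the forward itself is fine.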

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, and the disk images are stored on a SAN system.

    I'm trying to find a solution that uses disk space sparingly, while still allowing for easy growth of individual machines. In theory, I would just create a big disk for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.)

    Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them -- for example, a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow data size as needed without downtime)? Possible solutions include:

    Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
    Finding an easy method of reclaiming free space on the partition (re-thinning?)
    Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
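
    For reference, the online-growth path in the proposed LVM plan looks like this once the virtual disk has been enlarged in vSphere (volume names are examples; ext3 supports growing while mounted):

        pvresize /dev/sdb                   # make LVM see the larger disk
        lvextend -L +50G /dev/vgdata/customer
        resize2fs /dev/vgdata/customer      # grow the filesystem online

    This also bears on the bonus question: with a whole-disk PV (pvcreate /dev/sdX), pvresize alone picks up the new size, whereas with a partition you must first grow the partition table entry -- which is the main practical argument for skipping partitions here.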

    Read the article

  • Applications randomly alt-tab? (especially full screen games)

    - by Henry Scotts
    I'm not sure when this began, how it happens, or why it happens, but it is quite bothersome and apparently random. Just randomly throughout the day my computer will go to the desktop. I could be in a full-screen game and it will immediately alt-tab and present the desktop, or I could be watching a movie when it happens. Sometimes it happens once every three hours, and other times (just today, actually) it did it twice in the span of 30 seconds. I am positive I am not pressing a hotkey, because I once launched a game, sat idle, and noticed it alt-tab while I was cleaning up around my room, after about 20 minutes. Sometimes it goes days without this happening.

    Specs: Windows 7 Ultimate 64-bit, 10 GB of RAM, GeForce GTX 260, Intel Xeon CPU. I also have basically nothing running when it happens other than the game and Firefox. My Firefox add-ons: Adblock Plus, Download Statusbar, Firebug, FirePHP, Lazarus form recovery, Tree Style Tabs, YSlow. I doubt Firefox is causing the issue, but I figured I'd include it anyway because it is the only application I have running when it happens.

    As for user processes, I have running: VCDDaemon (context menu for Virtual CloneDrive), razerhid (mouse), OSD, taskhost, dwm (Desktop Window Manager), anyfullscreengame, audiorepeater, netsession_win, explorer, razerofa, TSVNCache, firefox, plugin-container, and EKIJ5000MUI (printer). Whew. Okay. That was a lot of information. If someone could diagnose this I would be most grateful, for this has been around with me for years. Thanks for reading!

    PS: I doubt it's a virus, because I never download illegal software and pretty much only browse Reddit and Stack Exchange and play games. If it was a virus it would be a pretty lame one... Hah.

    Read the article

  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml two disk devices are defined like this:

        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>

    I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one in the same style as above and reorganising my mount structure. (The virtual sizes of the current qcow2 files are unfortunately limited.)

    My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like GParted), but only as a last resort. Hopefully it's something very simple I'm missing?
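
    One classic gotcha here: libvirt does not re-read files edited by hand under /etc/libvirt/qemu/, so a new <disk> stanza added that way may never be registered. The XML has to be fed back through virsh, and the guest fully stopped and started (not just rebooted from inside). A sketch of the whole workflow, disk creation included (paths and sizes are examples):

        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 50G

        # Re-register the hand-edited definition...
        virsh define /etc/libvirt/qemu/machine1.xml
        # ...or, if your libvirt version has it, edit via virsh in the first place:
        virsh edit machine1

        virsh shutdown machine1
        virsh start machine1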

    Read the article

  • Checking partition alignment with PowerCLI

    - by Julian
    I'm trying to verify that the file system partitions within each of the servers I'm working on are aligned correctly. I've got the following script which, when run, claims either that all virtual servers are aligned or that none are, depending on which if statement I use (one is commented out):

        $myArr = @()
        $vms = Get-VM | Where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*"} | Sort Name
        foreach ($vm in $vms) {
            $wmi = Get-WmiObject -Class "win32_DiskPartition" -Namespace "root\CIMV2" -ComputerName $vm
            foreach ($partition in $wmi) {
                $Details = "" | Select-Object VMName, Partition, Status
                #if (($partition.startingoffset % 65536) -isnot [decimal]) {
                if ($partition.startingoffSet -eq "65536") {
                    $Details.VMName = $partition.SystemName
                    $Details.Partition = $partition.Name
                    $Details.Status = "Partition aligned"
                }
                else {
                    $Details.VMName = $partition.SystemName
                    $Details.Partition = $partition.Name
                    $Details.Status = "Partition not aligned"
                }
                $myArr += $Details
            }
        }
        $myArr | Export-CSV -NoTypeInformation "C:\users\myself\Documents\Scripts\PartitionAlignment.csv"

    Would anyone know what is wrong with my code? I'm still learning about partitions, so I'm not sure how I need to check the starting offset number to verify alignment.

    EDIT:

        $myArr = @()
        $vms = Get-VM | Where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*"} | Sort Name
        $wmi = Get-WmiObject -Class "win32_DiskPartition" -Namespace "root\CIMV2" -ComputerName $vm
        #foreach ($_ In Get-WMIObject Win32_DiskPartition | Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_}
        foreach ($wmi | Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_}
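
    For what it's worth, both variants of the test are off: an integer modulo never yields a [decimal], and -eq "65536" only matches a partition whose offset is exactly 64 KB. The usual check is that the starting offset divides evenly by the alignment boundary (64 KB here; 1 MB for newer layouts). A sketch of the corrected condition:

        # Aligned when the offset is an exact multiple of 64 KB
        if (($partition.StartingOffset % 65536) -eq 0) {
            $Details.Status = "Partition aligned"
        }
        else {
            $Details.Status = "Partition not aligned"
        }

    Separately, Get-WmiObject -ComputerName $vm relies on the VM object stringifying to a resolvable name; passing the guest's hostname explicitly (e.g. $vm.Guest.HostName) is the safer bet.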

    Read the article

  • Any program to help me check whether an Ethernet channel can support full-length VLAN packets?

    - by Jimm Chen
    Sometimes I have to face a situation where I need to quickly and explicitly know whether a full-length VLAN packet can traverse between two RJ45 ports. Yes, I mean an 802.1Q Ethernet frame with EtherType 81 00.

    What I can do now: get two Windows PCs; in each PC, install an Intel gigabit NIC and the Intel-specific driver to create a virtual NIC with VLAN ID 3 assigned. Then connect the two PCs to each of the two RJ45 ports. Finally, execute ping to generate a full-length Ethernet packet:

        ping -f -l 1472 <dest-IP>

    This way, I can be sure that the sent packet has the maximum "IP data payload" of 1500 bytes (8 bytes of ICMP header and 1472 bytes of ICMP data). If the ping gets a reply, I know that the Ethernet channel supports full-length VLAN packets. From my experiments, some home switches and broadband routers (e.g. the Linksys WRT54G) do not support full-length VLAN packet switching, so only ping -f -l 1468 succeeds.

    You see, I have to use an expensive Intel NIC to carry out that test, which is quite inconvenient. Most laptops today are not equipped with an Intel NIC, and even when one is, Intel limits the models on which its VLAN driver can be installed. So, my question is: is there a small program that can let me send a full-length VLAN packet without installing a dedicated VLAN driver? Or better, a program with a stock feature that does this very job for my situation. Windows programs preferred, Linux solutions welcome. The simpler the program, the better. Thank you.
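
    On the Linux side, Scapy can craft the tagged frame directly with no VLAN driver involved -- only root privileges and a NIC/driver that does not strip 802.1Q tags. A hedged sketch; the interface, VLAN ID and addresses are examples:

        # pip install scapy; run as root
        from scapy.all import Ether, Dot1Q, IP, ICMP, Raw, sendp

        # 20B IP header + 8B ICMP header + 1472B payload = 1500B, tagged VLAN 3
        frame = (Ether(dst="ff:ff:ff:ff:ff:ff") / Dot1Q(vlan=3) /
                 IP(dst="192.0.2.1") / ICMP() / Raw(b"X" * 1472))
        sendp(frame, iface="eth0")

    Run a capture (or a matching Scapy sniff) on the far port; if the frame arrives intact with its tag, the channel passes full-length VLAN packets.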

    Read the article

  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an archive database (which I know is just a mailbox database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's set up in an ideal way, but so far I'm not finding the details I would prefer. I'm hoping someone on here can give me a few pointers.

    Currently we have a 3-server setup: Server1, Server2 and Server3, with three databases, DB1, DB2 and DB3, and a DAG between them. Server1 has DB1 and DB3 on it; DB1 is not active, DB3 is active. Server2 has DB1 and DB2 on it; both are active. Server3 has DB2 and DB3 on it; both are not active. All three servers are virtual (VMware), and each one is set up identically to the others as follows:

        C:\ 60GB  - OS
        E:\ 600GB - DB (currently only 90GB used, pointing to the datastore just for Server2)
        F:\ 200GB - Log (2GB used, pointing to the same datastore as above)
        G:\ 200GB - Restore (0 used, pointing to the same datastore as above)

    The drives are all set to thin provisioning, and it looks as though I have 600GB of available space. The users have not been on Exchange that long and have only about 70GB worth of PSTs to import back in, which will go to the archive database, plus anything older than 2 years from their current inboxes that will be moved into it.

    I was considering placing the archive DB on the E:\ drive of Server3 (only), like the current DB, but wasn't sure if that was acceptable. I don't plan on setting the archive DB up with the DAG; I just plan on having it as a single repository for older emails and manually backing it up every now and then.

    If anyone has any suggestions on this I would appreciate the input. I've done it on a slightly smaller scale before and it worked well, but I like to think it through before pulling the trigger, especially at a new job. :) Thanks again!
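
    For reference, creating it is the same as any mailbox database; a sketch of the Server3-only plan in the Exchange Management Shell (names and paths are examples):

        New-MailboxDatabase -Name "ArchiveDB" -Server Server3 `
            -EdbFilePath "E:\ArchiveDB\ArchiveDB.edb" `
            -LogFolderPath "F:\ArchiveDB"
        Mount-Database "ArchiveDB"

        # Point a user's personal archive at the new database
        Enable-Mailbox -Identity jsmith -Archive -ArchiveDatabase "ArchiveDB"

    Keeping it out of the DAG is workable for a low-priority repository as described; just note that a Server3 outage takes every archive offline, so the manual backups carry all the recovery burden.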

    Read the article

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up knowledge on SharePoint deployment and usage (never did either before), due to a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation.

    In my test virtual server, a new site with the Enterprise Wiki template was set up. I went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option?

    UPDATE: Another phenomenon observed: in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlinks (e.g. "Site Pages") lead straight to the library's default page. They do not show a table listing of the pages under that document library, unlike the original Pages document library, which shows up as a listing as expected. I wonder if this hints at any problems.

    Read the article

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The following issue has arisen in parallel to installing nginx, which serves only the static files of my site. Not sure if this is only coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder. So, the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what may be the underlying issue. My server? The person's ISP? She has tested both at home and at work, with similar issues.

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

        Invalid Response

        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

    Read the article

  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years, enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:

    Gigabyte GA-X58A-UD3R Intel X58 (Socket 1366) DDR3 motherboard
    Intel Core i7 960 3.20GHz (Bloomfield) (Socket LGA1366)
    24GB triple-channel RAM

    The host OS runs on an OCZ SSD, and all the VMs run on a 2TB Marvell SATA3 RAID 0 array consisting of two Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB array and appear to be getting less than 3Mbs, but it can adequately run a 4-VM farm including a DC, a (SQL) database server and IIS application servers.

    I recently upgraded the SSD on which the host runs to a 256GB OCZ Vertex 4, took the opportunity to upgrade to Windows Server 2012, and installed the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (converted to .vhdx), and I have also tried creating a brand-new Windows Server 2008 R2 VM, but both run extremely slowly and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools.

    In both cases the VM has 8GB RAM (fixed), 4 CPUs, a fixed-size (not expanding) HD, and uses an external virtual network running on a separate NIC from the host's. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check and configure, and my next option will be to reinstall the host OS, but before I do I would very much appreciate any advice from the experts out there. Thanks.

    Read the article

  • Why does my ldirectord check the real servers multiple times every interval?

    - by garconcn
    I have an ldirectord server and two real servers. My ldirectord used to check the request page on each real server once per interval, but now I find that it checks four times. I have monitored the log on both real servers; they have the same problem. Here is my ldirectord configuration:

        checktimeout=10
        checkinterval=5
        autoreload=yes
        logfile="/var/log/ldirectord.log"
        quiescent=no
        virtual=192.168.1.100:80
                fallback=127.0.0.1:80
                real=192.168.1.10:80 gate
                real=192.168.1.20:80 gate
                service=http
                request="lb.html"
                receive="still alive"
                scheduler=sh
                persistent=60
                protocol=tcp
                checktype=negotiate

    ldirectord should connect to each real server once every 5 seconds (checkinterval) and request lb.html (real/request). But the access log on a real server shows four identical requests per interval:

        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"

    Read the article

  • Node.js server not responding outside localhost (CentOS)

    - by David Martinez
    I'm running a basic Express server on CentOS, but for some reason it is not responding outside of localhost. I have tried everything I have found on Google, but nothing works so far. This is my Express listen call:

        app.listen(3000, "0.0.0.0");

    If I do curl http://localhost:3000/ on the server, it works fine. If I curl the IP of the server, it doesn't work. I already changed my iptables:

        num  target  prot opt source     destination
        1    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:80
        2    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:80
        3    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:3000

    There is currently an Apache server running on port 80 with no problems. I also tried setting up a VirtualHost proxy in Apache, but it didn't work either:

        <VirtualHost *:80>
            ServerName SubDomain.MyDomain.com
            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:3000/
            ProxyPassReverse / http://localhost:3000/
            ProxyPreserveHost on
        </VirtualHost>

    There is another virtual host working fine that redirects to another DocumentRoot. I'm running Node as root for testing purposes, but the Node application's owner is another user. All folders are 705 and files are 664.

    EDIT: I stopped Apache and ran my Node app on port 80, and it worked fine; I could access the Node app from my IP and domain.
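
    The port-80 test narrows it down: the app binds fine, so port 3000 is most likely being blocked before the ACCEPT rule ever matches. On stock CentOS the INPUT chain ends with a catch-all REJECT, and a rule appended with -A lands after it and never fires. A sketch of checking and fixing that (the rule position is an example):

        # Show rule order and packet counters for INPUT
        iptables -nvL INPUT --line-numbers

        # Insert the rule near the top instead of appending it
        iptables -I INPUT 1 -p tcp --dport 3000 -j ACCEPT
        service iptables save

    If the counters on the dpt:3000 rule stay at zero while you curl from outside, the rule is in the wrong position (or a rule above it is eating the traffic).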

    Read the article
