Search Results

Search found 6945 results on 278 pages for 'azure use cases'.


  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array.

    I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure, but the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
          scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

          NAME                         STATE     READ WRITE CKSUM
          vol1                         ONLINE       0     0     0
            mirror-0                   ONLINE       0     0     0
              c8t5000C50031B94409d0    ONLINE       0     0     0
              c9t5000C50031BBFE25d0    ONLINE       0     0     0
            mirror-1                   ONLINE       0     0     0
              c10t5000C50031D158FDd0   ONLINE       0     0     0
              c11t5000C5002C823045d0   ONLINE       0     0     0
            mirror-2                   ONLINE       0     0     0
              c12t5000C50031D91AD1d0   ONLINE       0     0     0
              c2t5000C50031D911B9d0    ONLINE       0     0     0
            mirror-3                   ONLINE       0     0     0
              c13t5000C50031BC293Dd0   ONLINE       0     0     0
              c14t5000C50031BD208Dd0   ONLINE       0     0     0
            mirror-4                   ONLINE       0     0     0
              c15t5000C50031BBF6F5d0   ONLINE       0     0     0
              c16t5000C50031D8CFADd0   ONLINE       0     0     0
            mirror-5                   ONLINE       0     0     0
              c17t5000C50031BC0E01d0   ONLINE       0     0     0
              c18t5000C5002C7CCE41d0   ONLINE       0     0     0
          logs
            c19t0d0                    ONLINE       0     0     0
          cache
            c6t5001517959467B45d0      FAULTED      2   542     0  too many errors
          spares
            c7t5000C50031CB43D9d0      AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendations to RAID L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well).

    Edit - What's the current best-choice SSD for L2ARC cache applications these days?
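
    For reference, a faulted L2ARC device can normally be dropped from the pool while it stays online, since cache vdevs hold no pool data. A minimal sketch, using the device name from the status output above (run as root on the appliance; the replacement device name is hypothetical):

        # drop the faulted cache device from the pool
        zpool remove vol1 c6t5001517959467B45d0

        # verify the cache vdev is gone and the pool is healthy
        zpool status vol1

        # later, add a replacement SSD as the new L2ARC
        zpool add vol1 cache c6tNEWSSDd0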

    Read the article

  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up. When that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB), but I'm a bit unsure how best to make the transition.

    For a start, I don't have an external monitor; if I need one I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. The only solution I can think of would be to buy two FW800 2.5" cases, boot from the 64 GB SSD and clone it to the 128 GB SSD. Seems a bit excessive, but it might be my best option. I do have more than one SATA port on my server, but they're all currently being used for storage drives. Those don't mount by default, so I could unplug them, have just the two SSDs attached, and do the whole thing via SSH. This is another option I'm considering.

    My main concern with either is how best to make sure everything goes across. I want a carbon copy of the first one on the second. This is especially important because I have a ZFS volume (my storage) and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the new SSD, but that seems like extra trouble I don't need. So any advice on how best to achieve my goals would be appreciated. Thanks! The server is running Ubuntu Server 12.04; the iMac is on OS X 10.8.1.
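
    One common approach is a raw block-level clone with dd, sketched below under the assumption that the old and new SSDs show up as /dev/sda and /dev/sdb (hypothetical device names - verify them first, since dd will happily overwrite the wrong disk) and that the system is booted from other media so the source isn't being written to:

        sudo lsblk                          # confirm which disk is which
        sudo dd if=/dev/sda of=/dev/sdb bs=1M   # raw clone, boot sector and all
        # then grow the last partition and filesystem into the extra space
        # (e.g. with parted followed by resize2fs)

    The ZFS pool itself lives on the storage drives rather than the boot SSD, so a zpool export before the swap and a zpool import afterwards should carry it across intact.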

    Read the article

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our Citrix configuration on our server Server1, and since then we can access Citrix internally, but not externally. Normally we access Citrix at http://remote.xyz.org/Citrix/XenApp, but since the configuration was reloaded we are met with a Service Unavailable message. Accessing the web application internally at http://localhost/Citrix/XenApp/ on Server1 works, as does access from machines on our local network via http://Server1/Citrix/XenApp/.

    I have gone into the Citrix Access Management Console and, in the tree pane on the left, clicked on both:

        Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/PNAgent
        Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/XenApp

    In both cases this displays a screen that reads Secure client access, which offers several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know I can change the method by clicking Manage secure client access->Edit secure client access settings, which opens a window titled "Specify Access Methods" with the prompt "Specify details of the DMZ settings, including IP address, mask, and associated access method". I don't know what the original settings were, and I also don't know how our DMZ is configured, so I can't specify the correct settings to give our external users access to the http://remote.xyz.org/Citrix/XenApp site. A vendor set up our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

    Read the article

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services: an iSCSI target for XenServer virtual machines, and a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This leaves one free port per controller for upgrading the array down the road.

    Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king, and I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options: two mirrored pairs in one striped pool, or RAIDZ2. Both should protect against two drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any two drives could fail. Usable storage should be 50% of capacity in both cases, but the first should have much better performance, right?

    The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity, what, if any, would be the benefit of RAIDZ over a three-way mirror? Since ZFS maintains file integrity, what does RAIDZ bring to the table... doesn't ZFS's integrity checking negate the value of RAIDZ's parity?
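
    For concreteness, the two layouts would be created roughly like this - a sketch with hypothetical device names c0d0 through c0d3:

        # option 1: striped mirrors (RAID10-style), two mirror vdevs in one pool;
        # pair disks across the two controllers so a controller failure
        # takes out only one side of each mirror
        zpool create tank mirror c0d0 c0d1 mirror c0d2 c0d3

        # option 2: RAIDZ2, a single vdev in which any two drives may fail
        zpool create tank raidz2 c0d0 c0d1 c0d2 c0d3

    One practical difference worth noting: random I/O in the mirrored layout is spread across two independent vdevs, while a RAIDZ vdev behaves roughly like a single disk for random IOPS.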

    Read the article

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have six Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior of file locks, but I can't get it to work. The way it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made.

    At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested:

        Windows XP << Windows Vista
        Windows XP << Samba << Windows Vista

    and both give the same behavior. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.

    EDIT: I tested an Excel file again, and it is now working properly in both cases mentioned above; I am also no longer getting the error above. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) I still need a solution for the CAD/CAM files we use, though. The software is Vector, and it does not seem to use file locks. Is there any software I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
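
    On the Samba side, locking behavior is controlled per share in smb.conf; a minimal sketch of the relevant options (the share name and path are hypothetical, and these are starting points to test rather than a known fix):

        [share]
            path = /srv/share
            # honor byte-range and share-mode locks requested by clients
            locking = yes
            # deny access when a lock can't be verified, rather than allowing it
            strict locking = yes
            # oplocks let Windows clients cache files locally; disabling them
            # can help when several machines edit the same files
            oplocks = no
            level2 oplocks = no

    Note that any of this only helps when the application actually requests exclusive access (as Office does); an application that never asks for a lock, like the Vector software mentioned above, won't be stopped by any smb.conf setting.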

    Read the article

  • Intel z77 vs h77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that works well with the Intel i7-3770 Ivy Bridge processor at 3.4 GHz (LGA1155 socket). That processor is very fast, and it should handle all my tasks. My question is about the chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear and other gaming (nothing serious), and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a concern for me. That said, I'm down to three chipset choices:

        Intel H77
        Intel Z68
        Intel Z77

    I'm planning to go for the H77, since I don't need any of the new features in the Z77: I don't plan to use a second GPU, and I will never overclock my CPU/GPU. My question is, will H77-based motherboards handle all my tasks well? Intel advertises that chipset for "everyday computing", but other sites say its base functionality is the same as the Z77's. Intel instead advertises the Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". The problem with all the Z77 motherboards I've seen is that they're way too expensive, and their main feature seems to be overclocking, which won't be useful to me.

    Will I lose any raw CPU/GPU performance or HDD r/w speed with the H77 compared to a Z77? Will heat be an issue? From what I've seen, Z77 motherboards have larger heat sinks than H77 ones. Will that be a problem if I go with an H77 motherboard with no heat sinks on the chipset? The CPU will have a fan in both cases, of course.

    tl;dr: When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor I'm set on the Ivy Bridge i7-3770.

    Read the article

  • Problems sending email using .Net's SmtpClient

    - by Jason Haley
    I've been looking through questions on Stack Overflow and Server Fault but haven't found the same problem mentioned - though that may be because I just don't know enough about how email works to understand that some of the questions are really the same as mine. Here's my situation:

    I have a web application that uses .NET's SmtpClient to send email. The SmtpClient is configured with an SMTP server, username and password. The SmtpClient code executes on a server whose IP address is not in the domain the SMTP server is in. In most cases the emails go through without a problem - but not to AOL (and maybe others, but that is the one we know about for sure right now). When I look at the headers in the message that was kicked back from AOL, it has one less Received line than the successful messages Hotmail gets:

    AOL bad message:

        Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Mon, 18 Jun 2012 09:48:24 -0500
        MIME-Version: 1.0
        From: "[email protected]" <[email protected]>
        ...

    Good Hotmail message:

        Received: from mail.domainofsmtp.com ([###.###.###.###]) by subdomainsof.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900); Thu, 21 Jun 2012 09:29:13 -0700
        Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Thu, 21 Jun 2012 11:29:03 -0500
        MIME-Version: 1.0
        From: "[email protected]" <[email protected]>
        ...

    Notice the Hotmail message headers have an additional Received line. I'm confused as to why the web server's name and IP address are even in the headers, since I thought I was using the SmtpClient to go through the SMTP server (hence the need for the username and password of a valid mailbox). I've read about SPF, DKIM and Sender ID, but at this point I'm not sure whether I would need to do something with the web server (and its IP/domain) or with the domain the SMTP mail is coming from. Has anyone had to do anything similar before? Am I using the SMTP server as a relay? Any help on how to describe what I'm doing would also help.

    Read the article

  • htaccess not properly rewriting urls

    - by Cameron Ball
    This is a bit of a weird one. I'm doing some work on a server, and I need rewrite rules for directories that actually exist (in some cases, they are more than one level deep). At the moment my .htaccess looks like this:

        RewriteEngine on
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]

    And this is working OK. For example, a URL like:

        mydomain.com/simfiles/my-files

    will get redirected to:

        mydomain.com/?portal=simfiles&folder=my-files

    Or, in the case of a directory structure that is deeper than one level:

        mydomain.com/simfiles/my-files/more-of-my-files

    will get redirected to:

        mydomain.com/?portal=simfiles&folder=my-files/more-of-my-files

    I wrote the regex so that it won't match things with a . in the path, because there are CSS and JS files which reside in simfiles/somedirectory, and if I redirect everything then these cannot be loaded. I tried a configuration like this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/\.]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]

    But that doesn't work; things still don't load properly. So my first question is, how can I achieve this "properly"? I don't like my solution because it means redirects won't occur if the folder has a . in its name.

    My second problem is that while the redirection is happening properly, the URL becomes:

        http://mydomain.com/?portal=simfiles&folder=my-files

    I want the URL to remain clean, like:

        http://mydomain.com/simfiles/my-files

    How can I achieve this?
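
    A sketch of one way to address both problems at once: the URL changes in the address bar because the substitution is an absolute URL, which typically makes mod_rewrite issue an external redirect, while a path-only substitution is rewritten internally and the browser never sees it. Combined with the !-f condition (so real CSS/JS files are served untouched), dots in folder names stop being a problem:

        RewriteEngine on
        # serve real files (css, js, images) as-is
        RewriteCond %{REQUEST_FILENAME} !-f
        # path-only target = internal rewrite, so the visible URL stays clean
        RewriteRule ^simfiles/(.+)$ /?portal=simfiles&folder=$1 [L,QSA]

    This is untested against the site in question, but it is the standard mod_rewrite pattern for the clean-URL, front-controller setup described above.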

    Read the article

  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
    I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html, and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something.

    Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites.

    UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. It turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up any other dependencies, because a lot of this crap is handled not in web pages. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.
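
    For the static parts, the usual full-mirror invocation looks something like the sketch below; it still won't follow links that exist only in JavaScript, which matches the diagnosis above (site.com stands in for the real host):

        # --page-requisites:  grab the CSS, JS and images each page needs
        # --convert-links:    rewrite links so the mirror works locally
        # --adjust-extension: save dynamic pages with .html extensions (wget >= 1.12)
        wget --mirror --page-requisites --convert-links --adjust-extension http://site.com/

        # for JS-driven navigation, the link graph has to come from elsewhere,
        # e.g. an explicit URL list collected by hand or by a scriptable browser:
        wget --page-requisites --convert-links -i urls.txt

    A crawler that executes JavaScript (a scripted browser dumping rendered pages) is the fallback when the site map lives entirely in script files like JLSSiteMap.js.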

    Read the article

  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing. We have two CheckPoint edge devices at sites A and B, independently managed, connecting to two Amazon virtual private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device. A VPN tunnel exists between the two CheckPoint devices as well.

    Between sites A & B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the one-hour period is due to IPsec phase 2 renegotiation, but we can't be sure. On our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation.

    Between sites A & B and the Amazon VPC in Oregon, we have no issues: both tunnels are up and fail over properly.

    The CheckPoint gateways are using domain-based VPNs. According to CheckPoint's advice to Amazon, this won't work. Yet in Oregon it does. We've pursued this with Amazon and, despite the fact it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.

    Read the article

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes, comparing the 7-Zip and WinRAR default settings at their normal and best compression levels; in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

        1. Is there a way to speed 7-Zip up? I'd like it to at least be on par with WinRAR's speed.
        2. Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command-line thing.
        3. I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare-minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz base / 2.8 GHz turbo) with 4 GB DDR3 RAM and the 64-bit version of 7-Zip, and I dual-boot Debian x64 5.0.4 and Windows 7 Home.
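
    On the command line, the knobs that matter most for speed are the compression level and the thread count; a sketch using 7-Zip's 7z tool (archive and folder names are placeholders):

        # -mx=3 trades ratio for speed (default is -mx=5, slowest is -mx=9)
        # -mmt=on uses all cores; newer versions also accept e.g. -mmt=4
        7z a -mx=3 -mmt=on archive.7z somefolder/

        # or drop to zip format, whose deflate compressor is much faster than LZMA
        7z a -tzip archive.zip somefolder/

    On the recovery-segment question: the .7z format has no equivalent of WinRAR's recovery records, so that part usually gets handled by an external tool such as par2.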

    Read the article

  • multiple folder structures/views

    - by Sojourner
    Newbie here, setting up a server for a law firm. I want to set up the folder structure as follows:

        Client 1 Name
            Matter 1 (i.e. setting up corporation)
            Matter 2 (i.e. divorce)
            Matter 3 (i.e. setting up trust)
        Client 2 Name
            Matter 1
        Client 3 Name
            Matter 1

    and so on. But the attorneys prefer navigating a folder structure organized by case type:

        Civil
            Client 1 Name (i.e. Smythe)
            Client 2 Name (i.e. Jones)
            Client 3 Name (i.e. Johnson)
            Landlord/Tenant
                Client 1 Name (i.e. Jones)
                Client 2 Name (i.e. Johnson)
            Class Action Suits
                Suit 1
                Suit 2
        Personal Injury
            Client 1 Name
            Client 2 Name
            Client 3 Name
        Criminal
            Client 1 Name (i.e. Smythe)

    I'd like to know if it's possible to set up the server with the first folder structure (it's more organized and easier to use with scripts), while having the second folder structure available for users who find it easier to deal with the same types of cases grouped together.
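
    On a Linux/Unix file server, one common way to get a second view without duplicating data is to keep the per-client tree as the real structure and build the by-case-type tree out of symbolic links; a sketch with hypothetical paths:

        # real (canonical) structure
        mkdir -p /srv/files/clients/Smythe/Matter1

        # alternate view: one symlink per matter, grouped by case type
        mkdir -p /srv/files/bytype/Civil
        ln -s /srv/files/clients/Smythe/Matter1 /srv/files/bytype/Civil/Smythe-Matter1

    The links can be created by the same intake script that makes the matter folder, so the two views never drift apart. If the shares are served over SMB/CIFS, the server has to be configured to follow symlinks (e.g. Samba's follow symlinks and wide links options) before Windows clients can traverse them.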

    Read the article

  • Clustering/load balancing for cluster unaware applications

    - by AaronLS
    Forgive me if I use any of these terms incorrectly. I am wondering if there is any kind of software that would allow me to "join" two computers together such that a cluster-unaware application could utilize their combined computing resources. By "cluster unaware" I mean an application that isn't designed to share work across multiple servers. My understanding is that clustering is enabled by the specific application's architecture, such that messaging between multiple instances of the application coordinates the sharing of work. Instead I am looking for something that enables clustering at the OS or virtualization level, so that any application could essentially be clustered.

    Failing that, I am also wondering about the following scenario: we have three different applications we will call A, B, and C, and two single-core computers. At any given time, let's say that any combination of those applications may be CPU-intensive. In cases where only two of those apps are very active, one of them could be moved over to a different server - in a nutshell, some sort of dynamic, automatic shuffling of the applications' load. I have heard of virtual machines that can be migrated across physical machines while live, but I am wondering if this can be done automatically in response to an application's or VM's CPU activity?

    Read the article

  • Outlook and IMAP - Outlook doesn't allow the Drafts and Trash folders to sync with the respective IMAP folders

    - by Matt
    I'm using Outlook 2007 and Outlook 2010 against an IMAP server (the problem exists across many, like Gmail - you name it). Outlook lets you map your Outlook "Sent" folder to the IMAP server's Sent folder (the other choice is to map your Outlook Sent to your Personal Folders' Sent) - this is good. When you send a message from Outlook and then look in the Sent folder of the IMAP server (e.g. from a different client or from a browser), the messages are there. This is the behavior I want.

    Outlook does NOT support the same behavior for Drafts and Trash. In both cases, items deleted (or drafts saved) in Outlook go into Outlook's local folders and do NOT show up in the IMAP server's Trash or Drafts folders. Same problem in reverse. Thunderbird, on the other hand, does support the proper mapping of Drafts, Sent and Trash. I expected this to be IMAP-specific, but it appears to be client-specific. Why does Outlook implement it this way, and is there a workaround?

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my MacBook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5MB each, and I copy them by selecting all the files I need in a Finder window and dropping them onto the destination directory.

    Almost always, some of the files do not get copied to the NAS. For example, if I select 200 files and start the copy, everything looks normal at the beginning (while the copy takes place, the Finder window for the destination directory updates to show 200 files where it was empty before), but after the copy ends the destination directory shows fewer than 200 files (say, 190). If I copy the same 200 files to the NAS again, without replacing already-copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time. Note that the Finder gives no warning at any stage that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder, and/or if there is something I can do to solve it.
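
    As a workaround while tracking down the Finder/AFP issue, rsync from Terminal copies and verifies in one pass; a sketch assuming the share is mounted at /Volumes/NAS (the paths are hypothetical):

        # -a preserves times/permissions, -v lists each file, --progress shows transfer state
        rsync -av --progress ~/Pictures/batch/ /Volumes/NAS/batch/

        # re-running the same command copies only whatever is still missing or incomplete
        rsync -av --progress ~/Pictures/batch/ /Volumes/NAS/batch/

    Unlike the Finder, rsync prints an error for every file it fails to transfer, which at least makes the failures visible.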

    Read the article

  • Google Chrome doesn't respond user actions correctly

    - by Carlos A. Junior
    Recently I changed my OS to Ubuntu 12.04 (Cinnamon, 64-bit) from Mint 13 (KDE, 64-bit), and the same bug still appears on the new installation: Google Chrome does not seem to refresh (repaint) the page in response to my interactions.

    Example: when I try to comment on a YouTube video and click in the textarea, the cursor does not appear inside the textarea - BUT if I switch to another tab and return, the cursor appears. Likewise, if I start typing, the characters do not appear as I type; again, if I switch to another tab and return, the typed text appears in the textarea.

    Other cases where this bug appears:

        Modal-box links don't show the modal;
        Forms inside modal boxes don't show typed characters;
        The common Disqus comment plugin doesn't work when focused.

    I have no idea what causes this bug (video driver? window manager? a Chrome bug? I don't know). Any idea how to solve it?

    Additional information:

        Google Chrome   22.0.1229.79 (Official Build 158531)
        OS              Linux
        WebKit          537.4 (@129177)
        JavaScript      V8 3.12.19.11
        Flash           11.3.31.331
        User Agent      Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Command Line    /opt/google/chrome/google-chrome --flag-switches-begin --flag-switches-end
        Executable Path /opt/google/chrome/google-chrome
        Profile Path    /home/carlos/.config/google-chrome/Default
        Kernel version  3.2.0-31-generic-pae
        Ubuntu 12.04

    Best regards.

    Read the article

  • Why is squid breaking kerberos/NTLM auth?

    - by DonEstefan
    I'm using Squid 2.6.22 (the CentOS 5 default) as a proxy. Squid seems to break the authentication process for web pages when they require NTLM or Kerberos auth. I tested with SharePoint 2007 and tried all three authentication methods (NTLM, Kerberos, Basic). Accessing the site without Squid works in all cases; when I access the same page through Squid, only basic auth works. Using IE or Firefox doesn't make any difference. Squid itself can be used by anybody (no auth_param configured). It's a bit tricky to find solutions online, since most topics revolve around auth_param for authenticating users to Squid rather than authenticating users to a web page behind Squid. Could anyone help?

    Edit: Sorry, but my first test was totally screwed up. I tested against the wrong web servers (memo to myself: always check assumptions before testing). Now I've realized the problem scenario is completely different:

        Kerberos works for IE
        Kerberos works for Firefox (after changing "network.negotiate-auth.trusted-uris" in about:config)
        NTLM works for IE
        NTLM does NOT work in Firefox (even after changing "network.automatic-ntlm-auth.trusted-uris" in about:config)

    By the way: the feature that provides NTLM passthrough in Squid is called "connection pinning", signalled by the HTTP header "Proxy-support: Session-based-authentication".

    Read the article

  • Large mailbox in Outlook 2007 takes ages to index

    - by Reado
    In our company each user has a single mailbox, and all email they have ever sent or received is in that mailbox. We don't archive to PST, and we thought that was the way forward. The problem we now have is that if someone switches to another PC for the day and opens Outlook, it first has to download all email to that PC (cached mode), and even then, when they try to search for something, Outlook says items are still being indexed. One user has over 100,000 items to be indexed, and it's been saying that for about a week!

    As a temporary workaround I have turned off instant search, which allows them to search for anything, but it takes time to filter through, and Outlook doesn't clearly indicate that it is still searching, so in most cases the user thinks the search isn't working when really it is just taking time to populate the results.

    I need a solution that allows the mailbox to be searchable really quickly if the user has to log in to another PC. Are we better off using Online Mode instead of Cached Mode, or is there another way around this? Thanks in advance.

    Read the article

  • There's a stray current flowing from my monitor through the VGA cable to the PC. Is this safe?

    - by EApubs
    I have two monitors on my machine. One is an old Samsung LCD monitor. Recently I started to hear a small hum in my speakers (subwoofer) and replaced them; the new ones had the same issue, and then I found out that it's a grounding problem. I unplugged the PC's power cord while the monitor was still switched on, and when I checked, there was current on the earth (ground) pin. When I unplug the monitor, there is no current and the speaker is normal.

    Now I have moved that monitor to my dad's machine and taken his monitor. My question is: is this a big issue? The house's earthing system is working, and it is grounding the current; I won't feel it if I touch the machine, unlike in many other cases. But still, is it safe to keep that monitor attached to a machine? Can it harm the computer? What should I do?

    Read the article

  • Windows 8 & Hyper-V Can't Bridge Wifi Connection

    - by xinunix
    So I have an odd issue that I can't quite figure out. I am running Windows 8 Enterprise on a Dell 6420 laptop with a Broadcom 802.11n wireless adapter. I am connected to a home router (Netgear WNDR3700) that is connected to the internet; it is a very simple home network setup. I am trying to stand up a few VMs in Hyper-V and want the VMs to be able to access the internet over my wireless connection. I have found numerous examples of how to set this up using both external and internal virtual switches, but I have yet to get it to work on my machine.

    I have narrowed the issue down to the fact that my host machine always loses its internet connection when I bridge my Wi-Fi connection - both when the bridge is created automatically by Windows when I set up an external virtual switch bound to the Wi-Fi adapter, and when I do it manually by creating an internal virtual switch, then right-clicking it and my Wi-Fi network and selecting "Bridge Connections". In both cases, after the bridge is established, my host machine can no longer connect to the internet.

    I am not sure where to start troubleshooting this. After the bridge is set up, ipconfig shows all network devices on the machine as "Media Disconnected". I do know the wireless adapter is connected to the router, because it shows the connection as active and at full strength. The only other thing I can think of is that this machine also has the Cisco VPN client installed, which installs a Cisco virtual network adapter. Is it possible that this Cisco virtual adapter is causing issues when I try to bridge? I saw that some people had a similar issue with a VirtualBox virtual adapter when trying to share via Hyper-V. Any thoughts or suggestions on how to troubleshoot?
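
    One workaround often suggested when bridging a Wi-Fi adapter fails is to skip the bridge entirely: create an internal switch and share the Wi-Fi connection into it with Internet Connection Sharing. A sketch (PowerShell run as Administrator; the switch and VM names are placeholders, and the ICS step is done in the GUI):

        # create an internal virtual switch; the host gets a matching vEthernet adapter
        New-VMSwitch -Name "InternalNAT" -SwitchType Internal

        # attach the VM's network adapter to it
        Connect-VMNetworkAdapter -VMName "TestVM" -SwitchName "InternalNAT"

        # then, in Network Connections: Wi-Fi adapter -> Properties -> Sharing ->
        # enable "Allow other network users to connect" and pick the
        # vEthernet (InternalNAT) adapter as the home networking connection

    The VMs then get addresses from ICS's built-in DHCP and reach the internet through the host's Wi-Fi, without ever putting the Broadcom adapter into bridge mode.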

    Read the article

  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around: give the virtual machine a very modest amount of RAM and disk and then choose and configure the OS appropriately.

    The question is, how small a virtual machine (spec-wise) have people managed to set up and get to both boot and run? Virtual machines doing something useful are preferable, even if I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run.

    It's quite an open-ended and academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be saved to and restored from disk quite quickly. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

    Read the article

  • Is it possible to setup an internal test email server to keep all mail sent to it?

    - by MattGrommes
    We have a need at my work to set up a test email server that will take all mail sent to it for delivery and instead just dump it into an account for later retrieval. I've been out of the email-server-configuration game long enough that I think that's possible, but I don't know for sure.

    As a more specific example of what we need: we have code that sends emails to outside clients in certain cases. We want to point our code at a test server that will accept those emails but not let them reach the outside world (yes, it's happened before - oops). We then need to be able to verify that email X would have gone to client Y if we had sent it through the real server. As a bonus, we have an error-email alias on our real server that goes to the programmers, which we would like to keep receiving; anything sent to that alias on the test server should be forwarded to our real server for delivery.

    My preference is for Postfix, but our IT staff seems set on using Sendmail (or Exchange) for everything, so hints/pointers for either server would be helpful. Thanks a lot.
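
    For quick verification during development, Python's standard library can stand in as a catch-all SMTP sink before any real MTA gets configured; a sketch (the smtpd module ships with Python 2.x and with 3.x up to 3.11):

        # accept mail on port 1025 and print every message to stdout instead of delivering it
        python -m smtpd -n -c DebuggingServer localhost:1025

    On the Postfix side, the equivalent trick is usually a catch-all: point the application at the test box and map every recipient to one local mailbox, e.g. with a regexp virtual_alias_maps entry ( /./  testdump@localhost ), with a specific entry for the error alias relaying to the real server listed above it. Treat both of these as sketches to adapt, not drop-in configs.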

    Read the article

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server in ESXi 4. The OS installation goes fine: VMware Tools, adding to the domain, updates - all the basic stuff before you start adding roles and features. I've had this happen on two attempts already, and I'm not sure where the problem might be.

    I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media: the server that's working was installed using a trial ISO and license, while the one I'm having problems with is a full MAK installation.

    When I go to add a role (the last case was Application Server), it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flat lines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you look at the added roles after bringing it back up, the role shows as installed, but I don't trust that something didn't get wedged in all of that.

    The first install I did was with thin disk provisioning; the second attempt was with regular disk provisioning. In both cases: 4 GB RAM, 2 vCPUs. The VMware host is an HP ProLiant DL380 G6 with 12 GB RAM, a RAID-1 OS volume and a RAID-5 data volume. Has anyone else had this problem, or know where I should start poking around?

    Read the article

  • GNU screen multiuser mode is broken in OS X 10.6 (Snow Leopard)

    - by schustafa
    I'm using GNU screen for remote pair programming. Let's call the local account for the remote user 'pairpair'. I have the following lines in my .screenrc:

        multiuser on
        acladd pairpair

    I have run sudo chmod u+s /usr/bin/screen. However, when the remote user tries to connect to my screen with the command

        screen -r [my_account_name]/[pid_of_screen]

    I receive the following message:

        Attach attempt with bad pid(xxx)

    The pid listed in the error message matches the pid of the screen process run by the remote user. The remote user's screen process hangs; my screen session continues happily along after the error message disappears. I've tried using both the built-in screen (at /usr/bin/screen) and the screen available from MacPorts, but I get the same error in both cases. This worked on OS X 10.5 (Leopard). I've googled around for the error message, but most of the hits relate to some BSD bug from 2003 or so (which was fixed). Has anyone else seen this behavior? Does anyone have any idea how to make multiuser support in screen work in SL?
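
    For comparison, the usual multiuser recipe looks like this sketch (session and account names are placeholders); since the host session stays attached during pairing, the guest typically attaches with -x rather than -r:

        # host: screen must be setuid root for multi-user attach to work
        sudo chmod u+s /usr/bin/screen
        screen -S pairing            # start a named session
        # inside the session:
        #   Ctrl-a :multiuser on
        #   Ctrl-a :acladd pairpair

        # guest (account 'pairpair'), logged in to the same machine over SSH:
        screen -x hostuser/pairing

    If -x produces the same bad-pid error, the failure is in the setuid attach path itself rather than in the ACL setup.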

    Read the article

  • Why can't we treat SSL certs like PGP keys instead of trusting CAs?

    - by yarun can
    I'll admit I don't know all the technical aspects of SSL and its server-side and client-side implications and implementations, but I understand it well enough from a user's point of view to use SSL and encryption daily. I was thinking about how odd it is to trust known and unknown CAs when it comes to the certificates for our servers. There have been many cases of misconduct, misuse, compromise and theft of certificates and CA keys at those organizations. On top of those known issues, we also have to pay these guys regularly.

    I am wondering why we can't use and treat web server certificates the way we use our PGP keys. So I sign an SSL certificate and send it to a central server, and then each user accessing my site checks its validity and keys against some central server (like the PGP key servers). Is this a stupid idea? If so, what would be a better idea than the current system of issuing valid certificates? I am looking for a better idea, not just a more secure one. Naturally this is not a solution to an existing problem; rather, it would be a hypothetical solution for some future implementation of the currently messed-up web of trust on the internet, given the recent news about the NSA and their criminal buddies around the world. Thanks.

    Read the article
