Search Results

Search found 10963 results on 439 pages for 'documentation generation'.

Page 348 of 439

  • Intel Motherboard Lightning Victim Dies Hard

    - by Stetson RDT
    Today, I have a more hardware-related question. I have an Intel board, and I really do not know which board it is; I built the machine for a relative, but he forgot to keep the documentation. Long story short, the computer was disconnected during a lightning storm, but a lightning strike travelled in via the ethernet cable (it was directly connected to a power brick commonly seen on those long-distance ISP wireless transmitters), and the motherboard was shocked. I am attempting to get this PC going. The problem is as follows: the computer will randomly reboot, right in the middle of anything, as it pleases. It may load to EFI (or whatever BIOS is nowadays), may load to the bootloader, may even get to the OS. But before 5 minutes is up, the system will always die. Out of curiosity, I plugged my voltmeter into a Molex connector. On the 5V side, it gets a good, consistent +5.13V. On the 12V side, it fluctuates, as follows: upon immediate startup, it soars to 12.11-12.13V. It will then do one of two things: it will immediately jump down to 12.04-12.05V, or hover for about a minute at 12.11-12.13, then jump down. It seems the longer the voltage stays at 12.11-12.13, the shorter the machine will stay running. Also, the POST codes, whenever the machine locks up but does not die hard, seem to be between "AA" and "AC". Does this make any sense to anybody? Do you all think this motherboard is salvageable? It was an expensive bugger, and I'd prefer not to replace it.

    Read the article

  • Windows Server wbadmin recover with commas

    - by dlp
    I want to do a recovery of files with commas in their names from the command line, like so: wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwite:Overwrite -quiet "-Items:C:\Path\To\File, With Comma.txt,C:\Path\To\File 2, With Comma.txt" So there are two files: C:\Path\To\File, With Comma.txt and C:\Path\To\File 2, With Comma.txt. The problem is wbadmin assumes commas separate each file, so it sees 4 files specified instead of 2. I've tried putting a \ in front of commas that are part of the file names, like so: wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwite:Overwrite -quiet "-Items:C:\Path\To\File\, With Comma.txt,C:\Path\To\File 2\, With Comma.txt" but it doesn't work, it just says there's a syntax error. The documentation on Technet doesn't seem to mention anything that'll help either. The OS is Windows Server 2008 R2. A clarifying comment: I've changed the file names to be different from the actual names to be less revealing, but I also see I dumbed it down too much. The comma can occur either in the file name itself, like C:\Path\To\File, With Comma.txt, or in the path to the file, like C:\Path, To\Other\File.txt.
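
    A workaround that is sometimes suggested for this limitation - offered here only as an untested sketch with illustrative paths, and using the standard spelling -overwrite - is to recover the containing folder with -recursive, so the comma-containing file names never have to appear in the -items list at all:

        rem Hypothetical sketch (paths illustrative): recover the whole parent folder
        rem instead of naming the comma-containing files individually.
        wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -items:"C:\Path\To" -recursive -overwrite:Overwrite -quiet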

    Read the article

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on premise. The hybrid configuration for the most part is working; what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works. When I am accessing Outlook on public Wi-Fi I am able to bring up the cloud mailboxes, and on-premise public folders show up in Outlook. When I am accessing email via Outlook as a cloud user on the same LAN as the on-premise Exchange, the cloud user makes the outlook.com connection for the live/AD/archive mailbox but fails to create a proxy connection for the on-premise public folders. The error I get is a certificate mismatch; it seems that when a user on the LAN accesses Outlook/Exchange it is using a different certificate vs. when Outlook is launched on a Wi-Fi network. When I look at the Outlook connection information, I see the connection to outlook.com for the AD/live/archive mailbox but no entry for a public folder connection. Our on-premise Exchange is 2010 SP3 with the latest CUs. The client is a domain-joined laptop with Windows 7 and Office 2010 SP2, latest Windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question then is: what do I need to do to make sure that the cloud user launching Outlook on the LAN uses the proper certificate (the wildcard third-party certificate, vs. the self-signed certificate which it looks like it may be using during the connection attempt)?
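
    Not an answer, but a hedged diagnostic sketch: which certificate Outlook is offered on the LAN is normally driven by the internally published Autodiscover/EWS names, so comparing the certificates bound to services on the on-premise CAS with the internal Autodiscover URI is a reasonable first check from the Exchange Management Shell:

        # Hypothetical diagnostics (run in the on-premise Exchange Management Shell):
        Get-ExchangeCertificate | fl Subject,CertificateDomains,Services,Thumbprint
        Get-ClientAccessServer | fl Identity,AutoDiscoverServiceInternalUri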

    Read the article

  • Zabbix doesn't update value from file with either log[] or vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment, where I have to generate the desired data to a file via script, upload that file to FTP from the host and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below: log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1] vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1] The line I am parsing looks like this: C: 8195Mb 5879Mb 2316Mb 28.2 The value I want to extract is the 28.2 at the end of the file. The problem I am currently trying to solve is that when I update the file (upload from host to FTP, then download from FTP to the Zabbix server), the value does not update. I was trying only log[] at the start, but I suspect that log[] treats the file as a real log file and doesn't re-check the same lines (although, according to the documentation, it should with the "all" value), so I added the vfs.file.regexp[] item too. The log[] item has received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines already caught for any changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent - I haven't found any interesting info about the problem. I have no idea what I might be doing wrong or what I do not know about how Zabbix performs these checks. I see two workarounds: appending more lines to the file instead of making a new one, or making new files and checking them with logrt[], but neither satisfies my requirements. Any help is greatly appreciated. Of course I will provide additional information if requested - for now I don't know what else might be useful.
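
    One hedged alternative, in case log[] position tracking really is the culprit: expose the value through a UserParameter so the agent re-reads the file on every poll instead of tracking log offsets. The key name and path below are illustrative, and this assumes the host's grep was built with PCRE support (-P):

        # /etc/zabbix/zabbix_agentd.conf on the machine where the file lands
        # (hypothetical key name; requires GNU grep with -P support; restart the agent after adding)
        UserParameter=custom.cdrive.usage,grep -oP 'C.*\s\K[0-9]+\.[0-9]$' /path/to/file.txt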

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs, as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only": (1) BUILTIN\Administrators, (2) MYDOMAIN\Schema Admins, (3) MYDOMAIN\Offer Remote Assistance Helpers, and (4) MYDOMAIN\MySecurityGroup. Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?

    Read the article

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster) and some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mail to [email protected] should be forwarded to [email protected]). My problem is that currently Postfix accepts mail even for those users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I specify mydestination as blank then Postfix validates recipients against virtual_mailbox_maps but rejects mail for local aliases of domain1.com. /etc/postfix/main.cf: myhostname = domain1.com mydomain = domain1.com mydestinations = $myhostname, localhost.$mydomain, localhost virtual_mailbox_domains = domain1.com virtual_mailbox_maps = hash:/etc/postfix/vmailbox virtual_mailbox_base = /home/vmail/domains virtual_alias_domains = domain2.com virtual_alias_maps = hash:/etc/postfix/virtual alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases virtual_transport = dovecot /etc/postfix/virtual: domain1.com right-hand-content-does-not-matter firstname.lastname user1 [more aliases..] domain2.com right-hand-content-does-not-matter @domain2.com @domain1.com /etc/postfix/vmailbox: [email protected] user1/Maildir [email protected] user2/Maildir /etc/aliases: root: :include:/etc/postfix/aliases/root webmaster: :include:/etc/postfix/aliases/webmaster [etc..] Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
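
    For comparison, a minimal sketch of one commonly used arrangement (entries illustrative, not a drop-in fix): keep domain1.com out of mydestination entirely and enumerate the alias-style recipients in virtual_alias_maps, pointing them at localhost so /etc/aliases still expands them; unknown recipients are then rejected because they appear in neither map:

        # main.cf (sketch)
        mydestination           = localhost
        virtual_mailbox_domains = domain1.com
        virtual_mailbox_maps    = hash:/etc/postfix/vmailbox
        virtual_alias_domains   = domain2.com
        virtual_alias_maps      = hash:/etc/postfix/virtual

        # /etc/postfix/virtual (sketch; hypothetical entries)
        root@domain1.com        root@localhost
        webmaster@domain1.com   webmaster@localhost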

    Read the article

  • Why is REMOTE_ADDR only sometimes available as an Apache environment variable?

    - by Xiong Chiamiov
    To avoid having to parse X-Forwarded-For in Varnish, I'm trying to set a header on the SSL terminator (currently Apache) that stores the direct client IP. On our development machine, this works: RequestHeader set X-Foo %{REMOTE_ADDR}e However, in staging it doesn't. Specifically, the header is empty, as illustrated by varnishlog: 13 TxHeader b X-Foo: (null) (On the development machine, this shows the IP address as expected.) Similarly, logging REMOTE_ADDR shows that it only appears to be populated on the dev machine: # Config LogFormat "%{X-Forwarded-For}i %{REMOTE_ADDR}e" combined CustomLog "/var/log/httpd/access_log" combined # Log file, staging <my ip> - # Log file, development <my ip> <my ip> Since the dev machine is, well, a dev machine, it is different in a number of ways; however, I can't track down which difference is causing this. The versions of Apache are the same (2.2.22), and I don't see anything relevant in any of the standard config files or /etc/sysconfig/httpd. And the rest of the system is reasonably similar, since they're built off the same CentOS 5 base image. I can't even tell from the Apache documentation whether REMOTE_ADDR is expected to exist as an environment variable, but it clearly works on one machine, whether by fluke or design, and the inconsistency is driving me mad.
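
    For what it's worth, in Apache 2.2 the %{...}e syntax in RequestHeader reads the subprocess environment table, and REMOTE_ADDR is only guaranteed to be there when something (CGI, SSI, mod_rewrite) has exported it - which would be consistent with it working on one box "by fluke". A commonly cited workaround is to copy the connection's address into an environment variable explicitly; a hedged sketch:

        # Sketch for Apache 2.2: capture the client address with mod_rewrite,
        # then copy it into the request header (header name as in the question).
        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} (.+)
        RewriteRule ^ - [E=CLIENT_IP:%1]
        RequestHeader set X-Foo %{CLIENT_IP}e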

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here. linvx$ ( ulimit -u 123; /bin/echo nst ) nst linvx$ ( ulimit -u 122; /bin/echo nst ) -bash: fork: Resource temporarily unavailable Terminated linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three ) one two three linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three ) -bash: fork: Resource temporarily unavailable Terminated one I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
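
    One hedged way to calibrate the number, assuming Linux: RLIMIT_NPROC (what ulimit -u sets) is checked against everything already running under the same real user ID system-wide, counting threads as well, not just the children of the current shell - which would explain why 122 fails while 123 succeeds. A rough baseline count:

        # Rough sketch: total processes/threads currently owned by this user,
        # which is the figure "ulimit -u" is compared against on Linux.
        ps -U "$USER" -o nlwp= | awk '{ total += $1 } END { print total }'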

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario - there is a small set (around 200 GB) of data that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs - they then run in differential discs off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is in place gets read protection set, and that is how it stays until it is retired. Obviously, I need as much uptime as possible. SO FAR we have run this by having the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move to a scenario where the storage is more centralized. Server 2012 supports always-on shares, but it seems that this only works with a clustered disc behind it. Is there any way around this for read-only file stores? All documentation points to stuff like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically - 2 servers, both with SSDs only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case - as the files are read only once in use.

    Read the article

  • postfix email gateway

    - by k-h
    I am setting up a Postfix email gateway. It will not hold any mail, but will accept email for my domain and forward it to another internal mail server, and relay mail out from the internal server. One of the main problems is that I am working on a live running system, and this will be an upgrade, so I am using a test domain which I will change at some point to the real domain. I tried various methods but found the simplest way (that worked) was to use a script to create an aliases file (from LDAP entries). There are various problems with this method. The main one is that the entries can't be of the simple form [email protected], because the gateway doesn't know where to send them. They have to be of the form: [email protected]. What I would like doesn't seem hard, but I can't get my head around the Postfix documentation. There seem to be various ways, but none of them seem to work. Most of the examples I have found on the web assume the mail is going to end up on the server. I want a list of users somewhere, preferably of the form: user1, user2, etc. rather than [email protected] (I can easily generate this list), and I would like Postfix to forward all email for example.com to a particular server, i.e. realmailserver.example.com. Can anyone suggest clues as to how I might do this?
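
    A hedged sketch of the usual relay-only layout (domain and host names taken from the question's placeholders, entries illustrative): declare the domain as a relay domain, route it to the internal server with a transport map, and list valid recipients in relay_recipient_maps - that file can be generated from a plain user1/user2-style list by appending the domain:

        # main.cf (sketch)
        relay_domains        = example.com
        transport_maps       = hash:/etc/postfix/transport
        relay_recipient_maps = hash:/etc/postfix/relay_recipients

        # /etc/postfix/transport
        example.com    smtp:[realmailserver.example.com]

        # /etc/postfix/relay_recipients (generated from the plain user list)
        user1@example.com    OK
        user2@example.com    OK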

    Read the article

  • My scanner isn't responding. It is connected to a SCSI interface using Windows XP

    - by Bob
    I have a Microtek ScanMaker 9600XL scanner. The best part about it is that it is 12x17 inches. Wowza! The worst part about it is that I've had it working at one point, with the same cable, same card, same computer, but have since re-installed Windows XP on it. Currently it will turn on, and blink the Power and Ready lights. They should be solid. I've done my best to find documentation for this, but all I've really gotten is the content on the Microtek site. I've tried turning the scanner on, then the PC, and turning the PC on, then the scanner. When I try launching the software it pops up a dialog saying "ScanWizard Pro can't find any scanners! Use SCSI Check to find a scanner." I know the scanner has a pair of little buttons on the back. These cycle a counter up/down. I think it goes 0-7. Any thoughts on what that does, or how to proceed with troubleshooting? I think my next step is to try each of those numbers, and try both PC booted first and scanner booted first for each of those numbers...

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! uses lots of system memory very fast!) $ cat /dev/zero | less From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests that it would limit it in such a way. In addition, I couldn't find any documentation via the google on the subject. Any light to this quite surprising discovery would be great! System Information: Quad core intel i7, 8gb ram. $ uname -a Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux $ less --version less 458 (GNU regular expressions) Copyright (C) 1984-2012 Mark Nudelman less comes with NO WARRANTY, to the extent permitted by law. For information about the terms of redistribution, see the file named README in the less distribution. Homepage: http://www.greenwoodsoftware.com/less $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04 LTS Release: 14.04 Codename: trusty
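
    A hedged first check: since less buffers piped input in memory, a per-process resource limit inherited from the shell (or from PAM / the distribution's defaults) is one plausible explanation for a cut-off around 2.5 GB; the limits actually in effect can be inspected directly:

        # Limits the shell (and anything it launches, including less) inherits:
        ulimit -a
        # Or inspect a running less process directly (PID is illustrative):
        cat /proc/12345/limits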

    Read the article

  • Isn't a hidden volume used when encrypting a drive with TrueCrypt detectable?

    - by neurolysis
    I don't purport to be an expert on encryption (or even TrueCrypt specifically), but I have used TrueCrypt for a number of years and have found it to be nothing short of invaluable for securing data. As relatively well known free, open-source software, I would have thought that TrueCrypt would not have fundamental flaws in the way it operates, but unless I'm reading it wrong, it has one in the area of hidden volume encryption. There is some documentation regarding encryption with a hidden volume here. The statement that concerns me is this (emphasis mine): TrueCrypt first attempts to decrypt the standard volume header using the entered password. If it fails, it loads the area of the volume where a hidden volume header can be stored (i.e. bytes 65536–131071, which contain solely random data when there is no hidden volume within the volume) to RAM and attempts to decrypt it using the entered password. Note that hidden volume headers cannot be identified, as they appear to consist entirely of random data. Whilst the hidden headers supposedly "cannot be identified", is it not possible to, on encountering an encrypted volume encrypted using TrueCrypt, determine at which offset the header was successfully decrypted, and from that determine if you have decrypted the header for a standard volume or a hidden volume? That seems like a fundamental flaw in the header decryption implementation, if I'm reading this right -- or am I reading it wrong?

    Read the article

  • How do I connect to and factory reset a Catalyst 3560 Switch?

    - by Josh
    My company just bought another company. In their server room they had some older hardware, which I would like to repurpose. One of these is a Cisco switch: C3560G-48TS-S. I found some instructions about this switch here, but this is not a guide for a beginner. I have no idea how to connect to this thing to begin running the commands. It says: Configure the PC terminal emulation software for 9600 baud, 8 data bits, no parity, 1 stop bit, and no flow control. But I can't find anything on how to do this (assuming with telnet?) or even what program to use. I also don't know how to find the IP address of the device to connect to it. My research also says that once I get in there, I need to run clear config all. Is this the right command? Also, what if I can't get the username and password for these devices? Is there some way to factory reset them (my only experience is with devices that have a hardware reset button)? EDIT: I should note that when I push the button on the front, the three lights blink, which according to the documentation indicates the switch is configured and "not available for express setup".
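
    For orientation only, not a verified procedure: the 9600/8-N-1 settings describe a serial console session over the rollover console cable (e.g. PuTTY in "Serial" mode on Windows, or screen /dev/ttyS0 9600 on Linux), not telnet, so no IP address is needed to get in. The commonly documented way to wipe the configuration once you have an enable prompt looks like the sketch below; if the password is unknown, holding the Mode button during power-on is the documented entry point for password recovery on this switch family.

        ! Hypothetical wipe sequence from a console session (prompts illustrative):
        Switch> enable
        Switch# write erase
        Switch# delete flash:vlan.dat
        Switch# reload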

    Read the article

  • Exchange 2010: Send emails via SMTP with custom From address to outside the domain

    - by marsze
    The requirement(s): (1) connect to Exchange via SMTP and (2) basic authentication and send emails with a (3) custom From address to (4) recipients outside the domain. I was able to get (1) - (3) working. I created a dedicated receive connector for this task and configured it like this: Permissions: ms-Exch-SMTP-Accept-Any-Recipient (for authenticated users) ms-Exch-SMTP-Accept-Authoritative-Domain-Sender (for authenticated users) ms-Exch-SMTP-Accept-Any-Sender (for authenticated users) Authentication: TLS Basic Authentication (without TLS) Exchange Server Authentication However, I'm still struggling with (4): I can send with "fake" From addresses to recipients inside the domain. Also, I can send with the original From address to recipients outside the domain. Can you tell me what I'm missing to configure Exchange to send emails with changed From addresses to recipients outside the domain? (Or is this even possible at all?) Thanks. UPDATE I have to correct myself: it seems to be working after all. There must be some issue with the mailbox I used for testing. It turned out it's working with other external mailboxes. However, I still have no idea what was different there... Anyway, you can take this as documentation on how to configure Exchange in such a way ;)
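
    For completeness, a hedged sketch of how the listed extended rights are typically granted on a receive connector from the Exchange Management Shell (connector and group names are illustrative):

        # Hypothetical example; repeat for each extended right listed above.
        Get-ReceiveConnector "SERVER\Custom SMTP Relay" | Add-ADPermission -User "MYDOMAIN\Relay Users" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Sender"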

    Read the article

  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to more specifically identify where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S. I want to use a MySQL cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal. Redundancy and data integrity are most important. My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node, and a management node. Is this a valid configuration for MySQL server? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball, and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.
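
    As a point of reference, a minimal config.ini sketch for MySQL (NDB) Cluster is shown below, with illustrative host names. One constraint worth checking against the "three of everything" plan: the number of data nodes must be a multiple of NoOfReplicas, so three data nodes do not pair with two-way replication.

        # config.ini sketch for the management node(s) (hypothetical hosts)
        [ndbd default]
        NoOfReplicas=2

        [ndb_mgmd]
        hostname=node1.example.com

        [ndbd]
        hostname=node1.example.com
        [ndbd]
        hostname=node2.example.com

        [mysqld]
        hostname=node1.example.com
        [mysqld]
        hostname=node2.example.com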

    Read the article

  • pgpool2+streaming replication failover on only 2 servers?

    - by aneez
    I am trying to configure pgpool2 and PostgreSQL 9.1 to handle failover. I currently have streaming replication running, and am using pgpool2 for read-only load balancing. I have 2 servers in my setup, both running PostgreSQL - 1 master and 1 slave. The master is also running pgpool2. My question is: how do I configure this setup to handle failover? Specifically, in the case that the master crashes, the slave has to take over and run pgpool2 as well. Most documentation and examples I have been able to find assume that pgpool2 is running on a separate server and thus "never" crashes. I may or may not be attacking the problem using the wrong tools. In my production setup I have a total of 3 identical servers, all in independent locations. The main goal of the setup is to achieve high uptime. Thus failover should be automatic, and bringing a failed node back up should cause only minimal downtime. I want all 3 nodes to be as close to identical as possible, and to be able to run with just 1 or 2 nodes available. And if possible I want to use load balancing to improve performance. If anyone can help me gain some insight into how to do this using my current setup, or suggest a different/better setup - thank you!
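
    For reference, a hedged pgpool.conf fragment: failover handling is normally wired up through failover_command, which pgpool runs with placeholders describing the failed node and the new master (the script path below is hypothetical). Newer pgpool-II releases also ship a watchdog feature with a delegate/virtual IP, which is the usual answer to "what if the node running pgpool itself dies", though whether it fits a two-node setup is exactly the judgment call being asked about here.

        # pgpool.conf (sketch; script path is hypothetical)
        # %d = failed node id, %h = failed host, %H = new master host, %P = old primary node id
        failover_command = '/etc/pgpool2/failover.sh %d %h %H %P'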

    Read the article

  • Formatting not retained in paste from Ditto (clipboard manager); plain text pasted instead [Solved: add supported types]

    - by Jeff Kang
    I'm trying to use Ditto on the Ditto documentation. If I were to copy the table of contents, then paste it (without Ditto) into the word processor, I get http://i.imgur.com/V1GU3.png, and the formatting is maintained. As a result of the copy operation, the table of contents also goes into the Saved Items List (= History List = lists the clips saved from the clipboard) in Ditto's main window. I open a blank document to paste from Ditto instead of the default clipboard, and either press Ctrl-`, the default Ditto window activation global hot key, or click the tray icon. From this point, I can do 3 things to close the Ditto window and place the item on the clipboard (the default clipboard?): (1) select the item and press Enter; (2) put the cursor on the item and double-click; (3) select the item and press Ctrl-C. Methods 1 and 2 send a right-click where the cursor is after the Ditto window closes (presumably to have the paste option ready to access?); Ctrl-C just closes the Ditto window. Whichever method is used, the contents are pasted in what I believe is plain text: http://i.imgur.com/mQAZH.png How do I keep the formatting that the default clipboard keeps? Thanks.

    Read the article

  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but everything seems to be missing major steps or is out of date. And every article keeps linking back to this Howto from rubyonrails.org that doesn't exist. The sense that I'm getting is that even if I manage to make this work, IIS's FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache & Mongrel/Passenger using ARR and URL Rewrite. Is there anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat running on this server using a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
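
    Sketching the ARR side of that plan (rule name and backend port are illustrative, not verified against this server): once ARR's proxy feature is enabled at the server level, a URL Rewrite rule on the site can forward everything to the Mongrel/Passenger backend:

        <!-- web.config excerpt (hypothetical backend on port 3000) -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Rails reverse proxy" stopProcessing="true">
                <match url="(.*)" />
                <action type="Rewrite" url="http://localhost:3000/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>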

    Read the article

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but I don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, mainly based on a Tomcat web server. We have two datacenters with about 10 hosts each. Hosts are of several types: database, Tomcats, some offline Java processes, memcached servers. All hosts are Linux CentOS. Up until now, when releasing a new version to production we've been using a set of in-house shell scripts that copy JARs/WARs and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult to operate all this and take code from development, through QA and staging, to production. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and this isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following: Supports the given list of technologies (Java, Tomcat, Linux, etc.) Provides easy deployment through the different stages, including QA and production Provides configuration management, e.g. setting server properties (what the connection URL of each host is, etc.), server.xml or context configuration, etc. Monitoring. If we can get monitoring in the same package, that'll be nice. If not, then yet another tool we can use to monitor our servers. Preferably, open source with tons of documentation ;) Can anyone share their experience? Suggest a few tools? Thanks!

    Read the article

  • Should the MAC Tables on a switch Stack be the same between sessions?

    - by Kyle Brandt
    According to Cisco's documentation: "The MAC address tables on all stack members are synchronized. At any given time, each stack member has the same copy of the address tables for each VLAN." However, when logged into the switch I see the following: ny-swstack01#show mac ad | inc Total Total Mac Addresses for this criterion: 222 ny-swstack01#ses 2 ny-swstack01-2#show mac ad | inc Total Total Mac Addresses for this criterion: 229 ny-swstack01-2#exit ny-swstack01#ses 3 ny-swstack01-3#show mac ad | inc Total Total Mac Addresses for this criterion: 229 ny-swstack01-3#exit ny-swstack01#ses 4 ny-swstack01-4#show mac ad | inc Total Total Mac Addresses for this criterion: 235 ny-swstack01-4#exit ny-swstack01#show mac ad | inc Total Total Mac Addresses for this criterion: 222 Going back and forth, this isn't just because it is changing over time either; within certain sessions there are entries that I don't see from the master session. We are currently waiting to hear back from Cisco on this, but has anyone run into this before? I stumbled upon this when looking into unicast flooding; one of the hosts that is a destination MAC of the flooding has a MAC entry that appears in session 3, but nowhere else. Also, I checked and all sessions show the same aging time.

    Read the article

  • How to make a redundant desktop system with daily snapshots? (Is btrfs ready for use?)

    - by TestUser16418
    I want to configure a desktop system in which the home filesystem would be redundant (e.g. RAID-1), and would have weekly snapshots taken. I've already done this with ZFS; the snapshot system is wonderful, and with send/recv you can easily create backups on external media. Unfortunately, at this point, I want GNU+Linux and not FreeBSD or Solaris, so I'm looking for suggestions for good alternatives. I reckon that my alternatives are: btrfs - it seems to be exactly what I need; it has snapshots and commands that allow you to easily replicate zfs send. Yet all documentation mentions that it's still experimental. I can't seem to find any actual reports on its reliability or usability issues. Can you point me to any information on that issue that could clarify whether it would be a possible choice? I have a large preference for this option, mostly because I don't want to reformat the drives when btrfs becomes ready, but there's no information on whether it's usable at all, whether it's a silly idea to use it, etc. The question that I cannot get the answer to is what "experimental" means. lvm snapshots and ext4 - preferably not, since it can consume an awful amount of space when new files are created. Creating 200 GB of files requires 200 GB of free space and 200 GB additionally for snapshots. I have also found it unreliable -- a failed metadata rewrite results in an unreadable PV. I'm wondering how btrfs would compare here. A single filesystem (ext4) on a RAID-1 array with custom COW snapshots with hardlinks (like cp -al). That's my current preference if I can't use btrfs. So how experimental is btrfs, which should I choose, and do I have any other options? What if I don't keep external incremental backups, would that affect my choice?
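
    For the btrfs option, a hedged sketch of the moving parts being weighed (device names, paths and snapshot names are illustrative): mirrored data and metadata at mkfs time, a subvolume for the data, read-only snapshots on a schedule, and send/receive for the external copies:

        # Sketch only; device names, paths and dates are illustrative.
        mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
        btrfs subvolume create /home/data
        btrfs subvolume snapshot -r /home/data /home/.snapshots/data-weekly-01
        btrfs send /home/.snapshots/data-weekly-01 | btrfs receive /mnt/external/backups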

    Read the article

  • GPG - why am I encrypting with subkey instead of primary key?

    - by khedron
    When encrypting a file to send to a collaborator, I see this message: "gpg: using subkey XXXX instead of primary key YYYY". Why would that be? I've noticed that when they send me an encrypted file, it also appears to be encrypted to my subkey instead of my primary key. For me, this doesn't appear to be a problem; gpg (1.4.x, Mac OS X) just handles it and moves on. But for them, with their automated tool setup, this seems to be an issue, and they've requested that I be sure to use their primary key. I've tried to do some reading, and I have Michael Lucas's "PGP & GPG" book on order, but I'm not seeing why there's this distinction. I have read that the key used for signing and the key used for encryption would be different, but I assumed at first that was about public vs. private keys. In case it was a trust/validation issue, I went through the process of comparing fingerprints and verifying that, yes, I trust this key. While I was doing that, I noticed the primary key and subkey had different "usage" notes: the primary key shows "usage: SCA" and the subkey shows "usage: E". "E" seems likely to mean "Encryption". But I haven't been able to find any documentation on this. Moreover, my collaborator has been using these tools and techniques for some years now, so why would this only be a problem for me?
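
    For context, the usage letters shown by --edit-key are capability flags - S = sign, C = certify, A = authenticate, E = encrypt - so gpg picks the subkey simply because it is the only key on that certificate allowed to encrypt. A hedged example of pinning a specific key anyway (the trailing '!' forces that exact key, which only makes sense if the chosen key actually carries the E flag; the key ID is illustrative):

        # Force a specific (sub)key by appending '!' to its ID.
        # gpg should refuse this if the pinned key lacks the encrypt capability.
        gpg --encrypt --recipient 'DEADBEEF!' file.txt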

    Read the article

  • VirtualBox bridged network not working as expected

    - by iby chenko
    I am having a hard time getting bridged networking to work with VirtualBox. The idea is to have the host as well as one or more guests on the same LAN. Using NAT (the default), I do get access to the internet and to any node on the LAN when working from one of the VM guests. However, no LAN node, including the host, can access (or ping) a guest in a VM. I need to be able to use any guest as if it were a physical computer on the network (it needs to be accessible by any machine on the LAN). According to my understanding of the VirtualBox documentation, this should be Bridged mode. I think I set it correctly; well, actually there is not much to it: (1) select Bridged mode in the VM network setup, (2) select the physical NIC of the host to connect the bridge to, (3) start the VM. When I do this, each VM does get a new IP address that corresponds to the LAN settings: 192.168.1.100, 192.168.1.102, 192.168.1.103, etc., where the host is 192.168.1.80 / 255.255.255.0 (IP addresses above 100 are served by the DHCP server). This seems to be correct based on what I know about Ethernet. From a VM I can ping other nodes like 192.168.1.50 etc., and I still get Ethernet access. So far so good... But I STILL cannot ping any of the other VMs (running ones, of course). I cannot ping them from other VMs, from the host or from other nodes on the LAN. Aside from the fact that the IP addresses handed to the guests are now local, this still acts the same as NAT. What is going on? What am I missing? Regards, I
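
    A hedged checklist from the host side (VM name and interface are illustrative): rule out guest firewalls dropping inbound ICMP first, confirm which adapter the bridge is attached to, and, if the host uses wireless, be aware that many Wi-Fi drivers do not forward frames for foreign MAC addresses, which can break exactly this LAN-to-guest direction even though the guests' own outbound traffic works:

        # Inspect and (with the VM powered off) adjust the bridged NIC settings:
        VBoxManage showvminfo "GuestVM" | grep -i "NIC 1"
        VBoxManage modifyvm "GuestVM" --nic1 bridged --bridgeadapter1 eth0
        VBoxManage modifyvm "GuestVM" --nicpromisc1 allow-all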

    Read the article

  • How does the internet protocol handle network card numbers?

    - by Giorgio
    I know that data packets sent over the internet carry the source and destination IP address, so that the protocol can route the data to the correct destination and keep track of the source address of the packet. But what about the network card address? As far as I know, each network card has a unique identification number. Is this also transmitted with a TCP/IP packet? And when a packet is received at its destination, how is the IP address mapped to a network card number? In other words: on the sender's side, does the sender store its network card number in the IP packets that it is sending? On the receiver's side, which component maps the IP address to the receiver's network card number when a packet is received? E.g., in a home network, does the modem/router map the destination IP address of an incoming packet to a network card number and deliver the packet directly to that network card? A link to documentation on these topics would be of great help.
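
    As background: IP packets do not carry MAC addresses at all - the MAC lives in the Ethernet frame header, is rewritten at every hop, and on the final LAN the last router (e.g. the home modem/router) uses ARP to map the destination IP address to the destination NIC's MAC before handing the frame over. The mappings a machine has learned can be inspected directly:

        # Show the IP-to-MAC mappings this host has learned via ARP (Linux):
        ip neigh show
        # Older equivalent:
        arp -a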

    Read the article
