Search Results

Search found 1822 results on 73 pages for 'collaborative filtering'.

  • Gathering buslogic SCSI hardware and virtual machine operating system

    - by Julian
    I'm trying to use PowerShell to get the SCSI hardware from several virtual servers, along with the operating system of each specific server. I've managed to get the specific SCSI hardware that I want with my code; however, I'm unable to figure out how to properly get the operating system of each of the servers. I'm also trying to send all the data that I find to a CSV log file, but I'm unsure how to make a PowerShell script create multiple columns. Here is my code (almost works, but something's wrong):

        $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"
        Get-VM | Foreach-Object {
            $vm = $_
            Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } |
                Foreach-Object { Get-VMGuest -VM $vm } |
                Foreach-Object { Write-Output $vm.Guest.VmName >> $log }
        }

    I don't receive any errors when I run this code, but I'm only getting the names of the servers and not the OS. I'm also not sure how to make the OS appear in a different column from the server name in the CSV log that I'm creating. What do I need to change in my code to get the OS version of each virtual machine and output it in a different column of my CSV log file?

    EDIT: Here's a more in-depth look at things I've tried that have all failed:

        Get-VM | Foreach-Object {
            $vm = $_
            $svm = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }
            Foreach-Object { Get-VMGuest -VM $svm } | Foreach-Object { Write-Output $svm >> $log }
        }

        #Get-VM | Foreach-Object {
        #    $vm = $_
        #    Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } #| write-host $vm
        #    | Foreach-Object {
        #        #get-VMGuest -VM $_ |
        #        #write-host $vm
        #        #get-VMGuest -VM $vm } | Foreach-Object{
        #        #write-output $vm.VmName >> $log
        #        #write-output $vm.guest.VmName, get-VmGuest -VM $vm >> $log NO GOOD
        #        Write-host $vm.Guest.VmName #+ get-vmGuest -vm $VM >> $log
        #    }
        #}

    I'm not sure why Get-VMGuest fails, though. I'm getting the SCSI hardware, filtering it down to just BusLogic, and then wanting to get the operating system of only the filtered VMs. I don't see where my code fails.
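
    One way to get both values into separate CSV columns, as a hedged sketch: this assumes VMware PowerCLI, where the guest OS string reported by VMware Tools is typically exposed as Guest.OSFullName (the property name may differ across PowerCLI versions, and it is empty when Tools isn't running). Building objects and piping them to Export-Csv gives one column per property:

        $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"
        Get-VM | ForEach-Object {
            $vm = $_
            # keep only VMs that have at least one BusLogic controller
            if (Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }) {
                New-Object PSObject -Property @{
                    Name = $vm.Name
                    OS   = $vm.Guest.OSFullName   # OS string reported by VMware Tools
                }
            }
        } | Select-Object Name, OS | Export-Csv -Path $log -NoTypeInformation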

  • Exchange 2010 issuing NDRs to Hotmail/Live & few other domains on receipt of message

    - by John Patrick Dandison
    I'm working through a beast of an issue at the moment:

        - Exchange 2010, single server, on-premises
        - Hybrid deployment to Office 365
        - ESMTP filtering turned off on the ASA
        - Certain domains (most consistently, Hotmail/Live) cannot send us mail

    At one point, we couldn't send out either, but I created a new Send Connector that forces HELO instead of EHLO. I turned on SMTP logging; an example of the failed inbound message connection is below. I've read that the problem could be reverse DNS, i.e., the address in the Exchange SMTP banner needs to reverse-resolve back to the same IP. Since it's the default Exchange connector, its banner is the server's name, but the DNS name of the MX record is different. I'm waiting for the PTR records to update to reflect the internal name as well. Is that the right direction? Is this all DNS, or something different?

    SMTP session log (single failed session for illustration):

        SMTPSubmit SMTPAcceptAnySender SMTPAcceptAuthoritativeDomainSender AcceptRoutingHeaders
        220 ExchangeServerName.internalSubDomain.example.com Microsoft ESMTP MAIL Service ready at Mon, 15 Oct 2012 09:57:24 -0400
        EHLO col0-omc3-s4.col0.hotmail.com
        250-ExchangeServerName.internalSubDomain.example.com Hello [65.55.34.142]
        250-SIZE
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-STARTTLS
        250-X-ANONYMOUSTLS
        250-AUTH NTLM LOGIN
        250-X-EXPS GSSAPI NTLM
        250-8BITMIME
        250-BINARYMIME
        250-CHUNKING
        250-XEXCH50
        250-XRDST
        250 XSHADOW
        MAIL FROM:<[email protected]> 08CF5268DABBD9AA;2012-10-15T13:57:24.564Z;1
        250 2.1.0 Sender OK
        RCPT TO:<[email protected]>
        250 2.1.5 Recipient OK
        XXXX 1282 LAST
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command
        XXXXXXXXX from COL002-W38 ([65.55.34.135]) by col0-omc3-s4.col0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675);
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command " XXXX 15 Oct 2012 06:57:24 -0700"
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command XXXXXXXXXXX <[email protected]>
        Tarpit for '0.00:00:05'
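
    If it helps to verify the DNS and banner alignment described above, a hedged sketch (the host name, IP, and connector name below are placeholders, not values from the post):

        # forward and reverse lookups should agree with the name the connector advertises
        nslookup mail.example.com
        nslookup 203.0.113.25

        # Exchange Management Shell: inspect and, if needed, align the advertised FQDN
        Get-ReceiveConnector | Format-Table Name,Fqdn,Bindings
        Set-ReceiveConnector "SERVERNAME\Default SERVERNAME" -Fqdn "mail.example.com"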

  • terminal-window viewer for tab-delimited files in *nix?

    - by khedron
    I work with a lot of tab-delimited data files, with varying columns of uncertain length. Typically, the way people view these files is to bring them down from the server to their Windows or Mac machine, and then open them up in Excel. This is certainly fully-featured, allowing filtering and other nice options. But sometimes you just want to look at something quickly on the command line. I wrote a bare-bones utility to display the first <n> lines of a file like so:

        --- line 1 ---
        1:{header-1}
        2:{header-2}
        3:...
        --- line 2 ---
        1:{data-1}
        2:{data-2}
        3:...

    This is, obviously, very lame, but it's enough to pipe through grep, or to figure out which header columns to use "cut -f" on. Is there a *nix-based viewer for a terminal session which will display rows and columns of a tab-delimited file and let you move the viewing window over the file, or otherwise look at the data? I don't want to write this myself; instead, I'd just make a reformatter which would replace tabs with spaces for padding so I could open the file up in emacs and see aligned columns. But if there's already a tool out there to do something like this, that'd be great! (Or, I could just live with Excel.)
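
    For what it's worth, a common no-install approach is to combine column and less (a sketch; the file name is a placeholder, and $'\t' is bash syntax):

        # -t aligns fields into a table, -s makes tab the only separator;
        # less -S chops long lines so the arrow keys pan horizontally
        column -t -s $'\t' data.tsv | less -S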

  • Reference entire excel 2007 sheet to another sheet

    - by Keikoku
    I have one sheet with 100 rows of data. I have a second sheet that will use the same data but with filters applied. In fact, I will have a dozen sheets, each with different filters. My goal is to have references from every sheet to the first, so that prior to filtering they all contain exactly the same data. This way I only have to modify one sheet and all sheets will reflect the changes. The purpose is to create external links from Word to Excel to display specific rows, but there appears to be a limitation in linking where it displays absolutely everything that you see on the sheet itself (and each view must be different). I can manually reference the first cell and then drag the black box to easily expand it to the required number of rows, but that would require me to go into each sheet and drag the black box again whenever I add new entries to the master copy. Is there an easy way to do this? Note that the issue is the same as in "Easy way for users to update linked documents Excel 2007", except this time I am using a different approach. Solutions to both would be welcome.
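
    One possible pattern, as a sketch (the sheet name "Master" is an assumption): reference a generously oversized range so new master rows show up without re-dragging, e.g. in cell A1 of each filtered sheet:

        =IF(Master!A1="", "", Master!A1)

    filled across and down well past the current 100 rows; blank master cells then come through as empty strings instead of zeros.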

  • netlogon errors

    - by rorr
    I have two instances of MSSQL 2005 and am using CA XOsoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup worked great for several months until we recently restricted the RPC ports on both nodes of the master (5000-6000, using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for SQL Windows authentication, and NETLOGON Event ID 5719:

        This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.

    We also see group policies failing to update and cluster file shares going offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers were rebooted, but the problems persist. The domain controllers are not showing any errors, and running dcdiag and netdiag shows everything is fine. We have noticed that the XOsoft service ws_rep.exe is using a lot of handles (8-9k), about the same number that SQL Server is using. As soon as XOsoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOsoft, but I'm not sure that the problem is actually XOsoft; it may just be the one bringing the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.
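
    On the "did the revert actually take" question, a hedged sketch: rpccfg, as I understand it, writes its port restriction to the RPC\Internet registry key, so inspecting that key shows the effective setting regardless of what was typed at the tool:

        reg query "HKLM\Software\Microsoft\Rpc\Internet"
        :: if the key still exists with a Ports value, the range is still restricted;
        :: deleting the key and rebooting restores the default dynamic RPC ports
        reg delete "HKLM\Software\Microsoft\Rpc\Internet" /f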

  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems; systems that my own server will inevitably interact with at some point. Hence, my questions:

        - What things should be kept in mind and double-checked when setting up an email server?
        - What resources are available for checking if my email server is set up correctly?

    I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these."

    Some things I've discovered myself:

        - Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup for the peer IP address when receiving. Matching a reverse lookup with a follow-up forward lookup is probably employed to weed out open relays run through malware on home networks.
        - Make sure the user in the From address exists. The From address is easily spoofed. A receiving mail server may try to contact the mail server in the From domain, and see if the From user actually exists.
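
    On the DNS point above, a quick check might look like this (a sketch; the host name and IP are placeholders):

        # forward-confirmed reverse DNS: the two answers should point at each other
        dig +short A mail.example.com
        dig +short -x 203.0.113.25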

  • phpmyadmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to log in:

        # phpMyAdmin - Web based MySQL browser written in php
        #
        # Allows only localhost by default
        #
        # But allowing phpMyAdmin to anyone other than localhost should be considered
        # dangerous unless properly secured by SSL

        Alias /phpMyAdmin /usr/share/phpMyAdmin
        Alias /phpmyadmin /usr/share/phpMyAdmin

        <Directory "/usr/share/phpMyAdmin/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order Allow,Deny
            Allow from all
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/>
            <IfModule mod_authz_core.c>
                # Apache 2.4
                <RequireAny>
                    Require ip 127.0.0.1
                    Require ip ::1
                </RequireAny>
            </IfModule>
            <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Deny,Allow
                Allow from All
                Allow from 127.0.0.1
                Allow from ::1
            </IfModule>
        </Directory>

        # These directories do not require access over HTTP - taken from the original
        # phpMyAdmin upstream tarball
        #
        <Directory /usr/share/phpMyAdmin/libraries/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/lib/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/frames/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        # This configuration prevents mod_security at phpMyAdmin directories from
        # filtering SQL etc. This may break your mod_security implementation.
        #
        #<IfModule mod_security.c>
        #    <Directory /usr/share/phpMyAdmin/>
        #        SecRuleInheritance Off
        #    </Directory>
        #</IfModule>

    When I go to the phpMyAdmin web page, I am not prompted for a user and password before getting the error message:

        Forbidden: You don't have permission to access /phpmyadmin on this server.

    My system is Fedora 20.
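
    One thing worth checking, as a hedged sketch: Fedora 20 ships Apache 2.4, where the old Order/Allow directives are only honored if the optional mod_access_compat is loaded, so the main directory block may never actually grant access. A 2.4-native equivalent would be along these lines:

        <Directory "/usr/share/phpMyAdmin/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            <IfModule mod_authz_core.c>
                # Apache 2.4
                Require all granted
            </IfModule>
            <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Allow,Deny
                Allow from all
            </IfModule>
        </Directory>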

  • Best grep-like tool

    - by e-satis
    I search within files a lot, and I used to love grep. Then I learned of the existence of egrep, so I switched to benefit from the advanced regexps. Then I discovered the Eclipse search tool, which is much easier to use than grep. Then I found ack: fast, easy, powerful. And now I use grin, which is smooth for Pythonistas. I know there are also a couple of tools of this kind with a GUI. So what tool do you use, and why do you think it's the best? Practical features generally are:

        - fast to fire up and use
        - speedy processing
        - automatically ignores useless files
        - colored output
        - outputs lines, filename, context
        - allows complex regexps
        - allows custom filtering and output
        - GUI + command-line integration
        - lets you open an editor from the result set

    There are some related posts on SO:

        http://stackoverflow.com/questions/87350/what-are-good-grep-tool-for-windows
        http://stackoverflow.com/questions/981601/colorized-grep-viewing-the-entire-file-with-highlighting
        http://stackoverflow.com/questions/1028107/is-there-some-unix-util-that-will-allow-me-to-grep-multiple-files-with-little-type
        http://stackoverflow.com/questions/1027906/unix-find-grep-syntax-vs-awk
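
    For comparison, typical invocations of the tools mentioned (a sketch; the pattern and path are placeholders):

        grep -rn --color=auto 'pattern' src/    # recursive, line-numbered, colored
        ack 'pattern'                           # skips VCS dirs and binaries by default
        grin 'pattern'                          # similar defaults, Python-oriented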

  • Linux Cups raspberry pi offload processing to server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing, the Pi chokes on acting as a complete CUPS-based print server. It seems a little underpowered for some of the Ghostscript processing and other filtering that occurs; particularly on larger or complex documents, the processing time can be 5 or more minutes. My question is: can the processing be largely done elsewhere, with the prepared end product of the processing chain fed to the Pi for output on the connected printer? So in this scenario, any arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a PS file and then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.
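
    A rough sketch of the pdftops-plus-raw-queue flow described above (host and printer names are placeholders; raw mode assumes the file is already in the printer's native language, e.g. PostScript for a PostScript printer, so non-PostScript printers would need the heavy box to render to their native format instead):

        # on the powerful machine: render once
        pdftops document.pdf document.ps
        scp document.ps pi:/tmp/document.ps

        # on the Pi: hand the pre-rendered file to CUPS untouched
        lp -d myprinter -o raw /tmp/document.ps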

  • tc u32 --- how to match L2 protocols in recent kernels?

    - by brownian
    I have a nice shaper, with hashed filtering, built on a Linux bridge. In short, br0 connects the external and internal physical interfaces, and VLAN-tagged packets are bridged "transparently" (I mean, no VLAN interfaces are there). Now, different kernels do it differently. I may be wrong about the exact kernel version ranges; please forgive me. Thanks.

    2.6.26

    So, in Debian, 2.6.26 and up (up to 2.6.32, I believe), this works:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
            u32 ht 1:64 match ip dst 192.168.1.100 flowid 1:200

    Here, the kernel matches two bytes in the "protocol" field against 0x8100, but counts the beginning of the IP packet as the "zero position" (sorry for my English if I'm a bit unclear).

    2.6.32

    Again in Debian (I've not built a vanilla kernel), 2.6.32-5, this works:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
            u32 ht 1:64 match ip dst 192.168.1.100 at 20 flowid 1:200

    Here, the kernel matches the same for protocol, but counts the offset from the beginning of this protocol's header: I have to add 4 bytes to the offset (20, not 16, for the dst address). That's OK; it seems more logical, at least to me.

    3.2.11, the latest stable now

    This works, as if there were no 802.1q tag at all:

        tc filter add dev internal protocol ip parent 1:0 prio 100 \
            u32 ht 1:64 match ip dst 192.168.1.100 flowid 1:200

    The problem is that I haven't found a way to match the 802.1q tag so far.

    Matching the 802.1q tag in the past

    I could do this before, as follows:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
            u32 match u16 0x0ed8 0x0fff at -4 flowid 1:300

    Now I'm unable to match the 802.1q tag with at 0, at -2, at -4, at -6, or the like. The main issue is that I get a zero hit count: this filter is not being checked at all ("wrong protocol", in other words). Please, anyone, help me :-) Thanks!
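
    An untested sketch that might be worth a try on 3.x: since the kernel now strips the VLAN tag before the classifier sees the packet, the tag has to be read from packet metadata rather than at a byte offset, e.g. with the meta ematch (this assumes the kernel and tc were built with ematch support; 3800 is 0x0ed8, the VLAN ID from the old rule above):

        tc filter add dev internal protocol ip parent 1:0 prio 99 \
            basic match 'meta(vlan mask 0x0fff eq 3800)' flowid 1:300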

  • Reliable router with good VPN and WAN Throughput [closed]

    - by Asdande
    I have two Cisco RV180 VPN routers. These routers are giving me lots of problems: web pages won't load correctly, responses are slow, plus many other issues. I have several cases pending with Cisco. I give up on these routers. I would like to know if you guys can recommend a reliable router for our 3 branches (NY, the main office; SC; and FL). In NY we have 55 users, in the SC branch 6 users, and in Florida only 1 (will grow soon). I need a router capable of supporting:

        - 3 site-to-site VPN connections
        - VPN throughput of at least 40-50 Mbps
        - WAN throughput of at least 100 Mbps and up
        - A PPTP server for at least 5 PPTP users
        - Web filtering; all users need access to the internet
        - A good firewall
        - Port forwarding for an FTP server, able to show the public IPs of FTP users (the RV180 cannot do that; it just shows me the router's LAN interface IP. I opened a case with Cisco, now escalated to level 2, and still have no answer or workaround)
        - Dual WAN ports for load balancing or backup internet
        - Gigabit WAN/LAN ports
        - Price in the $400-$500 range

    I was thinking of the TP-LINK TL-ER6120 or TL-ER6020, according to the review on smallnetbuilder.com (http://www.smallnetbuilder.com/lanwan/lanwan-reviews/31983-tp-link-tl-er6020-safestream-gigabit-dual-wan-vpn-router-reviewed), but I don't want to make another mistake as I did when I bought the Cisco RV180. Thank you in advance.

  • Configuration of Server root email - Change Address and Name on outgoing email

    - by JTWOOD
    As a newbie Postfix user, I've gotten so far, and now I am stuck on a SMALL problem. I would like to configure my local network servers to send alerts and the like using the following:

        1) From address: [email protected]
        2) From name: hostname

    I can get #1 to work fine using smtp_generic_maps. The problem is that in my email client the name is listed as "root", as in the header shows the following:

        Date: Sun, 29 Jul 2012 13:21:01 -0400 (EDT)
        From: [email protected] (root)
        To: undisclosed-recipients:;

    I'd like to change it to: From: [email protected] (Zeus). I imagine that this can be done in header_checks, but so far I haven't gotten anything to work, and before I waste a ton of time trying to get this working, I'd like to make sure I am on the right track. My aliasing and generic maps are set up correctly (as far as I can see and know, the results are correct!). I just want to change that last bit in the From field to reflect the hostname. I would also like to add something to the subject of the outgoing messages for easy filtering, something like:

        Subject: [Zeus.domain] - "Original Subject"

    Any suggestions are much appreciated. Thanks!
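
    A hedged sketch of what that could look like with smtp_header_checks, which rewrites headers on outgoing SMTP mail only (the host and domain names below are placeholders, since the real addresses are redacted above):

        # main.cf
        smtp_header_checks = pcre:/etc/postfix/smtp_header_checks

        # /etc/postfix/smtp_header_checks
        /^From:\s*root@zeus\.example\.com/  REPLACE From: Zeus <root@zeus.example.com>
        /^Subject:\s*(.*)/                  REPLACE Subject: [Zeus.example.com] - $1

    After editing, `postfix reload` picks up the change; pcre maps are read directly, so no postmap step is needed.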

  • 2012 R2 services will not start after promotion to Domain Controller

    - by Cybersylum
    Having a peculiar issue promoting a Windows 2012 R2 server in a domain at the 2003 domain/forest functional level. I built a new 2012 R2 server and added the following software (LabTech, AppAssure, ESET A/V, and TeamViewer). It activated and appeared to be working fine. I added the Active Directory Domain Services role and completed the configuration (domain/forest prep and DC promotion). All appeared to go well. I rebooted the server, and that's where the peculiar stuff began. I noticed the server indicated it needed to be activated again, but it would not accept the key. I verified the key was good. That's when I noticed the Software Protection service (as well as many other core services: Base Filtering Engine, DHCP Client, firewall, etc.) would not start. The error message for all of them was "Access Denied". I called MS, and they wanted to troubleshoot at a service level. Their fix was to use Procmon, identify the resource that needed permissions (registry key, file, or folder), and add "Everyone" with full control. That got the services to start, but the problem reappeared after a reboot. Thinking the issue might have been with the antivirus package during the promotion process, I rebuilt the DCs from scratch and removed the metadata from AD (as I could not demote the machines: "RPC server unavailable"). I tried to promote the newly built machines again, the only changes to the brand-new machines being critical updates. Again the promotion appeared to work fine, but upon reboot (and a long wait to allow replication to occur) similar problems began to reappear. I have verified that the schema updates are correct (schema version is 69, which corresponds to Windows 2012 R2). I am not finding much about this issue through my own searches, so I thought I would post this to see if anyone else has seen anything similar.

  • How to quickly check if two columns in Excel are equivalent in value?

    - by mindless.panda
    I am interested in taking two columns and getting a quick answer on whether they are equivalent in value or not. Let me show you what I mean. It's trivial to make another column (EQUAL) that does a simple compare for each pair of cells in the two columns. It's also trivial to use conditional formatting on one of the two, checking its value against the other. The problem is that both of these methods require scanning the third column, or the color of one of the columns. Often I am doing this for columns that are very, very long, where visual verification would take too long, and I don't trust my eyes anyway. I could use a pivot table to summarize the EQUAL column and see if any FALSE entries occur. I could also enable filtering, click on the filter for EQUAL, and see which entries are shown. Again, all of these methods are time-consuming for what seems to be such a simple computational task. What I'm interested in finding out is whether there is a single-cell formula that answers the question. I attempted one above in the screenshot, but clearly it doesn't do what I expected, since A10 does not equal B10. Anyone know of one that works, or some other method that accomplishes this?
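
    Two single-cell formulas that are commonly used for this (a sketch; adjust the ranges to the real column lengths):

        =SUMPRODUCT(--(A1:A100<>B1:B100))

    returns the number of mismatching rows (0 means the columns are identical), and

        =AND(EXACT(A1:A100,B1:B100))

    entered as an array formula with Ctrl+Shift+Enter, returns TRUE only when every pair matches (case-sensitively).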

  • MysqlTunner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding query_cache_size, no matter how much I increase the value (I tried up to 512MB). On the other hand, it warns that:

        Increasing the query_cache size over 128M may reduce performance

    Here are the latest results:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51

        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I'd appreciate your hints on interpreting the recommendation and optimizing the database configs.
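
    For context, a hedged reading of the two numbers driving the conflicting advice: the "maximum possible memory usage" line is roughly global buffers plus max_connections times the per-thread buffers, so shrinking max_connections or the per-thread buffers is what reduces the 132.2G worst case, while the high prune count is what keeps triggering the query_cache_size suggestion. Illustrative my.cnf lines (not recommendations):

        [mysqld]
        max_connections  = 500     # 24.2G + 500 x 92.2M is roughly 70G, under the 76GB installed
        query_cache_size = 128M    # staying at the threshold the tuner itself warns about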

  • Write stderr to a file using PowerShell

    - by Zian Choy
    How do I capture error messages from a PowerShell-launched command in a text file? I searched the Internet for a while and found that, supposedly, I should be able to do something like:

        cmd /c "big blob of text >C:\output.txt 2>c:\errors.txt"

    to direct the output to output.txt and the errors to errors.txt, but when I try to run the command, I get the following error:

        cmd.exe : The filename, directory name, or volume label syntax is incorrect.
        At C:\Users\Zian\Desktop\Untitled1.ps1:27 char:4
        + cmd <<<< /c $command
            + CategoryInfo          : NotSpecified: (The filename, d...x is incorrect.:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    Furthermore, if I try to run the command without everything starting at "2", then the command executes correctly and output.txt catches the right output. I looked at "Redirect stderr to variable in powershell", but it wasn't helpful, because the answer to that question suggests capturing the entire output and filtering it in memory. In my case, I am backing up every database on a computer, and since the databases won't fit in my laptop's RAM, I cannot use that question's solution. I also found tantalizing suggestions about using $err = @(command goes here), but with no information on what to do other than simply inserting that line of text. I tried to utilize the search function on Server Fault with the string "@()", but it did not return any results. What can I do to get the error messages into errors.txt?
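
    One approach that sidesteps the cmd.exe quoting problem is to let PowerShell redirect the streams itself (a sketch; the executable and paths are placeholders):

        # stream 1 (stdout) and stream 2 (stderr) go straight to files,
        # so nothing large is held in memory
        & .\backup-tool.exe --all-databases > C:\output.txt 2> C:\errors.txt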

  • Dropbox picture sync: Skip RAW files?

    - by Steven Lu
    I like the convenience of having Dropbox keep track of my photos, because it tends to work with my devices over 3G (I am often tethering to my phone with my iPad and MacBook) as well as WiFi, but it's a waste of network traffic to sync the raw files from my camera or memory card. It clutters up the Dropbox list, and the files are just huge. Is there a way to configure the Dropbox client so that it ignores a certain file extension for the picture sync?

    Also, I suspect that if I just go and delete the raw files, then the next time I plug in the memory card and tell Dropbox to sync, it will re-download the raw files. Which would be terribad. I could switch to iCloud for Photo Stream, I suppose, but there would be no access via 3G that way. And I've already got years of experience with Dropbox, so I know it's going to just work. I think any method that works for filtering files to exclude from Dropbox sync in general should work here too.

    Edit: Wow, there are 19k votes for this exact request.

  • Strange WiFi problem - network is "not connected" but works

    - by GalacticCowboy
    Background

    I'm using a Windows XP tablet PC connected to my home network. I do broadcast the SSID, but otherwise the wireless network is locked down as tight as I can make it: WPA2 with a strong key, MAC filtering, etc. I've had this computer for about 4 years and the router for about 6 months. Before that, the previous router was set up in the same way, and I've never had any particular problems.

    Problem

    This morning (as I type this, as a matter of fact), Windows is reporting that none of my network connections are connected. Yet somehow my Internet connectivity still works! ipconfig reports a valid IP address, domain, etc., even though Windows apparently doesn't know about it. I tried repairing the connection and it had no effect; the repair eventually timed out. I'm pretty sure that I could reboot the computer and/or the router, etc., and it would fix the problem. However, I'm more interested in knowing whether any of you have ever seen anything like this, and what might have caused it. Since it works, I'm not inclined to mess with it too much. My concern is that it's a precursor to bigger problems.

  • Linux as a gateway (no NAT)

    - by Hugo
    I'm trying to configure a Linux server as a gateway/router, but I can't get it to work, and all the information I've managed to find is NAT-related. I have a public IP block for the gateway and the devices behind it, so I want the gateway to simply route packets to the internet; again: no NATing! I've managed to get the gateway to access the internet successfully (that was just a matter of configuring the IP and GW), and the computers behind it can communicate with it.

    EDIT: more info. This is actually an IPv6 block, 2800:40:403::0/48 (but I've found that most utilities and instructions can be easily adapted from IPv4 to IPv6 with little hassle). The server has two ports:

        wan: 2800:40:403::1/48
        lan: 2800:40:403::3/48

    One of the computers behind it is connected to it via a switch: 2800:40:403::7/48. The wan interface on the server can ping6 www.google.com without issues. The lan interface on the server and the client can mutually ping each other without issues (as well as SSH, etc.). I've tried setting the server as the default gateway for the client, with no luck:

        client # route -A inet6 add default gw 2800:40:403::3 dev eth1
        server # cat /proc/sys/net/ipv6/conf/all/forwarding
        1

    I don't want any filtering/firewalling/etc., just plain routing. Thanks.
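
    A hedged observation worth testing: with the whole /48 configured on both the wan and lan interfaces, both sides consider the entire prefix on-link, which tends to defeat plain routing. One common layout is to split the block into per-link /64s (a sketch; the subnet choices are placeholders, and the upstream provider must route the /48 toward the wan address):

        # server
        ip -6 addr add 2800:40:403:1::1/64 dev eth0    # wan side
        ip -6 addr add 2800:40:403:2::1/64 dev eth1    # lan side
        sysctl -w net.ipv6.conf.all.forwarding=1

        # client
        ip -6 addr add 2800:40:403:2::7/64 dev eth1
        ip -6 route add default via 2800:40:403:2::1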

  • Exchange 2003 inbound routing issue

    - by user565712
    Just recently we started experiencing inbound routing issues. Email addressed to [email protected] is intermittently translated to [email protected]. This is happening for several users and, as stated, is intermittent. I don't know where to start looking for the solution. Is this an Exchange issue? A DNS issue? We have a single Exchange server inside our network with an FQDN of server.domain.local and a single SMTP Virtual Server. The Advanced properties of the Delivery tab of the virtual server have an empty Masquerade Domain textbox, and the value for the FQDN textbox is set to the domain itself, domain.com. The DNS record for domain.com is a CNAME entry referencing www.domain.com. Is this somehow related to the problem? I checked the headers of the inbound messages that generated NDRs as a result of being sent to [email protected], and nowhere in the header is www.domain.com mentioned. To make my life even more difficult, we use Postini as a third-party SPAM filtering service. Our MX records point to the Postini servers, and Postini delivers the messages to our server. Perhaps it is Postini that is mucking things up? (sigh) I'm having trouble with this one, and the intermittent aspect is making it that much more difficult for me. Any ideas?
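
    For what it's worth, the DNS side can be eyeballed quickly (a sketch; domain.com stands in for the real domain). It may also be relevant that a CNAME coexisting with MX records at the same name is not valid DNS, so the apex CNAME itself is worth a look:

        nslookup -type=MX domain.com
        nslookup -type=CNAME domain.com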

  • iptables rules for DNS/Transparent proxy with ip exceptions

    - by SlimSCSI
    I am running a router (a Netgear WNDR3700, if that matters) with DD-WRT. For content filtering I am using OpenDNS. I wanted to make sure a user could not bypass OpenDNS by putting in their own name servers, so I have a rule to catch all DNS traffic:

        iptables -t nat -A PREROUTING -i br0 -p all --dport 53 -j DNAT --to $LAN_IP

    I did have one computer on the network that I wanted to allow past the OpenDNS filters. On that machine I manually set the name servers, and created another rule to allow it to pass:

        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT

    This worked well. Today, I installed a transparent proxy (Squid) on the router and added these rules:

        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    This also works; however, the 192.168.1.2 address does not get routed through Squid. How can I have 192.168.1.2 (and maybe others in the future) bypass the port 53 rules, but not the port 80 rules?
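
    A hedged sketch of one way to do that: narrow the exception so it only matches DNS, leaving web traffic from that host to fall through to the Squid redirect:

        # drop the blanket exception...
        iptables -t nat -D PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT
        # ...and re-add it scoped to port 53 only (UDP and TCP)
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p udp --dport 53 -j ACCEPT
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p tcp --dport 53 -j ACCEPT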

  • ip6tables blocking output traffic

    - by jmccrohan
    My OpenVZ VPS is blocking outbound IPv6 traffic, but correctly filtering inbound IPv6 traffic. Below is my ip6tables-restore script:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p ipv6-icmp -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p udp -m udp --dport 1194 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 51413 -j ACCEPT
        -A INPUT -p udp -m udp --dport 51413 -j ACCEPT
        -A INPUT -m limit --limit 5/min
        -A INPUT -j REJECT --reject-with icmp6-adm-prohibited
        -A FORWARD -j ACCEPT
        -A OUTPUT -j ACCEPT
        COMMIT

    ICMPv6 traffic is still able to pass, both inbound and outbound. When I flush these rules using -F, outbound traffic flows fine. What am I missing here?

    EDIT: It appears that ip6tables is marking ESTABLISHED packets as INVALID. Consequently, the outbound traffic is NOT actually being blocked; the reply packets are not being allowed inbound again, which makes it look like blocked outbound traffic. Allowing INVALID packets inbound solves the outbound issue, but also renders the inbound filter useless.
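
    A hedged sketch of a stateless middle ground for when conntrack can't be trusted (IPv6 connection tracking is, as far as I know, often missing or broken on OpenVZ hosts): accept TCP replies by flag instead of by state, so the per-port filtering above still applies to new inbound connections:

        # non-SYN TCP packets belong to established flows (a stateless ruleset
        # can't tell them from junk, but new connections still need SYN);
        # inserted at position 1, before the REJECT rule
        ip6tables -I INPUT 1 -p tcp ! --syn -j ACCEPT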

  • Logical Domain Modeling Made Simple

    - by Knut Vatsendvik
    How can logical domain modeling be made simple and collaborative? Many non-technical end-users, managers and business domain experts find it difficult to understand the visual models offered by many UML tools. This creates trouble in capturing and verifying the information that goes into a logical domain model. The tools are also too advanced and complex for a non-technical user to learn and use. We have therefore, in our current project, ended up using Confluence as the tool for designing the logical domain model, with the help of a few very useful plugins. Big thanks to Ole Nymoen and Per Spilling for their expertise in this field, which made this posting possible.

    Confluence Plugins

    Here is a list of the Confluence plugins used in this solution. Install these before trying out the macros used below.

        Copy Space - Allows a space administrator to copy a space, including the pages within the space
        Metadata   - Supports adding metadata to Wiki pages
        Label      - Manages labeling of pages
        Linking    - Contains macros for linking to templates, the dashboard and other pages
        Table      - Enhances the table capability in Confluence

    Creating a Confluence Space

    First we need to create a new Confluence space for the domain model. Click the link Create a Space located below the list of spaces on the Dashboard. Please contact your Confluence administrator if you do not have permissions to do this.

    For illustrative purposes, all attributes and entities in this posting are based on my imaginary project manager domain model. When a logical domain model is good enough to be implemented, make a copy of the Confluence space (see the Copy Space plugin). In this way you create a stable version of the logical domain model, while further design can continue in the newly copied space. Typically, the implementation phase will result in a database design and/or an XSD schema design.

    Add Space Templates

    Go to the Home page of your Confluence space. Navigate to the Browse drop-down menu and click on Advanced. Then click the Templates option in the left navigation panel. Click Add New Space Template to add the following three templates.

    Name: attribute

        {metadata-list}
        || Name | |
        || Type | |
        || Format | |
        || Description | |
        {metadata-list}
        {add-label:attribute}

    Name: primary-type

        {metadata-list}
        || Name | ||
        || Type | ||
        || Format | ||
        || Description | ||
        {metadata-list}
        {add-label:primary-type}

    Name: complex-type

        {metadata-list}
        || Name | ||
        || Description | ||
        {metadata-list}

        h3. Attributes

        || Name || Type || Format || Description ||
        | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} |
        {add-label:complex-type,entity}

    The metadata-list macro (see the Metadata plugin) will save a list of metadata values to the page. The add-label macro (see the Label plugin) will automatically label the page.

    Primary Types Page

    Our first page will act as a container for our primary types. Switch to Wiki markup when adding the following content to the page.

        | (+) {add-page:template=primary-type|parent=@self}Add new primary type{add-page} |
        {metadata-report:Name,Type,Format,Description|sort=Name|root=@self|pages=@descendents}

    Once the page is created, click Add new primary type (the add-page macro) to start creating new pages. Here is an example of input for the LocalDate page. Embrace LocalDate with square brackets [] to make the page linkable. Again, switch to Wiki markup before editing.

        {metadata-list}
        || Name | [LocalDate] ||
        || Type | Date ||
        || Format | YYYY-MM-DD ||
        || Description | Date in local time zone. YYYY = year, MM = month and DD = day ||
        {metadata-list}
        {add-label:primary-type}

    The metadata-report macro will show a tabular report of all child pages.

    Attributes Page

    The next page will act as a container for all of our attributes.

        | (+) {add-page:template=attribute|parent=@self|title=attribute}Add new attribute{add-page} |
        {metadata-report:Name,Type,Format,Description|sort=Name|pages=@descendants}

    Here is an example of input for the startDate page.

        {metadata-list}
        || Name | [startDate] ||
        || Type | [LocalDate] ||
        || Format | {metadata-from:LocalDate|Format} ||
        || Description | The project's start date ||
        {metadata-list}
        {add-label:attribute}

    Using the metadata-from macro, we fetch the text from the previously created LocalDate page.

    Complex Types Page

    The last page in this example shows how attributes can be combined to form more complex types.

        h3. Intro
        Overview of complex types in the domain model.
        | (+) {add-page:template=complex-type|parent=@self}Add a new complex type{add-page}\\ |
        {metadata-report:Name,Description|sort=Name|root=@self|pages=@descendents}

    Here is an example of input for the ProjectType page.

        {metadata-list}
        || Name | [ProjectType] ||
        || Description | Represents a project ||
        {metadata-list}

        h3. Attributes

        || Name || Type || Format || Description ||
        | [projectId] | {metadata-from:projectId|Type} | {metadata-from:projectId|Format} | {metadata-from:projectId|Description} |
        | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} |
        | [description] | {metadata-from:description|Type} | {metadata-from:description|Format} | {metadata-from:description|Description} |
        | [startDate] | {metadata-from:startDate|Type} | {metadata-from:startDate|Format} | {metadata-from:startDate|Description} |
        {add-label:complex-type,entity}

    Gives us this.

    Conclusion

    Using a web-based corporate wiki like Confluence to create a logical domain model increases collaboration between people with different roles in the enterprise. It is my belief that this helps the domain model to be more accurate and better documented. In our real project we have more pages than illustrated here to complete the documentation. We also still use UML tools to create the types of diagrams that Confluence does not support. As a last tip, an ImageMap plugin can make those diagrams clickable when used in pages. Enjoy!

  • Social Business Forum Milano: Day 2

    - by me
    @YourService. The business world has flipped and small business can capitalize
    by Frank Eliason (Twitter: @FrankEliason)

    Technology and social media tools have made it easier than ever for companies to communicate with consumers. They can listen and join in on conversations, solve problems, get instant feedback about their products and services, and more. So why, then, are most companies not doing this? Instead, it seems as if customer service is at an all-time low, and the few companies who are choosing to focus on their customers are experiencing a great competitive advantage. At Your Service explains the importance of refocusing your business on your customers and your employees, and just how to do it. It:

        - Explains how to create a culture of empowered employees who understand the value of a great customer experience
        - Advises on the need to communicate that experience to their customers and potential customers

    Frank Eliason, recognized by BusinessWeek as the 'most famous customer service manager in the US, possibly in the world,' has built a reputation for helping large businesses improve the way they connect with customers and enhance their relationships.

    Quotes from the audience:

        Bertrand Duperrin (@bduperrin): social service is not about shutting up the loudest customers! #sbf12 @frankeliason
        Paolo Pelloni (@paolopelloni) and Gautam Ghosh (@GautamGhosh): RT @cecildijoux: #sbf12 @frankeliason you need to change things and fix the approach; it's not about social media, it's about driving change
        Peter H. Reiser (@peterreiser): #sbf12 Company Experience = Product Experience + Customer Interactions + Employee Experience @yourservice

    Engage or lose! Socialize, mobilize, conversify: engage your employees to improve business performance
    by Christian Finn (Twitter: @cfinn)

    First, Christian presented the flying monkey. Then he outlined the four principles for fixing the intranet:

        1. Socialize the intranet
        2. Get thee to a single repository
        3. Mobilize the intranet
        4. Conversationalize your processes

    Quotes from the audience:

        Oscar Berg (@oscarberg): Engaged employees think their work brings out the best of their ideas @cfinn #sbf12 http://pic.twitter.com/68eddp48
        John Stepper (@johnstepper): I like @cfinn's "conversify your processes". A nice related concept to "narrating your work", part of working out loud. http://johnstepper.com/2012/05/26/working-out-loud-your-personal-content-strategy/
        Oscar Berg (@oscarberg): Organizations are talent markets - socializing your intranet makes this market function better @cfinn #sbf12

    For profit, productivity, and personal benefit: creating a collaborative culture at Deutsche Bank
    by John Stepper (Twitter: @johnstepper)

    Driving adoption of collaboration and social media platforms at Deutsche Bank. John shared some great best practices on how to deploy an enterprise-wide community model in a large company. He started with the most important question: what is the commercial value of adding social? Then he talked about the success of the Communities of Practice deployment and outlined some key use cases, including the relevant measures to prove the ROI of the investment. Examples:

        - Community of practice -> measure: systematic collection of value stories
        - Self-service website -> measure: based on representative models
        - Optimizing asset inventory -> measure: actual counts

    This last use case was particularly interesting. It is a crowd-sourced infrastructure spending/saving model: users can cancel IT services they don't need (as an example, Software xx), and 5% of the savings goes to social responsibility projects.

    John then outlined some best practices for addressing the WIIFM (What's In It For Me) question of individual users:

        - change from hierarchy to graph
        - working out loud = observable work + narrating your work
        - add social skills to career objectives; for example, building a purposeful social network course/training as part of the job development curriculum

    And last but not least, John gave some important tips on how to get senior management buy-in: establish management-sponsored, division-level collaboration boards that define clear use cases and measures. These divisional use cases are then implemented on a common social platform. Thanks John - I learned a lot from your presentation!

    Quotes from the audience:

        Ana Silva (@AnaDataGirl): #sbf12 what's in it for individuals at Deutsche Bank? Shaping their reputations in a big org says @johnstepper #e20
        Ana Silva (@AnaDataGirl): Any reason why not? MT @magatorlibero #sbf12 is Deutsche B. experience on applying social inside company applicable to Italian people?
        Oscar Berg (@oscarberg): Your career is not a ladder, it is a network that opens up opportunities - @johnstepper #sbf12
        Oscar Berg (@oscarberg): @johnstepper: Institutionalizing collaboration is next - collaboration woven into the fabric of daily work #sbf12
        Ana Silva (@AnaDataGirl): #sbf12 @johnstepper talking about how Deutsche Bank is using #socbiz to build purposeful CoP & save money

  • Messaging Systems – Handshaking, Reconciliation and Tracking for Data Transparency

    - by Ahsan Alam
    As many corporations build business partnerships with other organizations, the need to share information becomes necessary. Sharing large amounts of data using snail mail, email and/or fax is quickly becoming a thing of the past. More and more organizations are relying heavily on FTP and/or web services to exchange data. Corporations apply a wide range of technologies and techniques based on available resources and data transfer needs. Sometimes it involves simple home-grown applications; other times, large investments are made in products like BizTalk, TIBCO etc. The complexity of information management also varies significantly from one organization to another. Some may deal with a handful of simple steps to process and manage shared data, whereas others may rely on fairly complex processes, with heavy interaction with internal and external systems, in order to serve the business needs. It is not surprising that many of these systems end up becoming black boxes over a period of time. Consequently, people and the business start to rely more and more on developers and support personnel just to extract simple information, adding to the loss of productivity.

    One of the most important factors in any business is transparency of data, irrespective of technology preferences and the complexity of business processes. Not knowing the state of data can become very costly to the business. Having been involved in messaging systems for some time now, I have heard the same types of questions over and over again. Did we transmit messages successfully? Did we get responses back? What is the expected turn-around time? Did the system experience any errors? When one company transmits data to one or more companies, it may invoke a set of processes that could complete in a matter of seconds, or could take days. As data travels from one organization to another, the uncertainty grows, and the longer it takes to track the uncertain state of the data, the costlier it gets for the business. So, in every business scenario, it's extremely important to be aware of the state of the data.

    Architects of messaging systems can take several steps to aid data transparency. Some form of data handshaking and reconciliation mechanism, as well as extensive data tracking, can be incorporated into the system to provide clear visibility into the data. What do I mean by handshaking and reconciliation? Some might consider these to be a single concept; however, I like to consider them as two distinct categories. Handshaking serves as message receipts or acknowledgments. When one party transmits messages to another, the receiver must acknowledge each message by sending an immediate response for each transaction. Whenever we use web services, handshaking is often achieved using a request/reply pattern. Similarly, if FTP is used, a receiver can acknowledge by dropping messages for the sender as soon as the files are picked up. These forms of handshaking or acknowledgment inform the message sender and receiver that a successful transaction has occurred.

    I mentioned earlier that it can take anywhere from a few seconds to a number of days before shared data is completely processed. In addition, whenever a batched transaction is used, the processing time for each data element inside the batch can also vary significantly. So, in order to successfully manage data processing, reconciliation becomes extremely important; otherwise it may result in data loss or, in some cases, a hefty penalty. Reconciliation can be done in many ways. Partner organizations can share and compare ad hoc reports to achieve reconciliation. On the other hand, partners can agree on some type of systematic reconciliation messages: systems within the responsible parties can trigger messages to partners as soon as the data processing completes.

    The next step in data transparency is extensive data tracking. Some products, such as BizTalk and TIBCO, provide built-in functionality for data tracking; however, built-in functionality may not always be adequate. Sometimes an additional tracking system (or database) needs to be built in order to monitor all types of data flow, including message transactions, handshaking, reconciliation, system errors and many more. If these types of data are captured, then they can be presented to business users in any form or fashion. When business users are empowered with such information, the reliance on developers and support teams decreases dramatically.

    In today's collaborative world of information sharing, data transparency is key to the success of every business. The state of business data will constantly change. However, when people have easier access to the various states of data, it allows them to make better and quicker decisions. Therefore, I feel that data handshaking, reconciliation and tracking are very important aspects of messaging systems.
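
    To make the handshaking idea concrete, a minimal sketch of what an acknowledgment receipt might carry (the field names are illustrative, not from any particular product or standard):

        {
          "messageId": "0f3a9c12-7b44-4e21-9d0a-52e1c8b7f9aa",
          "receivedAt": "2012-10-15T13:57:24Z",
          "status": "ACCEPTED",
          "batchSize": 250,
          "reconcileBy": "2012-10-16T13:57:24Z"
        }

    A matching reconciliation message would later report, per element of the batch, whether processing completed, so both parties can compare counts rather than trade ad hoc reports.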
