Search Results

Search found 22298 results on 892 pages for 'default'.

  • Authentication required by wireless network.

    - by Roman
    I would like to use a wireless network from Ubuntu. In the network drop-down menu I select a network (this is a university network; I have an account there). I then get a window with the following fields:

      Wireless Security:    [WPA & WPA2 Enterprise]
      Authentication:       [Tunneled TLS]
      Anonymous Identity:   []
      CA Certificate:       [(None)]
      Inner Authentication: [some letters]
      User Name:            []
      Password:             []

    I enter my user name and password, leave the other fields at their default values, and leave "Anonymous Identity" blank. As a result I get "Authentication required by wireless network". How can I solve this problem?
    I think it is important to note that our system administrator tried to find the files which are probably needed as the "CA Certificate". He said he does not know where such a file is located on Ubuntu (he supports only Windows). So this is probably the direction I need to go: I need to find this file. But maybe I am wrong and something else needs to be done. Could you please help me with that?
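
    For reference, a minimal sketch of creating such a connection from the command line with nmcli. This assumes a reasonably recent NetworkManager; the SSID, interface name, user name and the MSCHAPv2 inner authentication are placeholders, and pointing the CA certificate at Ubuntu's system bundle is an assumption to try, not a confirmed fix:

      # Hypothetical SSID/interface/user - substitute the real values.
      # /etc/ssl/certs/ca-certificates.crt is Ubuntu's system CA bundle.
      nmcli connection add type wifi ifname wlan0 con-name uni-wifi \
          ssid "UniversitySSID" \
          wifi-sec.key-mgmt wpa-eap \
          802-1x.eap ttls \
          802-1x.phase2-auth mschapv2 \
          802-1x.identity "your-username" \
          802-1x.ca-cert /etc/ssl/certs/ca-certificates.crt
      nmcli connection up uni-wifi --ask   # prompts for the password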

  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this, but have found nothing that answers my question. I have a system with 6 GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the dom0_mem kernel parameter in /etc/default/grub as follows:

      GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M"

    I've tried variations of that, with the same result. My question is this: with the above set, the dom0 reports in all applications a RAM amount of 422 MB. cat /proc/meminfo gives the following:

      $ cat /proc/meminfo
      MemTotal:         432472 kB
      MemFree:           54144 kB
      Buffers:           17640 kB
      Cached:           220104 kB
      SwapCached:        30172 kB
      Active:           136500 kB
      Inactive:         167780 kB
      Active(anon):       6156 kB
      Inactive(anon):    60516 kB
      Active(file):     130344 kB
      Inactive(file):   107264 kB
      Unevictable:          52 kB
      Mlocked:              52 kB
      SwapTotal:       1794044 kB
      SwapFree:        1682012 kB
      Dirty:                 0 kB
      Writeback:             0 kB
      AnonPages:         39572 kB
      Mapped:             8048 kB
      Shmem:               136 kB
      Slab:              44324 kB
      SReclaimable:      22012 kB
      SUnreclaim:        22312 kB
      KernelStack:        1280 kB
      PageTables:         3840 kB
      NFS_Unstable:          0 kB
      Bounce:                0 kB
      WritebackTmp:          0 kB
      CommitLimit:     2010280 kB
      Committed_AS:     329192 kB
      VmallocTotal:   34359738367 kB
      VmallocUsed:      313988 kB
      VmallocChunk:   34359417340 kB
      HardwareCorrupted:     0 kB
      AnonHugePages:         0 kB
      HugePages_Total:       0
      HugePages_Free:        0
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       2048 kB
      DirectMap4k:      524696 kB
      DirectMap2M:           0 kB

    top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because the onboard graphics were borrowing some memory, but I have now switched to a dedicated GPU and it persists. Is this normal behavior, or has something gone amiss? It's just about 100 MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take that much relative to the amount of RAM available?
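
    One way to narrow this down, as a sketch (assuming the xl toolstack; older setups would use xm): compare how much memory Xen has actually assigned to dom0 with how much the dom0 kernel reserved for itself at boot - the gap between the two is typically kernel text/data and boot-time reservations rather than memory that went anywhere:

      xl list                 # Xen's view: memory currently assigned to Domain-0
      dmesg | grep -i memory  # kernel's view: "available" vs. "reserved" at boot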

  • Is there a way to know what the Windows Disk Cleanup utility will delete?

    - by Cam Jackson
    When I run the Disk Cleanup utility that's built into Windows 8, it tells me that it can free up 53 GB by deleting 'Temporary Files'. However, a CCleaner analysis on default settings only finds about 300 MB worth of space to free up, so I'm wondering what Disk Cleanup has found that CCleaner does not.
    Note that this question appears to be similar to what I'm asking, but the accepted answer there says that 'Temporary Files' refers to %TEMP%. I've already cleared out most of C:\Users\Cam\AppData\Local\Temp, and it now has only 230 MB of stuff in it, even with system files showing. So where is this 53 GB located? Is there a way to find out what it is?
    Edit: I should note that this is on a 110 GB SSD, so it's almost half the drive. In fact I'm only using 86 GB, so if it's really going to clear out 53 GB, that would be more than 60% of the stuff on my C drive. I'm starting to think that Disk Cleanup caches its analysis and hasn't updated since I started cleaning up the drive earlier today - although when I run it, it says that it's 'Calculating' how much space can be saved, and takes about 5-10 seconds to do so. Hmmm...
    Edit 2: Here is what my hard drive looks like, according to SpaceMonger: [screenshot]. You can see why I was starting to think that the 53 GB figure is actually wrong. Even if 'Temporary Files' includes my hiberfil and everything in WinSxS (about 13 GB total), that would be 26 GB, which is only halfway there. It's hard to see where there's 53 GB of stuff to delete.

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem. They suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

      #!/bin/bash
      # linkcheck.sh
      while read domain
      do
        host "$domain"
      done

    Assume that I want to pipe in a large list (say 250 MB):

      cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up I want to break up the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

      cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line by line. Assume that my system's default setup is capable of running around 500 jobs per instance of parallel. To get round this limitation we can parallelize parallel itself:

      cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run around 5,000 jobs. It will also, sadly, cause the "broken pipe" error (see the bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

      #!/bin/bash
      # linkchecker.sh
      domain="$1"
      host "$1"

    Why won't it work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
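
    For what it's worth, a minimal sketch of the SIGPIPE experiment being asked about - ignoring the signal inside the loop script. This only illustrates the mechanism; whether it is actually safe is exactly the open question:

      #!/bin/bash
      # linkcheck.sh - same loop, but ignore SIGPIPE so bash is not
      # killed with "broken pipe" when a downstream reader exits early;
      # writes then fail with EPIPE errors instead.
      trap '' PIPE
      while read domain
      do
        host "$domain"
      done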

  • D-LINK 2540U DSL router: Port forwarding forwards to the modem itself, not to the specified IP

    - by axk
    I found a similar question but it has no satisfactory answers. I have a D-LINK 2540U DSL router. It has a basic port forwarding configuration in the administration panel (under DNS - Virtual Servers) where you specify external port range, protocol, internal port range and server IP address, and it is supposed to forward that port to that IP address.
    When I first set it up for a RealVNC connection it worked fine, just as I expected. Then I added a DynDNS configuration entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, but I suppose that makes no difference as far as SSH is concerned). Then I removed the SSH rule, and after that the VNC forwarding stopped working: the VNC client failed to connect. (I also tried to connect with telnet and it failed too, so it wasn't a VNC problem.)
    After adding a rule for port 80 it turned out the router would forward on port 80, though not to the specified server IP but to the modem itself. At least that is what it looks like, because I get the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel).
    Has anybody encountered a similar problem with port forwarding? I have tried to do a reset through the administration panel and to restore a backup of the settings made before I started playing with port forwarding, but it didn't help. Should I do a 'hard' reset with the button on the modem? Is that any different from the administration panel's reset (Restore default)?

  • Mystery undeletable file

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf.

      - It's not read-only, system or hidden.
      - It's not in use by another process (according to Process Explorer).
      - The NT security permissions aren't the problem either - I am the owner and have Full Control; as a double-check, the Effective Permissions tab shows that I have permission to delete.

    Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can, however, rename it or move it to another folder on the same drive. I can also read it, and VirusTotal says it's clean, which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL, I think). The relevant line from Process Monitor is:

      6:52:14.3726983 PM  112  Explorer.EXE  SetDispositionInformationFile  C:\Config.Msi\75ce84f.rbf  CANNOT DELETE  Delete: True  Write  1232

    Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit (there seems to be no UI to do it otherwise?). So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient, I declined the offer and ran pendmoves to find out what files the installer had scheduled to move or delete. It wanted to delete two files with the .rbf extension (rollback files) located in C:\Config.Msi\ (this applies to both even though I've been speaking about one). So I tried to delete them manually and couldn't.
    Does anyone have any ideas what could be preventing deletion? (And I don't think it's malware, even though I'm not running AV at the moment.)

  • How to create custom content for the nginx 502 error page, keeping the original URL in the browser

    - by user123862
    I'm trying to serve a custom 502 error page from nginx, with my own language and message, while keeping the original URL in the browser - so far without success.
    For example: I go to the URL xaluan.com/aaa/bbb.html while the backend server is down. nginx should show the 502 error, with the same URL in the address bar but with my custom message and language.

    Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site still shows the default nginx error page at domain.com/502.html (the content of the page is not the one I created):

      error_page 502 /502.html;
      location = /502.html {
        root /usr/local/nginx/html;
      }

    Test 2: Then I created the same page in my www domain folder, /home/xaluano/public_html/502.html, but this keeps redirecting me to domain.com/502.html. The content is now the one I created, but the URL is still not what I need:

      error_page 502 /502.html;
      location = /502.html {
        root /home/xaluano/public_html;
        internal;
      }

    EDIT/UPDATE for more detail, 10/06/2012: please see my nginx config at http://pastebin.com/7iLD6WQq and the vhost config at http://pastebin.com/ZZ91KiY6
    The test case: if the Apache httpd service is stopped (service httpd stop) and I open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456, I should see the 502 error with that same URL in the browser address bar.
    The custom error page: I need a config which, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page with the same URL (the refresh I can do easily with JavaScript). As long as nginx doesn't change the URL, the JavaScript can work it out. Any help will be great.. thanks in advance.
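
    A quick way to tell whether nginx is answering with a redirect (which changes the URL in the browser) or serving the error page in place, as a sketch using curl against the URL from the test case above:

      # A non-empty redirect= means nginx sent a 3xx Location header and the
      # browser's address bar would change; status=502 with an empty redirect=
      # means the custom page was served internally under the original URL.
      curl -s -o /dev/null -w 'status=%{http_code} redirect=%{redirect_url}\n' \
          "http://xaluan.com/modules.php?name=News&file=article&sid=123456"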

  • Google Chrome mouse wheel takes keyboard focus

    - by Steve Crane
    I recently switched the default browser on all my machines from Firefox to Google Chrome. In general I’m loving Chrome, but there is one behaviour that is driving me nuts: scrolling with the mouse wheel takes keyboard focus away from the document.
    Here’s what happens. I’ve just opened a page or switched to a new tab, and the page has keyboard focus, as I can use the keyboard up/down and PgUp/PgDn keys to scroll it; no problem there. But if I then use the wheel on my mouse to scroll the page, it loses the keyboard focus and no longer responds to up/down, PgUp/PgDn, or in fact any other keyboard keys. I have to click on the page background to restore the keyboard focus.
    This is a minor inconvenience for scrolling, but where it really drives me nuts is in Google Reader and Gmail, where I use keyboard shortcuts a lot. Here I find that I scroll the article or e-mail I’m reading with the mouse wheel, then get no response when I press j/k to move to the next or previous article or e-mail. I am using Windows 7 and the Chrome dev channel (version 4.0.249.43).

  • Citrix Access Gateway with Citrix Receiver

    - by vm370
    I'm currently using a Citrix Access Gateway with firmware 5.0.4 to provide SSL-VPN access to an isolated environment. The connection is made with the Citrix Access Gateway Client, which is delivered by the device by default. However, we have encountered several problems with it: for example, it's somehow not possible to remove it from autostart (it is not registered as a service or in the autostart list?!), and it broke the Cisco VPN Client, which is used in the company and unfortunately cannot be replaced. The Cisco client can only be used again after cleaning all remains of the CAG client out of the registry, which requires a lot of effort.
    Because of that, I'd like to check whether there is an alternative to this client, since it is a real pain... Unfortunately I couldn't find a way to use the Receiver with the CAG yet, but if you have any resources on how to build this workaround, I'd be very happy. Thanks a lot in advance.
    UPDATE: If there are other alternatives I'd be even more happy, since using the Receiver would also mean there is an issue with the ICA Client, which is also used in our environments. From my experience, the Receiver and the ICA Client are no good friends either...

  • How do I add additional parameters to query string of a Firefox Search Plugin?

    - by Goto10
    I have just installed the DuckDuckGo add-on in Firefox 11.0, running on XP SP3. I would like to add additional parameters to the query string, but any changes I make are not reflected in the query string when doing a search.
    I found the duckduckgo.xml file at C:\Documents and Settings\User Name\Application Data\Mozilla\Firefox\Profiles\Profile Name.default\searchplugins. I opened it up with Notepad++ and added the line for kl=uk-en:

      <SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
        <os:ShortName>DuckDuckGo</os:ShortName>
        <os:Description>Search DuckDuckGo (SSL)</os:Description>
        <os:InputEncoding>UTF-8</os:InputEncoding>
        <os:Image width="16" height="16">data:image/x-icon;base64, -Removed to shorten-</os:Image>
        <os:Url type="text/html" method="GET" template="https://duckduckgo.com/">
          <os:Param name="q" value="{searchTerms}"/>
          <os:Param name="kl" value="uk-en"/>
        </os:Url>
      </SearchPlugin>

    However, the kl=uk-en parameter does not appear in the query string when searching (despite several Firefox restarts).

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET project with its own error handling, and a web.config for the domain which contains:

      <customErrors mode="RemoteOnly">
        <error statusCode="500" redirect="/Error"/>
        <error statusCode="404" redirect="/404"/>
      </customErrors>

    In a sub-directory of that domain I am ignoring all routes (by doing routes.IgnoreRoute("Assets/{*pathInfo}"); in the .NET project) and I want to put a custom 404 error page on that directory and any of its sub-directories. The directory contains static content like images, css, js, etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that directory looks like the following:

      <system.webServer>
        <httpErrors>
          <remove statusCode="404" subStatusCode="-1" />
          <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" />
        </httpErrors>
      </system.webServer>

    But when I navigate to an unknown URL under that directory I still see the default IIS 404 page. I am also seeing an alert in IIS that reads: "You have configured detailed error messages to be returned for both local and remote requests. When this option is selected, custom error configuration is not used."
    Does this have anything to do with customErrors mode="RemoteOnly" in the site web.config? I have tried to override customErrors in the sub-directory web.config but nothing changes. Any help would be appreciated. Thanks.

  • JAWStats statspath error on windows

    - by crosenblum
    I have AWStats, which works fine, and JAWStats, which I am trying to get working. I have tried back and forward slashes to get the program to read the dirdata files. I even moved the dirdata folder outside of the Program Files folder, in case it had problems with folder names containing spaces. Here is my config file:

      // core config parameters
      $sConfigDefaultView = "thismonth.all";
      $bConfigChangeSites = true;
      $bConfigUpdateSites = true;
      $sUpdateSiteFilename = "xml_update.php";

      // individual site configuration
      // awstats092012.noname.jumpingcrab.com.txt
      $aConfig["site1"] = array(
        "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
        "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
        "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
        "siteurl"    => "http://yourexample.com",
        "theme"      => "default",
        "fadespeed"  => 250,
        "password"   => "",
        "includes"   => ""
      );

    (Domain names changed to protect the innocent.) Here is the error message:

      An error has occured: No AWStats Log Files Found
      JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
      Is this the correct folder? Is your config name, site1, correct?
      Please refer to the installation instructions for more information.

  • Switchless InfiniBand between two servers on RHEL 6.3

    - by exfizik
    I have two servers running RHEL 6.3, each with a 2-port InfiniBand card:

      >lspci | grep -i infini
      07:00.0 InfiniBand: QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02)

    I'm interested in connecting them directly to each other, bypassing an InfiniBand switch (which I don't have). Quick googling showed that at least in some configurations it's possible. I installed all the Red Hat InfiniBand packages with yum groupinstall "Infiniband Support". However, ibv_devinfo shows that both ports on each card are down, which would indicate that the cables are not connected. But the cable is connected, although the LEDs are off on the cards (not a good sign). Another source of confusion for me is that, according to this, Red Hat doesn't ship the OFED packages, and I'm slightly hesitant to install them from source due to the lack of Red Hat support for them...
    So where am I going with this? The questions I have are:

      1. Is it possible to have a switchless/direct InfiniBand connection between two servers the way I described above?
      2. If it's possible, do I have to use the OFED packages, or can I configure everything with just the packages that come with RHEL?
      3. Why are the LEDs off on my servers even though the cable is connected?

    Any additional input/advice/pointers would be appreciated.
    P.S. I followed this guide for installation instructions. The InfiniBand cards are clearly recognized by my OS and the rdma service is running.
    Update: I have opensm installed. When I run it, it says:

      OpenSM 3.3.13
      Command Line Arguments:
        Log File: /var/log/opensm.log
      -------------------------------------------------
      OpenSM 3.3.13
      Entering DISCOVERING state
      Using default GUID 0x1175000076e4c8
      SM port is down

    and stays at that point.
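
    As a starting point for debugging, a sketch of checking the link from each side (ibstat and ibv_devinfo come from the infiniband-diags and libibverbs-utils packages in the Infiniband Support group; note that even a back-to-back link needs a subnet manager running on exactly one of the two hosts):

      # Look for "State: Active" and "Physical state: LinkUp".
      # "Polling" as the physical state means no peer is seen on the wire,
      # which would point at the cable or the ports.
      ibstat
      ibv_devinfo | grep -E 'port:|state'
      # On one host only, start the subnet manager
      service opensm start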

  • EEE PC Keyboard malfunctioning - Ctrl key "sticks" after 10 seconds

    - by DWilliams
    I was given an EEE PC belonging to a friend of a friend to fix. The keyboard did not appear to work at all. I spent a while testing various things, blowing the keyboard out, checking for damage, and so on. Nothing appeared to be physically wrong.
    At first I noticed that the keyboard appeared to work just fine for 10 seconds (on average - sometimes more, sometimes less) after being powered on. It had been restored to the factory default Xandros installation with no user set up, so I couldn't get in to mess with things, since I couldn't type to make a user. I made an Ubuntu live USB to boot it from, and managed to get the boot order changed to boot from USB in the ~10 seconds of working keyboard I had (I don't think I've ever had to rush around BIOS menus that quickly).
    After I got Ubuntu up on it, I played around a bit more and determined that apparently the Ctrl key is stuck down (not literally, but it's on all the time). If I open gedit, pressing the "o" key brings up the open dialog, "s" opens the save dialog, and all other behaviour is what you would expect if you were holding down the Ctrl key. The only exception I noticed is the "9" and "0" keys: they function normally. Having figured that out, I made a Xandros user with a name/password consisting of 9's and 0's. I couldn't find any options in Xandros that could potentially be helpful.
    I'm not familiar with EEE PCs. Is it safe to assume that the keyboard is simply dead, or could there be another problem? I don't want to purchase another keyboard for him if that isn't going to fix the problem. The netbook doesn't show any obvious signs of damage, but the owner is a biker and very often has it with him on the road, so it's been subjected to a good bit of vibration.
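
    One way to confirm a phantom Ctrl from the Ubuntu live session, as a sketch: xev reports the modifier state with every key event, so a letter key showing state 0x4 means X believes Control is held down even when nothing is touching it:

      # Move the pointer into the xev window and press any letter key.
      # "state 0x0" = no modifiers; "state 0x4" = Control held down.
      xev | grep -A2 KeyPress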

  • On Debian, lighttpd and Apache2 sharing port 80: lighttpd throws an "address already in use" error

    - by user1960581
    I bought a linode (linode.com) server the other day. I've been trying to run lighttpd and apache2 on the same port, using lighttpd for static files. As Linode provides only ONE IPv4 address, I tried to bind lighttpd to the IPv6 address. That's where I get the same error each and every time:

      can't bind to port [ipv6] 80 Address already in use

    When I tried binding to the IPv4 address instead, everything worked. Please help me; this has been driving me nuts for the last two days. My lighttpd.conf file (the IPv6 address isn't the real one):

      server.modules = (
        "mod_access",
        "mod_alias",
        "mod_compress",
        "mod_redirect",
      # "mod_rewrite",
      )

      server.document-root = "/var/www"
      server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
      server.errorlog = "/var/log/lighttpd/error.log"
      server.pid-file = "/var/run/lighttpd.pid"
      server.username = "www-data"
      server.groupname = "www-data"
      server.port = 80
      server.bind = "2600:3c02::0000"
      server.use-ipv6 = "enable"
      #server.pid-file = "/var/run/lighttpd.pid"

      index-file.names = ( "index.php", "index.html", "index.lighttpd.html" )
      url.access-deny = ( "~", ".inc" )
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

      compress.cache-dir = "/var/cache/lighttpd/compress/"
      compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )

      # default listening port for IPv6 falls back to the IPv4 port
      #include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
      include_shell "/usr/share/lighttpd/create-mime.assign.pl"
      include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

      ### ipv6 ###
      $SERVER["socket"] == "[2600:3c02::0000]:80" {
      #  accesslog.filename = "var/log/lighttpd/ipv6/access.log"
      #  server.document-root = "/var/www/"
      #  server.error-handler-404 = "/index.php?error=404"
      }

    And the error message:

      can't bind to port, 2600:3c02::0000 Address already in use
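
    Two checks that usually pin this down, as a sketch. The likely suspect (an assumption to verify, not something established above) is that Apache's default "Listen 80" on Debian binds the dual-stack wildcard [::]:80, which already covers every IPv6 address, so lighttpd's bind to a specific IPv6 address on port 80 fails:

      # Who already holds port 80? Look for apache2 on :::80
      netstat -tlnp | grep ':80'
      # 0 means IPv6 wildcard sockets also accept IPv4 (dual-stack behaviour)
      sysctl net.ipv6.bindv6only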

  • Can't view other computers on Network

    - by Darkart
    Systems:

      My machine:   Windows 7 Ultimate, connected through ethernet to the router
      Her machine:  Windows 7 Ultimate, connected through wireless (the "other machine" below)
      Router:       F5D8236-4 N Wireless Router, Version 1, Firmware 2.01.03 (Apr 28 2009)
      ISP:          Comcast

    Problem: I cannot view the "other machine" on the network at all. I opened a command prompt and ran net view and saw the PC name. I tried pinging the PC and it times out. I went inside the router and tried viewing the computer in the DHCP list, and it cannot be seen. I restored the router back to default settings and firmware, completely reset the modem and router, and created a homegroup. I went to the other machine to configure its homegroup settings and made sure that both PCs had identical settings. She was able to see my machine, but I could not see hers. I restarted both machines, and now we can't see each other at all. Also, her PC (the "other machine") had an exclamation mark on the wireless icon but was connected just fine. There are no firewalls on and no anti-virus enabled, and we still cannot see each other.
    Right now I am checking for updated drivers for the wireless card, but my question is: could it be the router, or something hardware related? I have gone through all the settings in the homegroup and visited most FAQs, and still no luck. Also, as it stands, I cannot view her machine in the router's DHCP Client List :(

  • Different routing rules for a particular user using firewall mark and ip rule

    - by Paul Crowley
    Running Ubuntu 12.10 on amd64. I'm trying to set up different routing rules for a particular user. I understand that the right way to do this is to create a firewall rule that marks the packets for that user, and add a routing rule for that mark. Just to get testing going, I've added a rule that discards all marked packets as unreachable:

      # ip rule
      0:     from all lookup local
      32765: from all fwmark 0x1 unreachable
      32766: from all lookup main
      32767: from all lookup default

    With this rule in place and all firewall chains in all tables empty and policy ACCEPT, I can still ping remote hosts just fine as any user. If I then add a rule to mark all packets and try to ping Google, it fails as expected:

      # iptables -t mangle -F OUTPUT
      # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01
      # ping www.google.com
      ping: unknown host www.google.com

    If I restrict this rule to the VPN user, it seems to have no effect:

      # iptables -t mangle -F OUTPUT
      # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01 -m owner --uid-owner vpn
      # sudo -u vpn ping www.google.com
      PING www.google.com (173.194.78.103) 56(84) bytes of data.
      64 bytes from wg-in-f103.1e100.net (173.194.78.103): icmp_req=1 ttl=50 time=36.6 ms

    But it appears that the mark is being set, because if I add a rule to drop these packets in the firewall, it works:

      # iptables -t mangle -A OUTPUT -j DROP -m mark --mark 0x01
      # sudo -u vpn ping www.google.com
      ping: unknown host www.google.com

    What am I missing? Thanks!
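
    For debugging, a minimal sketch (using the same mark and chains as above) that shows whether the owner-matched rule ever fires and which route the kernel would pick for a marked packet:

      # Per-rule packet counters: if the owner-matched MARK rule stays at
      # zero while "sudo -u vpn ping" runs, the match is never hit
      iptables -t mangle -L OUTPUT -v -n
      # What policy routing would do with a marked packet
      ip rule show
      ip route get 8.8.8.8 mark 0x1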

  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out.
    The bash script that sets up ALL the iptables rules:

      # Remove all existing rules
      iptables -F

      # Set default chain policies
      iptables -P INPUT DROP
      iptables -P FORWARD DROP
      iptables -P OUTPUT DROP

      # Allow incoming SSH
      iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

      # Allow incoming Samba
      iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT

      # Enable these rules
      service iptables restart

    The iptables rule list after running the above script:

      [root@repoman ~]# iptables -L
      Chain INPUT (policy DROP)
      target     prot opt source     destination
      ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:22222 state NEW,ESTABLISHED

      Chain FORWARD (policy DROP)
      target     prot opt source     destination

      Chain OUTPUT (policy DROP)
      target     prot opt source     destination
      ACCEPT     tcp  --  anywhere   anywhere     tcp spt:22222 state ESTABLISHED

    Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the IP address range 10.1.1.12 - 10.1.1.19. Can you guys offer some pointers or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
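
    On the narrower range, a sketch using iptables' iprange match (shipped with stock CentOS 6), shown for the TCP 139 pair on the assumption the same pattern would be repeated for the UDP ports:

      # Match an arbitrary start-end range instead of a CIDR block
      iptables -A INPUT  -i eth0 -m iprange --src-range 10.1.1.12-10.1.1.19 \
          -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -m iprange --dst-range 10.1.1.12-10.1.1.19 \
          -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT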

  • Weird routing issue

    - by Joel Coel
    I'm having some weird internet problems on campus. I know it's something simple, but it's a case where I need another set of eyes. I think I can explain the problem best by posting a tracert:

      Tracing route to google.com [74.125.45.147]
      over a maximum of 30 hops:

        1     3 ms     3 ms     3 ms  192.168.8.1
        2     1 ms     1 ms     1 ms  elissaemily-pc.york.edu [192.168.10.5]
        3     2 ms     2 ms     2 ms  rrcs-76-79-19-33.west.biz.rr.com [76.79.19.33]
        4    31 ms     3 ms     2 ms  ge-1-1-0.lnclne00-mx41.neb.rr.com [76.85.220.109]
        5    20 ms    17 ms    17 ms  ge-7-3-0.chcgill3-rtr1.kc.rr.com [76.85.220.137]
        6    20 ms    20 ms    19 ms  ae-5-0.cr0.chi30.tbone.rr.com [66.109.6.112]
        7    19 ms    19 ms    24 ms  ae-1-0.pr0.chi10.tbone.rr.com [66.109.6.155]
        8    26 ms    24 ms    24 ms  74.125.48.109
        9    23 ms    24 ms    21 ms  216.239.46.246
       10    39 ms    39 ms    55 ms  209.85.242.215
       11    39 ms    39 ms    39 ms  209.85.254.243
       12    39 ms    40 ms    96 ms  209.85.253.145
       13    39 ms    39 ms    39 ms  yx-in-f147.1e100.net [74.125.45.147]

      Trace complete.

    Note the second entry in there. Not only is the host name a student's computer, but the IP address doesn't exist: DHCP shows that host as having a different address, and you can't ping 192.168.10.5. Yet somehow it's routing packets for us (and not very well, either - things are slow right now). The basic network routing table looks like this:

      Destination     Subnet Mask      Gateway
      -----------------------------------------------------
      Default Route   --               10.1.1.5 (our firewall)
      10.0.0.0        255.0.0.0        --
      192.168.8.0     255.255.252.0    --

  • NGINX SSI Not working

    - by Mike Kelly
    I'm having trouble getting SSI to work on nginx. You can see the problem if you hit http://www.bakerycamp.com/test.shtml. Here is the content of that file:

      <!--# echo hi -->

    If you hit this in a browser, you see the SSI directive in the content - so apparently nginx is not interpreting the SSI directive. My nginx config file looks like this:

      server {
        listen 80;
        server_name bakerycamp.com www.bakerycamp.com;
        access_log /var/log/nginx/bakerycamp.access.log;
        index index.html;
        root /home/bakerycamp.com;

        location / {
          ssi on;
        }

        # Deny access to all hidden files and folders
        location ~ /\. {
          access_log off;
          log_not_found off;
          deny all;
        }
      }

    I did not build nginx from source but installed it using apt-get. I assume it has the SSI module (since that is the default), but perhaps not? Should I just bite the bullet and rebuild from source? Is there any way to tell whether the installed nginx supports SSI and my config is just wrong?
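
    Two quick checks, as a sketch (the test file name is hypothetical). The SSI module is compiled into nginx unless the build used --without-http_ssi_module, and echo needs a var= argument to be well-formed - if SSI were active, a malformed directive would normally be replaced by an error string rather than shown verbatim, so seeing it untouched suggests SSI is not being applied at all:

      # If this prints anything, the installed binary was built without SSI
      nginx -V 2>&1 | grep -o 'without-http_ssi_module'
      # Test with a directive that is definitely well-formed;
      # date_local is a variable the SSI module itself provides
      echo '<!--# echo var="date_local" -->' > /home/bakerycamp.com/ssitest.shtml
      curl http://www.bakerycamp.com/ssitest.shtml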

  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error:

      Unable to open http://blah... Cannot download the information you requested.

    The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly).
    This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to not be able to open the URLs, despite having Chrome as my browser. Things I've checked already:

      - The URLs are good; they work fine when pasted manually into Chrome, IE, or Firefox.
      - IE is not set to Work Offline (already found that suggestion on Google).
      - I checked Program Access and Defaults and verified that Chrome is selected.
      - Nothing in the URL requires URL encoding, so it's no goofy issue with encoding.

    I've had reports from some other users now and then about the same problem, but this is the first time I've experienced it myself.

  • Enabling Hyper-V Integration Services Time Sync versus Internet Time Synchronization

    - by cpuguru
    Should I deselect the "Synchronize with an Internet Time Server" checkbox under the VM's "Date and Time - Internet Time Settings" tab if the "Time Synchronization Service" for a Hyper-V-based virtual machine is enabled?
    One of the Integration Services that Hyper-V provides is the Time Synchronization Service, which can be enabled or disabled under a VM's Settings - Integration Services, in the Management section. I believe this is checked by default. When you install a Windows Server 2008 OS in a VM on the Hyper-V server, it comes with the "Synchronize with an Internet Time Server" option set, pointing to "time.windows.com".
    I'd think that if the parent Hyper-V server is set to one time server and the child VM is pointing to a different time server, there would be a momentary blip if the two are not spot on with their times when the synchronization services run. So the question is: which time sync service should I use? I'm assuming not both. And what is the advantage of one over the other?
    Note: This question assumes that the machines are not joined to a domain. If they were, the machines would also try to update their time against the domain controller holding the primary domain controller role, right? Thanks!

  • Blocking requests from specific IPs using IIS Rewrite module

    - by Thomas Levesque
    I'm trying to block a range of IPs that is sending tons of spam to my blog. I can't use the solution described here because it's shared hosting and I can't change anything in the server configuration. I only have access to a few options in Remote IIS. I see that the URL Rewrite module has an option to block requests, so I tried to use it. My rule is as follows in web.config:

      <rule name="BlockSpam" enabled="true" stopProcessing="true">
        <match url=".*" />
        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
          <add input="{REMOTE_ADDR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" />
        </conditions>
        <action type="CustomResponse" statusCode="403" />
      </rule>

    Unfortunately, if I put it at the end of the rewrite rules, it doesn't seem to block anything... and if I put it at the start of the list, it blocks everything! It looks like the condition isn't taken into account. In the UI, the stopProcessing option is not visible and is true by default. Changing it to false in web.config doesn't seem to have any effect. I'm not sure what to do now... any ideas?

  • Routing multiple static IPs from ISP at the cable modem?

    - by Jakobud
    I'm taking over IT responsibilities from a previous IT guy. We have a 50 Mbps cable modem connection from Comcast along with 5 static IP addresses:

      XXX.XXX.XXX.180
      XXX.XXX.XXX.181
      XXX.XXX.XXX.182
      XXX.XXX.XXX.183
      XXX.XXX.XXX.184

    We are in the process of replacing our firewall machine. Currently the firewall box is the only thing connected to the cable modem; however, the cable modem has multiple ethernet ports on it, similar to a router. I have assembled a new firewall machine and it's time to start testing and configuring it, so it also needs to be plugged into the cable modem (which, remember, has multiple ethernet ports).
    So now, with multiple computers plugged into the cable modem, how does the modem know where to route the traffic? If some request on the internet is made to XXX.XXX.XXX.181, which goes to our cable modem, how does the modem know which connected computer that traffic is supposed to be sent to? Looking at the web interface for the cable modem, there doesn't seem to be anything special set up on it with regards to routing or NATing IP addresses. Is that because when there is only one computer connected to the modem, all traffic is sent to it by default?
    Now that I am going to (temporarily) have multiple computers plugged into the cable modem, do I need to specify routing or NAT rules on the modem itself? I am going to speak to Comcast about this next, but I figured I'd ask here first just so I can get a better grasp on how this type of thing generally plays out.

  • Munin node not listing any plugins on new Fedora 14 installation

    - by Dave Forgac
    I have just installed munin-node from the base repo on Fedora 14 and then started it. I found that my munin server is not able to collect data from this node, so I tried connecting via telnet to test. When connecting via telnet I see that no plugins are listed:

      [dave@host ~]# telnet localhost 4949
      Trying 127.0.0.1...
      Connected to localhost.
      Escape character is '^]'.
      # munin node at host.example.com
      list
      quit
      Connection closed by foreign host.
      [dave@host ~]#

    I did not modify anything after the installation. The munin-node.conf is allowing connections from 127.0.0.1, and the default set of plugins in /etc/munin/plugins/ are symlinked to the plugins in /usr/share/munin/plugins/. Here is what the output of the telnet test of the 'list' command should look like (this is on a Fedora 13 host):

      [dave@www ~]$ telnet localhost 4949
      Trying 127.0.0.1...
      Connected to localhost.
      Escape character is '^]'.
      # munin node at www.example.com
      list
      apache_accesses apache_processes apache_volume cpu df df_inode entropy forks fw_packets if_err_eth0 if_err_eth1 if_eth0 if_eth1 interrupts iostat iostat_ios irqstats load memory munin_stats mysql_ mysql_bytes mysql_innodb mysql_queries mysql_slowqueries mysql_threads netstat open_files open_inodes postfix_mailqueue postfix_mailvolume proc_pri processes swap threads uptime users vmstat yum
      quit
      Connection closed by foreign host.
      [dave@www ~]$

    Edited to show output of munin-node-configure:

      [root@host ~]# munin-node-configure
      Plugin                     | Used | Extra information
      ------                     | ---- | -----------------
      acpi                       | no   |
      amavis                     | no   |
      ...
      http_loadtime              | no   |
      if_                        | yes  | eth1 eth0
      if_err_                    | yes  | eth0 eth1
      ifx_concurrent_sessions_   | no   |
      interrupts                 | yes  |
      ...
      uptime                     | yes  |
      users                      | yes  |
      varnish_                   | no   |
      vserver_resources          | no   |
      yum                        | yes  |
      zimbra_                    | no   |

    Any suggestions on what to check next?
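
    One thing worth trying, as a sketch (this is the stock munin-node-configure workflow; the service name is assumed to match the Fedora package): have munin-node-configure print the symlink commands for the plugins it marked "yes", apply them, and restart the node so it rescans /etc/munin/plugins:

      munin-node-configure --shell        # prints the ln -s commands
      munin-node-configure --shell | sh   # applies them
      service munin-node restart          # node re-reads its plugin dir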
