Search Results

Search found 16078 results on 644 pages for 'oracles social services'.


  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    EDIT: I noticed that this is more appropriate for superuser.com, I apologize. I don't know how to delete this question. I'm using kubuntu jaunty (i386 32bit), kernel 2.6.28-13-generic. I have 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only firefox and wing ide running, and I have 2Gb cached with only 45Mb mapped: $ free total used free shared buffers cached Mem: 3346388 3247328 99060 0 8416 2117980 -/+ buffers/cache: 1120932 2225456 Swap: 2144668 519448 1625220 $ cat /proc/meminfo MemTotal: 3346388 kB MemFree: 97128 kB Buffers: 7872 kB Cached: 2120224 kB SwapCached: 413860 kB Active: 2304596 kB Inactive: 865984 kB Active(anon): 2279168 kB Inactive(anon): 830236 kB Active(file): 25428 kB Inactive(file): 35748 kB Unevictable: 32 kB Mlocked: 32 kB HighTotal: 2492940 kB HighFree: 5456 kB LowTotal: 853448 kB LowFree: 91672 kB SwapTotal: 2144668 kB SwapFree: 1625244 kB Dirty: 84 kB Writeback: 0 kB AnonPages: 629304 kB Mapped: 45768 kB Slab: 45600 kB SReclaimable: 21756 kB SUnreclaim: 23844 kB PageTables: 4468 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 3817860 kB Committed_AS: 3735020 kB VmallocTotal: 122880 kB VmallocUsed: 9352 kB VmallocChunk: 66600 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 888832 kB If I try to drop the caches, little happens: # sync ; echo 3 > /proc/sys/vm/drop_caches ; free total used free shared buffers cached Mem: 3346388 3220580 125808 0 3020 2100600 -/+ buffers/cache: 1116960 2229428 Swap: 2144668 519356 1625312 Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How can I avoid it? TIA, Marco
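
    Editor's note: the numbers above hint that the "cached" figure may not be droppable page cache at all — Mapped is only ~45 MB while Cached is ~2 GB, and drop_caches barely frees anything, which is the pattern you see when the pages belong to tmpfs or shared memory (counted as "cached" by free, but not discardable). A rough diagnostic sketch, not a fix; mount points and values are assumptions:

        df -h -t tmpfs                     # tmpfs filesystems (e.g. /dev/shm) are counted as "cached"
        ipcs -m                            # SysV shared-memory segments also end up in that figure
        sudo sysctl -w vm.swappiness=5     # runtime change, as already tried above
        echo 'vm.swappiness = 5' | sudo tee -a /etc/sysctl.conf   # persist it across reboots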

    Read the article

  • AIX: iscsi volumes disappear after reboot

    - by Dan
    We have an IBM P505 AIX box, with two internal disks and a defined iSCSI volume. The iSCSI volume is defined in its own volume group, and is connected to an IBM iSCSI DS3300 disk array via the secondary onboard ethernet port (ie, we're not using a dedicated HBA, we're using the second onboard ethernet port for iSCSI exclusively.) When we reboot the AIX box, the iSCSI volume doesn't get mounted (which is fine; I've figured out that it fails to mount because AIX tries mounting its volumes before starting the networking stack.) The problem is, after the server has booted it fails to redetect the iSCSI target as a physical disk. This means the volume group (iscsivg) can't go online. If I run cfgmgr -v to redetect the iscsi volume it successfully detects the iscsi target volume and creates a physical volume reference, but allocates it a different volume ID from what was defined before. eg - rootvg contains hdisk 0 and 1 iscsivg was originally defined with hdisk2 as the physical iSCSI volume. After reboot and running cfgmgr -v, AIX detects physical volumes hdisk0, hdisk1 and hdisk3. As there's no hdisk2, I can't varyon the iscsivg volume group. I can't see any existing hdisk2 definition in the ODM. I can't easily add or change the definition of the physical disk in the iscsivg volume group as it won't "varyon". Exporting the volume group deletes it completely, recreating the volume group by "importing" it from the reallocated disk makes it available again, but surely there's a better way? Can I force a specific hdisk drive designation for an iscsi target? How do you bring online iSCSI volumes after a reboot? I assume this "just works" with a dedicated HBA instead of a generic ethernet adapter? By the way, the iSCSI volume works fine once it's mounted; we only have problems getting it working - and only with AIX. The iSCSI array works fine with our Linux and Windows servers; ie the volumes get detected and remounted after boot time without any problems, using generic ethernet adapters. Here's some of the config from the AIX box: defined disks / devices: # lsdev hdisk0 Available 06-08-01-5,0 16 Bit LVD SCSI Disk Drive hdisk1 Available 06-08-01-8,0 16 Bit LVD SCSI Disk Drive hdisk3 Available Other iSCSI Disk Drive iscsi0 Available iSCSI Protocol Device scsi0 Available 06-08-00 PCI-X Dual Channel Ultra320 SCSI Adapter bus scsi1 Available 06-08-01 PCI-X Dual Channel Ultra320 SCSI Adapter bus ses0 Available 06-08-01-15,0 SCSI Enclosure Services Device sisscsia0 Available 06-08 PCI-X Dual Channel Ultra320 SCSI Adapter iscsi target definition in /etc/iscsi/targets: # IBM DS3300 disk array # port 1 on second controller 10.10.xx.xxx 3260 iqn.1992-01.com.lsi:1535.600a0b80005b0a7fxxxxxxxxxxxx physical volumes (after reimporting the volume group) # lspv hdisk0 0003b08a0d4936b6 rootvg active hdisk1 0003b08aaa5cb366 rootvg active hdisk3 0003b08a032d04bb iscsivg active
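
    Editor's note: one workaround people use is to stop depending on the boot-time hdisk number altogether and re-import the volume group from a script that runs after networking is up (added to /etc/inittab with mkitab, for example). This is only a rough, untested sketch that automates the export/import the poster is already doing by hand — the volume-group name is taken from the question, but the mount point and script path are assumptions:

        #!/bin/ksh
        # /etc/rc.iscsivg -- run once, late in boot, after the network is up
        cfgmgr -l iscsi0                                    # rediscover devices behind the iSCSI protocol device
        DISK=$(lsdev -Cc disk | grep "iSCSI Disk" | awk '{print $1}')
        exportvg iscsivg 2>/dev/null                        # drop any stale VG definition from the ODM
        importvg -y iscsivg $DISK                           # re-import under whatever hdisk name it got this boot
        varyonvg iscsivg
        mount /iscsi_data                                   # assumed mount point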

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections

    - by Hammer Bro.
    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes. So here are my basic observations: If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time. Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and router. Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again. The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless. All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT. What I've tried: Making sure each computer is being assigned a distinct IP. (They are.) Disabling UPnP and Stateful Packet Inspection on the router. Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine. Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled). What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. 
The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions. I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be? Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.
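
    Editor's note: the post already suspects the Vista-era TCP auto-tuning / window-scaling behaviour. If that theory is worth testing, these are the usual knobs on the Windows 7 box (run from an elevated command prompt) — a diagnostic sketch only, not a known fix for this particular router:

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set heuristics disabled
        netsh interface teredo set state disabled
        :: revert later with: netsh interface tcp set global autotuninglevel=normal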

    Read the article

  • Trouble connecting a Ubuntu system to IPv6 tunnel over NAT

    - by John Millikin
    I'm trying to set up an IPv6 tunnel, via Hurricane Electric's tunnel-broker service. I've configured my system using their example commands: # $ipv4a = tunnel server's IPv4 IP # $ipv4b = user's IPv4 IP # $ipv6a = tunnel server's side of point-to-point /64 allocation # $ipv6b = user's side of point-to-point /64 allocation ip tunnel add he-ipv6 mode sit remote $ipv4a local $ipv4b ttl 255 ip link set he-ipv6 up ip addr add $ipv6b dev he-ipv6 ip route add ::/0 dev he-ipv6 And have configured my desktop to be in my NAT router's DMZ. The router is running Tomato firmware. But I can't ping any IPv6 services: $ ping6 -I he-ipv6 '2001:470:1f04:454::1' PING 2001:470:1f04:454::1(2001:470:1f04:454::1) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes From 2001:470:1f04:454::2 icmp_seq=1 Destination unreachable: Address unreachable From 2001:470:1f04:454::2 icmp_seq=2 Destination unreachable: Address unreachable I can ping my local address: $ ping6 -I he-ipv6 '2001:470:1f04:454::2' PING 2001:470:1f04:454::2(2001:470:1f04:454::2) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes 64 bytes from 2001:470:1f04:454::2: icmp_seq=1 ttl=64 time=0.037 ms 64 bytes from 2001:470:1f04:454::2: icmp_seq=2 ttl=64 time=0.039 ms I don't know much about routing, but results I found online suggested the output of ip -6 route and ip addr could be useful: $ ip -6 route 2001:470:1f04:454::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295 fe80::/64 dev virbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295 default dev he-ipv6 metric 1024 mtu 1480 advmss 1420 hoplimit 4294967295 $ ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100 link/ether 00:1c:c0:a1:98:b2 brd ff:ff:ff:ff:ff:ff inet 192.168.1.10/24 brd 192.168.1.255 scope global eth1 inet6 fe80::21c:c0ff:fea1:98b2/64 scope link valid_lft forever preferred_lft forever 3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 36:4c:33:ab:0d:c6 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 inet6 fe80::344c:33ff:feab:dc6/64 scope link valid_lft forever preferred_lft forever 4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 00:76:62:6e:65:74 brd ff:ff:ff:ff:ff:ff 5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 7e:29:5e:7c:ba:93 brd ff:ff:ff:ff:ff:ff 6: sit0: <NOARP> mtu 1480 qdisc noop state DOWN link/sit 0.0.0.0 brd 0.0.0.0 7: he-ipv6@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN link/sit 24.130.225.239 peer 72.52.104.74 inet6 2001:470:1f04:454::2/64 scope global valid_lft forever preferred_lft forever inet6 fe80::1882:e1ef/128 scope link valid_lft forever preferred_lft forever
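
    Editor's note: the ip addr output shows the sit tunnel's local endpoint is the public address (24.130.225.239), while eth1 holds the private 192.168.1.10. Behind NAT, the tunnel's "local" endpoint usually has to be the machine's private address so the kernel accepts the de-encapsulated packets, and the router must pass protocol 41 to the DMZ host. A sketch of what to check and try, using the peer address from the output above — not a guaranteed fix:

        # is protocol 41 (IPv6-in-IPv4) traffic actually reaching eth1?
        sudo tcpdump -ni eth1 'ip proto 41'
        # rebuild the tunnel with the private address as the local endpoint
        sudo ip tunnel del he-ipv6
        sudo ip tunnel add he-ipv6 mode sit remote 72.52.104.74 local 192.168.1.10 ttl 255
        sudo ip link set he-ipv6 up
        sudo ip addr add 2001:470:1f04:454::2/64 dev he-ipv6
        sudo ip route add ::/0 dev he-ipv6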

    Read the article

  • Business Owners - What Remote Desktop Solution Do You Use To Service Your Clients' PCs?

    - by Sootah
    Howdy fellow computer geeks, I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways that we can service our clients remotely whenever possible. What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as be robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. Walking them through a manual setup would be unacceptable given the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for) Along with the ease of use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across. So - Are there any equally good products out there? If so what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost? Thanks in advance for your help! -Sootah

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    What do you people think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euros for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services. Amazon doesn't charge for stopped instances. I was wondering: "Should I have known this before, or is Microsoft doing a bad job in clear communication in the pricing model?" Quote from http://www.microsoft.com/windowsazure/pricing/ : Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours. I read this, so I stopped all instances after a few hours playing around. Now it seems I should have deleted them, not just "stopped". Strictly speaking, all depends on the definition of the word "deployed". If you upload an application, but it is not running, can it still be regarded as being "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what this means. Technically speaking, an uploaded application only uses (read: should only use / needs only) a few MB of hard drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for not-running instances.. well, that's Azure's choice, not mine. I don't want to spread a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be more clear about their pricing model or do you think it's clear enough? Second question: has anyone been refunded for a similar case? Thanks in advance! UPDATE 27-01-2011 I sent an email to customer support a few days ago, but I guess that didn't reach any human being because I didn't hear anything back. So, I made a telephone call today to a Dutch customer support representative (I live in Holland). She totally understood the problem and she's trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem. UPDATE 28-01-2011 I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. However, the invoice has not been made yet, and my credit card will first be charged, after which it will be refunded, but hey, that's no problem for me! I'm glad with the way it was resolved! Thanks everybody!

    Read the article

  • How to set up a Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle VirtualBox). Each of them has Cloudera with Hadoop (distribution c3u4) installed and serves as a node of the Hadoop cluster. One of those 4 nodes is the master node running the namenode and jobtracker; the others are slave nodes. Normally I use this cluster from the local network for testing. However, when I try to access it from another network I cannot send any jobs to it. The computer running the Hadoop cluster has a public IP and can be reached over the internet for other services. For example I am able to reach the HDFS (namenode) administration site and the map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from the remote network. It is also possible to use Hue. Ports 8020 and 8021 are both allowed. What is blocking my map/reduce job submissions from reaching the cluster? Is there some setting that I must change first in order to be able to submit map/reduce jobs remotely? Here is my mapred-site.xml file: <configuration> <property> <name>mapred.job.tracker</name> <value>master:8021</value> </property> <!-- Enable Hue plugins --> <property> <name>mapred.jobtracker.plugins</name> <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value> <description>Comma-separated list of jobtracker plug-ins to be activated. </description> </property> <property> <name>jobtracker.thrift.address</name> <value>0.0.0.0:9290</value> </property> </configuration> And this is in /etc/hosts file: 192.168.1.15 master 192.168.1.14 slave1 192.168.1.13 slave2 192.168.1.9 slave3
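
    Editor's note: mapred.job.tracker is set to "master", which only resolves to the private 192.168.1.15 inside that LAN, and the JobTracker may also be bound to that private address — so remote clients can fail to reach it even with port 8021 open. A sketch of how to check and work around it (the public hostname is a placeholder, and the -D overrides only take effect if the job goes through ToolRunner):

        # on the master: which addresses are the NameNode (8020) and JobTracker (8021) actually bound to?
        sudo netstat -tlnp | egrep ':(8020|8021) '

        # from the remote client: try overriding the addresses at submit time
        hadoop jar my-job.jar MyJob \
          -D fs.default.name=hdfs://cluster.example.com:8020 \
          -D mapred.job.tracker=cluster.example.com:8021 \
          input output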

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, those of which make fairly simple (non costly) queries to a PostgreSQL database, and finally return data in JSON format, these services are being monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrong doing from Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and Memory Usage are never out of bounds. I also have something to kill high memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're far and few between, when I was at around 60-100 a second before this trouble seemed to have arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at others time it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
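
    Editor's note: before blaming Unicorn or God, it can help to time the app server directly (bypassing nginx on port 8001, which the config above listens on) and to see what a worker is actually blocked on. A rough sketch; the route path and worker pid are placeholders:

        # time a request straight against Unicorn, skipping nginx
        curl -o /dev/null -s -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://127.0.0.1:8001/some_route
        # attach to one worker and see whether it is waiting on the database, on I/O, or idle
        pgrep -f 'unicorn worker'
        sudo strace -ttT -p <worker_pid> -e trace=network,read,write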

    Read the article

  • Error connecting to Sonicwall L2TP VPN from iPad/iPhone

    - by db2
    A client has a Sonicwall Pro 2040 running SonicOS 3.0, and they'd like to be able to use the L2TP VPN client from their iPads to connect to internal services (Citrix, etc). I've enabled the L2TP VPN server on the Sonicwall, made sure to set AES-128 for phase 2, and set up the configuration on a test iPad with the appropriate username, password, and pre-shared key. When I attempt to connect, I get some rather cryptic error messages in the log on the Sonicwall: 2 03/29/2011 12:25:09.096 IKE Responder: IPSec proposal does not match (Phase 2) [My outbound IP address redacted] (admin) [WAN IP address redacted] 10.10.130.7/32 -> [WAN IP address redacted]/32 3 03/29/2011 12:25:09.096 IKE Responder: Received Quick Mode Request (Phase 2) [My outbound IP address redacted], 61364 (admin) [WAN IP address redacted], 500 4 03/29/2011 12:25:07.048 IKE Responder: IPSec proposal does not match (Phase 2) [My outbound IP address redacted] (admin) [WAN IP address redacted] 10.10.130.7/32 -> [WAN IP address redacted]/32 5 03/29/2011 12:25:07.048 IKE Responder: Received Quick Mode Request (Phase 2) [My outbound IP address redacted], 61364 (admin) [WAN IP address redacted], 500 The console log on the iPad looks like this: Mar 29 13:31:24 Daves-iPad racoon[519] <Info>: [519] INFO: ISAKMP-SA established 10.10.130.7[500]-[WAN IP address redacted][500] spi:5d705eb6c760d709:458fcdf80ee8acde Mar 29 13:31:24 Daves-iPad racoon[519] <Notice>: IPSec Phase1 established (Initiated by me). Mar 29 13:31:24 Daves-iPad kernel[0] <Debug>: launchd[519] Builtin profile: racoon (sandbox) Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] INFO: initiate new phase 2 negotiation: 10.10.130.7[500]<=>[WAN IP address redacted][500] Mar 29 13:31:25 Daves-iPad racoon[519] <Notice>: IPSec Phase2 started (Initiated by me). Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] ERROR: fatal NO-PROPOSAL-CHOSEN notify messsage, phase1 should be deleted. Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] ERROR: Message: '@ No proposal is chosen'. Mar 29 13:31:46 Daves-iPad racoon[519] <Info>: [519] ERROR: fatal NO-PROPOSAL-CHOSEN notify messsage, phase1 should be deleted. Mar 29 13:31:46 Daves-iPad racoon[519] <Info>: [519] ERROR: Message: '@ No proposal is chosen'. Mar 29 13:31:55 Daves-iPad pppd[518] <Notice>: IPSec connection failed Does this offer any clues as to what's going wrong?

    Read the article

  • I can't send email from my server to gmail addresses

    - by brianegge
    I'm using postfix, and have setup spf, dkim, and domainkeys. I can get my email to go to Yahoo, but not gmail. Here's the headers from an email send to Yahoo. Yahoo reports the email as domain key verified. X-Apparently-To: brianegge at yahoo.com via 68.142.206.167; Sat, 20 Mar 2010 05:29:19 -0700 Return-Path: <domains at theeggeadventure.com> X-YahooFilteredBulk: 67.207.137.114 X-YMailISG: x7_Rl9EWLDuugoqPcORhih0FeQMOaIIpz4qfuu9ttx1xbo3uKI2kz.CLUy2cJ1BTtHAwuJtrsGRsveHIx.Dx95avNGlPPGWy_cSpnEwWLXGxBciO.YgtSQxdURQiWLCLvbHej0QPjQIHFjAFjdeGhJd2Y8NgTW1wcExq45Sb7LMlOGvtGMjSQuc8QazwXUxpZrQbIxgSQUTmzQO1x30xaZ2Us6TQTab7Wpya6OgAX.emKOM3phfS5kfhYj9FLQ.qi32sFNWnAoFdVK596OTP2F63PAJOVM5qPsM2jIAbJylIBmnj94LO7hOVr3KOS6XLtCPRn2Oe X-Originating-IP: [67.207.137.114] Authentication-Results: mta1055.mail.mud.yahoo.com from=theeggeadventure.com; domainkeys=pass (ok); from=theeggeadventure.com; dkim=pass (ok) Received: from 127.0.0.1 (EHLO mail.theeggeadventure.com) (67.207.137.114) by mta1055.mail.mud.yahoo.com with SMTP; Sat, 20 Mar 2010 05:29:19 -0700 Received: by mail.theeggeadventure.com (Postfix, from userid 1003) id BB5B01C16A4; Sat, 20 Mar 2010 12:29:16 +0000 (UTC) DomainKey-Signature: a=rsa-sha1; s=2010; d=theeggeadventure.com; c=simple; q=dns; b=JHbK9VhqyQTfpQFqaXxJrKpEG9h9H0IZ0LdWoBooJEA7hv3SYWmFUtyE247EuwoaG gzApKJ1DuRhwESZ7PswrbzuaUL8poAUO8LmMvZ+OqnDolgNSJUYWu0FcO+fe3H4m9ZD grkj0xMpHw+uFjXV4plKO+sa8olJXJAmP+9cMEo= X-DKIM: Sendmail DKIM Filter v2.8.2 mail.theeggeadventure.com BB5B01C16A4 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=theeggeadventure.com; s=2010; t=1269088156; bh=bUlMldcnzFCmCmNT8qjpRl6fiY1YyjiZiC9jhCXASOw=; h=Subject:To:Message-Id:Date:From; b=EVNolTlh4Gch5/HIrrHaRQvcApl7wkB42gB44NsPcLZD2QrhuOvnhanhnEB4UbV0e A+3dAOjhX7LKzgGrn11jXNTiEjNX1vQDsX3HyG0fNra73aWiGTzr1nHJfnuEJ7Ph0j 5tp0HRL5jjikD1XJcvmsYzTpT22mxuz60HXYRB1s= Subject: cron To: <brianegge at yahoo.com> X-Mailer: mail (GNU Mailutils 1.2) Message-Id: <[email protected]> Date: Sat, 20 Mar 2010 12:29:16 +0000 (UTC) From: This sender is DomainKeys verified [email protected] (domains) View contact details Content-Length: 818 When I send to gmail, I see the following in my server log, but the message doesn't even reach my spam folder. Mar 20 12:59:12 Everest postfix/pickup[27802]: C81C61C16A4: uid=1000 from=<egge> Mar 20 12:59:12 Everest postfix/cleanup[27847]: C81C61C16A4: message-id=<[email protected]> Mar 20 12:59:13 Everest postfix/qmgr[27801]: C81C61C16A4: from=<[email protected]>, size=2784, nrcpt=1 (queue active) Mar 20 12:59:14 Everest postfix/smtp[27849]: C81C61C16A4: to=<brianegge at gmail.com>, relay=gmail-smtp-in.l.google.com[209.85.223.24]:25, delay=2.1, delays=0.39/0.28/0.13/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1269089954 32si4566750iwn.51) Mar 20 12:59:14 Everest postfix/qmgr[27801]: C81C61C16A4: removed I've send to email to test services, and the report everything verifies ok. I've also checked all the RBL lists, and I'm not on any of them.
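
    Editor's note: the log shows Gmail accepted the message (250 OK) and then silently discarded it, which at the time usually pointed at reverse-DNS / sender-reputation checks rather than at postfix itself. A few things worth verifying from a shell — the IP is the one from the Yahoo headers above:

        dig +short -x 67.207.137.114         # PTR record: should exist and ideally match the HELO name
        dig +short txt theeggeadventure.com  # SPF record: should authorize the sending IP
        postconf smtp_helo_name myhostname   # what postfix announces about itself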

    Read the article

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What’s the best Remote Desktop Application? I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required; I am looking at ways that we can service our clients remotely whenever possible. What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as be robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. This would be unacceptable because of the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for) Along with the ease of use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across. Are there any equally good products out there? If so what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

    Read the article

  • Glassfish V3 won't start

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 and tried to run the GlasshFish V3 server. I'm working under Windows Vista 32 Bits. First, it won't run. Then I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it: 127.0.0.1 localhost And when I run the GlasshFish V3 Server, no error is showing but only "INFOs" are displayed: 3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main INFO: Launching GlassFish on Felix platform Welcome to Felix ================ INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010 INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010 INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127 INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms) INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116 INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155 INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160 INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159 INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4 INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2 INFO: Binding RMI port to *:35165 INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver. INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159 INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null} INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null} INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160 INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null} And there is no message such as "Glassfish started"! So, when I try to access to the admin web interface: localhost:4848 or localhost:8080 or localhost:8181 , It doesn't work. What should I do? Thank you very much, Regards.
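
    Editor's note: the log stops before any "domain started" style message and nothing answers on 4848, so it may be easier to take NetBeans out of the picture and start the domain by hand with verbose output, then confirm something is listening on the admin port. A sketch — the bin path is inferred from the install path in the log, and "domain1" is the assumed default domain name:

        cd "C:\Program Files\sges-v3\glassfish\bin"
        asadmin start-domain --verbose domain1
        asadmin list-domains
        netstat -ano | findstr ":4848 :8080"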

    Read the article

  • Local, Multiple-Blog (ie Dashboard) Blogging Software as Alternative to Blogger [closed]

    - by Synetech inc.
    FOR RE-OPENING: I don’t see how it is “too localized”. Plenty of people like to run their own web-apps instead of relying on third-party services. If that were not true, then WordPress, phpBB, Apache, PHP, etc. would not be available for general use. As for “Internet audience at large”, I must have missed the part where it was a rule that you are only allowed to ask for help for things that applies to everyone else too; I thought you were allowed to ask for help. Besides, if someone knows of software that fulfills the question, then it is relevant to whomever would download it, and so is not only applicable to an “extraordinarily narrow situation”. (Besides, the reason that I was asking was because Google had announced that it was discontinuing FTP support for Blogger and so many people were affected—read NOT TOO LOCALIZED—and were trying to find alternatives.) Hi, I am trying to find software (for Windows, PHP, MySQL/SQLite/flat, free, open-source) to localize all of my software and service so that I can keep my files and host when needed from my own system instead of some remote computer. I’ve already selected things like web, FTP, and db servers. I’ve chosen forum and wiki software, as well as an RCS system. At this point, all I’m still looking for—actually, I still need to choose bug-tracking software, but besides that—is blogging software. I still use Blogger and am trying to find something that I can use to import my Blogger stuff and store on (and publish to) my home system. I have read of various blogging software including WordPress, MovableType, and TextPattern. The problem is that I am trying to find something that is like Blogger (which from what I can tell is not available on Google Code as open-source). What I specifically need is multiple-blog support. That is, multiple blogs ala the Blogger Dashboard, not multiple user accounts (although that is important as well). The closest thing that I have been able to find is using Wordpress categories to simulate multiple blogs, but that’s not really what I want. I want software that I can run locally that has a multi-blog dashboard like Blogger. Any ideas? Thanks a lot!

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. The one in my office has an MD1220 attached. The other, in my hosting vendor's datacenter, has an MD3200. I'm getting significantly worse throughput from the MD3200 in my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests: Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3GB/s - dd Seq. writes on MD3200 at my vendor's: 240MB/s - bonnie++, 310MB/s - dd Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the poorly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with DAS performance can look at it and tell me if I'm missing something here and my expectations are too high. To summarize, the question here is: is it reasonable to expect about 100MB/s of sequential write throughput for each pair of drives in RAID10 on the MD3200? Is there any trick to enable such performance on the MD3200 with dual controllers as opposed to the simpler MD1220 with a single H800 adapter? More details about the configurations: The good one in my office: Dell R710 2CPU X5650 @ 2.67GHz 12 cores 96GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64 20x300GB 2.5" SAS 10K in a single RAID10 1MB chunk size on MD1220 + Dell H800 I/O controller with 1GB cache in the host The not-so-good one at my vendor's: Dell R710 2CPU L5520 @ 2.27GHz 8 cores 144GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64 20x146GB 2.5" SAS 15K in a single RAID10 512KB chunk size, Dell MD3200, 2 I/O controllers in array with 1GB cache each Additional information: I've also run the same tests on the same vendor's host, but the storage was two RAIDs of 14x146GB 15K RPM drives in RAID 10, striped together at the OS level on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives RAID1, Perc 6i) I got about 128MB/s seq. writes. Just two internal drives gave me about half of the 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK, it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards Igor
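
    Editor's note: before concluding the MD3200 itself is the bottleneck, it may be worth ruling out host-side defaults (read-ahead, I/O scheduler) and confirming that write-back cache is enabled for the virtual disk in the Modular Disk Storage Manager; with two controllers, cache mirroring between them can also cost a large chunk of write throughput, so briefly re-testing with mirroring disabled is a common experiment. A sketch — the device name and mount point are assumptions:

        cat /sys/block/sdb/queue/scheduler       # assuming the MD3200 LUN shows up as /dev/sdb
        blockdev --getra /dev/sdb
        blockdev --setra 16384 /dev/sdb          # try a larger read-ahead
        echo deadline > /sys/block/sdb/queue/scheduler
        dd if=/dev/zero of=/mnt/md3200/ddtest bs=1M count=20000 oflag=direct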

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of stack exchange is the most adequate for my question, but it would seem to me that people sharing this kind of concern would converge either here, or possibly on a more unix-specific sub site. Either way, here goes. Background Feel free to skip to The Question, below. This should, however, help those interested understand where I'm coming from, and where I expect to get, messaging-wise. My online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and clients out there are very good. I still use, and will always continue to use IRC for most of my chat needs. But then, there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work too well on intermittent connections, such as a mobile device, where you can often walk around places with low signal. I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to it's knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, I got called many weird things for not using MSN, but folks eventually got over it. Next, Google Talk came along, and seemed to be a lot better than MSN ever was. The protocol was open, so I could use whatever client I felt a fancy for. With the advent of smart phones, I just got myself a gtalk client on the phone, and have had a really decent integrated mostly-universal IM solution. Over the last few months, all Google services have been feeling flaky. IMs will often arrive anywhere between twenty minutes and one hour after being sent, clients will randomly disconnect, client priorities seem to work sometimes, and sometimes just a random device of those connected will get an IM. I think the time has come to look for greener grass. The Question It's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging, and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) to solve our persistent network issues, I've turned to serverfault for guidance. I've done some searching, read related documentation on cisco.com and tried a bit of troubleshooting. Here is the config: 100mb synchronous connection from a business internet provider (tested multiple times at 100meg at the source) Cisco 871W wireless access point & router is where the WAN connection starts (this serves all our wireless). The only wired connection in the 871W is the Catalyst switch listed below. Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate. About 20 Windows workstations and servers (AD/Webservers only). Some services in EC2 including mail and other web servers/apps. I've been TOLD cabling internally should be gigabit-ready. Here are the problems: generally slow download rates from the internet to the desktop/laptop frequent "page cannot be displayed" errors in browsers - sometimes 3 or 4 reloads are necessary... oftentimes CSS won't load or other content requiring the browser to connect to a different server. slow speed within the LAN from workstation to workstation copying files. I would expect extremely fast data transfer workstation to workstation / server to workstation in this simple network. Several things I need to admit: I'm not primarily a network guy. Funding is relatively low, so I need to be the guy who finds the solution. I understand most of the terminology and most of the technology. Implementation is where I fail due to lack of experience. Getting to the point: I'm wondering whether experienced network admins think that our small network should be sufficiently served with our current hardware if configured properly... or if we should purchase new equipment and start fresh? If starting fresh is the plan, whatever that new equipment may be is likely a different question entirely. If I haven't provided enough information, I will happily do some troubleshooting and update with the results. I have experience using wireshark and some other tools. Please let me know what you think would be most helpful and thanks in advance. EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP". It eventually crashes and closes. I've rebooted the hardware and this still happens.

    Read the article

  • Communication software wanted: email, sms, IM, phone calls [closed]

    - by user63835
    I am searching for a software solution that integrates / unifies my communication. I use email, instant messaging, SMS and phone. I would like to get all emails, SMS, instant messaging dialogs and meta-data about phone calls into one application. What is important is that I can access all past communication with one application. There should be a global address-book to map the communication data to persons or organizations. I want all the communication data in one place so I can access and back it up easily. The software solution is not required to be a multi-user or server application. It is just for one user (me) only, but server or multi-user applications are not excluded. I may run it on server hardware. It should run on Linux (Lubuntu / Ubuntu preferred). Free and open-source software is preferred. It would be nice if I could perform new communication (like writing a new email, SMS, etc.) with one application, but that is not a must-have requirement. I could also work with different applications dedicated to different types of communication, like an IM application for IM and an email application for email, as long as all the communication data from the specialized applications is delivered to one single place where I can access and back it up. I have an Android phone and currently I am using Google contacts as the address-book. In the long term this may change, to get back control over my data. I did some Internet searching but did not find a nice solution yet. If I am looking for unified messaging and unified communication, am I on the right track? The current Thunderbird version has IM functionality integrated; I have not tried it yet. For SMS it may be possible to use an app to send every SMS (incoming and outgoing) as an email, but I am not sure if those SMS-emails can be mapped to an address-book contact. I don't remember exactly, but isn't there a Google Android app (I think Google Voice) integrating SMS into Google services? But in Germany this function has not been released yet. Maybe a groupware solution would meet the requirements, but I don't have much experience with it. As communication possibilities are growing, I am wondering why there seems to be such a big gap in solutions. I can't believe I am the only one who would like a solution that integrates all the communication channels more easily. If you know a software solution that solves these requirements (even partly), I would be glad if you told me about it. Thanks in advance.

    Read the article

  • Exchange server not serving mobile devices - how to troubleshoot?

    - by chickeninabiscuit
    Our exchange server has suddenly stopped serving mobile devices. Attempts to connect result in our ActiveSync server returning HTTP 500. It is serving outlook clients fine. Our server is Windows 2003 SBS 6.5 SP2 There are no abnormal events in the system log. I ran the "Exchange ActiveSync with AutoDiscover" at https://www.testexchangeconnectivity.com/ I've notice an abnormality in the exchange properties, Log File Directory shows: Access denied. Facility: Win32 ID no: 80070005 Exchange System Manager As shown in the following image: I think it may be related to a recent issue we had here: http://serverfault.com/questions/40222/windows-server-2003-suddenly-unable-to-connect-to-anything We followed a procedure to reinstall TCP/IP: http://support.microsoft.com/kb/325356 I've run the "exchange activesync" connectivity test at testexchangeconnectivity.com: Attempting to Resolve the host name mail.immersive.com.au in DNS. Host successfully Resolved Additional Details IP(s) returned: 221.133.203.229 Testing TCP Port 443 on host mail.immersive.com.au to ensure it is listening/open. The port was opened successfully. Testing SSL Certificate for validity. The certificate passed all validation requirements. Test Steps Validating certificate name Successfully validated the certificate name Additional Details Found hostname mail.immersive.com.au in Certificate Subject Common name Validating certificate trust for Windows Mobile Devices Certificate is trusted and all certificates are present in chain Additional Details Certificate is trusted for Windows Mobile 5 and Later platforms. Root = [email protected], CN=Thawte Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, S=Western Cape, C=ZA Testing certificate date to ensure validity Date Validation passed. The certificate is not expired. Additional Details Certificate is valid: NotBefore = 1/5/2009 4:00:00 PM, NotAfter = 1/11/2010 3:59:59 PM Testing Http Authentication Methods for URL https://mail.immersive.com.au/Microsoft-Server-Activesync/ Http Authentication Methods are correct Additional Details Found all expected authentication methods and no disallowed methods. Methods Found: Basic Attempting an Activesync session with server Errors were encountered while testing the ActiveSync session Test Steps Attempting to send OPTIONS command to server OPTIONS response was successfully received and is valid Additional Details Headers received: MicrosoftOfficeWebServer: 5.0_Pub Pragma: no-cache Public: OPTIONS, POST Allow: OPTIONS, POST MS-Server-ActiveSync: 6.5.7638.1 MS-ASProtocolVersions: 1.0,2.0,2.1,2.5 MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync,FolderCreate,FolderDelete,FolderUpdate,MoveItems,GetItemEstimate,MeetingResponse,ResolveRecipients,ValidateCert,Provision,Search,Notify,Ping Content-Length: 0 Date: Thu, 16 Jul 2009 01:07:27 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Attempting FolderSync command on ActiveSync session FolderSync command test failed Tell me more about this issue and how to resolve it Additional Details Exchange

    Read the article

  • Forwarding UDP ports with iptables - packets "lost"?

    - by Dindihi
    I have a Linux router (Debian 6.x) where I forward some ports to internal services. Some TCP ports (like 80, 22...) are OK. I have one application listening on UDP port 54277. No return is coming from this app, I only get data on this port. Router: cat /proc/sys/net/ipv4/conf/all/rp_filter = 1 cat /proc/sys/net/ipv4/conf/eth0/forwarding = 1 cat /proc/sys/net/ipv4/conf/ppp0/forwarding = 1 $IPTABLES -t nat -I PREROUTING -p udp -i ppp0 --dport 54277 -j DNAT --to-destination $SRV_IP:54277 $IPTABLES -I FORWARD -p udp -d $SRV_IP --dport 54277 -j ACCEPT Also MASQUERADING internal traffic to ppp0(internet) is active & working. Default Policy INPUT&OUTPUT&FORWARD is DROP What is strange: when I do tcpdump -p -vvvv -i ppp0 port 54277 I get a lot of traffic: 18:35:43.646133 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.652301 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.653324 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.655795 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.656727 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 18:35:43.659719 IP (tos 0x0, ttl 57, id 0, offset 0, flags [DF], proto UDP (17), length 57) source.ip > own.external.ip..54277: [udp sum ok] UDP, length 29 With tcpdump -p -i eth0 port 54277 (on the same machine, the router) I get much less traffic. Also, on the destination $SRV_IP there are only a few packets coming in, but not all. INTERNAL SERVER: 19:15:30.039663 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16 19:15:30.276112 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16 19:15:30.726048 IP source.ip.52394 > 192.168.215.4.54277: UDP, length 16 So some UDP packets are being "ignored/dropped"? Any idea what could be wrong? Edit: This is strange: The FORWARD rule has data packets, but the PREROUTING rule has 0 packets... iptables -nvL -t filter |grep 54277 Chain FORWARD (policy DROP 0 packets, 0 bytes) 168 8401 ACCEPT udp -- * * 0.0.0.0/0 192.168.215.4 state NEW,RELATED,ESTABLISHED udp dpt:54277 iptables -nvL -t nat |grep 54277 Chain PREROUTING (policy ACCEPT 405 packets, 24360 bytes) 0 0 DNAT udp -- ppp0 * 0.0.0.0/0 my.external.ip udp dpt:54277 state NEW,RELATED,ESTABLISHED to:192.168.215.4
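
    Editor's note: a zero packet count on the PREROUTING rule combined with hits on the FORWARD rule is the pattern you get when an existing connection-tracking entry (created before the current DNAT rule, e.g. during earlier testing) is still steering the flow — the nat table is only consulted for NEW conntrack entries, which is also why the state match on that DNAT rule does nothing useful there. A couple of things to try; this sketch assumes the conntrack userspace tool is installed:

        conntrack -L -p udp | grep 54277        # look for stale entries involving this port
        conntrack -F                            # flush the tracking table (briefly disrupts tracked flows)
        # temporary tracing rules to see exactly where the packets go
        iptables -t nat -I PREROUTING 1 -i ppp0 -p udp --dport 54277 -j LOG --log-prefix "nat-pre 54277: "
        iptables -I FORWARD 1 -p udp --dport 54277 -j LOG --log-prefix "fwd 54277: "
        tail -f /var/log/kern.log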

    Read the article

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to successfully get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before and I think I'm getting familiar with how it works. However the current situation is this. Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system. So I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work but I am currently not having luck. THE SETUP *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsvr\fcgiext.dll, like it should. The fcgiext.ini config has the proper lines: [Types] php=PHP [PHP] ext=C:\program files\PHP\php-cgi.exe And the php.ini file also has the correct configs. All extensions are disabled and I changed the correct things for FastCGI. And everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page () on another computer, I get the following error: FastCGI Error The FastCGI Handler was unable to process the request. Error Details: * Section [PHP] not found in config file. * Error Number: 1413 (0x80070585). * Error Description: Invalid index. HTTP Error 500 - Server Error. Internet Information Services (IIS) A quick Google search reveals that I have it all setup correctly as far as the INI's go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
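
    Editor's note: "Section [PHP] not found in config file" often means the FastCGI handler is reading a different fcgiext.ini than the one that was edited by hand. It can be safer to let the extension write the mapping itself with the fcgiconfig.js script it ships with — a sketch only, with the paths and flags as I believe the IIS 6 FastCGI extension documents them; adjust to your install and restart IIS afterwards:

        cd %windir%\system32\inetsrv
        cscript fcgiconfig.js -add -section:"PHP" -extension:php -path:"C:\Program Files\PHP\php-cgi.exe"
        cscript fcgiconfig.js -set -section:"PHP" -InstanceMaxRequests:10000
        iisreset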

    Read the article

  • How to reference a Domain Controller from outside the Local Network?

    - by Adrian
    We have multiple servers scattered over different hosting providers. For learning, experimenting and, ultimately, production purposes, I set one of them up as a Domain Controller. That went well; most of our services are now authenticating via AD, which helps us a lot. What I want to do now is to simplify authentication for the multiple servers by making each of them look at the Domain Controller. This way, our devs can log into the multiple servers (via Remote Desktop) with the same credentials from AD. I know I have to configure each server to look at the Domain Controller, but when I try to join a server to the domain, it cannot find the Domain Controller, although the Domain Controller address is a valid, reachable internet sub-domain (as in "ad.ourcompany.com"). This is the detailed error message:
    Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.
    The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain ad.ourcompany.com:
    The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)
    The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.ourcompany.com
    Common causes of this error include the following:
    - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 109.188.207.9 109.188.207.10
    - One or more of the following zones do not include delegation to its child zone: ad.ourcompany.com ourcompany.com com . (the root zone)
    For information about correcting this problem, click Help.
    What am I missing? I'm an experienced dev, but a newbie sysadmin experimenting with new stuff.
    Disclaimer: All IP addresses and domains/subdomains were changed to preserve security. If by any chance you can still see private information, please let me know so that I can change it.
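    A quick way to narrow this down, as a sketch only (the domain name is the placeholder from the question, and <DC-IP> stands in for the Domain Controller's address): the failure quoted above is a plain DNS lookup failure, so check whether the DNS servers the member server currently uses can resolve the AD SRV records at all, and then whether the DC's own DNS service can. If only the second query answers, the member server needs to use the DC (or a DNS server that forwards or delegates the _msdcs zone to it) as its DNS server before the domain join can succeed, because public resolvers will never carry these records.

    nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.ourcompany.com
    nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.ourcompany.com <DC-IP>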

    Read the article

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
    I simply cannot get this (a TLS connection to OpenLDAP) to work and would appreciate some assistance. I have a working OpenLDAP server on Ubuntu 10.04 LTS; it is configured to use cn=config, and most of the info I can find for TLS seems to use the older slapd.conf file :-( I've been largely following the instructions here https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem, as I don't totally understand all of this yet! I have created an ssl.ldif file as follows:
    dn:cn=config
    add: olcTLSCipherSuite
    olcTLSCipherSuite: TLSV1+RSA:!NULL
    add: olcTLSCRLCheck
    olcTLSCRLCheck: none
    add: olcTLSVerifyClient
    olcTLSVerifyClient: never
    add: olcTLSCACertificateFile
    olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
    add: olcTLSCertificateFile
    olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
    add: olcTLSCertificateKeyFile
    olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem
    and I import it using the following command line:
    ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif
    I have edited /etc/default/slapd so that it has the following services line:
    SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"
    and every time I make a change, I restart slapd with /etc/init.d/slapd restart. The following command line to test the non-TLS connection works fine:
    ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
      -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*"
    But when I switch to ldaps using this command line:
    ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
      -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*"
    this is what I get:
    ldap_url_parse_ext(ldaps://mydomain.com)
    ldap_create
    ldap_url_parse_ext(ldaps://mydomain.com:636/??base)
    ldap_sasl_bind
    ldap_send_initial_request
    ldap_new_connection 1 1 0
    ldap_int_open_connection
    ldap_connect_to_host: TCP mydomain.com:636
    ldap_new_socket: 3
    ldap_prepare_socket: 3
    ldap_connect_to_host: Trying 127.0.0.1:636
    ldap_pvt_connect: fd: 3 tm: -1 async: 0
    TLS: can't connect: A TLS packet with unexpected length was received..
    ldap_err2string
    ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
    Now if I check netstat -al I can see:
    tcp 0 0 *:www *:* LISTEN
    tcp 0 0 *:ssh *:* LISTEN
    tcp 0 0 *:https *:* LISTEN
    tcp 0 0 *:ldaps *:* LISTEN
    tcp 0 0 *:ldap *:* LISTEN
    I'm not sure if this is significant as well... I suspect it is:
    openssl s_client -connect mydomain.com:636 -showcerts
    CONNECTED(00000003)
    916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188:
    I think I've made all my certificates etc. OK, and here are the results of some checks. If I do this:
    certtool -e --infile /etc/ssl/certs/ldap_cacert.pem
    I get "Chain verification output: Verified."
    certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem
    gives "certtool: the last certificate is not self signed", but it otherwise seems OK?
    Where have I gone wrong? Surely getting OpenLDAP to run securely on Ubuntu should be easy and not require a degree in rocket science! Any ideas?
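    Two checks that often explain exactly this symptom, offered as a sketch only (the file paths are the ones from the question, and it assumes the gnutls-bin package is available): on Ubuntu, slapd runs as the openldap user and is built against GnuTLS, so if that user cannot read the private key under /etc/ssl/private the ldaps listener still comes up but every handshake fails much as shown above, and the real reason only appears in syslog.

    # can the slapd service account actually read the key file?
    sudo -u openldap head -c1 /etc/ssl/private/my.domain.com_slapd_key.pem >/dev/null && echo "key readable"
    # exercise the handshake with GnuTLS, the same TLS library slapd uses on Ubuntu
    gnutls-cli --insecure -p 636 mydomain.com
    # and look for the server-side TLS error while doing so
    grep -i tls /var/log/syslog | tail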

    Read the article

  • SQL Server 2008 R2 installation error

    - by Sonia
    I have a Windows 7, 32-bit laptop. I am the administrator with all permissions. When I click on the SQL Server 2008 R2 setup file, it says: "SQL Server setup has encountered the following error: Failed to retrieve data for this request", and I click OK. I have uninstalled all of the SQL components from Control Panel. I used Windows Installer Clean Up to remove the files (which I must have not done), but still no go. The summary.txt log says:
    Overall summary:
    Final result: Failed: see details below
    Exit code (Decimal): 847168662
    Exit facility code: 638
    Exit error code: 50326
    Exit message: Failed to retrieve data for this request.
    Start time: 2012-05-25 14:59:15
    End time: 2012-05-25 15:00:09
    Requested action: RunRules
    Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120525_145905\Detail.txt
    Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.0.5500.0%26EvtType%3d0xEF814B06%400x92D13C14
    Machine Properties:
    Machine name: EWAN-PC
    Machine processor count: 4
    OS version: Windows Vista
    OS service pack: Service Pack 1
    OS region: Australia
    OS language: English (United States)
    OS architecture: x86
    Process architecture: 32 Bit
    OS clustered: No
    Package properties:
    Description: SQL Server Database Services 2008
    SQLProductFamilyCode: {628F8F38-600E-493D-9946-F4178F20A8A9}
    ProductName: SQL2008
    Type: RTM
    Version: 10
    SPLevel: 0
    Installation location: c:\385030d65c6ff61fb9\x86\setup\
    Installation edition: EXPRESS
    User Input Settings:
    ACTION: RunRules
    CONFIGURATIONFILE:
    FEATURES:
    HELP: False
    INDICATEPROGRESS: False
    INSTANCENAME:
    QUIET: False
    QUIETSIMPLE: False
    RULES: GLOBALRULES,SqlUnsupportedProductBlocker,PerfMonCounterNotCorruptedCheck,Bids2005InstalledCheck,BlockInstallSxS,AclPermissionsFacet,FacetDomainControllerCheck,SSMS_IsInternetConnected,FacetWOW64PlatformCheck,FacetPowerShellCheck
    X86: False
    Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120525_145905\ConfigurationFile.ini
    Detailed results:
    Rules with failures:
    Global rules:
    There are no scenario-specific rules.
    Rules report file: The rule result report file is not available.
    Exception summary:
    The following is an exception stack listing the exceptions in outermost to innermost order
    Inner exceptions are being indented
    Exception type: Microsoft.SqlServer.Management.Sdk.Sfc.EnumeratorException
    Message: Failed to retrieve data for this request.
    Data: HelpLink.ProdName = Microsoft SQL Server
    HelpLink.BaseHelpUrl = http://go.microsoft.com/fwlink
    HelpLink.LinkId = 20476
    DisableWatson = true
    Stack:
    at Microsoft.SqlServer.Setup.Chainer.Workflow.PendingActions.InvokeActions(WorkflowObject metaDb, TextWriter loggingStream)
    at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionEngine.RunActionQueue()
    at Microsoft.SqlServer.Setup.Chainer.Workflow.Workflow.RunWorkflow(HandleInternalException exceptionHandler)
    at Microsoft.SqlServer.Chainer.Setup.Setup.RunRequestedWorkflow()
    at Microsoft.SqlServer.Chainer.Setup.Setup.Run()
    at Microsoft.SqlServer.Chainer.Setup.Setup.Start()
    at Microsoft.SqlServer.Chainer.Setup.Setup.Main()
    Inner exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
    Message: Attempted to perform an unauthorized operation.
    Data: WatsonData = HKEY_LOCAL_MACHINE@SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Microsoft SQL Server 10
    Stack:
    at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess)
    at Microsoft.SqlServer.Configuration.Sco.SqlRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess)
    at Microsoft.SqlServer.Discovery.RegistryKeyExistsPropertyValueProvider.GetPropertyValue(Object[] context)
    at Microsoft.SqlServer.Discovery.DiscoveryEnumObject.GetPropertyValueFromProvider(IPropertyValueProvider propertyValueProvider, String machineName, Object[] context)
    at Microsoft.SqlServer.Discovery.ObjectInstanceSettings.IsObjectFound(String machineName, String idFilter)
    at Microsoft.SqlServer.Discovery.Product.FilterObjectSet(ArrayList objects, String idFilter)
    at Microsoft.SqlServer.Discovery.Product.GetData(EnumResult erParent)
    at Microsoft.SqlServer.Management.Sdk.Sfc.Environment.GetData()
    at Microsoft.SqlServer.Management.Sdk.Sfc.Environment.GetData(Request req, Object ci)
    at Microsoft.SqlServer.Management.Sdk.Sfc.Enumerator.GetData(Object connectionInfo, Request request)
    at Microsoft.SqlServer.Management.Sdk.Sfc.Enumerator.Process(Object connectionInfo, Request request)
    Inner exception type: System.UnauthorizedAccessException
    Message: Attempted to perform an unauthorized operation.
    Stack:
    at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess)
    I need to install SQL Server 2008 R2 for one of the company's applications to work. Any immediate help will be greatly appreciated. Thanks, Sonia

    Read the article

  • 2 Servers setup for redundancy, backup

    - by minal
    I presently have 1 dedicated virtual server running my website/blog/mail, etc. This is on Hyper-V with 512MB RAM, running Windows Web Server 2008. Within the VM I have these running:
    SmarterMail – for email
    MS DNS – I have my own nameservers on this server
    SQL Express
    IIS 7
    2 IP addresses
    I have now leased 2 physical servers: P4 2.6GHz, 1GB RAM, 80GB HDD. With these new servers I also get 2 IPs per server. These are running Windows 2008 Standard. With the VM, the HDD was obviously on a RAID setup, so I was not worried about hardware issues as it fell to the provider to manage. However, with the new servers the HDD is not RAID'd, hence my concern is that if it fails I need a backup position. What would be the most ideal setup to go for? I am thinking:
    Server 1 (Web/Primary DNS):
    DNS – NS1
    SQL Express – OFF, turn on when required, i.e. when Server 2 is down
    SmarterMail – OFF, turn on when required, i.e. when Server 2 is down
    IIS 7
    Server 2 (SQL/Backup):
    DNS – NS2
    SQL Web Edition
    SmarterMail
    IIS 7
    How can I set it up so that if Server 1 goes down I can have everything on Server 2, either instantly or by manually switching over? I am confused, as other DNS servers will cache the web server's IP address for requests, and if that server goes down the backup server will have a different IP (see the DNS TTL sketch below). How do I make this work? I will be doing routine backups, in which case I will keep copies of the backups on both servers. If I am copying the same stuff to both servers like a mirror, then I lose the true performance of having two machines; it's like one server is always on standby. Ideally I want SQL and web on 2 different machines for best performance. If Server 1 goes down, I should be able to switch to Server 2 fairly easily. I don't have a problem with manual intervention to start the SQL/mail services, etc. In terms of scalability, the VM has coped pretty well to date, but moving forward the SQL and IIS workload is going to double pretty quickly. Some ideas would be great.
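    On the DNS-caching concern, a sketch only (run from any machine with dig installed; example.com and ns1.example.com stand in for the real zone and nameserver): resolvers cache an A record only for as long as its TTL, so manual failover by repointing the record to Server 2's IP is workable provided the TTL is lowered well ahead of time (for example to 300 seconds). The remaining TTL that outside resolvers will honour, and the TTL configured on the authoritative server, can be checked like this:

    dig +noall +answer www.example.com A                      # second column = seconds left in this resolver's cache
    dig +noall +answer @ns1.example.com www.example.com A     # the TTL as configured on the authoritative nameserver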

    Read the article

  • iftop Shows Lots of Mysterious Connections - Not Showing in netstat

    - by HOLOGRAPHICpizza
    I've just stopped pretty much all services except sshd on my server (Ubuntu Server 10.04), and when I run iftop I get output that looks like this:
    12.5Kb 25.0Kb 37.5Kb 50.0Kb 62.5Kb
    mqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqq
    flash.gateway.2wire.net:ssh <=> 172.16.1.151:60405 1.75Kb 1.54Kb 2.22Kb
    flash.gateway.2wire.net:21095 <=> 69.127.29.20:32582 536b 107b 27b
    flash.gateway.2wire.net:21095 <=> 190.164.122.134:13557 0b 105b 26b
    flash.gateway.2wire.net:21095 <=> 79.165.212.195:45138 0b 105b 26b
    flash.gateway.2wire.net:21095 <=> 151.42.15.151:9031 0b 72b 18b
    flash.gateway.2wire.net:21095 <=> 88.185.120.179:51413 0b 0b 49b
    flash.gateway.2wire.net:21095 <=> 178.120.152.97:25924 0b 0b 29b
    flash.gateway.2wire.net:21095 <=> 109.110.217.77:27868 0b 0b 26b
    flash.gateway.2wire.net:21095 <=> 84.13.201.90:16509 0b 0b 26b
    flash.gateway.2wire.net:21095 <=> 171.7.125.224:11777 0b 0b 26b
    flash.gateway.2wire.net:21095 <=> 115.177.164.170:21360 0b 0b 26b
    flash.gateway.2wire.net:21095 <=> 50.88.126.18:25540 0b 0b 25b
    flash.gateway.2wire.net:21095 <=> 223.206.230.163:13431 0b 0b 25b
    flash.gateway.2wire.net:21095 <=> 78.144.187.26:24515 0b 0b 25b
    flash.gateway.2wire.net:21095 <=> 83.20.61.211:27572 0b 0b 25b
    flash.gateway.2wire.net:21095 <=> 82.134.151.42:18448 0b 0b 18b
    flash.gateway.2wire.net:21095 <=> 126.117.95.247:25316 0b 0b 18b
    flash.gateway.2wire.net:21095 <=> 116.202.65.230:9044 0b 0b 18b
    flash.gateway.2wire.net:21095 <=> 88.120.63.205:51413 0b 0b 17b
    qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
    TX: cumm: 61.6KB peak: 8.00Kb rates: 1.59Kb 1.38Kb 2.04Kb
    RX: 18.4KB 1.64Kb 696b 549b 640b
    TOTAL: 80.0KB 9.64Kb 2.27Kb 1.92Kb 2.66Kb
    This is the first part (not the unix socket part) of the output of netstat -a:
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State
    tcp        0      0 *:ssh                   *:*                     LISTEN
    tcp        0      0 *:55677                 *:*                     LISTEN
    tcp        0      0 flash.gateway.2wire:ssh 172.16.1.151:60405      ESTABLISHED
    tcp        0     48 flash.gateway.2wire:ssh 172.16.1.151:60661      ESTABLISHED
    tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
    udp        0      0 *:37790                 *:*
    What could all those strange connections on port 21095 be? And why would they not show up in netstat? Any advice would be greatly appreciated.
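    A hedged reading of the discrepancy, with a sketch of how to confirm it (the interface name should be whichever one iftop was run against): iftop sniffs packets on the wire, while netstat only lists sockets, so unsolicited inbound UDP - for instance stray peer-to-peer/DHT traffic still aimed at this IP on port 21095, possibly left over from a previous user of the address - shows up in iftop even though nothing is bound to that port and there is therefore no socket for netstat to list. If no process owns udp/21095, the kernel will normally be answering those packets with ICMP port-unreachable:

    # does anything actually own udp/21095?
    sudo ss -uanp | grep 21095
    sudo netstat -uanp | grep 21095
    # watch for the ICMP port-unreachable replies sent back to the unsolicited senders
    sudo tcpdump -ni eth0 'icmp[icmptype] = icmp-unreach'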

    Read the article
