Search Results

Search found 11665 results on 467 pages for 'ajax push'.


  • BIOS and Windows cannot detect CDROM device

    - by eman
    Hello! I have an HL-DT-ST RW/DVD GCC-4521B DVD-ROM drive and a big problem. A few days ago everything worked fine. A friend installed some software, and afterwards the drive was marked as corrupt in Windows XP. I uninstalled the software, but the drive stayed marked as corrupt. The next step was to run the current firmware utility, GCC-4521B101(E).exe. After running it, the drive was updated automatically but was still flagged as corrupt in Device Manager, even after a reboot. And then the big mistake: I tried to run the utility once more, but during the update process the machine restarted and boom! The DVD-ROM drive doesn't work any more. The LED doesn't blink, and nothing happens when I push the eject button. The BIOS and Windows XP no longer recognize the optical drive. I plugged in another optical drive and it worked, so my old drive seems to be dead. What happened, and how can I solve this problem? Please help. Regards!

    Read the article

  • RD Gateway reporting features/capabilities

    - by Don
    We have just implemented RD Gateway for our own department in preparation for a push to the whole agency for telecommuting. It is all set up and working great, but I'm trying to figure out how best to go about monitoring and reporting on users. I see third-party software out there that will do it, but is there anything built in, or available via PowerShell or scripting, that would give me a report of the daily activity of users? Something that says "User A connected at this time, was on for this long, sent/received this much data"; basically some of the same information you can see in Event Viewer. Ideally I'd like to set this up so that once a day it emails me the daily usage. Then, when a supervisor asks whether their person is actually working (or at least online, sending and receiving x amount of data), I'll have some metrics to give them. I realize that actual work output is more of a managerial issue, but I would like to be able to offer as much as I can from my end when asked. Thoughts? Thanks!
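
    A minimal sketch of the scripted route, assuming the RD Gateway role writes its connect/disconnect events to the Microsoft-Windows-TerminalServices-Gateway/Operational log and that event IDs 302 (connect) and 303 (disconnect, including bytes transferred) apply on this build; the IDs are worth confirming in Event Viewer first, and the addresses and SMTP server below are placeholders:

        $since  = (Get-Date).AddDays(-1)
        # pull yesterday's connect/disconnect events from the RD Gateway operational log
        $events = Get-WinEvent -FilterHashtable @{
            LogName   = 'Microsoft-Windows-TerminalServices-Gateway/Operational'
            Id        = 302, 303
            StartTime = $since
        }
        $report = $events |
            Select-Object TimeCreated, Id, Message |
            Sort-Object TimeCreated |
            Out-String

        # mail the plain-text report once a day (run from Task Scheduler)
        Send-MailMessage -To 'admin@example.com' -From 'rdgw@example.com' `
            -SmtpServer 'mail.example.com' -Subject 'RD Gateway daily activity' `
            -Body $report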

    Read the article

  • How to import a text file into powershell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, sort them in descending order by mailbox size, and put that data into an HTML-formatted email to send to myself. So far I can get the data, push it to a text file, and create and send an email to myself; I just can't seem to put it all together. I've been trying to use ConvertTo-Html, but the email then contains things like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" instead of the actual data. I can get it to send me the right data if I don't use ConvertTo-Html and just pipe the data to a text file and pull from that, but then it all runs together with no formatting. I don't need to save the file; I'd just like to run the command, get the data, put it in HTML, and mail it to myself. Here's what I have currently:

        #Connects to database and returns information on all users, organized by Total Item Size, User
        $body = Get-MailboxStatistics -database "Mailbox Database 0846468905" |
            where {$_.ObjectClass -eq "Mailbox"} |
            Sort-Object TotalItemSize -Descending |
            ft @{label="User";expression={$_.DisplayName}},@{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} -auto |
            ConvertTo-Html

        #Pause for 5 seconds for Exchange
        write-host -foregroundcolor Green "Pausing for 5 seconds for Exchange"
        Start-Sleep -s 5

        $toemail = "[email protected]"    # Emails report to this address.
        $fromemail = "[email protected]"  # Emails from this address.
        $server = "Exchange.company.com"  # Exchange server - SMTP.

        #Email the report.
        $email = New-Object System.Net.Mail.MailMessage
        $email.IsBodyHtml = $True
        $email.To.Add($toemail)
        $email.From = $fromemail
        $email.Subject = "Exchange Mailbox Sizes"
        $email.Body = $body
        $client = New-Object System.Net.Mail.SmtpClient $server
        $client.UseDefaultCredentials = $true
        $client.Send($email)

    Any thoughts would be helpful, thanks!
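
    For reference, a sketch of the usual shape of the fix, reusing the same database name as above: ConvertTo-Html needs real objects, so the Format-Table (ft) stage is replaced with Select-Object, and ConvertTo-Html's output (an array of strings) is joined into one string before being assigned to the mail body.

        # build the HTML body from objects, not formatting records
        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            Where-Object { $_.ObjectClass -eq 'Mailbox' } |
            Sort-Object TotalItemSize -Descending |
            Select-Object @{n='User';e={$_.DisplayName}},
                          @{n='Total Size (MB)';e={$_.TotalItemSize.Value.ToMB()}} |
            ConvertTo-Html -Title 'Exchange Mailbox Sizes' |
            Out-String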

    Read the article

  • Privacy, VPN and routers

    - by user123189
    Ever since this ACTA push, things have started to heat up around torrents and privacy. I am using Tribler now, but it is not secure enough for me; not enough privacy. In the past I used a Swedish PPTP VPN connection. What I observed is that when the VPN connection went down, Internet traffic wasn't cut off: the downloads continued, this time with my real IP, wiping out my protection.
    1st: How do I enforce a VPN connection that cuts all traffic when it is down? That is, the moment the connection drops, all Internet traffic should cease, as if I'd pulled the network plug out.
    2nd: Is PPTP good enough, or should I ask for SSTP or IKEv2?
    3rd: Should I disable IPv6? Is the VPN no longer private if I keep IPv6 active? I've heard some talk about dual-VPN routers being able to improve privacy, but nothing about how to configure one for such a task.
    4th: Is there any kind of "black box" hardware that can be used for hiding my IP, encrypting traffic and so on?
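
    On the 1st point, one common approach on a Linux client is a firewall "kill switch" that only permits outbound traffic through the tunnel interface, so a dropped VPN takes the downloads down with it. A rough sketch for a PPTP tunnel; the interface name and server address are placeholders to adapt, and a Windows client would need the equivalent done in its own firewall:

        # kill-switch sketch (Linux/iptables assumed)
        VPN_IF=ppp0                 # placeholder: the tunnel interface
        VPN_SERVER=203.0.113.10     # placeholder: the VPN provider's address
        iptables -F OUTPUT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A OUTPUT -o "$VPN_IF" -j ACCEPT
        iptables -A OUTPUT -p tcp -d "$VPN_SERVER" --dport 1723 -j ACCEPT   # PPTP control channel
        iptables -A OUTPUT -p gre -d "$VPN_SERVER" -j ACCEPT                # PPTP GRE tunnel
        iptables -P OUTPUT DROP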

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up files from my Linux box to my Windows Server 2008 machine as a push, and when I delete them from my Linux box they should remain on the Windows Server. I've found lots of similar sources, but most results were for Windows to Linux. I managed to find slightly more similar cases, like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but done as a pull from the Windows Server. Initially I used Unison because I thought the 2-way capability would come in handy, and I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration, and only managed to synchronize using the command:

        unison "profile" -ui text -auto -silent

    When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. When I looked for Unison options, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the server. I found out I could achieve this by using rsync and the -a (archive) option, which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

        rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run), but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
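
    One detail worth flagging in the command above: -n is rsync's --dry-run switch, so as written it only simulates the transfer. A sketch of the real-copy form, with a placeholder user@host for the Cygwin sshd on the Windows side and the same hypothetical paths (-r is already implied by -a):

        rsync -avz /folder/to/be/backed/up/ \
            user@windows-server:/cygdrive/c/place/to/store/backed/up/files/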

    Read the article

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools (RightScale, Scalr, custom scripts) for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc.). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to rebuild all of my web servers at the same time, as that will almost certainly result in some downtime or 500s for my customers. I'd rather have one or two servers build at a time, while the other servers are still available to handle requests. The other alternative is obviously to launch new servers that automatically update themselves on boot, but this isn't as cost-effective, and most likely requires more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones). We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems they all have custom implementations of it. I'm not talking about a simple ssh loop, clusterssh, or even mcollective; I'd prefer something with a nice simple interface that allows me to specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and have it build them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open-source project?
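
    For contrast, the bare-bones version of what such a tool would wrap is just a sequential loop; a sketch with placeholder host names and deploy command (the ask above is essentially this, plus role awareness, health checks and a nicer interface):

        #!/bin/bash
        # Naive rolling deploy sketch: one web server at a time, stop on the first failure.
        # Host list and deploy command are placeholders.
        set -e
        HOSTS="web1 web2 web3 web4"
        for h in $HOSTS; do
            echo "== deploying to $h =="
            ssh "$h" 'sudo /usr/local/bin/deploy_app.sh'
            # a real tool would health-check $h here before moving to the next host
        done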

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), yet simply falls back to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;

            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;

            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could the issue be? I'm trying to allow read-only access to the page via GETs, but require authorization when using hg push to the same URL, which sends a POST request.

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link and have paths calculated as in OSPFv2/v3, but using the cost that my algorithms have calculated.
    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a six-year research project and I am eager to develop something new and move forward.
    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand which parts of the kernel I would need to edit here. I've searched for days now and I am unable to find out how to deploy a routing protocol that doesn't exist yet, without using an application-level framework. If somebody could push me in the right direction that'd be awesome.
    Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons. Thanks!
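
    On the deployment piece, a Quagga-based ospfd is a userspace daemon that installs its routes via zebra, so kernel modifications should not be needed. A rough sketch of putting a patched tree onto one test node; the configure flags are the common defaults and the config files are assumed to be prepared beforehand, so adjust for the testbed's OS images:

        ./configure --prefix=/usr --sysconfdir=/etc/quagga --localstatedir=/var/run/quagga
        make && sudo make install
        # minimal zebra.conf / ospfd.conf assumed to exist in the build directory
        sudo cp zebra.conf ospfd.conf /etc/quagga/
        sudo zebra -d      # kernel routing table manager
        sudo ospfd -d      # the patched OSPF daemon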

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others. The scenario is like this: I upload my birthday video to myproject.com and share it with all of my friends; it is stored on one of the cluster nodes, which has a 100 Mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is that 100 Mbit link, which is about 15 MB per second; with 1000 friends they can each only download at about 15 KB per second. (I'm not even taking into account that the HDD is serving the same files.) My network infrastructure is as follows: a 1 Gbit front-end (client) server connected to 4 storage nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1000 users if at least one storage node can stream more than 15 MB per second to it, with visitors streaming directly from the client server instead of from the storage nodes. I can do that by replicating the file onto 2 nodes, but I don't want to replicate every file uploaded to my network, since that costs much more. So I need a cloud-based system that pushes files onto additional replica nodes automatically when demand for them is high, and deletes the extra copies when demand is low, so each file ends up on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It can only replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions? (Other than recommending Amazon S3.)

    Read the article

  • URL autocomplete no longer working in Chrome

    - by Yuji Tomita
    The browser URL autocomplete started behaving differently yesterday. I used to reach my top URLs by typing the first one or two letters of the URL and pressing Enter. Now I have to visually fish for the right one and press the down arrow to select the URL. Big difference. Does anybody know if I can get the old behaviour back somehow? Have I messed up a setting?
    Example of how my browser used to work. Gmail.com: Cmd+L, type "g", Enter. Stackoverflow.com: Cmd+L, type "s", Enter. Normally the address bar would already be filled in with gmail.com after typing the first "g". It would narrow the matches depending on what characters were typed next, or simply go there if I pressed Enter.
    UPDATE: I just realized my history tab looks suspicious: no entries. But clearly Chrome is pulling some data from my history, as I get very personalized suggestions when typing a letter.
    UPDATE: Fixed! I saved my bookmarks, removed my ~/Library/Application\ Support/Google/Default directory (careful, it looks like absolutely everything is stored there), restarted Chrome, and within one visit to Gmail.com my autocomplete was filling in my URLs like before. Beautiful.

    Read the article

  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been issued, I need the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; what I need to know is whether there is a script that can be sent out, along with a URI or the package itself, to facilitate installation of the Windows Store app on the user's side. I am well aware that numerous guides exist from Microsoft on pushing out line-of-business (LOB, AKA enterprise) applications, that is, apps that have been developed in-house and are not for consumption through the Windows Store. This is inappropriate for my requirement, however; the customer wants their clients to receive apps that currently appear on the Windows Store, and to have them installed silently. I've seen this guide, http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines administered by a system running Windows Server 2012 R2 with the System Center 2012 bundle installed. The systems I target are considerably more decentralised than that, making the guide inappropriate. I have a hunch that Microsoft has deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged.
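
    For comparison, the sideloading path that the LOB guides boil down to is essentially one cmdlet per client; a hypothetical sketch in which the package path is a placeholder and which assumes a package you actually possess, signed by a certificate the client trusts, with the sideloading policy enabled. That combination is exactly what Store-distributed packages do not provide, which is the crux of the question.

        # hypothetical silent install of a sideloaded .appx package
        Add-AppxPackage -Path '\\fileserver\packages\ContosoApp.appx'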

    Read the article

  • Is there a standard method for assigning nameservers to servers in a Windows domain?

    - by HopelessN00b
    As the title asks, I'm wondering if there's any standard or "best practice" for how to actually assign nameservers (DNS) and manage the nameserver configuration for client servers on a Windows domain. I'm talking about the preferred/alternate DNS server setting in the network adapter's TCP/IP properties (a screenshot with that setting circled accompanied the original question), in case the language of the question is not clear enough. This is for a large, multi-site environment, where ideally/hopefully all servers point at their site's domain controller as the primary DNS server, and at a DNS server at a different site as the secondary. For simplicity's sake, say the secondary server is the domain controller at everyone's home site, and there are no tertiary DNS servers (even though that's not actually the case). Try as I might, I can't seem to find a GPO setting for this (at least at functional level 2003 R2, the Computer Configuration - Administrative Templates - Network - DNS Client - DNS Servers GPO is "Supported on: Windows XP Professional only"), and I find it rather hard to believe that the "best"/"standard" solution would therefore be either scripting something up to apply the DNS settings per site, or using a DHCP server to push those configurations out to the other servers via DHCP scope options. So, is there a standard way of managing this configuration? (Hopefully one that isn't "a script" or "a DHCP scope option".)
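
    If it does come down to "a script", the per-server assignment itself is small; a sketch using netsh, where the interface name and the site DC addresses are placeholders and the exact parameter spelling may vary slightly between Windows versions:

        netsh interface ip set dns name="Local Area Connection" source=static addr=10.1.0.10
        netsh interface ip add dns name="Local Area Connection" addr=10.2.0.10 index=2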

    Read the article

  • OpenVPN as a way to connect to a LAN behind another client, rather than behind the server

    - by Einar
    Setup: one LAN handled by a router that has no publicly available IP address but no outbound connection restrictions (the "target LAN"), and a separate server that is publicly reachable from the Internet (the "gateway"). I am trying to set up OpenVPN so that a third client can connect to the gateway and access the target LAN. As the target LAN's router is not reachable from the Internet directly, it also connects to the gateway via OpenVPN. The problem is how to handle routing. The LAN router has two network interfaces (one for the outside network and one for the LAN itself). In the OpenVPN server config on the gateway I set client-to-client and push "route 192.168.10.0 255.255.255.0", but I assume this is horribly wrong (it actually messed up the routing on the LAN router until I killed OpenVPN). OpenVPN is not using bridging; it is configured via tun. Other config details from the server:

        server 10.8.0.0 255.255.255.0
        client-config-dir ccd
        route 192.168.10.0 255.255.255.0

    And the client file in ccd is:

        iroute 192.168.10.0 255.255.255.0

    What can be adjusted so that a third client can connect through OpenVPN and access the LAN mentioned earlier?

    Read the article

  • Screen randomly goes blue/black/white

    - by FubsyGamer
    Problem: Randomly, while I'm using the computer, the monitor goes dark grey/almost black, or white with faint grey vertical lines, or blue with black vertical lines. It's as if the computer powers off. People tell me I sign out of Skype, Spotify stops playing, and so on when it happens. When I look at the tower, it doesn't seem to be off at all; nothing changes, the fans are spinning, the lights are on, etc. If you were only looking at the tower, you'd never know there was a problem. The only way I can get the screen to come back is to push and hold the power button, turning the machine off and then back on. This never happens while I'm playing video games; I've done 5-6 hour sessions of League of Legends and it doesn't do anything. When I'm just browsing the web, reading email, checking Reddit, etc., it happens all the time. It can happen multiple times in a session, and it usually takes only about 5 minutes from the time I start browsing to when the computer crashes. This started happening after I moved to a new apartment (this has to be relevant somehow; it was not happening where I lived before). There is nothing in the crash logs or event logs.
    System specs: i5 2500k CPU, AMD Radeon 6800 GPU, Gigabyte Z68A-D3H-B3 motherboard, WD VelociRaptor 1 TB HDD. (The original post also linked screenshots of Device Manager and the About screen, not included here.)
    Things I have tried: I was getting a WMI error in my event logs, but I fixed it using Microsoft's fix, KB 2545227. I was using Windows 8; I wiped the HDD and downgraded to Windows 7 64-bit. I took out the video card and used a can of air to clean out the video card, all fans, and the inside of the computer in general; I made sure all of the video card pins were fine, then reconnected it. I tried to update my motherboard BIOS, but anything I downloaded from Gigabyte was only for 32-bit machines, not 64, and I don't even know how to tell what BIOS revision my motherboard is on right now. I am using a power strip, and everything else connected to it works just fine. If I re-seat the monitor cable while this is happening, nothing changes.
    Please, help me. I've been battling this for several weeks now, and it's so frustrating it makes me not even want to use the computer.

    Read the article

  • Getting old bluetooth headset out of standby on Windows 7

    - by luagh45
    I think this is the problem, and if not, I'm open to suggestions. I have an old Jabra BT200 that I used to use on my phone. When a phone call was coming in it would beep using its own noises (meaning the phone never rang inside the headset) and then I could push the 'answer/hang up' button and sound and mic would start working. I have now paired it with Windows 7, and it looks good. Under the playback menu I have 'Bluetooth Hands free Audio / Jabra BT200 (Mono Audio) / Ready', and under the recording menu I have 'Bluetooth Audio Input Device / Jabra BT200 (Mono Audio) / Ready'. However when I try to test the speakers Windows sends a sound, but I never hear it, and when I talk in the mic, Windows never hears me. If I right click either the Bluetooth mic or speakers there's an option to 'Connect', but it's grayed out and I cannot click it. As the final piece of knowledge I have, my headset blinks once every 3 seconds when it's in standby and I can't get that to change. If everything was working it should blink once every second at which point I think all of my problems would be fixed. Hence my issue: I can't seem to get my headset out of standby. On my headset I've tried sending it test noises and then pressing the 'Answer' button, but still nothing. The headset beeps when I press it, so it works, it just doesn't ever come out of standby. Is there maybe some way to trick my headset into thinking it's getting a phone call from my computer?

    Read the article

  • Simple, centralized user management on a small LAN - NIS or LDAP?

    - by einpoklum
    I'm setting up a small LAN for my team. It will, for all intents and purposes, not be connected to any external networks. I would like it to have centralized control of user accounts (at least, I think I'd like that; I'm also considering using Puppet, so theoretically I could just push /etc/passwd changes, or something). The number of machines is fixed, but not very small. Mostly they're 'attached' to a single user, but sometimes people work remotely on someone else's box, and there are a couple of servers. I've read this question, but my scenario is much simpler (even simpler than in that question) and I'd like to do something (relatively) quick, with not much hassle, but not a dirty, totally-insecure hack. Is NIS relevant for my scenario? If not, what's the most hassle-free way to set up LDAP (or LDAP+Kerberos) to achieve the same? Notes: I have no experience with setting up either NIS or LDAP. We use Debian-flavored Linux distributions, mainly Kubuntu 12.04 (not my choice, but that's the way it is).

    Read the article

  • Getting a "CPU over temperature error", but temperatures seem to be normal

    - by Luis Parker
    I built a PC with the following parts: CPU: i5-2500; motherboard: Asus P8H67-M EVO rev 3.0; RAM: G.Skill Ripjaws Series 8 GB (2 x 4 GB) DDR3-1333; video: GTX 560 Ti 1 GB. I used a crappy (but functioning) old case and a 500 W Powercooler supply. The rig also includes 3 HDDs and a DVD-RW. Whenever I push the system even mildly it resets right away (it looks more like a power off/power on) and gives me a "CPU over temperature error". However, the BIOS always reports under 65 ºC for the CPU at the moment of the reset, and Core Temp and RealTemp agree. (A RealTemp log covering the time from launching BF3 until the reset, just over a minute, was attached to the original post but is not included here.) Just to be sure, I've checked the CPU cooler and re-applied the thermal paste twice, but nothing changed. I'm not overclocking at all. What am I missing here? Could the old power supply be generating this error? Maybe the motherboard isn't reporting temperatures correctly? I don't have a clue how to troubleshoot this; help! Thanks in advance!

    Read the article

  • How do you create an ssh key for the apache user on Redhat?

    - by Josh Smeaton
    As the question asks, how do I generate an SSH key for the user apache on Red Hat? My use case is that we have a Mercurial server running under the apache user. We also have several clustered web servers that we currently have to log on to manually to do pulls. Ideally, we'd like the Mercurial server to push all changes to all the web servers in the cluster. To do this we want to use SSH, as setting up an HTTP Mercurial server on each of the web servers seems like too much work, and far too heavy. What I've tried to do is the following:

        sudo mkdir /var/www/.ssh
        sudo chown -R apache:nobody /var/www/.ssh
        su - apache -c "ssh-keygen -t rsa"

    which fails with:

        This account is currently not available.

    I found the above commands elsewhere, but I can only assume that Red Hat has differences from whatever distro was used for the example. Is there a way I can generate an SSH key for the apache user?
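
    "This account is currently not available." is what the apache account's /sbin/nologin login shell prints, which is why su - apache refuses to run anything. A sketch of working around it by handing su an explicit shell; the key path, ownership and apache's /var/www home directory are stock-RHEL assumptions:

        sudo mkdir -p /var/www/.ssh
        # run ssh-keygen as apache despite the nologin shell
        sudo su -s /bin/bash apache -c "ssh-keygen -t rsa -f /var/www/.ssh/id_rsa -N ''"
        sudo chown -R apache:nobody /var/www/.ssh
        sudo chmod 700 /var/www/.ssh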

    Read the article

  • Postgresql Data Aggregation over WAN Securely

    - by Zach
    Hey guys, I need some advice on how to proceed with this situation. My current scenario is that I have several PostgreSQL boxes (50+) deployed throughout various locations and data centers, and a beefy PostgreSQL box set up at a homebase location. All of the deployed boxes have identical database layouts. I'm looking for a solution that would allow for a few things. I realize some of these options overlap and some might only have mutually exclusive solutions; however, I'm interested to hear your thoughts. :)
    1. Remotely query the deployed boxes and pull the results back to the homebase box for processing.
    2. Nightly (remote) "sync" or dump of the deployed boxes' databases to a master database on the homebase box.
    3. Remotely push a table entry to all of the deployed boxes from the homebase box.
    4. Ensure security of the data in transit, and of the remotely deployed boxes.
    Up to this point I've been floating on a homebrew multithreaded Python/Perl system that SSHes into these boxes remotely (they are ACLed off to the homebase server) and pulls (or pushes) the raw query results over the SSH connection. I haven't even touched #2 (remote syncing), as I know that would get nasty really quickly. I'm interested in any ideas for a more elegant solution that can scale up and stick to my FreeBSD/Linux environment.
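
    As a baseline for items 1 and 2, the homebrew approach described above reduces to plain psql/pg_dump over the existing SSH ACLs; a sketch with placeholder host, database and query (the real version would loop over the 50+ boxes and load the results into the homebase database):

        # item 1: pull a query result back from one deployed box
        ssh node01 "psql -U postgres -d appdb -A -t -c 'SELECT count(*) FROM uploads'" > node01.count
        # item 2: nightly dump of one deployed box, to be restored per-node at homebase
        ssh node01 "pg_dump -U postgres -Fc appdb" > node01.dump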

    Read the article

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: six major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on its own subnet, all connecting back to the datacenter through a 3 Mbps WAN; an ERP server running in the datacenter, accessed by clients at all sites.
    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients; it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an auto-update feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not patches but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites do? Example: client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.
    1. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A).
    2. DNS at site A returns 192.1.1.5.
    3. Client 1 pulls the update from 192.1.1.5.
    4. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B).
    5. DNS at site B returns 192.1.2.5.
    6. Client 2 pulls the update from 192.1.2.5.
    Excuse the poor explanation; I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
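
    One piece of Windows DNS behaviour that may be relevant: if ERPUPDATE has one A record per site (all of them can live in the same AD-integrated zone), the DNS server's default subnet prioritization should return the record closest to the querying client's own subnet first. A hypothetical sketch of registering the records with dnscmd, where the zone name is a placeholder and the addresses are the ones from the example; whether prioritization picks the right record depends on the site subnets lining up with those addresses:

        dnscmd . /recordadd company.local ERPUPDATE A 192.1.1.5
        dnscmd . /recordadd company.local ERPUPDATE A 192.1.2.5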

    Read the article

  • Allow spoofing when using tun

    - by Johnny
    I have a working OpenVPN setup with a server and a number of clients. How would I go about allowing IP spoofing through the OpenVPN server (to demonstrate security concepts)? A normal ping from client to server goes through all right:

        root@client: hping3 10.8.0.1
        HPING 10.8.0.1 (tun0 10.8.0.1): NO FLAGS are set, 40 headers + 0 data bytes
        len=40 ip=10.8.0.1 ttl=64 DF id=0 sport=0 flags=RA seq=0 win=0 rtt=124.7 ms

        root@server:/etc/openvpn# tcpdump -n -i tun0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
        10:17:51.734167 IP 10.8.0.6.2146 > 10.8.0.1.0: Flags [], win 512, length 0

    But when spoofing a packet, it does not arrive at the OpenVPN server:

        root@client: hping3 -a 10.0.8.120 10.8.0.1
        HPING 10.8.0.1 (tun0 10.8.0.1): NO FLAGS are set, 40 headers + 0 data bytes

        root@server:/etc/openvpn# tcpdump -n -i tun0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes

    My current config files:

    server.conf

        local X.Y.Z.P
        port 80
        proto tcp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1 bypass-dhcp"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        persist-local-ip
        status openvpn-status.log
        verb 3

    client.conf

        client
        dev tun
        proto tcp
        remote MYHOST..amazonaws.com 80
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3

    Read the article

  • Possible for linux bridge to intercept traffic?

    - by A G
    I have a Linux machine set up as a bridge between a client and a server:

        brctl addbr br0
        brctl addif br0 eth1
        brctl addif br0 eth2
        ifconfig eth1 0.0.0.0
        ifconfig eth2 0.0.0.0
        ip link set br0 up

    I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application? I have done some research and it looks like it could be done using ebtables and iptables. Here is the rest of my setup:

        # tell ebtables to pass this traffic up to IP for processing;
        # DROP on the broute table should do this
        ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP

        # tell iptables to forward this traffic to my app listening on port 8080
        iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1
        iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1

        # once the flows are marked, have them delivered locally via the loopback interface
        ip rule add fwmark 1/1 table 1
        ip route add local 0.0.0.0/0 dev lo table 1

        # enable IP packet forwarding
        echo 1 > /proc/sys/net/ipv4/ip_forward

    However, nothing is coming into my application. Am I missing anything? My understanding is that the DROP target on the broute BROUTING chain will push the traffic up to be processed by iptables. Secondly, are there any other alternatives I should investigate?
    Edit: iptables sees the packet in the nat PREROUTING chain, but it looks like it is dropped after that; the INPUT chain (in either mangle or filter) doesn't see the packet.

    Read the article

  • What are my options in replacing the noisy fan in my Linksys Cisco SRW2008P managed GigE switch?

    - by Fred Sobotka
    My first managed GigE switch, the Linksys SRW2008, was a dream until it started randomly chattering on various ports. That started while I was on the road all the time, which made it take forever to diagnose, but that's a different problem. When I finally determined that the switch was bad, it was still covered under warranty by Linksys/Cisco, so I opened an RMA ticket and returned it. Unfortunately, Linksys/Cisco "upgraded" my replacement switch to an SRW2008P, which has Power over Ethernet features I never planned on using. That by itself wasn't so bad, but my guess is that the inclusion of PoE functions in this model required a tiny, super-loud internal fan to keep everything cool. This wasn't something I wanted or asked for, but now that I am stuck with it, I am investigating options for replacing that little internal fan with something far quieter. For example, if I attach a larger fan to the outside of the chassis, I think it could push enough air to replace the stock fan that is currently there. Any advice on carrying this out? I have no interest in melting my switch due to insufficient ventilation.

    Read the article

  • Window too big to fit the screen!

    - by syockit
    I'm using Windows 7 on an 8.9" monitor with a 1280x768 screen resolution. Using the might of arithmetic, I can determine that my DPI (actually PPI) should be 167. Win7 is really helpful in that it doesn't have to restart to apply new DPI settings, unlike its predecessors (though I'd rather it applied them straight away). The problem with small monitors in Windows is that when you come across a window too big to fit the screen, you can't move the title bar far above it. In the X window managers I used in the past, you could alt-drag the window anywhere you wanted, but in Windows, even if you press Alt+Space and select Move, it will automatically push the window back until the title bar is visible. I'm looking for a solution that either:
    1. allows me to move a window freely without regard to title-bar visibility, or
    2. attaches a scrollbar to an existing window, or
    3. (EDIT) creates virtual desktops that allow me to span a window over 2 desktops, or
    4. (EDIT 2) allows me to set a larger virtual resolution, then pan and scan.
    EDIT 3: I found some programs that might do some of the above: 1) AltDrag allows me to drag and resize using Alt and the left/right mouse buttons. Neat! Best solution so far. 2) GiMeSpace Desktop Extender is supposed to allow me to scroll the desktop; it didn't work. The other, newer version, GiMeSpace Ultimate Taskbar, worked, but it destroys my Superbar, replacing it with its map.

    Read the article

  • Is there a learnable filter in Thunderbird for non-spam messages (as in Opera Mail)?

    - by Debilski
    One feature I like very much about Opera Mail is that you can have learnable filters for any purpose. So not only can you filter spam messages, but also messages that your friends sent you, or info mails from web platforms, without having to enter each and every mail address you want to filter. It actually works quite satisfyingly, and you can combine it with string filtering too. It makes a few mistakes in the beginning, but then improves quickly after you have removed some of the false positives. However, there are a couple of drawbacks with the Opera Mail module. The filters are only 'virtual', so with IMAP there is no easy way of mirroring the filtered structure back to the server, and when I'm using webmail I'll see the whole unfiltered mess in the inbox folder. Opera is not using the OS X address book, and neither does it use LDAP (which is not too important for me at the moment). So I'm not specifically looking for a Thunderbird solution here; a way to fix things in Opera or Apple Mail would do as well, or some other email program I don't even know about yet. (To be clear, I'd like to have: OS X Address Book integration, learnable filters for any type of filtering, and the ability to push the filtered folder structure to the IMAP server.) But I thought that if it is possible, then most probably there would be an add-on available for Thunderbird. Any ideas?

    Read the article
