Search Results

Search found 4650 results on 186 pages for 'perfect hash'.

Page 138/186

  • Distributing processing for an application that wasn't designed with that in mind

    - by Tim
    We've got an application at work that just sits and does a whole bunch of iterative processing on some data files to perform some simulations. This is done by an "old" Win32 application that isn't multi-processor aware, so new(ish) computers and workstations are mostly sitting idle running this application. However, since it's installed by a typical Windows InstallShield installer, I can't seem to install and run multiple copies of the application. The work can be split up manually before processing, enabling it to be distributed across multiple machines, but we still can't take advantage of multi-core CPUs. The results can be joined back together after processing to make a complete simulation. Is there a product out there that would let me "compartmentalize" an installation (or 4) so I can take advantage of a multi-core CPU? I had thought of using MS SoftGrid, but I believe that still depends on a remote server to do the heavy lifting (though please correct me if I'm wrong). Furthermore, is there a way I can distribute the workload off the one machine? So an input could be split into 50 chunks, handed out to 50 machines, and worked on? All without really changing the initial application? In a perfect world, I'd get the application to take advantage of a DesktopGrid (BOINC), but like most "mission critical corporate applications", the need is there, but the money is not. Thank you in advance (and sorry if this isn't appropriate for serverfault).
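
    If the simulator can run from a plain copied folder once it has been installed (a big "if" with InstallShield-packaged apps, and exactly the uncertainty in the question), one low-tech sketch is to clone the install directory and pin one instance per core; every path and executable name below is made up for illustration:

      rem copy the installed tree into independent working folders
      xcopy /E /I "C:\Program Files\SimApp" C:\SimWork\run1
      xcopy /E /I "C:\Program Files\SimApp" C:\SimWork\run2
      rem launch one instance per core, pinned with an affinity mask, each on its own pre-split input chunk
      start "sim1" /D C:\SimWork\run1 /affinity 1 SimApp.exe chunk1.dat
      start "sim2" /D C:\SimWork\run2 /affinity 2 SimApp.exe chunk2.dat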

    Read the article

  • How can one make vim change terminal colors?

    - by amn
    I am using command-line vim running from an xterm (which runs sh). I have color in vim according to a color scheme I like. The problem is, as usual with 256-color terminals and truecolor color schemes, the colors are wrong. Now, I know I can do a gazillion things to fix this, including installing gvim, but I like my terminal. In fact, using xrdb [-merge] on an .Xresources file, I now actually have xterm override the color values, and the theme now looks perfect. Since I may be switching to another theme, I need some workflow to have vim actually do what xrdb does - reset the terminal color palette. Because right now I have to reset the color values with xrdb ... first, then launch another xterm to actually use those values, then launch vim from that newly opened xterm to get the exact colors. The way I understand it, the vim color scheme, just like any other terminal application, uses colors by referencing their IDs, and X resources set the values themselves. I think I saw somewhere on the Internet that terminal control character sequences can reset the actual color values - in fact, I am sure they can, since I managed to set my terminal background color at runtime. How would I make vim execute these sequences to match the values for the color scheme? And is there any reference to these control sequences, as part of any standard?
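
    For reference, xterm's control sequences are documented in its ctlseqs.txt ("Operating System Commands"); OSC 4 changes individual palette entries at runtime, so a color scheme can ship a small script like the following (the RGB values here are arbitrary examples):

      # set palette slot 1 (red) and slot 4 (blue); \033]4;<index>;rgb:RR/GG/BB\007 is OSC 4
      printf '\033]4;1;rgb:dc/32/2f\007'
      printf '\033]4;4;rgb:26/8b/d2\007'
      # the same sequence can be emitted from inside vim, e.g.
      # :silent !printf '\033]4;1;rgb:dc/32/2f\007'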

    Read the article

  • RTorrent stops my torrents, crashes, and I have to manually re-add torrents and start them. How can I stop this cycle of doom?

    - by meder
    I cannot use Transmission, which is the best torrent client, because it's banned from one of the trackers I use, so I am forced to use rtorrent. Normally I am all for command-line programs, however rtorrent (0.8.6/0.12.6) is simply frustrating. It is not intuitive, IMO. I have 400 MB left on the HD and that's more than enough to download this 200 MB AVI. rtorrent stops the download, though. It says [CLOSED] near the torrent. I do Ctrl-R and that invokes the local hash check, and after that's done rtorrent simply dies (wtf?). Afterwards, it gives me rtorrent: TrackerManager::send_later() m_control->set() == DownloadInfo::STOPPED. So that leads me to open rtorrent again, hit ENTER, type /home/meder/file.avi.torrent, press the down arrow, and hit Ctrl-S. I am looking for multiple things: 1) How can I tell rtorrent not to worry about disk space? Again, it stops the torrent if my HD only has 400 MB free when the torrent I'm downloading is 200 MB (there are no other torrents). 2) Why does Ctrl-R fail so hard? Why does it cause rtorrent to crash? 3) If #2 is not solvable, can someone provide an easy way to add a torrent and start it - a more efficient method than typing the torrent name, hitting the down arrow, and Ctrl-S?
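
    On the disk-space point: as far as I can tell, rtorrent stops torrents on low disk space via a low_diskspace schedule, and the example config shipped with many packages includes one. A sketch of the relevant ~/.rtorrent.rc line (the 100M threshold is the commonly shipped default, so treat it as an assumption and check your own config):

      # the shipped example config often contains:
      #   schedule = low_diskspace,5,60,close_low_diskspace=100M
      # lower the threshold (or comment the line out) so 400 MB free is plenty for a 200 MB download:
      schedule = low_diskspace,5,60,close_low_diskspace=50M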

    Read the article

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: The private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net; however, all of the mails are signed with the domain keys of giltgroupe.com. mailed-by giltgroupe.bounce.ed10.net signed-by giltgroupe.com My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I init the dk-filter service with the following arguments: DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf" then dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments as follows: DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf" then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows: *:/var/dk-filter/y.com/mail I managed to do the same thing with DKIM and it works perfectly; however, DK doesn't seem to work. I don't have any problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on making it possible to sign emails from multiple domains with one specific chosen domain's key?

    Read the article

  • The WiFi is working fine but there is no internet connection in Windows 8

    - by Ali
    I'm currently having a problem with my MSI GT70 laptop. My laptop is running Windows 8, and yesterday it requested a restart to update. After the restart and the update I tried to surf the net through Google Chrome; the WiFi connection is perfect, but the page I tried to access does not load at all, and after a while it shows a failed-to-load-page error. I disabled and re-enabled the WiFi chip through Device Manager, but still no internet connection. I uninstalled and reinstalled the drivers - still no internet. I updated the WiFi driver - still no internet. I even went into the WiFi configuration and tried changing the DNS and then resetting it back to automatic - still no luck. I'm really lost and don't know what to do; I don't want to go deep and play with the laptop's DNA (aka the registry) and screw things up. I really appreciate any help in this matter. Thanks in advance. Note: I tried to access many pages, with no luck. I even tried Firefox, Opera, and even IE - still no luck. The internet is working fine on my tablet and cellphone; it's only the laptop.
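
    Not a guaranteed fix, but before touching the registry it is worth running the standard network-stack resets from an elevated Command Prompt; these only clear DHCP leases, the DNS cache, and Winsock/TCP-IP state:

      ipconfig /release
      ipconfig /renew
      ipconfig /flushdns
      netsh winsock reset
      netsh int ip reset
      rem reboot afterwards for the winsock/ip resets to take effect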

    Read the article

  • Migrate openldap users and groups

    - by user53864
    I have an OpenLDAP server running on one of my Ubuntu 8.10 servers. I used the command line only for the OpenLDAP installation and some basic configuration; everything else I configure with the Webmin GUI tool. I'm trying to migrate to Ubuntu 10.04, and I was able to migrate all other services, applications and databases, but not LDAP. I'm an LDAP beginner: I installed the OpenLDAP server and client on the Ubuntu 10.04 server using the link, and used the following commands to export and import LDAP users and groups - to export from the 8.10 server: slapcat > ldap.ldif; to import to the 10.04 server: stop slapd, run slapadd -l ldap.ldif, and start slapd again. Then I accessed Webmin, checked the LDAP users and groups, and I could see all the users and groups of my old LDAP server. Whenever I create an LDAP user from Webmin (in 8.10 or 10.04), a Unix user is also created with a home directory under /home. But the users imported into 10.04 from 8.10 are not present as Unix users (/etc/passwd). How can I make the LDAP users available as Unix users, and is there a better way to export and import? I also wanted to check from the terminal that the LDAP users' passwords were exported properly, but I don't know how to access the LDAP users that are not available as Unix users. On 8.10 I just use su - ldapuser, and that does not work on 10.04 because Unix users are not created for the exported LDAP users. If everything works fine, then CVS will work too, as it uses LDAP authentication. Can anybody help me?
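
    The imported entries only become Unix accounts once NSS on the 10.04 box is told to consult LDAP; a hedged sketch (the package choice, bind DN and user name are placeholders for whatever your directory actually uses):

      # hook NSS up to LDAP (libnss-ldap, or the newer libnss-ldapd, plus libpam-ldap for logins)
      sudo apt-get install libnss-ldap libpam-ldap
      # /etc/nsswitch.conf should then contain lines like:
      #   passwd: files ldap
      #   group:  files ldap
      #   shadow: files ldap
      # verify an imported user is now visible as a Unix account:
      getent passwd someldapuser
      # and check that the password hash survived the slapcat/slapadd round trip:
      ldapsearch -x -D cn=admin,dc=example,dc=com -W "(uid=someldapuser)" userPassword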

    Read the article

  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of them all. So be easy on me (and very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and that I need to make sure files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This site is a shared site hosted by Network Solutions (Unix), so my access options are limited. I can edit .htaccess files and php.ini, but that's about all I have access to. It appears I can't even log on via shell. ETA: 11/11/2010 Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
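
    For context, the 640-versus-644 difference is just the FTP daemon's umask (027 instead of 022). Which daemon Network Solutions runs isn't stated, so the directives below are only illustrative for the two most common ones, and on a locked-down shared host a periodic chmod is often the only workable option:

      # vsftpd (vsftpd.conf)
      local_umask=022
      # ProFTPD (proftpd.conf)
      Umask 022
      # fallback on a shared host: a cron job that re-opens the uploaded files
      chmod 644 /path/to/uploads/*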

    Read the article

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following: IPS="list of IPs" iptables --new-chain webtest # Accept all established connections iptables -A INPUT --protocol tcp --dport 80 --jump webtest iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT for ip in $IPS; do iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT done iptables -A webtest --jump DROP However, looking at my Apache logs I notice various entries in access_log, e.g. 221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/ 4.0 (compatible; MSIE 6.0; Windows NT 5.0)" 201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-" 207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)" How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these three examples slipping through?
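
    On the first question, one hedged way to make the intent harder to get wrong is a default-deny INPUT policy with a single state rule, instead of state rules scattered across chains (this sketch assumes nothing but HTTP and SSH should reach the box, and the admin IP is a placeholder):

      IPS="list of IPs"
      iptables -F INPUT                       # start clean; re-add your SSH/management rule first!
      iptables -P INPUT DROP                  # explicit default-deny
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -p tcp --dport 22 -s your.admin.ip.here -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -N webtest
      for ip in $IPS; do iptables -A webtest -s "$ip" -j ACCEPT; done
      iptables -A webtest -j DROP
      iptables -A INPUT -p tcp --dport 80 -j webtest

    On the second question, one plausible explanation is that the rules were loaded (or later flushed and reloaded) while those clients already had connections open, so the ESTABLISHED,RELATED accept keeps matching them; iptables -L -v -n --line-numbers shows which rule is accumulating the packet counters.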

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say we have 2 GB of server memory. Now there are two extreme cases: Case 1: If we allocate a 64 KB buffer for each connection (only to receive the incoming request), then 32,768 connections will consume all 2 GB of memory. This leaves no memory to queue or process incoming requests from those connections. Case 2: On the other hand, let's say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time; those buffers pile up in the server and eventually occupy most of the server's memory, leaving no memory for new connections thereafter. This is the real dilemma in server design that has been bugging me badly for the last many days. If I can decide on the maximum size of the request buffer per connection and the maximum number of requests to allow in the queue per connection, then, based on available server memory, that automatically sets a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data someone can share with me, please?
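
    As a worked example of the trade-off (the 2 GB and 64 KB figures are from the question; the queue depth and request size are assumptions): capping each connection at a 64 KB receive buffer plus four queued 64 KB requests gives roughly 6,500 concurrent connections, and shrinking either knob raises the ceiling proportionally.

      # per-connection budget = receive buffer + (queue depth x request size)
      MEM_KB=$((2 * 1024 * 1024))       # 2 GB total
      RECV_KB=64                        # receive buffer per connection
      QUEUE_DEPTH=4                     # assumed cap on queued requests per connection
      REQ_KB=64                         # assumed max request size
      PER_CONN_KB=$((RECV_KB + QUEUE_DEPTH * REQ_KB))
      echo $((MEM_KB / PER_CONN_KB))    # -> 6553 connections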

    Read the article

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer - install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments - and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too. It sounds like disk imaging might be the ticket, but creating them is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding! I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons - I prefer the "cut and dry" restore point approach. Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect, however, a quick skim through the user forums show more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives would be greatly appreciated - Comodo Time Machine seems relatively new. Thanks for your help!

    Read the article

  • Netgear router keeps disconnecting iPhone

    - by DisgruntledGoat
    My old router (Voyager 2091) packed up so I just got a new router - a Netgear N150 model DGN1000. My laptop connects OK wirelessly, but my iPhone 4S is constantly getting "disconnected" - it has perfect wifi signal and is seemingly connected to the router, but no pages load (it says "server cannot be found"). If I disconnect manually ("forget this network") then reconnect, it works fine again for a random amount of time (usually 10-30 minutes) then I get the same problem again. I've done some searching and this appears to be a known problem - there are dozens of forum posts out there lamenting similar connection problems. The only advice I have seen is to set a specific channel under Wireless Settings on the router CP, although every forum post recommends a different channel! 1, 3, 5, 6, 11... I have tried them all for hours at a time and get the exact same problem. The firmware is up to date. Is there an actual solution for this, or do I need to get a different router just to be able to use my iPhone?

    Read the article

  • Windows is very slow with my new SSD

    - by Maksym H.
    I have an HP ProBook 4520s laptop with a Core i5 M480 @ 2.67 GHz, 4 GB RAM, a 640 GB HDD, and a Radeon HD 6370M 1 GB video card. That would seem like a good stack for work, right? But my HDD crashed after about a year of carrying the laptop around every day. After buying a new SSD (Patriot Memory Torqx II 128 GB SATA II) and installing Windows 8 from scratch, it was amazingly fast. But once I had merely installed the Windows updates, the speed already felt the same as my old HDD, and after installing the other software I need for work it became so slow that my old, lower-spec PC actually works better than my awesome laptop... I checked that TRIM and AHCI mode are turned on. So why is that? I asked Patriot Memory support for help; they suggested sending them ATTO test results - done, sent. Here is the response: "Thank you very much for the attached results. Looking at the results, I can see that your SSD speed is a lot lower than it should be. Can you tell me your system specs?" While waiting for them to check my email, I reinstalled, going from Windows 8 back to Windows 7, and again it was perfect at first, but the story repeats: it becomes slower and slower after every new piece of software I install. Check out some screenshots (sorry for the screenshot with the Russian Task Manager; I hope you will recognize the parameters from your English or other-language Task Manager). So the main issue is that something keeps loading the disk at 100%, and the response time jumps around 1000-3000 ms. Why am I asking about Windows? Because I tried installing Linux Mint (x86) and it just flies - great performance regardless of how many programs I have installed. Only Windows (either 7 or 8) has this problem. So guys, I'd appreciate any ideas about how to fix this, and maybe an answer to the main question: why is it so? Thanks!
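
    For completeness, the quick way to double-check the TRIM setting from an elevated Command Prompt is (0 means TRIM requests are being issued):

      fsutil behavior query DisableDeleteNotify
      rem DisableDeleteNotify = 0  -> TRIM enabled
      rem DisableDeleteNotify = 1  -> TRIM disabled; enable with: fsutil behavior set DisableDeleteNotify 0

    Resource Monitor (resmon.exe, Disk tab) is also the fastest way to see exactly which process is driving the disk to 100% during the spikes.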

    Read the article

  • MS Excel 2010 - Using DSN + 32 bit drivers

    - by Kristiaan
    I need some advice, as I'm running into a problem and so far I have been unable to find a solution. We have a set of reports developed in MS Excel that use DSN files to connect to data sources to retrieve data. These work fine on 32/64-bit systems; however, we are moving to a terminal server environment using Windows 2008 R2 64-bit. The reports fail to run using the DSNs within this environment if we only have the 32-bit drivers installed and configured in the ODBC settings; the minute we install the 64-bit drivers, the software works. Is there a way of getting Excel or the DSN file to NOT use the 64-bit driver, and force it to use the 32-bit driver instead? ANSWERED - but due to my low user score I cannot "answer" my own question... Sadly there is no way to do what I want to do without a lot of very nasty and not 100% perfect registry hacks. If you need to access 32-bit ODBC data sources, the application in question has to be 32-bit. Here is a link to just one forum post I found relating to this type of problem; it appears the only way I could accomplish this is to remove the 64-bit version of Office and install the 32-bit version instead: http://social.msdn.microsoft.com/Forums/en-US/accessdev/thread/5108f337-f06a-4518-afe3-d3c1abd040ef/
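
    For what it's worth, on 64-bit Windows the 32-bit and 64-bit ODBC configurations live side by side, and which set a DSN resolves against is decided by the bitness of the calling application (64-bit Excel only sees 64-bit drivers), which matches the conclusion above. The 32-bit ODBC Administrator, where 32-bit DSNs are managed, is reached via:

      rem 32-bit ODBC Data Source Administrator on 64-bit Windows
      C:\Windows\SysWOW64\odbcad32.exe
      rem confusingly, C:\Windows\System32\odbcad32.exe is the 64-bit one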

    Read the article

  • Load is 0, yet site crawls (sometimes). What gives?

    - by Yegor
    I have a site doing ~1.5-2 million page views per day running on 2 servers: one for MySQL, the other for everything else. The MySQL box has a load of 3; the frontend is usually at 0.0-0.1. Both are dual quad-core with 8 GB RAM, running SAS drives in RAID 5. The CPU is idle the majority of the time and iowait is non-existent. I'm running nginx and memcached, and the site is built on PHP. Half the time everything runs perfectly, while at other times it lags severely, taking 10-15 seconds for a page to load. Page execution time is always super low, but it seems to hang, waiting for something before it actually loads the page. What's even weirder is that it only happens to one file on the site (but it's the most commonly accessed one, the one that actually loads the site's content). Other pages are super fast at all times, even when it takes 15 seconds to load the actual content. I have the nginx_stats plugin installed, and if I monitor it, the lag spikes happen when the write column starts going above 100, which it frequently does... all the way to 500-1000. It does so at totally random times - not when traffic is heavy - it can do this in the middle of the night and work perfectly at 5pm when traffic is at its highest. Any ideas?

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paying user. I am going to use GridFS for storing files. GridFS is a MongoDB mechanism that allows storing large files in the database. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer. I do not like to mess with server configuration; hence, I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services: https://mongohq.com/pricing https://mongomachine.com/pricing https://mongolab.com/about/pricing/ http://cloudcontrol.com/add-ons/mongodb/ And they seem to be OK for common needs, that is, without file storage. But I am going to use GridFS, so size matters. These services seem to scale quite poorly in price. MongoHQ: the largest plan's max storage is 20 GB - seems like very little for GridFS. MongoMachine: flat price, $2.5 per GB; I didn't find a limit. Seems like a good price compared to the others. MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; $8 per GB, quite costly. CloudControl: the largest plan is 20 GB; the custom service starts at €250 plus some unspecified charge per GB. What is your experience with these services? Any downtime? Other possibilities? Edit: added the meaning of GridFS.
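
    If it helps with sizing, GridFS can be exercised from the shell with the stock mongofiles tool before committing to a provider (database and file names below are placeholders); files are split into fixed-size chunks stored in fs.chunks with metadata in fs.files, so storage use is roughly the raw file size plus a small overhead:

      # put a file into GridFS, list the stored files, and pull one back out
      mongofiles --host localhost --db mydb put invoice.pdf
      mongofiles --host localhost --db mydb list
      mongofiles --host localhost --db mydb get invoice.pdf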

    Read the article

  • Dynamically blocking excessive HTTP bandwidth use?

    - by Jeff Atwood
    We were a little surprised to see this on our Cacti graphs for June 4 web traffic: we ran Log Parser on our IIS logs, and it turns out this was a perfect storm of Yahoo and Google bots indexing us - in that 3-hour period, we saw 287k hits from 3 different Google IPs, plus 104k from Yahoo. Ouch. While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E, and we're thinking about putting that in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use, ideally in real time? Perhaps some bit of hardware or open-source software we can put in front of our web servers? We are mostly a Windows shop but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?
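
    One thing that needs no new hardware at all: Yahoo's crawler honors a Crawl-delay directive in robots.txt, and Google's crawl rate can be capped from Webmaster Tools, so the polite bots can be slowed down before the PIX ever gets involved. A robots.txt sketch (the 10-second delay is an arbitrary example):

      # Slurp (Yahoo) and msnbot honor Crawl-delay; Googlebot ignores it and is
      # throttled from Google Webmaster Tools instead
      User-agent: Slurp
      Crawl-delay: 10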

    Read the article

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's a Ubuntu-specific StackExchange website, but I thought that I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show! I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message: (Reading database ... 64726 files and directories currently installed.) Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ... Processing triggers for man-db ... Processing triggers for ufw ... Processing triggers for ureadahead ... Setting up slapd (2.4.25-1.1ubuntu4.1) ... Usage: slappasswd [options] -c format crypt(3) salt format -g generate random password -h hash password scheme -n omit trailing newline -s secret new password -u generate RFC2307 values (default) -v increase verbosity -T file read file for new password Creating initial configuration... Loading the initial configuration from the ldif file () failed with the following error while running slapadd: str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15) slapadd: could not parse entry (line=1051) dpkg: error processing slapd (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: slapd E: Sub-process /usr/bin/dpkg returned an error code (1) After many Google searches and much forum trawling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
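
    Since the failure is in slapd's post-installation configuration step rather than in the Oracle bits, one low-risk thing to try first is purging the half-configured package so its initial config gets regenerated from scratch (a generic dpkg recovery sketch, not an Oracle-specific fix):

      sudo apt-get purge slapd          # clears the broken state under /etc/ldap and /var/lib/ldap
      sudo apt-get install slapd ldap-utils
      # if it fails the same way, re-run the config step and set a simple admin password:
      sudo dpkg-reconfigure slapd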

    Read the article

  • Dynamic DNS Updates with Wireless and Wired interfaces

    - by Phaedrus
    We have offices full of Windows and Mac users who obtain IP addresses from a Windows DHCP server, which in turn updates dynamic DNS entries. We are noticing major inconsistencies with the entries, and have found that the problem occurs more on Macs than on Windows, and even more when users frequently switch from the wired to the wireless adapter, which makes sense, as this sequence occurs: the user enables the wired adapter and registers a proper DNS entry; the user enables the wireless adapter and registers a second proper DNS entry; the user switches off wireless manually and the second entry improperly remains until scavenging. Our help desk folks rely heavily (maybe more than they should) on the dynamic entries as part of their business process. For example, a user submits a help desk ticket, and the staff member expects to be able to remote desktop to their machine by hostname, which is hyperlinked in the help desk ticketing app. We have implemented multiple solutions and band-aids for different symptoms of the problem, such as: using DNS reservations for Macintosh PCs, using DNS scavenging to remove old records, and switching from a Cisco DHCP server to the Windows DHCP server. But no matter what we do, it seems impossible to maintain perfect records. Has anyone encountered this problem before? What is industry best practice? Comments and suggestions are much appreciated, /P

    Read the article

  • Run a script on user connection on the VM host

    - by Scott Chamberlain
    I have a server running a Virtual Desktop managed pool. What I would like to do is, when a user logs in, have a script check the number of available VMs and, if it is below a threshold, add additional VMs to the pool. The script to check the load and add to the pool is not the problem - I have that already figured out: $collectionName = "Test1"; $rdvh = "vmHost.example.com"; $minAvailableVMs = 2; Import-Module RemoteDesktop; $pool = Get-VirtualDesktopCollection -CollectionName $collectionName; $availableVMs = $pool.Size - ($pool.Size * $pool.PercentInUse / 100); $status = Get-VirtualDesktopCollectionJobStatus $collectionName #only add new servers if we are below the threshold and in the JOB_COMPLETED state if($availableVMs -lt $minAvailableVMs -and $status.Status -eq [Microsoft.RemoteDesktopServices.Management.VirtualDesktopCollectionJobStatus]::JOB_COMPLETED) { Add-RDVirtualDesktopToCollection -CollectionName $collectionName -VirtualDesktopAllocation @{"$rdvh" = 1} } The problem I am having is: how do I run the above script on the virtualization host/connection broker/some other server when a user connects? I don't think it would be appropriate to run this as a logon script inside the VM. I think there is a way to do this on the management side, but I don't know the new scripting interface in Server 2012 R2 well enough to know which cmdlets I should look for to schedule this. EDIT: I know System Center is perfect for this, but I do not have a license and was denied when I asked for it to be added to the budget.
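
    One approach that stays on the management side is a scheduled task on the Connection Broker triggered by the session-logon event. The event channel and ID below (TerminalServices-LocalSessionManager/Operational, event 21, "Session logon succeeded") and the script path are my assumptions - verify them in Event Viewer first:

      schtasks /Create /TN "ScaleVDICollection" /RU SYSTEM /SC ONEVENT ^
          /EC "Microsoft-Windows-TerminalServices-LocalSessionManager/Operational" ^
          /MO "*[System[EventID=21]]" ^
          /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Grow-Collection.ps1"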

    Read the article

  • How to set up the firewall to allow internet connection sharing via a WiFi USB stick?

    - by hannanaha
    I have a Windows 8 computer linked to the internet via an ethernet cable (the "Ethernet" network connection). I have attached a D-Link WiFi USB stick to it, and I'm trying to share the main PC's internet connection with my Android phone via a local WiFi network. I am using the following batch file to set up this network: netsh wlan set hostednetwork mode=allow ssid=MyWifiName key=password keyUsage=persistent netsh wlan start hostednetwork After I run this script, I can see a new network connection appear in "Control Panel\Network and Internet\Network Connections" named "Local Area Connection *12", and I can see "MyWifiName" on the Android phone. The device name for this connection on the PC is "Microsoft Hosted Network Virtual Adapter". I also set up the "Ethernet" connection to share Internet with "Local Area Connection *12". However, the Android phone usually doesn't manage to obtain an IP address from the wireless network, and when it does, there still seems to be no connectivity to the internet. When I turn off the Windows Firewall completely, or even just for "Local Area Connection *12", the Android connection is perfect. My questions are: How should I set up the Windows Firewall to allow the phone to connect properly? Is there a specific rule I need to add in the Windows Firewall advanced settings? (Note: the above method worked great in Windows 7, without any specific tinkering with the firewall.) Is it safe to turn off the firewall specifically for "Local Area Connection *12" (the WiFi connection) if the main Ethernet connection is still protected by the firewall? Thanks in advance.
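
    One middle ground between "firewall off" and "no connectivity" is to allow just the services the phone needs from the hosted network - DHCP so it can get an address, and DNS so names resolve - rather than disabling the whole profile. The exact ports are my assumption about what ICS is serving here, so verify against the firewall log:

      netsh advfirewall firewall add rule name="Hosted network DHCP" dir=in action=allow protocol=UDP localport=67,68
      netsh advfirewall firewall add rule name="Hosted network DNS" dir=in action=allow protocol=UDP localport=53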

    Read the article

  • Connecting a laptop to a TV via HDMI

    - by Madmartigan
    I just bought a new Dell XPS 17 laptop (Win7) that only has HDMI output. My last 2 laptops had VGA, which I used to connect to my Sony Bravia 32" TV with no issues, but with HDMI it's been quite a headache. Drivers for the display adapters have been updated to the latest versions: Intel(R) HD Graphics Family and NVIDIA GeForce GT 550M. I went to a store and plugged in to 4 different TVs from different manufacturers. A sales rep and I spent about 30 minutes being baffled by the results (which are the same as on my current TV): extremely buggy behavior in the Nvidia and Windows display/resolution control panels; cannot extend or duplicate displays, can only select one; third and fourth output devices "randomly" detected by the Windows control panel; could not get the screen to fit the output (edges cut off on all sides by about half an inch); resolution and colors less than perfect, with artifacts around text; display "randomly" cuts out; defaults to TV output only when plugged in; cannot change resolution on either device when connected; no audio from the TV. Plugged in to 3 monitors from different manufacturers: defaults to duplicated displays when plugged in, and everything works perfectly. So far, four people have gone through all the settings on the laptop with no luck. I had similar, but not exactly matching, results with a different laptop. I'm using the Sony Bravia currently at home, but in order to get it to work I have to turn on the laptop, wait until the display shows up on it, close the lid, then cycle through each output channel on the TV until I come back around to the HDMI port again - and I still have the symptoms described above. However: once in a while, it just works. Sometimes, seemingly randomly, the output fits the screen perfectly. Sometimes the audio comes through the TV too, but not always. Usually my "Mystify" screen saver will come up with a message that it cannot be displayed due to a limitation of the video card, but sometimes it works fine. These 3 things seem to be independent of each other and don't always happen together. So, is there any way to get the laptop to output correctly to a TV, or is it just not meant to be?

    Read the article

  • Forward secrecy in Nginx (CentOS6)

    - by Anil
    I am trying to enable forward secrecy on CentOS with the nginx web server. What I have tried: I have read some tutorials, and it seems we need the latest versions of nginx and OpenSSL to enable it. So I installed the latest OpenSSL from source: sudo wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz sudo tar -xvzf openssl-1.0.1e.tar.gz cd openssl-1.0.1e sudo ./config --prefix=/usr/local sudo make sudo make install Now OpenSSL supports the elliptic curve ciphers (ECDHE). I tested this with openssl s_server as well, and it worked well. Next, I replaced nginx with the latest version: sudo wget http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.4.2-1.el6.ngx.x86_64.rpm sudo rpm -e nginx sudo rpm -ivh nginx-1.4.2-1.el6.ngx.x86_64.rpm and configured nginx as described in this link: ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+RC4:EDH+aRSA:EECDH:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS; http://baudehlo.wordpress.com/2013/06/24/setting-up-perfect-forward-secrecy-for-nginx-or-stud/ But nginx still does not support the ECDHE ciphers; it only negotiates the DHE ciphers. I tried enabling just the ECDHE ciphers in nginx and it still doesn't work. I am using an up-to-date browser (Chrome 29, which supports these ciphers). Am I missing anything, or is this an issue with CentOS or nginx? I read somewhere that there are ECC patent issues with CentOS - is this causing the problem?
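
    The most likely culprit is that the nginx.org RPM is linked against the system OpenSSL (1.0.0 on CentOS 6), which has no ECDHE support, so the 1.0.1e installed under /usr/local is never used by the running binary. A hedged check-and-rebuild sketch:

      # see which OpenSSL the running binary was built with
      nginx -V 2>&1 | grep -i openssl
      # if it reports the old system OpenSSL, build nginx against the new source tree instead of using the RPM
      wget http://nginx.org/download/nginx-1.4.2.tar.gz
      tar xzf nginx-1.4.2.tar.gz && cd nginx-1.4.2
      ./configure --with-http_ssl_module --with-openssl=/path/to/openssl-1.0.1e    # path to the unpacked 1.0.1e source
      make && sudo make install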

    Read the article

  • "Can't find root filesystem / error mounting /dev/root" when booting to new kernel

    - by salparadise
    I am trying to upgrade my kernel from 2.6.18-274 to 2.6.39 for some wireless card drivers. When I boot into the new kernel I get the "Can't find root filesystem / error mounting /dev/root" message. Googling led me to this page: http://fedoraproject.org/wiki/Common_kernel_problems#Can.27t_find_root_filesystem_.2F_error_mounting_.2Fdev.2Froot From what I am reading, this seems to be an issue with a driver for my SATA controller or hard disk, but I can't find what option I need to add to the kernel. Doing a diff of the old initrd against the new one gives me the following: root-> diff /tmp/kafter /tmp/kbefore 6a7,8 > lib/dm-message.ko > lib/dm-region_hash.ko 8a11 > lib/dm-raid45.ko 13d15 < lib/dm-region-hash.ko 16a19 > lib/dm-mem-cache.ko Do I need any of those? I'm not sure whether I need dm-raid45.ko, as I am not running a RAID. I have the same SATA and IDE options configured for both kernels, so I'm not sure what else to look for; any help is appreciated. Additionally, here is the hardware info: 00:1f.2 IDE interface: Intel Corporation 82801FB/FW (ICH6/ICH6W) SATA Controller (rev 03) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: Hewlett-Packard Company Unknown device 3006 Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 233 I/O ports at 1818 [size=8] I/O ports at 1830 [size=4] I/O ports at 1820 [size=8] I/O ports at 1834 [size=4] I/O ports at 14f0 [size=16] Capabilities: [70] Power Management version 2 root-> smartctl -a /dev/sda ... === START OF INFORMATION SECTION === Device Model: WDC WD5000AADS-00S9B0
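
    If the new initrd is simply missing the SATA driver for that ICH6 controller (ata_piix on hardware of this era), forcing the module into a rebuilt image is a cheap test; the module name and paths are my assumptions for this controller and kernel version:

      # rebuild the initrd for the new kernel, explicitly pulling in the SATA driver
      mkinitrd --with=ata_piix -f /boot/initrd-2.6.39.img 2.6.39
      # confirm the module actually made it into the image
      zcat /boot/initrd-2.6.39.img | cpio -t | grep ata_piix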

    Read the article

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should be able to do authentication, but it should not do encryption. The reason for the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP ProLiant DL360 G7 with a 10 Gbps ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400 MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100 MB/sec while at 100% CPU usage. That's why I want file transfers not to be encrypted. The alternatives I found are not really suitable: rcp: no authentication, forget it. FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a way to force an FTP daemon to encrypt the control channel (for the authentication) but not the data channel (for data transfers). SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption; the best you can do is use the arcfour cipher, but it still uses too much CPU power for my needs. rsync over ssh: same problems as SCP/SFTP. Plain rsync: from the rsyncd documentation: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." So that's a no-go too. Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it supported rsync-style copying/synchronization, e.g. copying only the differences.)
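
    If the MD4 challenge-response is judged acceptable after all (perhaps combined with hosts allow as a second restriction), plain rsyncd is the closest fit - authenticated, unencrypted, and delta-transfer capable; a minimal sketch with placeholder module, user and network names:

      # /etc/rsyncd.conf on the backup server
      [backups]
          path = /srv/backups
          read only = false
          auth users = backupuser
          secrets file = /etc/rsyncd.secrets    # lines of the form "backupuser:password", chmod 600
          hosts allow = 10.0.0.0/8
      # on each of the ~70 source machines, push only the differences:
      rsync -a --password-file=/etc/rsync.pass /data/ backupuser@backupserver::backups/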

    Read the article
