Search Results

Search found 17924 results on 717 pages for 'z order'.


  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes. Background: The server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far: PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions. CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother removing gcc et al. at all. There is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system. OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs / CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!
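
    A minimal sketch of the "root-only directory" idea from (d), restricting the system compilers to root instead of deleting them; the paths and package names below are illustrative guesses, not taken from the poster's server:

      # lock the common toolchain binaries down to root
      chown root:root /usr/bin/gcc /usr/bin/cc /usr/bin/make /usr/bin/ld /usr/bin/as
      chmod 700       /usr/bin/gcc /usr/bin/cc /usr/bin/make /usr/bin/ld /usr/bin/as

      # or drop the packages entirely (Debian/Ubuntu style; RPM systems differ)
      apt-get remove --purge gcc make binutils

    Note this only stops unprivileged users from running the system copies; an attacker can still upload a statically linked binary, which is essentially the CON already listed.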

    Read the article

  • Cannot set up dual monitors correctly in Fedora 15 with KDE

    - by adivasile
    I have 2 monitors: a 24" LCD connected via DVI (primary) and a 19" LCD connected via VGA (secondary). Every time Fedora starts, the second display is always set to clone the first one and they both run at 1280x1024, and I always have to disable the 19" monitor in order for the bigger one to run at 1920x1080. I want to set them up so that my secondary monitor extends the primary one. The problem is that no matter what kind of configuration I choose it has no effect. My secondary monitor remains disabled. I've tried using both the Display manager from KDE and the ATI Control Panel and the behaviour is always the same. The moment I click apply, the screen flickers and nothing changes. I've successfully used the extended setup in Fedora 15 with Gnome 3. I have a RadeonHD 4300 series video card and I'm using the drivers downloaded from the AMD site. This is the output of xrandr -q: Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 1920 x 1920 VGA-0 connected (normal left inverted right x axis y axis) 1280x1024 75.0 60.0 1280x960 60.0 1152x864 75.0 1024x768 75.0 70.1 66.0 60.0 832x624 74.6 800x600 72.2 75.0 60.3 56.2 640x480 75.0 72.8 66.7 59.9 720x400 70.1 DVI-0 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm 1920x1080 60.0*+ 60.0 1680x1050 59.9 1600x900 60.0 1280x1024 75.0 60.0 1280x960 60.0 1152x864 75.0 1280x720 60.0 1152x720 60.0 1024x768 75.0 60.0 832x624 74.6 800x600 75.0 60.3 640x480 75.0 59.9 720x400 70.1 Later edit: The problem seems to come from the ATI drivers. I managed to set up the monitors like I wanted after I uninstalled the drivers. Unfortunately I'm working on an OpenCL project so I had to reinstall them. The moment I did that, all my previous settings were forgotten and I was back to square one.
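
    A minimal xrandr sketch of the extended layout being described, using the output names from the dump above (DVI-0 as primary, VGA-0 extending to its right); whether such a layout survives under the fglrx driver is exactly the open question here:

      xrandr --output DVI-0 --mode 1920x1080 --primary \
             --output VGA-0 --mode 1280x1024 --right-of DVI-0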

    Read the article

  • Launch synergy client on boot in Mac OS X

    - by Herms
    I have a mac as a secondary machine at work. Currently I use synergy on my main machine to share its keyboard and mouse with the mac. I created a launch agent for my user to launch synergy when I log in, and that's working. However, this means I still have to pull out the mac's keyboard and mouse in order to log in. I tried making a user daemon so that it would launch on boot, but I get the following errors in the console: LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Warning>: 3891612: (CGSLookupServerRootPort) Untrusted apps are not allowed to connect to or launch Window Server before login. LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : On-demand launch of the Window Server is allowed for root user only. LaunchSynergy[52] Tue Jul 14 12:41:44 testmacpro.local synergyc[52] <Error>: kCGErrorRangeCheck : Set a breakpoint at CGErrorBreakpoint() to catch errors as they are returned LaunchSynergy[52] _RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL. Is there a way to get this to work? Looks like the Mac's security doesn't want to allow anything to take control of the window while at the login screen. I can understand that, but I'd like a way to override it, as it would make my life a lot easier.
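
    The approach usually suggested for running something at the login screen is a LaunchAgent restricted to the LoginWindow session type, rather than a LaunchDaemon, since only that context is allowed to talk to the WindowServer before login. A hedged sketch, assuming synergyc is in /usr/local/bin and saving the file as /Library/LaunchAgents/com.example.synergyc.plist (both names are placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>Label</key>                    <string>com.example.synergyc</string>
          <key>ProgramArguments</key>
          <array>
              <string>/usr/local/bin/synergyc</string>
              <string>--no-daemon</string>
              <string>server-hostname</string>
          </array>
          <key>KeepAlive</key>                <true/>
          <key>LimitLoadToSessionType</key>   <string>LoginWindow</string>
      </dict>
      </plist>

    It would then be loaded with launchctl load -S LoginWindow /Library/LaunchAgents/com.example.synergyc.plist, or picked up at the next reboot.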

    Read the article

  • Why won't 2GB of RAM across 3 of 4 slots work on my motherboard (max 2GB)?

    - by Andrew
    My desktop is an old home-built machine circa 200[5-6] running Ubuntu 11.10 (but this is not relevant because I'm reading available RAM from the BIOS loading screen), with an ASUS P5GPL motherboard, not X or X-SE - it has four slots. I'm mainly a laptop person, but keep this around for running a server from if needed, backing up to, seeding Ubuntu to people from, etc… It has four (DDR) RAM slots, two black and two blue, in the order black-blue-black-blue (I will call them D, C, B, and A, respectively) with some space in the middle. The blue ones are the closest to the processor. I used to have two 512MB chips in the two blue slots. I just got a 1GB chip and plugged it into one of the black slots; my system didn't recognize it. I messed around and discovered that it will not recognize chips in many positions, and I couldn't get it to recognize all three of these chips at the same time. In particular, if I put the 512MB chips in A and B it would only use 1, but AC, AD, BD, and CD worked. I didn't try BC, I believe. Only some of these continue to work when I switch the 1GB chip into one of these positions. Can I have some advice as to how to position these chips to get all 2GB used? How about if I get another 1GB chip - where should I put the two? And what about the RAM maximum Crucial says? Can I go above 2GB, if I get another 1GB chip? Right now, I have a 512MB chip in A and the 1GB chip in C. EDIT: I read some other posts and tried dmidecode in Ubuntu to clarify the max memory question; that wasn't a major part anyway. It says my max memory module size is 1024M (OK) and my max memory size is 4096M (which doesn't agree with Crucial OR the Asus web site; maybe it will only work while in Linux and the BIOS won't OK it?).
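
    For reference, a small sketch of the dmidecode query mentioned in the edit, which reports the board's claimed maximum alongside what each physical slot currently holds (requires root; field names can vary slightly by BIOS):

      sudo dmidecode -t memory | grep -iE 'maximum capacity|size|locator'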

    Read the article

  • How to install QEMU on Damn Small Linux?

    - by user2934303
    I'm trying to install QEMU on a Damn Small Linux installation in order to emulate Pentium features on a 486 computer. Though DSL was discontinued, it's the only Linux that runs reasonably on the 486 processor; most recent kernels don't even boot on the 486 architecture. I tried Tiny Core Linux, but it doesn't work on a 486, so I seem to have no escape here. The most recent image of DSL is from 2008, it uses kernel 2.4.x, and I couldn't find a way to compile QEMU on it. Firstly, it lacks several compile tools needed for compiling it, and it has several dependency problems. I tried some pre-compiled packages, but the only one that worked was a QEMU 5.2 RPM package (it didn't have dependency problems), and it was way too old: it wasn't capable of running Windows yet, it just gave me the option of emulating code, not a full OS such as Windows, and it also didn't give me the option to choose which architecture I wanted it to emulate (the -cpu option). Can anyone help me with this? Also, if someone can think of some alternative to it, I'd be grateful. Thanks.
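
    For context, the -cpu switch being missed is how newer QEMU builds select the guest processor model; a sketch of the kind of invocation being aimed for (the image name and memory size are placeholders, and on recent builds the i386 system emulator is called qemu-system-i386 rather than plain qemu):

      qemu-system-i386 -cpu pentium -m 64 -hda win95.img

    Whether any build new enough to offer -cpu can be made to run on a 2.4-kernel, 486-only host is the part of the question that remains open.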

    Read the article

  • Apache SSL is not working

    - by user1703321
    I have configured virtual hosts on ports 80 and 443 (CentOS 5.6 and Apache 2.2.3); the following is a sample, and I have written the configuration in the same order: Listen 80 Listen 443 NameVirtualHost *:80 NameVirtualHost *:443 <VirtualHost *:80> ServerAdmin [email protected] ServerName www.abc.be ServerAlias abc.be . . </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] ServerName www.abc.fr ServerAlias abc.fr . . </VirtualHost> Then I have defined 443: <VirtualHost *:443> ServerAdmin [email protected] ServerName www.abc.be ServerAlias abc.be . . SSLEngine on SSLCertificateFile /etc/ssl/private/abc.be.crt SSLCertificateKeyFile /etc/ssl/private/abc.be.key SSLCertificateChainFile /etc/ssl/private/gd_bundle_be.crt </VirtualHost> <VirtualHost *:443> ServerAdmin [email protected] ServerName www.abc.fr ServerAlias abc.fr . . SSLEngine on SSLCertificateFile /etc/ssl/private/abc.fr.crt SSLCertificateKeyFile /etc/ssl/private/abc.fr.key SSLCertificateChainFile /etc/ssl/private/gd_bundle_fr.crt </VirtualHost> The first SSL certificate for abc.be is working fine, but the 2nd domain abc.fr still loads the first SSL certificate. The following is the output of apachectl -S: VirtualHost configuration: wildcard NameVirtualHosts and _default_ servers: *:443 is a NameVirtualHost default server www.abc.be (/etc/httpd/conf/httpd.conf:1071) port 443 namevhost www.abc.fr (/etc/httpd/conf/httpd.conf:1071) Thanks
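
    One observation worth adding (an aside, not from the post): name-based virtual hosts over HTTPS need SNI, and the stock Apache 2.2.3 / OpenSSL shipped with CentOS 5 generally predates SNI support, so the first *:443 vhost ends up answering for every HTTPS name. A hedged sketch of the classic workaround, giving each certificate its own IP address (the addresses are placeholders):

      <VirtualHost 192.0.2.10:443>
          ServerName www.abc.be
          SSLEngine on
          SSLCertificateFile      /etc/ssl/private/abc.be.crt
          SSLCertificateKeyFile   /etc/ssl/private/abc.be.key
          SSLCertificateChainFile /etc/ssl/private/gd_bundle_be.crt
      </VirtualHost>

      <VirtualHost 192.0.2.11:443>
          ServerName www.abc.fr
          SSLEngine on
          SSLCertificateFile      /etc/ssl/private/abc.fr.crt
          SSLCertificateKeyFile   /etc/ssl/private/abc.fr.key
          SSLCertificateChainFile /etc/ssl/private/gd_bundle_fr.crt
      </VirtualHost>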

    Read the article

  • Apache permission errors

    - by Wilduck
    I'm trying to set up Apache on an Arch Linux box as a testing environment (I'm only using localhost, not trying to serve anything to the greater web). When setting up Django with mod_wsgi, it recommended that I set up a WSGIScriptAlias from / to /usr/local/django/mysite/apache/django.wsgi. I've done this, as well as added the /usr/.../apache directory to my httpd.conf. When I try to access http://localhost I get a 403 Forbidden error. I have no idea why this is happening. Things I've tried so far: 1) chown -R http .../apache 2) chmod -R 777 .../apache 3) using a simple Alias directive to host a static file from that directory. None of these have worked. I'm at a loss for what I'm doing wrong. Below is a relevant excerpt from my httpd.conf: Alias / /usr/local/django/mysite/apache <Directory "/usr/local/django/mysite/apache"> Order deny,allow Allow from all </Directory> So my question is: what am I doing wrong?
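
    For comparison, a minimal mod_wsgi sketch of the block the Django docs describe, reusing the paths from the post; note it relies on WSGIScriptAlias rather than the plain Alias shown in the excerpt, which maps the URL onto the script instead of exposing the directory as static files:

      WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi

      <Directory "/usr/local/django/mysite/apache">
          Order deny,allow
          Allow from all
      </Directory>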

    Read the article

  • nginx inserting extra characters in Multi-status reply body

    - by user125011
    Here's the setup. I've got one server running Apache/PHP hosting ownCloud. Among other things, I'm using it to do CardDAV contact syncing. In order to make things work with my domain I have an nginx server running on the frontend as a reverse proxy to the ownCloud server. My nginx config is as follows: server { listen 80; server_name cloud.mydomain.com; location / { proxy_set_header X-Forwarded-Host cloud.mydomain.com; proxy_set_header X-Forwarded-Proto http; proxy_set_header X-Forwarded-For $remote_addr; client_max_body_size 0; proxy_redirect off; proxy_pass http://server; } } The problem is that when my phone does a PROPFIND on the server, nginx adds extra characters to the content body that throw the phone off. Specifically, it prepends d611\r\n at the front of the body and appends 0\r\n\r\n to the end of the content. (I got this from Wireshark.) It also re-chunks the result. How do I get nginx to send the original content as-is?
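
    One reading of the capture (my interpretation, not from the post): "d611\r\n ... 0\r\n\r\n" is exactly the framing of HTTP/1.1 chunked transfer encoding, so the extra bytes are chunk-size markers rather than corrupted body data, and the question becomes why the client cannot cope with a re-chunked 207 Multi-Status reply. A hedged sketch of proxy settings commonly experimented with in this situation:

      location / {
          proxy_pass         http://server;
          proxy_http_version 1.1;       # speak HTTP/1.1 to the backend (nginx 1.1.4+)
          proxy_set_header   Connection "";
          proxy_buffering    off;       # pass the backend response through unbuffered
      }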

    Read the article

  • Windows 7 Users unable to add Windows 2003 server printers

    - by TravBrack
    Hi there I just rolled out a few Windows 7 x64 machines and ran into this issue where non-admin users are unable to add printers hosted on a windows 2003 server. It works fine on a 2008 server. The issue appears to be with the point and print system. A user will attempt to add the printer, a prompt will come up requiring the user to elevate privileges in order to install a driver, and will fail citing 'access denied'. I found the group policy setting Point and Print Restrictions: When the policy setting is disabled: -Windows Vista computers will not show a warning or an elevated command prompt when users create a printer connection to any server using Point and Print. So I disabled it, verified that the policy was being picked up using rsop, but it still does the same thing. I've also tried the following: Recreating the printers using newer drivers Adding the printer using 32 bit drivers on the 2003 machine, then adding the 64 bit drivers on a Windows 7 machine Adding the printer from a windows 7 machine using print management None of these things work. The security settings are no different than the working printers. Help?

    Read the article

  • can't figure out why apache LDAP auth fails

    - by SethG
    Suddenly, yesterday, one of my apache servers became unable to connect to my LDAP (AD) server. I have two sites running on that server, both of which use LDAP to auth against my AD server when a user logs in to either site. It had been working fine two days ago. For reasons unknown, as of yesterday, it stopped working. The error log only says this: auth_ldap authenticate: user foo authentication failed; URI /FrontPage [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server], referer: http://mysite.com/ I thought perhaps my self-signed SSL cert had expired, so I created a new one for mysite.com, but not for the server hostname itself, and the problem persisted. I enabled debug-level logging. It shows the full SSL transaction with the LDAP server, and it appears to complete without errors until the very end when I get the "Can't contact LDAP server" message. I can run ldapsearch from the commandline on this server, and I can login to it, which also uses LDAP, so I know that the server can connect to and query the LDAP/AD server. It is only apache that cannot connect. Googling for an answer has turned up nothing, so I'm asking here. Can anybody provide insight to this problem? Here's the LDAP section from the apache config: <Directory "/web/wiki/"> Order allow,deny Allow from all AuthType Basic AuthName "Login" AuthBasicProvider ldap AuthzLDAPAuthoritative off #AuthBasicAuthoritative off AuthLDAPUrl ldaps://domain.server.ip/dc=full,dc=context,dc=server,dc=name?sAMAccountName?sub AuthLDAPBindDN cn=ldapbinduser,cn=Users,dc=full,dc=context,dc=server,dc=name AuthLDAPBindPassword password require valid-user </Directory>
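
    Two checks that are often suggested from this point (suggestions, not from the post): reproduce Apache's simple bind from the command line with the same URL and bind DN, and, since the self-signed certificate was recently replaced, see whether mod_ldap is now rejecting the LDAP server's certificate. A sketch with the DN and base shortened to placeholders:

      # reproduce the bind that mod_authnz_ldap performs
      ldapsearch -x -H ldaps://domain.server.ip \
          -D "cn=ldapbinduser,cn=Users,dc=example,dc=local" -w password \
          -b "dc=example,dc=local" "(sAMAccountName=foo)"

      # diagnostic only: tell mod_ldap to skip certificate verification
      # (server-wide directive in httpd.conf; weakens security)
      LDAPVerifyServerCert Off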

    Read the article

  • What causes Windows Media Player on Windows 8 to not play the entire library?

    - by somequixotic
    Behavior 1: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). A random song from any track in the Library is randomly chosen and played. When observing the Playlist (clicking the "Play" tab), the entire contents of the Library appears in the Playlist. Behavior 2: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). The button visually depresses like it has registered the click, but nothing happens. Absolutely nothing. Moreover, the "Previous" button is grayed out. When observing the Playlist, only the one song that was double-clicked appears in the Playlist. What causes Behavior 2? I cannot correlate any specific action I've taken with Behavior 2, and Behavior 1 has been the case as long as I can remember, all the way back to Windows XP. Even earlier during my usage of Windows 8, I recall Behavior 1 working correctly. But suddenly, inexplicably, without changing any settings in WMP, Behavior 2 kicked in, and persists after reboots. I've tried sfc /scannow in an administrator prompt. All system files are in order. I've downloaded all Windows Updates and driver updates. I've attempted to alter WMP options and playback settings to no avail. So... what is causing Behavior 2? Is this an intended, valid behavior, or is something malfunctioning? How would I know what that "something" is? How would I go about fixing it without just reinstalling Windows 8 fresh?

    Read the article

  • How to whitelist external access to an internal webserver via Cisco ACLs?

    - by Josh
    This is our company's internet gateway router. This is what I want to accomplish on our Cisco 2691 router: All employees need to be able to have unrestricted access to the internet (I've blocked facebook with an ACL, but other than that, full access) There is an internal webserver that should be accessible from any internal IP address, but only a select few external IP addresses. Basically, I want to whitelist access from outside the network. I don't have a hardware firewall appliance. Until now, the webserver has not needed to be accessible externally... or in any case, the occasional VPN has sufficed when needed. As such, the following config has been sufficient: access-list 106 deny ip 66.220.144.0 0.0.7.255 any access-list 106 deny ip ... (so on for the Facebook blocking) access-list 106 permit ip any any ! interface FastEthernet0/0 ip address x.x.x.x 255.255.255.248 ip access-group 106 in ip nat outside fa0/0 is the interface with the public IP However, when I add... ip nat inside source static tcp 192.168.0.52 80 x.x.x.x 80 extendable ...in order to forward web traffic to the webserver, that just opens it up entirely. That much makes sense to me. This is where I get stumped though. If I add a line to the ACL to explicitly permit (whitelist) an IP range... something like this: access-list 106 permit tcp x.x.x.x 0.0.255.255 192.168.0.52 0.0.0.0 eq 80 ... how do I then block other external access to the webserver while still maintaining unrestricted internet access for internal employees? I tried removing the access-list 106 permit ip any any. That ended up being a very short-lived config :) Would something like access-list 106 permit ip 192.168.0.0 0.0.0.255 any on an "outside-inbound" work?
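
    A hedged sketch of the whitelist idea (untested, and assuming the static NAT line stays in place with ACL 106 still applied inbound on fa0/0, where the destination the ACL sees is the pre-NAT public address x.x.x.x; the trusted range a.a.a.a is a placeholder):

      ! allow the whitelisted range to reach the published web port
      access-list 106 permit tcp a.a.a.a 0.0.255.255 host x.x.x.x eq 80
      ! block everyone else from that port
      access-list 106 deny   tcp any host x.x.x.x eq 80
      ! ... existing Facebook denies ...
      ! keep everything else (including return traffic for inside users) flowing
      access-list 106 permit ip any any

    Ordering matters: the permit/deny pair for the web server has to sit above the final permit ip any any.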

    Read the article

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make stuff work on short notice. I am trying to set up a ProxMox virtualization platform in an existing network. The network currently consists of several servers which have VMWare free edition. There is some sort of VPN defined in the switch. In order for the VMWare management interface to be accessible, a checkbox for VPN needs to be ticked in the network settings and the VPN id entered. I didn't notice any such configuration option during the ProxMox installation, so my Proxmox VE on the same physical server, using the same manual IP settings (ip/nm/gw), is not accessible. As I understand it, I should edit Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what I should aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both ProxMox VE and future VMs? I read the ProxMox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
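
    If what the switch calls a "VPN id" is really an 802.1q VLAN tag (an assumption on my part, but that is what the corresponding VMware port-group setting usually is), a Proxmox /etc/network/interfaces sketch would tag the bridge's uplink with that VLAN; the addresses and the tag 10 are placeholders, and the vlan package plus the 8021q module need to be available:

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.20
          netmask 255.255.255.0
          gateway 192.0.2.1
          bridge_ports eth0.10     # eth0 tagged with VLAN 10
          bridge_stp off
          bridge_fd 0

    VMs attached to vmbr0 would then share the same tagged segment as the Proxmox management address.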

    Read the article

  • Debian Squeeze vzquota

    - by benjamin
    Hello, Apparently, I got Debian Squeeze (Debian 6) to work on a VPS using debootstrap and chroot as described here. Subsequent installation of the harden, exim4, mysql-server packages failed partially. Relevant information: insserv: warning: script 'S10vzquota' missing LSB tags and overrides insserv: warning: script is corrupt or invalid: /etc/init.d/../rc6.d/S00vzreboot insserv: warning: script 'vzquota' missing LSB tags and overrides insserv: There is a loop between service vzquota and stop-bootlogd if started insserv: loop involving service stop-bootlogd at depth 2 insserv: loop involving service vzquota at depth 1 insserv: loop involving service rsyslog at depth 1 insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: There is a loop between service vzquota and stop-bootlogd if started insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true! insserv: exiting now without changing boot order! update-rc.d: error: insserv rejected the script header dpkg: error processing exim4-base (--configure): subprocess installed post-installation script returned error exit status 1 Any suggestions? Keywords: vzquota debian squeeze installation vps, virtual private server.
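
    For what it's worth, the insserv warnings are about the OpenVZ init scripts lacking LSB headers. A sketch of the header block that is typically pasted at the top of /etc/init.d/vzquota so insserv can order it (the dependencies shown are guesses, not taken from the package):

      ### BEGIN INIT INFO
      # Provides:          vzquota
      # Required-Start:    $local_fs $syslog
      # Required-Stop:     $local_fs $syslog
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: start and stop OpenVZ disk quotas
      ### END INIT INFO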

    Read the article

  • Unix Permissions issue with users belonging to the same group accessing a folder

    - by TK Kocheran
    I have a folder I'd really like to allow another user on this machine access to. I'm using mt-daapd to serve music to the network, so I'd like to enable the mt-daapd user to access my Music directory, /home/rfkrocktk/Music. The master user is rfkrocktk obviously. I've tried to set all of my permissions properly on the directory, but the mt-daapd user can't access the files. I created a group called media-users and added both rfkrocktk and mt-daapd to it in order to give mt-daapd permission to simply read all of the files in that directory and subdirectories. If I run id on each of my users, here's what's displayed: $ id rfkrocktk > uid=1000(rfkrocktk) gid=1000(rfkrocktk) groups=1000(rfkrocktk),4(adm),20(dialout),24(cdrom),29(audio),46(plugdev),104(lpadmin),115(admin),120(sambashare),124(vboxusers),1001(jupiter),2002(media-users) $ id mt-daapd > uid=123(mt-daapd) gid=65534(nogroup) groups=65534(nogroup),2002(media-users) It definitely seems that both users are a part of the media-users group, so what could be going wrong? If I run ls -l on the actual Music directory to see its permissions, here's the output: drwxr-Sr-- 201 rfkrocktk media-users 12288 2011-01-13 12:26 Music If I run ls -l on the Music directory to get its children, here's the output: drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-12-20 15:31 2DBoy drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-05-25 12:50 ABBA drwxr-Sr-- 3 rfkrocktk media-users 4096 2009-12-28 15:19 Access Denied drwxr-Sr-- 10 rfkrocktk media-users 4096 2009-12-28 15:19 AC-DC drwxr-Sr-- 3 rfkrocktk media-users 4096 2009-12-28 15:19 Aerosmith drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-06-04 10:45 A Flock of Seagulls drwxr-Sr-- 4 rfkrocktk media-users 4096 2010-05-28 18:13 Alestorm drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-06-22 23:29 Amon Amarth drwxr-Sr-- 5 rfkrocktk media-users 4096 2009-12-28 15:19 Anberlin ... From this, it would seem that I should be able to access the folders from mt-daapd, but I can't. Running sudo -i -u mt-daapd ls -l /home/rfkrocktk/Music displays nothing, indicating to me that for whatever reason, mt-daapd doesn't have access to read the folder. What am I doing wrong?
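
    One detail worth calling out (my reading of the listing, not stated in the post): in drwxr-Sr-- the group has read but no execute permission (the capital S means setgid without group execute), and without execute a group member cannot actually enter the directories or open the files beneath them. A sketch of the usual repair:

      # give the group read plus traverse rights (capital X only adds execute
      # to directories and to files that are already executable)
      chmod -R g+rX /home/rfkrocktk/Music

      # keep the setgid bit on directories so new content inherits media-users
      find /home/rfkrocktk/Music -type d -exec chmod g+s {} +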

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to password-protect the wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored. When the wp-login.php or wp-admin folder is called, it goes directly to the normal WordPress login. I installed everything from the command line in the following way: sudo apt-get install apache2-utils sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser New password: Re-type new password: Adding password for user exampleuser My Nginx configuration file looks like this: server { listen 80; root /var/www; index index.php index.html index.htm; server_name example.com; location / { try_files $uri $uri/ /index.html; } location /bitmall/wp-admin/ { auth_basic "Restricted Section"; auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd; } location ~ /\.ht { deny all; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } I would appreciate your advice on this.
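
    A likely explanation (my reading of the config, not stated in the post): requests for wp-login.php and for anything ending in .php under wp-admin are handled by the regex location ~ \.php$, which carries no auth_basic, so the protected prefix location never sees them. A hedged sketch of one common restructuring, adding a more specific regex location (it must appear before the generic \.php$ block) that applies the auth and then repeats the FastCGI handling:

      location ~ ^/bitmall/(wp-login\.php|wp-admin/.*\.php)$ {
          auth_basic           "Restricted Section";
          auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;

          try_files $uri =404;
          fastcgi_pass  unix:/var/run/php5-fpm.sock;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }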

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to stack overflow (here) but realised it should really be on serverfault. so apologies for the incorrect and duplicate posting: Ok so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks but we didn't touch SAN scenarios so my question is as follows; I currently have a 250 gig database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single file group with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that file groups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and standard edition doesn't support partitioning. So in order to improve parallelism what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question how many files should I have. One per core? Should I be putting tables and indexes with high levels of activity in separate file groups, each with the same number of data files as we have cores? Thank you
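
    If extra data files end up being the route taken, a hedged T-SQL sketch of the mechanics (database name, filegroup, paths and sizes are placeholders; it illustrates how files are added, not how many to add):

      ALTER DATABASE MyDb ADD FILEGROUP HotTables;

      ALTER DATABASE MyDb
      ADD FILE
          (NAME = MyDb_Hot1, FILENAME = 'E:\SQLData\MyDb_Hot1.ndf', SIZE = 10GB),
          (NAME = MyDb_Hot2, FILENAME = 'F:\SQLData\MyDb_Hot2.ndf', SIZE = 10GB)
      TO FILEGROUP HotTables;

    High-activity tables and indexes would then be created or rebuilt ON HotTables.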

    Read the article

  • Domain registrar transferring

    - by Mike Weerasinghe
    In 2004 I registered a domain name when I opened an account with DiscountASP.NET. I presume my domain registration was handled by a reseller. A domain tools whois search shows that registration services are provided by Znode LLC. I changed hosting companies and need to change the DNS servers to point to my new hosting company, but I have no idea how to do that. There is no control panel I can access. Ideally I would like to transfer registrars. I emailed Znode support but I have not received any response. I called and left a message and they have not called back. My new hosting company wants an EPP authorization code in order to transfer my domain. I guess I need to get it from Znode LLC. Anyone have any ideas on how I might go about transferring my domain over to a new registrar? The domain name has not expired and is currently active. Thanks in advance for your help.

    Read the article

  • Is it possible to detect nearby Wi-Fi enabled devices, not necessarily on the same network? [closed]

    - by Sky
    first question on StackExchange ever. I hope I got the right board. I'm trying to create a device (either from a standard AP or some other unconventional means) that will be able to detect nearby Wi-Fi enabled devices. For example, if a cellular phone (iPhone for instance) would be carried into the secured area, its MAC address will be logged. A cellular phone is a good example because it's the most common threat that should be detected. Some important points: The detection can be either active or passive, doesn't matter. The detected device might be connected to a different network, or might not be connected to anything at all. I assume most cellular phones are actively probing when not connected, but I'm not sure. It is important to not only identify the breach, but also to identify the device (MAC address). Conventional hardware is only optional. Distance of detection is at least 6 meters (20 feet). Handling one device at a time is good. Speed of detection is important, under 5 seconds is ideal. So my question is, is this even possible? If so, what can I use in order to make this a reality? Thank you for reading!
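
    As an illustration of the passive option (a suggestion, not from the post): Wi-Fi clients that are switched on but unassociated normally broadcast probe-request management frames, and a card in monitor mode can log the source MAC of each one. A sketch using common Linux tooling, assuming wlan0 supports monitor mode and that aircrack-ng creates a mon0 interface:

      # put the card into monitor mode
      airmon-ng start wlan0

      # print the source MAC of every probe request overheard
      tcpdump -i mon0 -e -l type mgt subtype probe-req

    Range and reliability depend heavily on the antenna and on how often the device actually probes.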

    Read the article

  • How a signal physically passes over a wire

    - by smwikipedia
    Hi, this may be more a question of physics, so pardon me if there's any inconvenience. When I study computer networks, I often read something like this: in order to represent a signal, we place some voltage on one end of the wire and the other end will detect the voltage and thus the signal. So I am wondering how exactly a signal passes through a wire. Here's my current understanding, based on my formal knowledge of electronics: First we need a closed circuit to constrain/hold the electric field. When we place a voltage at some point A of the circuit, an electric field starts to build up within the circuit medium; this process should be nearly as fast as the speed of light. As the electric field is being built up, the electrons within the circuit medium are moved, and thus an electric current occurs, and once the current is strong enough to be detected at some other point B on the complete circuit, B knows about what has happened at A and thus communication between A and B is achieved. The above only describes the process of sending a single voltage through the wire. If there's a bitstream and we need to send a series of voltages, I am not sure which of the following is true: (1) the 2nd voltage should only be sent from A after the 1st voltage has been detected at B, the interval being the time needed to establish the electric field in the medium and form a detectable current at B; or (2) several different voltages could be sent on the wire one by one, with different current values existing along the wire simultaneously and arriving at B successively. I hope I made myself clear and that someone else has pondered this question. (I tag this question with network because I don't know if there's a better option.) Thanks, Sam

    Read the article

  • Encoding over SSH Issues

    - by user1104160
    I have a Linux machine and a Windows machine, both using Vim with the Powerline plugin. They both work great with patched fonts. Next, I want to SSH onto an OSX 10.6 machine and also use Powerline in the terminal with Vim. However, I get weird symbols with normal mode ("^^B" in one area) and fancy mode ("~@" and "~B" spread throughout the bar). I thought this mixup was an encoding issue, but when I look at PuTTY's encoding it is using UTF-8, and the same with the Ubuntu terminal. Additionally, on the OSX machine, "locale" returns "en_US.UTF-8" for all variables (I set it to do that in order to troubleshoot). However, the symbols are still showing. I am using a patched font (Inconsolata, the same one as the Ubuntu terminal) for the OSX terminal, so I am stumped. Is there a missing component to this equation? Are there additional problems that can arise from SSH encoding? On the OSX end, additionally, these same symbols appear, so it may not even be related to SSH and therefore I'm totally lost.
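
    One environment detail that is often checked in this situation (a suggestion, not from the post) is whether the UTF-8 locale actually survives the SSH hop, since the interactive shell's locale and the one an SSH session ends up with can differ. A sketch:

      # client side (~/.ssh/config): forward the locale variables
      SendEnv LANG LC_*

      # OS X sshd (/etc/sshd_config): accept them, then restart sshd
      AcceptEnv LANG LC_*

      # verify what a non-interactive session actually sees
      ssh user@osxbox locale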

    Read the article

  • How to auto-cc a system email account any time a user creates an appointment

    - by Ferdy
    I will not bother explaining my full architecture or reasons for wanting this, in order to keep this question short: Is it possible to auto-cc a certain email account any time an Exchange user creates an appointment or meeting in his own calendar? Is it possible using rules? Our Exchange 2007 server is outsourced; I cannot change the configuration or install plugins server-side. Preferably, it should still work server-side, because users may use the Outlook client but also Outlook Web Access. Is there any other way, perhaps using group policies? My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it will need to be managed for thousands of desktop users and that the add-on will not work when using another client (OWA, mobile). An alternative architecture could be to pull the information from the user's calendar on a scheduled basis. Given that we are talking about a lot of users, scalability is a major issue; this has also been confirmed by Microsoft. Can you confirm that my thinking is correct or do you have any other solutions?

    Read the article

  • How can I get WAMP and a domain name to work on a non-standard port?

    - by David Murdoch
    I have read countless articles on setting up a domain on WAMP to listen on a port other than 80; none of them are working. I've got Windows Server 2008 (Standard) with IIS 7 installed and running on port 80 (and 443). I've got WAMP installed with the following configuration. Listen 81 ServerName sub.example.com:81 DocumentRoot "C:/Path/To/www" <Directory "C:/Path/To/www"> Options All MultiViews AllowOverride All # onlineoffline tag - don't remove Order Allow,Deny Allow from all </Directory> localhost:81 works with the above configuration but sub.example.com:81 does not. Just to make sure my firewall wasn't getting in the way I have disabled it completely. My sub.example.com domain is already pointing to my server and works on IIS on port 80. Also, if I disable IIS and change the Apache port from 81 to 80 it works. Yes, I am restarting Apache after each httpd.conf change. :-) I don't need any other domain (or sub domains [I don't even care about localhost]) configured which is why I'm not using a VirtualHost. Any ideas what is going on here? What could I be doing wrong? Update Changing Listen to 80 but keeping ServerName as sub.example.com:81 causes navigation to sub.example.com:80 to work; this just doesn't seem right to me. Could ServerName be ignoring the :port part somehow? netstat -a -n | find "TCP": >netstat -a -n | find "TCP" TCP 0.0.0.0:81 0.0.0.0:0 LISTENING TCP 0.0.0.0:135 0.0.0.0:0 LISTENING TCP 0.0.0.0:445 0.0.0.0:0 LISTENING TCP 0.0.0.0:912 0.0.0.0:0 LISTENING ... TCP 127.0.0.1:81 127.0.0.1:49709 TIME_WAIT ...
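
    One pattern usually suggested for this setup (an assumption on my side, not from the post): ServerName on its own does not bind a port, Listen does, so with IIS owning :80 the Apache side is normally written as a name-based VirtualHost on :81. A sketch reusing the paths from the post:

      Listen 81
      NameVirtualHost *:81

      <VirtualHost *:81>
          ServerName   sub.example.com
          DocumentRoot "C:/Path/To/www"
          <Directory "C:/Path/To/www">
              Options All MultiViews
              AllowOverride All
              Order Allow,Deny
              Allow from all
          </Directory>
      </VirtualHost>

    Since the netstat output already shows Apache bound to 0.0.0.0:81, if the name still times out while localhost:81 answers, name resolution or a perimeter device blocking port 81 is the other usual suspect.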

    Read the article

  • How to create VirtualHost in Ubuntu 12.10

    - by Mifas
    I have followed many articles on 'How to create a VirtualHost in Ubuntu'. This is what I have done: Installed Apache: sudo apt-get install lamp-server^ phpmyadmin I created a folder called site1.com in /var/www/ Then I created the file /etc/apache2/sites-available/site1.com Then added the following code to that site1.com file: <VirtualHost *:80> ServerName www.site1.com ServerAdmin [email protected] ServerAlias site1.com DocumentRoot /var/www/site1.com # Other directives here <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/site1.com/> Options Indexes FollowSymLinks MultiViews AllowOverride all Order allow,deny Allow from all </Directory> </VirtualHost> Then I edited the hosts file and added the following line: 127.0.0.1 site1.com Edit: I also enabled site1.com via sudo a2ensite site1.com, then restarted the Apache service (I even restarted the PC). When I go to site1.com, it says "The connection has timed out". But I can browse via localhost/site1.com. I have been trying for the last two days with no solution, and have followed many articles and videos.
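
    A couple of quick checks that usually come next (suggestions, not from the post): confirm how Apache parsed the virtual hosts and that the hosts entry is really the one being used for the name:

      # show the parsed vhosts - site1.com should be listed under *:80
      sudo apache2ctl -S

      # confirm the name resolves to the loopback address
      getent hosts site1.com

      # watch which log the request actually lands in
      tail -f /var/log/apache2/*.log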

    Read the article

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present. Analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than deep-scan, one which would recover filenames and possibly paths. Long version I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents but some folders were inaccessible. I stupidly decided to run a CHKDSK on my backup disk, which made the folders accessible but also left them empty. I connected this disk via SATA to my main PC (Arch Linux). I tried: testdisk ntfsundelete ntfsfix --no-action (to look for diagnostically relevant faults, disk was "OK" though) to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK (rather than using a typical journal'd deletion). If it is useful at all, a majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files. If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it! My priorities in ascending order of importance for choosing the accepted answer: Restores directory structure Recovers many filenames in addition to the file data Is free / very cheap Runs on Linux Recovers a majority of file data The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)
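
    For the deep-scan fallback described at the end, one commonly used free option on Linux is PhotoRec, which carves files by signature straight off the device; it satisfies the "recovers a majority of file data" point but not the filename/path ones. A sketch, with the device name and destination directory as placeholders:

      # carve recoverable JPEG/PSD/video files from the raw disk into ./recovered
      # (interactive menus allow limiting the search to specific file types)
      photorec /log /d recovered/ /dev/sdb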

    Read the article
