Search Results

Search found 8367 results on 335 pages for 'temporal difference'.

  • Monitor mode 802.11 captures on OSX

    - by Mike A
    I'm trying to determine the difference between capturing 802.11 frames in the following ways on OSX (10.8.5). It's a bit esoteric, but I use "Option 2" to capture frames for later analysis, and am wondering if I'm missing something.

    Option 1: use "airportd":

        $ sudo /usr/libexec/airportd en0 sniff

    Option 2: use "airport" to set the channel, followed by tcpdump:

        $ sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport --channel=
        $ sudo tcpdump -I -P -i en0 -w /tmp/capture.pcap

    (or alternatively, eliminate the -w and watch packets in real time). From what I can tell:

    - Both commands, according to the wifi icon on OSX, put the interface into 'monitor' mode.
    - Both commands output a pcap file that is readable in both wireshark/tcpdump and Eye PA.
    - Both commands appear to capture management, control, and data frames.

    The rub: Option 1 disconnects you from the network. This is expected when putting an interface into 'monitor' mode. Option 2 does NOT disconnect you, provided you've set the channel to the same channel you're currently connected to. This has the distinct advantage of keeping your connection up while capturing in monitor mode.

    My question: Option 2 does not seem like it should work; more specifically, it does not seem like I should be able to remain connected while also capturing frames in monitor mode. On a wired NIC you can be 'promiscuous' and still send frames, but I didn't think the same was true for a wireless NIC. So I'm questioning the validity of capturing frames with Option 2.
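
    For reference, a cleaned-up shell sketch of the Option 2 capture; <N> is a placeholder for the channel the interface is currently associated on:

        # pin en0 to the current channel, then capture in monitor mode
        AIRPORT=/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport
        sudo "$AIRPORT" --channel=<N>                    # <N> = your current channel (placeholder)
        sudo tcpdump -I -P -i en0 -w /tmp/capture.pcap   # -I requests monitor mode; -P kept from the original invocation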

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    - The official docs say to use the desktop application, which is not applicable in my situation. However, I installed the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory so that I can make the same change on the server.
    - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist on my Dropbox install, leading me to believe the script is out of date.
    - This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I'm presuming the information in that thread is also out of date for version 2.0.
    - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install, and it starts a command-line daemon instead, which provides no means to change the option I need.
    - I am currently using a symlink, as suggested in another answer, but this is a kludge. I would like to know the correct way to make the change.

    How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.
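
    For completeness, the symlink workaround mentioned above written out as shell commands; the paths and the dropbox.py CLI helper are assumptions about this particular install:

        # stop the daemon first (dropbox.py is the common CLI helper; adjust if absent)
        ~/bin/dropbox.py stop
        # move the synced folder to the desired location and link it back
        mv ~/Dropbox /mnt/backup/Dropbox
        ln -s /mnt/backup/Dropbox ~/Dropbox
        ~/bin/dropbox.py start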

  • UDP blocked by Windows XP Firewall when sending to local machine

    - by user36367
    Hi there, I work for a software development company, but the issue doesn't seem to be programming-related. Here is my setup:

    - Windows XP Professional with Service Pack 3, all updated
    - A program that sends UDP datagrams
    - A program that receives UDP datagrams
    - Windows Firewall set to allow inbound UDP datagrams on a specific port (Scope: Subnet)

    If I send a UDP datagram on any port to other, similar machines, it goes through. If I send the UDP datagram to the same computer running the sender program (whether using broadcast, the localhost IP, or the specific IP of the machine), the receiver program gets nothing.

    I've traced the problem down to the Windows XP Firewall, as Windows 7 does not have this problem (and I do not wish to sully my hands with Vista). If the exception I create for that UDP port in the WinXP firewall is set to a scope of Subnet, the datagram is blocked, but if I set it to All Computers, or specifically enter my network settings (192.168.2.161 or 192.168.2.0/255.255.255.0), it works fine. Using different UDP ports makes no difference.

    I've tried different programs to reproduce this problem (ServerTalk to send and either IP Port Spy or PortPeeker to receive) to make sure it's not our code that's the issue, and those programs' datagrams were blocked as well. Also, that computer only has one network interface, so there is no additional network weirdness. I receive my IP from a DHCP server, so this is a straightforward setup.

    Given that it doesn't happen in Windows 7, I must assume it's a defect in the Windows XP Firewall, but I'd think someone else would have encountered this problem before. Has anyone encountered anything like this? Any ideas? Thanks in advance!
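
    The working configuration described (an explicit custom scope instead of "Subnet") can be scripted with the XP-era netsh firewall syntax; a sketch, with the port number as an example:

        rem open UDP port 5000 (example) with an explicit custom scope
        netsh firewall add portopening protocol=UDP port=5000 name="MyApp UDP" mode=ENABLE scope=CUSTOM addresses=192.168.2.0/255.255.255.0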

  • Load balancing with puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load-balancing system. My load balancer (nginx) has a conf file where I should list all the IPs of the upstream servers. I could put the IPs in the conf manually, but that way I would need to change the conf file every time I add or remove an upstream server. For now I've come up with two different ideas, but I don't much like either (see the sketch after this list):

    1. Have every upstream machine use Exported Resources to create a file with its IP. The load balancer server will then have an "include conf_directory/*" and load all the files created by the upstream servers. Since the load balancer is using nginx this can be done, but if I later want to configure something that doesn't support "include" in its conf files, this solution will not work.

    2. If the config doesn't support the "include" command, we could again have every upstream server use Exported Resources to create a file with its IP, and later have the load balancer execute a command that picks up every file and generates the config.

    Both versions adopt the same technique; the difference is that version 2 is used when the server (that needs to have a conf generated) doesn't recognize a command like "include" inside its own conf.

    Now, my question is: is there any way to do this in a different form? I suspect there is, since Puppet is made to manage multiple servers; it seems a bit strange not to have an easy way to configure load balancers.
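
    As a sketch of the direction Puppet itself offers (essentially idea 2 without a hand-rolled generation command): exported resources collected into a single file via the concat pattern. Module names and paths here are assumptions, not a tested manifest:

        # on every upstream node: export a fragment carrying this node's IP
        @@concat::fragment { "upstream-${::hostname}":
          target  => '/etc/nginx/conf.d/upstreams.conf',
          content => "server ${::ipaddress};\n",
        }

        # on the load balancer: assemble all collected fragments into the conf
        concat { '/etc/nginx/conf.d/upstreams.conf': }
        Concat::Fragment <<| target == '/etc/nginx/conf.d/upstreams.conf' |>>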

  • Backup Exec tape rotation guidelines

    - by HannesFostie
    Hi. We use Backup Exec to take care of the backups for our data server, our Exchange server, and one more set of systems. Each of these three is done on a separate "set" of tapes. Our goal is to be able to roll back a full two weeks, with one full backup each weekend and differential/incremental backups in between (the difference between the two in our case isn't very big, because the employees mostly use a very similar set of files throughout the week).

    While playing around with the settings on how to achieve this, we set the time for BE to keep the full backup to 14 days, but because we have too much data this requires manual intervention each time to erase a certain tape and use that instead.

    What I would like to know is what kind of guidelines, tricks, tips, and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of much importance in our case, as it's more or less set in stone. I made this community wiki as it's not a very specific question. Thanks in advance!
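
    Rough media math behind a 14-day roll-back, as a sketch; the numbers are assumptions about a typical weekly cycle, not a prescription:

        # 1 full per weekend, kept 14 days  -> at least 2 full media sets on separate tapes
        # dailies (diff/incr), kept 14 days -> at least 2 more sets for the weekday tapes
        # + 1 spare set so the oldest tape's overwrite protection period expires
        #   BEFORE the next full starts; otherwise Backup Exec sits waiting for
        #   overwritable media, which is the manual intervention described above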

  • How to set up a simple VPN for secure Internet connections over unencrypted Wi-Fi on Windows?

    - by Senseful
    I'm looking for a solution similar to the one in this question, except that I don't have a Linux computer. I have Windows computers that could be set up to accept VPN connections. Preferably I want to set this up on either Windows Server 2003 or Windows XP. I'd like to connect different devices (e.g. iPhone, iPad, laptops, etc.) that are on open, unsecured wireless networks (e.g. the ones you see at places like Starbucks) to this VPN to ensure that all my data is secure.

    I found an article that shows that you can enable VPN connections on Windows XP. After following those steps, though, I'm not sure what to do. Which ports do I open on my firewall? Which VPN settings do I use on my devices such as the iPhone? Do I use L2TP, PPTP, or IPSec? What's the difference between these? Are there any other steps missing in that tutorial? I'm hoping that since Windows has this feature built in, it will be much simpler to set up than something such as OpenVPN.

    If I follow those settings and enable port forwarding on port 1723, and then use the following settings on the iPhone:

    - PPTP (IP Address)
    - RSA SecurID: Off
    - Encryption Level: Auto
    - Send All Traffic: On
    - Proxy: Off

    it shows "Connecting..." then "Disconnecting..." and the following error message:

        VPN Configuration
        A connection could not be established to the PPP server. Try reconnecting.
        If the problem continues, verify your settings and contact your Administrator.

    I'm using a user account that I granted privileges to in the VPN settings on the Windows machine.
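
    One hedged note on the "port forwarding on 1723" step: PPTP is not a single TCP port, which is a plausible (but unconfirmed) explanation for the PPP error above:

        # what a PPTP pass-through typically needs at the router/firewall:
        #   TCP 1723              - PPTP control channel (plain port forwarding covers this)
        #   IP protocol 47 (GRE)  - carries the tunnel itself; NOT a TCP/UDP port,
        #                           so it needs "PPTP pass-through"/GRE support, not a port rule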

  • IIS 6 Windows Authentication in ASP.Net app fails

    - by Kjensen
    I am trying to install an ASP.Net app on an IIS6 web server. The site requires the user to authenticate with Windows, and this works in several other apps on the same server. In IIS I have enabled anonymous access and Windows authentication. In web.config, authentication is set to:

        <authentication mode="Windows"/>

    and authorization:

        <authorization>
          <allow roles="Users"/>
          <deny users="*"/>
        </authorization>

    i.e., allow all users in the role "Users" and deny everybody else. This is the approach that is working in several other apps on the same server. If I run the site, I am prompted for username and password. If I remove the line:

        <deny users="*"/>

    I can access the site and everything works, but the user credentials are not passed to the site (Page.User.Identity.Name returns a blank string in ASP.Net).

    The site has identical (inherited) file permissions to the other working sites on the server. The only difference in authentication/authorization between this site and the other working sites is that this one runs ASP.Net 4 (but there are other working ASP.Net 4 sites on the server as well). What am I missing here? Where should I look?
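
    One hedged variant worth comparing against the working sites: with both anonymous access and Windows authentication enabled, denying only anonymous users (rather than everyone) still forces IIS to negotiate credentials while letting any authenticated member of "Users" through. A sketch, not a confirmed fix:

        <authorization>
          <allow roles="Users"/>
          <deny users="?"/>  <!-- "?" denies only anonymous requests -->
        </authorization>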

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service).

    Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc.... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear nuke? can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

    - Cost of reputational damage if down: very high
    - Probability of a hardware failure with my setup: <<1%
    - Probability of a hardware failure with a less paranoid setup: <<1% as well
    - Probability of a software failure in our application code: 1% (if your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server, which is arguably developed and tested by clever people with a strong methodology, is sometimes down.)

    In other words, I feel like I could host a cheap laptop in my mother's flat, and the human/software problems would still be my higher risk. Of course, there are other things to take into consideration, such as:

    - scalability
    - data security
    - the clients' expectation that you meet the industry standard

    But still, hosting two servers in two different data centers (without extra spare servers, nor doubled network equipment apart from what my hosting facility provides) would give me the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs?

    How far do you push your redundancy craziness?
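
    The uptime comparison in numbers, as a quick sketch:

        # 99.999%  uptime -> 0.001%  downtime, roughly 5.3 minutes/year
        # 99.9999% uptime -> 0.0001% downtime, roughly 32 seconds/year
        # 1% downtime from software bugs -> roughly 3.7 days/year, dwarfing both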

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on.

    My default root directory for the Apache web server is pointed to an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the userdir include conf file:

        #Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they are still doing the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution in another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
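
    A hedged sketch of the vhosts file laid out the usual way: with name-based hosting, the first <VirtualHost> acts as the default for unmatched requests, so giving the old DocumentRoot its own stanza keeps requests for mysite.dev from falling through. The ServerName localhost value is an assumption:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>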

  • Set up a Reverse Proxy with Nginx and Apache on EC2

    - by heavymark
    Good day. I am currently using the free Amazon EC2 micro instance to learn Linux and server setup. I wish to set up Nginx as a reverse web proxy, and I found a great article on MediaTemple on how to do it: http://wiki.mediatemple.net/w/Using_Nginx_as_a_Reverse_Web_Proxy

    The directions work for almost any server except EC2. One difference between EC2 and MediaTemple is how IPs work: in general, EC2 instances do not know their own elastic IP. So when following the wiki directions, in the virtual hosts, instead of myip:80 I put *:80. When using Apache alone this works perfectly. With both, in the Apache virtual hosts I put "127.0.0.1:80", and in Nginx I put *:80. Apache restarts, but Nginx reports an error that it cannot bind because the IP is already in use. If I could put an actual IP in the Nginx file it would work, but since EC2 requires me to use the asterisk, it ends up conflicting with the Apache virtual hosts entry.

    Does anyone know a simple way around this (other than not using EC2)? ;-) Thank you! Cheers, Christopher
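
    A common way around the bind conflict, sketched under the assumption that the goal is nginx in front of Apache on one box: give each daemon a different port instead of a different IP, so neither needs to know the elastic IP (port 8080 is an example):

        # Apache (httpd.conf): listen only on loopback, off port 80
        Listen 127.0.0.1:8080

        # nginx: own port 80 and proxy through
        server {
            listen 80;
            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }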

  • w2k3 AD DC Demotion fails with "no other AD DC for that domain can be contacted"

    - by Kstro21
    I have a small office with a single W2K3 SP2 DC (a bad idea, but it is real). Now I want to do a clean install on that PC, so I got another one, installed W2K3 SP2, added it to the domain, ran dcpromo, and set it to be a GC. Up to this point everything was OK. Then I tried dcpromo on the primary DC, but it fails with:

        The box indicating that this domain controller is the last controller for the
        domain mydomain.com is unchecked. However, no other Active Directory domain
        controllers for that domain can be contacted. Do you wish to proceed anyway?
        If you click Yes, any Active Directory changes that have been made on this
        domain controller will be lost.

    So I started to move all the roles to the new server as described here. When all was OK with the roles, I tried the same again, but got the same result. I tried moving DNS to the new server, but it made no difference. I shut down the old server, then tried to log into a workstation, but it failed saying the domain is not available; I also couldn't add a new workstation to the domain, so I had to power on the old server again.

    So, given that I successfully moved all the roles and DNS to the new server:

    - Why does dcpromo give such a message on the old server?
    - Why, if I shut down the old server, is the domain not available?
    - If I click Yes when dcpromo gives the warning on the old server, will I lose all users, computers, OUs, etc.?
    - Am I missing some steps to make this work?

    Hope you can help me. Thanks!
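
    Hedged health checks worth running on the old DC before retrying dcpromo; these assume the W2K3 Support Tools are installed:

        netdom query fsmo            rem confirm all five FSMO roles point at the new DC
        repadmin /replsummary        rem replication summary between the two DCs
        dcdiag /test:replications    rem replication health
        dcdiag /test:dns             rem DNS registration and SRV records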

  • Understanding how IE's SmartScreen works

    - by Kevin Donn
    Today I downloaded an update to our mail server on my dev machine, using IE9 on Win7 Pro. I directed IE to save the file on our server's shared drive so I could install it later. When the download finished, IE showed a red banner at the bottom and said that ".exe is not commonly downloaded and could harm your computer." There were three buttons: "Delete", "Actions", and "View downloads". I selected "Actions", just because I had never seen this before. It showed a "SmartScreen Filter" dialog giving essentially three choices: "Don't run this program (recommended)", "Delete program", and "Run anyway". I just canceled the dialog, because I didn't want to run it in the first place; I just wanted to download it so I could run it later on the server.

    So when I did try to run it, it would blow up immediately saying, "Setup was unable to create the directory - Error 5: Access is denied." I tried unblocking the file, "Run as Administrator" (even though I already was Administrator), turning off UAC, etc.

    Cutting to the chase, I finally downloaded the file again, ran WinMerge on the two, and it showed they were identical, except the new one ran fine. I went back to my dev machine, downloaded the file through Firefox, and then ran it on the server; again fine. But when I tried again through IE, SmartScreen again showed its red banner and somehow clobbered the file, even though it was stored on another machine and WinMerge can't tell the difference between it and a good file.

    I've looked around on the web for how SmartScreen works, but everything gives a user-level description of it. What I want to know is: what does it do to that file to make it unrunnable on another machine? Thanks
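
    Not an explanation of the Error 5 itself, but one concrete artifact SmartScreen-era downloads carry is the "mark of the web": an NTFS alternate data stream written next to the file. A hedged way to inspect it from cmd (the path is an example):

        more < "\\server\share\installer.exe:Zone.Identifier"
        rem typical contents:
        rem   [ZoneTransfer]
        rem   ZoneId=3        (3 = Internet zone)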

  • ESXi 5 Guests will not boot

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server.

    Note: Investigation eventually determined that the problem was with the VMware client on a Lenovo Edge laptop, the only Windows box available in a Linux IT shop. vSphere Client v4 and v5 duplicated the behavior on the Lenovo Edge. As indicated in the comment to the accepted answer, replacing the workstation with one using different video hardware was the "fix" for this particular issue.

    The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot:

    - The initial guest memory consumption jumps up to 560MB, then drops to 40MB after a few seconds.
    - Initial CPU usage is one full CPU (3000MHz per the chart) and immediately drops to 29MHz.
    - Guests do not display any output in the Console tab but show a state of 'Powered On'.
    - There are no errors in the Events tab.
    - Switching a guest from BIOS to EFI makes no difference.
    - VMs are listed as Version 7, and the behavior is duplicated across all available guest OS flavors.
    - The problem is also duplicated when the server is booted in Legacy Only mode.
    - The logs do not contain anything particularly suspicious.

    Edit: There are no firewalls, routers, or VLANs between the client and server.

    Edit 2: We have tried the "Boot Guest into BIOS screen at Next Boot" checkbox in the guest settings. It was not successful.

    Edit 3: 500GB datastore with one 40GB VM on it. Plenty of space.

    Edit 4: Guests copied from my old ESXi 4 server DO NOT boot on the ESXi 5 system. Initially it complains about too little video RAM being configured for the default 2560x1600, but it still doesn't work properly even after I bump the video RAM settings or switch them to Auto-Detect.
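
    A hedged way to confirm from the ESXi shell (SSH/Tech Support Mode) that the guests really are running, independent of whatever the client console renders:

        vim-cmd vmsvc/getallvms            # list VM IDs
        vim-cmd vmsvc/power.getstate 1     # "1" is an example vmid; expect "Powered on"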

  • How to connect 2 routers (Asmax and D-link)

    - by piobyz
    I just bought a new router, a D-Link DSL-2641B, and want to connect it to another one provided by my ISP, an Asmax AR 804MP. Previously I had a Linksys WRT350N, and there was no problem: an Ethernet cable ran from one of the LAN ports on the Asmax to the INTERNET (RJ45) port on the Linksys, the connection used the PPPoE protocol, and it worked OK.

    The D-Link has a DSL (RJ11) port, which I don't want to use as an Asmax replacement, since there is a separate Ethernet cable with a TV plugged into the Asmax that I don't want to configure from scratch on the D-Link. How should I connect my new D-Link to work with the Asmax? Via the DSL port? Via one of the LAN ports (in which case I probably should change the purpose of that port in the config, I guess)?

    I tried connecting the D-Link both ways:

    - LAN (Asmax) to LAN (D-Link)
    - LAN (Asmax) to DSL (D-Link), using an RJ11-to-RJ45 cable

    I hope there is some setting in the D-Link's config that I overlooked. I haven't tried to see what's in the Asmax's config, but I guess I don't need to change anything there, since the Linksys worked just fine. The only difference I see is that the D-Link has an RJ11 DSL port as its WAN, while the Linksys has RJ45 (called INTERNET by them) as its main WAN port.

  • RHEL: prevent direct root login except through the system console

    - by zhaojing
    I have to configure a system to prevent direct root login except on the system console. That is, root login via telnet, FTP, and SSH is all prohibited; root can only log in at the console. I understand this requires me to configure the file /etc/securetty: comment out all the tty lines and keep only "console" in /etc/securetty.

    But from Google, I found many people saying that configuring /etc/securetty will not limit SSH logins, and my experiment confirms it (configuring /etc/securetty does not limit SSH logins). So I added one line to /etc/pam.d/system-auth:

        auth required pam_securetty.so

    With that, root SSH login does seem to be prohibited. But I can't find an explanation: what is the difference between configuring pam_securetty and /etc/securetty? Can anyone help me with this? Will configuring only /etc/securetty work, or do I have to configure pam_securetty at the same time? Thanks a lot!
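
    A sketch of the two layers side by side. The usual explanation: /etc/securetty is only data for pam_securetty.so, so it affects just the services whose PAM stack includes that module (e.g. login on the console/ttys); sshd's stack typically does not, which is why editing /etc/securetty alone doesn't stop SSH:

        # /etc/securetty - consulted by pam_securetty.so where that module is stacked:
        console

        # /etc/ssh/sshd_config - the standard, self-contained way to block root over SSH:
        PermitRootLogin no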

  • Administrative shares in Windows 7 Pro not visible

    - by Chris Tybur
    My desktop machine has a clean install of Windows 7 Professional. For some reason the standard administrative shares (Admin$, C$, D$, etc.) are not visible, either in Computer Management - Shared Folders - Shares or via net share. I also have a laptop with a clean install of Windows 7 Professional, and I can see the admin shares in both places. As such, I can map to \\laptop\c$ from the desktop, but I can't map to \\desktop\c$ from the laptop. I pretty much took the defaults during both Windows 7 installations.

    I've tried adding LocalAccountTokenFilterPolicy to the registry on the desktop, but that didn't work. On the desktop I've also disabled UAC, turned off the Windows firewall, removed it from its homegroup, and made sure file and printer sharing is turned on, but nothing has worked. There is some subtle difference between the two machines that I can't seem to find.

    I'm logging into both machines using a local account that is in the Administrators group. Both accounts have the same name and password. I really don't want to have to create a new share for the desktop's C drive, especially since C$ is visible and working on the laptop, so I should be able to make it work on the desktop. Any idea why the admin shares would work on one machine and not another? Or why LocalAccountTokenFilterPolicy would fail?
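
    One hedged check: automatic creation of the admin shares is governed by a LanmanServer registry value, so comparing it between the two machines is cheap. The path is the standard one, but verify before changing anything:

        net share
        reg query HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v AutoShareWks
        rem if the value exists and is 0, the admin shares are suppressed; to restore:
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v AutoShareWks /t REG_DWORD /d 1 /f
        net stop server && net start server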

  • Do eSATA HDD docking stations have a capacity limit?

    - by Michael Kjörling
    I'm looking at perhaps buying an eSATA docking station to be able to easily plug in and unplug hard disk drives, particularly (but not necessarily only) for backup purposes.

    Note: This is not a hardware shopping recommendation question. Please don't vote to close it as such.

    Looking at different models, I find for example this page detailing the Deltaco SI-7908SUS, which specifically states "storage capacity: 1.5 TB" as well as "pictured hard disk not included, only for illustration". A customer review specifically mentions that it does not work with 3 TB drives, although it does not go into any detail such as OS, drive model, etc. From a brief glance, the vendor's web site does not appear to say either way. Then there is the quite similar Deltaco SI-7908B3, which boasts on the box "all 2.5" and 3.5" HDD/SSD compatible".

    My question is: why would what basically amounts to a SATA/eSATA adapter have any say in what storage capacities are supported? Does it? Assuming the OS supports the full capacity of the drive, why should introducing another (not even a different, really) connector change anything?

    Bonus question: might it make a difference if the docking station exposes multiple interfaces (such as the SI-7908SUS exposing both USB 2.0 and eSATA)? (I still think it shouldn't, but it'd be nice to have it confirmed.)

  • Postfix character encoding?

    - by Camran
    I use Postfix as a mail server on Ubuntu, and I use PHP to send emails. The problem is that none of my emails are encoded properly by the mail software my VPS provider uses. According to them, the problem lies with me. It is only the name field that isn't encoded properly: for example, "Björn" becomes "Björn" in my emails. However, when I echo the $name, it outputs "Björn", which is correct. Also, Gmail and Hotmail do show it correctly. The strange part is that the text (the message itself) is encoded properly.

    I use the following for sending mail:

        $headers = "MIME-Version: 1.0"."\n";
        $headers .= "Content-type: text/plain; charset=UTF-8"."\n";
        $headers .= "From: $name <$email>"."\n";

        // I have tried with and without the line below; no difference
        $name = iconv(mb_detect_encoding($name), "UTF-8//IGNORE//TRANSLIT", $name);

        mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]');

    I have tried with and without the iconv line as well; no luck. The last thing I can think of is Postfix: could there be a setting for character encoding there? Anybody know?
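
    A minimal sketch of the usual fix: the From display name is a mail header, so like the subject it needs RFC 2047 encoded-word encoding; raw UTF-8 in a header is exactly what strict relays mangle. This assumes $name is already valid UTF-8:

        $encodedName = '=?UTF-8?B?' . base64_encode($name) . '?=';
        $headers  = "MIME-Version: 1.0\n";
        $headers .= "Content-type: text/plain; charset=UTF-8\n";
        $headers .= "From: $encodedName <$email>\n";
        mail($to, '=?UTF-8?B?' . base64_encode($subject) . '?=', $text, $headers);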

  • How to test a HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as:

        curl DOMAIN.TLD

    So, to isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a Host header, like this:

        curl IP_ADDRESS -H 'Host: DOMAIN.TLD'

    In my understanding, these two commands create the exact same HTTP request. The only difference is that in the latter I take the DNS lookup part out of cURL and do it manually (please correct me if I'm wrong). All well so far.

    But now I want to do the same for an HTTPS URL. Again, I could test it like this:

        curl https://DOMAIN.TLD

    But I want to specify the IP manually, so I run:

        curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'

    Now I get a cURL error:

        curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'.

    I can of course get around this by telling cURL not to care about the certificate (the "-k" option), but it's not ideal. Is there a way to isolate the IP address being connected to from the host being certified by SSL?
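
    For HTTPS, curl can pin the DNS answer itself so the TLS layer still sees the real host name. A sketch using --resolve; its availability (curl 7.21.3 and later) is an assumption about the installed version:

        # connect to IP_ADDRESS but negotiate SNI/certificates for DOMAIN.TLD
        curl --resolve DOMAIN.TLD:443:IP_ADDRESS https://DOMAIN.TLD/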

  • Erratic WiFi 2.4 GHz channel spikes, what gives?

    - by Francis W. Usher
    Sorry guys, first a gripe about my neighbor's WiFi access point (it is related): they totally hog the center nine 2.4 GHz channels (3-11), centered right at 7! I know the outer regions of the signal don't make as much of a difference, and technically they're running channels 5 & 9. Anyway, their signal is clearly interfering with mine, which is necessarily centered at 3 or 11 to evade their interference. I guess it's somewhat a case of access-point envy: they happen to have both a stronger signal and a higher data rate, while occupying twice the bandwidth that I do.

    Getting to the point, I've noticed that they tend to sit nice and pretty centered at 7, but they definitely auto-select their channel, and I've noticed that the auto-selection algorithm tends to shift towards the higher channels; hence I decided to pick channel 3, and I don't get so many intermittent lag spikes any more.

    Anyway, the thing that weirded me out was the reason they have to auto-select sometimes: unexplained, powerful (on the order of 0 dB here) giant spikes of 2.4 GHz activity in consistent regions of the spectrum. I don't think it's just noise, since my wireless monitoring software is registering a MAC address, a manufacturer, and usually a fairly coherent ASCII name... and it seems to be a fairly well-confined signal. But these signals are fairly common, and they do some weird stuff to my signal.

    So my questions: What are these signals? Where are they coming from? Where are they going? Why are they so ridiculously strong? Why don't they ever last very long?

    Here's an inSSIDer screenshot I took, for your perusal. I am labeled "me", my greedy neighbor is labeled "neighbor", and the two quasar signals are labeled "WTF?".

  • Windows 7 turns itself off every day at about 9am

    - by Radek
    I was given this computer at work, and after a week or so a strange thing started to happen in the middle of my work: it turns itself off at about 9:10am every morning. Just once a day, and it works fine after its little nap. It happens whether the computer was on all night or I turned it off before leaving the office.

    I tried swapping the memory, as I was told there had been an issue with memory, but moving or removing any memory module did not make any difference. I am not aware of installing any program that could cause this. I installed AVG and set it to do an everyday scan at about 8am; if a restart is needed, it requires user confirmation. 'The Software Protection service has stopped' is logged a few minutes before the machine turns itself off, but that message has also appeared at other times without the computer turning off. I turned off Windows automatic updates as the first thing when I got the computer.

    Configuration: Windows 7 Ultimate, Intel Core2 Quad Q9550 at 2.8GHz, 8GB RAM, ST31000528AS Barracuda 7200.12 SATA 3Gb/s 1TB.

    Update: The messages below are logged when I turn the computer on again.

    Message 1: The previous system shutdown at 9:08:54 AM on 6/29/2010 was unexpected.

    Message 2: Level: Critical. Source: Kernel-Power. Event ID 41, Task Category (63). The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

    (Screenshot: log after manual restart.)

    Update 2: (screenshot of Task Scheduler)
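
    Given the Task Scheduler suspicion, some hedged first checks: list tasks with their run times and pull the recent Event ID 41 records the log already shows (the findstr pattern is only a crude filter for ~9am times):

        schtasks /query /fo LIST /v | findstr /i "TaskName 9:0"
        wevtutil qe System /q:"*[System[(EventID=41)]]" /c:5 /f:text /rd:true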

  • 5.1 Surround Channels are Jumbled

    - by stickynips
    I had this exact setup working previously, but after a reformat it went screwy on me. I have an Onkyo A/V receiver hooked up to my PC via optical S/PDIF. Attached to the receiver is a 5.1 speaker setup (tested and working fine with my Xbox via the receiver). It seems to me that the audio channels are getting mixed up somehow between the PC and the receiver. I have a 5.1 test file which plays a sound through each speaker individually. The channels are mixed up as follows:

    - "Left Front" plays through my Right Front speaker
    - "Center" plays through my Left Front speaker
    - "Right Front" plays through my Center speaker
    - "Left Rear" plays through my Subwoofer
    - "Right Rear" plays through my Left Rear speaker

    I've tried downloading the latest Realtek HD Audio drivers and the Realtek HD Audio Manager, but neither makes any difference. If there were a way to manually rearrange the channels, I believe it would fix the problem, but as far as I know that is impossible.

    Edit: Sorry, I've forgotten some basic info. I'm running Windows 7 x64. The sound card is a Realtek ALC892 embedded in a GIGABYTE GA-890GPA-UD3H AM3 motherboard.

  • AMD Phenom II X2 555 BE, core unlock suddenly not working?

    - by user328271
    I've had a Phenom II X2 555 BE for around 2-3 years. When I got it, I immediately core-unlocked it with my ECS A880GM-A3 mobo, which turns it into a Phenom II X4 B55. A few days ago, I installed Windows 7 64-bit so I could make use of my 4 gigs of RAM. Now, when I start my system with its cores unlocked, it restarts after the BIOS screen. If I disable the core unlock, it boots to the OS just fine.

    My questions: Does a 64-bit OS make a difference in core unlocking? Have my 3rd and 4th cores burnt out? Could it be a motherboard problem? Also worth mentioning: I tried core unlocking while keeping the 3rd and 4th cores disabled, and it still won't boot into the OS.

    Sorry for the bad English; I'm no computer expert, but I tried to keep my explanation as short as possible. I will try to give additional information if needed. Thanks! I also asked this question on Tom's Hardware, but I've had no answer so far, so I decided to ask here too. Anyone...?

  • Device CAL, User CAL, or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard Edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard Edition, which is fine. We also have 4 servers running Standard Edition processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition.

    We have 2-3 DBAs that access these DW servers for maintenance (using the same Windows login via remote desktop). So visually:

        40    --  3           --  [2]                       --  2           --  1
        nodes --  aggregators --  raw (which we want on EE) --  calculators --  datawarehouse

    Nodes PUSH to aggregators, raws PULL from aggregators, calculators PULL from raws, and calculators PUSH to the datawarehouse. I am specifying push vs. pull in case that changes how the number of licenses is calculated.

    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK under our license terms)?
    Q3) What happens if another device tries to access a server with a limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices, or a total?
    Q5) Do device and user CALs cost the same, or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

  • Photoshop CS4. How do I make sure my color stays the same in my different .psd files? (could be RGB-related)

    - by Kris
    I asked two Photoshop experts I know, but they haven't got a clue, because all my settings are exactly the same in both files (except perhaps the RGB type; I'm not sure, please read on). I use RGB color, 72 DPI, 8 bits/channel. No adjustments (filters, like greyscale, etc.) are selected or used. The layers are both normal, and opacity and fill are 100% (yes, in both files).

    I took two screenshots, and you can see the difference:

    - http://www.flickr.com/photos/30465871@N05/4623864297/
    - http://www.flickr.com/photos/30465871@N05/4624469754/

    Both colors are ff795d, but that doesn't matter; any color I use gives me the same problem: the two files look different. Now, I know the CMYK settings (see screenshots) are different, but even when those settings are the same the color changes. Why is this happening, and how do I solve this problem?

    My guess is that I'm working with a different type of RGB. It's sRGB IEC61966-2.1 in the file I created (File Info, raw data), but I can't find that in the file that started with a screenshot. If that's the problem, how do I change or convert the RGB once the file is already open? Thank you.
