Search Results

Search found 18329 results on 734 pages for 'interpret order'.


  • How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7?

    - by Timo
    The question says it all, but more detail follows. ;) I've got a new computer running Windows 7 64-bit (Home Edition), and I'd like to connect it to my wireless home network (a Sitecom Wireless Gigabit Router 300N WL-352 v1 002) with a Sitecom Wireless USB Micro Adapter 300N WL-352 v2 001.

    After installing the router (i.e. connecting it to the modem and power) and ensuring that wireless is indeed enabled, I installed the driver for the USB adapter on the new computer described above. After the installation (drivers and utility from the CD) completed successfully, I rebooted the computer and inserted the USB adapter. After discovering the right network and connecting to it with the network key, a connection was successfully made (using the Sitecom 300N USB Wireless LAN utility).

    In the LAN utility I can see that the signal strength is approximately 50% and the connection quality is approximately 80%. Judging from these numbers I assumed all was fine and started to use the connection (reading news on nu.nl, a Dutch news site), but I noticed that the connection was lost several times in a very short time span. Each time the connection was resumed, returning to the 50/80 percent numbers described above, but the website was not loaded completely and often a timeout would be reported. When inspecting the drivers through Device Management ("Apparaatbeheer" in the Dutch version of Windows) there were no errors or warnings; everything seemed to be in order. In an attempt to solve this, I downloaded the latest drivers for the USB adapter, but the problems remained.

    Finally I tried to connect the computer with a Siemens Gigaset USB Adapter 108. This process was troublesome, since I had to download a driver (from the site above) and tell Windows 7 to use the Windows Vista driver when installing the new hardware, as there is (was) no Windows 7 driver available. This resulted in a usable connection, although not a very stable one, until I reconfigured the router: I selected a different wireless channel, using the Sitecom utility mentioned above to check which channels other networks were communicating on (and thus picked a channel not used by other networks). Changing back to the Sitecom USB adapter again gave no result. Note that this means (I think) that I could use the internet connection with the Siemens adapter, so the problem is not in the router.

    So: how do I get wireless working (properly) with the Sitecom Wireless USB Micro Adapter 300N on Windows 7?

    P.S. Sorry — I had links in place for the USB adapter, the router and the Siemens adapter, but I'm not (yet) allowed to post them... (The site says I can post one link, but it will only let me post the question when no links are present...)

    Read the article

  • mac + parallels and https site test = router restarts

    - by Erik
    OK, I have an interesting and very frustrating problem happening. I'm going to explain it the best I can.

    I work as a graphic designer and web designer on a Mac, and I have a Comcast internet connection that comes through a Comcast-branded router (SMC8014), which then ties into an AirPort Extreme Base Station that runs my office network. I run OS X 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. OK, so that's the basic background.

    Here's my issue. I've been collaborating on an ecommerce website with another designer/developer, and when testing the site on the PC side we have started to run into some sort of network problem. The site is https, if that matters at all; I suspect it may. Basically, when I run Parallels for testing, I am constantly having to restart the router in order to connect to the test site (it's hosted). The funny thing is I can access the rest of the internet fine, just not this site I'm working on, until I restart the router (it's sort of like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making CSS or HTML edits via something like Coda or CSSEdit (connected to the hosting server via FTP). The real problem is that once the problem starts, I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling. I cannot get any work done when I have to restart the router every couple of minutes.

    And if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple of miles away and experiences very similar problems under a slightly different setup. He also has Comcast as his internet provider, connects his router to an AirPort, and primarily works on a Mac. The main difference is that rather than using a virtualizer like Parallels to test the website on the PC, he uses a real live PC on his network. Once he fires up the PC to do testing, he runs into the same issue described above: after a couple of page refreshes in Internet Explorer or another browser on the PC, the site becomes unresponsive and the router has to be restarted.

    Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I've been getting some wild ideas over the last few days, like putting some operating systems onto SD cards rather than on my hard drive. I'll go into the details now and explain what led me to consider this probably abominable decision.

    I am on a laptop (which means I have a native SD card reader) that is currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside a huge extended one) managed by a single GRUB. As of today, my laptop hasn't even seen a Windows system through binoculars. I was thinking about placing the whole OS part of my setup onto a Secure Digital card, to save all of my 500 GB hard drive for documents, music, videos and so on, and to be able to just remove the SD card and boot my system on another computer too, as well as having the possibility of booting other systems on mine by just plugging in another SD card, without having to keep it permanently in my PC. Also, in the remote case that in the near future I want to boot Windows 8 on it: I've read that it causes major boot incompatibility issues with other systems by requiring a digital signature in order for them to start. By keeping it on a removable drive, I could just set it aside when I don't need it and swap its card for the Linux one, leaving no obstacles to their boot.

    Now, my questions are:

    1. I know that, unlike traditional rotating disk drives, flash-based ones have a limited lifespan in terms of cluster rewrites. Is that an obstacle to this kind of usage? I mean, some ultrabooks use SSDs now — is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Maybe having them store system files that sit in fixed positions (defeating wear leveling), constantly being re-read and updated, just wears them out quickly — doesn't it?

    2. Are all motherboards and BIOSes able to boot from SD cards just like they are from USB pen drives (provided the card reader is USB-connected)? Or can bootloaders like GRUB not work when installed on SD cards? If they can't, would installing GRUB to the MBR and pointing its boot entry at the SD card be a solution? Will that work? Are there any other problems with installing OSes on a Secure Digital card?
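
    Regarding the second question, GRUB itself doesn't care that the target is an SD card, as long as the firmware exposes the card as a disk at boot time. A minimal sketch of installing GRUB onto the card itself, assuming the card shows up as /dev/sdb with its root partition as /dev/sdb1 (these device names are hypothetical — double-check with lsblk before writing to anything):

        # mount the card's root partition, then install GRUB to the card's own MBR
        sudo mount /dev/sdb1 /mnt/sdcard
        sudo grub-install --boot-directory=/mnt/sdcard/boot /dev/sdb

        # generate a grub.cfg on the card so its GRUB finds the systems on it
        sudo grub-mkconfig -o /mnt/sdcard/boot/grub/grub.cfg

    With GRUB on the card's own MBR, whether a given machine can boot it comes down purely to whether its BIOS offers the card reader as a boot device — which is exactly the part that varies between motherboards.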

    Read the article

  • Sony VGN-NR260E "External Device Boot"

    - by user72158
    [A LITTLE BACKGROUND]

    On all modern Dell computers, pressing F12 during BIOS boot brings up a screen that lets you choose the boot option you need. For example, if I want to boot off a USB flash drive into a live Linux distribution in order to clean viruses on netbooks that have no CD drive to boot from, I press F12 and choose the USB device from the list of options. If it does not show up, I can always go into the F2 BIOS setup and make the flash drive the first boot option; when I restart the computer, it boots from the flash device.

    I understand that I could purchase an external USB CD drive and boot from that, but I do not want to use that option. The reasons for using a flash device instead of a CD are: (a) this USB flash device has several different boot OSes on it that all get used, and (b) the antivirus disks are updated often, and burning CDs and throwing the old ones away is wasteful compared to simply updating a flash drive. There is nothing wrong with the flash drive; it works perfectly on many other PCs.

    [PROBLEM]

    Booting this flash drive has been working for years on hundreds of computers... I just have this ONE computer that I cannot figure out how to boot from it. I have a Sony Vaio that will not boot from this device. I've tried pushing every key combination I can think of (F12, Esc, Del, F10...) and none of them brings up the boot menu. I pressed F2, went into the BIOS, and changed the first boot device to the USB flash device. This did not work either. There is an asterisk next to the device, and the note states: "This Drive is available when External Device Boot is Enable."

    [WHAT I NEED]

    I need to know how to enable External Device Boot on the Sony Vaio VGN-NR260E laptop, OR how to bring up the boot menu so I can boot from a flash device. Thanks to anyone who can help!

    Read the article

  • BIND DNS server (Windows) - Unable to access my local domain from other computers on LAN

    - by Ricardo Saraiva
    I have a BIND DNS server running on my Windows 7 development machine, and I'm serving pages with WampServer. My idea is to develop some tools (in PHP) for my intranet at work, and I want them to be accessible over the LAN in this format: http://tools.mycompany.com

    I've already set up BIND, and I can access http://tools.mycompany.com on the machine that hosts BIND, but I cannot access it from other LAN computers. I've done the following on my router:

    - defined static IPs for all LAN computers
    - set port forwarding to my server (remember: it serves DNS and web pages)
    - pointed the router's DNS server setting at my LAN server

    On the LAN computers, I went into the Local Area Network properties and also changed the DNS server IP to point to my local DNS server. If it helps, here is my named.conf file:

        options {
            directory "c:\windows\SysWOW64\dns\etc";
            forwarders { 127.0.0.1; 8.8.8.8; 8.8.4.4; };
            pid-file "run\named.pid";
            allow-transfer { none; };
            recursion no;
        };

        logging {
            channel my_log {
                file "log\named.log" versions 3 size 2m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { my_log; };
        };

        zone "mycompany.com" IN {
            type master;
            file "zones\db.mycompany.com.txt";
            allow-transfer { none; };
        };

        key "rndc-key" {
            algorithm hmac-md5;
            secret "qfApxn0NxXiaacFHpI86Rg==";
        };

        controls {
            inet 127.0.0.1 port 953
                allow { 127.0.0.1; } keys { "rndc-key"; };
        };

    ...and the single zone I've defined, in the file db.mycompany.com.txt:

        $TTL 6h
        @       IN SOA  tools.mycompany.com. hostmaster.mycompany.com. (
                        2014042601 10800 3600 604800 86400 )
        @       NS      tools.mycompany.com.
        tools   IN A    192.168.1.4
        www     IN A    192.168.1.4

    In the file above, 192.168.1.4 is the IP of the local machine inside my LAN. Can someone help me here? I need my web pages to be accessible from other computers inside my LAN using my custom domain name. I've tried from other computers, and they can access my server via http://192.168.1.4/, but not via http://tools.mycompany.com. Please consider the following: I'm completely new to BIND, and I have only basic knowledge of Apache configuration. Thanks a lot for your help.
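
    A quick way to narrow this down is to test name resolution directly against the BIND machine from one of the other LAN computers. This is just a hedged diagnostic sketch (run from a client's command prompt, with 192.168.1.4 taken from the question):

        rem query the BIND server explicitly, bypassing whatever DNS the client normally uses
        nslookup tools.mycompany.com 192.168.1.4

    If that query returns 192.168.1.4, BIND is answering and the clients are simply not using it as their resolver (check ipconfig /all on the client); if it times out, the Windows 7 machine's firewall is likely blocking inbound port 53 (UDP and TCP) from the LAN.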

    Read the article

  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching, for no good reason, by setting all the anti-cache headers:

        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Content-Type: text/html; charset=UTF-8
        Date: Thu, 22 May 2014 08:43:53 GMT
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Last-Modified:
        Pragma: no-cache
        Set-Cookie: ECSESSID=...; path=/
        Vary: User-Agent,Accept-Encoding
        Server: Apache/2.4.6 (Ubuntu)
        X-Powered-By: PHP/5.5.3-1ubuntu2.3

    If at all avoidable, I do not want to modify this third-party piece of code at all; I'd rather just get Apache to cache the page for a while. I'm doing this very selectively, only for very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information.

        CacheDefaultExpire 600
        CacheMinExpire 600
        CacheMaxExpire 1800
        CacheHeader On
        CacheDetailHeader On
        CacheIgnoreHeaders Set-Cookie
        CacheIgnoreCacheControl On
        CacheIgnoreNoLastMod On
        CacheStoreExpired On
        CacheStoreNoStore On
        CacheLock On

        CacheEnable disk /the/script.php

    Apache is caching the page all right:

        [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php?
        [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php?
        [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php?
        [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers.
        [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php
        [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php
        [cache:debug] AH00769: cache: Caching url: /the/script.php
        [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter.
        [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached.

    However, it always insists that the "cached response isn't fresh" and never serves the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try, the cache is not impressed at all. I'm guessing that the order of operations is wrong and the headers are being rewritten after the cache sees them. Early header processing doesn't help either. I've experimented with CacheQuickHandler Off and with setting explicit filter chains, but nothing is helping. I'm really mostly poking in the dark here, as I do not have a lot of experience with configuring Apache filter chains.

    Is there a straightforward solution for how to cache this obnoxious piece of code?

    Read the article

  • Start a software company offshore

    - by Mascarpone
    Hello everybody,

    I own a small, very young, EU-based (Italian) company, and among other things we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications.

    You could say I'm an enlightened entrepreneur, because I work only with open source software (OS, IDE, I release under BSD... everything is free as in freedom), I give high importance to after-sales service and customer satisfaction, and I think I'm the best boss anyone could wish for (LOL), as I have Google in mind when I think about IT workers' rights. But the most beautiful thing is that, although everybody advised us not to use open source, we are quite profitable!!! (for the sixth trimester in a row).

    Right now I offshore most of the work to an Indian company. I divide the work into modules and outsource the longer or more trivial ones. I spend a lot of time defining the specifications and leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits, I think my software has reached a very good quality level.

    I would now like to start my own software development company, in order to improve control over the process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need is:

    1. Cheap labor - I can afford to pay productivity bonuses and higher-than-average wages and stay profitable precisely because labor is cheap.
    2. Plenty of talent - I need a good level of tertiary education and a good number of graduates, so I can hire junior developers and train them according to my needs and philosophies (e.g. an open source mindset).
    3. Good infrastructure - buildings, transport, internet... everything a company might need.

    I thought of three possible candidates:

    1. India - I already work with Indian people; I know they are reliable and speak good English. The big cities are too expensive, but maybe a smaller city like Lucknow (http://en.wikipedia.org/wiki/Lucknow) could suit my needs.
    2. China - They say it's cheaper than India, but every time I've worked with a Chinese company the language was a big barrier. They work hard and are somewhat skilled and cheap, but it may be a risky path. Plus, I feel a little uncomfortable with the lack of human rights there.
    3. The Philippines - Same story as China: cheaper than India, but perhaps less well educated.

    Where do you think is the best place to start a software company? Any reading or books to recommend? Thank you very much.

    Read the article

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes, what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit to the Top500 supercomputer list?

    To give you an idea of the kind of answers I would appreciate, here are some sub-questions:

    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) in the HPL.dat file (without spending too much time trying every possible permutation - especially with large problem sizes N)?
    - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
    - Which MPI product, and which version? Does it make a difference?
    - Any special host order in your MPI machine file? Do you use CPU pinning?
    - How do you configure your interconnect? Which interconnect?
    - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    - How do you prepare for the big run (on all nodes)? Do you start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK on all of the nodes at once (or is extrapolation allowed)?
    - How do you optimize for the latest Intel/AMD CPUs? Hyper-Threading? NUMA?
    - Is it worth it to recompile the software stack, or do you use precompiled binaries? Which settings? Which compiler optimizations, which compiler? (What about profile-based compilation?)
    - How do you get the best result given only a limited amount of time for the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults (ruining a huge run)?
    - Are there any must-read documents or websites about this topic? E.g. I would love to hear the background stories of some of the current Top500 systems and how they did their LINPACK benchmark.

    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations, because I don't want to limit the answers. However, feel free to mention hints for specific CPU models.
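
    For readers unfamiliar with it, HPL.dat is a fixed-format input file where each line is a value followed by a free-text description. A truncated sketch with purely illustrative numbers (the usual starting points are N ≈ sqrt(0.8 × total memory in bytes / 8) and P × Q equal to the total MPI process count, with P ≤ Q):

        HPLinpack benchmark input file
        Innovative Computing Laboratory, University of Tennessee
        HPL.out      output file name (if any)
        6            device out (6=stdout,7=stderr,file)
        1            # of problems sizes (N)
        100000       Ns
        1            # of NBs
        192          NBs
        0            PMAP process mapping (0=Row-,1=Column-major)
        1            # of process grids (P x Q)
        16           Ps
        32           Qs

    The real file continues with threshold and algorithm parameters (panel factorization, broadcast, lookahead, etc.); the lines above are only the sizing knobs the first sub-question asks about.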

    Read the article

  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in software engineering). Fortunately my school is on good terms with Microsoft, so I can get pretty much anything Microsoft sells, and I can get IBM WebSphere and the like for free as well.

    Earlier this year I set up an oldish computer (2.6 GHz Pentium D, x64) to run Ubuntu Server headless. I'm predominantly a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc. made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp-up). Anyway, I've started to pick up C# to complement my Java knowledge (don't judge me :P), and I'm interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install as well as Windows 7 Ultimate. I do all of my actual development work on my laptop, which also runs Windows 7 Ultimate.

    I was wondering what software you would recommend putting on the machine. I'm not actually serving anything off the machine itself, but under Ubuntu I had it running integration tests with Hudson on every commit, profiling my applications, and so on. The machine would run headless, and I would remote into it. Here is what I am currently leaning towards / wondering about:

    - Windows 7 Ultimate vs. Windows Server 2008 (R2) (no one has given me a clear reason to pick one over the other)
    - Team Foundation Server
    - SharePoint (never used it before; kind of meh about it)
    - IBM WebSphere or GlassFish (some Java EE application server)
    - SQL Server 2008
    - A DVCS

    In order to better control product conflicts and limit resource use, I'm wondering whether I should install things into virtual machines (I can get VMware or Microsoft's virtualization products). I also plan on installing everything I had running under Linux (it's almost entirely Java-based development software, so it'll run on both; the only reason I went with Ubuntu during the year was that the Apache build seemed better).

    I'm primarily looking to become familiar with enterprise software development tools, as well as to get something functional that will help my development process. (I.e., I'll still use Project and assign tasks, even if I might be the only one to assign tasks to, just for the practice.) Is there any other software or configuration detail I should explore? Opinions on my current list? I primarily use C#, Java, and PHP; I'm familiar with Ruby and Python as well. Thanks!

    Read the article

  • Random Computer Crashes

    - by Josh W.
    OK, here's a weird one for you all. Occasionally my PC here at work will crash in a very peculiar way: my dual monitors suddenly go blank as if there is no longer a video signal, the USB mouse light goes dark and the mouse stays unresponsive, and the keyboard lights will not change status when the appropriate keys are pressed (Num/Caps/Scroll Lock). The CD tray WILL open and close, but the computer will not respond to a ping request. For all intents and purposes it's as if the computer is off, except it wasn't intentional on my part: the power light and internal fans are still on, and I've now lost any unsaved work.

    Now here's where it gets weird. This PC is part of a batch of PCs we got from a local vendor who does our initial system builds. Mine and six other co-workers' PCs all have the same issue. Originally we thought it was a bad combination of hardware, but through trial and error the only things we haven't eliminated are the OS, motherboard and CPU. The problem was so bad for some of them that they ended up going back to their five-year-old dinosaurs in order to get some work done. For me the problem isn't as bad - maybe once every other day or so - but still enough to bite me in the ass if I've forgotten my ritualistic pressing of Ctrl-S every 1-5 minutes.

    We've tried two different video cards, two different power supplies, two different memory configurations, running on a UPS and not on a UPS, updating and rolling back video drivers, and three different BIOS revisions. The only things we haven't swapped are the motherboard and CPU, mainly because a new motherboard means a new Hardware Abstraction Layer, i.e. a re-install of Windows, and there's a lot of other software on this PC that takes forever to reload by hand. There was a base image that our systems team created with all the drivers installed and the basic set of software our company uses, but they then have to customize the setup for us programmers, so it takes a while to get a new configuration up and going.

    I'm a programmer by day and am usually pretty good at diagnosing computer problems, whether through trial and error or not. We've pretty much exhausted all the ideas we can think of here, short of a new motherboard/CPU. I was hoping someone out there might have anything else we can try.

    Relevant parts:

    - OS: XP Pro 32-bit
    - Motherboard: Intel DG41RQ
    - CPU: Intel Core 2 Quad Q9400 @ 2.66 GHz
    - Current BIOS version/date: Intel Corp. RQG4110H.86A.0014.2010.0306.1151, 3/6/2010
    - Dual LCDs: ViewSonic VG930m & Samsung SyncMaster 910v (other people have different models, but listed in case there's some very weird problem with the signals being sent/received)
    - PS/2 keyboard
    - USB Microsoft IntelliMouse
    - BIOS versions tried: R 0013 (12/23/2009), R 0014 (3/6/2010)
    - Video cards tried: GeForce 8400 GS; Radeon HD 4350 (ASUS EAH4350)
    - Two different power supplies: a 380 W and a 550 W
    - RAM configurations: 2 GB (1 x 2 GB); 4 GB (2 x 2 GB)

    Read the article

  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache receives requests on port 80 and proxies them to Jetty on port 8080. The error reads: "The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /."

    My dilemma: everything works fine normally (fast requests, and requests a few seconds or a few tens of seconds long are processed OK). Problems occur when request processing takes long (a few minutes?). If I instead issue the request directly to Jetty on port 8080, the request is processed OK. So the problem likely sits somewhere between Apache and Jetty, where I am using mod_proxy. How do I solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration; any suggestions?

        #keepalive Off                     ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1  ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1        ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1 ## I have tried this, does not help
        KeepAlive 20                       ## I have tried this, does not help
        KeepAliveTimeout 600               ## I have tried this, does not help
        ProxyTimeout 600                   ## I have tried this, does not help

        NameVirtualHost *:80
        <VirtualHost _default_:80>
            ServerAdmin [email protected]
            ServerName www.mydomain.fi
            ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com

            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
            ProxyPassReverse / http://www.mydomain.fi:8080/

            RewriteEngine On
            RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
            RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]

            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
        </VirtualHost>

    Here is also the log from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::

    Read the article

  • Apache won't follow Symlink

    - by Marvin Dickhaus
    I have a LAMP server (Ubuntu 12.10) set up on my development machine, a T60 modified with an SSD. The server's base is in /var/www. Apache has the following config:

        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews SymLinksIfOwnerMatch
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>

    I'm currently developing a site with the SilverStripe CMS. The folder for the site is /var/www/sfk/. The framework and all CMS-relevant features are in their respective folders; the only folder that needs to be modified is /var/www/sfk/mysite. Because of that, I want to keep the mysite folder under my home directory and symlink it into the server folder. So here is what I've done:

        ln -s ~/sfk/mysite/ /var/www/sfk/
        sudo chgrp www-data /var/www/sfk/mysite -R

    ls tells me the following (excerpt of /var/www/sfk):

        drwxr-xr-x  3 marvin www-data 4096 Nov 16 16:53 assets
        drwxr-xr-x 12 marvin www-data 4096 Nov 16 16:53 cms
        drwxr-xr-x 29 marvin www-data 4096 Nov 16 16:53 framework
        -rw-r--r--  1 marvin www-data 2410 Nov 16 16:53 index.php
        lrwxrwxrwx  1 marvin www-data   24 Nov 20 17:45 mysite -> /home/marvin/sfk/mysite/
        -rw-rw-r--  1 marvin www-data  514 Nov 16 16:55 _ss_environment.php
        drwxr-xr-x  4 marvin www-data 4096 Nov 16 16:53 themes

    ...and ls /var/www/sfk/mysite/:

        drwxrwxr-x 6 marvin www-data 4096 Nov 16 00:15 code
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 11:51 _config
        -rwxrwxr-x 1 marvin www-data 2685 Nov 16 15:39 _config.php
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 css
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 images
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 javascript
        drwxrwxr-x 5 marvin www-data 4096 Nov 16 00:15 templates

    This is literally the same setup I have on my desktop machine. The problem is that the mysite/ folder is simply not recognized. I'm thankful for any advice; I'm frustrated because I've been stuck on this issue for hours.
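
    One thing worth ruling out (a hedged guess, not a confirmed diagnosis): for Apache to follow the symlink, the www-data user must be able to traverse every directory on the target's path, and home directories are often mode 750 or 700. A quick check and candidate fix:

        # check whether www-data can even reach the target
        sudo -u www-data ls /home/marvin/sfk/mysite/

        # if that fails with "Permission denied", grant execute (traverse) rights
        chmod o+x /home/marvin
        chmod o+x /home/marvin/sfk

    Separately, with both FollowSymLinks and SymLinksIfOwnerMatch listed in Options, the owner-match restriction may apply depending on how the options merge — link and target must then belong to the same user, which here they do (both marvin), so the traverse permission is the more likely culprit.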

    Read the article

  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, so consider the following example. Let's start a repository in ~/tmp/repo:

        $ git init

    Then add a file foo:

        $ echo "hello world" > foo

    which is then added and committed:

        $ git add foo
        $ git commit -m "Added foo"

    Next, I started a remote repository. In ~/tmp/bare.git I ran:

        $ git init --bare

    In order to link repo to bare.git I ran:

        $ git remote add origin ../bare.git/
        $ git push --set-upstream origin master

    Next, let's branch, add a file, and set an upstream for the new branch b1:

        $ git checkout -b b1
        $ echo "bar" > foo2
        $ git add foo2
        $ git commit -m "add foo2 in b1"
        $ git push --set-upstream origin b1

    Now it is time to switch back to master and change something there:

        $ echo "change foo" > foo
        $ git commit -a -m "changed foo in master"
        $ git push

    At this point, in master the file foo contains "change foo", while in b1 it is still "hello world". Finally, I want to sync b1 with the progress made in master:

        $ git checkout b1
        $ git fetch origin
        $ git rebase origin/master

    At this point git status returns:

        # On branch b1
        # Your branch and 'origin/b1' have diverged,
        # and have 2 and 1 different commit each, respectively.
        #   (use "git pull" to merge the remote branch into yours)
        #
        # nothing to commit, working directory clean

    The content of foo in branch b1 is now "change foo" as well. So what does this warning mean? I expected that I should do a git push, yet git suggests doing a git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly says that I don't need to do a pull. What's going on here? What is the danger, and how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation from the Pro Git book:

        Do not rebase commits that you have pushed to a public repository.

    I guess it is somehow related, and if not I would love to know why. What's the relation between the above scenario and the procedure I described in this post?
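
    For what it's worth, the usual way out of this exact situation (a rebased branch whose pre-rebase tip was already pushed) is to overwrite the remote branch rather than pull the old commits back in. A hedged sketch — only sensible here because nobody else is working on b1:

        # replace origin/b1 with the rebased local b1;
        # --force-with-lease refuses to overwrite remote commits you haven't fetched
        $ git push --force-with-lease origin b1

    This is precisely why the Pro Git warning exists: anyone who had based work on the old b1 commits would be left stranded by the forced update — harmless in a single-user example like this one, painful on a genuinely shared branch.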

    Read the article

  • Nginx fastcgi problems with django

    - by wizard
    I'm deploying my first Django app. I'm familiar with nginx and FastCGI from deploying PHP-FPM, but I can't get Python to recognize the URLs, and I'm at a loss on how to debug this further. I'd welcome solutions to this problem as well as tips on debugging FastCGI problems.

    Currently I get a 404 page regardless of the URL, and for some reason a double slash. For http://www.site.com/admin/:

        Page not found (404)
        Request Method: GET
        Request URL:    http://www.site.com/admin//

    My urls.py, from the debug output - these patterns work in the dev server:

        Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order:
        ^listings/
        ^admin/
        ^accounts/login/$
        ^accounts/logout/$

    My nginx config:

        server {
            listen 80;
            server_name beta.ahrlty.com;
            access_log /home/ahrlty/ahrlty/logs/access.log;
            error_log /home/ahrlty/ahrlty/logs/error.log;

            location /static/ {
                alias /home/ahrlty/ahrlty/ahrlty/static/;
                break;
            }
            location /media/ {
                alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/;
                break;
            }
            location / {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:8001;
                break;
            }
        }

    ...and my fastcgi_params:

        fastcgi_param QUERY_STRING      $query_string;
        fastcgi_param REQUEST_METHOD    $request_method;
        fastcgi_param CONTENT_TYPE      $content_type;
        fastcgi_param CONTENT_LENGTH    $content_length;
        fastcgi_param SCRIPT_NAME       $fastcgi_script_name;
        fastcgi_param REQUEST_URI       $request_uri;
        fastcgi_param DOCUMENT_URI      $document_uri;
        fastcgi_param DOCUMENT_ROOT     $document_root;
        fastcgi_param SERVER_PROTOCOL   $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE   nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR       $remote_addr;
        fastcgi_param REMOTE_PORT       $remote_port;
        fastcgi_param SERVER_ADDR       $server_addr;
        fastcgi_param SERVER_PORT       $server_port;
        fastcgi_param SERVER_NAME       $server_name;
        fastcgi_param PATH_INFO         $fastcgi_script_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS   200;

    And lastly, I'm running FastCGI from the command line with Django's manage.py:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    I'm having a hard time debugging this one. Does anything jump out at anybody?

    Read the article

  • Is this distributed database server idea feasible?

    - by David
    I often use SQLite for creating simple programs in companies. The database is placed on a file server. This works fine as long as there are not more than about 50 users working against the database concurrently (though it depends on whether the operations are reads or writes). Once there are more than that, users notice a slowdown whenever there is a lot of concurrent writing, as lots of time is spent on locks — and there is nothing like a cache, because there is no database server.

    The advantage of not needing a database server is that the time to set up something like a company wiki can be reduced from several months to just days. It often takes several months because some IT department needs to order the server, it needs to conform to the company policies and security rules, and it needs to be placed at the outsourced server hosting facility, which screws up and places it in the wrong location, etc. etc.

    Therefore, I had an idea for a distributed database server. The process would be as follows: a user on a company computer edits something on a wiki page (which uses this database as its backend). To do this, he reads a file on his local hard disk stating the IP address of the last desktop computer to act as the database server. He then tries to contact that computer directly via TCP/IP. If it does not answer, he reads a file on the file server stating the IP address of the last desktop computer to act as the database server. If that server does not answer either, his own desktop computer becomes the database server and registers its IP address in the same file. The SQL update statement can then be executed, and other desktop computers can connect to his machine directly.

    The point of this architecture is that the higher the load, the better it functions, as each desktop computer will always know the IP address of the current database server. Also, using this setup, I believe that a database placed on a file server could serve hundreds of desktop computers instead of the current 50 or so. I also do not believe that the load on the single desktop computer that has become the database server would ever be noticeable, as there would be no hard disk operations on that desktop, only on the file server.

    Is this idea feasible? Does it already exist? What kind of database could support such an architecture?
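
    To make the discovery protocol concrete, here is a minimal sketch in Python of the client-side logic described above (the file names, port, and timeout are hypothetical placeholders; real code would also need locking on the shared hint file to avoid two clients promoting themselves at once):

        import socket

        DB_PORT = 9999  # hypothetical port the desktop "server" would listen on

        def find_db_server(local_hint, shared_hint, my_ip):
            """Return the IP of a reachable database server, following the order
            described above: local hint file, then the hint file on the file
            server, then promote ourselves."""
            for hint_file in (local_hint, shared_hint):
                try:
                    ip = open(hint_file).read().strip()
                    # try to contact the last known server directly
                    with socket.create_connection((ip, DB_PORT), timeout=2):
                        return ip
                except OSError:
                    continue  # missing/stale hint file, or the server is gone
            # nobody answered: become the server and register our address
            with open(shared_hint, "w") as f:
                f.write(my_ip)
            return my_ip

    The design question the sketch exposes is the race between two clients that both fail to connect and both write their own IP to the shared file — exactly the kind of coordination problem a real distributed database has to solve.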

    Read the article

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but I get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software's installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear, and the software then stops working.

    I can verify that the user rights are gone by running:

        secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS

    I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state — i.e. after I install the software and it's working correctly — the SeBatchLogonRight line in the secedit export includes the user accounts created by the software. But every few hours (sometimes more often), those user accounts are removed from that line. The same thing can be seen in the GUI tool under Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state that policy includes the needed user accounts, and in the bad state it does not.

    Based on the above-mentioned logging script and the timestamps at which the user rights are removed, I can see clearly that some GPO is causing the change. The GPO Operational log shows GPOs being processed at exactly the right times, e.g.:

        Starting Registry Extension Processing.
        List of applicable GPOs: (Changes were detected.)
        Local Group Policy

    I have run GPO processing on demand using gpupdate /force and was able to verify that this caused the user rights to be removed. We have looked over the local group policies until our eyes crossed, trying to figure out which one might be stripping the right to "log on as a batch job". We have not configured any local group policies on this machine, that we know of; so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this?

    I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts... They wear many hats, and they do what they need to do in order to keep most things running. Any suggestions would be greatly appreciated!
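
    As a stopgap while hunting for the offending GPO, the same secedit mechanism used for the export can re-apply the right from a security template. A hedged sketch (the account names are placeholders for the accounts the installer actually created):

        ; restore-batch.inf -- security template granting "log on as a batch job"
        [Unicode]
        Unicode=yes
        [Version]
        signature="$CHICAGO$"
        Revision=1
        [Privilege Rights]
        SeBatchLogonRight = MYSERVER\svc_app1,MYSERVER\svc_app2

    applied with:

        secedit /configure /db %TEMP%\restore-batch.sdb /cfg restore-batch.inf /areas USER_RIGHTS

    Note that this overwrites the whole SeBatchLogonRight list with exactly the accounts named in the template, so any accounts that legitimately hold the right need to be listed too — and the next GPO refresh may strip it again, which is why it is only a stopgap.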

    Read the article

  • Primary/secondary ethernet interfaces via NetworkManager in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three Ethernet interfaces: eth0, eth1 and eth2. eth2 is connected to a private network; eth0 and eth1 are connected to two different LANs, and either one will provide access to the internet. All three networks have DHCP servers.

    Using Ubuntu's default settings (and GNOME), when I boot up, all the interfaces are active and my system gets three IP addresses. However, any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1), but I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown.

    How can I configure eth0 as my primary interface, so that all traffic goes out on it by default, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local...

    EDIT: Here's my routing table:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        172.16.151.0    0.0.0.0         255.255.255.0   U         0 0          0 eth2
        172.16.30.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
        10.1.0.0        0.0.0.0         255.255.0.0     U         0 0          0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth1
        0.0.0.0         172.16.30.2     0.0.0.0         UG        0 0          0 eth0
        0.0.0.0         10.1.0.1        0.0.0.0         UG        0 0          0 eth1

    I want the 172.16.30.0 network (via gateway 172.16.30.2) to be the primary one and the 10.1.0.0 network to be the secondary one.

    EDIT 2: My nameservers are also incorrect. It seems that Ubuntu brings the networks up in order — eth0, then eth1, then eth2 — so the DHCP information from eth1 overrides eth0, and eth2 overrides eth1. How can I reverse this so that the DHCP information from eth0 is the "master"?

    EDIT 3: This seems to be an issue with GNOME's NetworkManager.
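
    The two competing default routes in the table above are the immediate symptom: with equal metrics, the kernel just picks one. A hedged sketch of forcing eth0 to win (interface names and gateways taken from the routing table above; these commands are temporary and do not persist):

        # drop the default route that points out eth1...
        sudo route del default gw 10.1.0.1 dev eth1

        # ...or keep it as a fallback by giving it a worse metric
        # (the ifmetric package adjusts an interface's route metric)
        sudo ifmetric eth1 100

    A persistent version of the same idea would go through NetworkManager (marking the eth1 connection as never providing the default route) or /etc/network/interfaces, since manual route changes are lost on the next DHCP renewal or reboot.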

    Read the article

  • Is there a switch that will connect directly to my modem and allow my router to serve only as a WiFi connection?

    - by Abner
    Details:

    Devices:
    - Internet - 50 Mbps cable internet
    - Modem - Motorola SURFboard Extreme
    - Router - Netgear WNDR3700v3
    - Switch - D-Link DGS-1008G
    - Wired Ethernet cable - Cat6, 24 AWG
    - Device configuration - Modem\Router\Switch

    Internet usage:

    Wired demand:
    - Xbox 360
    - 1 gaming PC
    - 2 PCs - HD video

    WiFi demand:
    - 3 Android devices + 1 laptop, used for browsing and group video chat simultaneously

    Specifics:

    I am experiencing problems with network speeds and reliability on both wired and wireless connections. On many occasions I see WiFi speeds that vary between 15 Mbps and 0.50 Mbps (or less), and pings ranging from 15 ms to 500 ms. These numbers come from running speedtest.net whenever I notice internet lag. (I also have a stretched-out floor plan and old building materials drastically affecting my cellphone signal strength.)

    After reading the "Known Issues" section on the page below, I bought the switch and Cat6 cable to increase speed and relieve stress on the router, in an attempt to fix the symptoms:

        http://www.dd-wrt.com/wiki/index.php/Netgear_WNDR3700#Known_Issues

    My plan was a Modem\Switch\Router configuration: the router would only have to serve mobile WiFi connections like Android devices or laptops when necessary (hopefully eliminating the problem caused by the router when it is subjected to all those demanding Ethernet connections). When I started unboxing the switch, however, I noticed that the manual for the DGS-1008G shows it connected in the Modem\Router\Switch order, not the Modem\Switch\Router configuration I was aiming for.

    I have not been able to find a solid plan to remedy my specific problem without buying another expensive router. I would like to get the speeds I am paying for without buying a new router. (My WiFi adapters would also need to be updated if a new router were required, meaning more $$$.) I can always sell the switch and get a better one that bypasses the router, since my most demanding internet connections are wired.

    Questions:
    1. Can I accomplish a Modem\Switch\Router configuration with the current switch?
    2. Is there a different way to get the wired speed I need while providing WiFi only when necessary?

    Read the article

  • mixing different technologies using ARR reverse proxy

    - by Jaepetto
    I'm currently trying to put together a proof of concept for mixing various technologies into one web site, in order to ease migrations and add flexibility. The idea is to create one 'mashup' site behind an IIS 7.5 ARR reverse proxy. For the time being, the ARR reverse proxy forwards all requests to our main site:

        client -> ARR:      GET /
        ARR -> Server 1:    GET /
        Server 1 -> ARR:    200: /index.htm
        ARR -> client:      200: /index.htm

    ...so far so good. Let's say I want to add a new site (the root of another server) as a subsite of my main website. A simple inbound rule does the trick:

        <rule name="sub1" stopProcessing="true">
            <match url="^mySubsite(.*)" />
            <conditions logicalGrouping="MatchAll" trackAllCaptures="false" />
            <action type="Rewrite" url="http://server2/{R:1}" />
        </rule>

    The requests are now:

        client -> ARR:      GET /mySubsite
        ARR -> Server 2:    GET /
        Server 2 -> ARR:    200: /index.htm
        ARR -> client:      200: /index.htm

    ...still OK. The issue comes when the site on server2 sends a redirect (e.g. to a login page). In the case of SharePoint, it redirects the user to /_layouts/Authenticate.aspx?Source=%2F, which does not exist on the proxied path:

        client -> ARR:      GET /mySubsite
        ARR -> Server 2:    GET /
        Server 2 -> ARR:    301: /_layouts/Authenticate.aspx?Source=%2F
        ARR -> client:      301: /_layouts/Authenticate.aspx?Source=%2F
        client -> ARR:      GET /_layouts/Authenticate.aspx?Source=%2F
        ARR -> client:      404: Not Found

    Does anyone know a way to write an outbound rule that rewrites the response from server 2, "301: /_layouts/Authenticate.aspx?Source=%2F", to "301: /mySubsite/_layouts/Authenticate.aspx?Source=%2FmySubsite%2F"?
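
    For reference, the general shape of such a rule in URL Rewrite is an outbound rule against the Location response header, gated by a precondition on the status code. A hedged sketch (rule names are made up, and this only prefixes the path — rewriting the encoded Source= query value as well would need further pattern work):

        <outboundRules>
            <rule name="PrefixRedirectsToSubsite" preCondition="IsRedirect">
                <match serverVariable="RESPONSE_Location" pattern="^/(.*)" />
                <conditions>
                    <!-- only touch responses to requests that came in under /mySubsite -->
                    <add input="{REQUEST_URI}" pattern="^/mySubsite" />
                </conditions>
                <action type="Rewrite" value="/mySubsite/{R:1}" />
            </rule>
            <preConditions>
                <preCondition name="IsRedirect">
                    <add input="{RESPONSE_STATUS}" pattern="3\d\d" />
                </preCondition>
            </preConditions>
        </outboundRules>

    The precondition keeps the rule from touching ordinary 200 responses, and the condition keeps it from breaking redirects issued by the main site behind the same proxy.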

    Read the article

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the foreseeable future) carrying, on an external hard drive, a unison-synchronized copy of all of my documents and code - including code that resides in some of my "dotfiles" and other code that resides in ~/bin (things I've made live there because ~/bin is in my $PATH) - along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. On top of this, I use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with things under ~/bin) that I want synchronized, and then setting follow = True in my unison profile.

    It happens that this collection of odds and ends - plus an automatically generated text file listing every package installed on my system - is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host, with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program for that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup. It would be simple if duplicity followed symlinks, but it does not, and the manpage lists no option for enabling any such behavior. Comparing unison's file-selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin.

    I don't think running duplicity against the external hard disk is such a good idea either; I usually keep that disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that:

    - the hard disk is NTFS-formatted, in order to be usable at my Windows-imprisoned place of occupation;
    - despite being a USB 3.0 disk, my computer has no USB 3.0 ports, so it acts as a USB 2.0 disk.

    How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?
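
    One workaround worth sketching (hedged - the paths and key ID are placeholders): since the symlink farm already defines the file set, a small wrapper can dereference the links into a file list and hand that to duplicity's --include-filelist, so both tools keep reading the same single definition:

        #!/bin/sh
        # resolve every symlink in the sync directory to its real target
        SYNC_DIR="$HOME/synced"          # hypothetical symlink-farm location
        LIST="$HOME/.backup-includes"
        find "$SYNC_DIR" -maxdepth 1 \( -type l -o -type f \) \
            -exec readlink -f {} \; > "$LIST"

        # back up exactly those paths, excluding everything else under ~
        duplicity --include-filelist "$LIST" --exclude '**' \
            --encrypt-key ABCD1234 --sign-key ABCD1234 \
            "$HOME" rsync://user@backuphost//backups/home

    The fragile part is unchanged from the question's premise: the list is only as current as the last run of the wrapper, so it belongs in the same cron job or script that invokes duplicity, and it assumes every link target lives under the backup source directory.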

    Read the article

  • Apache2 Virtual Host broken; displayed default index.html on subdomain, but correct content on www.subdomain

    - by Robert K
    I've got a Linode configured as an Ubuntu 10.04.2 web server with Apache 2.2.14. I have a total of four sites, all defined under /etc/apache2/sites-available as virtual hosts. All the sites are near-identical clones configuration-wise, and all but the last one work successfully:

    - default: (www.)exampleadnetwork.com
    - (www.)example.com
    - reseller.example.com
    - trouble: client1.example.com

    I keep getting this page when I visit the client1.example.com site:

        It works!
        This is the default web page for this server.
        The web server software is running but no content has been added, yet.

    In my ports.conf file I have NameVirtualHost correctly set to my IP address on port 80. If I access the "www.client1.example.com" alias, the site works! If I access it without the www, I see the "It works!" page quoted above. Even apache2ctl -S shows that my vhost file parses correctly and is added to the mix.

    My vhost configuration file is as follows:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin [email protected]
            ServerName client1.example.com
            ServerAlias client1.example.com www.client1.example.com
            DocumentRoot /srv/www/client1.example.com/public_html/
            ErrorLog /srv/www/client1.example.com/logs/error.log
            CustomLog /srv/www/client1.example.com/logs/access.log combined

            <Directory /srv/www/client1.example.com/public_html/>
                Options -Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The other sites are variations of:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin [email protected]
            ServerName example.com
            ServerAlias example.com www.example.com
            DocumentRoot /srv/www/example.com/public_html/
            ErrorLog /srv/www/example.com/logs/error.log
            CustomLog /srv/www/example.com/logs/access.log combined
        </VirtualHost>

    The only site that differs is the other subdomain:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin [email protected]
            ServerName reseller.example.com
            ServerAlias reseller.example.com
            DocumentRoot /srv/www/reseller.example.com/public_html/
            ErrorLog /srv/www/reseller.example.com/logs/error.log
            CustomLog /srv/www/reseller.example.com/logs/access.log combined
        </VirtualHost>

    The file names are the FQDN without the www. prefix. I've followed this advice, but still cannot access the subdomain properly.
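
    One mismatch stands out (a hedged observation, not a confirmed fix): the question says NameVirtualHost is set to the public IP, while every <VirtualHost> block opens with 127.0.0.1:80. When those addresses don't match, Apache cannot do name-based matching for that vhost and falls back to the default site. A sketch of the conventional form:

        # ports.conf
        NameVirtualHost *:80
        Listen 80

        # each site definition
        <VirtualHost *:80>
            ServerName client1.example.com
            ServerAlias www.client1.example.com
            ...
        </VirtualHost>

    After aligning the addresses, apache2ctl -S should list all four sites under the same *:80 name-based set, and a graceful reload picks up the change.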

    Read the article

  • Fix single entry from mbr

    - by Sander
    I use EasyBCD to manage my triple-boot setup of (1) Windows Server 2008 R2, (2) Windows 7 Professional and (3) Ubuntu Linux. While trying to change the order of my boot menu, I ended up losing the Windows Server entry. Luckily I had a boot menu backup (a .bcd file) that allowed me to restore the menu using EasyBCD. However, when I now select the Windows Server option in the boot menu, the Windows Server Recovery Environment starts up, so I have to select a language, keyboard layout, etc., and then I am given three options (the screenshot referenced here is not reproduced).

    My goal is to fix the one corrupted Windows Server entry in my boot menu without messing up or losing the two other ones. I'm guessing the Recovery Console (Command Prompt) is the next step and that I will need bootrec.exe. But after consulting the page "Use the Bootrec.exe tool in the Windows Recovery Environment to troubleshoot and repair startup issues in Windows" (about halfway down there's a link that shows the bootrec.exe options), I'm uncertain. The page lists four options for bootrec.exe:

    - /FixMbr
    - /FixBoot
    - /ScanOs
    - /RebuildBcd

    Which option do I need to fix just the Server entry of my boot menu?

    Thanks in advance,
    Sander

    P.S. All three OSes are on the same physical disk (three different partitions). Disk layout:

    - System Reserved (primary partition, 100 MB)
    - Windows 7 (primary partition, 150 GB)
    - Windows Server 2008 (primary partition, 150 GB)
    - Extended partition (Linux partitions (/, /swap, /home), 150 GB, + data partition, 150 GB)

    P.P.S. This is what my boot menu looks like using EasyBCD (Detailed/Debug mode) on my Windows 7 installation:

        Windows Boot Manager
        --------------------
        identifier              {9dea862c-5cdd-4e70-acc1-f32b344d4795}
        device                  partition=\Device\HarddiskVolume1
        description             Windows Boot Manager
        locale                  en-US
        inherit                 {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e}
        default                 {93f90e43-cae8-11df-b05a-c9177e705936}
        resumeobject            {93f90e3e-cae8-11df-b05a-c9177e705936}
        displayorder            {93f90e43-cae8-11df-b05a-c9177e705936}
                                {93f90e3f-cae8-11df-b05a-c9177e705936}
                                {93f90e46-cae8-11df-b05a-c9177e705936}
        toolsdisplayorder       {b2721d73-1db4-4c62-bf78-c548a880142d}
        timeout                 10
        displaybootmenu         Yes

        Windows Boot Loader
        -------------------
        identifier              {93f90e43-cae8-11df-b05a-c9177e705936}
        device                  partition=\Device\HarddiskVolume3
        path                    \Windows\system32\winload.exe
        description             Windows Server 2008 R2 - Standard
        locale                  en-US
        inherit                 {6efb52bf-1766-41db-a6b3-0ee5eff72bd7}
        recoverysequence        {93f90e44-cae8-11df-b05a-c9177e705936}
        recoveryenabled         Yes
        osdevice                partition=\Device\HarddiskVolume3
        systemroot              \Windows
        resumeobject            {93f90e42-cae8-11df-b05a-c9177e705936}
        nx                      OptOut

        Windows Boot Loader
        -------------------
        identifier              {93f90e3f-cae8-11df-b05a-c9177e705936}
        device                  partition=C:
        path                    \Windows\system32\winload.exe
        description             Windows 7 - Professional
        locale                  nl-NL
        inherit                 {6efb52bf-1766-41db-a6b3-0ee5eff72bd7}
        recoverysequence        {93f90e40-cae8-11df-b05a-c9177e705936}
        recoveryenabled         Yes
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {93f90e3e-cae8-11df-b05a-c9177e705936}
        nx                      OptIn

        Real-mode Boot Sector
        ---------------------
        identifier              {93f90e46-cae8-11df-b05a-c9177e705936}
        device                  partition=C:
        path                    \NST\AutoNeoGrub0.mbr
        description             Ubuntu 10.04 - Lucid Lynx
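
    A hedged aside (purely illustrative, using the GUIDs from the dump above): from the working Windows 7 install, bcdedit can inspect and adjust the Server entry directly, which is sometimes gentler than bootrec's all-or-nothing options. For example, if the entry's device/osdevice no longer point at the Server partition, something along these lines would re-point them (D: stands in for whatever drive letter Windows 7 assigns that partition — check in diskmgmt.msc first):

        rem show the entry as bcdedit sees it
        bcdedit /enum {93f90e43-cae8-11df-b05a-c9177e705936}

        rem re-point the loader at the Server partition
        bcdedit /set {93f90e43-cae8-11df-b05a-c9177e705936} device partition=D:
        bcdedit /set {93f90e43-cae8-11df-b05a-c9177e705936} osdevice partition=D:

    Of the four bootrec options listed, /RebuildBcd is the one that rewrites boot entries, but it rebuilds the whole store — which is exactly what the question is trying to avoid for the two healthy entries.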

    Read the article

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. small.txt has 150K rows, with a size of ~3 MB, and each line has a space-separated word and a number.

    Both files were sorted using the sort command with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt plus the second column (the number) from small.txt, wherever the first word of huge.txt and the first word of small.txt match.

    My attempt below failed miserably with the following error:

        cat huge.txt | join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt

        join: memory exhausted

    What I suspect is that the sort order isn't right somehow, even though the files were pre-sorted using:

        sort -k1 huge.unsorted.txt > huge.txt
        sort -k1 small.unsorted.txt > small.txt

    The problems seem to appear around words that contain apostrophes (') or dashes (-). I also tried dictionary sorting with the -d option, running into the same error at the end.

    I see two ways out of this, but don't know how to implement either of them.

    1. Any tips on how to sort the files in a way that the join command considers them properly sorted?
    2. I was thinking of calculating MD5 or some other hashes of the strings, to get rid of the apostrophes and dashes, doing the sorting and joining on the hashes instead of the strings themselves, and then "translating" the hashes back to strings at the end, leaving the numbers intact at the ends of the lines. Since there would be only 150K hashes, that's not so bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic?

    See file samples below. Thank you!

    Sample of huge.txt:

        had stirred me to 46
        had stirred my corruption 57
        had stirred old emotions 55
        had stirred something in 69
        had stirred something within 40

    Sample of small.txt:

        caley 114881
        calf 2757974
        calfed 137861
        calfee 71143
        calflora 154624
        calfskin 148347
        calgary 9416465
        calgon's 94846
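
    On the first option: the usual culprit here is locale-aware collation, which treats apostrophes and dashes specially; forcing the byte-wise C locale for both the sort and the join, and sorting strictly on the first field, tends to make join accept the ordering. A hedged sketch (file names from the question):

        # sort and join must agree on collation; LC_ALL=C compares raw bytes
        LC_ALL=C sort -k1,1 huge.unsorted.txt  > huge.txt
        LC_ALL=C sort -k1,1 small.unsorted.txt > small.txt
        LC_ALL=C join -1 1 -2 1 -o 1.1,1.2,1.3,1.4,1.5,2.2 huge.txt small.txt > output.txt

    Note -k1,1 rather than -k1: the latter means "from field 1 to the end of the line", which is not the ordering join expects when only the first field is the key.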

    Read the article

  • A proper way to create non-interactive accounts?

    - by AndreyT
    In order to use password-protected file sharing on a basic home network, I want to create a number of non-interactive user accounts on a Windows 8 Pro machine, in addition to the existing set of interactive accounts. The users corresponding to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on the welcome screen.

    In older versions of Windows Pro (up to Windows 7), I did this by first creating the accounts as members of the "Users" group, and then adding them to the "Deny log on locally" list in the Local Security Policy settings. This always had the desired effect. My question is whether this is the right/best way to do it.

    The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from the "Users" group can still see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open the "Sign out/Lock" menu). The command list that drops down contains the "Sign out" and "Lock" commands as well as the names of other users (for the "switch user" functionality) - and for some reason that list includes the extra users from the "Deny log on locally" list. Interestingly, this happens when the current user belongs to "Users", but not when the current user is from "Administrators".

    For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can log on locally), "A" (from "Users", can log on locally), and "B" (from "Users", denied local logon). When "Administrator" is logged in, he sees only user "A" listed in his Metro "Sign out/Lock" menu, i.e. everything works as it should. But when user "A" is logged in, he sees both "Administrator" and user "B" in his "Sign out/Lock" menu. Expectedly, trying to switch from user "A" to user "B" by hitting "B" in the menu does not work: Windows jumps to the welcome screen, which lists only "Administrator" and "A".

    On the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering whether going through the "Deny log on locally" setting is the right way to do this in Windows 8. Is there another way to create a hidden non-interactive user account?

    Read the article

  • All internet requests in Windows time out

    - by Brandon
    So, I've run into a very strange problem with my home wireless network. Previously, at seemingly random times, the router seemed to disconnect all wireless hosts and cause all of the wired hosts to have a "limited connection", according to Windows. To fix this, I had to unplug all of the wired hosts from the router, unplug the modem from the router, and power-cycle the router. This seemed to solve the problem for a while, until the exact same thing happened a day later and I had to go through the whole process again. That's when I noticed something odd.

    There was one wireless host (a Windows Vista laptop) that seemed to cause the router to disconnect all the other hosts whenever it connected; when this happened, only that laptop was able to use the router's wireless. I then disconnected it from the wireless (by disabling its wireless adapter) and reconnected it (by re-enabling the adapter) - and now it, like the other hosts, couldn't connect. I've never seen anything this strange happen on our network before. So I restored the router to factory settings, and the problem seems to have vanished, except for one crucial leftover.

    There's another host (a Windows 7 laptop) that was perfectly able to connect before the router issues, and even in between the crashing and power-cycling events, but that now says it's connected and able to reach the internet while all requests time out. In any browser I've tried, the tab says "connecting to [site]..." for a solid minute and then tells me the request timed out. When I try to ping google.com in cmd, it also says the request timed out. In frustration, I booted into a dual-boot Ubuntu installation on the Windows 7 host, and to my surprise the connection works fine there - Ubuntu is where I am now typing this rather long question.

    I haven't looked through the event log in Windows, but I will post anything I find in an edit. I haven't tried connecting (in Windows 7) to any other wireless network. The fact that it works in Ubuntu suggests the problem is in Windows and not the router, but I didn't change any wireless settings in Windows between it being able to reach the internet and not.

    Does anyone have any clue what could have happened? I'm open to buying another router, as this one is almost a year old :) but I would like to know what's going on here. Thanks in advance!

    P.S. Sorry for how long my question is; I'm a little anxious.
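
    Since the same hardware works under Ubuntu, the Windows TCP/IP stack is a reasonable suspect. A hedged first step that is cheap to try (standard built-in commands, run from an elevated command prompt and followed by a reboot):

        netsh winsock reset
        netsh int ip reset c:\resetlog.txt
        ipconfig /release
        ipconfig /renew
        ipconfig /flushdns

    These reset the Winsock catalog and the IP stack to their defaults; if a background program or malware corrupted either one (which would explain "connected but everything times out"), this often clears it without touching the router.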

    Read the article
