Search Results

Search found 33029 results on 1322 pages for 'access vba'.

Page 519/1322 | < Previous Page | 515 516 517 518 519 520 521 522 523 524 525 526  | Next Page >

  • The WiFi is working fine but no internet connection in Windows 8 [migrated]

    - by Ali
    I'm currently having a problem with my MSI GT70 laptop. It's running Windows 8, and yesterday it requested a restart to install an update. After the restart and the update I tried to surf the net through Google Chrome: the WiFi connection is perfect, but the page I tried to access would not load at all, and after a while Chrome reported a failure to load the page. I disabled and re-enabled the WiFi adapter through Device Manager, but still no internet connection. I uninstalled and reinstalled the drivers: still no internet. I updated the WiFi driver: still no internet. I even went into the WiFi configuration, tried changing the DNS servers, and reset them back to automatic: still no luck. I'm really lost and don't know what to do; I don't want to go deep and play with the laptop's DNA (aka the registry) and screw things up. I really appreciate any help in this matter. Thanks in advance. Note: I tried to access many pages, with no luck, and I tried Firefox, Opera, and even IE. The internet works fine on my tablet and cellphone; only the laptop is affected.
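
    One generic Windows recovery sequence worth trying before touching the registry (a hedged sketch of standard built-in commands, nothing specific to the GT70; run from an elevated Command Prompt and reboot afterwards):

        :: clear the DNS resolver cache and re-lease the address
        ipconfig /flushdns
        ipconfig /release
        ipconfig /renew
        :: rebuild the Winsock catalog and reset the TCP/IP stack
        netsh winsock reset
        netsh int ip reset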

    Read the article

  • Fedora 15: em1 recently disappeared and hostapd no longer serves internet to wirelessly connected devices

    - by Daniel K
    I have a laptop running hostapd, phpd, and mysql. This laptop uses an Ethernet connection to reach the internet and acts as a wireless access point for my workplace's WiFi devices. After installing some software and reconnecting the Ethernet elsewhere, my "em1" device is no longer present and wirelessly connected devices can no longer reach the internet. The software I recently installed was pptp and pptpd, plus some updated Fedora libraries. I have also recently moved my desk and laptop to another location and thus had to reconnect the Ethernet elsewhere.
    Wireless devices can still successfully associate with the laptop, showing full strength, the correct SSID, and using the proper password. However, when I try to connect to a site like Google, the request times out. The device "em1" also no longer appears on my machine. Running:
        # ifup em1
    gives me the following output:
        ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device em1 does not seem to be present, delaying initialization.
    And running:
        # dhclient em1
    outputs:
        Cannot find device "em1"
    When I run # dmesg | grep renamed, I get: renamed network interface eth0 to p4p1. I've tried to connect to the internet through p4p1 directly from the laptop and was successful. However, wireless devices connected to my laptop are still not able to reach the internet. I have uninstalled pptp and pptpd using # yum erase ..., but the problem persists.
    To install pptp I used:
        # yum install pptp
    To install pptpd I did the following:
        # rpm -Uvh http://poptop.sourceforge.net/yum/stable/fc15/pptp-release-current.noarch.rpm
        # yum install pptpd
    To update my Fedora libraries I used:
        # yum check-update
        # yum update
    EDIT: Running # route produces the following results:
        Kernel IP routing table
        Destination     Gateway        Genmask         Flags Metric Ref  Use Iface
        default         10.11.200.1    0.0.0.0         UG    0      0      0 p4p1
        10.11.200.0     *              255.255.252.0   U     0      0      0 p4p1
        172.16.100.0    *              255.255.255.0   U     0      0      0 wlan0
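
    Two common fixes for the rename, sketched under the assumption that the hostapd/NAT setup still references em1 (the MAC address below is a placeholder for the card's real one): either re-point the forwarding rules at p4p1, or pin the old name with a udev rule and reboot.

        # Option 1: re-point NAT/forwarding at the renamed interface
        iptables -t nat -A POSTROUTING -o p4p1 -j MASQUERADE
        iptables -A FORWARD -i wlan0 -o p4p1 -j ACCEPT
        iptables -A FORWARD -i p4p1 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Option 2: pin the old name in /etc/udev/rules.d/70-persistent-net.rules
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="em1"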

    Read the article

  • 500 error when logging into subdomain using CodeIgniter

    - by itsdanprice
    I have a website that has been set up and working fine for ages. It's built using CodeIgniter and run with .htaccess files to restrict access and hide URLs. All fine, until a couple of days ago, when accessing http://admin.dealersupport.co.uk started returning a 500 error (this is the back end of the site, held in a separate subdomain). Nothing else has changed on the server. I have tried restoring from a backup from when I know it was working; the problem persists. The only thing I can think of is that we recently upgraded to Plesk 11.0.9, and since then we have been seeing some Apache instabilities. The only thing thrown up by the error logs is this:
        [Wed Nov 21 08:40:17 2012] [error] [client 94.31.24.129] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/vhosts/dealersupport.co.uk/admin/index.pl, referer: http://admin.dealersupport.co.uk/login
    I have now added this to my .htaccess files:
        Options +FollowSymLinks +SymLinksIfOwnerMatch
        RewriteEngine On
    That seems to have eliminated that error from the error logs, but we are still getting a 500 error after logging into the back end. Can anyone help?
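
    If .htaccess can no longer turn those options on at all, the upgraded Plesk/Apache setup may have tightened AllowOverride for the vhost; a hedged sketch of a per-host override (in Plesk 11 this would go in the domain's additional Apache directives or a vhost.conf include; the path is taken from the log line above):

        <Directory /var/www/vhosts/dealersupport.co.uk/admin>
            Options +FollowSymLinks +SymLinksIfOwnerMatch
            AllowOverride All
        </Directory>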

    Read the article

  • Unable to Connect to just Google Servers

    - by Akshat Mittal
    I am in an extremely strange situation: I am unable to connect to just Google's servers. I cannot access any site related to Google (Google.com, YouTube, Google+, Webmaster Tools, the jQuery CDN); nothing is working. I am able to open any other website (I am posting this question on SuperUser), yet even the Google DNS servers (8.8.8.8 and 8.8.4.4) are unreachable. Please help!
    Update 1: The Google DNS servers are back online and YouTube is back online, but websites on the google.com domain are still not working (e.g. play.google.com, maps.google.com, google.com/search, etc).
    Update 2: I am able to access Google.com (only) via one of its IP addresses, listed below: 74.125.227.41, 74.125.227.46, 74.125.227.32, 74.125.227.33, 74.125.227.34, 74.125.227.35, 74.125.227.36, 74.125.227.37, 74.125.227.38, 74.125.227.39, 74.125.227.40
    Update 3: I consulted friends nearby and they are experiencing the same problem, so this seems to be a major outage in this area (or all of India!). The problem is now solved: I am able to open Google.com.
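
    Reaching a site by IP but not by name is the classic split symptom; for future reference, a few stock Windows commands that separate DNS failure from routing failure (nothing here is specific to this outage):

        :: ask Google's own resolver directly
        nslookup google.com 8.8.8.8
        :: is a known-good IP reachable without DNS?
        ping 74.125.227.41
        :: where along the route do packets stop?
        tracert google.com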

    Read the article

  • IIS 6 Windows Authentication in ASP.Net app fails

    - by Kjensen
    I am trying to install an ASP.NET app on an IIS 6 web server. The site requires the user to authenticate with Windows, and this works for several other apps on the same server. In IIS I have enabled both anonymous access and Windows authentication. In web.config, authentication is set to:
        <authentication mode="Windows"/>
    and authorization to:
        <authorization>
          <allow roles="Users"/>
          <deny users="*"/>
        </authorization>
    i.e. allow all users in the role "Users" and deny everybody else. This is the approach that works for several other apps on the same server. If I run the site, I am prompted for a username and password. If I remove the line:
        <deny users="*"/>
    I can access the site and everything works, but the user credentials are not passed to the site (Page.User.Identity.Name returns a blank string in ASP.NET). The site has identical (inherited) file permissions to the other working sites on the server. The only difference in authentication/authorization between this site and the working ones is that it runs ASP.NET 4 (but there are other working ASP.NET 4 sites on the server as well). What am I missing here? Where should I look?
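
    One hedged thing to check: with Anonymous Access also enabled, IIS never has to challenge the browser, so ASP.NET can see an empty identity; denying anonymous users explicitly (or disabling anonymous access for this site in IIS) forces the Windows handshake. ASP.NET evaluates authorization rules top-down, first match wins:

        <system.web>
          <authentication mode="Windows"/>
          <authorization>
            <deny users="?"/>        <!-- "?" = anonymous: forces a Windows challenge -->
            <allow roles="Users"/>
            <deny users="*"/>
          </authorization>
        </system.web>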

    Read the article

  • DDoS: nulling all but certain IPs, and other options?

    - by Prix
    I am looking for some information regarding DDoS in the following scenario: I have a server behind a Cisco Guard and it will be DDoS'ed. I only care about a set list of IPs, none of which are the attackers. Is it possible to null-route everything except this list so my server still responds, or in the long run, if they have enough DDoS power, will I just go down like a fly no matter what I do? Is there any recommended company out there that can actually cope with a DDoS? My server will mainly run several clients that connect to an external server, and all it needs access to is my local MySQL and the private network so I can manage it. There will be no other services running, such as web or FTP, at least not on the external IPs of the server; if I ever need any of those services they will be on the private network. MySQL will be available externally only to one safe IP not known to anyone but me, and internally on localhost and the private network. Are there any solutions?
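
    At the host level a whitelist firewall is simple to express (a minimal iptables sketch; the addresses are documentation placeholders for the real client list). The caveat: this only sheds load on the server itself. If the flood saturates the uplink upstream of the Cisco Guard, the packets you drop have already cost you the bandwidth.

        # keep loopback, established flows, and the trusted clients; drop the rest
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -s 203.0.113.10 -j ACCEPT
        iptables -A INPUT -s 203.0.113.11 -j ACCEPT
        iptables -P INPUT DROP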

    Read the article

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:
    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish it to a single "trusted tester", or make it unlisted.
    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the .crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a Domain App, seems inefficient because it requires every school to upload its own zip. Essentially I would have to email a zip file to the school and have them publish it, and every update to the extension would require the same process, so this doesn't seem ideal. I doubt that option (3) would work: if I published to the admin as a "trusted tester", I don't think other people in the domain would be able to access it, and if it is unlisted, I don't know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
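
    For option (1), one hedged guess at the failure: Chrome's self-hosting scheme expects to be given the extension ID plus the URL of an update manifest rather than a direct link to the .crx; the .crx URL lives inside that manifest. The standard autoupdate manifest looks like this (the app ID and URL are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <gupdate xmlns="http://www.google.com/update2/response" protocol="2.0">
          <app appid="aaaabbbbccccddddeeeeffffgggghhhh">
            <updatecheck codebase="https://school-a.example.com/ext/extension.crx"
                         version="1.0.0"/>
          </app>
        </gupdate>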

    Read the article

  • Synchronising a remote folder with a local one

    - by Workshop Alex
    I am using a network disk (connected to my router by USB) to store several data files. A simple .NET application that I've created is supposed to read and modify these data files. However, some security restrictions prevent the application from accessing the files directly. (Actually, these restrictions are built into my application on purpose, since it's not going to support NAS disks.) Since this disk is shared with several computers, I just want a simple synchronisation method that copies the files to a local folder where my application can access them and, once they are modified, sends the modified files back to the NAS disk. I have two options:
    1. Build a second application to do my own synchronisation.
    2. Find some built-in function of Windows 7 Ultimate which can do this for me.
    Option 2 is preferred. Option 1 is something I can do easily if need be. I don't need third-party tools. (Still, feel free to add references to good tools, although I won't accept them as answers.) Basically, is this possible with Windows 7, and if so, how?
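
    Windows 7 has no true built-in two-way sync for arbitrary folders, but robocopy ships with it and can approximate one as a pair of scheduled tasks; a minimal newest-file-wins sketch (the share, folder, and file mask are assumptions):

        :: pull newer files from the NAS before the app runs
        robocopy \\NAS\data C:\LocalData *.dat /XO
        :: push locally modified files back afterwards
        robocopy C:\LocalData \\NAS\data *.dat /XO

    The /XO switch skips files older than the destination copy, which is what makes running it in both directions behave like a crude sync.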

    Read the article

  • VirtualBox instances on a dedicated server with custom dnsmasq

    - by ovanes
    I have a dedicated server where I plan to run VirtualBox virtual machines. Since the VMs are managed with Vagrant/Chef, I may end up with many different ones. I thought it would be a great idea to deploy dnsmasq on the server to dynamically assign IP addresses to the VMs. Since each Vagrant/Chef recipe is configured to set the VM's host name, I can find/reference the appropriate VM by host name. Finally, the entire infrastructure is not directly accessible via the internet, so the dedicated server is also the OpenVPN host. The whole setup may be seen as:
        +-------------------------------------+
        | Dedicated Server                    |
        |                                     |
        |  +-------------+  +------------+    |       +------------------+
        |  |   DNSMasq   |  |  OpenVPN   |<==========>|      Client      |
        |  +-------------+  +------------+    |       +------------------+
        |         ^               ^           |
        |         |               |           |
        |         |            +--+           |
        |         |            |              |
        |     +-------+        |              |
        |     |  VM1  |        |              |
        |     +-------+        |              |
        |      ...             |              |
        |     +-------+        |              |
        |     |  VM2  |--------+              |
        |     +-------+                       |
        +-------------------------------------+
    Now some questions I am struggling with:
    1. Are there any other suggestions for accessing a private infrastructure? I don't want to reinvent the wheel.
    2. On the dedicated server I don't see the vboxnet0 interface, although VirtualBox is installed (without the GUI). Accessing the virtual machines via SSH works fine. Did I miss something?
    3. dnsmasq must serve the local VMs only; otherwise there is a chance it starts serving other servers on the network, which I don't want. Because I don't see vboxnet0, I tend to use the no-dhcp-interface=eth0 config option. Are there any thoughts on that, despite the fact that a second network card (which is not the case here) might start serving DHCP requests?
    4. How should I configure the VMs' network interfaces so that I can reach them via OpenVPN and resolve their host names using dnsmasq? I think it should be the host-only network card. Should I do bridging in the OpenVPN config, or is routing sufficient?
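
    On question 2: vboxnet0 only exists once a host-only network has been created (e.g. with VBoxManage hostonlyif create); headless installs don't create one by default. Assuming that interface and VirtualBox's usual 192.168.56.0/24 host-only range, a minimal dnsmasq sketch that cannot leak DHCP onto the real network:

        # /etc/dnsmasq.conf -- bind only to the host-only interface
        interface=vboxnet0
        bind-interfaces
        no-dhcp-interface=eth0
        dhcp-range=192.168.56.20,192.168.56.120,12h
        # let leased VM host names resolve as name.vms.local
        expand-hosts
        domain=vms.local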

    Read the article

  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I've signed up for a cool job website that, unfortunately, asks if you want to "invite your friends", and if you agree, you can give it access to your Gmail contacts to send the invites. However, contrary to what everyone would expect, it doesn't give you a list of people to choose from; instead, it directly sends spam to your entire contact list, like an old-fashioned Outlook virus. When you complain about this, they simply say "we will check the application and see if there is anything that might be confusing for the users". For me and some other friends (who fell for the same prank), this is a clear breach of web best practices and a big disrespect of users' trust. So I would like to know: what can we do to stop this website from using the Gmail/Yahoo/Outlook APIs to send spam this way?
    P.S.: I wonder what would have happened if I'd given this website access to post to my Facebook timeline as well. I've already had a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.

    Read the article

  • Apache caching with mod_headers and mod_expires

    - by Aaron Moodie
    Hi, I'm working on homework for uni and was hoping someone could clarify something for me. I need to set up the following:
    1. Configure the response header "Cache-Control" to have a "max-age" value of 7 days since access for all image files.
    2. Configure the response header "Cache-Control" to have a "max-age" value of 5 days since modification for all static HTML files.
    3. Configure the response header "Cache-Control" to have a value of "public" for all static HTML and image files.
    4. Configure the response header "Cache-Control" to have a value of "private" for all PHP files.
    My question is whether it is better to use FilesMatch or mod_expires' ExpiresByType to achieve this. So far I've used the following:
        <FilesMatch "\.(gif|jpe?g|png)$">
            ExpiresDefault "access plus 7 days"
            Header set Cache-Control "public"
        </FilesMatch>
        <FilesMatch "\.(html)$">
            ExpiresDefault "modification plus 5 days"
            Header set Cache-Control "public"
        </FilesMatch>
        <FilesMatch "\.(php)$">
            Header set Cache-Control "private"
        </FilesMatch>
    Thanks.
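
    Two hedged observations on that config: mod_expires is inert unless ExpiresActive On is set in the same scope, and Header set replaces the whole Cache-Control header, clobbering the max-age that mod_expires adds (Header append merges instead). ExpiresByType is the natural fit for the image rule since it keys on MIME type rather than extension; something like:

        ExpiresActive On
        ExpiresByType image/gif  "access plus 7 days"
        ExpiresByType image/jpeg "access plus 7 days"
        ExpiresByType image/png  "access plus 7 days"
        ExpiresByType text/html  "modification plus 5 days"

    The public/private split still needs mod_headers, so the FilesMatch blocks remain useful for rules 3 and 4, just with Header append Cache-Control "public" rather than Header set.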

    Read the article

  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and to ship log output to elasticsearch/Kibana. I have the logs syncing successfully via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest:
        Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter {
          if [program] == "nginx-access" {
            grok {
              match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }
        output {
          stdout { }
          elasticsearch {
            embedded = false
            host = "
    Here is my logstash config file:
        input {
          syslog {
            type => syslog
            port => 5544
          }
        }
        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }
        output {
          stdout { }
          elasticsearch {
            embedded => false
            host => "localhost"
            cluster => "cluster01"
          }
          email {
            from => "[email protected]"
            match => [ "Error 504 Gateway Timeout", "status,504",
                       "Error 404 Not Found", "status,404" ]
            subject => "%{matchName}"
            to => "[email protected]"
            via => "smtp"
            body => "Here is the event line that occured: %{@message}"
            htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
          }
        }
    I've checked line 23, which is referenced in the error, and it looks fine. I've tried taking out the filter, and everything works without changing that line. Please help.
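
    One hedged diagnosis: in the config as pasted, the grok pattern string ends with a typographic right quote (”) instead of an ASCII double quote, the kind of substitution a rich-text editor makes, so the parser never sees the closing delimiter. That fits the complaint at line 23, column 12. The corrected filter, same pattern with straight quotes only:

        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message", "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]
            }
          }
        }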

    Read the article

  • Users removing Administrator from files/folders permissions

    - by user64204
    We're running Windows Server 2003 R2 with Active Directory and are having an issue with network shares whereby users, in an attempt to secure their documents, remove everybody (including the Administrator account) from their files'/folders' permissions. Since the Administrator no longer has read permission on them, we can't even back the files up manually without permission errors. One solution we've found is to change the owner of the files and directories to the Administrator account; we can then change the permissions as we wish. The problem is that this has to be done manually, so it can't really be applied to an entire share. Another solution we've tried is cacls, as follows:
        cacls d:\path\to\share /C /T /E /G Administrator:F
    The problem with this is that we still get an ACCESS DENIED error on files/folders from which Administrator was removed.
    Q1: Is there a way to restore at least read access to all files/folders to the Administrator account in a recursive fashion? That would be for the short term.
    For the long term, we're looking for a way to prevent users from removing Administrator from files/folders permissions in the first place. Since we're going to migrate to Windows Server 2008 R2 soon, we could wait until after the migration to implement such a solution if need be.
    Q2: Is there a way to prevent users from removing Administrator from files/folders permissions on Windows Server 2003/2008?
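
    For Q1, the ownership route can be scripted recursively rather than done by hand; a hedged sketch using takeown and icacls (both are built into Server 2008 R2; on Server 2003, icacls arrived with SP2 and takeown came from the Resource Kit):

        :: seize ownership of the whole tree for the Administrators group
        takeown /F d:\path\to\share /A /R /D Y
        :: then grant Administrators full control recursively
        icacls d:\path\to\share /grant Administrators:(OI)(CI)F /T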

    Read the article

  • Work from home on an iPad?

    - by Alex Basson
    The situation: My wife has a 13" MacBook Pro that she uses for email, Facebook, web surfing, and working from home. I'm about to buy us our first iPad. My wife's brother's computer just went belly-up, and she's contemplating giving him her MacBook and just using the iPad. The question is whether or not this is possible or realistic. Obviously, the iPad is well-suited for the email/web/Facebook tasks, but the working-from-home thing is an absolute must -- if the iPad can't handle that, it's a deal-breaker. For my wife, working from home means two things: Accessing her workplace computer's Windows Vista desktop, which she currently does via Remote Desktop. Editing Office documents locally, which she currently syncs via Dropbox. Being able to edit documents locally is important, because sometimes she will download documents and edit them when she doesn't have network access (e.g. on the subway). I'm more than happy to get a keyboard dock for her, so typing won't be an issue. Are there any iPad apps she can use to access her work computer and edit her work files? Thanks for any suggestions!

    Read the article

  • git: The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and was advised to ask it here. We have a self-hosted Git server (gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). The same VPS is also used to publish our in-development web apps for client demos (very little traffic), so the main use of the machine is as a Git server only. It is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we frequently get this error message:
        ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
        fatal: The remote end hung up unexpectedly
    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, Git commands work with a 100% success rate. I would also note that if I access other files hosted on the server over HTTP, they work fine. I found a couple of questions on Stack Overflow and elsewhere about this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin's SSH. I don't think that is the problem in our case, as we see this behaviour on Windows (using msysgit only) as well as on Mac machines, and if it were an SSH configuration issue it shouldn't work at all, whereas for us it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to the same server (or the same repo), or something like that. Is there a setting or a conf file that needs to be modified to solve this? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.
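
    One server-side knob that matches these symptoms (bursty failures under concurrent load, perfect off-peak) is OpenSSH's MaxStartups, which caps concurrent unauthenticated connections and refuses or randomly drops new ones beyond it; a hedged sketch for /etc/ssh/sshd_config on the VPS (restart sshd after editing):

        # begin dropping 30% of new unauthenticated connections at 30,
        # refuse outright at 50 (the old default was a flat 10)
        MaxStartups 30:30:50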

    Read the article

  • Expire Windows Server users a set time after first login instead of defining a solid expiration date

    - by smhnaji
    We want to give some Windows Server users remote access to our server so they can download from a special folder on it. The licences we give to users are time-based: there should be 1-month, 2-month, ..., 1-year, ... licences.
    CURRENT SITUATION (WHAT I DON'T WANT): when users are created and added to the OS, a fixed expiration date is given.
    WHAT I WANT: the user's expiration date should be calculated automatically from the first login, because the user might not need his account right when he purchases the licence. In other words: today, when a licence is purchased on Jan 1st, the account is usable until Feb 1st whether the user really logs in or not, so he cannot come on Feb 5th and begin using his licence because it has already expired. What I want is that when he comes on Feb 5th and begins using it, the licence runs until March 5th.
    CLARIFICATION (update after MDMarra's comment): the working environment is Windows Server 2012, and by the word "user" I mean native Windows Server users. Whenever a new person purchases a licence from me, I create the account manually using the net user command, like this:
        net user ali pass /add /expires:2013-12-25
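
    There's no built-in "expire N days after first logon" switch, but it can be approximated (a hedged sketch, not a tested product): create the account with /expires:never, then have an elevated task, triggered by the logon event, stamp the real expiry the first time the user appears. The user name and licence length are placeholders, and note that net user's date format follows the system locale.

        # StampExpiry.ps1 -- run elevated; $User supplied by the logon-event task
        param([string]$User)

        # `net user` reports "Account expires    Never" until a date is stamped
        $info = net user $User
        if ($info -match 'Account expires\s+Never') {
            $expiry = (Get-Date).AddMonths(1).ToString('MM/dd/yyyy')
            net user $User /expires:$expiry
        }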

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by Mark Lauter
    I'm having a problem setting a default mapping in IIS 6. I want to secure *.html files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the HTML files. Here's how it's set up.
    Sample directory tree:
        c:/inetpub/ (nothing in here)
        d:/web_files/my_web_apps
        d:/web_files/my_web_apps/app1/
        d:/web_files/my_web_apps/app2/
        d:/web_files/my_web_apps/html_files/
    app1 and app2 both access the same html_files directory, so html_files is set up as a virtual directory in each web app in IIS.
    Sample web directory tree:
        //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)
        //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)
    If I put a file called test.html in the root of //app1/, add the default mapping to the ASP.NET DLL, and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected: if I'm not authenticated it takes me to the login.aspx page, and if I am authenticated it displays test.html. If I put test.html in the html_files directory, I get totally different behaviour. Now the login.aspx page loads, and I stuck some code in to check whether I was still authenticated:
        <p>authenticated: <%=User.Identity.IsAuthenticated%></p>
    I figured it would say false, because why else would it bother loading the login page? Nope, it says true; so it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.
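
    One hedged thing to try: make the rule for the virtual directory explicit with a location element in app1's root web.config, so there is no doubt which authorization ASP.NET applies to that URL path (if html_files was accidentally marked as its own application in IIS, it resolves configuration independently, which would also explain the mismatch):

        <configuration>
          <location path="html_files">
            <system.web>
              <authorization>
                <deny users="?"/>
              </authorization>
            </system.web>
          </location>
        </configuration>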

    Read the article

  • Apache on CentOS opening default page on HTTPS

    - by Asghar
    I am new to Apache, SSL, and certificate configuration. I got a VeriSign certificate to secure my site, and I have the public, private, and ca_intermediate cert files. I have configured ssl.conf as below:
        <VirtualHost _default_:443>
            DocumentRoot /var/www/mydomain.com/web/
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com
            # Use separate log files for the SSL virtual host; note that LogLevel
            # is not inherited from httpd.conf.
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel warn
            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on
    The problem is that when I access www.mydomain.com over HTTP it works fine, but when I access it over HTTPS it just opens the Apache default page, even though the green HTTPS indicator means my certificates are installed correctly. How can I get rid of this situation? Thanks.
    EDIT: Output of apachectl -S:
        -bash-3.2# apachectl -S
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:80 has no VirtualHosts
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        _default_:8081  localhost.localdomain (/etc/httpd/conf/sites-enabled/000-apps.vhost:10)
        *:8080          is a NameVirtualHost
                        default server localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
                        port 8080 namevhost localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
        *:443           is a NameVirtualHost
                        default server mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
                        port 443 namevhost mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
        *:80            is a NameVirtualHost
                        default server app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                        port 80 namevhost app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                        port 80 namevhost mydomain.com (/etc/httpd/conf/sites-enabled/100-mydomain.com.vhost:7)
        Syntax OK
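
    A hedged reading of that output: the warnings say NameVirtualHost 82.56.29.189:443 is declared but no VirtualHost block is bound to that exact address:port, while the SSL vhost sits on _default_:443; a combination like that can leave HTTPS requests on the public IP falling through to the distribution's default page. One consistent arrangement (the IP is from the question; the certificate paths are placeholders for the three files mentioned):

        NameVirtualHost 82.56.29.189:443

        <VirtualHost 82.56.29.189:443>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            DocumentRoot /var/www/mydomain.com/web/
            SSLEngine on
            SSLCertificateFile      /etc/pki/tls/certs/public.crt
            SSLCertificateKeyFile   /etc/pki/tls/private/private.key
            SSLCertificateChainFile /etc/pki/tls/certs/ca_intermediate.crt
        </VirtualHost>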

    Read the article

  • MacBook Air keeps dropping Wi-Fi

    - by Robert Patrick
    So my MacBook Air keeps dropping Wi-Fi for some reason. It happens ONLY on my home network, and ONLY to my computer. I'm using a Linksys WRT54G router, and I'm the only Mac on the network. Every other Wi-Fi network is perfectly fine for the MacBook, and every other computer on this network is fine. Several things can happen: it may say it's connected but be unable to access the internet (whether or not it tells me there's no internet access), or it may drop Wi-Fi altogether and refuse to reconnect. Generally, if I unplug the router and plug it back in, all is good; restarting my computer also works. This happens multiple times a day. Yesterday I did everything I know of to get it to connect (restarted the router many times, restarted my MacBook) and nothing worked; eventually it just magically worked again. How can I stop this from happening? We got a notice from Comcast a while ago saying that a bot called DNSChanger was detected on one or more machines on the network. I'm assuming that can't be me, right?

    Read the article

  • Why are certain folders in my XP network share really, really slow?

    - by bikefixxer
    I have a workgroup set up with Windows XP. My file "server" runs XP Pro and the clients run XP Home. I've turned simple file sharing off on the server because certain clients need access to certain folders and not to others, and I want to keep it that way; I've used the granular sharing/security settings to give each client access to the right folders. I use the net use command in a batch file on the clients to map the share at logon, so it's always available via a mapped drive or a shortcut. On some clients "My Documents" points to the mapped drive, but all the local and application settings stay local. Everything works well except for accessing one particular folder on the network: it contains a lot of random batch files and self-executable programs I use for diagnostics and whatnot, and nearly every time I open it the computer hangs for 15-60 seconds. This happens on every machine, including the server (though not nearly as often there). I've searched high and low and cannot figure it out, and it's driving me crazy. Here is everything I've tried, to no avail:
    - Disabled the firewall (XP) and anti-virus (ESET NOD32)
    - Deleted every desktop.ini file I can find in the share
    - Disabled "automatically search for network folders and printers"
    - Disabled "remember each folder's view settings"
    - Set HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer NoRecentDocsNetHood = 1
    - Tried both mapped drives and UNC shortcuts
    - Ran CHKDSK
    - Removed the Read-Only attribute from all folders (or tried to; it always comes back as a half-filled checkbox)
    - Added the server's static IP to the hosts file on the clients
    I've tried monitoring the server's performance to see if anything makes sense. Occasionally the hang coincides with a spike in pages/sec (memory), but not always; everything else seems normal. The anti-virus seemed the most likely cause, considering the batch files and whatnot, but the folder still hangs with it completely disabled. I'm at a loss, and if anyone can help me with this I'd greatly appreciate it!
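
    One classic XP-era cause that isn't on the list: Explorer probing remote machines for Scheduled Tasks whenever a network folder is opened. The usual fix (hedged, and worth a registry backup first) is deleting the Scheduled Tasks namespace extension on each client, e.g. via a .reg file:

        Windows Registry Editor Version 5.00

        ; stop Explorer checking remote Scheduled Tasks on every folder open
        [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RemoteComputer\NameSpace\{D6277990-4C6A-11CF-8D87-00AA0060F5BF}]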

    Read the article

  • Keyboard doesn't function after Fedora 17 update

    - by mickburkejnr
    I updated my Fedora 16 installation to Fedora 17 on Saturday, and the update completed without reporting any errors. I carried on working on the machine in question and then switched it off. Last night I went back to the computer, switched it on, and got to the login screen. At this point I tried to type in my password, but the keyboard wouldn't work. I unplugged it (it's a PS/2 keyboard) and plugged it back in; the lights flashed for a split second, but the keyboard still wouldn't work. I then plugged the keyboard into a USB-to-PS/2 adapter, and it still wouldn't work. I restarted the computer, tried to enter the BIOS, and was able to do so. So the keyboard doesn't seem to be faulty; it just doesn't work once Fedora boots into the GUI. I did try booting into Fedora's "recovery mode", and the keyboard works there with no problem. Since I still have access to Fedora via a terminal interface, is there anything I can do to fix the keyboard problem from the terminal without having to reinstall Fedora?
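
    Since the keyboard works on the console but not in the GUI, the X input layer is the usual suspect after a release upgrade; a hedged place to start from the recovery terminal (package names assume the stock Fedora 17 X stack):

        # look for input-device errors from the X server
        grep -iE 'keyboard|evdev|\(EE\)' /var/log/Xorg.0.log
        # reinstall the X input driver in case it was left half-upgraded
        yum reinstall xorg-x11-drv-evdev
        # reconcile any packages still at their Fedora 16 versions
        yum distro-sync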

    Read the article

  • Disabling LDAP Signing on Windows PDC in Local Policy

    - by Golmaal
    It seems I just tripped over my own feet. Playing around on a Windows 2008 R2 server (set up as a domain controller), I was intrigued by a warning event (event ID 2886) which says: "To enhance the security of directory servers, you can configure both Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) to require signed Lightweight Directory Access Protocol (LDAP) binds." So I thoughtlessly did some googling and set the relevant policies which enforce LDAP signing, and I don't remember, but I may have done that in Local Policy. Now I have set up a pfSense box which must authenticate AD users via LDAP. While the firewall can communicate over a secure channel, it is difficult to arrange the same for other packages such as Squid and SquidGuard, so I now have to undo those policy changes. The problem is that they are greyed out! The policies in question are the LDAP server signing and LDAP client signing requirements. I don't remember exactly what I did, but when I access these policies in the Local Policy editor on the server, they are set to "Require signing" and greyed out. The same policies can still be set via the Default Domain Controllers option in the Group Policy editor. So how can I reset these greyed-out policies? Thanks.
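
    Those two policies ultimately land in registry values, so if the Local Policy UI won't budge they can be inspected and relaxed there (a hedged sketch; the usual cautions about registry edits on a domain controller apply, and any GPO that still enforces signing will simply re-apply it):

        :: LDAP server requirement on the DC (1 = none, 2 = require signing)
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v LDAPServerIntegrity
        reg add   "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v LDAPServerIntegrity /t REG_DWORD /d 1 /f

        :: LDAP client requirement (0 = none, 1 = negotiate, 2 = require)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\ldap" /v LDAPClientIntegrity /t REG_DWORD /d 1 /f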

    Read the article

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects that I'm currently working on. Everything is set up so that I access the server by typing 192.168.1.10, and the Apache test page comes up. What I'm aiming for is to access different projects by typing 192.168.1.10/project and then view each project as if it were on its own standalone server. I have thought about just sticking the sites inside folders on the server and accessing them that way, but a lot of my projects use CakePHP, so this isn't feasible. So I think I need to create VirtualHosts in Apache to allow this, but without using a domain name; I want to stick to the machine's (static) IP address. Any ideas?
    EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following:
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>
    And now Apache is saying:
        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist
    when it clearly does exist. I've disabled SELinux and can confirm it isn't turned on. I've also checked the ownership of the folder: it's owned by root. I can save files to these folders using a guest FTP account (which isn't associated with root), so the folders can be listed and written to, but when I try the folder in a web browser it doesn't seem to work either. I've also rebooted the server, and the problem persists. What should I change in order to resolve this?
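
    Two hedged observations. First, the stock CentOS document root is /var/www/html, so unless /www really exists at the filesystem root, the warning about /www/html/project1 is literal: the DocumentRoot path is probably just missing /var. Second, for 192.168.1.10/project-style URLs on a single IP, name-based vhosts aren't needed at all; an Alias per project does it:

        # httpd.conf -- serve each project under a URL path on one IP
        Alias /project1 /var/www/html/project1
        <Directory /var/www/html/project1>
            Options FollowSymLinks
            AllowOverride All      # lets CakePHP's .htaccess rewrites run
            Order allow,deny
            Allow from all
        </Directory>

    CakePHP's .htaccess files may additionally need a RewriteBase /project1 when the app lives under a sub-path rather than a vhost root.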

    Read the article

  • Encrypted WiFi with no password?

    - by Ian Boyd
    Is there any standard that allows a WiFi connection to be encrypted but not require a password? I know that (old, weak) WEP and the newer WPA/WPA2 require a password (i.e. a shared secret), while my own wireless connections are "open" and therefore unencrypted. There is no technical reason why I can't have an encrypted link that doesn't require the user to enter any password; such technology exists today (see public-key encryption and HTTPS). But does such a standard exist for WiFi? Note: I only want to protect communications, not limit internet access. I get the sense that no such standard exists (since I'm pretty capable with Google), but I'd like it confirmed.
    Clarification: I want to protect communications, not limit internet access. That means users are not required to have a password (or its moral equivalent). This means users are not required:
    - to know a password
    - to know a passphrase
    - to enter a CAPTCHA
    - to draw a secret
    - to have a key fob
    - to know a PIN
    - to use a pre-shared key
    - to have a pre-shared file
    - to possess a certificate
    In other words: it has the same accessibility as before, but is now encrypted.

    Read the article

  • Can't get to admin page after factory reset of Netgear WG602

    - by stefanB
    I have a Netgear WG602 wireless access point on my home network (connected to my internet modem/router). I had it secured and locked down to accept connections only from specific MAC addresses. I've forgotten the password I used, but my MacBook laptops can still connect (after multiple OS updates the stored password can't be retrieved and displayed, but it is still used to log in to WPA), so I want to reconfigure the WG602 from scratch (I have some new devices). I tried to reset it to factory settings (pressed the reset button for 10 seconds), set my MacBook's IP address to the local address suggested in the manual (192.168.0.210, netmask 255.255.255.0), and connected the WG602 to my MacBook Pro via an Ethernet cable, but I can't get to the admin page at 192.168.0.227 as the manual suggests (in Firefox or Safari). At this stage the access point is not connected to the router; it is only connected to the MacBook. I can't ping the access point either, although it is powered on and all lights are on. What am I doing incorrectly? Last time I configured it via Windows, but now I only have MacBooks (which I've used with the access point for two years, so there are no compatibility problems).
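
    A hedged sketch of doing the same checks from the MacBook's terminal (en0 is the usual name for the wired interface; ifconfig -a lists the real one):

        # give the wired port the manual's suggested static address
        sudo ifconfig en0 inet 192.168.0.210 netmask 255.255.255.0
        # does anything answer at the WG602's factory address?
        ping -c 3 192.168.0.227
        # check whether the AP at least answered ARP
        arp -a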

    Read the article
