Search Results

Search found 17958 results on 719 pages for 'local delivery'.

Page 569 of 719.

  • Windows Server 2008 32-bit & Windows 7 Professional SP1

    - by Harry
    I'm testing my new Windows Server 2008 32-bit edition (two servers) as servers and Windows 7 Professional 32-bit as a client. Say one is a primary domain controller (PDC) and the other a backup domain controller (BDC), like in the old days, to keep things simple. All setup was done on the PDC and simply replicated to the BDC. I didn't configure anything special, just installed the server with AD, DNS and DHCP. Then I used my Windows 7 Pro 32-bit client to join the domain, and that worked. After that I tried to change the password of a user (not administrator), but it always failed saying the password didn't meet the password complexity requirements, even though there is no such setting anywhere, either in the account policy, the Default Domain Policy or even the local policy. I tried explicitly disabling password complexity in the Default Domain Policy instead of leaving it unconfigured and tested again, but it still failed. I found a suggestion to set the minimum and maximum password age to 0, but that also failed. I restarted the server and the client and tried changing the password again; still the same error about password complexity. I looked in rsop.msc but didn't find anything. In fact, on another system with Windows Server 2003 and Windows XP, rsop.msc does show settings under Computer Configuration > Windows Settings > Security Settings > Account Policies > Password Policy. I also have a Windows 7 Pro 32-bit machine in a Windows Server 2003 environment; I can't find the same setting there with rsop either, yet that Windows 7 machine works fine. Can anyone suggest what the problem is and what to do so I can change my Windows 7 Pro laptop password in a Windows Server 2008 environment? Also, is it a correct assumption that we should be able to see all the policy settings in Windows 7, whether it's in a Windows Server 2003 or a 2008 environment? Thanks.
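
    One quick way to see what the client is actually enforcing is to query it from the command line; a hedged sketch to run on the Windows 7 client (note that net accounts reports the effective age/length values but not the complexity flag itself):

      rem refresh policy, then inspect which GPOs actually applied to this machine
      gpupdate /force
      gpresult /r
      rem show the effective minimum/maximum password age and minimum length
      net accounts /domain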

    Read the article

  • Mac Management Without Permission and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and had questions about usage scenarios for the MacBooks. I asked someone more knowledgeable than I am whether my Mac could be taken over while visiting another site for a conference, or while on the Wi-Fi network at a local coffee house, by policies pushed from an OS X Server running Workgroup Manager (either one legitimately belonging to the site, or one someone is running on hardware hidden somewhere on the network). Apparently such a server can be set up to do things like limit my access to the Finder or impose other neat whiz-bang management features. He said it is indeed possible: the server gets handed out via DHCP, the OS X Server assumes my Mac is a guest and can push restrictions, and my Mac will happily accept them without notifying me or giving me an option, unlike Windows, which I believe must be joined to a domain before it becomes "managed" by Active Directory. So my question, as network admins and sysadmins with users travelling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked, without resorting to turning off networking all the time? Or is this not much of a security hazard? What threat does this pose to the road warriors in your businesses?

    Read the article

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling to delete a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites). What we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago, so that we can run our deployment procedure in a local environment before doing so in our live/public environment. We use this command: rm -r websites. This deletes the directory and the files within it. The problem occurs when we untar our backup file and view the website: we see files that don't exist in the .tar backup; in fact these files were only created a few days ago and should have been deleted. We delete the directory once more as above and then create a new websites directory with mkdir. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory? Constraints: our .tar backup does not include these files; we do not want to use the shred command; we do not want to use third-party applications; the solution should work from a terminal (SSH). Many thanks! EDIT: Er... we fixed it. It turns out the files that were reappearing came from a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
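
    For anyone hitting the same symptom, a quick sanity check (a minimal sketch, assuming the tree lives under /var/www/websites) is to list any symlinks before deleting, since rm -r removes the link itself but never touches what the link points to:

      # list symlinks and where they point before removing the tree
      find /var/www/websites -type l -ls
      # remove the tree; anything a symlink pointed to outside it is left untouched
      rm -rf /var/www/websites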

    Read the article

  • MSSQL Server Management Studio Express 2005 license limit error should not be displayed

    - by Erik Vold
    I just tried restoring a 250 MB database from a backup on my local machine and got the following message:
      TITLE: Microsoft SQL Server Management Studio Express
      Restore failed for Server 'MULTIVIS-A0D9F3\SQLEXPRESS'. (Microsoft.SqlServer.Express.Smo)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Restore+Server&LinkId=20476
      ADDITIONAL INFORMATION:
      System.Data.SqlClient.SqlError: CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 4096 MB per database. (Microsoft.SqlServer.Express.Smo)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&LinkId=20476
      BUTTONS: OK
    I had three databases on the machine: one of ~4.1 GB and two others under 10 MB each. I did some googling on this error and saw the suggestion to shrink my other databases to free up some space, so I shrank the 4.1 GB one; when I now go to Properties for that database it says it is taking/using ~2.4 GB. So I figure I should have space now, but whenever I try to restore the ~250 MB database I still get the error message above. I tried restarting as well, but that wasn't helpful. Any idea what the issue is?
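
    One thing worth checking (a hedged sketch; the backup path is a placeholder) is the file sizes recorded inside the backup itself, since the restore recreates the data files at the sizes stored in the backup, which appears to be what the 4 GB Express check is applied to rather than the space actually used:

      sqlcmd -S .\SQLEXPRESS -E -Q "RESTORE FILELISTONLY FROM DISK = N'C:\backups\mydb.bak'"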

    Read the article

  • Configuring Apache and PHP to handle many connections

    - by Marc
    My preliminary setup is like this: two quad-core 8 GB servers running Debian 6 with PHP and Apache, and one quad-core 16 GB server running Debian 6 with MySQL. My plan is to have one 8 GB server act as a proxy server, using Vert.x (Java) to handle connections. I will let Vert.x use HttpClient to send web requests to the second 8 GB server, which has Apache installed and uses PHP to deliver whatever information it gets from the MySQL server on the third, 16 GB machine. The main reason I want this setup is to keep things separated, so the proxy is the only way into the system and the other two servers are reachable only from the local network. I can have the Vert.x proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server doesn't seem to have any issues on that side. In previous projects the Apache part always caused issues, no matter how I set it up. I may not fully understand how to set up Apache: normally Apache should handle many concurrent connections, but it doesn't seem to here.
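
    For reference, the concurrency ceiling on Apache 2.2 with mod_php is set by the prefork MPM; a hedged starting point (the numbers are only illustrative, not tuned for this hardware, and on Debian these directives live in /etc/apache2/apache2.conf) looks like:

      <IfModule mpm_prefork_module>
          StartServers          20
          MinSpareServers       20
          MaxSpareServers       50
          ServerLimit          500
          MaxClients           500
          MaxRequestsPerChild 5000
      </IfModule>

    With mod_php each prefork child can easily take tens of MB of RAM, so MaxClients has to fit in the 8 GB; much beyond that it is usually easier to move PHP to FastCGI/FPM behind the worker MPM or nginx.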

    Read the article

  • SQL server queries are really slow only on first run

    - by JoelFan
    Somewhat strange problem... when I start my .NET app for the first time after rebooting my machine, the SQL Server queries are really slow. When I pause the debugger, I notice it's hanging while getting the response from the query. This only happens when connecting to a remote SQL Server (2008); if I connect to one on my local machine, it's fine. Also, if I restart the app it runs fast, even against the remote SQL Server, and subsequent runs are also fine. The only problem is connecting to a remote SQL Server for the first time after rebooting my machine. What's more, I have noticed this same exact behavior with a third-party app (also .NET) that also connects to a remote SQL Server. One more piece of info: this has only started happening since I upgraded my machine from XP to Windows 7 (64-bit). Other developers on my team who upgraded to Windows 7 are seeing the same behavior (both with the app we're developing and with the third-party .NET app). (copied from http://stackoverflow.com/questions/2014814/sql-server-queries-are-really-slow-only-on-first-run )

    Read the article

  • LogMeIn Hamachi for Linux

    - by tlunter
    So far most of my work with LogMeIn Hamachi has been from either a Mac OS X or Windows system to a Windows or Linux computer. Recently I purchased a mini computer and have been running Ubuntu Server on it as my little server. I knew LogMeIn had a Linux client that is command-line only, but I do most of my work on the command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the hamachi daemon without sudo, and I was able to connect to LogMeIn's service. I also set up this Linux server as a git server, and that part is configured correctly. The thing is, the server is behind my school's firewall and I need to use Hamachi to get around that. Since most of the time I was using either Mac or Windows, I never had an issue SSHing onto any of my computers, as LogMeIn is fully featured on those OSes. From Linux (Arch), though, it seems the client cannot correctly route to the LogMeIn IPs. I know from Windows I can connect to both Linux computers. From Linux (Arch), I can't connect to my Mac, Windows, or Linux server; it just keeps dropping the connection. I was wondering if there is some configuration I need to make for this to work. I assume it will be a static configuration, since it seems the computer doesn't understand that 5.*.*.* actually refers to another IP:port. Has anyone had any experience getting this to work?
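
    A few things worth checking from the Arch side (a hedged sketch; the interface name and the 5.x range are the usual Hamachi defaults, not something confirmed in the post):

      ip addr show ham0          # does the tunnel interface exist and carry a 5.x address?
      ip route | grep '^5\.'     # is there a route for the 5.0.0.0/8 network via ham0?
      sudo hamachi list          # does the client itself report the peers as online/tunnelled?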

    Read the article

  • Configuring DNS & MX records for Exchange 2010

    - by Mahmoud Saleh
    I am trying to configure Exchange Server 2010 on Windows Server 2008 R2 to receive email from the internet, following the danscourses tutorials; for the DNS & MX records I followed this video: http://www.youtube.com/watch?v=jdf_3DRssks. I don't have any Windows administration skills and I am stuck with the DNS configuration. The following is the domain configuration I made at my hosting provider:
      1. Added a new name server, ns1.centors.com, with the Exchange server's public IP: 41.233.26.131.
      2. Changed the A record to point to the public IP address 41.233.26.131.
      3. Added a new CNAME record for www that resolves to centors.com.
      4. Added a new MX record for mail.centors.com.
      5. Added a new A record for mail.centors.com, pointing to the public IP 41.233.26.131.
      6. Added a new A record for ns1, pointing to the public IP 41.233.26.131.
      7. Set up port forwarding on the router for SMTP and POP3 to the Exchange server's local IP address.
    ISSUE: I have a user account in Active Directory and the user is a member of the domain. The user is [email protected], and when trying to log in with this account in Outlook 2010 on another machine, using account type POP3, incoming mail server mail.centors.com and outgoing mail server mail.centors.com, I always get the error: "Authorization failed, check your server settings." Please advise what's wrong with the configuration. Thanks in advance.
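
    Before touching Exchange, it helps to confirm from outside the network that the records actually resolve as intended (a hedged sketch using the names from the post; nslookup gives the same answers on Windows if dig isn't available):

      dig +short MX centors.com          # should list mail.centors.com with a priority
      dig +short A mail.centors.com      # should return 41.233.26.131
      dig +short A centors.com           # should return 41.233.26.131

    If those all come back as expected, the DNS side is fine and the "Authorization failed" message points at the POP3/SMTP side instead; on Exchange 2010 the POP3 service is not started by default, so it is worth checking that it is running and that the mailbox user is allowed to use POP3 before blaming the records.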

    Read the article

  • New keyboard for Linux: Adesso Tru-Form or MS Natural Keyboard 4000?

    - by Andrea
    Hi folks! I'm going to buy a new ergonomic keyboard for my laptop. Keep in mind that I live in Italy. I considered the following models.
    Adesso PCK-308UB - Adesso Tru-Form Pro - Contoured Ergonomic Keyboard with TouchPad (PS/2).
    Pros: it has a built-in touchpad in the same position as my laptop's, and it is somewhat cheaper than the alternative below.
    Cons: the surface doesn't seem to be bowl-shaped; the keys seem to lie on a straight, slightly inclined surface, an idea used extensively in other ergonomic keyboards. According to a few comments on the net, new Adesso keyboards seem to lack robustness and are likely to lose small parts after a few weeks or months; other users, instead, seem never to have had any problem in years and swear by their quality and comfort, but those who had problems complained about a lack of responsiveness from the manufacturer. I'm also not sure whether the keyboard (at least the standard keys) and the touchpad will be recognized correctly under Linux distros (I mostly use FC, by the way), and last time I checked, Adesso didn't have local resellers in my country.
    Microsoft Natural Ergonomic Keyboard.
    Pros: recognized as one of the most comfortable keyboards, reliable customer service operating in my country, and AFAIK there are several documented ways to get the extra buttons working with Linux.
    Cons: it doesn't have a built-in touchpad, and its numeric keypad wastes space you have to reach across for the mouse.
    But there could be other keyboards I haven't considered yet, so here is my ideal keyboard wishlist, ordered by priority: Linux compatible; basic ergonomic design, which entails a split, tilted keyboard and pads; advanced ergonomic design, like true-ergonomic's or Kinesis', where special keys (like Enter and Caps Lock) are placed symmetrically in the middle to be used by the thumbs; a built-in touchpad/trackball placed under the keyboard (I just love this on my notebook; I think it's pretty effective, since it lets my hand rest naturally every time I use it - any opinion on this?); high-quality switches, like Cherry's (unsure about this one); additional programmable keys placed near the usual ones, to simplify typing shortcuts. TIA, Andrea

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest? The point of this would be to let users maintain their own containers and still stay in sync with the master development environment, so they always have the latest and greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase: I would like several OpenVZ guests (developers', for example) whose /bin, /usr (and so on) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update the common packages for the environment shared by this whole group of guests. For what it's worth, we're running Debian 6. Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:
      Starting container ...
      vzquota : (error) Quota on syscall for id 1102: Device or resource busy
      vzquota : (error) Possible reasons:
      vzquota : (error) - Container's root is already mounted
      vzquota : (error) - there are opened files inside Container's private area
      vzquota : (error) - your current working directory is inside Container's
      vzquota : (error)   private area
      vzquota : (error) Currently used file(s):
      /var/lib/vz/private/1102/sbin
      /var/lib/vz/private/1102/usr
      /var/lib/vz/private/1102/lib
      /var/lib/vz/private/1102/bin
      vzquota on failed [3]
    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
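
    The usual way to add bind mounts to an OpenVZ container is a per-container mount script that vzctl runs after the container root is mounted but before it starts, so the quota check never sees bind mounts inside the private area. A hedged sketch for container 1102 (paths follow the Debian defaults, and binding from container 1101's mounted root is an assumption about where the master guest lives) would be /etc/vz/conf/1102.mount:

      #!/bin/bash
      # /etc/vz/conf/1102.mount - run by vzctl after this container's root is mounted
      source /etc/vz/vz.conf        # global defaults
      source "${VE_CONFFILE}"       # this container's settings, including VE_ROOT
      # bind the master guest's directories into this container's root
      for d in bin sbin lib usr; do
          mount -n --bind /var/lib/vz/root/1101/$d "${VE_ROOT}/$d"
      done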

    Read the article

  • Dynamically updating DNS records with NSUPDATE fails

    - by Thuy
    I've got my own nameserver, ns3.epnddns.com, and the domain epnddns.com. I wanted to try updating the records dynamically from home using nsupdate, but when I run nsupdate -k Kwww.epnddns.com.+157+17183.key I get the following errors:
      Kwww.epnddns.com.+157+17183.key:1: unknown option 'www.epnddns.com.'
      Kwww.epnddns.com.+157+17183.key:2: unexpected token near end of the file
      Kwww.epnddns.com.+157+17183.{private,key}: unexpected token
    I'm not sure why I get these errors, so I'll post my complete setup. I generated the keys on my home PC using dnssec-keygen -a HMAC-MD5 -b 128 -n HOST www.epnddns.com., created /var/named/, put the keys there and chmod'ed them to 600. I transferred the keys to my nameserver ns3.epnddns.com, created /var/named/ there as well, put the keys in it and chmod'ed them to 600. I created dnskeys.conf in /var/named and added:
      key www.epnddns.com. {
          algorithm hmac-md5;
          secret "my secret from the keys==";
      };
    (chmod'ed to 600), and then in /etc/bind/named.conf.local:
      include "/var/named/dnskeys.conf";
      zone "epnddns.com" {
          type master;
          file "/etc/bind/zones/epnddns.com.zone";
          allow-transfer { myhomeip; };  // my home IP, so not on the same network
          allow-update { key www.epnddns.com.; };
      };
    I restarted BIND without any error messages, so it seems to be working on the nameserver at least. But on my home PC, when I try to run nsupdate, I get those error messages. Thanks in advance for any help or insightful advice.
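
    For what it's worth, those errors are nsupdate failing to parse the file passed to -k. One way to take the key files out of the picture entirely (a hedged sketch; the secret and the test record are obviously placeholders) is to pass the key inline with -y and feed the update as a script:

      nsupdate -y 'hmac-md5:www.epnddns.com:PASTE_BASE64_SECRET_HERE==' <<'EOF'
      server ns3.epnddns.com
      zone epnddns.com
      update add test.epnddns.com. 300 A 203.0.113.10
      send
      EOF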

    Read the article

  • Remote Desktop Session Black after Minimize

    - by TorgoGuy
    PROBLEM: When I minimize a remote desktop session and restore it, the remote desktop screen shows up black. This only happens when connecting to one particular computer.
    DETAILS: If I start clicking around in the black area, portions of the screen start redrawing and showing up correctly. For example, if I leave a window open in the remote session and click where that window is located on the remote computer, then that window - and only that window - will redraw, and sometimes a portion of it (usually the toolbar) won't redraw. To clarify, the window only has to be minimized momentarily, so it doesn't seem to be a timeout issue. Clicking or typing in the remote session still causes the remote computer to respond appropriately. Disconnecting from the session and reconnecting restores the whole screen image, as does clicking all over the place in the black image (causing each section to redraw).
    CONFIGURATION: This problem only happens for me when connecting to one particular computer (a Windows 2000 Server box configured to allow remote administration) and only with certain client computers. I've tried seven different client computers with various versions of the Remote Desktop client (the OSes were Win2K, Server 2003, Server 2008, Windows 7 RC and three XP machines) and two of them exhibit the problem (one of the XP boxes and the Windows 7 one). Those same computers can RDP to other computers without problems.
    RESOLUTION ATTEMPTS: I have tried disabling the LOCAL screen saver, as mentioned on Technet; turning off bitmap caching in the client, as mentioned on many forums; updating to version 6.1 of the Remote Desktop client; using mRemote (I doubted this would work since it uses MS's code for connecting to RDP servers); and turning off all video acceleration.
    QUESTION: Any ideas on what is causing this?

    Read the article

  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain controller: BDC01 (192.168.9.2). Storage server: BrightonSAN1 (192.168.9.3). Domain: brighton.local.
    Last night I moved our users' home directories off of our domain controller onto a storage server using the MS FSMT. I'm getting a mixed bag of errors. The first is that some users cannot log on properly: they can't access logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address, as opposed to computer name, and manually running the logon.vbs script. The other error I'm getting is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with Full Control. I've checked each and every file and folder in one such user's home directory and they are indeed the owner, yet they are unable to write, although they can read the contents. I'm stumped. This isn't happening to all clients. I'm considering removing their AD accounts, backing up their folders and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.
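
    If the permissions really do look right in the GUI, it can be worth resetting ownership and ACLs on one affected home directory from the storage server and retesting; a hedged sketch (the path D:\Home\jsmith and the account BRIGHTON\jsmith are placeholders for one of the affected users):

      takeown /F "D:\Home\jsmith" /R /D Y
      icacls "D:\Home\jsmith" /reset /T
      icacls "D:\Home\jsmith" /grant "BRIGHTON\jsmith:(OI)(CI)F" /T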

    Read the article

  • Slow transfer speed between two servers

    - by Linux Guy
    I have two servers; both network cards are 10 Gbps. The bandwidth between the two servers is 10 Gbps; the outbound internet bandwidth is 500 Mbps. Both servers use public IP addresses, on both the public and the private network. Both servers transfer and accept connections on the nginx port, and server B is used for streaming media, YouTube-style video streams. I checked the transfer speed using the iperf utility, from server A to server B:
      # iperf -c 0.0.0.1 -p 8777
      ------------------------------------------------------------
      Client connecting to 0.0.0.1, TCP port 8777
      TCP window size: 85.3 KByte (default)
      ------------------------------------------------------------
      [ 3] local 0.0.0.0 port 38895 connected with 0.0.0.1 port 8777
      [ ID] Interval       Transfer     Bandwidth
      [ 3]  0.0-10.8 sec   528 KBytes   399 Kbits/sec
    My current connections on server B:
      # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c
      2072 TIME_WAIT
        28 SYN_RECV
         1 LISTEN
       189 LAST_ACK
       139 FIN_WAIT2
       373 FIN_WAIT1
      3381 ESTABLISHED
        34 CLOSING
    Server A network card information:
      Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes: 10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7) drv probe link
        Link detected: yes
    Server B network card information:
      Settings for eth2:
        Supported ports: [ FIBRE ]
        Supported link modes: 10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes: 10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7) drv probe link
        Link detected: yes
    The problem: as you can see from the iperf output, the transfer speed from server A to server B is slow. When I restart the network service the connection is OK; after about 2 minutes it gets slow again. How can I troubleshoot this slow-speed issue and fix it on server B? Note: if there are any other commands I should run on the servers for more information that might help resolve the problem, let me know in the comments.
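
    Two quick checks that help narrow this down (a hedged sketch; interface names are taken from the output above): run iperf with a larger window and parallel streams to see whether a single TCP stream is the bottleneck, and look at the NIC counters on server B for errors or drops that would point at the link rather than the application:

      iperf -c 0.0.0.1 -p 8777 -t 30 -P 4 -w 4M    # 4 parallel streams, 4 MB window, 30 s
      ethtool -S eth2 | egrep -i 'err|drop|fifo'   # per-NIC error/drop counters on server B
      ip -s link show eth2                         # kernel-level RX/TX errors and drops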

    Read the article

  • Apache subdomain not working

    - by tandu
    I'm running Apache on my local machine and I'm trying to create a subdomain, but it's not working. Here is what I have (stripped down):
      <VirtualHost *:80>
          DocumentRoot /var/www/one
          ServerName one.localhost
      </VirtualHost>
      <VirtualHost *:80>
          DocumentRoot /var/www/two
          ServerName two.localhost
      </VirtualHost>
    I recently added one. The two entry has been around for a while, and it still works fine (displays the web page when I go to two.localhost). In fact, I copied the entire two.localhost entry and simply changed two to one, but it's not working. I have tried each of the following:
      apachectl -k graceful
      apachectl -k restart
      /etc/init.d/apache2 restart
      /etc/init.d/apache2 stop && !#:0 start
    Apache will complain if /var/www/one does not exist, so I know it's doing something, but when I visit one.localhost in my browser, the browser complains that nothing is there. I put an index.html file there and also tried going to one.localhost/index.html directly, and the browser still won't find it. This is very perplexing, since the entry I copied from two.localhost is exactly the same. Not only that, but if something were wrong I would expect to get a 500 rather than the browser not being able to find anything. The error_log also has nothing extra.
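
    "The browser can't find anything" usually points at name resolution rather than Apache itself; a hedged quick check (assuming a Linux machine where /etc/hosts is consulted before DNS):

      apache2ctl -S                                    # confirm Apache actually registered the one.localhost vhost
      getent hosts one.localhost two.localhost         # does the new name resolve at all?
      echo '127.0.0.1 one.localhost' | sudo tee -a /etc/hosts   # add it locally if it does not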

    Read the article

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows Server 2008 R2 with the Web and FTP Server roles set up, which won't serve any content at all. Trying to connect from another host via telnet yields 'Connect failed':
      c:\>telnet devserver 80
      Connecting To devserver...Could not open connection to the host, on port 80: Connect failed
    Running netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); TcpView confirms this, listing no open connections on port 80. The following services related to the Web role are running: World Wide Web Publishing Service, Application Host Helper Service, Microsoft FTP Service (FTP connections to port 21 are accepted) and Windows Process Activation Service. The default website bindings are:
      Type              Host Name   Port   IP Address   Binding Information
      http                          80     *
      net.tcp                                            808:*
      net.pipe                                           *
      net.msmq                                           localhost
      msmq.formatname                                    localhost
    When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (the service is running, and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
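
    Since nothing at all is listening on port 80, two things worth checking are whether http.sys has been restricted to specific addresses and whether the site is actually started; a hedged sketch to run in an elevated prompt on the dev server:

      rem an empty IP listen list means "listen on all addresses"; a list that omits this server's IP blocks port 80
      netsh http show iplisten
      rem is anything at all listening on port 80 at the kernel level?
      netstat -ano | findstr :80
      rem is the Default Web Site actually in the Started state?
      %windir%\system32\inetsrv\appcmd list sites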

    Read the article

  • Installing Firefox with extensions/addons manually (not really auto install)

    - by BrownChiLD
    I've been reading up on creating Firefox installers, bundling them with add-ons, using scripts and command lines and a whole bunch of stuff... but it seems that going that route is just too complicated and time consuming. Since I don't mind a bit of manual copying, I was planning to do the following on my test machine:
      1. Install Firefox on the machine AND configure it the way I want it.
      2. Install add-ons AND set their configuration.
      3. Set advanced configuration for Firefox (about:config).
    Then, once I'm all set, I simply copy the contents of the Firefox profile folder (for this particular test it's ...\AppData\Local\Mozilla\Firefox\Profiles\6m0mef0s.default). For deployment, all I have to do is:
      1. Install the same version (offline installer) of Firefox I used.
      2. Overwrite the contents of the new profile folder (randomly named by the Firefox installer, as usual).
    This should set all my configs and add-ons, right? Or what other folders do I have to back up and copy manually into the new profile folder? I don't think I need to tinker with any registry entries, right? Anyway, if this works, though it's a bit manual, it's a whole lot simpler and more straightforward than fiddling with installers and packages, etc. PS: I do this a lot with other simple (and some complex) software that I use, and it seems to work fine for years; I'm just not sure about Firefox and how it's structured.
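
    As a hedged sketch of the deployment copy (C:\staging and the target folder name are placeholders; note that the profile Firefox actually loads is whichever one profiles.ini under %APPDATA%\Mozilla\Firefox points at, which is the Roaming rather than the Local AppData tree):

      rem copy the prepared profile's contents over the freshly created profile on the target machine
      robocopy "C:\staging\6m0mef0s.default" "%APPDATA%\Mozilla\Firefox\Profiles\zzzzzzzz.default" /E /COPY:DAT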

    Read the article

  • Apache Alias Isn't In Directory Listing

    - by Phunt
    I've got a site running on my home server that's just a front end for me to grab files remotely. There are no pages, just a directory listing (Options Indexes...). I wanted to add a link to a directory outside of the web root, so I made an alias. After a minute of dealing with permissions, I can now navigate to the directory by typing the URL into the browser, but the directory isn't listed in the root index. Is there a way to do this without creating a symlink in the root? Server: Ubuntu 11.04, Apache 2.2.19. Relevant vhost:
      <VirtualHost *:80>
          ServerName some.url.net
          DocumentRoot "/var/www/some.url.net"
          <Directory /var/www/some.url.net>
              Options Indexes FollowSymLinks
              AllowOverride None
              Order Allow,Deny
              Allow From All
              AuthType Basic
              AuthName "TPS Reports"
              AuthUserFile /usr/local/apache2/passwd/some.url.net
              Require user user1 user2
          </Directory>
          Alias /some_alias "/media/usb_drive/extra files"
          <Directory "/media/usb_drive/extra files">
              Options Indexes FollowSymLinks
              Order Allow,Deny
              Allow From All
          </Directory>
      </VirtualHost>
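
    mod_autoindex only lists entries that physically exist in the DocumentRoot directory, so an Alias will never show up in the listing. The usual workaround (a hedged sketch, and it does mean having a symlink in the root, which the post hoped to avoid) is a symlink combined with the FollowSymLinks option that is already set:

      sudo ln -s "/media/usb_drive/extra files" /var/www/some.url.net/some_alias

    The alternative that leaves the filesystem untouched is mod_autoindex's HeaderName directive, which injects a header file into the generated index where a plain link to /some_alias can live.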

    Read the article

  • Can't ping a DNS zone on Windows Server 2008 R2

    - by Roberto Fernandes
    I've just configured a Windows Server 2008 R2 machine, but I'm having a lot of problems with the DNS role. About the server configuration: name: fdserver; IP address: 192.168.0.10. I have a DNS zone called "fd.local"; this is my domain and it's working fine. I've also created a zone called "fdserver", and inside this zone a record (A) with "*" as the host. Because this is a web server, I've configured Apache so that if you enter something like "site.fdserver" it points you to the "site" folder. This works, but ONLY on the server itself. This server is also a DNS server and has three DNS entries: 192.168.0.10 (its own IP), 8.8.8.8 and 8.8.4.4 (Google public DNS). Now the problems start... most of the computers on my network CAN join the domain without problems, but they just CAN'T ping "something.fdserver". Now comes the strange part: if I remove the two secondary entries (8.8.8.8 and 8.8.4.4), it obviously stops being able to reach external websites (like microsoft.com), but the computers CAN then ping "something.fdserver". I don't know if I explained this correctly, and my English is terrible, but on the server everything works as it's supposed to, while on the workstation machines it only works if I remove the secondary DNS entries! If you need any details, just ask! Thanks!
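
    From one of the workstations, it helps to see which server actually answers for the wildcard record (a hedged sketch; "something" is just a placeholder name under the fdserver zone):

      rem ask the internal DNS server directly for the wildcard record
      nslookup something.fdserver 192.168.0.10
      rem ask Google DNS; it cannot know a private zone, so a failure here is expected
      nslookup something.fdserver 8.8.8.8
      rem confirm which DNS servers this client is really using, and in which order
      ipconfig /all

    If resolution only works once the public servers are removed, the usual culprit is that 8.8.8.8/8.8.4.4 are listed as client DNS servers alongside the internal one; Windows does not query the servers in strict order for every lookup, so private zones should be resolved only by the internal DNS server, with the public resolvers configured as forwarders on it instead.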

    Read the article

  • How to edit files directly on WebDAV in Windows

    - by phazei
    I have WebDAV set up with the cPanel web disk. I can connect to it through NetHood and I can drag and drop files to/from there. What I can't do is simply edit any of the files directly: I need to copy a file somewhere else, edit it, then copy it back. That's essentially what is needed with FTP, although smart FTP clients will monitor the file, which makes them easier than WebDAV in the state I'm currently using it in. I was under the impression that WebDAV was supposed to let me work on the files as if they were on a local drive, but nothing can actually open the files. How can I get more functionality out of it, or is this as good as it gets? I have tried net use q:\ https://myserver.com:2078 and net use q:\ '\myserver.com@SSL:2078\', but neither works; both only throw: "System error 67 has occurred. The network name cannot be found." I ultimately want to use TortoiseSVN with the WebDAV share so I can have my working copy running on the server.
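
    The two commands quoted fail partly because of the trailing backslash on q:\ and the mixed-up slashes; a hedged sketch of the UNC form the Windows WebClient service normally accepts for an SSL WebDAV share on a non-standard port (Q:, the webdisk path and cpaneluser are placeholders, and the WebClient service has to be running):

      net use Q: \\myserver.com@SSL@2078\webdisk /user:cpaneluser *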

    Read the article

  • MySQL Cluster not working on Ubuntu

    - by user53864
    I am unable to set up MySQL Cluster on Ubuntu servers. As a starting point I followed the link, but I have not been successful; the tarball version I downloaded is 6.3.45. Because I only wanted to test the cluster, the data node and SQL node share the same machines, but the SQL nodes never appear as connected in the management node console, which looks like this:
      [ndbd(NDB)] 2 node(s)
      id=2 @192.168.1.107 (Version: version number, Nodegroup: 0, Master)
      id=3 @192.168.1.108 (Version: version number, Nodegroup: 0)
      [ndb_mgmd(MGM)] 1 node(s)
      id=1 @192.168.1.105 (Version: version number)
      [mysqld(API)] 2 node(s)
      id=4 (not connected, accepting connect from 192.168.1.107)
      id=5 (not connected, accepting connect from 192.168.1.108)
    On all three machines mysql-server and mysql-client (apt-get install mysql-server mysql-client) were already installed; I stopped them completely and also removed them from system start-up. The mysqld now in use is the one from the extracted cluster tarball (/usr/local/mysql/support-files/mysql.server). As a test I created a test database on both data nodes, but the tables do not sync to the other node either. I have checked many guides and the configuration stays much the same in all of them, but something is going wrong somewhere. Is some extra package required? Could anyone help me here? I have been trying this for the past 3 days... Thank you!
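
    The "[mysqld(API)] ... not connected" lines usually mean the mysqld processes were started without the cluster engine or without being told where the management node is. A hedged sketch of the two my.cnf lines each SQL node needs (the management-node IP comes from the output above; the ndb_mgm path assumes the tarball layout under /usr/local/mysql):

      # add to the my.cnf read by the cluster mysqld on each SQL node (.107 and .108):
      [mysqld]
      ndbcluster
      ndb-connectstring=192.168.1.105
      # then restart mysqld on both nodes and re-check from the management node:
      /usr/local/mysql/bin/ndb_mgm -e show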

    Read the article

  • WAMP starts Apache or MySQL, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host; I'll call him WampUser. When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either one successfully; when I then attempt to start the other, I get Error 1053: "The service did not respond to the start or control request in a timely fashion." This error appears immediately; there is no timeout. When at least one of the services is set to start as LocalService, both start fine. I can therefore solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place, especially because this situation does not occur on other servers; something I've done to this server has made these two services exclusive when running as the same user. Some miscellaneous data points:
    - I am using Windows Server 2003.
    - I've granted WampUser recursive Full Control over the C:\wamp directory.
    - Nothing appears in the event log after the service fails.
    - No log entries appear in either the MySQL log or the Apache error log.
    - Neither application appears in the process list when the appropriate service is stopped.
    Any ideas?

    Read the article

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance, under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of a desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even joined to a domain. I cannot find anything in any firewall or security configuration that would prevent this, but maybe I have overlooked something; can anyone be of any assistance?
      System.Net.WebException: Unable to connect to the remote server --- System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443
        at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
        at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
        --- End of inner exception stack trace ---
        at System.Net.HttpWebRequest.GetResponse()

    Read the article

  • Windows 7 wireless not seeing any networks

    - by jkohlhepp
    I think I have managed to confuse Windows 7. When I did the install, I had the network cable plugged into my router, but the wireless card was also enabled. During the install, Windows 7 seemed to see my wireless network and even asked me for the WEP key. I know it used the WEP key, because I initially entered an invalid one and it gave me an error. Then the network said "SoAndSoWireless Connected". However, when I unplug or disable my wired network card, I have no internet and it can't see any networks. When I plug the wired network card back in, it says "SoAndSoWireless Connected" again. Under Network and Internet > Network Connections I have "Local Area Connection" and "Wireless Network Connection"; the wired one's status is "SoAndSoWireless" and the wireless one's status is "Not Connected". Also, the wireless connection can't seem to see any other wireless networks in the area, and I know there are tons; my neighbors have several. I've somehow confused Windows 7 into thinking that my wired network card is my wireless card, or something like that. Any ideas on how to un-confuse it? This is a desktop machine, by the way, if it matters. EDIT: Ah, I think part of the problem is that I accidentally gave my network the same name as the wireless network being broadcast by the wireless router, so that might be why that name shows on the hard-wired connection. Perhaps the drivers for the wireless card are simply not working at all.

    Read the article

  • Needing a storage integrity (write/read) test for Bash

    - by Mr. Bash
    I'm in need of shell scripts / Bash commands to verify the data integrity of local hard drives, USB drives, etc. - something like the well-known h2testw (www.heise.de/download/h2testw), or at least something commonly available in the repositories. (h2testw writes a specific data string over and over onto the medium, then reads it back to verify it was written correctly, and displays the write/read time and speed.) Please, no dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k since that won't verify whether everything was written correctly; it only tests whether reads and writes to the device succeed at all. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow, and I don't know exactly what it writes or whether it accounts for wear-levelling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled; designed after h2testw, the concept sounds interesting, I'd just rather have it as a ready-to-go Bash script.
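
    As a rough ready-to-go sketch of the h2testw idea (assumptions: the device under test is mounted at $MNT, there is enough free space for the test file, and you have root so the cache drop forces the read-back to come from the medium rather than RAM):

      #!/bin/bash
      set -e
      MNT=/media/usbdrive      # mount point of the device under test
      SIZE_MB=1024             # size of the test file in MiB
      dd if=/dev/urandom of=/tmp/testpattern.bin bs=1M count="$SIZE_MB"
      WRITE_SUM=$(sha256sum /tmp/testpattern.bin | awk '{print $1}')
      cp /tmp/testpattern.bin "$MNT/testpattern.bin"
      sync
      echo 3 > /proc/sys/vm/drop_caches    # make the read-back hit the device, not the page cache
      READ_SUM=$(sha256sum "$MNT/testpattern.bin" | awk '{print $1}')
      [ "$WRITE_SUM" = "$READ_SUM" ] && echo "OK: data read back intact" || echo "FAIL: checksums differ"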

    Read the article
