Search Results

Search found 5762 results on 231 pages for 'clients'.

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The background: I have Windows Server 2008 Enterprise with 25 user CALs. It hosts a domain with all users and a network-shared HP printer. The server has two network cards, and both cards as well as all client machines use the 192.168.1.* addressing scheme with subnet mask 255.255.255.0. Of the two network cards, 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS; the second card (192.168.1.233) has its default gateway and DNS address set to 192.168.1.231. The server has three hard disks (500 GB, 500 GB and 1 TB), partitioned as (C, D, E), (F, G) and (K), with partition K holding all user data in various shared folders. Each of these folders on partition K is mapped onto each user's computer according to the access rights granted.
    The problem: The server was installed about 6 months ago and has not hung or given any problem even once. All client computers can run the web-based software via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when a client computer tries to browse its network mappings, it hangs. There is no fixed pattern; it may happen after running smoothly for, say, 3 days. On each client machine the network settings are: IP address 192.168.1.* (where * is 1, 2, 3, ...); subnet mask 255.255.255.0; default gateway 192.168.1.231 (which is a server card and the DNS address); preferred DNS server 192.168.1.231. In the Advanced tab under WINS, LMHOSTS lookup is unticked and "Default" is selected. Ideally I would have liked to disable NetBIOS over TCP/IP, because disabling NetBIOS would drastically reduce the NetBIOS name-resolution broadcasts sent to every computer on the network, but some network printers cannot be accessed when that option is selected. On the server I have WINS running; I have scavenged records, verified database integrity and removed tombstoned records. The only critical errors, shown once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines hang when they try to browse the mapped network shared folders on the K drive. Kindly advise.

  • Microsoft signed drivers appear as publisher not verified

    - by Priyanka Gupta
    Task at hand: Microsoft-sign drivers on Windows 7. I Microsoft-signed my driver package 3 times, each time thinking I might have missed a step or something. However, I cannot seem to get rid of the Windows Security error message "Windows can't verify the publisher of this driver software". This is not the first time I have signed driver packages; I was able to sign other driver packages successfully a few months ago. With this driver package, however, I keep getting the Windows Security dialog box. Here's the procedure I follow: create a new .cat file using the Inf2Cat tool; self-sign the driver using a VeriSign Class 3 Public Primary Certification Authority - G5 certificate; run the Microsoft tests on DTM servers and clients with the devices that use this driver; create the WLK submission package; self-sign the .cab file; submit the package for certification. The catalog file that comes back after successfully passing the tests lists the name of the signer as "Microsoft Windows Hardware Compatibility Publisher". When I check the validity of the signature using SignTool, it says the signature is valid. However, when I try to install the driver with the newly signed catalog file, Windows complains. Any ideas?
    Edit 11/12/2012 (reply to Eugene's comment): Thanks for the help, Eugene. Yes, I did sign two other driver packages before; one of them was a modified version of the WinUSB driver. I am using the same certificate I used when I signed those two driver packages a few months ago. It costs $250 per signing from Microsoft, and I would think Microsoft would complain about it during certification if the certificate were wrong. I use the following command to self-sign the .cat file (I don't have to specify the certificate name as there's only one certificate in the directory): Signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat
    Below is the result from running the verify command on the Microsoft-signed OurCatalogFile.cat: C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64\signtool verify /v "C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9 Signing Certificate Chain: Issued to: Microsoft Root Certificate Authority Issued by: Microsoft Root Certificate Authority Expires: Sun May 09 18:28:13 2021 SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072 Issued to: Microsoft Windows Hardware Compatibility PCA Issued by: Microsoft Root Certificate Authority Expires: Thu Jun 04 16:15:46 2020 SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78 Issued to: Microsoft Windows Hardware Compatibility Publisher Issued by: Microsoft Windows Hardware Compatibility PCA Expires: Wed Sep 18 18:20:55 2013 SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E The signature is timestamped: Tue Nov 06 11:26:48 2012 Timestamp Verified by: Issued to: Microsoft Root Authority Issued by: Microsoft Root Authority Expires: Thu Dec 31 02:00:00 2020 SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419 Issued to: Microsoft Timestamping PCA Issued by: Microsoft Root Authority Expires: Sun Sep 15 02:00:00 2019 SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245 Issued to: Microsoft Time-Stamp Service Issued by: Microsoft Timestamping PCA Expires: Tue Apr 09 16:53:56 2013 SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat Number of files successfully verified: 1 Number of warnings: 0 Number of errors: 0
    Thank you!
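
    One check worth trying (a sketch, not a confirmed diagnosis): SignTool's default verification policy is not the same as the kernel-mode driver-signing policy, so a catalog that passes a plain "signtool verify" can still be rejected when the driver is installed. Verifying against the catalog with the /kp switch shows the chain Windows evaluates for driver installs; the driver file name below is a placeholder.

        rem Verify against the kernel-mode driver-signing policy rather than the default Authenticode policy
        rem "OurDriver.sys" is a placeholder for one of the files listed in the catalog
        signtool verify /v /kp /c OurCatalogFile.cat OurDriver.sys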

  • OpenWRT based gateway with dnsmasq and internal server with bind

    - by Peter
    I have a router based on OpenWrt running dnsmasq 2.59. Inside my local area network I have a BIND name server. This server has internal and external views for a couple of my domains. My router forwards port 53 TCP and UDP from the outside IP (router WAN) to this server, and for external clients everything works fine. In order to organize the internal view, I decided to add these exceptions to /etc/dnsmasq.conf (192.168.1.1 is the IP address of the NS server): server=/mydomain1.com/192.168.1.1 server=/mydomain2.com/192.168.1.1 server=/mydomain3.com/192.168.1.1
    According to the dnsmasq man page: "More specific domains take precedence over less specific domains, so: --server=/google.com/1.2.3.4 --server=/www.google.com/2.3.4.5 will send queries for *.google.com to 1.2.3.4, except *www.google.com, which will go to 2.3.4.5" - so this domain name with all its sub-domains is supposed to be forwarded to my NS server. Everything works (SOA, NS, MX, CNAME, TXT, SRV etc.) except for the A record: # nslookup -type=a mydomain1.com Server: 192.168.1.100 Address: 192.168.1.100#53 *** Can't find mydomain1.com: No answer (192.168.1.100 is the IP address of my router running dnsmasq). However, I can get the answer for the TXT record query: # nslookup -type=txt mydomain1.com Server: 192.168.1.100 Address: 192.168.1.100#53 mydomain1.com text = "v=spf1 include:mydomain1.com -all"
    When I query the local IP of my NS server directly (bypassing dnsmasq), the results are: # nslookup -type=a mydomain1.com 192.168.1.1 Server: 192.168.1.1 Address: 192.168.1.1#53 Name: mydomain1.com Address: 192.168.1.1
    There is a similar situation with the MX record: C:\>nslookup -type=mx mydomain1.com Server: router.lan Address: 192.168.1.100 mydomain1.com MX preference = 10, mail exchanger = mail.mydomain1.com mydomain1.com nameserver = ns.mydomain1.com mail.mydomain1.com internet address = 192.168.1.1 ns.mydomain1.com internet address = 192.168.1.1 C:\>nslookup -type=a mail.mydomain1.com Server: router.lan Address: 192.168.1.100 *** No address (A) records available for mail.mydomain1.com
    This is the dig result: # dig +nocmd mydomain1.com any +multiline +noall +answer mydomain1.com. 86400 IN SOA ns.mydomain1.com. hostmaster.mydomain1.com. ( 121204007 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 604800 ; expire (1 week) 3600 ; minimum (1 hour) ) mydomain1.com. 86400 IN NS ns.mydomain1.com. mydomain1.com. 86400 IN A 192.168.1.1 mydomain1.com. 604800 IN MX 10 mail.mydomain1.com. mydomain1.com. 3600 IN TXT "v=spf1 include:mydomain1.com -all"
    When I try to ping: # ping mydomain1.com ping: cannot resolve mydomain1.com: Unknown host. Is this a bug in dnsmasq 2.59? How can I fix this?
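
    A guess rather than a confirmed diagnosis: the pattern "TXT and MX answers come back, but A records pointing at 192.168.x.x vanish" matches dnsmasq's DNS-rebind protection (stop-dns-rebind), which OpenWrt enables by default and which silently drops upstream A records in private address ranges. If that is what is happening here, whitelisting the affected zones (or disabling the protection) in /etc/dnsmasq.conf should restore the answers:

        # Exempt these zones from rebind protection (assumes stop-dns-rebind is active, as on stock OpenWrt)
        rebind-domain-ok=/mydomain1.com/
        rebind-domain-ok=/mydomain2.com/
        rebind-domain-ok=/mydomain3.com/
        # On OpenWrt the same exemption can usually be set in /etc/config/dhcp:  list rebind_domain 'mydomain1.com'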

  • SQL 2008 R2 Named Instance Client Connectivity Issues?

    - by Jerry Dodge
    We're upgrading our software from SQL 2000 to 2008 R2. Our customers will be installing an update which uninstalls 2000 and installs 2008 R2 under the same instance, so if no named instance existed, no instance name will be set (default). However, the problem starts with the customers which have a named SQL instance. Starting with 2008 R2 (I'm not sure about earlier versions), for some reason a client connecting to the server by its instance name is unsuccessful. I'm testing from Management Studio - if I can't connect with it, nothing can connect. I browse network servers and find the specific server\instance in the list, but upon trying to connect to an instance name like MyServer\INST, I get: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)". I do in fact have the TCP/IP and Named Pipes protocols enabled; this is the first thing I did. When I connect to the server using a comma (,) and port number, like MyServer,49195, it works just fine. So it appears that client computers are simply unable to resolve the instance names. This has happened on all our installations of SQL 2008 R2 and from all client computers, including Win 7, XP, Vista, Server 2008, and Server 2003. We never experienced such issues on earlier versions of SQL. The problem even persists if the firewalls and antiviruses are all disabled. Now, this is a large update which we will be distributing soon to all our customers, and we want to minimize the interaction they need with us to get it installed. We absolutely hate the idea of using a port number, because it will always be different, and we would have to modify each client to point to this server/port; some of our customers may have hundreds of client computers. How do I make client connections to a named SQL instance work again? After all, this is the whole purpose of named instances, and if a client can't connect to the instance by its name, then what is it even named for?
    EDIT: It was mentioned to make sure SQL Browser is running, so I checked, and it is running. The server is also able to connect to itself (locally) - just external connections are refused.
    UPDATE: After more careful checking, I learned the firewall wasn't completely disabled when testing; upon disabling it completely, this works. So it appears the firewall was blocking external clients from reaching the SQL Browser service.
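
    For reference, a sketch of the firewall openings that usually make named-instance resolution work from outside (run on the server, Windows Server 2008 or later; the sqlservr.exe path is an assumption and varies per instance):

        rem Let the SQL Server Browser service answer instance-name lookups (UDP 1434)
        netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434
        rem Allow the named instance itself (it listens on a dynamic TCP port) by program path
        netsh advfirewall firewall add rule name="SQL Server named instance" dir=in action=allow program="C:\Program Files\Microsoft SQL Server\MSSQL10_50.INST\MSSQL\Binn\sqlservr.exe" enable=yes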

  • Snow Leopard can see Windows shares in Finder but can't connect

    - by Randy Miller
    I have an iMac with the latest version of Snow Leopard on it. I have a NAS drive and a Windows machine that both show up in the Finder's 'Shared' section. However, if I click on them, Finder says "Connection Failed". Clicking on 'Connect As...' gives an error dialog that says "The server 'blah' may not exist or it is unavailable at this time." Points of interest: All machines are receiving their IP/DNS info from the router using DHCP. I have a Mac Mini on the same network that connects to the NAS drive and windows machine perfectly with no config (i.e. worked out of the box). Both Macs are on the same version of Snow Leopard. There is no password required to access the NAS share. I've never setup a WINS server on any machines and all machines are using 'workgroup' by default. I've tried putting "workgroup" in the Mac's workgroup entry and have tried leaving it blank, neither solves the problem. Here are some things I have tried: Finder-Connect To Server: smb:///share. This works, but by name does not. Terminal-mount_smbfs //@/share share. This also works by ip, but not be name, resulting in "mount_smbfs: server connection failed: No route to host". If I put the IP address of the NAS in the WINS server entry in the Mac's network setup, I can connect by name. It obviously seems to be a name resolution error, but I can't figure out why. The only thing that has changed since it used to work is that I got a new router that now gives out DHCP (all machines are dhcp clients) addresses of 192.168.x.x, but used to be 10.0.x.x. I've grep'd through everything that might have saved that old address, but can't find anything. It's also worth noting that the second Mac (the one that connects successfully) was added to the network after the router change. Please let me know if there are additional points of information needed to troubleshoot this further. Thanks, Randy

  • Secure openVPN using IPTABLES

    - by bob franklin smith harriet
    Hey, I set up an OpenVPN server and it works OK. The next step is to secure it; I opted to use iptables to only allow certain connections through, but so far it is not working. I want to enable access to the network behind my OpenVPN server, and allow other services (web access). When iptables is disabled or set to allow all, this works fine; with the following rules it does not. Also note, I already configured OpenVPN itself to do what I want and it works fine; it only fails when iptables is started. Any help telling me why this isn't working will be appreciated. These are the lines that I added in accordance with OpenVPN's recommendations; unfortunately, testing shows that they are required. They seem incredibly insecure though - is there any way to get around using them? # Allow TUN interface connections to OpenVPN server -A INPUT -i tun+ -j ACCEPT # Allow TUN interface connections to be forwarded through other interfaces -A FORWARD -i tun+ -j ACCEPT # Allow TAP interface connections to OpenVPN server -A INPUT -i tap+ -j ACCEPT # Allow TAP interface connections to be forwarded through other interfaces -A FORWARD -i tap+ -j ACCEPT
    These are the new chains and commands I added to restrict access as much as possible. Unfortunately, with these enabled, all that happens is the OpenVPN connection establishes fine, and then there is no access to the rest of the network behind the OpenVPN server. Note I am configuring the main iptables file, and I am paranoid, so all ports and IP addresses are altered; the -N lines etc. appear before this, so ignore that they don't appear here. I added some explanations of what I intended these rules to do, so you don't waste time figuring out where I went wrong: #accepts the vpn over port 1192 -A INPUT -p udp -m udp --dport 1192 -j ACCEPT -A INPUT -j INPUT-FIREWALL -A OUTPUT -j ACCEPT #packets that are to be forwarded from the 10.10.1.0 network (all OpenVPN clients) to the internal network (192.168.5.0) jump to the [sic] FOWARD-FIREWALL chain -A FORWARD -s 10.10.1.0/24 -d 192.168.5.0/24 -j FOWARD-FIREWALL #same as above, except for a different internal network -A FORWARD -s 10.10.1.0/24 -d 10.100.5.0/24 -j FOWARD-FIREWALL # reject any not from either of those two ranges -A FORWARD -j REJECT -A INPUT-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT-FIREWALL -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT-FIREWALL -j REJECT -A FOWARD-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT #80, 443 and 53 are accepted -A FOWARD-FIREWALL -m tcp -p tcp --dport 80 -j ACCEPT -A FOWARD-FIREWALL -m tcp -p tcp --dport 443 -j ACCEPT #192.168.5.150 = OpenVPN server -A FOWARD-FIREWALL -m tcp -p tcp -d 192.168.5.150 --dport 53 -j ACCEPT -A FOWARD-FIREWALL -m udp -p udp -d 192.168.5.150 --dport 53 -j ACCEPT -A FOWARD-FIREWALL -j REJECT COMMIT
    Now I wait :D
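
    A guess at the missing piece, based only on the rules shown: the FORWARD chain accepts new connections from 10.10.1.0/24 towards the internal networks, but the reply packets coming back from 192.168.5.0/24 (or 10.100.5.0/24) towards the VPN clients match neither of the -s 10.10.1.0/24 rules and fall through to the final REJECT. If that is the case, accepting established/related traffic early in FORWARD (before the REJECT) should let the replies back through:

        # Allow reply traffic for connections the VPN clients have already opened
        -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT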

  • Erratic DNS name resolution

    - by alex
    Hi all, we host a website for a client (blog.foobar.es). We do not manage foobar.es's DNS setup; we just told them to point blog.foobar.es to our web server's IP. We have noticed that sometimes we cannot browse to blog.foobar.es, even though we can browse to other sites on that server. Troubleshooting a bit using host(1) yields something funny: $ host blog.foobar.es 8.8.8.8 Using domain server: Name: 8.8.8.8 Address: 8.8.8.8#53 Aliases: Host blog.foobar.es not found: 3(NXDOMAIN) - 8.8.8.8 being one of Google's public DNS servers. However, sometimes the same server resolves the name correctly (!). Another funny thing is that our ISP's DNS servers sometimes say: $ host blog.foobar.es 80.58.61.250 Using domain server: Name: 80.58.61.250 Address: 80.58.61.250#53 Aliases: blog.foobar.es has address x.x.x.x Host blog.foobar.es not found: 3(NXDOMAIN) - which I don't really understand. I've dug around using dig(1), and have noticed they've set up a SOA record for foobar.es: $ dig foobar.es ; <<>> DiG 9.7.0-P1 <<>> foobar.es ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59824 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;foobar.es. IN A ;; AUTHORITY SECTION: foobar.es. 86400 IN SOA dns1.provider.es. root.dns1.provider.es. 2011030301 86400 7200 2592000 172800 ;; Query time: 78 msec ;; SERVER: 80.58.61.250#53(80.58.61.250) ;; WHEN: Thu Mar 3 16:16:19 2011 ;; MSG SIZE rcvd: 78 ... which I'm completely unfamiliar with. Ideas? We can't really do much, as we do not control the DNS, but we'd like to point our clients in the right direction...

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: Up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then, a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But, it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same with http works fine. That distinction (SSL vs. Non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or html files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, II6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): Frequent timeouts on PHP pages. Firefox 2, Firefox 3: No timeouts. Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP web app to a new IIS7-based server. The site contains some .js files which hold JavaScript but also classic ASP in <% ... %> tags, containing a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what such a file could look like: var arrHOFFSET = -1; var arrLeft ="<"; var arrRight = ">"; <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %> addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","",""); <% Else %> <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %> <% Else %> addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","",""); <% End If %> <% End If %> defineSubmenuProperties(135,"center","center",-3,0,"","","","","","","");
    Currently this file (named custom.js, for example) starts throwing JavaScript errors, because the server doesn't seem to recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and be run through the ASP parser, but I am not sure how to go about doing this. Here is what I've tried so far. Under the server node in IIS, under Handler Mappings, I created a new script map with the following settings: Request path: *.js; Executable: C:\Windows\System32\inetsrv\asp.dll; Name: ASPClassicInJSFiles; Mapping: Invoke handler only if request is mapped to: File; Verbs: All verbs; Access: Script. I also created a similar handler under the site node itself. Under MIME Types, .js is defined as application/x-javascript. None of these work. If I simply rename the file to have an .asp extension then things work; however, this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.
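
    For what it's worth, here is the same script map expressed as an appcmd command (a sketch only - the site name and asp.dll path are assumptions based on a default installation); having it in this form makes it easier to compare against what the GUI actually wrote into web.config / applicationHost.config:

        rem Map *.js to the classic ASP ISAPI module for one site
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/handlers /+"[name='ASPClassicInJSFiles',path='*.js',verb='*',modules='IsapiModule',scriptProcessor='%windir%\system32\inetsrv\asp.dll',resourceType='Unspecified']"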

  • Cisco ASA SSL VPN options?

    - by JonH
    Disclaimer: I am not a network admin so I may be wrong here but I thought asking here would help. I'm a developer mainly on the .net framework as well as helping get a mobile intranet app working. Because this app is only allowed to be used on our network I can easily run this app on our wireless network connection within our building. All is fine and dandy but we'd also like to be able to run this mobile app at say a customer plant using VPN software. I thought surely this could be easy as we exclusively use Samsung s4 phones so I thought I'd download Cisco's Samsung any connect software to allow us to VPN...its right on the play store. Sure enough it doesn't work. I mention it to our network admin who says not possible since we have old technology that doesn't support SSL. He mentions we'd have to upgrade all of our hardware, the firewall, etc. to get this to work. We really need VPN on our phones not only for this app but other internal apps, etc. He did mention the following: We can’t upgrade the software on our ASA, because we don’t have enough memory for the new version.  (the asa is very old).  We can’t add more memory, so we would have to get a new firewall, which I have been told I cannot do. In addition he also mentioned: The Samsung AnyConnect client uses SSL to connect.  With the current (old) version of software that our firewall is running, the SSL connections are unreliable.  We need different hardware in order to upgrade our firewall, which we are unable to attain at this time.  This is the same reason that Windows 8 clients are not able to connect. I am curious hence me asking. vpns seem to be fairly simple to setup. What other options do I have aside from making this a public site or web service that consumes this data over the internet as this is a complete no no. What can we do to make this work without that much effort or cost.

  • How to upgrade a 1.4.3 TortoiseSVN-created repository to 1.6.x?

    - by SiegeX
    A few years ago, TortoiseSVN 1.4.3 was deployed to our software development team, and we are now looking at upgrading the client to the latest 1.6.x version. I had hoped this upgrade would be transparent, with the additional features and modifications being client-side. For the most part this was true, except for a very important feature -- merging. When I try to merge a feature branch back into trunk I get a show-stopping "Merge tracking not supported" error. Here are some facts worth noting: When the repo was first created (before I was on board), it was created via the TortoiseSVN client itself. We do not have an 'svn server daemon' per se; rather, the repository folders/database reside on a shared folder that is accessible from our workstation machines via file:///. This was actually an eye opener for me; I had always thought there was some SVN server daemon we were talking to. We do not have any access to the underlying machine hosting the SVN share other than the ability to read/write to the share itself. I don't even know what OS the machine is running. This share server was chosen because its drives are backed up nightly by our IT group. In all honesty, we really don't need the merge tracking feature, although it would be nice to have. For the time being it would be sufficient to be able to use a 1.6.x TortoiseSVN client on the 1.4.3 repository and have it merge (sans tracking) without error. So now the question becomes: how does one upgrade a client-created 1.4.3 repo to a 1.6.x-compatible version without access to the underlying machine the repo resides on? I was hoping the TortoiseSVN client itself had the ability to do this, but that does not appear to be the case. Will I be forced to copy the entire repo over to my local drive, run some svn commands to upgrade the repo locally, then copy the repo back to the share point? If so, will doing this break any compatibility with the 1.4.3 clients, in case we can't upgrade them all at the same time? Thanks for the help.
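
    For reference, the offline upgrade usually looks roughly like the sketch below (paths are placeholders). One compatibility caveat worth verifying first: once a repository is in the 1.5+ on-disk format, 1.4.x clients can no longer open it directly over file://, so a mixed-client transition period may not be possible with this access method.

        # Work on a local copy of the repository directory, never the live share
        svnadmin upgrade /path/to/local-copy              # in-place format upgrade (requires svn 1.5+ tools)
        # ...or the more conservative dump/load route:
        svnadmin dump /path/to/local-copy > repo.dump
        svnadmin create /path/to/new-repo
        svnadmin load /path/to/new-repo < repo.dump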

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows. Machine: ACT-Server; Domain: mydomain.example.com; OS: Windows 7 Enterprise 64-bit Edition; Windows Firewall configuration: File and Printer Sharing (SMB-In) is enabled for Public, Domain, and Private networks; ACT Log Share: ACT. Share permissions*: Everyone - Full Control; Administrator - Full Control; Domain Admins - Full Control; Administrators - Full Control; ANONYMOUS LOGON - Full Control. Folder permissions*: ANONYMOUS LOGON - Read, write & execute (this folder, subfolders, and files); Domain Admins - Full control (this folder, subfolders, and files); Everyone - Read, write & execute (this folder, subfolders, and files); Administrators - Full control (this folder, subfolders, and files); CREATOR OWNER - Full control (subfolders and files); SYSTEM - Full control (this folder, subfolders, and files); INTERACTIVE - Traverse folder / execute file, List folder / read data, Read attributes, Read extended attributes, Create files / write data, Create folders / append data, Write attributes, Write extended attributes, Delete subfolders and files, Delete, Read permissions (this folder, subfolders, and files); SERVICE - same as INTERACTIVE; BATCH - same as INTERACTIVE. *I am fully aware that these permissions are excessive, but that is beside the point of this question.
    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file sharing permission issue or a network access policy issue, but of course I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding from reading numerous pieces of documentation that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share. To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick) and used it to run explorer.exe. In that Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials; the goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the \\ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate firewall exceptions are enabled. Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect. What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?
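
    In case it helps, the switches that usually decide whether a null-session (ANONYMOUS LOGON) client can reach a share live in local security policy / the registry on the sharing host, rather than in the share ACLs. A sketch, assuming the share name is ACT (back up the key before changing it):

        rem Add the share to the list of shares reachable over a null session
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d ACT /f
        rem Related local security policies worth checking:
        rem   "Network access: Restrict anonymous access to Named Pipes and Shares"  = Disabled
        rem   "Network access: Let Everyone permissions apply to anonymous users"    = Enabled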

  • How do I configure a site in IIS 7 for SSL with a wildcard certificate?

    - by michielvoo
    We have a Windows 2008 server with IIS 7 to test sites we develop for our clients. Each site has a binding on a subdomain: clienta.example.com, clientb.example.com, clientc.example.com (* using example.com to protect the innocent). For one of these sites we now have to test whether it works over https, so I created a wildcard certificate request with *.example.com as the common name. I have received the certificate (issued by PositiveSSL SA) and completed the request; the certificate is now installed in IIS. Now I have added an https binding to the second site with the following settings: type: https; IP address: All Unassigned; Port: 443; Host name: clientb.example.com; SSL certificate: *.example.com. Browsing the site over regular http works fine. When I try to browse the site over https I get the following errors (depending on the browser used): Chrome: This webpage is not available - Error 102 (net::ERR_CONNECTION_REFUSED): Unknown error. Firefox: Unable to connect - Firefox can't establish a connection to the server at clientb.example.com (Firebug says Status: Aborted). Internet Explorer: Internet Explorer cannot display the webpage. I have checked Failed Request Tracing, and according to the log the request was completed with status 200. I have run the SSL Diagnostics Tool with the following result: System time: Fri, 04 Mar 2011 14:04:35 GMT Connecting to 192.168.2.95:443 Connected Handshake: 115 bytes sent Handshake: 3877 bytes received Handshake: 326 bytes sent Handshake: 59 bytes received Handshake succeeded Verifying server certificate, it might take a while... Server certificate name: *.example.com Server certificate subject: OU=Domain Control Validated, OU=PositiveSSL Wildcard, CN=*.example.com Server certificate issuer: C=GB, S=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=PositiveSSL CA Server certificate validity: From 2-3-2011 1:00:00 To 2-3-2012 0:59:59 HTTPS request: GET / HTTP/1.0 User-Agent: SSLDiag Accept:*/* HTTPS: 85 bytes of encrypted data sent HTTPS: 533 bytes of encrypted data received Status: HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found Content-Type: text/html; charset=us-ascii Server: Microsoft-HTTPAPI/2.0 Date: Fri, 04 Mar 2011 14:04:35 GMT Connection: close Content-Length: 315 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd"> <HTML><HEAD><TITLE>Not Found</TITLE> <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD> <BODY><h2>Not Found</h2> <hr><p>HTTP Error 404. The requested resource is not found.</p> </BODY></HTML> HTTPS: server disconnected Final handshake: 37 bytes sent successfully Q: What can I do to make this work?
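
    Two guesses that often explain this exact picture (handshake succeeds in the local SSL diagnostics, browsers get "connection refused"): the https binding with a host header may not have been registered the way the GUI suggests, and/or inbound 443 is simply blocked. A sketch of checking and setting both from the command line (the site name is a placeholder):

        rem Add (or confirm) an SSL binding with a host header for the site
        %windir%\system32\inetsrv\appcmd.exe set site /site.name:"clientb" /+bindings.[protocol='https',bindingInformation='*:443:clientb.example.com']
        rem See what http.sys actually has bound for SSL
        netsh http show sslcert
        rem Make sure inbound 443 is allowed through Windows Firewall
        netsh advfirewall firewall add rule name="HTTPS (TCP 443)" dir=in action=allow protocol=TCP localport=443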

  • Hybrid gmail MX + postfix for local accounts

    - by krunk
    Here's the setup: We have a domain, mydomain.com. Everything is on our own server, except general email accounts which are through gmail. Currently gmail is set as the MX record. The server also has various email aliases it needs to support for bug trackers and such. e.g. [email protected] |/path/to/issuetracker.script I'm struggling with a setup that allows the following, both locally and from user's email clients. guser1 - has a gmail account and a local account guser2 - only has a gmail account bugs - has a pipe alias in /etc/aliases for issue tracker Scenarios mail to [email protected] from local host (crons and such) needs to go to gmail account mail to [email protected] from local host mail to [email protected] needs to be piped to the local issue tracker script So, the first stab was creating a transport map. In this scenario, the our server would be set as teh MX and guser* destined emails are sent to gmail. Put the gmail users in a map like so: [email protected] smtp:gmailsmtp:25 [email protected] smtp:gmailsmtp:25 Problems: Ignores extensions such as [email protected] Only works if append_at_myorigin = no (if set to yes, gmail refuses to connect with: E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused)) since append_at_myorigin is set to no, all received emails have (unknown sender) The second stab was to set explicit localhost aliases in /etc/aliases and do a domain wide forward on mydomain. This too requires setting the local server as the MX: root: root@localhost # transport mydomain.com smtp:gmailsmtp:25 Problems: * If I create a transport map for a domain that matches "$myhostname", the aliases file is never parsed. So when a local user (or daemon) sends an email like: mail -s "testing" root < text.txt Postfix ignores the /etc/alias entry and maps to [email protected] and attempts to send it to the gmail transport mapping. Third stab: Create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this domain to local server and leave the MX for mydomain.com to the Gmail server. Problems: * Does not solve the issue with local accounts. So when the bug tracker responds to an email from [email protected], it uses a local transport and the user never receives the email. 
% postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_at_myorigin = no append_dot_mydomain = no biff = no config_directory = /etc/postfix inet_interfaces = all mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 mydestination = $myhostname, localhost.$myhostname, localhost myhostname = mydomain.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname readme_directory = no recipient_delimiter = + relayhost = smtp_tls_cert_file = /etc/ssl/certs/kspace.pem smtp_tls_enforce_peername = no smtp_tls_key_file = /etc/ssl/certs/kspace.pem smtp_tls_note_starttls_offer = yes smtp_tls_scert_verifydepth = 5 smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_destination smtpd_tls_ask_ccert = yes smtpd_tls_req_ccert = no smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport
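
    Not a full answer, but one detail in the postconf output stands out: myhostname is set to mydomain.com and mydestination includes $myhostname, so Postfix treats the whole domain as local and will never hand guser mail to Gmail's MX. A common way to structure this kind of split - a sketch only, with placeholder host names - is to give the box its own host name, keep mydomain.com out of mydestination so mail to @mydomain.com follows the normal MX lookup to Gmail, and publish the pipe aliases under a name whose MX does point at this box (e.g. the bugs.mydomain.com subdomain from the third attempt):

        # /etc/postfix/main.cf (sketch)
        myhostname    = box.mydomain.com            # not mydomain.com itself
        mydestination = $myhostname, localhost.$myhostname, localhost, bugs.mydomain.com
        relayhost =                                 # empty: deliver by MX lookup, so user@mydomain.com goes to Gmail
        # /etc/aliases (the pipe aliases then answer at bugs@bugs.mydomain.com)
        # bugs: "|/path/to/issuetracker.script"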

  • SSH to an ubuntu machine using avahi

    - by tensaiji
    I have an ubuntu box that I connect to using avahi. Connecting to that box works fine for all services (I regularly use AFP, SSH and SMB on it) but I've noticed that whenever I connect to it from a mac using SSH (and using the ".local" dns name provided by avahi - eg. "ssh .local") SSH tries to connect using ipv6, which for some reason times out (after two minutes) then it tries ipv4 which connects immediately. I'd like to avoid this timeout, as it's really annoying for me and other users - if SSH tried ipv4 first or if ssh over ipv6 worked then that would solve the problem. But so far I've been unable to get either to work (the best I've managed is to specify the "-4" option to SSH to stop it from trying ipv6 at all). I'm using Ubuntu 10.04. Any solution has to be on the server (not the client) as there are multiple clients connecting. A possible complication might be that my LAN is set up to allow link-local ipv6 addresses only, but I have other servers (using Mac OS) that I can SSH into using ipv6) I suspect that the problem could be solved by either preventing avahi from broadcasting the ipv6 address, or by enabling ssh over ipv6, but so far as I can tell avahi is already configured not to broadcast the ipv6 address and sshd is configured to allow ipv6 connections! Here's my /etc/avahi/avahi-daemon.conf (I don't think I've changed anything from the ubuntu defaults) [server] #host-name=foo #domain-name=local #browse-domains=0pointer.de, zeroconf.org use-ipv4=yes use-ipv6=no #allow-interfaces=eth0 #deny-interfaces=eth1 #check-response-ttl=no #use-iff-running=no #enable-dbus=yes #disallow-other-stacks=no #allow-point-to-point=no [wide-area] enable-wide-area=yes [publish] #disable-publishing=no #disable-user-service-publishing=no #add-service-cookie=no #publish-addresses=yes #publish-hinfo=yes #publish-workstation=yes #publish-domain=yes #publish-dns-servers=192.168.50.1, 192.168.50.2 #publish-resolv-conf-dns-servers=yes #publish-aaaa-on-ipv4=yes #publish-a-on-ipv6=no [reflector] #enable-reflector=no #reflect-ipv=no [rlimits] #rlimit-as= rlimit-core=0 rlimit-data=4194304 rlimit-fsize=0 rlimit-nofile=300 rlimit-stack=4194304 rlimit-nproc=3 and here's my sshd_config (mainly updated to only allow pub/private keys): # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 180 PermitRootLogin no StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords 
PasswordAuthentication no AllowGroups sshusers # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server UsePAM yes Does anyone have any ideas that I can try, or has experienced anything similar?
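
    One server-side knob that matches the symptom (clients resolving an IPv6 address for the .local name even though the box effectively only serves IPv4): with use-ipv6=no, avahi can still advertise AAAA records over IPv4 mDNS unless publish-aaaa-on-ipv4 is switched off. A sketch of the two settings worth trying, on the assumption that IPv6 SSH is not actually wanted:

        # /etc/avahi/avahi-daemon.conf, [publish] section
        publish-aaaa-on-ipv4=no     # stop advertising the IPv6 address over IPv4 mDNS (restart avahi-daemon afterwards)

        # /etc/ssh/sshd_config, to restrict sshd itself to IPv4
        AddressFamily inet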

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The SVN repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold ASP code as well. Bonus points for a solution which works more or less the same way with ASP code on Windows Server 2008 R2. We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: developing on the dev's box - commit into the trunk - auto-deploy on staging system - testing on the staging system - merging into /branches/live/ - manual deployment on live system. This works very well for one-way changes; however, we have some trouble on every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. Finally, we have done the update via the WP GUI, restored the svn admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system). I'm currently thinking of the following: the htdocs of each website is an svn export; each website has an svn working copy beside the htdocs directory; a script "replays" the changes from htdocs into the working copy after an update in WP (rsync'ing the changed files to the working copy, rsync'ing new files and svn add'ing them, and finally svn delete'ing the deleted files). The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.). Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
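
    As a concrete starting point for the "replay" idea, a rough sketch of such a script (paths and exclude patterns are assumptions and would need adjusting):

        #!/bin/sh
        # Replay changes from the exported htdocs into the SVN working copy after a WordPress update (sketch)
        HTDOCS=/var/www/site/htdocs
        WC=/var/www/site/wc
        rsync -a --delete \
              --exclude=.svn \
              --exclude=wp-config.php \
              --exclude=wp-content/uploads/ \
              "$HTDOCS"/ "$WC"/
        cd "$WC" || exit 1
        # schedule new files for addition and vanished files for deletion (simple filenames without spaces assumed)
        svn status | awk '/^\?/ {print $2}' | xargs -r svn add
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete
        svn commit -m "Replay WordPress update from htdocs"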

  • Is Samba Server what I'm looking for, and if so, what do I need? (currently on DD-WRT Micro)

    - by Anthony
    I am really confused as to what Samba actually does and how it works. Here's what I'm hoping it does: I set up a Samba server on my LAN, and everyone will be able to see each other's shared files and swap them. But some of the documentation makes it sound like it will just allow Mac/Linux computers to see Windows computers. Other bits of the documentation make it sound more like a local server, where a Linux machine would install Samba and they would see everyone and be visible to everyone, but that won't change if anybody else can see each other. While still other things I've read make it seem more like a file-server, where everyone sees each other but file transfers are not peer-to-peer but instead need a host disk for files to act as go between. So, assuming I'm even in the right ballpark of what Samba does in terms of my goal of total cross-visibility on the network, I am left with needing to know what I'd need to set up the server and whether it can be done and is worth it... DD-WRT's article on Samba is a bit ambiguous. One second it sounds as if I can run the server on micro as long as it's set up on a usb drive, but then it also sounds like micro can't run it at all, etc. If I can run it from a usb-connected drive, I still need to know if the files are actually stored on that drive. The dd-wrt article mentions: You can run a Samba server on your main computer and run a client on your router (thus gaining writable storage for the router) or you can use Samba to share a drive connected (typically by USB) to the router among all the computers connected to your network. That one part "to share a drive...among all the computers" makes it sound like the only benefit I get from Samba is a share drive that any OS on the network can see, but they still won't see each other. But I'm very hopeful I'm misreading this. If the computers can see each other but still need the disk, how much space is generally a good idea? I'm basing this on the idea that the drive is a temporary store point. Obviously I'd have to get a drive big enough to store everything people wanted to share if the drive is a full-on file server. If I do have this all wrong, is there any software that achieves what I have in mind? Something that connects to the main router to bridge all clients?
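
    To the core question: Samba makes the Linux (or router) box itself serve Windows-style SMB/CIFS shares; it does not by itself make the other machines on the LAN visible to one another, so with the USB-drive approach the shared files really do live on that drive. A minimal share definition of the sort used on small routers (a sketch; the path, share name and the old "security = share" guest model are assumptions) looks like:

        # smb.conf (sketch)
        [global]
           workgroup = WORKGROUP
           security = share

        [shared]
           path = /mnt/usbdrive/share
           browseable = yes
           writeable = yes
           guest ok = yes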

  • How to resolve `bootpd` crashing constantly on Mac OS X 10.6.4 Snow Leopard Server?

    - by morgant
    I've got a Mac Pro running Mac OS X 10.6.4 Snow Leopard Server and it's recently started getting numerous 'kNetworkError's in Server Admin.app when viewing services. It's acting as a gateway w/NAT and has been so for quite some time. There is one glaring issue, bootpd crashes all the time with the following errors in `/var/log/system.log/: Aug 12 16:54:59 servername bootpd[3572]: server starting Aug 12 16:54:59 servername bootpd[3572]: server name servername.domain.tld Aug 12 16:54:59 servername bootpd[3572]: interface en0: ip 10.0.1.9 mask 255.255.255.0 Aug 12 16:54:59 servername bootpd[3572]: bsdpd: re-reading configuration Aug 12 16:54:59 servername bootpd[3572]: bsdpd: shadow file size will be set to 48 megabytes Aug 12 16:54:59 servername bootpd[3572]: bsdpd: age time 00:15:00 Aug 12 16:54:59 servername bootpd[3572]: [3572] detected buffer overflow Aug 12 16:54:59 servername com.apple.launchd[1] (com.apple.bootpd[3572]): Job appears to have crashed: Abort trap Aug 12 16:54:59 servername com.apple.ReportCrash.Root[3571]: 2010-08-12 16:54:59.828 ReportCrash[3571:2807] Saved crash report for bootpd[3572] version ??? (???) to /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash It is correctly configured to serve DHCP through en1 (not en0), the "LAN" port. This happens even with no hardware (even switches) connected to the "LAN" port. There are no DHCP clients listed. Oddly, the "Overview" shows 1 static map, but nothing is listed under "Static Maps" and there are no "Computers" in Open Directory. /var/db/dhcp_leases is empty. /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash is as follows: Process: bootpd [3572] Path: /usr/libexec/bootpd Identifier: bootpd Version: ??? (???) Code Type: X86-64 (Native) Parent Process: launchd [1] Date/Time: 2010-08-12 16:54:59.713 -0400 OS Version: Mac OS X Server 10.6.4 (10F569) Report Version: 6 Exception Type: EXC_CRASH (SIGABRT) Exception Codes: 0x0000000000000000, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Application Specific Information: __abort() called Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 libSystem.B.dylib 0x00007fff803c13d6 __kill + 10 1 libSystem.B.dylib 0x00007fff80461913 __abort + 103 2 libSystem.B.dylib 0x00007fff80456157 mach_msg_receive + 0 3 libSystem.B.dylib 0x00007fff803b92cf __strncpy_chk + 14 4 bootpd 0x0000000100014e5d PLCache_read + 782 5 bootpd 0x0000000100004a3d BSDPClients_init + 68 6 bootpd 0x00000001000053b5 bsdp_init + 2396 7 bootpd 0x000000010000200b S_update_services + 1228 8 bootpd 0x0000000100002344 S_server_loop + 571 9 bootpd 0x0000000100003963 main + 1766 10 bootpd 0x0000000100000984 start + 52 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x0000000000000000 rbx: 0x00007fff5fbfe220 rcx: 0x00007fff5fbfe218 rdx: 0x0000000000000000 rdi: 0x0000000000000df4 rsi: 0x0000000000000006 rbp: 0x00007fff5fbfe240 rsp: 0x00007fff5fbfe218 r8: 0x0000000000000001 r9: 0x0000000100114280 r10: 0x00007fff803bd412 r11: 0xffffff80002e1680 r12: 0xffffffffffffffff r13: 0x00007fff5fbfe330 r14: 0x00007fff5fbfe33b r15: 0x00007fff7009bec0 rip: 0x00007fff803c13d6 rfl: 0x0000000000000202 cr2: 0x000000010004c000 Any thoughts or suggestions as to resolving this?

  • ISA 2004 - banned site rule causes slow internet

    - by Holian
    Hi Gods, we have a Windows Server 2003 machine with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules: order name action protocols from/listener to condition: 1. traffic ALLOW all outbound all networks all networks all users; 2. FTP ALLOW FTP Server EXTERNAL/INTERNAL/Local host 10.1.1.1. We have to ban a few websites (like Facebook, YouTube, etc.), so we made a new rule: 0. banned DENY HTTP internal denied pages all users. In the denied pages we have the *.facebook.com domain set. After we enable this rule, the entire internet slows down. The banning rule itself works well, redirecting to an internal site, but the other sites... If I open a page it normally takes 3-10 seconds to load, but with this rule enabled it takes 2-4 minutes. In the monitoring / logging menu we get a few FAILED CONNECTION ATTEMPT entries like: Log type: Web Proxy (Forward) Status: 304 Not Modified Rule: All local traffic Source: Internal ( 10.1.1.1:0 ) Destination: External ( 172.24.28.22:3128 ) Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg Filter information: Req ID: 17270b72 Protocol: http User: anonymous Additional information Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072... Object source: Verified Cache Processing time: 9047 Cache info: 0x18801002 MIME type: -
    In the event log we get a few entries like: "The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d", and the same message for 127.0.0.1 port 80. If I type: netstat -o -n -a | findstr 0.0:80 I get: tcp 0.0.0.0:80 0.0.0.0:0 LISTEN 4 udp 0.0.0.0:8031 *.* 2780 udp 0.0.0.0:8082 *.* 2780. Some months ago we installed XAMPP, but now we only use MySQL; the Apache service is stopped. In the XAMPP port check menu I see: Service Port Status: Apache (http) 80 Process: System. Maybe this is the problem? I don't know what I should do now... Thank you, folks.

  • Why can't I connect to remote Microsoft SQL Server through SSH tunnel?

    - by Alexander
    I have at home a D-Link DIR-615 C1 router with DD-WRT. I set up the SSH server on the router, and log on through an SSH2-RSA passphrase-protected key. That router is the gateway between the local network and the internet. One of the computers on that network has Microsoft SQL Server 2008 installed, with TCP/IP protocol enabled through port 1433. I've set up port forwarding on the router, so that remote connections are possible and are, in fact, working (some developers log on remotely without problems). I am part of another network, that has internet access through a proxy server, which only has ports 80 and 443 opened. I can't connect to that MSSQL server on that remote server because 1433 port is closed on this network. I connected (using Putty) through 443 port to my router's SSH server, and set up 2 tunnels. One is for RDP (3389), and it's working. The other is for 1433 port, to connect to the server. I can't connect through the SSH tunnel to the MS SQL Server, neither through telnet, or through GUI clients. Am I missing something? Additional details: on connect, I get this error from SQL Server Management Studio: TITLE: Connect to Server Cannot connect to localhost:14330. ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 3) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=3&LinkId=20476 BUTTONS: OK The tunnel is configured like this: L14330 192.168.0.103:1433 192.168.0.103 is the permanent address of the SQL Server on the LAN. I also successfully forwarded TCP traffic of 3389 port to that IP, so tunneling is working to that IP address. When connecting without tunnel, through Microsoft SQL Server Management Studio, using the same method the connection establishes. Too bad my proxy doesn't allow 1433 port traffic, I wouldn't have this headache.
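
    One detail that may be the whole problem: the error text shows the connection target as "localhost:14330", but SQL Server Management Studio separates host and port with a comma, not a colon, so "localhost:14330" is treated as a server name and the client falls back to Named Pipes - which matches the "Named Pipes Provider, error: 40" message. A sketch of the tunnel plus the connection target to try (user and router names are placeholders):

        # On the machine behind the proxy: SSH to the router over 443 and forward a local port to the SQL box
        ssh -p 443 -L 14330:192.168.0.103:1433 user@router.example.org

        # Then connect to "tcp:localhost,14330" in SSMS (note the comma), or test with sqlcmd:
        sqlcmd -S tcp:localhost,14330 -U sa -P '...'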

  • EMC/Legato/Networker Failed to recover files : Cross Platform Recovery not supported.

    - by marc.riera
    Software used for backup: EMC / Legato NetWorker. Legato server: Windows. Legato clients: same hardware (2 years ago Fedora, now Ubuntu). We are trying to recover from an old client which is no longer available. So this is the thing: on 07/20/2008 we backed up a Samba server (Fedora) to tape, setting 1 year as the browse policy and retention policy. Now this tape is recyclable. We took down the DNS name and deleted the Legato client configuration. That Legato client was reinstalled and is doing other stuff on Ubuntu 10.04, with a different name but the same IP. Now, 2 years and some months later, we need to recover a folder from the 2008 backup of the Fedora Samba server. First thing: Legato does not show the client name because the config was deleted, so we create it again. We set the old DNS name back, pointing at the same IP where the old server was (same MAC address ;) ), and created a new 'old client configuration' pointing to the new server (different Legato IP for the client, I suppose). The ssid holding the needed folder is on 2 tapes, 20 and 22; the index for that backup is on tape 21. We put these tapes in the jukebox (IBM T4000) -- not important for the issue. All three tapes have passed their browse and retention times, so they are recyclable. We get the clone ID from the ssid with the following command: mminfo -avot -q "ssid=<ssid>" -r cloneid. We set the tapes to not recyclable: nsrmm -S <ssid>/<cloneid> -o notrecyclable. We change the retention for the tapes to a future date: nsrmm -S <ssid> -e 01/20/2011. We check that the dates are correct: mminfo -avV -q "ssid=<ssid>" -r ssbrowse(26),ssretent(26),savetime. So far it's OK. We close the terminal and restart the server, just to be sure. Finally, we recover the index for that ssid where the folder should be: nsrck -L7 -t "07/20/2008" oldservername.domain.org. Then we open NetWorker User, select the server, select the old client as source, select the new client as destination. And this is what I get (imgur image of output -- http://i.imgur.com/1nOr8.png). Should I understand that I need to install whatever operating system was running on the old "Linux server" / "NetWorker client" to be able to restore 26 MB of files? Thanks.

    Read the article

  • Preventing 'Reply-All' to Exchange Distribution Groups

    - by Larold
    This is another question in a short series regarding a challenging Exchange project my co-workers have been asked to implement. (I'm helping even though I'm primarily a Unix guy, because I volunteered to learn PowerShell and implement as much of the project in code as I could.)

    Background: we have been asked to create many distribution groups, about 500+. These groups will contain two types of members. (Apologies if I get these terms wrong.) One type is internal AD users, and the other is external users for whom I create Mail Contact entries. We have been asked to make it so that a "Reply All" is not possible to any message sent to these groups. I don't believe that can be enforced 100%, for the following reasons. My question is: is my reasoning sound? If not, please feel free to educate me on whether and how this can properly be implemented. Thanks!

    My reasoning on why it's impossible to prevent 100% of potential reply-all actions (I admit the two points are somewhat similar):

    1. An internal AD user puts the DL in their To: field, then clicks the '+' to expand the group. The group contains two external mail contacts. The message is sent to everyone, including those external contacts. External user #1 decides to reply-all, and his mail goes to, at least, external user #2, which wouldn't even involve our Exchange mail relays.

    2. An internal AD user places the DL in their Outlook To: field, then clicks the '+' button to expand the DL. They fire off an email to everyone that was in the group, but the individual addresses are now listed in the To: field. Because the message went to multiple recipients in the To: field, the addresses have been "exposed", and anyone is free to reply-all; the replies simply go to everyone in the To: field.

    Even if we try to set a Reply-To: field for all of these DLs, external mail clients are not obligated to abide by it, or to force their users to abide by it.

    Are my two points above valid? Am I correct to tell our leadership "It is not possible to prevent 100% of the cases where someone will want to Reply-All to these groups UNLESS we train the users sending emails to these groups to use the Bcc: field at all times"? I am dying for any insight or parts of the equation I'm not seeing clearly. Thank you!!!

    Read the article

  • Suggestions for splitting server roles amongst Hyper-V virtual servers / RAID6 or RAID10? / AppAssure

    - by Anon
    We have two Hyper-V hosts at present, running one virtual server that was converted from a physical box running all roles. My plan is to split the roles over various virtual machines, upgrading to the latest software versions as I go, and use the backup server as a standby in case the main server fails. The AppAssure backup software has a feature called Virtual Standby, so the VHDs can be ready to fire up on the backup server if necessary. Off-site backups will be done via external USB drive for now.

    I'm seeking input/suggestions on how I'm planning to split the roles amongst the virtual servers, and on how to set up the storage on the servers. We do not have any NAS or SAN, and no budget for one. What would the best RAID level be? I'm thinking either RAID6 (which is what we currently use), although I'm concerned about the write speeds, or RAID10, but then I worry that I can only lose one drive (from the same mirror) as opposed to any two with RAID6. I realise I have a hot swap for this, but what if a further drive fails during a rebuild? Is the write penalty of RAID6 worth the extra reliability over RAID10, or will it be too slow for all the roles I am planning, making RAID10 my only real option? The reason the redundancy matters is that I am the only technician and I'm not always on-site.

    Options I've considered:
    1) 5 drives in a RAID6 set, 200GB for the host OS, the rest for VM storage; 1 drive for hot swap -- this is how it is currently set up
    2) 4 drives in a RAID10 set, 200GB for the host OS, the rest for VM storage; 2 drives for hot swap
    3) 4 drives in a RAID10 set for VM storage, 2 drives in a RAID1 set for the host OS; no drives for hot swap -- while this is probably the best option with the number of drives I have, I don't like the idea of having no hot swap
    4) 3 drives in a RAID6 set for VM storage, 2 drives in a RAID1 set for the host OS; 1 drive for hot swap

    All options give us enough storage capacity for our files, etc. We don't have any budget for extra drives or extra hot-swap HD chassis for the servers. We have about 70 clients and about 150 users.

    MAIN SERVER
    Intel Xeon 5520 @ 2.27 GHz (2 processors), 16GB RAM
    6 x 1TB Seagate Barracuda ES.2 Enterprise SATA drives, Intel SRCSATAWB RAID controller
    Virtual machine workload using Hyper-V on Windows Server 2008 R2:
    DC01 - Active Directory Domain Controller / DNS server / Global Catalog - 1GB RAM
    DC02 - Active Directory Domain Controller / DNS server / Global Catalog - 1GB RAM
    Member Server - DHCP server, file server, print server - 1GB RAM
    SCCM Member Server - 4GB RAM
    Third-Party Software Member Server - A/V server, ticketing software, etc. - 4GB RAM
    Exchange 2007 - 4GB RAM - however we are probably migrating to a hosted solution, which would free up resources

    BACKUP SERVER
    Intel Xeon E5410 @ 2.33GHz (2 processors), 16GB RAM
    6 x 2TB WD RE4 SATA drives, Intel SRCSASRB RAID controller
    Virtual machine workload using Hyper-V on Windows Server 2008 R2:
    AppAssure backup software - 8GB RAM

    Read the article

  • Webserver Responses Hanging

    - by drscroogemcduck
    From some networks, requests for certain images on our webserver are very flaky. I've looked at tcpdumps on both sides: the server sends back part of the file and the client ACKs the TCP segment, but the server never receives the ACK.

    The server's view:
    41 19.941136 212.169.34.114 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
    42 19.941136 209.20.73.85 212.169.34.114 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
    46 20.041142 212.169.34.114 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
    47 20.045142 212.169.34.114 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
    48 20.045142 209.20.73.85 212.169.34.114 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
    49 20.045142 209.20.73.85 212.169.34.114 TCP [TCP segment of a reassembled PDU] (part of the image content, 2720 bytes; I assume it is reassembled by tcpdump and fragmented on the wire)
    ** The server never receives the ACK sent in frame 282 and will eventually resend the TCP segment. **

    The client's view:
    274 26.161773 10.0.16.67 209.20.73.85 TCP 52456 > http [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=2
    276 26.262867 209.20.73.85 10.0.16.67 TCP http > 52456 [SYN, ACK] Seq=0 Ack=1 Win=5440 Len=0 MSS=1360
    277 26.263255 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=1 Ack=1 Win=65280 Len=0
    278 26.265193 10.0.16.67 209.20.73.85 HTTP GET /map/map/s+74-WBkWk0aR28Yy-YjXA== HTTP/1.1
    279 26.365562 209.20.73.85 10.0.16.67 TCP http > 52456 [ACK] Seq=1 Ack=522 Win=6432 Len=0
    280 26.368002 209.20.73.85 10.0.16.67 TCP [TCP segment of a reassembled PDU] (part of the image content, only 1400 bytes)
    282 26.571380 10.0.16.67 209.20.73.85 TCP 52456 > http [ACK] Seq=522 Ack=1361 Win=65280 Len=0

    The network we are having trouble with is NATed. Is there any kind of explanation for this weirdness?
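    As a side note, a minimal capture sketch for isolating a single flaky flow like the one above; the interface and output file names are placeholders, and the IP is the client address seen in the server-side trace:

        # Capture only the affected client's HTTP traffic on the server, with full
        # packet bodies (-s 0) so payload and TCP options remain visible in Wireshark
        tcpdump -i eth0 -s 0 -w flaky-image-requests.pcap 'tcp port 80 and host 212.169.34.114'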

    Read the article

< Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >