Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.


  • Header set Access-Control-Allow-Origin not working with mod_rewrite + mod_jk

    - by tharant
    My first question here on SF, so please forgive me if I manage to bork the post. :) Anyway, I'm using mod_rewrite on one of my machines with a simple rule that redirects to a webapp on another machine. I'm also setting the 'Access-Control-Allow-Origin' header on both machines. The problem is that when I hit the rewrite rule, I lose the 'Access-Control-Allow-Origin' header. Here's the Apache config for the first machine:

        NameVirtualHost 10.0.0.2:80
        <VirtualHost 10.0.0.2:80>
          DocumentRoot /var/www/host.example.com
          ServerName host.example.com
          JkMount /webapp/* jkworker
          Header set Access-Control-Allow-Origin "*"
          RewriteEngine on
          RewriteRule ^/otherhost http://otherhost.example.com/webapp [R,L]
        </VirtualHost>

    And here's the Apache config for the second:

        NameVirtualHost 10.0.1.2:80
        <VirtualHost 10.0.1.2:80>
          DocumentRoot /var/www/otherhost.example.com
          ServerName otherhost.example.com
          JkMount /webapp/* jkworker
          Header set Access-Control-Allow-Origin "*"
        </VirtualHost>

    When I hit host.example.com, we see that the header is set:

        $ curl -i http://host.example.com/
        HTTP/1.1 302 Moved Temporarily
        Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
        Content-Length: 0
        Access-Control-Allow-Origin: *
        Content-Type: text/html;charset=ISO-8859-1

    And when I hit otherhost.example.com, we see that it too sets the header:

        $ curl -i http://otherhost.example.com
        HTTP/1.1 200 OK
        Server: Apache/2.0.46 (Red Hat)
        Location: http://otherhost.example.com/index.htm
        Content-Length: 0
        Access-Control-Allow-Origin: *
        Content-Type: text/html;charset=UTF-8

    But when I try to hit the rewrite rule at host.example.com/otherhost, we get no love:

        $ curl -i http://host.example.com/otherhost/
        HTTP/1.1 302 Found
        Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
        Location: http://otherhost.example.com/
        Content-Length: 0
        Content-Type: text/html; charset=iso-8859-1

    Can anybody point out what I'm doing wrong here? Could mod_jk be part of the problem?
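    One hypothesis worth testing (an editor's assumption, not confirmed in the post): the redirect that keeps the header comes from the backend via mod_jk, while the one that loses it is generated by Apache itself from the RewriteRule, and in Apache 2.2 `Header set` by default only edits the header table used for normal responses, not the one used for Apache-generated redirects and errors. The `always` condition targets that second table:

        # Sketch: also attach the CORS header to Apache-generated redirect/error responses
        Header always set Access-Control-Allow-Origin "*"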


  • How do I resolve the message "Validating WSFC quorum vote configuration - Action Required."

    - by Rob Boek
    I have a three-node AlwaysOn Availability Group on a three-node WSFC using node majority. Two nodes are set up as synchronous with automatic failover; the third is set up as asynchronous with manual failover. When I try to fail over using the GUI, I get a warning as shown in the screenshot. There is no warning or error if I fail over with T-SQL. Adding a file share to the quorum doesn't help. The only way I can resolve the warning is to remove the asynchronous SQL instance from the third node (it remains part of the WSFC). Either way, the AlwaysOn dashboard says quorum is OK. Am I missing something? Is this a bug in the GUI that I should just ignore? Clicking "Action Required" gives the following error:
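    For reference, the T-SQL path that produces no warning is just this, run on the replica you are failing over to (the availability group name is a placeholder):

        -- Assumed AG name; execute on the target secondary replica
        ALTER AVAILABILITY GROUP [MyAvailabilityGroup] FAILOVER;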


  • How to Configure Windows Machine to Allow File Sharing with DNS Alias

    - by Michael Ferrante
    I have not seen a single article posted anywhere online that brings together all the settings one would need to make this work properly on Windows, so I thought I would post it here. To facilitate failover schemes, a common technique is to use DNS CNAME records (DNS aliases) for different machine roles. Then, instead of changing the Windows computer name of the actual machine, one can switch a DNS record to point to a new host. This can work on Microsoft Windows machines, but to make it work with file sharing, the following configuration steps need to be taken.

    Outline:
    1. The Problem
    2. The Solution
       - Allowing other machines to use file sharing via the DNS alias (DisableStrictNameChecking)
       - Allowing the server machine to use file sharing with itself via the DNS alias (BackConnectionHostNames)
       - Providing browse capabilities for multiple NetBIOS names (OptionalNames)
       - Registering the Kerberos service principal names (SPNs) for other Windows functions like printing (setspn)
    3. References

    1. The Problem

    On Windows machines, file sharing works via the computer name, with or without full qualification, or by the IP address. By default, however, file sharing will not work with arbitrary DNS aliases. To enable file sharing and other Windows services to work with DNS aliases, you must make the registry changes detailed below and reboot the machine.

    2. The Solution

    Allowing other machines to use file sharing via the DNS alias (DisableStrictNameChecking): This change alone will allow other machines on the network to connect to the machine using any arbitrary hostname. (It will not allow a machine to connect to itself via a hostname; see BackConnectionHostNames below.) Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value DisableStrictNameChecking of type DWORD set to 1.

    Allowing the server machine to use file sharing with itself via the DNS alias (BackConnectionHostNames): This change is necessary for a DNS alias to work with file sharing from a machine to itself. It creates the Local Security Authority host names that can be referenced in an NTLM authentication request. To do this, add a new Multi-String Value named BackConnectionHostNames to the registry subkey HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0. In the Value data box, type the CNAME (DNS alias) that is used for the local shares on the computer, and then click OK. Note: type each host name on a separate line.

    Providing browse capabilities for multiple NetBIOS names (OptionalNames): This allows the network alias to appear in the network browse list. Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value OptionalNames of type Multi-String. Add a newline-delimited list of names that should be registered under the NetBIOS browse entries. Names should match NetBIOS conventions (i.e. not the FQDN, just the hostname).

    Registering the Kerberos service principal names (SPNs) for other Windows functions like printing (setspn): Note that you should not need to do this for basic functions to work; it is documented here for completeness. We had one situation in which the DNS alias was not working because an old SPN record was interfering, so if the other steps aren't working, check whether there are any stray SPN records. You must register the Kerberos service principal names (SPNs), the host name, and the fully qualified domain name (FQDN) for all the new DNS alias (CNAME) records. If you do not, a Kerberos ticket request for a DNS alias (CNAME) record may fail and return the error code KDC_ERR_S_SPRINCIPAL_UNKNOWN. To view the Kerberos SPNs for the new DNS alias records, use the Setspn command-line tool (setspn.exe). The Setspn tool is included in Windows Server 2003 Support Tools, which you can install from the Support\Tools folder of the Windows Server 2003 startup disk. To list all records for a computer name:

        setspn -L computername

    To register the SPNs for the DNS alias (CNAME) records, use the Setspn tool with the following syntax:

        setspn -A host/your_ALIAS_name computername
        setspn -A host/your_ALIAS_name.company.com computername

    3. References

    All the Microsoft references can be found via http://support.microsoft.com/kb/

    - KB281308: Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name. Covers the basics of making file sharing work properly with DNS alias records from other computers to the server computer.
    - KB926642: Error message when you try to access a server locally by using its FQDN or its CNAME alias after you install Windows Server 2003 Service Pack 1: "Access denied" or "No network provider accepted the given network path". Covers how to make the DNS alias work with file sharing from the file server itself.
    - KB870911: How to consolidate print servers by using DNS alias (CNAME) records in Windows Server 2003 and in Windows 2000 Server. Covers more complex scenarios in which records in Active Directory may need to be updated for certain services and browsing to work properly, and how to register the Kerberos service principal names (SPNs).
    - KB829885: Distributed File System update to support consolidation roots in Windows Server 2003. Covers even more complex scenarios with DFS (discusses OptionalNames).
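    For convenience, here is a sketch of the same registry changes as one-liners; the alias names are placeholders, and a reboot is still required afterwards:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1
        rem In a REG_MULTI_SZ value, multiple entries are separated with \0
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "myalias\0myalias.company.com"
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v OptionalNames /t REG_MULTI_SZ /d "MYALIAS"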


  • Remote Desktop Connection - Connection Failed

    - by NLV
    Let me explain the problem. My system is connected to a network and previously had XP installed. Recently I formatted the system, installed Windows Server 2003, and added the machine to the network. Everything else is working fine: mapping network drives, pinging other machines, etc. But I have the following problems:

    - I'm not able to make a Remote Desktop connection to another system on the network.
    - Some systems on the network are able to Remote Desktop to my machine, but not all.
    - If I host any web service on my system, I'm not able to connect to it from any other machine on the network.

    I've already configured Remote Desktop to accept connections. Any ideas? NLV


  • VMware Server host agent service won't start

    - by Bimo Arioseno
    I'm using VMware Server 2.0 on Windows Server 2003 R2. Sometimes after restarting the host machine, the VMware Host Agent service won't start due to an error. These are the error messages from Event Viewer:

        [Service Control Manager] Timeout (30000 milliseconds) waiting for the VMware Host Agent service to connect.
        [Service Control Manager] The VMware Host Agent service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.

    I've set the service to automatically restart after subsequent failures using services.msc (with a 10 min delay), but it still won't start. Only starting the service manually seems to work. Has anyone experienced this before? What workarounds or fixes are there?
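    One generic workaround for this class of "Timeout (30000 milliseconds)" startup failures (a general Windows tweak, not something specific to VMware) is to lengthen the Service Control Manager's startup timeout via the registry and reboot:

        rem Raise the SCM start timeout to 60 seconds (value is in milliseconds)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 60000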


  • FreeBSD 8.1 64bit logrotate - ELF interpreter /libexec/ld-elf-so.1 not found

    - by Richard Knop
    I am trying to get logrotate running on a FreeBSD 8.1 virtual machine. I installed logrotate with pkg_add, created the logrotate config file, and also ran:

        mkdir /var/lib/
        touch /var/lib/logrotate.status

    Now when I do:

        /usr/local/sbin/logrotate -d /usr/local/etc/logrotate.conf

    I get this error:

        ELF interpreter /libexec/ld-elf-so.1 not found
        Abort

    The file ld-elf.so.1 exists:

        locate ld-elf.so.1
        /libexec/ld-elf.so.1
        /usr/libexec/ld-elf.so.1
        /usr/share/man/man1/ld-elf.so.1.1.gz
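    A diagnostic worth running (an editor's suggestion, not from the original post): check which interpreter the binary actually requests. If the package was built for a different ABI than the host (e.g. an i386 binary on an amd64 system), the requested loader path won't match what's installed:

        file /usr/local/sbin/logrotate
        readelf -l /usr/local/sbin/logrotate | grep interpreter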


  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like:

        Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480

    A few more are dropped with an invalid LogFormat:

        Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04: 28:22 root root localhost 127.0.0.1 SMTP - 14755

    My conf:

        LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd"

    I believe this matches the log format (and besides, it is the log format I've seen everywhere for awstats mail parsing). It is also the same entry format as all the other entries in the mail log. Whatever is left is dropped too:

        Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152

    I added SkipHosts="" to the .conf file, but to no avail. I feel like awstats really has some personal quarrel with me today.


  • IIS7 dynamic_compression_not_success Reason 12

    - by Peter Oehlert
    So, I'm a bit of an IIS7 n00b, but I've used most of the old IIS systems going back to 3. I'm trying to turn on dynamic compression and it's working, mostly. It doesn't work for my ADO.NET Data Service (Astoria) requests, batched or not. I found the FREB tracing, which was really helpful. What I come up with for unbatched requests is that it returns reason code 12, NO_MATCHING_CONTENT_TYPE. OK, so I don't have the matching MIME type specified; that's easy. Except this is what I have in my web.config (which I think is correct, but maybe not):

        <httpCompression dynamicCompressionDisableCpuUsage="100"
                         dynamicCompressionEnableCpuUsage="100"
                         noCompressionForHttp10="false"
                         noCompressionForProxies="false"
                         noCompressionForRange="false"
                         sendCacheHeaders="true"
                         staticCompressionDisableCpuUsage="100"
                         staticCompressionEnableCpuUsage="100">
          <dynamicTypes>
            <clear/>
            <add mimeType="*/*" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <clear/>
            <add mimeType="*/*" enabled="true" />
          </staticTypes>
        </httpCompression>
        <urlCompression doDynamicCompression="true" doStaticCompression="true" dynamicCompressionBeforeCache="false" />

    Now I think that this means it should compress any request that includes the Accept-Encoding: gzip header. I'd love to know what others might think here. My Fiddler trace:

        GET /SecurityDataService.svc/GetCurrentAccount HTTP/1.1
        Accept-Charset: UTF-8
        Accept-Language: en-us
        dataserviceversion: 1.0;Silverlight
        Accept: application/atom+xml,application/xml
        maxdataserviceversion: 1.0;Silverlight
        Referer: http://sdev03/apptestpage.aspx
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; InfoPath.2; .NET CLR 3.0.30729; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)
        Host: sdev03
        Connection: Keep-Alive
        Cookie: .ASPXAUTH=<snip>

        HTTP/1.1 200 OK
        Cache-Control: no-cache
        Content-Type: application/atom+xml;charset=utf-8
        Server: Microsoft-IIS/7.0
        DataServiceVersion: 1.0;
        X-AspNet-Version: 2.0.50727
        X-Powered-By: ASP.NET
        Date: Mon, 22 Mar 2010 22:29:06 GMT
        Content-Length: 2726

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        *** <snip> removed ***
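    One hypothesis worth ruling out (an editor's guess, not confirmed here): IIS 7 has been reported to match compression types against the full Content-Type string, including the charset suffix, so a response of application/atom+xml;charset=utf-8 may slip past even a */* entry. Adding the exact string as its own entry would test that:

        <dynamicTypes>
          <clear/>
          <!-- Hypothetical fix: match the exact Content-Type the data service emits -->
          <add mimeType="application/atom+xml;charset=utf-8" enabled="true" />
          <add mimeType="*/*" enabled="true" />
        </dynamicTypes>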


  • Disable static content caching in IIS 7

    - by Lee Richardson
    I'm a developer having what should be a relatively simple problem in IIS 7 on Windows Server 2008 R2. The problem is that IIS 7 is overzealously caching all static content on the server. It caches all .html and .js content and does not notice when the content changes on disk unless I run iisreset. I've tried the following:

    - Deleting the local cache in my browser (I'm 99% positive this is a server-side caching issue)
    - In IIS Admin, under Output Caching, adding an .html extension and unchecking "User-mode caching" and "Kernel-mode caching"
    - In IIS Admin, under Output Caching, adding an .html extension, checking "User-mode caching", and selecting the "Prevent all caching" radio button
    - In IIS Admin, editing Output Cache feature settings and unchecking "Enable cache" and "Enable kernel cache"
    - Running "C:\Windows\System32\inetsrv\config\appcmd set config "SharePoint - 80" -section: system.webServer/caching -enabled:false"
    - Looking through applicationHost.config and disabling anything related to caching I could find

    Nothing seems to work. I'm getting very frustrated. Can anyone please help?
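    Two more things to try, offered as sketches rather than confirmed fixes. First, the web.config equivalent of disabling output caching for the site:

        <configuration>
          <system.webServer>
            <caching enabled="false" enableKernelCache="false" />
          </system.webServer>
        </configuration>

    Second, the appcmd line quoted above has a space after "-section:" and runs appcmd from inetsrv\config\, whereas appcmd.exe normally lives in inetsrv\ itself, so the command may never have applied. A corrected form:

        C:\Windows\System32\inetsrv\appcmd.exe set config "SharePoint - 80" -section:system.webServer/caching /enabled:false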


  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart my uWSGI instance with

        uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini

    and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder, symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a screen session as mentioned above, with the .ini file located in my app folder.

    My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;
            location / { try_files $uri @app; }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group
        #application's base folder
        base = /home/metheuser/webapps/ers_portal
        #python module to import
        app = run_web
        module = %(app)
        home = %(base)/ers_portal_venv
        pythonpath = %(base)
        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock
        #permissions for the socket file
        chmod-socket = 666
        #uwsgi variable only, does not relate to your flask application
        callable = app
        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title='Main')

        @app.route('/login', methods=['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title="Rundata Viewer",
                                   sts=sites, cn=column_names,
                                   data=data)  # dictionary of data

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uwsgi log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the "my_table" route prints into the log file, but not once it stops working. Any ideas?


  • Sign an OpenSSL .CSR with Microsoft Certificate Authority

    - by kce
    I'm in the process of building a Debian FreeRADIUS server that does 802.1X authentication for domain members. I would like to sign my RADIUS server's SSL certificate (used for EAP-TLS) and leverage the domain's existing PKI. The RADIUS server is joined to the domain via Samba and has a machine account, as displayed in Active Directory Users and Computers. The domain controller I'm trying to sign my RADIUS server's key against does not have IIS installed, so I can't use the preferred Certsrv web page to generate the certificate. The MMC tools won't work, as they can't access the certificate stores on the RADIUS server, because those don't exist. This leaves the certreq.exe utility. I'm generating my .CSR with the following command:

        openssl req -nodes -newkey rsa:1024 -keyout server.key -out server.csr

    The resulting .CSR:

        ******@mis-ke-lnx:~/G$ openssl req -text -noout -in mis-radius-lnx.csr
        Certificate Request:
            Data:
                Version: 0 (0x0)
                Subject: C=US, ST=Alaska, L=CITY, O=ORG, OU=DEPT, CN=ME/emailAddress=MYEMAIL
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit):
                            00:a8:b3:0d:4b:3f:fa:a4:5f:78:0c:24:24:23:ac:
                            cf:c5:28:af:af:a2:9b:07:23:67:4c:77:b5:e8:8a:
                            08:2e:c5:a3:37:e1:05:53:41:f3:4b:e1:56:44:d2:
                            27:c6:90:df:ae:3b:79:e4:20:c2:e4:d1:3e:22:df:
                            03:60:08:b7:f0:6b:39:4d:b4:5e:15:f7:1d:90:e8:
                            46:10:28:38:6a:62:c2:39:80:5a:92:73:37:85:37:
                            d3:3e:57:55:b8:93:a3:43:ac:2b:de:0f:f8:ab:44:
                            13:8e:48:29:d7:8d:ce:e2:1d:2a:b7:2b:9d:88:ea:
                            79:64:3f:9a:7b:90:13:87:63
                        Exponent: 65537 (0x10001)
                Attributes:
                    a0:00
            Signature Algorithm: sha1WithRSAEncryption
                35:57:3a:ec:82:fc:0a:8b:90:9a:11:6b:56:e7:a8:e4:91:df:
                73:1a:59:d6:5f:90:07:83:46:aa:55:54:1c:f9:28:3e:a6:42:
                48:0d:6b:da:58:e4:f5:7f:81:ee:e2:66:71:78:85:bd:7f:6d:
                02:b6:9c:32:ad:fa:1f:53:0a:b4:38:25:65:c2:e4:37:00:16:
                53:d2:da:f2:ad:cb:92:2b:58:15:f4:ea:02:1c:a3:1c:1f:59:
                4b:0f:6c:53:70:ef:47:60:b6:87:c7:2c:39:85:d8:54:84:a1:
                b4:67:f0:d3:32:f4:8e:b3:76:04:a8:65:48:58:ad:3a:d2:c9:
                3d:63

    I'm trying to submit my certificate using the following certreq.exe command:

        certreq -submit -attrib "CertificateTemplate:Machine" server.csr

    I receive the following error upon doing so:

        RequestId: 601
        Certificate not issued (Denied) Denied by Policy Module
        The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Certificate Request Processor: The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Denied by Policy Module

    My certificate authority has the following certificate templates available. If I try to submit via certreq.exe using "CertificiateTemplate:Computer" instead of "CertificateTemplate:Machine", I get an error reporting that "the requested certificate template is not supported by this CA." My google-foo has failed me so far in trying to understand this error... I feel like this should be a relatively simple task, as X.509 is X.509 and OpenSSL generates the .CSRs in the required PKCS#10 format. I can't be the only one out there trying to sign an OpenSSL-generated key on a Linux box with a Windows Certificate Authority, so how do I do this (preferably using the offline certreq.exe tool)?
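    For what it's worth, the "DNS name is unavailable... Subject Alternate name" denial is commonly worked around by supplying the SAN as a request attribute, which first requires the CA to accept user-supplied SANs. A sketch (the hostname is a placeholder; note that enabling EDITF_ATTRIBUTESUBJECTALTNAME2 has real security implications, since any requester can then assert any SAN):

        rem On the CA: allow SAN attributes in requests, then restart the service
        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc && net start certsvc

        rem Resubmit with an explicit SAN (\n separates attributes)
        certreq -submit -attrib "CertificateTemplate:Machine\nSAN:dns=radius.example.com" server.csr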


  • Repairing yum's repositories on RHEL5

    - by The Rook
    I am using RHEL5, and yum is missing many packages, such as Apache, PHP, and all the PHP libraries. I have added the RPMforge repository, but I am still missing these packages. This is an i686 machine and there might not be many i686 packages available; I think that if I force i386 packages I'll have serious problems. How do I make sure I have a large number of compatible packages on a RHEL5 system? I didn't install this system; is it normal for RHEL5 to have virtually no useful packages in yum? How do RHEL5 administrators use yum without introducing conflicts with currently installed packages? Should I ditch yum and use apt? Thanks!


  • php-cgi memory usage higher than php's memory limit

    - by Josh Nankin
    I'm running Apache with the worker MPM and PHP with FastCGI. The following are my MPM limits:

        StartServers         5
        MinSpareThreads      5
        MaxSpareThreads     10
        ThreadLimit         64
        ThreadsPerChild     10
        MaxClients          10
        MaxRequestsPerChild 2000

    I've also set up my php-cgi with the following:

        PHP_FCGI_CHILDREN=5
        PHP_FCGI_MAX_REQUESTS=500

    I'm noticing that my average php-cgi process is using around 200+ MB of RAM, even as soon as it is started. However, my PHP memory_limit is only 128M. How is this possible, and what can I do to lower the php-cgi memory consumption?
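    Part of the answer may simply be how the number is read (an editor's note, not from the thread): memory_limit caps the heap a single PHP script may allocate, while a process's reported size also counts the PHP binary, loaded extensions, and shared libraries, and with PHP_FCGI_CHILDREN=5 the forked children share much of that memory copy-on-write, so naive per-process figures overstate real usage. Comparing resident against virtual size per process is a quick sanity check:

        # Linux: resident (RSS) vs. virtual (VSZ) size of each php-cgi process, in KB
        ps -o pid,rss,vsz,cmd -C php-cgi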


  • Active RDP session over VPN getting disconnected

    - by Wandering Penguin
    I am having seemingly random disconnects of active RDP sessions (while I am actively typing or otherwise interacting with the desktop) when connected over the VPN. The "attempting to reconnect (1/20)" dialog pops up, proceeds all the way through 20, and then drops. Once the session drops, I can open a new session and connect again. This started happening about a week ago. The VPN connection is an IPsec VPN from a SonicWall NSA 2400. The NIC drivers are up to date. The VPN client is up to date. The firmware on the SonicWall is up to date (both the regular and early-release versions behave the same). I have attempted to connect over three ISPs, all with the same behavior. Two different workstations were used to test the VPN connection. The same behavior occurs when connecting to a domain workstation or server. If I am inside the firewall, I can connect to the same workstations and servers without the disconnect. The VPN connection has "enable fragmented packet handling" and "ignore DF (don't fragment) bit" set. Is there something I am missing in where I am looking for the problem?


  • Zscaler. Certs, cookies, and port 80 traffic

    - by 54's_lol
    So I work at HQ for a large company that shall remain nameless. We use Zscaler, and I had to roll out a 2048-bit cert per Zscaler's request. People around me at work don't understand the technology and think that the certs are what allow internet connectivity. From my understanding (and please chime in), the cookie located at

        C:\Users\$$$$$$4$$\AppData\Roaming\Macromedia\Flash Player\#SharedObjects\Q3JQJQJV\gateway.zscaler.net\zscaler.swf

    is what gets created when you provide your credentials the first time you use the browser. The certs are simply a way of inspecting SSL traffic, which Zscaler had no way of doing before; they are essentially using the classic MITM approach to parse your SSL traffic. Gmail is smart enough to recognize this, as you get a warning. My question is this: is there a product or service I can use to verify that my web browser at home (i.e. off the company network) isn't still getting routed through Zscaler's cloud? A tracert will work fine; it's the port 80 and 443 web traffic that Zscaler and my company are after. I would like to verify that when I'm off their premises, my web traffic uses only my ISP and the path to whatever content I'm searching for. Do the certs I'm pushing and the browser authentication do something behind the curtain that forces web traffic to be routed to Zscaler? I searched quite a bit and would very much like to know whether I'm ever free of company scrutiny. I do know Zscaler offers a service to force the scenario I'm asking about. Can I prove how my web traffic is getting routed? Thanks for any insight. I've been a fan for a long time and you guys' kung fu is very strong. :-)
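    One quick check (an editor's suggestion; Zscaler operates this page, but verify the URL for your deployment): Zscaler hosts a self-identification page that reports whether the request reached it through their cloud:

        curl -s http://ip.zscaler.com/

    If the response says the traffic is not traversing a Zscaler proxy, requests from that network path are going straight out via your ISP.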


  • https not working... binding set, certificate installed

    - by rksprst
    I've installed the certificate and set the HTTPS bindings. However, when I load the site over HTTPS, it does not load. I've looked at all the settings, but everything seems correct. I've restarted the site numerous times. The certificate is stored on the local computer under Personal > Certificates, and I have the private key for the certificate. The port (443) is open. If I try https://localhost on the server, the site loads, but with a domain error (i.e. the certificate is for thedomain.com, not localhost). But https://thedomain.com doesn't load. I really don't know why the HTTPS URL isn't loading... anyone have any ideas? Thanks!
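    Two diagnostics that usually narrow this down (an editor's suggestion, run on the server): confirm that something is listening on 443 on the expected IP, and confirm which certificate binding HTTP.sys is actually using:

        netstat -ano | findstr :443
        netsh http show sslcert

    If localhost works but the public name doesn't, the binding may be tied to 127.0.0.1 or a specific IP rather than the interface the domain resolves to, or an upstream firewall may be blocking 443 from outside.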


  • Multiple Users use Script to Access Remote Server via Passwordless SSH

    - by jinanwow
    I am currently setting up a Linux box that is tied into Active Directory. This box will allow users to SSH into it with their AD username and password to gather information (box A). I am trying to create a function in /etc/bash.bashrc so that all a user has to do is type "get_info", for example, and the function will SSH into a remote machine (box B), run a command, and output the information back to the user. I have generated an RSA key on box A, added it to box B's authorized_keys, and it works fine. The issue I am running into is: how do I set this up once for current users and for any new user who logs into box A? Is there a better approach than what I am currently doing? Essentially I just need to connect to the remote box, run a command, and output the information back to the user; that is it. How can I allow new users to connect via the script to the remote box without having to generate RSA keys for each of them? The get_info function will be supplied a value, e.g. 'get_info 012345', and returns the results.
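    One pattern that fits this (a sketch under assumed names, not the poster's setup): use a single system-wide key that every user on box A can read, and lock it down on box B with a forced command so the key can only ever run the one script. In /etc/bash.bashrc on box A:

        # Shared private key used only for this one query; path and user are placeholders
        get_info() {
            ssh -i /etc/ssh/get_info_key -o IdentitiesOnly=yes infouser@boxb "$@"
        }

    And in ~infouser/.ssh/authorized_keys on box B, restrict what that key may do:

        command="/usr/local/bin/gather_info.sh",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... get_info-shared-key

    With a forced command, sshd ignores whatever the client asks to run and executes gather_info.sh instead (the client's original command line is available to the script in $SSH_ORIGINAL_COMMAND), which makes a widely readable key far less dangerous.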


  • NullPointerException on jetty 7 startup with jetty deployment descriptor

    - by Draemon
    I'm getting the following error when I start Jetty:

        2010-03-01 12:30:19.328:WARN::Failed startup of context WebAppContext@15ddf5@15ddf5/webapp,null,/path/to/jetty-distribution-7.0.1.v20091125/webapps-plus/webapp.war

    With this command line:

        java -jar start.jar OPTIONS=All lib=/path/to/jetty-distribution-7.0.1.v20091125/lib/ext etc/jetty.xml etc/jetty-plus.xml /path/to/webapp/src/configuration/test.xml

    And test.xml contains:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
        <Configure class="org.eclipse.jetty.webapp.WebAppContext">
          <Set name="contextPath">/webapp</Set>
          <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps-plus/webapp.war</Set>
        </Configure>

    If I don't include test.xml on the command line, it works fine.


  • Removing duplicate images (deduplication) - calculating "overlap" of images

    - by jotango
    Hello, I have a ton of product images on our file system. Our code removes 100% identical images (or does not allow them to be uploaded). However, our sellers often upload item pictures which are very similar, but not exactly identical. They could have more whitespace, worse quality (compression), a different size, etc. Is there any way I can calculate the degree of overlap between two images, to flag candidates for deletion? Kind of like a Levenshtein distance between two images... Any pointers would be very cool. Thanks!
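    One standard pointer here is perceptual hashing: reduce each image to a tiny fingerprint that survives rescaling and recompression, then compare fingerprints by Hamming distance (the image analogue of Levenshtein distance). A minimal sketch in Python using Pillow; the threshold of 5 is a rule of thumb, not a universal constant:

        from PIL import Image

        def average_hash(path, hash_size=8):
            """Shrink to hash_size x hash_size grayscale, threshold on mean brightness."""
            img = Image.open(path).convert("L").resize((hash_size, hash_size))
            pixels = list(img.getdata())
            avg = sum(pixels) / float(len(pixels))
            return [1 if p > avg else 0 for p in pixels]

        def hamming(h1, h2):
            """Number of differing bits between two hashes."""
            return sum(b1 != b2 for b1, b2 in zip(h1, h2))

        # Near-identical images (resized, recompressed, padded) land within a few bits.
        if hamming(average_hash("a.jpg"), average_hash("b.jpg")) <= 5:
            print("probable duplicate")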


  • Office365 DirSync Active Directory Integration

    - by dean
    I am preparing to deploy Office 365 for my organization. We have an on-premises Active Directory domain controller (Windows Server 2012 R2). We would like to leverage our Active Directory for automatic user provisioning in Office 365 and for password synchronization, using the DirSync tool. Our Active Directory domain is example.pvt. Email is currently Rackspace Exchange, and email addresses follow the form [email protected]. The Active Directory user logon name follows the form firstinitiallastname. My questions are:

    1. What Active Directory attribute(s) can be used in provisioning the email address in Office 365?
    2. Is it possible to use the E-mail field in Active Directory to provision the email address in Office 365?
    3. Will the fact that our Active Directory domain has a different extension (.pvt vs. .com) cause a problem with our planned provisioning method?
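    On questions 1 and 2 (hedged; based on general DirSync behavior rather than this thread): directory sync reads the mail and proxyAddresses attributes to stamp email addresses in the cloud, while userPrincipalName becomes the sign-in name, so it is worth inspecting what those attributes currently hold before the first sync:

        # Inspect the attributes DirSync uses for a sample user (name is a placeholder)
        Get-ADUser -Identity jdoe -Properties mail, userPrincipalName, proxyAddresses |
            Select-Object sAMAccountName, mail, userPrincipalName, proxyAddresses

    On question 3: a .pvt UPN suffix cannot be verified in Office 365, which is why a routable suffix (e.g. the .com mail domain) is typically added via Active Directory Domains and Trusts and assigned to the synced users.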


  • Fixing Windows MBR without Vista Recovery CD

    - by Aditya Sehgal
    I had Windows Vista + Ubuntu running on my system. I deleted the Ubuntu partitions from Windows. However, when I start the system, GRUB throws up an Error 22 (missing partition) and does not let me boot into Windows. The CD-ROM drive on my laptop is fried, so I tried installing Ubuntu again using a USB install. However, Ubuntu 9.10 just hangs on the load screen and does nothing. I do not have a Windows Vista recovery CD (it was a recovery partition on my laptop). What options do I have? How do I fix this?
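    If any Linux live environment can be coaxed to boot from USB, one commonly cited way to restore a standard Windows MBR without Vista media (a sketch; it assumes the boot disk is /dev/sda, so double-check with fdisk -l first) is:

        sudo apt-get install lilo
        sudo lilo -M /dev/sda mbr

    This overwrites only the MBR boot code (removing GRUB), leaving the partition table and the Vista partition's boot sector alone.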


  • Migrating users and mailboxes from Postfix / Maildir to Postfix with MySQL backend

    - by Chrispy
    So I've got 60 or so users on a hand-rolled Postfix installation on OpenBSD, and I'd like to move their mailboxes to our new mail server running iRedMail (Postfix with a vmail/MySQL backend). Does anyone know of a good way to do this? Preferably a script I can run to keep syncing the users' mailboxes as MX records get updated. I presume one way (though I don't have all their passwords!) would be a command-line IMAP client that simulated the users copying their mail themselves, but I'm sure there must be a shell/PHP script to migrate users? Anyone got any bright ideas? Chris.
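    The usual tool for exactly this is imapsync, which copies mailboxes over IMAP and can be re-run to pick up only new messages while the MX change propagates. A sketch for one user (hostnames and credentials are placeholders; without the users' passwords, the source IMAP server would need to support some form of administrator/master-user login):

        imapsync --host1 mail.old.example.com --user1 jdoe --password1 'oldpass' \
                 --host2 mail.new.example.com --user2 jdoe --password2 'newpass'

    Wrapping that in a loop over a user list gives the repeatable sync script being asked for.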


  • Can I configure a Windows NDES server to use the Triple DES (3DES) algorithm for PKCS#7 answer encryption?

    - by O.Shevchenko
    I am running an SCEP client to enroll certificates against an NDES server. If OpenSSL is not in FIPS mode, everything works fine. In FIPS mode I get the following error:

        pkcs7_unwrap():pkcs7.c:708] error decrypting inner PKCS#7
        139968442623728:error:060A60A3:digital envelope routines:FIPS_CIPHERINIT:disabled for fips:fips_enc.c:142:
        139968442623728:error:21072077:PKCS7 routines:PKCS7_decrypt:decrypt error:pk7_smime.c:557:

    That's because the NDES server uses the DES algorithm to encrypt the returned PKCS#7 packet. I used the following debug code:

        /* Copy enveloped data from PKCS#7 */
        bytes = BIO_read(pkcs7bio, buffer, sizeof(buffer));
        BIO_write(outbio, buffer, bytes);
        p7enc = d2i_PKCS7_bio(outbio, NULL);

        /* Get encryption PKCS#7 algorithm */
        enc_alg = p7enc->d.enveloped->enc_data->algorithm;
        evp_cipher = EVP_get_cipherbyobj(enc_alg->algorithm);
        printf("evp_cipher->nid = %d\n", evp_cipher->nid);

    The last line always prints:

        evp_cipher->nid = 31

    which is defined in openssl-1.0.1c/include/openssl/objects.h:

        #define SN_des_cbc "DES-CBC"
        #define LN_des_cbc "des-cbc"
        #define NID_des_cbc 31

    I use the 3DES algorithm for PKCS#7 request encryption in my code (pscep.enc_alg = (EVP_CIPHER *)EVP_des_ede3_cbc()) and the NDES server accepts these requests, but it always returns an answer encrypted with DES. Can I configure a Windows NDES server to use the Triple DES (3DES) algorithm for PKCS#7 answer encryption?

