Search Results

Search found 20890 results on 836 pages for 'self reference'.

Page 462/836

  • How to write a ProxyPass rule to go from HTTPS to HTTP in IIRF

    - by Keith Nicholas
    I have a server running a web app that serves HTTP itself. I want to use IIS6 (on the same server) to provide an HTTPS layer in front of this web app. From what I can tell, a reverse proxy will let me do this, and IIRF seems like the tool for the job. There are no domain names involved... it's all IP numbers. So I think I want https://<ipnumber>:5001 to send all its requests to the same server on a different port over HTTP (not exposed to the net), i.e. http://<ipnumber>:5000, but I'm not sure how to go about it with IIRF or how to write the rules. I think I need to make a virtual web app on 5001 using HTTPS and then add a rules file?
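
    For reference, a rough sketch of the kind of rule this takes. The ProxyPass/ProxyPassReverse directive names follow the IIRF 2.x documentation, and the loopback address and ports are assumptions taken from the question; none of this is tested against the asker's setup. The HTTPS site is created in IIS on 5001 with the filter installed, and its Iirf.ini forwards everything to the HTTP port:
      # Iirf.ini for the IIS site bound to https://<ipnumber>:5001
      # forward everything to the app's local HTTP port (assumed to be 5000)
      ProxyPass         ^/(.*)$   http://127.0.0.1:5000/$1
      ProxyPassReverse  /         http://127.0.0.1:5000/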

    Read the article

  • md5sum of large files gives different results sometimes

    - by Emanuele
    I have an AMD quad core, 8 GB RAM, 1 SSD with EXT2 (2 months old) and 2 HDDs with EXT4, approximately 1 year old. I'm using Ubuntu 10.04 x86-64, and when I compute the md5sum of large files (9 GB) I sometimes get values different from the one stored in a reference file. After switching the PC off and restarting, I get the expected result no matter how many times I repeat it, but this is random. I've turned on ECC (at the fastest possible settings) and the issue seems to be rarer, but I've run memtest86+ for 6+ hours without a glitch (and with ECC off!). Any idea? Should I update the BIOS of my motherboard (an Asus EVO-something... I don't remember it right now)? I've tried everything else apart from this, but genuinely don't know what to do anymore... Any suggestion is appreciated!
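
    One way to pin down whether the bad reads are repeatable (a small sketch; the file path is a placeholder) is to hash the same file several times, dropping the page cache between runs so every pass really re-reads the disk rather than RAM:
      #!/bin/bash
      # re-hash the same file repeatedly; differing lines point at the read path
      # (disk, controller, cabling or RAM) rather than at the file itself
      FILE=/path/to/large.file   # placeholder
      for i in 1 2 3 4 5; do
          sync
          echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # force a fresh read from disk
          md5sum "$FILE"
      done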

    Read the article

  • How to get an external 2.5 inch HDD enclosure recognised

    - by fireBand
    I am completely out of ideas. I have a 2.5 inch HDD from my old laptop and have tried 2 different HDD enclosures, but my desktop does not recognize it; it doesn't even show up in Disk Management or the BIOS. The HDD works fine if I use it in a 3.5 inch enclosure with external power. I have tried 3 different Y USB cables, and I've also got a 7 port self-powered 2A USB hub. I have updated my chipset drivers as well. The problem is with my desktop; the enclosures work fine with my new laptop. Anything else that I should try? Thanks in advance.

    Read the article

  • Best practice, or generally best way to set up web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server on which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP stack many times for local testing purposes, but never for actual production use. I was wondering what the best practices are for setting the server up, specifically with reference to access to the web root directory. A couple of the options I have seen: set up a single user account on the server for us both to use and point a virtual host somewhere in its home directory, e.g. /home/webdev/www; or set each of us up with a user account and grant permissions in some way to /var/www (what would be the best way? Set up a new group?). I want to get this right the first time, as there won't be any going back for a while once our first site is up and running. Appreciate any guidance in advance.
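
    For the second option, a minimal sketch of the usual group-based setup (the group name, user names and the /var/www path are placeholders, and the exact policy is a matter of taste):
      # create a shared group and add both users to it
      sudo groupadd webdev
      sudo usermod -aG webdev alice     # user names are placeholders
      sudo usermod -aG webdev bob

      # give the group write access to the web root; the setgid bit (the leading 2)
      # makes new files and directories inherit the webdev group
      sudo chown -R root:webdev /var/www
      sudo chmod -R 2775 /var/www
    On Debian, Apache itself runs as www-data and only needs read access, so only the two user accounts need write permission here.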

    Read the article

  • Equivalent of phpMyAdmin for MSSQL?

    - by Tedd Hansen
    Is there any web interface for administering MSSQL similar to phpMyAdmin (for MySQL)? I want a self-service setup where developers can create a database through a web interface and upload/download backups of the database without local access. I've considered phpMSAdmin, but it hasn't had a release since 2006, so I'm not sure it's worth the effort of setting it up. If there is something else (free or not-so-free), that would be great. My question is similar to this one posted 2 years ago, but no good web interface was found back then. SQL Web Data Administrator seems interesting, but it lacks a few features, most notably creating new databases (it also hasn't been updated since 2007).

    Read the article

  • Identifying a Windows service

    - by András
    On my Win7 machine, when I watch a movie in XBMC the desktop regains focus once per hour. The movie keeps playing, but now in the background, so I can only hear it. It is quite irritating. I noticed that the Windows Application log always contains two entries for that time: "The description for Event ID 0 from source Self-service Plug-in cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer." One entry is for starting and one for stopping 60 seconds later, and this keeps repeating, with successive starts 3600 seconds apart. How can I repair it? How can I find out which service it is?

    Read the article

  • SSL certs or intermediate for DMZ

    - by rex
    I've been tasked with deploying and managing load balancers covering internal servers and DMZ servers. I have no experience with this, and this is a first for my organization as well. Balancers are up, running, legit. Currently we are using a self-signed cert for Exchange/OWA. I know that we should have a cert signed by a CA, but the balancer has options for SSL cert or intermediate cert, and I'm unclear on the difference, or on which we need. We will be hosting Lync, Exchange and some custom apps in the DMZ. disclaimer: Apologies up front, I'm desktop support. I recently passed my Net+. It seems that has made me the network engineer in this organization.
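
    For what it's worth, on most load balancers the "SSL cert" slot takes the certificate issued for your own hostname (together with its private key), while the "intermediate" slot takes the issuing CA's chain certificate; a CA-signed setup typically installs both. Getting a CA-signed certificate starts from a private key and a CSR, roughly like this (the hostname is a placeholder):
      # generate a private key and a certificate signing request to send to a CA
      openssl req -new -newkey rsa:2048 -nodes \
          -keyout mail.example.com.key -out mail.example.com.csr \
          -subj "/CN=mail.example.com"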

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS, but curl is failing to verify the SSL cert.
      $ curl --verbose --capath ./certs/ --head https://example.com/
      * About to connect() to example.com port 443 (#0)
      *   Trying 1.1.1.1... connected
      * Connected to example.com (1.1.1.1) port 443 (#0)
      * successfully set certificate verify locations:
      *   CAfile: none
          CApath: ./certs/
      * SSLv3, TLS handshake, Client hello (1):
      * SSLv3, TLS handshake, Server hello (2):
      * SSLv3, TLS handshake, CERT (11):
      * SSLv3, TLS alert, Server hello (2):
      * SSL certificate problem, verify that the CA cert is OK. Details:
      error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
      * Closing connection #0
      curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
      error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
      More details here: http://curl.haxx.se/docs/sslcerts.html

      curl performs SSL certificate verification by default, using a "bundle"
      of Certificate Authority (CA) public keys (CA certs). If the default
      bundle file isn't adequate, you can specify an alternate file
      using the --cacert option.
      If this HTTPS server uses a certificate signed by a CA represented in
      the bundle, the certificate verification probably failed due to a
      problem with the certificate (it might be expired, or the name might
      not match the domain name in the URL).
      If you'd like to turn off curl's verification of the certificate, use
      the -k (or --insecure) option.
    I know about the -k option, but I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains:
      - A Verisign intermediate cert
      - Two self-signed certs
    The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly at the Verisign cert), curl is able to verify the SSL cert.
      $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/
      * About to connect() to example.com port 443 (#0)
      *   Trying 1.1.1.1... connected
      * Connected to example.com (1.1.1.1) port 443 (#0)
      * successfully set certificate verify locations:
      *   CAfile: ./certs/verisign-intermediate-ca.crt
          CApath: /etc/ssl/certs
      * SSLv3, TLS handshake, Client hello (1):
      * SSLv3, TLS handshake, Server hello (2):
      * SSLv3, TLS handshake, CERT (11):
      * SSLv3, TLS handshake, Server finished (14):
      * SSLv3, TLS handshake, Client key exchange (16):
      * SSLv3, TLS change cipher, Client hello (1):
      * SSLv3, TLS handshake, Finished (20):
      * SSLv3, TLS change cipher, Client hello (1):
      * SSLv3, TLS handshake, Finished (20):
      * SSL connection using RC4-SHA
      * Server certificate:
      *   subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com
      *   start date: 2011-04-17 00:00:00 GMT
      *   expire date: 2012-04-15 23:59:59 GMT
      *   common name: example.com (matched)
      *   issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3
      *   SSL certificate verify ok.
      > HEAD / HTTP/1.1
      > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
      > Host: example.com
      > Accept: */*
      >
      < HTTP/1.1 404 Not Found
      HTTP/1.1 404 Not Found
      < Cache-Control: must-revalidate,no-cache,no-store
      Cache-Control: must-revalidate,no-cache,no-store
      < Content-Type: text/html;charset=ISO-8859-1
      Content-Type: text/html;charset=ISO-8859-1
      < Content-Length: 1267
      Content-Length: 1267
      < Server: Jetty(7.2.2.v20101205)
      Server: Jetty(7.2.2.v20101205)
      <
      * Connection #0 to host example.com left intact
      * Closing connection #0
      * SSLv3, TLS alert, Client hello (1):
    In addition, if I try hitting one of the sites using a self-signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and that it is properly hashed. Finally, I am able to verify the SSL cert with openssl, using its -CApath option.
      $ openssl s_client -CApath ./certs/ -connect example.com:443
      CONNECTED(00000003)
      depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
      verify return:1
      depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
      verify return:1
      depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
      verify return:1
      depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
      verify return:1
      ---
      Certificate chain
       0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
         i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
      ---
      Server certificate
      -----BEGIN CERTIFICATE-----
      <cert removed>
      -----END CERTIFICATE-----
      subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
      issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 1563 bytes and written 435 bytes
      ---
      New, TLSv1/SSLv3, Cipher is RC4-SHA
      Server public key is 2048 bit
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol  : TLSv1
          Cipher    : RC4-SHA
          Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717
          Session-ID-ctx:
          Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2
          Key-Arg   : None
          Start Time: 1303258052
          Timeout   : 300 (sec)
          Verify return code: 0 (ok)
      ---
      QUIT
      DONE
    How can I get curl to verify this cert using the --capath option?
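
    One thing worth checking in a setup like this (a diagnostic sketch only; paths are placeholders, and this is not certain to be the cause here): the symlinks created by c_rehash have to match the subject-hash value computed by the same OpenSSL version that curl is actually linked against, otherwise CApath lookups silently miss the file.
      # which SSL library and version is curl built against?
      curl --version
      # the hash name the installed openssl expects for this CA cert
      openssl x509 -noout -subject_hash -in ./certs/verisign-intermediate-ca.crt
      # do the c_rehash symlinks in the directory actually use that value?
      ls -l ./certs/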

    Read the article

  • What does dd conv=sync,noerror do?

    - by dding
    So in what case does adding conv=sync,noerror make a difference when backing up an entire hard disk to an image file? Is conv=sync,noerror a requirement when doing forensic work? If so, why is that the case, with reference to Linux (Fedora)? Edit: OK, so if I run dd without conv=sync,noerror and dd hits a read error while reading a block (say the block size is 100M), does dd just skip that 100M block and read the next one without writing anything (conv=sync,noerror would write 100M of zeros to the output, so what happens in this case)? And is the hash of the original hard disk different from that of the output file when the copy is done without conv=sync,noerror, or only when a read error occurred?
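
    Roughly: noerror tells dd to keep going after a failed read instead of aborting, and sync pads each short or failed input block with zeros up to the input block size, so offsets in the image stay aligned with the disk. Without it, a bad block can shift everything after it and the image (and its hash) no longer lines up with the source. A typical invocation looks like this (a sketch; device and output paths are placeholders, and a smaller bs limits how much data one bad read zero-fills):
      sudo dd if=/dev/sdX of=/mnt/backup/disk.img bs=4096 conv=sync,noerror
      # compare afterwards; with no read errors the two hashes should normally match,
      # while any zero-filled regions will make them differ
      sudo md5sum /dev/sdX /mnt/backup/disk.img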

    Read the article

  • Why do we still have to use drive letters to identify file systems?

    - by Charles E. Grant
    A friend has run into a problem where they installed Windows 7 from an external drive, and the internal boot drive is now assigned to H:. Theoretically this shouldn't cause problems because there are programming interfaces for getting the drive letter of the system drive. In practice, though, there are quite a few programs that assume C: is the only possible location for the system directories, and they refuse to run with the system directories on H:. That's not Microsoft's fault, but it's a pain nonetheless. The general consensus seems to be that a re-install, setting the internal boot drive to C:, is the only way to fix these problems. UNIX-like systems display all file systems in a single unified directory tree and mostly seem to avoid problems like this. Is it possible to configure a Windows system without reference to drive letters, or does the importance of backwards compatibility mean that Windows will be working with drive letters from now until doomsday?

    Read the article

  • Feedback on available mid-to-enterprise level desktop backup solutions [closed]

    - by user85610
    I am involved in the creation of a new backup solution to replace our current Retrospect setup, which has become a significant time sink to administer. We have almost 200 desktop and some laptop clients, both Windows and OS X. We're only interested in products oriented around disk-to-disk, and they would need to integrate well with our current set of nine NAS devices as target storage. I'd just like some feedback from anyone out there, as it's sometimes difficult otherwise to find objective reviews of software at this level. Both data and time are important enough that we need a reliable solution which won't be prone to self-destruction as often as Retrospect. Bonus points for de-duplication, which might help squeeze more service time out of our NAS setup in terms of capacity. Currently considering Commvault and NetBackup. Many other products I've seen don't have an OS X client. Any thoughts?

    Read the article

  • How to enable Sqlite on Mac OS X Mavericks [migrated]

    - by sehummel
    I upgraded my Mac to OS X Mavericks last week. It appears to have taken away support for Sqlite -- not Sqlite3, but sqlite -- which I need for a website I work on. I went to Sqlite's website, but all I could find were older versions of Sqlite3. Where can I find a version of Sqlite? I've been through php.ini and can only find one reference to sqlite. In short, how do I get Sqlite support on my Mac? I have Sqlite3 enabled and the issue isn't going away. I uncommented pdo_sqlite in php.ini and I still have the issue.
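
    For what it's worth, the legacy sqlite (SQLite2) extension was removed from PHP as of 5.4, so a newer bundled PHP will only ever offer sqlite3 and pdo_sqlite. A quick way to see what the PHP you're running actually loads, and which ini file it reads (a diagnostic sketch; the ini path varies by install):
      php -m | grep -i sqlite      # modules compiled in or loaded
      php -i | grep -i sqlite      # sqlite/pdo_sqlite settings PHP sees
      php --ini                    # which php.ini is actually in use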

    Read the article

  • How to make PDF pages with U3D images print properly?

    - by David Thornley
    I'm creating some PDF files that have three-dimensional images (U3D) in them. When I bring them up in Acrobat, everything is fine until I try to print the file. If I've viewed a page, it prints fine, showing the U3D image as I last saw it. If I haven't viewed a page yet, the printout is blank. (I can demonstrate this right from the Print dialog, by previewing pages.) The only reference I've seen to printing U3D is in the PDF Standard, which says, basically, that this should work. It recommends "PV" or "PO" for the A key in the 3D activation dictionary, and I'm using libharu which uses "PV".

    Read the article

  • /proc/pid/environ missing variables

    - by Josh Arenberg
    Google is giving no love on this one today, so I turn to the experts... I'm currently hacking together a script that relies on the /proc/pid/environ feature in Linux (RHEL 4) to check for a particular environment variable. Trouble is, certain environment variables don't seem to show up in there for some reason. Example: create some test vars:
      $ export T_1=testval TEST_1=testval T=testval TESTING_LONGEST=testval
    open a subshell:
      $ bash
      $ cat /proc/self/environ | tr "\0" "\n" | grep testval
      TESTVARIABLE_LONGEST=testval
      T=testval
    Hmm... where did T_1 and TEST_1 go?? What rules govern this strange universe? Thanks in advance, Josh
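
    One property worth keeping in mind when scripting against this file (a small demonstration of the general behaviour, not an explanation of the specific case above): /proc/PID/environ is a snapshot of the environment the process was exec'd with, and it never changes afterwards, so variables exported later in a running shell only appear in the environ of processes started after the export.
      $ export LATER=testval
      $ tr '\0' '\n' < /proc/$$/environ | grep LATER        # nothing: this shell's environ was fixed at exec time
      $ bash -c "tr '\0' '\n' < /proc/self/environ | grep LATER"
      LATER=testval                                         # a child started after the export does see it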

    Read the article

  • Looking for MRP software

    - by Samuel
    I am looking for MRP software. All the stuff I have found so far looks like it's from the 90s and has that very dated, old-school business feel. I want something fresh and new. I would prefer an open source, web-based solution; PHP/MySQL would be best but is not required. I couldn't find anything web-based that didn't make me want to cry. Super User doesn't have a lot of eye candy but it still looks great. I am the web developer at my company, so if I don't find anything I will be making it myself (well, I'll try to at least).

    Read the article

  • SSL Certificate for local web server

    - by Firefly
    Is it at all possible to create a self-signed certificate, for use on multiple machines on a local network, that would stop the browser complaining that it is not a trusted site? We have a product which is basically a computer running lighttpd to serve a web interface for configuring the computer (sort of how a router has a web interface). There can also be many of these machines running on the same network with dynamic IPs. What I basically want to do is enable SSL for extra security, but I don't want people on the local network to get a browser warning about the certificate not being trusted. Is this at all possible?
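
    The usual way to get there is to create one private CA, sign a certificate for each device with it, and then import just the CA certificate as trusted on every client machine or browser; a rough sketch (file names and subjects are placeholders, and the trust import step still has to happen on each client):
      # one-off: create a private CA (keep ca.key safe)
      openssl genrsa -out ca.key 2048
      openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=My Local CA"

      # per device: key + CSR, then sign it with the CA
      openssl genrsa -out device.key 2048
      openssl req -new -key device.key -out device.csr -subj "/CN=device.local"
      openssl x509 -req -in device.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
          -days 825 -out device.crt
    With dynamic IPs the subject names are the awkward part: browsers match the certificate against the name or address they connected to, so stable hostnames (or IP addresses listed as subjectAltNames) are still needed to avoid warnings.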

    Read the article

  • Apache - The name

    - by Joshua Enfield
    I am working on a migration to a newer virtualized server. The old one has Apache 2.2.4 according to the old server's phpinfo(). The new one, with the most up-to-date packages, has 2.2.3. How can this be, assuming no trickery is involved? The old one is years old. A lot of the guides I reference use apache2 in folder names and in many of the conventions, but the newest version of things, as I understand it, is called httpd. Did Apache change the name from what it originally was? (i.e. break the web server component out into its own project called httpd; I realize the original daemon was probably still called httpd)
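
    For what it's worth, the naming difference is mostly packaging: upstream Apache HTTP Server ships a daemon called httpd (and Red Hat style distributions keep that name), while Debian and Ubuntu package the same server as apache2 with their own file layout. Distribution packages also pin older version numbers and backport fixes, which is how a fully up-to-date server can report 2.2.3. A quick check of what is actually installed (a sketch; run whichever half matches the distro):
      # Debian/Ubuntu packaging
      apache2 -v                      # or: apache2ctl -V
      dpkg -l 'apache2*' | grep ^ii
      # Red Hat style / upstream naming
      httpd -v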

    Read the article

  • FeedValidator & FeedBurner get 404 when accessing WordPress RSS feeds when permalinks are enabled

    - by Wazbaur
    I'm helping a friend set up a self-hosted WordPress blog + FeedBurner, and I'm seeing a problem with the feeds that I find somewhat mysterious. Using the default permalink structure (e.g., ?p=123) everything works as expected; I can follow the feed in Google Reader, navigate to it manually, and set it up in FeedBurner. However, once I switch away from the default permalink structure, FeedBurner and FeedValidator both report that accessing the feed returns HTTP 404, and Google Reader no longer shows new posts (I'm assuming for the same reason), but I can navigate to the feed using a browser. When I do that, it appears as though nothing is wrong; there is a feed there and it contains all the posts I expect it to have. I've restarted the FeedBurner & Reader setup from the beginning after changing the link structure, so I don't think they're doing anything silly like looking at the feed at its old address. I've seen people with similar problems in various other places, but there doesn't seem to be a good answer anywhere.
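
    Pretty permalinks on Apache depend on WordPress's rewrite rules being active for every client, so the first thing usually checked in this situation (a sketch, not a diagnosis; it assumes the blog lives at the site root, and any existing .htaccess should be backed up first) is that the standard block is present in .htaccess and that mod_rewrite plus AllowOverride are enabled for that directory:
      # standard WordPress rewrite block, written from the blog's root directory
      cat > .htaccess <<'EOF'
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
      EOF

      # and make sure the module is actually enabled (Debian/Ubuntu layout assumed)
      sudo a2enmod rewrite && sudo service apache2 restart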

    Read the article

  • How do I create a dynamic formula on Excel?

    - by Mario Marinato -br-
    In Excel, I have a DDE formula in B1 which reads =server|info!someText.data. I want to change the formula so that someText is written in A1 and then referenced from the DDE formula, something like =server|info!A1.data. I have tried to concatenate "A1" directly into the formula, as above, with no success. Some other things I tried were =server|info!A1&".data" and =server|info!indirect(A1)&".data", but also without success. Is there a way to achieve this? How?

    Read the article

  • How can browsers in VMs resolve hostnames of websites on parent PC?

    - by elliot100
    I have a number of local websites in development on my Windows PC, set up as virtual hosts within Apache, with hostnames (along the lines of dev.example.com) resolved via the hosts file, so I can test them out with various browsers. I now want to extend browser testing to running browsers in various OSs in virtual machines, and want to be able to resolve dev.example.com from the VMs. Currently these are a mix of VMware Server and Virtual PC. I know I can edit the hosts file on any Windows VMs, but this is a bit fiddly and I'd like a solution which is independent of the individual VMs. I think what I need is a nameserver, but what's the simplest way of going about this? I'd like everything to be self-contained on the one machine. I think I can cover firewall and Apache permissioning issues.
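
    One self-contained approach is a small DNS forwarder such as dnsmasq, running somewhere every VM can reach (for example in one of the Linux VMs, or a Windows build on the host); this is a sketch only, and the host's LAN IP below is a placeholder:
      # /etc/dnsmasq.conf (minimal sketch)
      # answer the dev names with the Apache host's address, forward everything else upstream
      address=/dev.example.com/192.168.1.10     # host IP is a placeholder
      # dnsmasq also serves entries from its local /etc/hosts by default,
      # so the existing hosts-file entries can simply be copied there
    Each VM then only needs its DNS server pointed at wherever dnsmasq runs, instead of having its own hosts file maintained.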

    Read the article

  • Unable to create a Windows 7 system image of a failing hard drive

    - by Rahul
    The hard disk of my one-year-old ThinkPad T400 has started failing periodic hardware tests. I get a "Targeted Read Test Failed" error, and the "SMART short self test" times out. I am now trying to create a Windows 7 system image of the hard disk, but it fails without giving any specific error messages. I tried using Comodo Backup but got an error (code 101117) there as well. I have copied the important files to Dropbox, but I would like to take a full system backup as I have plenty of software installed on the machine. Does anyone know why this is happening and how I can take a backup of the system image?
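
    Imaging tools that stop at the first unreadable sector tend to fail on drives in this state. One commonly used alternative, offered only as a hedged sketch rather than a fix for the tools above, is GNU ddrescue from a Linux live USB, writing to an external disk large enough to hold the image (device and paths below are placeholders):
      # first pass: grab everything readable quickly, keeping a map of bad areas
      sudo ddrescue -n /dev/sda /mnt/external/t400.img /mnt/external/t400.map
      # second pass: go back and retry only the bad areas a few times
      sudo ddrescue -d -r3 /dev/sda /mnt/external/t400.img /mnt/external/t400.map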

    Read the article

  • Website is not clickable in Windows XP Internet Explorer 7

    - by c-sharp newbie
    I have installed Windows XP Virtual PC on Windows 7 to test a site that is having issues in IE7 on Windows XP. The website loads up but you cannot click on hyperlinks; it's like the website has frozen. Is this a browser support issue or an OS issue? Can anyone shed any light on this? Are there any browser tools I can use to spot problems? Sorry if I have been too vague; there's not much else to say really, I'm completely lost. Maybe this might help a little; any guidance is appreciated. UPDATE: I think I know what the problem may be: it's the jQuery UI reference that is causing issues with the site. Has anyone else experienced similar problems? The jQuery library used was jquery-1.8.0.min.js.

    Read the article

  • mini-dinstall chmod 0600 changes file: Operation not permitted

    - by V. Reileno
    I'm getting "Operation not permitted" in the mini-dinstall.log everytime a new debian package has been uploaded on the custom debian repository using dput. The deb file is installed successfuly but the changes file remains in the incoming folder. I can not use a post-install script when the changes file can not be processed. How can I fix this problem? Traceback (most recent call last): File "/usr/bin/mini-dinstall", line 780, in install retval = self._install_run_scripts(changefilename, changefile) File "/usr/bin/mini-dinstall", line 826, in _install_run_scripts do_chmod(changefilename, 0600) File "/usr/bin/mini-dinstall", line 193, in do_chmod do_and_log('Changing mode of "%s" to %o' % (name, mode), os.chmod, name, mode) File "/usr/bin/mini-dinstall", line 176, in do_and_log function(*args) OSError: [Errno 1] Operation not permitted: '/srv/debian-repository/mini-dinstall/incoming/debian-repository_1.3_amd64.changes' The mini-dinstall permissions: ls -lad incoming/ drwxrws--- 2 mini-dinstall debian-repository-uploader 4096 Jun 6 11:45 incoming/ ls -la incoming/debian-repository_1.3_amd64.changes -rw-rw---- 1 uploader-user debian-repository-uploader 1322 Jun 6 11:43 incoming/debian-repository_1.3_amd64.changes groups uploader-user uploader-user : uploader-user adm users debian-repository debian-repository-uploader puppet-client-updater groups mini-dinstall mini-dinstall : mini-dinstall debian-repository-uploader Cheers and thanks V.

    Read the article

  • Custom Transport Agent: How do I collect NDRs and all other undeliverables in Exchange 2010 from the Postmaster?

    - by makerofthings7
    I'm trying to collect all NDRs in a single mailbox: those for invalid recipients, and anything else that fails for any reason. I have a custom transport agent that I've written myself, which appears here:
      [PS] C:\Windows\system32>Get-TransportAgent
      Identity                                  Enabled     Priority
      --------                                  -------     --------
      Transport Rule Agent                      True        1
      Text Messaging Routing Agent              True        2
      Text Messaging Delivery Agent             True        3
      Routing Rule Agent                        True        4
      ****
    Sometimes when I run get-messagetrackinglog I get failures like the one below:
      RunspaceId              : 4ecc61fb-13b9-4506-b680-577222c9bf21
      Timestamp               : 10/14/2013 12:42:42 PM
      ClientIp                :
      ClientHostname          : Exchange1
      ServerIp                :
      ServerHostname          :
      SourceContext           : Routing Rule Agent
      ConnectorId             :
      Source                  : AGENT
      EventId                 : FAIL
      InternalMessageId       : 4416
      MessageId               : <[email protected]>
      Recipients              : {[email protected]}
      RecipientStatus         : {}
      TotalBytes              : 4542
      RecipientCount          : 1
      RelatedRecipientAddress :
      Reference               :
      MessageSubject          : review CGRC due diligence.
      Sender                  : [email protected]
      ReturnPath              : [email protected]
      MessageInfo             :
      MessageLatency          :
      MessageLatencyType      : None
      EventData               :
    How can I collect the NDRs in a single mailbox for review? I have already run the following command, but it has had no effect:
      [PS] C:\>Set-TransportConfig -JournalingReportNdrTo [email protected] -ExternalPostmasterAddress [email protected]

    Read the article

  • Root certificate authority works on Windows/Linux but not Mac OS X (malformed)

    - by AKwhat
    I have created a self-signed root certificate authority which, if I install it onto Windows, Linux, or even into the certificate store in Firefox (Windows/Linux/Mac OS X), works perfectly with my terminating proxy. On Mac OS X I have installed it into the system keychain and set the certificate to always trust. Within the Chrome browser details it says: "The certificate that Chrome received during this connection attempt is not formatted correctly, so Chrome cannot use it to protect your information. Error type: Malformed certificate". I used these commands to create the certificate:
      openssl genrsa -des3 -passout pass:***** -out private/server.key 4096
      openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf
    If the issue is NOT that it is malformed (because it works everywhere else), then what else could it be? Am I installing it incorrectly? Update: I tried changing the certificate attributes, but to no avail:
      openssl genrsa -des -passout pass:***** -out private/server.key 2048
      openssl req -batch -passin pass:***** -new -x509 -nodes -sha256 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf
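
    A way to see exactly what the platforms are being handed (a diagnostic sketch; whether it explains the Chrome error is not certain) is to dump the generated root and check its X.509 version and extensions. CA certificates are normally X.509 v3 with basicConstraints CA:TRUE, which with openssl req -x509 depends on an x509_extensions / v3_ca style section actually being picked up from the config; the section name below assumes the stock openssl.cnf layout:
      # inspect the generated root: version and basic constraints
      openssl x509 -in server.crt -noout -text | grep -A1 -E 'Version|Basic Constraints'

      # re-issue the root explicitly requesting the v3_ca extension section
      openssl req -batch -passin pass:***** -new -x509 -sha256 -days 3600 \
          -key private/server.key -out server.crt -config ../openssl.cnf -extensions v3_ca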

    Read the article
