Search Results

Search found 7249 results on 290 pages for 'https everywhere'.

Page 168/290 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • seeking to upgrade my bash magic. help decipher this command: bash -s stable

    - by tim
    ok so i'm working through a tutorial to get rvm installed on my mac. the bash command to fetch rvm via curl is curl -L https://get.rvm.io | bash -s stable. I understand the first half: curl downloads the script from get.rvm.io, and the result is piped to the subsequent bash command, but I'm not sure what that command is doing. My questions: -s: I'm always confused about how to refer to these. What kind of thing is this: a command-line argument? a switch? something else? -s: what is it doing? I've googled for about half an hour, but not knowing how to refer to it makes searching difficult. stable: what is this? tl;dr: help me decipher the command bash -s stable. To those answering this post: I aspire to one day be as bash-literate as you. Until then, upstarts such as myself thank you for the help!
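
    A quick way to see what -s does is to feed bash a one-line script on stdin and watch the trailing word show up as a positional parameter; this is only a minimal illustration, not part of the rvm installer itself:

        # -s tells bash to read the script from standard input; any words after it
        # ("stable" here) become the script's positional parameters ($1, $2, ...)
        echo 'echo "first argument: $1"' | bash -s stable
        # prints: first argument: stable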

    Read the article

  • Automated Scanning for Corrupted Archive Files

    - by Synetech inc.
    Hi, I have all kinds of files that I have downloaded from everywhere over the years scattered around my hard drives. I'm in the process of trying to organize them all and have run into a problem. Sometimes a file was not downloaded correctly (this was a big issue when Chromium first came out) and is thus corrupt. For media files this is relatively simple to determine: it requires actually opening and examining the file. For executables or other binaries it is relatively difficult (executables may or may not crash, other binaries could be completely unknown). Archive files (e.g. zip, rar, 7z, exe, ace), however, should be pretty simple: they have a built-in corruption-detection facility. My problem now is that I really don't want to open and test every single archived file throughout my drive; that would be a nightmare. I'm looking for a utility that can automate the process. Is there a program that can scan archive files on a drive and list the ones that are corrupt? Thanks a lot.
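
    If nothing ready-made turns up, a shell loop can get you most of the way; this is a rough sketch that assumes the 7-Zip command-line tool (7z) is available, since it can test zip, rar, 7z and many self-extracting archives:

        # walk a drive, test every archive, and log the ones whose built-in
        # integrity check fails (extend the -iname list as needed)
        find /path/to/drive -type f \( -iname '*.zip' -o -iname '*.rar' -o -iname '*.7z' \) -print0 |
          while IFS= read -r -d '' f; do
            7z t "$f" > /dev/null 2>&1 || printf 'CORRUPT: %s\n' "$f"
          done > corrupt-archives.txt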

    Read the article

  • Synchronizing files between Linux servers through FTP

    - by Daniel Magliola
    I have the following configuration of servers: 1 central Linux server (a VPS), and 8 satellite Linux servers ("crappy shared hostings"). I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; there's no need to look for "new" files on the satellite servers). There are a couple of catches, though: I can't install any custom software on the satellite servers, or do strange command-line things that auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hostings where I have absolutely no control over anything. I need to send the files over FTP. I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them. If I were to do this manually, the steps would be: (1) get the list of files on a satellite server, (2) compare it to my own and send the files that are missing, (3) get the list of files again and store it in my central database. I'd like to know what tools are out there that can alleviate as much of this as possible: first the syncing, and then getting the list of files available on the other server. I'm going to be doing everything from PHP; I'm not sure whether there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
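
    One possible shape for the push step, assuming lftp can be installed on the central VPS (hostnames, credentials and paths below are placeholders): its mirror -R command uploads only missing or newer files over plain FTP, and the file list for step 3 can be pulled in the same session.

        # push /srv/files to one satellite over FTP, then save its file list locally
        lftp -u "$FTP_USER","$FTP_PASS" ftp://satellite1.example.com -e "
          mirror -R --only-newer /srv/files /public_html/files;
          cls -1 /public_html/files > /tmp/satellite1-files.txt;
          quit"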

    Read the article

  • Unknown protocol when trying to connect to remote host with stunnel

    - by RaYell
    I'm trying to set up a stunnel for WebDAV on Windows. I want to connect port 80 on my local interface to port 443 on another machine in my network. I can ping the remote machine, but when I use the tunnel I'm getting this error all the time:
        SSL state (accept): before/accept initialization
        SSL_accept: 140760FC: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
    There is nothing in the logs on the other machine, and here's my stunnel connection config:
        [https]
        accept = 127.0.0.2:80
        connect = 10.0.0.60:443
        verify = 0
    I've set it up to accept all certificates, so this shouldn't be a problem with the self-signed certificate the remote host uses. Does anyone know why this connection cannot be established?
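
    For what it's worth, that error usually means the side receiving plain HTTP is still running in SSL-server mode; a hedged sketch of the same section with client mode turned on, so stunnel accepts unencrypted traffic on 127.0.0.2:80 and wraps it in SSL toward 10.0.0.60:443:

        ; client mode: plain text in on accept, SSL out on connect
        [https]
        client = yes
        accept = 127.0.0.2:80
        connect = 10.0.0.60:443
        verify = 0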

    Read the article

  • Virtual machine on Ubuntu

    - by MITHIYA MOIZ
    I have configured a virtual machine on Ubuntu with the help of the article below: https://help.ubuntu.com/9.04/serverguide/C/libvirt.html. I managed to finish everything except the major part: getting the virtual host to talk to the real network, which I guess should be done via a bridge interface. In Virtual Machine Manager, whichever interface I try to choose, it tells me the interface is not bridged. When I try to bridge the interface eth0 as below:
        auto br0
        iface br0 inet static
            address 192.168.0.223
            network 192.168.0.0
            netmask 255.255.255.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
    I cannot reach the network through this interface, and the host server loses all communication with the network. But when I remove the bridge interface from /etc/network/interfaces and configure eth0 as below, it works fine:
        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.0.223
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            dns-nameservers 62.215.6.51
            gateway 192.168.0.1
    How can I set up the bridge interface correctly, and what should my /etc/network/interfaces file look like?
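
    A sketch of how /etc/network/interfaces is commonly laid out for this setup (it assumes the bridge-utils package is installed and that eth0 carries no address of its own, only the bridge does; addresses are copied from the question):

        # eth0 is enslaved to the bridge and gets no address itself
        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.168.0.223
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 62.215.6.51
            bridge_ports eth0
            bridge_stp off
            bridge_fd 9
            bridge_maxage 12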

    Read the article

  • How to install MariaDB rpms in CentOS 6.4 using rpm (not yum cmd) + handling mysql-libs conflicts

    - by Pat C
    I need to script the install of MariaDB using the rpm command in CentOS 6.4. I can't use yum since it's going to be an offline install so there's no access to the repository. The only MySQL package installed is mysql-libs as various other packages in CentOS depend on it. When I did a test install of MariaDB with yum it correctly accounted for mysql-libs and uninstalled it at the end as MariaDB could handle the dependencies after it was installed: [root@new-host-6 ~]# yum install MariaDB-client MariaDB-common MariaDB-compat MariaDB-devel MariaDB-server MariaDB-shared Loaded plugins: downloadonly, fastestmirror, refresh-packagekit, security, verify Loading mirror speeds from cached hostfile * base: mirrors.kernel.org * extras: mirror.keystealth.org * updates: mirror.umd.edu Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package MariaDB-client.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-common.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-compat.x86_64 0:5.5.32-1 will be obsoleting ---> Package MariaDB-devel.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-server.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-shared.x86_64 0:5.5.32-1 will be obsoleting ---> Package mysql-libs.x86_64 0:5.1.66-2.el6_3 will be obsoleted --> Finished Dependency Resolution Dependencies Resolved ==================================================================================================================================================================== Package Arch Version Repository Size ==================================================================================================================================================================== Installing: MariaDB-client x86_64 5.5.32-1 mariadb 10 M MariaDB-common x86_64 5.5.32-1 mariadb 23 k MariaDB-compat x86_64 5.5.32-1 mariadb 2.7 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 MariaDB-devel x86_64 5.5.32-1 mariadb 5.6 M MariaDB-server x86_64 5.5.32-1 mariadb 34 M MariaDB-shared x86_64 5.5.32-1 mariadb 1.1 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 Transaction Summary ==================================================================================================================================================================== Install 6 Package(s) Total download size: 53 M Is this ok [y/N]: y Downloading Packages: (1/6): MariaDB-5.5.32-centos6-x86_64-client.rpm | 10 MB 00:06 (2/6): MariaDB-5.5.32-centos6-x86_64-common.rpm | 23 kB 00:00 (3/6): MariaDB-5.5.32-centos6-x86_64-compat.rpm | 2.7 MB 00:02 (4/6): MariaDB-5.5.32-centos6-x86_64-devel.rpm | 5.6 MB 00:06 (5/6): MariaDB-5.5.32-centos6-x86_64-server.rpm | 34 MB 00:23 (6/6): MariaDB-5.5.32-centos6-x86_64-shared.rpm | 1.1 MB 00:00 -------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 1.3 MB/s | 53 MB 00:40 warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY Retrieving key from https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Importing GPG key 0x1BB943DB: Userid: "Daniel Bartholomew (Monty Program signing key) <[email protected]>" From : https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Warning: RPMDB altered outside of yum. 
Installing : MariaDB-compat-5.5.32-1.x86_64 1/7 Installing : MariaDB-common-5.5.32-1.x86_64 2/7 Installing : MariaDB-server-5.5.32-1.x86_64 3/7 chown: cannot access `/var/lib/mysql': No such file or directory PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! To do so, start the server, then issue the following commands: '/usr/bin/mysqladmin' -u root password 'new-password' '/usr/bin/mysqladmin' -u root -h new-host-6 password 'new-password' Alternatively you can run: '/usr/bin/mysql_secure_installation' which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers. See the MariaDB Knowledgebase at http://kb.askmonty.org or the MySQL manual for more instructions. Please report any problems with the '/usr/bin/mysqlbug' script! The latest information about MariaDB is available at http://mariadb.org/. You can find additional information about the MySQL part at: http://dev.mysql.com Support MariaDB development by buying support/new features from Monty Program Ab. You can contact us about this at [email protected]. Alternatively consider joining our community based development effort: http://kb.askmonty.org/en/contributing-to-the-mariadb-project/ Installing : MariaDB-devel-5.5.32-1.x86_64 4/7 Installing : MariaDB-client-5.5.32-1.x86_64 5/7 Installing : MariaDB-shared-5.5.32-1.x86_64 6/7 Erasing : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Verifying : MariaDB-common-5.5.32-1.x86_64 1/7 Verifying : MariaDB-server-5.5.32-1.x86_64 2/7 Verifying : MariaDB-devel-5.5.32-1.x86_64 3/7 Verifying : MariaDB-client-5.5.32-1.x86_64 4/7 Verifying : MariaDB-compat-5.5.32-1.x86_64 5/7 Verifying : MariaDB-shared-5.5.32-1.x86_64 6/7 Verifying : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Installed: MariaDB-client.x86_64 0:5.5.32-1 MariaDB-common.x86_64 0:5.5.32-1 MariaDB-compat.x86_64 0:5.5.32-1 MariaDB-devel.x86_64 0:5.5.32-1 MariaDB-server.x86_64 0:5.5.32-1 MariaDB-shared.x86_64 0:5.5.32-1 Replaced: mysql-libs.x86_64 0:5.1.66-2.el6_3 Complete! My question is, what is the equivalent way to install the MariaDB packages using the rpm command only as opposed to yum? If I do rpm -ivh MariaDB*.rpm, I will get a ton of messages like the following about conflicts with mysql-libs: file /etc/my.cnf from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 I then used the --force option to install the MariaDB rpms and uninstalled mysql-lib, I didn't get any weird messages but I'm not sure that is the cleanest method to handle the conflicts and do the install. So can someone confirm that installing MariaDB with the following rpm commands would be the same as using yum to install the packages and handle mysql-libs conflicts/removal: rpm -ivh --force MariaDB*.rpm rpm -e mysql-libs Thanks for any input!
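
    For reference, one offline sequence that mirrors what yum did above; this is a sketch rather than a verified recipe, and it relies on MariaDB-compat/MariaDB-shared providing the client libraries that mysql-libs used to supply:

        # remove mysql-libs first (without cascading removals) so the MariaDB rpms
        # install without file conflicts; MariaDB-compat then provides the
        # libmysqlclient libraries the other CentOS packages depend on
        rpm -e --nodeps mysql-libs
        rpm -ivh MariaDB-*.rpm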

    Read the article

  • How to do IIS SSL server redirects correctly? Is meta refresh needed?

    - by Jesse
    Hi all! I think our backend programmer/server admin is handling our SSL redirects in a pretty wonky way; see it in action here: www.mchenry.edu/parentorientation. First off, see how it redirects to index2.asp? Is this necessary? Can't she simply redirect to the original index.asp but have it be https:// instead? Also, she is using a meta refresh on the original index.asp page to redirect to index2.asp as well, and she says this is a backup in case the server config changes and the server can't handle the redirect, so the web page would take over. Finally, she said she tried using the server redirect alone but that it kept looping on itself. What did she do wrong? Is this even possible? Is she giving us a snow job or what? I want a better understanding of what is happening here so I can talk to my boss about it, because this is driving me up the wall. Thanks for any info you can provide.

    Read the article

  • PHPMyAdmin: "General relation features: Disabled"

    - by Simón
    I've been looking around for something like this for a while, and I've found some tips on similar issues, but not exactly the same, so I really don't know what to do. I downloaded and installed WAMP, and I have MySQL and phpMyAdmin set up according to the usual recommendations found everywhere (securing the MySQL root account, etc.). When I log into phpMyAdmin (either as root or as pma), I see the following message at the bottom of the page: "The additional features for working with linked tables have been deactivated. To find out why click here." When following the link, I get a page with the following:
        Server: localhost
        $cfg['Servers'][$i]['pmadb'] ... OK
        $cfg['Servers'][$i]['relation'] ... OK
        General relation features: Disabled
        $cfg['Servers'][$i]['table_info'] ... OK
        Display Features: Disabled
        $cfg['Servers'][$i]['table_coords'] ... OK
        $cfg['Servers'][$i]['pdf_pages'] ... OK
        Creation of PDFs: Disabled
        $cfg['Servers'][$i]['column_info'] ... OK
        Displaying Column Comments: Disabled
        Bookmarked SQL query: Disabled
        Browser transformation: Disabled
        $cfg['Servers'][$i]['history'] ... OK
        SQL history: Disabled
        $cfg['Servers'][$i]['designer_coords'] ... OK
        Designer: Disabled
    Can somebody please explain why, if all the settings are "OK", the features remain "Disabled"? Note: at first all the settings were "not OK"; I added the settings to config.inc.php and then created the tables using scripts/create_tables.php. Of course I have already tried restarting the server and clearing the browser cache (several times, so I am sure the problem lies elsewhere).

    Read the article

  • Good Shibboleth tutorials out there?

    - by fgysin
    I am looking into using Shibboleth for authentication of web applications at my organisation. I am very new to this subject and would like to read through some good tutorials, hands-on lessons, or whatever is out there to help newbies get to know Shibboleth. So far I have not been able to find any tutorials that contain specific examples for each step. I would like to get a running setup going somehow so I can play around with it... What I have found up to now: Official Documentation for Shibboleth 2 -- https://spaces.internet2.edu/display/SHIB2/Installation. I would appreciate any hints about additional Shibboleth resources.

    Read the article

  • Cygwin/Git Bizarre Terminal Issue

    - by emptyset
    Alright, this is weird. First off, this is mintty running on up-to-date cygwin, with git pulled from cygwin's setup.exe. I am running zsh.
        $ git clone https://<user>@<domain>/<repository>/ ~/src/project/dev
        Initialized empty Git repository in /cygdrive/c/src/project/dev/.git/
        Password: <actual password in plain text appears>
        # Nothing happens...
        ^C
        $ <password text that I just typed>
        zsh: command not found: <same password text>
    What is going on here? Is this a terminal problem, a shell problem, a git problem, or a cygwin problem? Update: Yes, I'm running the Cygwin git version, not the Windows version:
        $ which git
        /usr/bin/git
        $ git --version
        git version 1.7.1
        $ /cygdrive/c/Program\ Files\ \(x86\)/Git/bin/git.exe --version
        git version 1.7.0.2.msysgit.0
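
    As a stop-gap while the tty question gets sorted out, the password can be supplied in the clone URL so git never has to prompt at all (everything in angle brackets is a placeholder, and note the password will end up in your shell history):

        git clone "https://<user>:<password>@<domain>/<repository>/" ~/src/project/dev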

    Read the article

  • Redirect an Apache2 SSL VirtualHost with mod_alias

    - by Jeff
    I want to make sure there aren't any odd behaviors that I don't know about when redirecting an SSL VirtualHost with mod_alias Redirect, as outlined by Apache here. My code seems to work, but since SSL virtual hosts are restricted to just one IP address, I want to make sure there aren't any problems eluding me. Explicitly not using TLS; I'm stuck with Apache 2.2 for now.
        <VirtualHost *:443>
            ServerName example.com
            SSLEngine On
            Redirect 301 / https://www.example.com/
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine On
            # Do stuff #
        </VirtualHost>
    So I guess my question is, should SSL VirtualHost redirection with mod_alias Redirect work the same as non-SSL redirection?

    Read the article

  • Debian 6 Internet connection sharing aka IP masquerade not working

    - by Rautamiekka
    The problem: the computers (an Xbox 360 and a Kubuntu 12.04.1 laptop) can't access the Internet through a recently installed desktopless Debian 6 laptop (which is wirelessly connected to a WLAN station), although addresses are successfully handed out by dnsmasq. The attempts: (1.1) configure /etc/dnsmasq.conf according to http://wiki.debian.org/HowTo/dnsmasq by adding the lines interface=eth0 and dhcp-range=192.168.0.50,192.168.0.150,255.255.255.0,12h; (1.2) follow http://www.cyberciti.biz/faq/rhel-fedora-linux-internet-connection-sharing-howto/ and use their script to set up iptables; (2) follow the Ubuntu Internet Gateway Method (iptables) at https://help.ubuntu.com/community/Internet/ConnectionSharing, which was recommended and which worked in the "Share internet in Linux" question. The Debian laptop was rebooted many times, between each attempt, with and without the script auto-executing via /etc/rc.local. While adding the iptables-restore command to that file I disabled the script.
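
    For comparison, the whole NAT setup boils down to a few lines; this sketch assumes the WLAN uplink is wlan0 and the Xbox/laptop sit behind eth0 (the same interface dnsmasq serves), run as root before being made persistent:

        # enable forwarding and masquerade LAN traffic out through the WLAN uplink
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT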

    Read the article

  • Installing List Component on SharePoint Server

    - by Tom
    I added the SharePoint site to the 'Document Management' section in CRM with the List Components checked, and it added it with no problem. Also, when I navigate to the 'Documents' section under an account, it shows up with the format of the List Components. However, if I click on 'New' or 'Actions' I get the following error message: "An error has occurred in the script on this page. Error: Access is denied. URL: https://*serveraddress*/crmgrid/scripts/crmmenu.htc. Do you want to continue running scripts on this page?" I have run the PowerShell script which added the .htc MIME extension to IIS. Does anyone know what might be wrong?

    Read the article

  • Set up proxy for VPN server on Ubuntu Server 12.04

    - by Morteza Soltanabadiyan
    I have a VPN server with HTTPS, L2TP, OpenVPN, and PPTP. I want to set up a proxy on the server so that all connections coming from VPN clients use it. I created the following bash script for it, but the proxy isn't working:
        gsettings set org.gnome.system.proxy mode 'manual'
        gsettings set org.gnome.system.proxy.http enabled true
        gsettings set org.gnome.system.proxy.http host 'cproxy.anadolu.edu.tr'
        gsettings set org.gnome.system.proxy.http port 8080
        gsettings set org.gnome.system.proxy.http authentication-user 'admin'
        gsettings set org.gnome.system.proxy.http authentication-password 'admin'
        gsettings set org.gnome.system.proxy use-same-proxy true
        export http_proxy=http://admin:[email protected]:8080
        export https_proxy=http://admin:[email protected]:8080
        export HTTP_PROXY=http://admin:[email protected]:8080
        export HTTPS_PROXY=http://admin:[email protected]:8080
    What should I do to set up a global proxy for the server that all VPN clients use automatically?
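
    The gsettings calls only affect a GNOME desktop session on the server, not routed VPN traffic; one sketch of a server-side alternative is to intercept the clients' web traffic with iptables and hand it to a local proxy (this assumes something like squid listening on port 3128 in intercept mode, and ppp+ matching the VPN client interfaces):

        # transparently redirect HTTP from VPN clients into the local proxy
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j REDIRECT --to-ports 3128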

    Read the article

  • Symbolic link to text editor (Sublime) on Mac

    - by Michael
    I'm following along with this tutorial on how to open Sublime Text from the terminal: https://tutsplus.com/lesson/services-and-opening-sublime-from-the-terminal/ . It gives the following command to create the link:
        ln -s "/Applications/Sublime Text 2.app/Contents/SharedSupport/bin/subl" /bin/subl
    After creating that link, I should be able to run subl . to open all the files in a folder in Sublime. However, when I do, it says:
        -bash: subl: command not found
    and when I try to create the link again, my system says the file exists:
        ln: /bin/subl: File exists
    Any idea what I can do?
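
    One way out, sketched under the assumption that the earlier link is simply broken or pointing at the wrong target: remove it and recreate it somewhere that is normally on PATH, such as /usr/local/bin, then clear bash's command cache or open a new shell.

        sudo rm /bin/subl
        sudo ln -s "/Applications/Sublime Text 2.app/Contents/SharedSupport/bin/subl" /usr/local/bin/subl
        hash -r    # forget any cached lookup of "subl"
        subl .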

    Read the article

  • Can I optimize this mod_wsgi / apache file further?

    - by tomwolber
    Hi! I am new to Django/Python/mod_wsgi, and I was wondering if I could optimize this file to reduce memory usage:
        ServerRoot "/home/<foo>/webapps/django_wsgi/apache2"

        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule rewrite_module modules/mod_rewrite.so
        LoadModule setenvif_module modules/mod_setenvif.so
        LoadModule wsgi_module modules/mod_wsgi.so

        LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog /home/<foo>/logs/user/access_django_wsgi.log combined
        ErrorLog /home/<foo>/logs/user/error_django_wsgi.log

        KeepAlive Off
        Listen 12345
        MaxSpareThreads 3
        MinSpareThreads 1
        MaxClients 5
        MaxRequestsPerChild 300
        ServerLimit 4
        HostnameLookups Off
        SetEnvIf X-Forwarded-SSL on HTTPS=1
        ThreadsPerChild 5

        WSGIDaemonProcess django_wsgi processes=5 python-path=/home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6 threads=1
        WSGIPythonPath /home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6
        WSGIScriptAlias /auctions /home/<foo>/webapps/django_wsgi/auctions.wsgi
        WSGIScriptAlias /achievers /home/<foo>/webapps/django_wsgi/achievers.wsgi
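
    One commonly suggested adjustment, offered as a sketch rather than a tuned answer: with only a handful of concurrent requests, fewer daemon processes with a few threads each (plus a request cap so any leaked memory is reclaimed) usually shrinks the footprint noticeably.

        WSGIDaemonProcess django_wsgi processes=2 threads=5 maximum-requests=500 \
            python-path=/home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6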

    Read the article

  • maven scm plugin deleting output folder in every execution

    - by Udo Fholl
    Hi all, I need to download from 2 different svn locations to the same output directory. So i configured 2 different executions. But every time it executes a checkout deletes the output directory so it also deletes the already downloaded projects. Here is a sample of my pom.xml: <profiles> <profile> <id>checkout</id> <activation> <property> <name>checkout</name> <value>true</value> </property> </activation> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-scm-plugin</artifactId> <version>1.3</version> <configuration> <username>${svn.username}</username> <password>${svn.pass}</password> <checkoutDirectory>${path}</checkoutDirectory> <skipCheckoutIfExists /> </configuration> <executions> <execution> <id>checkout_a</id> <configuration> <connectionUrl>scm:svn:https://host_n/folder</connectionUrl> <checkoutDirectory>${path}</checkoutDirectory> </configuration> <phase>process-resources</phase> <goals> <goal>checkout</goal> </goals> </execution> <execution> <id>checkout_b</id> <configuration> <connectionUrl>scm:svn:https://host_l/anotherfolder</connectionUrl> <checkoutDirectory>${path}</checkoutDirectory> </configuration> <phase>process-resources</phase> <goals> <goal>checkout</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> Is there any way to prevent the executions to delete the folder ${path} ? Thank you. PS: I cant format the pom.xml fragment correctly, sorry!

    Read the article

  • Postfix spool on ext3 optimizations in >=linux-2.6.34 days

    - by Luke404
    Given the very specific nature of the subject (we're not talking about mailboxes, just the spool; we're not talking about other filesystems, just ext3; and so on...) and the maturity of the software involved (Linux kernel, ext3fs, Postfix), I'd think there should be a more or less agreed-on set of best practices for filesystem-related tuning. I'm trying to get a roundup of them:
    - data=journal became the default in recent kernels (somewhere around 2.6.30 IIRC), so we should be OK with that.
    - Wietse Venema says atime must be on, but the Postfix documentation recommends noatime while talking about the Incoming Queue. Does that mean Postfix needs atime on just for some queue directories and will benefit from noatime on the others? Can we use noatime if we just don't use ETRN?
    - The filesystem can be mounted nodev,noexec,nosuid - the no* options won't prevent you from setting attributes (Postfix uses the exec attribute), they just won't have any effect (we don't run anything from the spool).
    - The fsync() issue cited by Wietse and/or chattr -S are probably linked to the sync/async options of ext3fs, but I do not understand them well enough. Is mounting the filesystem with the async option equivalent to chattr -R -S on the whole fs? It seems like it would increase performance, but will that pose a risk of "loss of mail after a system crash" or is it really "safe on /var/spool/postfix"?
    - Would you tune anything else on postfix-2.6.x to work better on ext3, or do you leave defaults everywhere?
    - Is there a "best" Linux I/O scheduler for this kind of workload (namely CFQ or deadline?) or is that something that will vary too much based on hardware configuration?
    - Would you tune anything else in the filesystem or in the kernel?
    - Anything else?
    References: Postfix Performance here on SF; Postfix documentation about the Incoming Queue; Wietse Venema in "Best file system" on [email protected] here; "Postfix and ext3" on [email protected] here and there.
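
    Not an agreed-on best practice, just a concrete starting point pulled together from the options discussed above (the device name and the exact option mix are assumptions to adapt, not a recommendation):

        # /etc/fstab entry for a dedicated spool partition
        /dev/vg0/pf-spool  /var/spool/postfix  ext3  defaults,noatime,nodev,noexec,nosuid  0  2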

    Read the article

  • Cannot validate XML doc against an XSD schema (Cannot find the declaration of element 'replyMessage')

    - by Daziplqa
    Hi Guyz, I am using the following code to validate an an XML file against a XSD schema package com.forat.xsd; import java.io.IOException; import java.net.URL; import javax.xml.XMLConstants; import javax.xml.transform.Source; import javax.xml.transform.stream.StreamSource; import javax.xml.validation.Schema; import javax.xml.validation.SchemaFactory; import javax.xml.validation.Validator; import org.xml.sax.ErrorHandler; import org.xml.sax.SAXException; import org.xml.sax.SAXParseException; public class XSDValidate { public void validate(String xmlFile, String xsd_url) { try { SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI); Schema schema = factory.newSchema(new URL(xsd_url)); Validator validator = schema.newValidator(); ValidationHandler handler = new ValidationHandler(); validator.setErrorHandler(handler); validator.validate(getSource(xmlFile)); if (handler.errorsFound == true) { System.err.println("Validation Error : "+ handler.exception.getMessage()); }else { System.out.println("DONE"); } } catch (SAXException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } private Source getSource(String resource) { return new StreamSource(XSDValidate.class.getClassLoader().getResourceAsStream(resource)); } private class ValidationHandler implements ErrorHandler { private boolean errorsFound = false; private SAXParseException exception; public void error(SAXParseException exception) throws SAXException { this.errorsFound = true; this.exception = exception; } public void fatalError(SAXParseException exception) throws SAXException { this.errorsFound = true; this.exception = exception; } public void warning(SAXParseException exception) throws SAXException { } } /* * Test */ public static void main(String[] args) { new XSDValidate().validate("com/forat/xsd/reply.xml", "https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor/CyberSourceTransaction_1.53.xsd"); // return error } } As appears, It is a standard code that try to validate the following XML file: <?xml version="1.0" encoding="UTF-8"?> <replyMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <merchantReferenceCode>XXXXXXXXXXXXX</merchantReferenceCode> <requestID>XXXXXXXXXXXXX</requestID> <decision>XXXXXXXXXXXXX</decision> <reasonCode>XXXXXXXXXXXXX</reasonCode> <requestToken>XXXXXXXXXXXXX </requestToken> <purchaseTotals> <currency>XXXXXXXXXXXXX</currency> </purchaseTotals> <ccAuthReply> <reasonCode>XXXXXXXXXXXXX</reasonCode> <amount>XXXXXXXXXXXXX</amount> <authorizationCode>XXXXXXXXXXXXX</authorizationCode> <avsCode>XXXXXXXXXXXXX</avsCode> <avsCodeRaw>XXXXXXXXXXXXX</avsCodeRaw> <authorizedDateTime>XXXXXXXXXXXXX</authorizedDateTime> <processorResponse>0XXXXXXXXXXXXX</processorResponse> <authRecord>XXXXXXXXXXXXX </authRecord> </ccAuthReply> </replyMessage> Against the following XSD : https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor/CyberSourceTransaction_1.53.xsd The error is : Validation Error : cvc-elt.1: Cannot find the declaration of element 'replyMessage'. Could you please help me!
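
    The usual cause of "Cannot find the declaration of element" is that the document's root element is not in the schema's targetNamespace; a hedged sketch of the fix is to declare that namespace as the default namespace on the root element (the exact URN must be copied from the targetNamespace attribute at the top of the .xsd file, the value below is only illustrative):

        <replyMessage xmlns="urn:schemas-cybersource-com:transaction-data-1.53"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <!-- ... same child elements as before ... -->
        </replyMessage>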

    Read the article

  • Installing gitlab on Debian 6.0.5

    - by helmus
    I am following the directions below in an attempt to install GitLab on Debian 6.0.5: https://github.com/gitlabhq/gitlabhq/blob/stable/doc/installation.md. I am getting an error when I run the following command:
        sudo -u gitlab bundle exec rake gitlab:app:setup RAILS_ENV=production
        WARNING: #<ArgumentError: Illformed requirement ["#<Syck::DefaultKey:0x00000004b52198> 1.1.4"]>
        # -*- encoding: utf-8 -*-
        Gem::Specification.new do |s|
          s.name = %q{carrierwave}
          s.version = "0.6.2"
          s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
          s.authors = ["Jonas Nicklas"]
          ....more error....
            s.add_dependency(%q<mini_magick>, [">= 0"])
            s.add_dependency(%q<rmagick>, [">= 0"])
          end
        end
        WARNING: Invalid .gemspec format in '/usr/local/lib/ruby/gems/1.9.1/specifications/carrierwave-0.6.2.gemspec'
        Could not locate Gemfile
    Some pointers to what could cause this would be much appreciated. I have only a little experience with RoR, and it seems to be related to that.
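
    The trailing "Could not locate Gemfile" usually just means the rake task was not run from the application directory; a sketch, assuming the paths used by the linked installation guide:

        cd /home/gitlab/gitlab
        sudo -u gitlab -H bundle exec rake gitlab:app:setup RAILS_ENV=production

    The Syck::DefaultKey warnings are a separate RubyGems quirk; regenerating the offending gemspec (for example with gem pristine carrierwave) is one often-suggested cleanup, though that is an assumption to verify rather than a confirmed fix.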

    Read the article

  • 1000 Hz Linux kernel necessary if I have tickless and high-resolution timers?

    - by Bob
    I am trying to improve performance on my server. I have a few processes that need low jitter (less than 10 ms variance). I have a load average of 4 maximum on an i7-920 (4 physical cores, 8 with HT). There are about 10 processes ranging from 40% to 90% of a core in user mode. System usage is 3% total. Total CPU usage is 80% max. Will changing the kernel from 100 Hz to 1000 Hz improve the jitter if tickless and high-resolution timers are already set? This page seems to indicate it still does something: https://lkml.org/lkml/2009/4/28/401. How about changing from voluntary preemption (PREEMPT_VOLUNTARY) to preemptible (PREEMPT)?
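
    Before rebuilding anything, it's worth confirming how the running kernel was actually configured; a quick sketch (the config file name varies by distribution):

        grep -E 'CONFIG_HZ=|CONFIG_NO_HZ|CONFIG_HIGH_RES_TIMERS|CONFIG_PREEMPT' /boot/config-$(uname -r)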

    Read the article

  • Monitor reset itself and now I can't set the resolution/settings back to how they were before

    - by verve
    I've had my LG 24" widescreen monitor since 2009, and 2 weeks ago I noticed the monitor had turned itself off (it had never done this before), so I switched it back on to find all the settings like gamma, resolution, etc. changed; it looked like it had been reset. Everyone in the house swears they never unplugged it and plugged it back in. When I opened a webpage, the fonts and zoom on the pages were different, and my desktop was strange too; the fonts of the icons were different, etc. The screen seems blurry, and when I watch movies the faces look distorted, so I thought I would first try to figure out the resolution it used to be. But when I go under "Adjust screen resolution", none of the options work and there is no recommended resolution marked; all the options stretch out the screen and look terrible, so right now I have it set to the least distorted one. Then, since the resolution wasn't working, I set the other manual settings (done by physical buttons on the monitor) back to how they used to be (luckily, I had written these down). The monitor looks better, but the resolution makes it a strain to use. I thought maybe some Windows update caused this crap, so I tried a System Restore: it didn't work. What went wrong? A few questions: (1) What was the likely cause of the monitor shutting itself down and losing the settings I have been using since the day I bought it? (2) Why have the fonts changed everywhere, unless this is a HDD/video card problem? (3) How do I find the perfect resolution it used to be? The monitor wants me to set it to 1920 x 1080, but that isn't one of the options, and I don't remember what resolution I used before. I use the 16:9 setting while I try the available resolution options, but nothing looks good! How do I find what it used to be? Manual available in PDF under Support: http://www.lg.com/ca_en/computer-products/monitors/LG-lcd-monitor-W2442PA-BF.jsp Win 7. IE 9.

    Read the article

  • APC uptime 0 because of FastCGI

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated at Oct 31, 2012 03:33 AM, CentOS 6.3 (Final) x86_64). I have Apache (CGI/FastCGI) installed with nginx as a reverse proxy, and everything is working just fine. I installed APC for caching, but the issue is that the uptime is always 0: it restarts every 15 seconds or so. I checked everywhere and can't find a solution. The server has graceful restart enabled, but only every 6 hours, which shouldn't influence the APC uptime. Searching Google, I found that this could be related to Apache running PHP with mod_fcgid instead of FastCGI. Plesk/Apache is using this config file: /usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php, whose content is:
        <IfModule mod_fcgid.c>
        <Files ~ (\.php)>
        SetHandler fcgid-script
        FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$
        Options +ExecCGI
        allow from all
        </Files>
    Is the issue here or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I forgot to mention that even though the uptime is below one minute, APC is doing a pretty good job of caching (92% of requests are hits).
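
    If mod_fcgid turns out to be the culprit, the usual explanation is that each PHP process carries its own APC cache and dies after a handful of requests or a short idle period; a sketch of the directives people typically raise to keep the processes (and their cache) alive longer, with illustrative values rather than tuned ones (the PHP-side PHP_FCGI_MAX_REQUESTS limit has to be at least as generous, or PHP will still exit early):

        <IfModule mod_fcgid.c>
            FcgidMaxRequestsPerProcess 5000
            FcgidIdleTimeout 3600
            FcgidProcessLifeTime 7200
        </IfModule>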

    Read the article

  • What tangible security is gained by blocking all but a few outgoing ports in a firewall?

    - by Frankie Dintino
    Our current hardware firewall allows blocking of incoming and outgoing ports. We have two possibilities: (1) block certain troublesome ports (unsecured SMTP, BitTorrent, etc.), or (2) block all but a few approved ports (HTTP, HTTPS, SSH, IMAP-SSL, etc.). I see several downsides with option 2. Occasionally web servers are hosted on non-standard ports, and we would have to deal with the resulting issues. Also, there is nothing preventing a malicious or unwanted service from being hosted on port 80, for instance. What are the upsides?

    Read the article

  • Chrome - SSL Security issue on Windows platforms?

    - by al nik
    Fortify.net is a service that displays which encryption cipher your browser is currently using in an https connection. If I browse this site with Chrome 4.1.249.1042 on WinXP SP3, the cipher used is "RC4 cipher, 128-bit key". This encryption is weak, and it's the one used by old browsers like IE6. Chrome works fine on Fedora 9, where it uses "AES cipher, 256-bit key" as more modern browsers do (e.g. Firefox). I consider this a security issue, and I'm considering switching back to Firefox on Windows. Do you know if it's possible to change the default cipher in Chrome?

    Read the article
