Search Results

Search found 10107 results on 405 pages for 'remote backups'.


  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes to a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also be writable, for when new members register or post topics etc. The main requirement is speed, but uptime is also important (if a server goes down, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with:

    1) [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can fetch it from the other server over remote MySQL and load it. This idea sounded okay, until I realised that 50% of the time users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.

    2) [MySQL] Dual-master replication. This would mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast this is.

    3) [MySQL] Use standard replication, distribute read-only queries across both nodes, and send read/write queries to the master. This sounds like a good option, but again, I'm not sure about speed. I also don't know what would happen if the master server went down.

    If you have any other suggestions, please post them :)
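    For option 3, a minimal sketch of classic asynchronous MySQL replication set up from the command line (the hostnames, the repl account and its password are placeholders, and the binlog file/position must come from your own SHOW MASTER STATUS output):

        # my.cnf on the master: server-id=1, log-bin=mysql-bin
        # my.cnf on the replica: server-id=2
        mysql -u root -p -h master.example-host.com -e \
          "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret'; SHOW MASTER STATUS;"

        mysql -u root -p -h replica.example-host.com -e \
          "CHANGE MASTER TO MASTER_HOST='master.example-host.com', MASTER_USER='repl',
           MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
           START SLAVE;"

    The application then opens two connections (writes to the master, reads to either node); if the master dies, reads keep working and the replica can be promoted manually.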

    Read the article

  • SSH with public/private key to iMac fails.

    - by bennedich
    I'm trying to connect to my iMac (server) from my MacBook (client) on my LAN. Both run Mac OS X 10.6.4, and the server is a fresh, clean install of the OS. When I just activate Remote Login in System Preferences everything works fine. But when I set up ssh to only accept public/private key authentication, I get one of the following error messages in the server log, depending on whether I use an RSA passphrase:

    With passphrase (case 1): PAM: user account has expired for <myServerUserName> from 192.168.X.X via 192.168.X.Y
    Without passphrase (case 2): Failed publickey for <myServerUserName> from 192.168.X.X port AAAAA ssh2

    This is my setup procedure:
    1. Create a private and public key on the client with ssh-keygen -t rsa. In case 1 I also set a passphrase.
    2. Move id_rsa.pub to the server path /Users/<myServerUserName>/.ssh/
    3. In this folder I execute cat id_rsa.pub > authorized_keys
    4. Making sure Remote Login isn't active, I now execute sudo /usr/sbin/sshd -d on the server.
    5. Back on the client I type ssh -v -v -v <myServerUserName>@192.168.X.Y and get prompted to accept an RSA key fingerprint. This is NOT the same fingerprint as the one from when I created the private/public key (should it be?). I accept.

    Depending on the case:
    CASE 1: The client is prompted for the password, and the response is permission denied even though the correct password is given. On the server I see the error message stated above for case 1: PAM: user account has expired...
    CASE 2: The client gets the message Connection closed by 192.168.X.Y. On the server I see the error message stated above for case 2: Failed publickey...

    What could possibly cause this?
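    A minimal sketch of the same key setup with the ownership and permissions sshd insists on before it will trust authorized_keys (the user name and key path are the placeholders from above):

        # on the server (iMac), logged in as <myServerUserName>
        mkdir -p ~/.ssh
        cat ~/id_rsa.pub >> ~/.ssh/authorized_keys   # append rather than overwrite
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        chown -R <myServerUserName> ~/.ssh

    As for the fingerprint question: the prompt at first connect shows the server's host key (from /etc/ssh_host_rsa_key.pub), not your user key, so it is expected to differ from the ssh-keygen output.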

    Read the article

  • Troubleshooting DTCPing Errors

    - by JimmyP
    So I am running DTCPing between two machines on our network and am getting the following error:

        ++++++++++++++++++++++++++++++++++++++++++++++
        DTCping 1.9 Report for WEB2
        ++++++++++++++++++++++++++++++++++++++++++++++
        RPC server is ready
        ++++++++++++Validating Remote Computer Name++++++++++++
        03-03, 13:39:45.099-->Start DTC connection test
        Name Resolution: internal-->10.20.3.236-->internal.something
        03-03, 13:39:45.114-->Start RPC test (WEB2-->internal)
        Problem:fail to invoke remote RPC method
        Error(0x6BA) at dtcping.cpp @303
        -->RPC pinging exception
        -->1722(The RPC server is unavailable.)
        RPC test failed

    I have also run RPC ping, where I get what I believe is the same error:

        C:\Program Files\Windows Resource Kits\Tools>rpcping -s internal
        Exception 1722 (0x000006BA)
        Number of records is: 4
        ProcessID is 5876  System Time is: 3/3/2011 2:44:12:822  Generating component is 8  Status is 1722   Detection location is 323  Flags is 0  NumberOfParameters is 0
        ProcessID is 5876  System Time is: 3/3/2011 2:44:12:822  Generating component is 8  Status is 1237   Detection location is 313  Flags is 0  NumberOfParameters is 0
        ProcessID is 5876  System Time is: 3/3/2011 2:44:12:822  Generating component is 8  Status is 10060  Detection location is 311  Flags is 0  NumberOfParameters is 3  Long val: 135  Pointer val: 0  Pointer val: 0
        ProcessID is 5876  System Time is: 3/3/2011 2:44:12:822  Generating component is 8  Status is 10060  Detection location is 318  Flags is 0  NumberOfParameters is 0

    I'm pretty sure that exception number 1722 is the key, but I can't find any info about it. There may be a firewall with ports that need opening between the machines, which I am checking with our sysadmins now, but I can do a regular ping between the machines. Other than that, I am reading a lot of articles talking about OS services and components I know nothing about and am having trouble finding any info on. Can anyone shed any light on this? FYI, the machine is running Windows Server 2003 R2 SP2.
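    Error 1722 is the generic "RPC server is unavailable" code, and the Status 10060 records above are plain TCP connect timeouts; a quick hedged check is whether the RPC endpoint mapper port is reachable at all from WEB2 (portqry ships with the Windows Support Tools, and telnet works in a pinch):

        rem run from WEB2 against the "internal" host
        portqry -n internal -e 135
        telnet internal 135

    MSDTC first contacts port 135 and then negotiates a dynamic RPC port above 1024, so any firewall between the boxes needs to allow that dynamic range (or a restricted range configured in Component Services) as well as 135 itself.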

    Read the article

  • Noob with git repository on Windows Storage Server 2008?

    - by HibbyHoo
    I have a Western Digital Sentinel at home running Windows Storage Server 2008 R2 Essentials. I have several git repositories on it for my own personal projects, and I have no problem pushing and pulling over my local network. I want to be able to access those repos remotely from anywhere. I am able to log in and remotely access folders and files on it, but I cannot clone repos using the same address. It hangs for a REALLY long time before finally failing with an error:

        git.exe clone --progress -v "https://myIpAddressHere/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git" "D:\repo"
        Cloning into 'D:\repo'...
        error: Failed connect to myIpAddress:443; No error
        while accessing https://myIpAddress/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git/info/refs
        fatal: HTTP request failed
        git did not exit cleanly (exit code 128)

    I'm not too privy to networking or web development, and I have only a rudimentary understanding of how to use git (with TortoiseGit). I'm having a hard time finding search results for this specific problem, and a hard time interpreting generic tutorials for the general scope of this problem. TortoiseGit version: 1.7.13.0. git version: 1.7.10.msysgit.1.
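    The Remote Web Access URL (/Remote/fs/files.aspx?...) is the box's file-browser page rather than a git transport, so there is nothing for git to clone at that address. A hedged alternative people use with these devices is to make the share itself reachable (for example over a VPN back to the home network) and let git's plain file transport do the work:

        rem assumption: the device's share is reachable remotely, e.g. through a VPN
        git clone "//mydevicename/myreposfolder/myrepo.git" "D:\repo"

    Otherwise, running an actual git endpoint on (or in front of) the box, such as an SSH daemon or a smart-HTTP frontend, is what produces a URL that git can clone over the internet.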

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as myself, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote: nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Files that belong to root are just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment onto a freshly-installed system. The two systems are actually two VMs under a single host, so it's not a big concern for me to copy root-owned files between them.

    EDIT 2: If I only want to copy my configured /root environment onto the freshly-installed system, I can use tar:

        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make that happen?

    EDIT 3: To formally formulate the question: how do I rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions:
    - The root account is locked on both systems, i.e. there are no root passwords. The only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
    - I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
    - The normal unprivileged user has entered their ssh passphrase into the ssh agent.

    Thanks
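    A minimal sketch of the usual workaround (the user name and paths are the examples from above): allow only the rsync binary to run as root without a password on the receiving side, then tell the local rsync to invoke the remote one through sudo.

        # on the remote system, added with visudo (e.g. /etc/sudoers.d/rsync)
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # from the local system; the local sudo reads /root, the remote one writes it
        sudo rsync -avz -e ssh --rsync-path="sudo rsync" /root/ me@remote:/root/

    This keeps sudo password-protected for everything except rsync, and the ssh leg still authenticates with the agent-held key.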

    Read the article

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it.

    Problem: after installing a low-cost eSATA board, my SQL Server is intermittently unresponsive (seemingly when there is a lot of I/O to the eSATA drive).

    Questions:
    1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place?
    2) Either way: what is a decent, low-cost way to add 1-3 TB of storage to SQL Server for non-critical databases?

    Detail: our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (i.e. not mission critical; data loss is OK). As a band-aid, I threw a $20 eSATA PCI board into the Dell 1950 server and attached an external 2 TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even remote desktop, now experience network "pauses" that they never did before (with both SQL client apps and remote desktop throwing "networking problem" errors). This SQL Server has lots of memory and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years.

    The server is:
    - Dell 1950
    - Win2003 x64
    - 14 GB RAM
    - PERC controller, 2 mirrored HDs internal
    - Dell SAN over gigabit Ethernet, dual homed
    - 2 PCIx slots (1 used by the NIC for the SAN, 1 now in use for the eSATA board)

    Thank you for suggestions!

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server which is an AWS instance that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        ? ss -nt dst 64.156.167.125
        State      Recv-Q Send-Q      Local Address:Port        Peer Address:Port
        ESTAB      0      0          10.185.147.150:41190     64.156.167.125:21
        ESTAB      0      0          10.185.147.150:48871     64.156.167.125:48557

    The FTP server is not in my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work, as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group (not necessarily the same distro or config, though). I understand it may be some issue on the server side, but I want to know what it is about this particular machine that makes the transfer hang, when it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or ideas on what else to look at.
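    For what it's worth, the PASV reply encodes the data port the client is told to connect to: (135 * 256) + 191 = 34751 in the captured session. A quick hedged check from the affected instance is whether that data port is reachable at all, independent of the ftp client (the port only exists while that particular session is waiting, so use the value from a live PASV reply):

        nc -vz 64.156.167.125 34751

    If the TCP connection opens (as the ss output suggests) but no bytes ever arrive, the usual client-side suspects are an MTU/MSS mismatch on the instance's interface or a middlebox silently dropping the data stream, rather than the FTP client itself.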

    Read the article

  • Incoming traffic while on public network

    - by zvikico
    I'm developing a web app and I need to be able to receive incoming traffic from 3rd-party services I use. This is a classic webhooks situation: I send a request with a return address and receive the response (via HTTP) some time later at the given address. The simple solution would be to provide my external IP address and forward the incoming traffic from the router to my machine. However, I'm working in a large office and I cannot control the router configuration. I'm looking for a different way to achieve this. I do have servers online. I could have a daemon running on one of those servers which handles the incoming traffic, and a parallel daemon on my machine which keeps an open connection to the remote daemon (preferably over ssh); when inbound traffic is received by the remote daemon, it sends it to the local one, which delivers it to the correct port on my machine as if it had arrived the natural way. Is there any ready-made solution for that? PS. I'm on OS X and my server is Ubuntu. Thanks, zvikico
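    The daemon pair described above is essentially what ssh remote port forwarding already does; a minimal sketch (the port numbers and hostnames are placeholders):

        # on the OS X laptop: anything hitting port 9000 on the Ubuntu server
        # is tunnelled back to the local web app listening on port 8080
        ssh -N -R 9000:localhost:8080 user@server.example.com

    By default sshd binds the forwarded port to the server's loopback only, so either set GatewayPorts yes in the server's sshd_config or have the public-facing web server (nginx/Apache) proxy the webhook URL to localhost:9000.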

    Read the article

  • Gitosis problems

    - by user49884
    I've spent the last 14 days on git and gitosis problems. I always found a way around my problems before, but now I'm stuck. To briefly summarize the situation: I have set up gitosis, created a project, and I can check in and out of it. Then I added another user, giving him access to the project by adding him to gitosis.conf, but he cannot even clone the project. Then I added yet another user for the same project (following the same procedure), and he has access to everything (clone, pull and push). Finally, I added one more user who cannot do anything either. I could live with all of this, because I have access to work on the project.

    Now I have added a new project (or have I?). As far as I can tell, I have done everything exactly the same way as with the first project, but I do not get a repository in the repository folder on my server (when doing "git remote add ..." and push). I have tried following ALL the guides Google gave me on "how to create a new repository gitosis" (I am up to page 7 before not all hits are marked as visited). I have also tried a different path, starting with "git init --bare" on the server and then trying to clone it. That didn't work either. I get the following error no matter what I try:

        ERROR: gitosis.serve.main: Repository read access denied
        fatal: The remote end hung up unexpectedly

    (But it works fine for accessing gitosis-admin and my first project.)

    Then I read about debugging gitosis. I have tried -v, --verbose, and adding LogLevel = DEBUG in gitosis.conf; none of these give me extra information. Project setup in gitosis.conf:

        [group project]
        writable = project
        members = me
        LogLevel = DEBUG

    As far as I can tell, everything is done exactly the same way as when I set up my first project. I'm really stuck; how do I proceed now?
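    A hedged sketch of what a working second project usually looks like in gitosis: the name on the writable line, the repository name in the push URL, and the key file in the admin repo's keydir/ all have to match exactly ("newproject" and "yourserver" below are examples; "project" and "me" are the names used above).

        # gitosis.conf, committed and pushed through the gitosis-admin repository
        [group project]
        writable = project
        members = me

        [group newproject]
        writable = newproject
        members = me

        # on the workstation, inside the new project's working tree
        git remote add origin gitosis@yourserver:newproject.git
        git push origin master

    The "Repository read access denied" error is typically one of these mismatches (member "me" must correspond to keydir/me.pub, and the URL's newproject.git must match the writable line), or gitosis.conf changes edited on the server instead of pushed through gitosis-admin; gitosis only creates the bare repository on the first push to a name it finds in a writable line.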

    Read the article

  • Setting up a server that routes local traffic through vpn, while still being able to access internet directly

    - by Kazuo
    The goal is to set up a local server that routes local traffic through an uncontrolled remote VPN service while still being able to access the internet directly (not tunneled via the VPN) and provide services through that direct connection. It is supposed to look like this: http://i.stack.imgur.com/74dGC.png Note: there is another router with a modem between the local server and the internet. What is the easiest (best?) way to get this network setup working? I'm planning to set up the connection between the local router and the local server with simple IP forwarding. The problem now is that all the server's traffic is routed through the VPN tunnel as soon as I connect the server's OpenVPN client to the remote service, so there is no direct internet connection available. My first idea was to set up a virtual machine (an LXC container or something) and run the VPN client and local networking stuff in the VM, so that the VM receives all the incoming traffic from the local router and tunnels it through the VPN. This, as far as I understand, should not affect the physical server's network connection and should allow it to provide services to the internet. Before I start trying to set this up (I don't have much experience in networking), is there any easier or better way to do this? I would be thankful for every suggestion. Edit: Let's say the interface connected to the internet is eth0 and the interface connected to the local router is eth1. Another idea would be to create a virtual interface eth0:0, specify it as OpenVPN's local endpoint, and then force any traffic coming from eth1 through eth0:0. I'm not sure how I would force the traffic through eth0:0, though (possibly by adding routes).
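    The standard trick for "this traffic goes through the tunnel, everything else stays direct" is policy-based routing with iproute2 rather than a VM or an alias interface; a hedged sketch, assuming the tunnel comes up as tun0 and eth1 faces the local router:

        # keep openvpn from replacing the default route (e.g. route-nopull in the client config),
        # then give LAN-originated traffic its own routing table
        echo "200 vpn" >> /etc/iproute2/rt_tables
        ip rule add iif eth1 lookup vpn
        ip route add default dev tun0 table vpn
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    The server's own sockets keep using the main table (and therefore eth0), so its services stay reachable directly, while anything forwarded in from eth1 is looked up in the vpn table and leaves through the tunnel.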

    Read the article

  • Can't login to Windows server 2008 (as any user, not even locally, not in safe mode but I have right credentials)

    - by Saix
    Out of nowhere I can't log in to my Windows Server 2008 machine. All the services, like the FTP server and the web server (which I'm actually not using; just remote desktop and FTP), are running. Whatever credentials I try (even, and especially, administrator), it always says "Unknown username or bad password". I have already tried a hard power cycle and safe mode, without luck. I also tried typing the login name as SERVER NAME\user or Workgroup\user (in every case-sensitivity variation), and it still says I have the wrong login. Usually we use remote desktop to access the machine, but local access over KVM doesn't work either. Now I'm locked out of any control or any way to do anything: there's just the logon screen, preceded by the ctrl+alt+del prompt. Without being able to log in I can't actually try to fix anything, and I can't find much more on the internet except the SERVER NAME\user suggestion. A reinstall would be the last resort, but I can't leave things this way for much longer; this server is vital. If it helps: I think automatic Windows updates are turned off, there have been no updates or newly installed software for the last couple of years, and only a few soft restarts, none of them recently. It happened during runtime while all other services were still up and running, so this can't just be some Windows screw-up during boot. What could possibly have changed? What are my options now?

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never
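    Regarding the cacerts step described above, a minimal sketch of importing the self-signed certificate into the cacerts of the exact JRE that runs the client (the alias is arbitrary and the default store password is "changeit"):

        keytool -import -trustcacerts -alias ldap.natraj.com \
            -file /etc/ldap/ssl/server.pem \
            -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit

    One other thing worth checking, since the behaviour differs between 1.6.0_14 and 1.6.0_17: the slapd.conf above pins a single AES-256 suite via TLSCipherSuite, and a stock JRE only offers 256-bit AES cipher suites once the unlimited-strength JCE policy files are installed, so a more permissive TLSCipherSuite (or the JCE policy update) may be needed before the handshake can complete.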

    Read the article

  • Maven does not resolve a local Grails plug-in

    - by Drew
    My goal is to take a Grails web application and build it into a Web ARchive (WAR file) using Maven, and the key is that it must populate the "plugins" folder without live access to the internet. An "out of the box" Grails webapp will already have the plugins folder populated with JAR files, but the maven build script should take care of populating it, just like it does for any traditional WAR projects (such as WEB-INF/lib/ if it's empty) This is an error when executing mvn grails:run-app with Grails 1.1 using Maven 2.0.10 and org.grails:grails-maven-plugin:1.0. (This "hibernate-1.1" plugin is needed to do GORM.) [INFO] [grails:run-app] Running pre-compiled script Environment set to development Plugin [hibernate-1.1] not installed, resolving.. Reading remote plugin list ... Error reading remote plugin list [svn.codehaus.org], building locally... Unable to list plugins, please check you have a valid internet connection: svn.codehaus.org Reading remote plugin list ... Error reading remote plugin list [plugins.grails.org], building locally... Unable to list plugins, please check you have a valid internet connection: plugins.grails.org Plugin 'hibernate' was not found in repository. If it is not stored in a configured repository you will need to install it manually. Type 'grails list-plugins' to find out what plugins are available. The build machine does not have access to the internet and must use an internal/enterprise repository, so this error is just saying that maven can't find the required artifact anywhere. That dependency is already included with the stock Grails software that's installed locally, so I just need to figure out how to get my POM file to unpackage that ZIP file into my webapp's "plugins" folder. I've tried installing the plugin manually to my local repository and making it an explicit dependency in POM.xml, but it's still not being recognized. Maybe you can't pull down grails plugins like you would a standard maven reference? mvn install:install-file -DgroupId=org.grails -DartifactId=grails-hibernate -Dversion=1.1 -Dpackaging=zip -Dfile=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip I can manually setup the Grails webapp from the command-line, which creates that local ./plugins folder properly. This is a step in the right direction, so maybe the question is: how can I incorporate this goal into my POM? mvn grails:install-plugin -DpluginUrl=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip Here is a copy of my POM.xml file, which was generated using an archetype. 
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.samples</groupId> <artifactId>sample-grails</artifactId> <packaging>war</packaging> <name>Sample Grails webapp</name> <properties> <sourceComplianceLevel>1.5</sourceComplianceLevel> </properties> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.grails</groupId> <artifactId>grails-crud</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>org.grails</groupId> <artifactId>grails-gorm</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>opensymphony</groupId> <artifactId>oscache</artifactId> <version>2.4</version> <exclusions> <exclusion> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </exclusion> <exclusion> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> </exclusion> <exclusion> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>1.8.0.7</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.5.6</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.2</version> </dependency> <!-- <dependency> <groupId>org.grails</groupId> <artifactId>grails-hibernate</artifactId> <version>1.1</version> <type>zip</type> </dependency> --> </dependencies> <build> <pluginManagement /> <plugins> <plugin> <groupId>org.grails</groupId> <artifactId>grails-maven-plugin</artifactId> <version>1.0</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>init</goal> <goal>maven-clean</goal> <goal>validate</goal> <goal>config-directories</goal> <goal>maven-compile</goal> <goal>maven-test</goal> <goal>maven-war</goal> <goal>maven-functional-test</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>${sourceComplianceLevel}</source> <target>${sourceComplianceLevel}</target> </configuration> </plugin> </plugins> </build> </project>

    Read the article

  • s3fs Input/output error

    - by shadow_of__soul
    I'm trying to set up a backup system with s3fs and the Amazon S3 service. I followed these two guides:

    http://qugstart.com/blog/linux/how-to-mount-an-amazon-s3-bucket-as-virtual-drive-on-centos-5-2/
    http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/

    Anyway, tailing /var/log/messages I get:

        Aug 28 13:37:46 server s3fs:###response=403

    I already tried creating the authentication file at /etc/passwd-s3fs with the access and secret key, and passing them on the command line. I checked the credentials several times, and they work when used with S3Fox. I have also set the machine's time (with the date command) to match the Amazon S3 servers (I got the S3 server's time by uploading a file with the file manager). It's not only rsync that doesn't work; commands like ls or cp in /mnt/s3 don't work either. Any help on how I can solve/debug this? Regards, Shadow.
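    A minimal sketch of the credentials file and mount, with the bucket name as a placeholder (s3fs refuses credentials files that are group/world readable, so the chmod matters):

        echo "ACCESSKEYID:SECRETACCESSKEY" > /etc/passwd-s3fs
        chmod 600 /etc/passwd-s3fs
        s3fs mybucket /mnt/s3 -o allow_other

    If the 403 persists with known-good keys, the other usual culprit is clock skew, since S3 rejects signed requests whose timestamp is too far off; syncing with ntpdate pool.ntp.org is more reliable than setting the time by hand.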

    Read the article

  • Weird FTP issue between Unity Express and Windows Server 2008 FTP

    - by user33975
    My VoIP specialist complained about not being able to run backups of the Unity Express onto our FTP server (Microsoft FTP on Server 2008). I did a packet trace and observed some weird behavior that I think is even kind of funny in a way. The Unity FTP client is able to initiate both control and data connections with no problem, and can even LIST directories and CWD into them. But as soon as the client sends a SYST command (not sure why it cares), the server replies with "Windows_NT" and, lo and behold, the client immediately sends a QUIT command! I've seen this happen consistently in my packet captures. I tried pointing the Unity FTP client at a FileZilla FTP server, and voilà, it worked fine! Has anyone else observed this? I thought it was kind of funny, given that Cisco seems to not like Microsoft that much...

    Read the article

  • Windows Cannot Boot - 0xc0000225

    - by B__
    I've had Windows 7 installed for months on my Thinkpad T60 laptop and today out of the blue when I tried to boot, it started the Windows loading screen and immediately I got this error: Status: 0xc0000225 Info: The boot selection failed because a required device is inaccessible. Through some research, I see people get this problem when a repartitioning goes wrong or there's a problem with their dual boot. I'm not dual booting my machine and I haven't messed with my partitions since I installed the OS. This error is truly out of the blue. I've run memory diagnostics from a Windows boot disc and hard drive diagnostics from my BIOS and neither found a problem. I don't have any backups to restore from so I'm hoping to find a fix for this. Anyone seen this kind of thing before? Thanks
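    Before reinstalling, a commonly suggested sequence is to boot the Windows 7 install disc, choose "Repair your computer", open a command prompt, and rebuild the boot configuration data (these are standard Windows RE commands, not anything specific to this machine):

        bootrec /scanos
        bootrec /rebuildbcd
        bootrec /fixmbr
        bootrec /fixboot

    If the BCD store or boot sector is what went missing, 0xc0000225 ("a required device is inaccessible") often clears up after this without touching the partitions themselves.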

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL servers in the development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL server. Since these log files are never used for anything, I would like advice on how best to minimize their size, or even disable them if possible. I'm not completely sure why the log files grow so large even under simple logging (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. Autogrowth for the log files is set to 10%, restricted to 2 GB, so I guess that is why the 70% checkpoint isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
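    For what it's worth, in the simple recovery model the log is truncated at checkpoints regardless of backups; the file itself just never shrinks on its own, so once it has ballooned a one-off shrink is the usual remedy. A hedged sketch (the instance, database and logical log-file name are examples; look the latter up with SELECT name FROM sys.database_files in that database):

        sqlcmd -S .\OFFICESERVERS -E -Q "USE SharePoint_Config; CHECKPOINT; DBCC SHRINKFILE (N'SharePoint_Config_log', 256);"

    Shrinking to a sane fixed size (here 256 MB) and leaving it there is preferable to repeatedly shrinking toward zero, which just forces regrowth and VLF fragmentation; the log itself cannot be disabled outright.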

    Read the article

  • HP D2D 4312 Bacula configuration

    - by krisdigitx
    I have configured 5 libraries on the HP D2D system, but discovery on the Bacula server shows only the last library and not all of them. Why?

        [root@server bacula]# iscsiadm --mode discovery --type sendtargets --portal 10.66.59.114
        10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dca5e.library12.drive1
        10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dcaf2.library12.robotics

    I can query it fine using...

        [root@server bacula]# mtx -f /dev/sg2 inquiry
        Product Type: Tape Drive
        Vendor ID: 'HP      '
        Product ID: 'Ultrium 5-SCSI  '
        Revision: 'ED51'
        Attached Changer API: No

        [root@bray bacula]# mtx -f /dev/sg3 inquiry
        Product Type: Medium Changer
        Vendor ID: 'HP      '
        Product ID: 'MSL G3 Series   '
        Revision: 'EL41'
        Attached Changer API: No

        [root@server bacula]# mtx -f /dev/sg3 status
        Storage Changer /dev/sg3:1 Drives, 97 Slots ( 1 Import/Export )
        Data Transfer Element 0:Empty
        Storage Element 1:Full :VolumeTag=50507F82
        Storage Element 2:Full :VolumeTag=50507F83
        Storage Element 3:Full :VolumeTag=50507F84
        Storage Element 4:Full :VolumeTag=50507F85
        Storage Element 5:Full :VolumeTag=50507F86
        Storage Element 6:Full :VolumeTag=50507F87
        Storage Element 7:Full :VolumeTag=50507F88

    Does anyone have any good documentation for implementing Bacula with an HP D2D tape drive for server backups, and how to allocate libraries?
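    Two hedged notes. First, sendtargets only lists what the D2D presents to this initiator, so if only library12's drive and robotics appear, it is worth checking the initiator-to-library mappings on the D2D itself and then logging in to each target explicitly (iscsiadm -m node -T <iqn> -l). Second, a sketch of a matching bacula-sd.conf entry for one library; the device paths, names and mtx-changer location are assumptions to adapt:

        Autochanger {
          Name = "D2D-Library12"
          Device = "Library12-Drive1"
          Changer Device = /dev/sg3
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
        }
        Device {
          Name = "Library12-Drive1"
          Media Type = LTO-5
          Archive Device = /dev/nst0
          Autochanger = yes
          AutomaticMount = yes
          RemovableMedia = yes
          RandomAccess = no
        }

    Each of the five libraries would then get its own Autochanger/Device pair (and its own Storage resource in bacula-dir.conf) pointing at that library's sg and st devices.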

    Read the article

  • Using .htaccess to serve files from Amazon S3 CloudFront

    - by Adrian A.
    My ideal setup would be to take a current client's site and upload a .htaccess with a regex inside that matches the URI; if it finds a certain file extension, it would use the same path but with an altered domain. I.e., normal paths:

        http://www.domain.com/something/images/someimage.jpeg
        http://www.domain.com/assets/js/jquery.js

    translated by .htaccess would become:

        http://mycdn.other.com/something/images/someimage.jpeg
        http://mycdn.other.com/assets/js/jquery.js

    I googled this for hours in a row, no luck. Again, this is for actually making use of Amazon's CloudFront. S3 is already mounted on the website for backups and file storage using s3fs, but that doesn't solve the issue, since it uses S3 directly rather than CloudFront.
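    A hedged .htaccess sketch of that rewrite, using the example domains above (the User-Agent condition keeps CloudFront's own origin fetches from being bounced back to the CDN):

        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} !Amazon\ CloudFront
        RewriteRule ^(.+\.(jpe?g|png|gif|css|js))$ http://mycdn.other.com/$1 [R=301,L,NC]

    Note that this costs an extra round trip per asset (the browser hits the origin, gets a 301, then hits CloudFront), so rewriting the asset URLs in the HTML to point at mycdn.other.com directly is usually the faster variant once that becomes practical.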

    Read the article

  • SQL Server 2008 Express - "Best" backup solution?

    - by Alexander Nyquist
    Hi! What backup solutions would you recommend when using SQL Server 2008 Express? I'm pretty new to SQL Server, but coming from a MySQL background I thought of setting up replication to another computer and just taking xcopy backups of that server. Unfortunately, replication is not available in the Express edition. The site is heavily accessed, so there must be no delays or downtime. I'm also thinking of doing a backup twice a day or so. What would you recommend? I have multiple computers I can use, but I don't know if that helps me, since I'm using the Express version. Thanks
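    Express lacks SQL Server Agent, but plain BACKUP DATABASE works, so the common approach is a scheduled sqlcmd call; a minimal sketch with the instance, database and paths as placeholders:

        rem backup.cmd
        sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [MySite] TO DISK = N'D:\Backups\MySite.bak' WITH INIT"

        rem schedule it twice a day with Task Scheduler
        schtasks /Create /SC DAILY /ST 06:00 /TN "SqlBackupAM" /TR "C:\Scripts\backup.cmd"
        schtasks /Create /SC DAILY /ST 18:00 /TN "SqlBackupPM" /TR "C:\Scripts\backup.cmd"

    Full backups like this are online operations and don't block readers or writers, so the "no downtime" requirement is fine; copying the .bak files to one of the other machines afterwards covers the off-box copy that replication would otherwise have provided.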

    Read the article

  • HP ML350G6 running hyper-V 2008 r2 resets itself every 2 hours

    - by GT
    The system started resetting itself exactly every 2 hours. These are the messages in the iLO2 log:

        Informational  iLO 2  03/07/2010 20:40  03/07/2010 20:40  1  Server power restored.
        Caution        iLO 2  03/07/2010 20:40  03/07/2010 20:40  1  Server reset.

    It's not an ASR reset (that would show in the log). What I have tried so far:
    - Redundant power supplies: swapped, but no change.
    - Turned off all virtual machines (i.e. running only the hypervisor): still not OK.
    - Booted the HP SmartStart diagnostics disk: all OK, the diagnostics report no errors.
    - Went back to booting the hypervisor, and the problem is back.

    It seems the Hyper-V system disk has got a time-based program (virus) causing the reset; I thought the hypervisor had a small attack surface and should be OK. All virtual machines (SBS2008, Win7 and Win XP) and networked computers are protected with Trend Micro WFBS. I am about to rebuild the disk (I have backups) but wondered if there were any suggestions to try first???

    Read the article

  • Crontab script on Mac OS X Lion does not work anymore

    - by Nopster
    I have a problem with cron tasks. Previously this script worked fine on Mac OS X 10.6 server, but when I initialize it on Lion (client), this script stopped working. Basically, this .bat file calls a jar file (that invokes a loop of mysqldump commands) to backup several databases on several servers, and runs perfectly if launched by the shell. cd /Users/nameoftheuser/Desktop/backupper /usr/bin/java -cp .:Backupper.jar:lib/mail.jar backupper.Main "/Users/nameoftheuser/Desktop/backupper/listasiti.txt" "/Users/nameofthe/Desktop/backupper/config.properties But if the cron launches the same .bat file, the generated database backups are 0 bytes. The cron entry is: 0 0 sh /Users/path/to/file.bat I believe that the problem is that cron doesn't run as root. Or what else?
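    A hedged sketch of the crontab entry with the full five time fields, an explicit PATH (cron's environment will not include /usr/local/mysql/bin, which is a classic cause of 0-byte mysqldump output), and the output captured for debugging; the paths are the placeholders from above:

        PATH=/usr/local/mysql/bin:/usr/bin:/bin:/usr/sbin:/sbin
        0 0 * * * /bin/sh /Users/path/to/file.bat >> /tmp/backupper.log 2>&1

    Cron jobs in a user's crontab run as that same user, not as root, so the privileges are the same as when the script works from an interactive shell; the difference is almost always the environment rather than the account.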

    Read the article

  • RAR Password recovery / hash extraction on Mac OS X

    - by Josh K
    I'm running Mac OS X 10.6.2 and have been handed a couple of old files that need to be extracted. Old backups or finances or bills I believe. They are RAR files, and password protected. Is there a way to extract the hash from these files so I can feed it into John The Ripper or Cain and Abel? Edit I have downloaded cRARk, but unfortunately nothing I have (SimplyRAR, RAR Expander, The Unarchiver) will extract it without a password. Can someone verify that I'm crazy and there is no password on the Mac version?

    Read the article

  • DVD-RW Drive not recognizing a DVD+R disc

    - by unknown (yahoo)
    Hi everyone, I am having trouble burning DVD+R discs. My OS is Vista, and I have used this burner and these same discs in the past. I haven't needed to in months, and now that I've come back to create some backups, my DVD-RW drive doesn't recognize a brand new DVD+R disc. These discs are the same ones I have used in the past (same pack, even). Anyone have any idea what this might be? Maybe a Vista update or something I downloaded in the last few months that could have thrown something off? Thanks in advance.

    Read the article
