Search Results

Search found 10359 results on 415 pages for 'apache bench'.

Page 290/415 | < Previous Page | 286 287 288 289 290 291 292 293 294 295 296 297  | Next Page >

  • Curl_errno=55, "Failed sending network data."

    - by 4dplane
    Hi all: I have a PHP script that updates a database via an API. This script works on one server but not on another. Both servers have curl enabled and they have PHP 5.2.6 or above. The error happens in the do_put() method; the rest of the script seems to be fine. I have found that curl_errno = 55 means "Failed sending network data", and curl_error reports "select/poll returned error".

        <?php
        //phpinfo();
        require_once "class.DelveAuthUtil.php";

        $access_key = "";
        $secret = "W";
        $org_id = "";
        $media_id = "";
        $new_tag_name = "uvideo";
        $assign_tag_to_media_url = "http://api..com/rest/organizations/$org_id/media/$media_id/properties/tags/$new_tag_name";
        $signed_create_new_tag_url = DAU::authenticate_request("PUT", $assign_tag_to_media_url, $access_key, $secret);

        # Perform the creation of the new custom property
        $put_response = do_put($signed_create_new_tag_url);

        # Execute the POST
        function do_post($url, $params = array()) {
            # Combine parameters
            $param_string = "";
            foreach ($params as $key => $value) {
                $value = urlencode($value);
                $param_string = $param_string . "&$key=$value";
            }
            // Get the curl session object
            $session = curl_init($url);
            // Set the POST options.
            curl_setopt($session, CURLOPT_POST, true);
            curl_setopt($session, CURLOPT_POSTFIELDS, $param_string);
            curl_setopt($session, CURLOPT_HEADER, false);
            curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
            // Do the POST and then close the session
            $response = curl_exec($session);
            curl_close($session);
            return $response;
        }

        # Execute the PUT
        function do_put($url) {
            // Get the curl session object
            $session = curl_init($url);
            print(" <br> session= " . $session . "<br>");
            // Set the PUT options.
            print("opt0= " . curl_setopt($session, CURLOPT_VERBOSE, TRUE) . "<br>");
            print("opt1= " . curl_setopt($session, CURLOPT_PUT, true) . "<br>");
            print("opt2= " . curl_setopt($session, CURLOPT_HEADER, false) . "<br>");
            print("opt3= " . curl_setopt($session, CURLOPT_RETURNTRANSFER, true) . "<br>");
            // Do the PUT and then close the session
            $response = curl_exec($session);
            if (curl_errno($session)) {
                print("curl_errno= " . curl_errno($session) . "<br>");
                print("curl_error= " . curl_error($session) . "<br>");
            }
            curl_close($session); // close the session whether or not an error occurred
            return $response;     // return the response either way
        }

    I have a small lead in the Apache log -- there is an SSL issue that I do not know how to resolve:

        [Thu Mar 25 15:57:58 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!
        [Thu Mar 25 15:57:58 2010] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
        [Thu Mar 25 15:58:59 2010] [notice] caught SIGTERM, shutting down
        [Thu Mar 25 15:58:59 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:58:59 2010] [warn] Init: SSL server IP/port conflict: my1.com:443 (/home/httpd/my.com/conf/kloxo.my1.com:69) vs. elggtest.my1.com:443 (/home/httpd/elggtest.my1.com/conf/kloxo.elggtest.my1.com:71)
        [Thu Mar 25 15:58:59 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!
        [Thu Mar 25 15:58:59 2010] [notice] Digest: generating secret for digest authentication ...
        [Thu Mar 25 15:58:59 2010] [notice] Digest: done
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:59:00 2010] [warn] Init: SSL server IP/port conflict: my1.com:443 (/home/httpd/my1.com/conf/kloxo.my1.com:69) vs. elggtest.my1.com:443 (/home/httpd/elggtest.my1.com/conf/kloxo.elggtest.my1.com:71)
        [Thu Mar 25 15:59:00 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!
        [Thu Mar 25 15:59:00 2010] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations

    Any help would be great! Thanks, 4dplane
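    One thing worth checking, offered as a guess rather than a confirmed diagnosis: CURLOPT_PUT tells libcurl to read the request body from a file handle supplied via CURLOPT_INFILE and CURLOPT_INFILESIZE, and do_put() never provides either, which can leave libcurl failing mid-send with errno 55 on some stacks. A minimal sketch of a body-less PUT that sidesteps the upload machinery entirely, assuming the signed URL carries everything the API needs:

        <?php
        // Sketch: issue an HTTP PUT without CURLOPT_PUT's INFILE requirement.
        // Assumes the signed URL carries all parameters and no request body is needed.
        function do_put2($url) {
            $session = curl_init($url);
            curl_setopt($session, CURLOPT_CUSTOMREQUEST, "PUT"); // PUT verb, no INFILE expected
            curl_setopt($session, CURLOPT_POSTFIELDS, "");       // explicit empty body
            curl_setopt($session, CURLOPT_HEADER, false);
            curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
            $response = curl_exec($session);
            if (curl_errno($session)) {
                print("curl_errno= " . curl_errno($session) . "<br>");
                print("curl_error= " . curl_error($session) . "<br>");
            }
            curl_close($session);
            return $response;
        }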

    Read the article

  • Testing a DAO with the Hibernate GenericDAO pattern and Spring. Headache

    - by black sensei
    Hello good fellas! In my journey of learning Hibernate I came across an article on the Hibernate site. I'm learning Spring too, and I wanted to discover the flexibility of Spring by implementing my own session handling -- that is, I deliberately don't want to use HibernateTemplate (as an experiment). I'm now having a problem, even in the test class. I followed the article on the Hibernate site, especially the section "Implementation with Hibernate", so we have the generic DAO interface:

        public interface GenericDAO<T, ID extends Serializable> {
            T findById(ID id, boolean lock);
            List<T> findAll();
            List<T> findByExample(T exampleInstance);
            T makePersistent(T entity);
            void makeTransient(T entity);
        }

    Its implementation is an abstract class that is the same as the one on the web site; please refer to it from the link I provided, as I'd like to keep this post from getting too long. Now come my DAOs. The MessageDAO interface:

        package com.project.core.dao;

        import com.project.core.model.MessageDetails;
        import java.util.List;

        public interface MessageDAO extends GenericDAO<MessageDetails, Long> {
            // Message class is one of my POJOs
            public List<Message> GetAllByStatus(String status);
        }

    Its implementation is MessageDAOImpl:

        public class MessageDAOImpl extends GenericDAOImpl<Message, Long> implements MessageDAO {

            // SessionContainer is an interface which my HibernateUtils implements
            SessionContainer sessionManager;

            public MessageDAOImpl() {}

            public MessageDAOImpl(HibernateUtils sessionManager) {
                this.sessionManager = sessionManager;
            }

            // ........ plus other methods
        }

    Here is my HibernateUtils:

        public class HibernateUtils implements SessionContainer {

            private final SessionFactory sessionFactory;
            private Session session;

            public HibernateUtils() {
                this.sessionFactory = new AnnotationConfiguration().configure().buildSessionFactory();
            }

            public HibernateUtils(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            // This is the function that returns a session, so I'm free to
            // implement any type of session handling in here.
            public Session requestSession() {
                // if (session != null || session.isOpen()) {
                //     return session;
                // } else {
                session = sessionFactory.openSession();
                // }
                return session;
            }
        }

    So, in my understanding, while using Spring (I'll provide the config), I'd wire sessionFactory into my HibernateUtils and then wire its requestSession() method into the session property of the GenericDAOImpl (the one from the link provided).
    Here is my Spring config, core.xml:

        <bean id="sessionManager" class="com.project.core.dao.hibernate.HibernateUtils">
            <constructor-arg ref="sessionFactory" />
        </bean>

        <bean id="messageDao" class="com.project.core.dao.hibernate.MessageDAOImpl">
            <constructor-arg ref="sessionManager"/>
        </bean>

        <bean id="genericDAOimpl" class="com.project.core.dao.GenericDAO">
            <property name="session" ref="mySession"/>
        </bean>

        <bean id="mySession" factory-bean="com.project.core.dao.SessionContainer" factory-method="requestSession"/>

    Now my test is this:

        public class MessageDAOImplTest extends AbstractDependencyInjectionSpringContextTests {

            HibernateUtils sessionManager = (HibernateUtils) applicationContext.getBean("sessionManager");
            MessageDAO messagedao = (MessageDAO) applicationContext.getBean("messageDao");

            static Message[] message = new Message[] {
                new Message("text", 1, "test for dummies 1", "1234567890", "Pending", new Date()),
                new Message("text", 2, "test for dummies 2", "334455669990", "Delivered", new Date())
            };

            public MessageDAOImplTest() {
            }

            @Override
            protected String[] getConfigLocations() {
                return new String[] { "file:src/main/resources/core.xml" };
            }

            @Test
            public void testMakePersistent() {
                System.out.println("MakePersistent");
                messagedao.makePersistent(message[0]);
                Session session = sessionManager.requestSession();
                session.beginTransaction();
                Message fromdb = (Message) session.load(Message.class, message[0].getMessageId());
                assertEquals(fromdb.getMessageId(), message[0].getMessageId());
                assertEquals(fromdb.getDateSent(), message[0].getDateSent());
                assertEquals(fromdb.getGlobalStatus(), message[0].getGlobalStatus());
                assertEquals(fromdb.getNumberOfPages(), message[0].getNumberOfPages());
            }
        }

    I'm getting this error:

        exception in constructor testMakePersistent (java.lang.NullPointerException at com.project.core.dao.hibernate.MessageDAOImplTest)

    with this stack:

        at com.project.core.dao.hibernate.MessageDAOImplTest.<init>(MessageDAOImplTest.java:28)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at junit.framework.TestSuite.createTest(TestSuite.java:61)
        at junit.framework.TestSuite.addTestMethod(TestSuite.java:283)
        at junit.framework.TestSuite.<init>(TestSuite.java:146)
        at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:481)
        at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1031)
        at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:888)

    How do I actually make this one work? I know this is a lot of stuff, and I thank you for reading it. Please give me a solution -- how would you do this? Thanks
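    The NullPointerException is almost certainly the field initializers: with AbstractDependencyInjectionSpringContextTests, applicationContext is not populated until after the test instance has been constructed, so calling applicationContext.getBean(...) at field-initialization time dereferences null (line 28 is where the constructor runs those initializers). A sketch of the usual fix, assuming the JUnit 3.8-style base class from Spring 2.x -- move the lookups into onSetUp():

        // Sketch: defer bean lookup until the context has been injected.
        // Class and bean names are taken from the question; this assumes the
        // Spring 2.x AbstractDependencyInjectionSpringContextTests base class.
        public class MessageDAOImplTest extends AbstractDependencyInjectionSpringContextTests {

            private HibernateUtils sessionManager;
            private MessageDAO messagedao;

            @Override
            protected String[] getConfigLocations() {
                return new String[] { "file:src/main/resources/core.xml" };
            }

            @Override
            protected void onSetUp() throws Exception {
                super.onSetUp();
                // applicationContext is available by now, so these lookups are safe
                sessionManager = (HibernateUtils) applicationContext.getBean("sessionManager");
                messagedao = (MessageDAO) applicationContext.getBean("messageDao");
            }
        }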

    Read the article

  • When I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers

    - by sangfroid
    Hello all, when I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers, but not all of them. Here is the block of code that's giving me trouble:

        <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core"%>
        <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions"%>
        <c:set var="TEXTVARIABLE">|STRINGOFTEXT|</c:set>
        <c:set var="OTHERTEXTVARIABLE">${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}</c:set>

    And here is the exception:

        javax.servlet.jsp.JspException: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported.

    However, everything works fine if I change the URI for the core declaration to http://java.sun.com/jsp/jstl/core. So here's the really weird part: for some reason, mixing 1.0 and 1.1 taglib declarations only causes an exception on two of my servers -- my staging server and my production server. It causes no problems at all on my local machine or my development server. Why is this? What could possibly be causing this difference in behavior? The servers are extremely similar in setup and configuration. The JSP page is being served up by OpenCms, and I'm using Caucho's Resin web server. I understand that you don't know how my servers or CMS are set up, but really, what I'm looking for is ideas. Any ideas at all would help; this problem has been driving me absolutely batty. Even if you don't know what could be causing the problem, any suggestions for how I could approach it would be extremely helpful. I just don't understand what could cause this difference in behavior between my servers. For reference, here's the full stack trace:
        at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:369)
        at org.opencms.jsp.CmsJspTagInclude.includeTagAction(CmsJspTagInclude.java:241)
        at org.opencms.jsp.CmsJspTagInclude.doEndTag(CmsJspTagInclude.java:472)
        at _jsp._WEB_22dINF._jsp._online._system._modules.com_MYCOMPANY__marketing._templates._MAIN_0PAGE__jsp._jspService(_MAIN_0PAGE__jsp.java:153)
        at com.caucho.jsp.JavaPage.service(JavaPage.java:60)
        at com.caucho.jsp.Page.pageservice(Page.java:579)
        at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:179)
        at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57)
        at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
        at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115)
        at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175)
        at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
        at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485)
        at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350)
        at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194)
        at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169)
        at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193)
        at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423)
        at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173)
        at org.opencms.loader.CmsJspLoader.dispatchJsp(CmsJspLoader.java:1227)
        at org.opencms.loader.CmsJspLoader.load(CmsJspLoader.java:1171)
        at org.opencms.loader.A_CmsXmlDocumentLoader.load(A_CmsXmlDocumentLoader.java:232)
        at org.opencms.loader.CmsXmlContentLoader.load(CmsXmlContentLoader.java:52)
        at org.opencms.loader.CmsResourceManager.loadResource(CmsResourceManager.java:964)
        at org.opencms.main.OpenCmsCore.showResource(OpenCmsCore.java:1498)
        at org.opencms.main.OpenCmsServlet.doGet(OpenCmsServlet.java:152)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:115)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:92)
        at com.caucho.server.dispatch.ServletFilterChain.doFilter(ServletFilterChain.java:106)
        at com.caucho.filters.CmsGzipFilter.doFilter(CmsGzipFilter.java:177)
        at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
        at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57)
        at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
        at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115)
        at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
        at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:277)
        at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:106)
        at com.caucho.server.dispatch.ForwardFilterChain.doFilter(ForwardFilterChain.java:80)
        at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:207)
        at com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:173)
        at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
        at com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:274)
        at com.caucho.server.port.TcpConnection.run(TcpConnection.java:514)
        at com.caucho.util.ThreadPool.runTasks(ThreadPool.java:520)
        at com.caucho.util.ThreadPool.run(ThreadPool.java:442)
        at java.lang.Thread.run(Thread.java:595)
        Caused by: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported.
        at com.caucho.jsp.java.JspNode.error(JspNode.java:1489)
        at com.caucho.jsp.java.JspNode.error(JspNode.java:1480)
        at com.caucho.jsp.java.JavaJspGenerator.validate(JavaJspGenerator.java:466)
        at com.caucho.jsp.JspCompilerInstance.generate(JspCompilerInstance.java:475)
        at com.caucho.jsp.JspCompilerInstance.compile(JspCompilerInstance.java:373)
        at com.caucho.jsp.JspManager.compile(JspManager.java:233)
        at com.caucho.jsp.JspManager.createPage(JspManager.java:177)
        at com.caucho.jsp.JspManager.createPage(JspManager.java:157)
        at com.caucho.jsp.PageManager.getPage(PageManager.java:248)
        at com.caucho.jsp.PageManager.getPage(PageManager.java:166)
        at com.caucho.jsp.QServlet.getSubPage(QServlet.java:292)
        at com.caucho.jsp.QServlet.getPage(QServlet.java:210)
        at com.caucho.server.dispatch.PageFilterChain.compilePage(PageFilterChain.java:206)
        at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:133)
        at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57)
        at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
        at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115)
        at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175)
        at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
        at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485)
        at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350)
        at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194)
        at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169)
        at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193)
        at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423)
        at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173)
        at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:364)
        ... 45 more

    Thanks for the help!
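    The question itself contains the likely fix: use the JSTL 1.1 URIs consistently, so both tag libraries are resolved by the same EL-capable implementation. A minimal sketch of the consistent declarations follows; the remaining mystery (why only two servers object) plausibly comes down to different standard.jar/JSTL versions or Resin versions deployed in each environment, which is worth diffing rather than assuming the servers are identical:

        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
        <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>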

    Read the article

  • Maven Release Plugin with JAXB issues

    - by Wysawyg
    Hiya. We've got a project set up to use the Maven Release Plugin, which includes a phase that unpacks a JAR of XML schemas pulled from Artifactory and a phase that generates XJC classes. We're on Maven 2.2.1. Unfortunately the latter phase is executing before the former, which means that it isn't generating the XJC classes for the schemas. A partial pom.xml looks like:

        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>unpack</id>
                        <!-- <phase>generate-sources</phase> -->
                        <goals>
                            <goal>unpack</goal>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>ourgroupid</groupId>
                                    <artifactId>ourschemas</artifactId>
                                    <version>5.1</version>
                                    <outputDirectory>${project.basedir}/src/main/webapp/xsd</outputDirectory>
                                    <excludes>META-INF/</excludes>
                                    <overWrite>true</overWrite>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>maven-buildnumber-plugin</artifactId>
                <version>0.9.6</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>create</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <doCheck>true</doCheck>
                    <doUpdate>true</doUpdate>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.jvnet.jaxb2.maven2</groupId>
                <artifactId>maven-jaxb2-plugin</artifactId>
                <configuration>
                    <schemaDirectory>${project.basedir}/src/main/webapp/xsd</schemaDirectory>
                    <schemaIncludes>
                        <include>*.xsd</include>
                        <include>*/*.xsd</include>
                    </schemaIncludes>
                    <verbose>true</verbose>
                    <!--
                    <args>
                        <arg>-Djavax.xml.validation.SchemaFactory:http://www.w3.org/2001/XMLSchema=org.apache.xerces.jaxp.validation.XMLSchemaFactory</arg>
                    </args>
                    -->
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>generate</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>

    I've tried googling for it; unfortunately I ended up with thousands of links, none of which were actually relevant, so I'd be very grateful if someone knew how to configure the order of the build steps to ensure the unpack step has fully executed before the XJC generation runs. Thanks
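    A sketch of one way to enforce the ordering, relying on standard Maven lifecycle semantics: the maven-jaxb2-plugin's generate goal binds to the generate-sources phase by default, so binding the unpack execution to an earlier phase such as initialize guarantees the schemas are on disk before XJC looks for them. (The commented-out <phase> element above suggests this was close to being tried already.)

        <execution>
            <id>unpack</id>
            <phase>initialize</phase> <!-- runs before generate-sources, where XJC generates -->
            <goals>
                <goal>unpack</goal>
            </goals>
            <configuration>
                <!-- same artifactItems configuration as shown above -->
            </configuration>
        </execution>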

    Read the article

  • ClassCastException happens when I use maven with tomcat plugin

    - by zjffdu
    Hi all, I am trying to use Maven with the Tomcat plugin to develop a simple web application, but when I invoke the servlet a ClassCastException happens. This is the error message:

        java.lang.ClassCastException: com.snda.dw.moniter.LogQueryServlet cannot be cast to javax.servlet.Servlet

    But I already made com.snda.dw.moniter.LogQueryServlet extend HttpServlet, so it should be castable to javax.servlet.Servlet. The following is my pom.xml:

        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.snda</groupId>
            <artifactId>dw.moniter</artifactId>
            <packaging>war</packaging>
            <version>0.0.1-SNAPSHOT</version>
            <name>dw.moniter Maven Webapp</name>
            <url>http://maven.apache.org</url>
            <dependencies>
                <dependency>
                    <groupId>junit</groupId>
                    <artifactId>junit</artifactId>
                    <version>3.8.1</version>
                    <scope>test</scope>
                </dependency>
                <dependency>
                    <groupId>javax.servlet</groupId>
                    <artifactId>servlet-api</artifactId>
                    <version>2.4</version>
                    <scope>provided</scope>
                </dependency>
                <dependency>
                    <groupId>javax.servlet.jsp</groupId>
                    <artifactId>jsp-api</artifactId>
                    <version>2.0</version>
                    <scope>provided</scope>
                </dependency>
                <dependency>
                    <groupId>com.google.guava</groupId>
                    <artifactId>guava</artifactId>
                    <version>r07</version>
                    <type>jar</type>
                    <scope>compile</scope>
                </dependency>
                <dependency>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-api</artifactId>
                    <version>1.6.1</version>
                    <type>jar</type>
                    <scope>compile</scope>
                </dependency>
                <dependency>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-simple</artifactId>
                    <version>1.6.1</version>
                    <type>jar</type>
                    <scope>compile</scope>
                </dependency>
                <dependency>
                    <groupId>org.apache.hadoop</groupId>
                    <artifactId>hadoop-core</artifactId>
                    <version>0.20.2</version>
                    <type>jar</type>
                    <scope>compile</scope>
                </dependency>
                <dependency>
                    <groupId>com.snda</groupId>
                    <artifactId>dw.common</artifactId>
                    <version>1.0-SNAPSHOT</version>
                    <type>jar</type>
                    <scope>compile</scope>
                </dependency>
                <dependency>
                    <groupId>net.sf.flexjson</groupId>
                    <artifactId>flexjson</artifactId>
                    <version>2.1</version>
                    <type>jar</type>
                    <scope>compile</scope>
                </dependency>
            </dependencies>
            <build>
                <finalName>dw.moniter</finalName>
                <pluginManagement>
                    <plugins>
                        <plugin>
                            <artifactId>maven-compiler-plugin</artifactId>
                            <configuration>
                                <source>1.5</source>
                                <target>1.5</target>
                            </configuration>
                        </plugin>
                        <plugin>
                            <groupId>org.codehaus.mojo</groupId>
                            <artifactId>tomcat-maven-plugin</artifactId>
                            <version>1.1</version>
                        </plugin>
                        <plugin>
                            <groupId>org.mortbay.jetty</groupId>
                            <artifactId>jetty-maven-plugin</artifactId>
                            <version>8.0.0.M2</version>
                        </plugin>
                    </plugins>
                </pluginManagement>
            </build>
        </project>
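    A common cause of this particular ClassCastException, offered as a guess rather than a confirmed diagnosis: a second servlet-api jar ends up packaged inside the webapp, so javax.servlet.Servlet gets loaded by two different classloaders and the cast fails even though the class hierarchy is correct. hadoop-core is a plausible carrier here, since it pulls in Jetty and the Servlet API transitively; running `mvn dependency:tree` will show the real culprit. A sketch of the exclusion -- the excluded coordinates are an assumption, so match them to what the tree actually prints:

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>0.20.2</version>
            <exclusions>
                <!-- keep the container's Servlet API as the only copy on the classpath -->
                <exclusion>
                    <groupId>javax.servlet</groupId>
                    <artifactId>servlet-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>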

    Read the article

  • The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going

    - by janice.heiss(at)oracle.com
    An informative article, based on a 2010 JavaOne (San Francisco, California) panel session, surveys a variety of expert perspectives on Java EE 6. The panel, moderated by Oracle's Alexis Moussine-Pouchkine, consisted of:

    - Adam Bien, Consultant, Author/Speaker, adam-bien.com
    - Emmanuel Bernard, Principal Software Engineer, JBoss by Red Hat
    - David Blevins, Senior Software Engineer, co-founder of the OpenEJB project and a founder of Apache Geronimo
    - Roberto Chinnici, Consulting Member of Technical Staff, Oracle
    - Jim Knutson, Java EE Architect, IBM
    - Reza Rahman, Lead Engineer, Caucho Technology, Inc.
    - Krasimir Semerdzhiev, Development Architect, SAP Labs Bulgaria

    The panel addressed such topics as platform and API adoption, Contexts and Dependency Injection (CDI), Java EE vs. Spring, the impact of Java EE 6 on tooling and testing, and Java EE.next, along with a variety of audience questions. Read the entire article for the whole picture.

    Read the article

  • IIS7.5 + Wordpress + Restrict Access to wp-login.php by client IP address

    - by JuanValdez
    I am moving from an Apache host to IIS. One of my sites is WordPress (running Multi-site), which gives me multiple blogs. I have moved all my rules from .htaccess to the Microsoft URL Rewrite module. I have one section left that will not import: I want to restrict access to all instances of the file wp-login.php by client IP address. In my .htaccess file I did the following:

        <Files wp-login.php>
            Order Deny,Allow
            Deny from all
            Allow from 192.168
        </Files>

    Any smart ideas on how to accomplish this in IIS 7.5?
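    A sketch of the equivalent in IIS 7.5 web.config, assuming the "IP and Domain Restrictions" role feature is installed; the <location> path and the 192.168.0.0/16 range mirror the .htaccess block above and should be adjusted to the real network:

        <configuration>
            <location path="wp-login.php">
                <system.webServer>
                    <security>
                        <!-- deny everyone not explicitly listed -->
                        <ipSecurity allowUnlisted="false">
                            <!-- allow the 192.168.*.* range, mirroring "Allow from 192.168" -->
                            <add ipAddress="192.168.0.0" subnetMask="255.255.0.0" allowed="true" />
                        </ipSecurity>
                    </security>
                </system.webServer>
            </location>
        </configuration>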

    Read the article

  • Enabling SSL on apache2 causes address already in use error

    - by durron597
    My server works just fine on a normal apache2 install. Now I'm trying to install Subversion on this server using this guide: http://alephzarro.com/blog/2007/01/07/installation-of-subversion-on-ubuntu-with-apache-ssl-and-basicauth/ I get the following error:

        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443

    When I do grep -rH 443 /etc/apache2/, I get results in two files: ports.conf and sites-enabled/default-ssl. I tried it both with and without that last Listen 443 commented out; here's ports.conf:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            NameVirtualHost *:443
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

        #Listen 443

    And the first few lines of default-ssl:

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            SSLEngine on
            SSLCertificateFile /this/isnt/relevant/probably.pem
            SSLProtocol all
            SSLCipherSuite HIGH:MEDIUM

    And netstat -an --inet | grep 443 returns nothing. Any ideas?
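    One plausible reading, not a confirmed diagnosis: if both mod_ssl and mod_gnutls are enabled, both <IfModule> blocks in ports.conf above are active and Apache tries to bind port 443 twice, which produces exactly this make_sock error. A sketch of the check and fix on Ubuntu/Debian:

        # see whether both SSL modules are enabled
        ls /etc/apache2/mods-enabled | grep -E 'ssl|gnutls'

        # if so, disable one of them (keeping mod_ssl here) and restart
        sudo a2dismod gnutls
        sudo /etc/init.d/apache2 restart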

    Read the article

  • Debugging Tips for Skinning

    - by Christian David Straub
    Another guest post by Jeanne Waldman. If you are developing a skin for your Fusion Application in JDeveloper, you should know these tips:

    1. Firebug is your friend
    2. Uncompress the CSS style classes
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    4. View the generated CSS file

    1. Firebug is your friend

    Install Firebug (http://getfirebug.com/layout) into Firefox and use it to view your rendered jspx page in the browser. You can select the HTML DOM nodes on your page and see the CSS styles applied to each DOM node.

    2. Uncompress the CSS style classes

    By default the style classes that are rendered are compressed. You may see style classes like class="x10" and class="x2e", but in your skin CSS file you have style classes like af|inputText::content or af|panelBox::header. It is easier to develop and debug a skin with Firebug if you see the uncompressed style classes. To do this:

    a. open web.xml
    b. add

        <context-param>
            <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
            <param-value>true</param-value>
        </context-param>

    c. save
    d. restart the server and re-run your page.

    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away

    For performance's sake the ADF Faces framework does not check on every render whether your skin's .css file has changed. But this is exactly what you want to happen when you are developing or debugging a skin: you want your changes to get noticed right away, without restarting the server. To do this:

    a. open web.xml
    b. add

        <context-param>
            <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSPs change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
            <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
            <param-value>true</param-value>
        </context-param>

    c. save
    d. restart the server and re-run your page.
    e. from then on, you can change your skin's .css file, save it, refresh your page, and you should see the changes right away.

    4. View the generated CSS file

    There are different ways to view the generated CSS file, which is your skin's css file merged with all the skins it extends, processed, generated to the filesystem, and linked to your generated HTML page. One way is to view it with Firebug. The problem with this approach is that you might see something a little different from the actual css file, because Firebug may do some extra processing. I like to view the generated css file by: Right click on your page in the browser

    Read the article

  • Blogger.com kills FTP

    History (you can safely ignore): Back in 2002 I came across some (almost) free Linux/Apache space and set up my first manually-created HTML-based home page, which still exists: http://www.danielmoth.com/. In 2004 I wanted to have a blog that would be hosted on a sub-folder of my domain, and at the same time I did not want to mess with setting up a blog engine myself. I found the perfect solution in blogger.com, which offered a web interface for creating blog posts (and managing the pages' template)...

    Read the article

  • list of things to think about for hosting a potentially high traffic website

    - by SpashHit
    I do my own hosting for a few clients on my own VPS server (Linode). Since my clients so far have been extremely low-traffic, I have not had to dig into some of the considerations I would need for a higher-traffic site. Now I am bidding on a client whose site will be potentially higher-traffic (not Facebook or Twitter, but higher than Joe's ice cream shop). Is there a list of things I need to think about that I may be missing? I am going to assume, at least at first, that I will be able to handle them on my shared Linode, but I could move to a dedicated Linode if need be. I am not thinking so far of multiple servers, but short of that there are still considerations. For example, mod_perl instead of straight CGI, better backups, etc. What else? In case it matters, the stack will be Debian Linux / Apache / Perl / MySQL / Template Toolkit.

    Read the article

  • Xampp : http://localhost/xampp/splash.php :english : nothing happens

    - by Oussama Abdallah
    After I start XAMPP on my Ubuntu 12.04, I go to localhost and reach the page where all languages are listed; when I click any of these languages, nothing happens. The following steps are already done. I started XAMPP with:

        sudo /opt/lampp/lampp start

    After doing so, I get in the console that everything is running:

        Starting XAMPP for Linux ...
        XAMPP: Starting Apache...ok.
        XAMPP: Starting MySQL...ok.
        XAMPP: Starting ProFTPD...ok.

    Can you please help me solve this problem?

    Read the article

  • Game based on Ajax polling for 12 players

    - by Azincourt
    I am planning on writing a small browser game. The web server is a shared server, with no root access or installs possible. I want to use AJAX for client/server communication. There will be 12 players, so each player would be polling the server for the current game status every X milliseconds (say 200 ms). That is 12 players x 5 requests per second each = 60 requests per second. Can Apache handle those requests? What might be the bottlenecks with this approach?
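    One cheap way to answer the "can Apache handle it" question empirically is to replay the expected load with ApacheBench against the polling endpoint; the URL below is a placeholder for the real status script:

        # 12 players polling every 200 ms for one minute = roughly 3600 requests, 12 concurrent
        ab -n 3600 -c 12 http://example.com/game/status.php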

    Read the article

  • Cannot SSH to ubuntu server - openssh server owner changed

    - by Kshitiz Shankar
    I am using suPHP with Apache for virtual hosting, but somewhere down the line my root SSH access is getting broken. I haven't been able to figure out why it is happening, but eventually my root user is not able to SSH to the server. I get this error:

        *** invalid open call: O_CREAT without mode ***: sshd: root@pts/3 terminated
        ======= Backtrace: =========
        /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f12fe871817]
        /lib/x86_64-linux-gnu/libc.so.6(+0xeb7e1)[0x7f12fe8527e1]
        sshd: root@pts/3[0x41a542]
        sshd: root@pts/3[0x41a9eb]
        sshd: root@pts/3[0x41aeb8]
        sshd: root@pts/3[0x409630]
        sshd: root@pts/3[0x40f9ed]
        sshd: root@pts/3[0x410dd6]
        sshd: root@pts/3[0x411994]
        sshd: root@pts/3[0x411f16]
        sshd: root@pts/3[0x40b253]
        sshd: root@pts/3[0x42be24]
        sshd: root@pts/3[0x40c9cb]
        sshd: root@pts/3[0x412199]
        sshd: root@pts/3[0x4061a2]
        /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f12fe78876d]
        sshd: root@pts/3[0x407635]
        ======= Memory map: ========
        00400000-00448000 r-xp 00000000 ca:02 4554758 /usr/sbin/sshd
        00647000-00648000 r--p 00047000 ca:02 4554758 /usr/sbin/sshd
        00648000-00649000 rw-p 00048000 ca:02 4554758 /usr/sbin/sshd
        00649000-00750000 rw-p 00000000 00:00 0
        01794000-017b5000 rw-p 00000000 00:00 0 [heap]
        7f12fd5ad000-7f12fd5c2000 r-xp 00000000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1
        7f12fd5c2000-7f12fd7c1000 ---p 00015000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1
        7f12fd7c1000-7f12fd7c2000 r--p 00014000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1
        7f12fd7c2000-7f12fd7c3000 rw-p 00015000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1
        7f12fd7c3000-7f12fd7db000 r-xp 00000000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so
        7f12fd7db000-7f12fd9db000 ---p 00018000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so
        7f12fd9db000-7f12fd9dc000 r--p 00018000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so
        7f12fd9dc000-7f12fd9dd000 rw-p 00019000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so
        7f12fd9dd000-7f12fd9df000 rw-p 00000000 00:00 0
        7f12fd9df000-7f12fd9e6000 r-xp 00000000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so
        7f12fd9e6000-7f12fdbe5000 ---p 00007000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so
        7f12fdbe5000-7f12fdbe6000 r--p 00006000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so
        7f12fdbe6000-7f12fdbe7000 rw-p 00007000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so
        7f12fdbe7000-7f12fdd27000 rw-s 00000000 00:04 6167294 /dev/zero (deleted)
        7f12fdd27000-7f12fdd33000 r-xp 00000000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so
        7f12fdd33000-7f12fdf32000 ---p 0000c000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so
        7f12fdf32000-7f12fdf33000 r--p 0000b000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so
        7f12fdf33000-7f12fdf34000 rw-p 0000c000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so
        7f12fdf34000-7f12fdf3e000 r-xp 00000000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so
        7f12fdf3e000-7f12fe13e000 ---p 0000a000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so
        7f12fe13e000-7f12fe13f000 r--p 0000a000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so
        7f12fe13f000-7f12fe140000 rw-p 0000b000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so
        7f12fe140000-7f12fe157000 r-xp 00000000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so
        7f12fe157000-7f12fe356000 ---p 00017000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so
        7f12fe356000-7f12fe357000 r--p 00016000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so
        7f12fe357000-7f12fe358000 rw-p 00017000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so
        7f12fe358000-7f12fe35a000 rw-p 00000000 00:00 0
        7f12fe35a000-7f12fe362000 r-xp 00000000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so
        7f12fe362000-7f12fe561000 ---p 00008000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so
        7f12fe561000-7f12fe562000 r--p 00007000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so
        7f12fe562000-7f12fe563000 rw-p 00008000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so
        7f12fe563000-7f12fe565000 r-xp 00000000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so
        7f12fe565000-7f12fe765000 ---p 00002000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so
        7f12fe765000-7f12fe766000 r--p 00002000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so
        7f12fe766000-7f12fe767000 rw-p 00003000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so
        7f12fe767000-7f12fe91c000 r-xp 00000000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so
        7f12fe91c000-7f12feb1b000 ---p 001b5000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so
        7f12feb1b000-7f12feb1f000 r--p 001b4000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so
        7f12feb1f000-7f12feb21000 rw-p 001b8000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so
        7f12feb21000-7f12feb26000 rw-p 00000000 00:00 0
        7f12feb26000-7f12feb2f000 r-xp 00000000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so
        7f12feb2f000-7f12fed2f000 ---p 00009000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so
        7f12fed2f000-7f12fed30000 r--p 00009000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so
        7f12fed30000-7f12fed31000 rw-p 0000a000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so
        7f12fed31000-7f12fed5f000 rw-p 00000000 00:00 0
        7f12fed5f000-7f12fef10000 r-xp 00000000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
        7f12fef10000-7f12ff110000 ---p 001b1000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
        7f12ff110000-7f12ff12b000 r--p 001b1000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
        7f12ff12b000-7f12ff136000 rw-p 001cc000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
        7f12ff136000-7f12ff13a000 rw-p 00000000 00:00 0
        7f12ff13a000-7f12ff150000 r-xp 00000000 ca:02 3490020 /lib/x86_64-linux-gnu/libz.so.1.2.3.4
        7f12ff150000-7f12ff34f000 ---p 00016000 ca:02 3490020 /lib/x86_64-linux-gnu/libz.so.1.2.3.4
        7f12ff34f000-7f12ff350000 r--p 00015000 ca:02 3490020 /lib/x86_64-linux-gnu/libz.so.1.2.3.4
        Connection to stageserver.dockphp.com closed.

    After some debugging, I was able to narrow it down to a few things. For some reason the sshd daemon is running as root:www-data (the Apache user) instead of root. My FTP connection works, but SSH over the terminal fails. I have no idea whether it is caused by suPHP or not (that is the only place where user permissions etc. change). I really need to narrow it down and fix it ASAP. Thanks a lot!

    Read the article

  • What must one know when approaching web development?

    - by Tal Koren
    I just started working as a novice web developer. I know PHP pretty well, as well as some basic jQuery. Anyway, my boss told me I should explore and learn about MVC, Memcache, design patterns, how Apache servers work and how to set one up, etc. What I want to ask is actually this: what should I learn next? Web development is a big area, and odds are that I'll never stop learning, but what are the basics I should learn? What are the fundamentals? Currently I'm focusing on server-side development, but a very big part of me also wants to become a front-end ninja, so please consider that in your comments. Thanks in advance, you rock. :)

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:

    - Organizations are still determining what their dataflow and analysis workloads will comprise
    - Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options
    - The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.

    However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices:

    - The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
    - The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
    - Writing your own implementation of the abstract class org.apache.hadoop.mapred.TaskScheduler is an option, but usually overkill.

    If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

        <property>
            <name>mapred.jobtracker.taskScheduler</name>
            <value>org.apache.hadoop.mapred.FairScheduler</value>
        </property>
        <property>
            <name>mapred.fairscheduler.allocation.file</name>
            <value>/path/to/allocations.xml</value>
        </property>
        <property>
            <name>mapred.fairscheduler.poolnameproperty</name>
            <value>pool.name</value>
        </property>
        <property>
            <name>pool.name</name>
            <value>${user.name}</value>
        </property>

    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

        <?xml version="1.0"?>
        <allocations>
            <pool name="dan">
                <minMaps>5</minMaps>
                <minReduces>5</minReduces>
                <maxMaps>25</maxMaps>
                <maxReduces>25</maxReduces>
                <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
            </pool>
            <user name="dan">
                <maxRunningJobs>6</maxRunningJobs>
            </user>
            <userMaxJobsDefault>3</userMaxJobsDefault>
            <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
        </allocations>

    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the default number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker.
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.

    Read the article

  • Adam Bien Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Adam Bien, a self-employed enterprise Java consultant, author of five-star-rated books, presenter, Java Champion, NetBeans Dream Team member, JCP member, JCP Expert Group member of several Java EE groups, and holder of several other titles, is one of the most vocal advocates of the Java EE platform. His code-driven workshops using Java EE 6, NetBeans, and GlassFish have won accolades at developer conferences all around the world. Adam has been using GlassFish for all his projects for many years. One of the reasons he uses GlassFish is his high confidence that Java EE compliance bugs will be fixed faster. He finds GlassFish a very capable application server for faster development and continuous deployment. His own media properties run on GlassFish with an Apache front-end. Good documentation, accessible source code, and REST/web/CLI administration and monitoring facilities are some other reasons to pick GlassFish. He presented at the recently concluded GlassFish community event at JavaOne 2012. You can watch the video (with transcript) below, showing him in full action:

    Read the article

  • Deleting old tomcat version and setting a new one

    - by Diego
    I had Apache Tomcat installed via apt-get, but I decided to get a newer one, so I performed apt-get remove tomcat7 and apt-get purge tomcat7, and then installed a newer one, namely the Tomcat server bundled with the NetBeans installer. However, I'm still seeing the old default page from the former Tomcat install:

        It works!
        If you're seeing this page via a web browser, it means you've setup Tomcat successfully. Congratulations!
        This is the default Tomcat home page. It can be found on the local filesystem at: /var/lib/tomcat7/webapps/ROOT/index.html

    I already set a different port in the server.xml file, and whenever I go to that site after executing the startup.sh file with sudo permissions, I'm not getting any page, as if the new server isn't running. How can I still be getting the page from the old Tomcat install? When I execute startup.sh the log says everything is set OK, so why isn't it working?
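    A couple of checks worth running first, sketched below with assumed service and port names: the default page pointing at /var/lib/tomcat7 strongly suggests either that part of the packaged Tomcat is still installed and bound to the port, or that the browser is serving a cached copy of the old page.

        # is anything from the tomcat7 package still present?
        dpkg -l | grep tomcat

        # which process is actually listening on the old and new ports?
        sudo netstat -tlnp | grep -E ':8080|:80'

        # if the old service is somehow still running, stop it before retrying startup.sh
        sudo service tomcat7 stop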

    Read the article

  • Securely sending data from shared hosted PHP script to local MSSQL

    - by user329488
    I'm trying to add data from a webhook (from a web cart) to a local Microsoft SQL Server. It seems like the best route for me is to use a PHP script to listen for new data (POSTed as JSON), parse it, then run a query to add it to MSSQL. I'm not familiar with the security concerns around the connection between the PHP script (which would sit on a shared-host website) and the local MSSQL database. I would just keep the PHP script running on localhost (I have Apache running on Windows), but the URI for the webhook needs to be publicly accessible. Alternately, I assume that I could just schedule a script on localhost to check periodically for updates through the web cart's API, though webhooks seem to be more fool-proof for an amateur programmer like myself. What steps can I take to ensure security when using PHP on a remote, shared host to connect to MSSQL on my local machine?
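    On the receiving side, one common pattern is worth sketching (with hypothetical names throughout -- check whether the cart's webhooks actually sign their payloads): have the sender include an HMAC of the JSON body computed with a shared secret, and reject anything that doesn't verify before touching the database. For the hop from the shared host to the local MSSQL machine, prefer HTTPS or an SSH/VPN tunnel over exposing the database port publicly.

        <?php
        // Sketch: verify a webhook payload before acting on it.
        // 'X-Cart-Signature' and $shared_secret are hypothetical names.
        $payload = file_get_contents('php://input');   // raw JSON body as sent
        $shared_secret = 'change-me';
        $expected = hash_hmac('sha256', $payload, $shared_secret);
        $given = isset($_SERVER['HTTP_X_CART_SIGNATURE']) ? $_SERVER['HTTP_X_CART_SIGNATURE'] : '';

        if ($given !== $expected) {        // a constant-time comparison is better where available
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        $data = json_decode($payload, true); // safe to parse and queue for MSSQL now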

    Read the article

  • .htaccess non-www to www rule seems to work but the URL isn't changing in the address bar

    - by SnakeByte
    On a Joomla site (Apache, shared hosting), I'm using the following .htaccess rule:

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

    The problem is that the browser's address bar text does not change from example.com to www.example.com. It seems the redirect actually works, because all the links on the pages are changed to www, and after clicking on any link from there it always continues to have www added. The problem is the first page (no matter which one) that is loaded using the browser's address bar, like example.com or example.com/random-page. Solved: the redirect actually works.

    Read the article

  • Web Site Development Software for VirtualBox Ubuntu 11.04

    - by Paul Sonier
    I'm doing some development of a website in a VirtualBox guest running Ubuntu 11.04 (the host is running Windows, but I want to do the web development in a Linux environment). My development languages are primarily PHP and JavaScript (using Apache and node.js). The question is this: is there a good IDE that works well under these conditions, i.e. running virtualized? I tried Eclipse and was not particularly thrilled with the performance; I'm wondering if there's some other way to do this than to do all my text editing in emacs.

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what NoSQL is. In this article we will take a quick look at another of the four most important buzz words that go around Big Data: Hadoop.

    What is Hadoop? Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under an Apache V2 license. It runs applications on large clusters of commodity hardware and it processes thousands of terabytes of data on thousands of nodes. Hadoop was inspired by Google's MapReduce and Google File System (GFS) papers. A major advantage of the Hadoop framework is that it provides reliability and high availability.

    What are the core components of Hadoop? There are two major components of the Hadoop framework, each of which performs one of its essential tasks. Hadoop MapReduce is the method used to split a larger data problem into smaller chunks and distribute them to many different commodity servers. Each server has its own set of resources and processes its chunk locally; once a commodity server has processed its data, it sends the results back collectively to the main server. This is effectively a process by which we handle large data effectively and efficiently. (We will understand this in tomorrow's blog post.) The Hadoop Distributed File System (HDFS) is a virtual file system. There is a big difference between HDFS and any other file system: when we move a file onto HDFS, it is automatically split into many small pieces, and these small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance or high availability. (We will understand this in the day after tomorrow's blog post.) Besides the above two core components, the Hadoop project also contains the following modules:

    - Hadoop Common: common utilities for the other Hadoop modules
    - Hadoop YARN: a framework for job scheduling and cluster resource management

    There are a few other projects related to Hadoop (like Pig and Hive) which we will gradually explore in later blog posts.

    A Multi-node Hadoop Cluster Architecture: Now let us quickly look at the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker or slave nodes. As discussed earlier, the entire cluster contains two layers, a MapReduce layer and an HDFS layer, and each layer has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A slave or worker node consists of a DataNode and TaskTracker. It is also possible for a slave or worker node to be only a data node or only a compute node; that flexibility is in fact a key feature of Hadoop. In this introductory blog post we will stop here in describing the architecture of Hadoop. In a future blog post of this series we will explore the various components of the Hadoop architecture in detail.

    Why Use Hadoop? There are many advantages of using Hadoop. Let me quickly list them here:

    - Robust and Scalable -- We can add new nodes as needed, as well as modify them.
    - Affordable and Cost Effective -- We do not need any special hardware for running Hadoop. We can just use commodity servers.
    - Adaptive and Flexible -- Hadoop is built keeping in mind that it will handle structured and unstructured data.
    - Highly Available and Fault Tolerant -- When a node fails, the Hadoop framework automatically fails over to another node.

    Why is Hadoop named Hadoop? In 2005, Hadoop was created by Doug Cutting and Mike Cafarella while working at Yahoo. Doug Cutting named Hadoop after his son's toy elephant.

    Tomorrow: In tomorrow's blog post we will discuss the buzz word MapReduce.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
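    To make the MapReduce idea concrete, here is the canonical word-count example sketched against Hadoop's Java API (the job-submission boilerplate is omitted): the map step runs on each node over that node's local chunk of the input and emits (word, 1) pairs, and the reduce step sums the counts that the framework has already grouped by word. This is an illustrative minimum, not production code.

        import java.io.IOException;
        import java.util.StringTokenizer;

        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        public class WordCount {

            // Runs on every node, over that node's local split of the input.
            public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
                private final static IntWritable ONE = new IntWritable(1);
                private final Text word = new Text();

                @Override
                protected void map(LongWritable key, Text value, Context context)
                        throws IOException, InterruptedException {
                    StringTokenizer itr = new StringTokenizer(value.toString());
                    while (itr.hasMoreTokens()) {
                        word.set(itr.nextToken());
                        context.write(word, ONE);   // emit (word, 1)
                    }
                }
            }

            // Receives all counts for one word, already grouped by the framework.
            public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
                @Override
                protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                        throws IOException, InterruptedException {
                    int sum = 0;
                    for (IntWritable v : values) {
                        sum += v.get();
                    }
                    context.write(key, new IntWritable(sum));  // emit (word, total)
                }
            }
        }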

    Read the article

  • #1045 Cannot log in to the MySQL server

    - by user1198291
    I am totally new to Linux/Ubuntu. I am trying to set up LAMP on my OS; I've installed Apache, PHP, and MySQL with the following commands:

        sudo apt-get install apache2
        sudo apt-get install php5
        sudo apt-get install libapache2-mod-php5
        sudo apt-get install mysql-server libapache2-mod-auth-mysql php5-mysql
        sudo apt-get install phpmyadmin

    Everything works fine except that I totally cannot log into MySQL (which leads to a phpMyAdmin login failure), getting the errors:

        #1045 Cannot log in to the MySQL server
        Access denied for user 'root'@'localhost' (using password: YES)

    I googled the problem and also tried to reinstall all installed components, but the same result came up! On Windows I usually modified the contents of the MySQL config file, but on Ubuntu nothing is the same as Windows! :) Can anybody help me with this? I really need to set up LAMP. Thanks in advance
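    Two recovery paths that usually work on Ubuntu, sketched below: re-run the package configuration step (which re-prompts for the MySQL root password), or, failing that, restart MySQL with the grant tables disabled and set the password by hand. The 5.5 in the package name is an assumption; match it to what `dpkg -l | grep mysql-server` reports.

        # option 1: let the package re-ask for a root password
        sudo dpkg-reconfigure mysql-server-5.5

        # option 2: reset it manually with the grant tables disabled
        sudo service mysql stop
        sudo mysqld_safe --skip-grant-tables &
        mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"
        # stop the temporary instance and start MySQL normally again
        sudo killall mysqld
        sudo service mysql start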

    Read the article

  • website address points to localhost

    - by munir ahmad
    Today, while surfing the internet in Firefox, I opened a website and it warned "This site may harm your computer" and asked me to add the website to the exception list if I wanted to open it. I added the website to the exception list and trusted it. After that, whenever I open this website, it always points to localhost while the internet is connected. I have set up localhost through Apache (XAMPP server). If the internet is not connected, this website does not open anything, but localhost still works as usual. How can I fix this situation so that the website does not point to localhost? I am using WinXP SP3. The same problem now appears in all browsers too.

    Read the article

  • problem connecting to magento connect

    - by amir
    Hi, I'm using Magento 1.4.0, and when I try to get to Magento Connect and download a plugin, the page says:

        Error: Please check for sufficient write file permissions
        Your Magento folder does not have sufficient write permissions, which this web based downloader requires. If you wish to proceed downloading Magento packages online, please set all Magento folders to have writable permission for the web server user (example: apache) and press the "Refresh" button to try again.

    Does anyone know how I can fix this problem? Thanks.

    Update: the plugin I'm trying to use is MagentoPycho light box, so I unpacked the folder into app/code/local, but it still doesn't show in the admin area.
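    The usual remedy, sketched for a Linux host (the www-data user is an assumption -- substitute whatever user Apache runs as, and tighten permissions again after the download): give the web server write access to the Magento tree that the downloader touches.

        # from the Magento root; adjust the user to your Apache user
        sudo chown -R www-data:www-data .
        find . -type d -exec chmod 775 {} \;
        find . -type f -exec chmod 664 {} \;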

    Read the article

< Previous Page | 286 287 288 289 290 291 292 293 294 295 296 297  | Next Page >