Search Results

Search found 10610 results on 425 pages for 'apache hadoop'.


  • org.apache.commons.httpclient.NameValuePair in POST method

    - by pushkins
    I'm writing some code like this:

        PostMethod p = new PostMethod(someurl);
        ...
        NameValuePair[] data = {
            new NameValuePair("name1", "somevalue1"),
            new NameValuePair("var[3][1]", "10")
        };
        try {
            hc.executeMethod(p);
        }
        ...

    And this is what I see when I look at my POST in Wireshark:

        POST /someurl HTTP/1.1
        ...
        type=var&ship%5B3%5D%5B1%5D=10

    (%5B means [, %5D means ].) So the problem is: how can I get square brackets into my POST?
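
    A hedged sketch (Commons HttpClient 3.x assumed; the URL is illustrative): %5B/%5D is the standard application/x-www-form-urlencoded form of [ and ], and receivers such as PHP decode it transparently, so the encoded request is usually already correct. If the server really does need literal brackets, one workaround is to bypass the automatic encoding and set the raw request body yourself:

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.PostMethod;
        import org.apache.commons.httpclient.methods.StringRequestEntity;

        public class BracketPost {
            public static void main(String[] args) throws Exception {
                PostMethod p = new PostMethod("http://example.com/someurl"); // illustrative URL
                // The body string is sent verbatim, brackets unescaped:
                p.setRequestEntity(new StringRequestEntity(
                        "name1=somevalue1&var[3][1]=10",
                        "application/x-www-form-urlencoded",
                        "UTF-8"));
                new HttpClient().executeMethod(p);
                p.releaseConnection();
            }
        }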

  • Internet Explorer 8 + Deflate

    - by Andreas Bonini
    I have a very weird problem, and I really do hope someone has an answer, because I wouldn't know where else to ask. I am writing a CGI application in C++ which is executed by Apache and outputs HTML code. I am compressing the HTML output myself - from within my C++ application - since my web host doesn't support mod_deflate for some reason. I tested this with Firefox 2, Firefox 3, Opera 9, Opera 10, Google Chrome, Safari, IE6, IE7, IE8, even wget. It works with ANYTHING except IE8. IE8 just says "Internet Explorer cannot display the webpage", with no information whatsoever. I know it's because of the compression only because it works if I disable it. Do you know what I'm doing wrong? I use zlib to compress it, and the exact code is:

        /* Compress it */
        int compressed_output_size = content.length() + (content.length() * 0.2) + 16;
        char *compressed_output = (char *)Alloc(compressed_output_size);
        int compressed_output_length;
        Compress(compressed_output, compressed_output_size,
                 (void *)content.c_str(), content.length(),
                 &compressed_output_length);

        /* Send the compressed header */
        cout << "Content-Encoding: deflate\r\n";
        cout << boost::format("Content-Length: %d\r\n") % compressed_output_length;
        cgiHeaderContentType("text/html");
        cout.write(compressed_output, compressed_output_length);

        static void Compress(void *to, size_t to_size, void *from,
                             size_t from_size, int *final_size)
        {
            int ret;
            z_stream stream;

            stream.zalloc = Z_NULL;
            stream.zfree  = Z_NULL;
            stream.opaque = Z_NULL;

            if ((ret = deflateInit(&stream, CompressionSpeed)) != Z_OK)
                COMPRESSION_ERROR("deflateInit() failed: %d", ret);

            stream.next_out  = (Bytef *)to;
            stream.avail_out = (uInt)to_size;
            stream.next_in   = (Bytef *)from;
            stream.avail_in  = (uInt)from_size;

            if ((ret = deflate(&stream, Z_NO_FLUSH)) != Z_OK)
                COMPRESSION_ERROR("deflate() failed: %d", ret);

            if (stream.avail_in != 0)
                COMPRESSION_ERROR("stream.avail_in is not 0 (it's %d)", stream.avail_in);

            if ((ret = deflate(&stream, Z_FINISH)) != Z_STREAM_END)
                COMPRESSION_ERROR("deflate() failed: %d", ret);

            if ((ret = deflateEnd(&stream)) != Z_OK)
                COMPRESSION_ERROR("deflateEnd() failed: %d", ret);

            if (final_size)
                *final_size = stream.total_out;
        }
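
    A plausible culprit, offered as a hedged sketch rather than a confirmed diagnosis: deflateInit() produces a zlib-wrapped stream (RFC 1950), while IE has historically accepted only a raw DEFLATE stream (RFC 1951) under "Content-Encoding: deflate"; most other browsers tolerate both. deflateInit2() with negative windowBits drops the zlib header and trailer:

        #include <zlib.h>

        /* Drop-in replacement for the deflateInit() call above:
         * -15 = raw deflate with a 32 KB window, 8 = default memLevel. */
        static int init_raw_deflate(z_stream *stream, int level)
        {
            stream->zalloc = Z_NULL;
            stream->zfree  = Z_NULL;
            stream->opaque = Z_NULL;
            return deflateInit2(stream, level, Z_DEFLATED, -15, 8,
                                Z_DEFAULT_STRATEGY);
        }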

  • Will PHP/Apache ever support multithreading?

    - by fayer
    I mainly focus on the web; I don't think I'll ever create desktop applications, so I believe it's better for me to stick with typical web languages like PHP. I know one advantage Java has over PHP is multithreading, though. Will PHP ever support this feature in the future? Thanks.

  • Apache server-side file caching via .htaccess?

    - by purpler
    Hi, I'm starting a new website that will include several JS libraries, and I'd like to know what a .htaccess template should look like with caching of media and JS files turned on. What's better for compression, gzip or deflate? Is it a better/faster solution to serve those JS libraries off the Google CDN, or locally? I ask the CDN question because some of the scripts served off the Google CDN will potentially update and could eventually break the website layout, so I thought it would be better to host them locally and cache them via the web server, if that works at the same or nearly the same speed.
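
    A hedged .htaccess starting point (directive availability depends on the host; the expiry times are illustrative). On the compression question: mod_deflate is Apache 2's gzip implementation, so "gzip vs. deflate" is largely the same filter, and it should be applied to text assets only - images are already compressed:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png  "access plus 1 month"
            ExpiresByType image/jpeg "access plus 1 month"
            ExpiresByType image/gif  "access plus 1 month"
            ExpiresByType text/css   "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
        </IfModule>

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/css application/javascript
        </IfModule>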

  • Apache Thrift: struct containing itself

    - by mamu
    I am looking into Thrift for serialization of data, but the documentation says of cyclic structs: "Structs can only contain structs that have been declared before it. A struct also cannot contain itself." One of our requirements is:

        Struct A
            List of child items (the items are Struct A)

    So, reading that, can I not have a struct within itself at any level? Can I have it in the cyclic form above, where the struct is not a member of itself directly, but has a member that contains the struct? The documentation is not very descriptive. Is this possible in Thrift? Does protobuf support it?
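
    A hedged workaround sketch in Thrift IDL (all names illustrative): if the IDL version in use rejects self-referential structs even through a container, the tree can be flattened into an id -> node map, with children stored as ids rather than nested structs:

        struct Node {
          1: i32 id,
          2: string payload,
          3: list<i32> childIds,   // keys into Tree.nodes below
        }

        struct Tree {
          1: i32 rootId,
          2: map<i32, Node> nodes,
        }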

  • How to remove .php from a certain file (with Apache .htaccess)

    - by user2015253
    I want a certain file (only this one!) to be callable without its .php extension. But it takes GET parameters, and they should be preserved. I tried it in my .htaccess with RewriteEngine on followed by either

        RewriteRule ^file(.*)$ file.php$1

    or

        RewriteRule ^file(.+)$ file.php$1

    but neither works (the first gives error 500, the second gives 404). An example of what I want to call in the browser: file?param=asd&foo=bar should be served as file.php?param=asd&foo=bar.
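
    A hedged sketch ("file" is the asker's name for the script): the 500 is consistent with a rewrite loop, because file.php itself matches ^file(.*)$ and keeps being rewritten; and a RewriteRule pattern never sees the query string, so [QSA] is what carries ?param=asd&foo=bar through:

        RewriteEngine On
        # Exact match, so file.php does not re-match and loop:
        RewriteRule ^file$ file.php [L,QSA]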

  • Any way to chunk gzip with Apache and PHP

    - by donatJ
    I have a web application on a site that takes a while (~10 seconds) to complete a portion of the page near the bottom. It has been optimized as far as it can be, and caching is not an option. We have compression enabled on the server via the .htaccess directive

        SetOutputFilter DEFLATE

    The problem is that this causes the whole page to be held until completion before anything is output to the user, which is not optimal because the user sees nothing until the page completes. I have also tried the PHP ob_start("ob_gzhandler") method. Currently I have a <FilesMatch> block in my .htaccess restricting this specific script from being compressed. Basically, my question is this: is there a way to chunk gzip or deflate output so that the user gets it in pieces, and can see that the page has begun loading?
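
    One possible approach, offered as a hedged sketch: keep the <FilesMatch> exclusion so Apache leaves this script alone, then emit each chunk as its own complete gzip member. Concatenated gzip members are legal per the gzip spec (RFC 1952), though how early each browser renders them should be verified:

        <?php
        header('Content-Encoding: gzip');
        header('Content-Type: text/html');

        function emit_chunk($html) {
            echo gzencode($html);  // one complete gzip member per call
            flush();               // push it past PHP's output buffers
        }

        emit_chunk('<html><body>Fast part of the page...');
        // ... the slow ~10 second section computes here ...
        emit_chunk('...slow part, finally ready.</body></html>');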

  • PHP.ini Settings Are Ignored By PHP5.3.5 Running With Windows 7 And Apache 2.2.15

    - by Andy
    I did an install of PHP 5.3.5 on Windows 7 Home Premium using the MSI installer download, overwriting a previous version of PHP5 in C:\php5\. When first testing it, the server failed to start. I fixed this by adding the path to PHP in the Apache 2.2 httpd file, where the installer had inserted two lines of code pointing to the ini file directory and the PHP DLL but had left out the directory path. After doing this, the server starts OK and I can run phpinfo to view the PHP settings in my web browser on localhost.

    phpinfo states that the loaded configuration file is C:\php5\php.ini, as expected. But if I make any changes to the settings and reboot the server, none of the changes are reflected in phpinfo. (Yes, I do refresh the browser window.) If I rename php.ini to something else to make it invisible, phpinfo correctly identifies that there is no php.ini file loaded. So the settings in php.ini are being ignored and some default settings are being used instead (but I have no idea where these are derived from).

    As far as I can tell, there are no other php.ini files on my computer. phpinfo states that the Configuration File (php.ini) Path is C:\Windows, but this is the same as on a Windows XP computer that I work on, and in the Windows folder I don't see any php.ini file. In the Windows registry there is no mention of PHP5, and the PATH environment variable starts with C:\php5\;. Hopefully someone can suggest how I can get PHP5 to take notice of the C:\php5\php.ini settings. :)
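
    A hedged httpd.conf sketch (the DLL name is the usual one for PHP 5 with Apache 2.2; adjust to the actual install): with the Apache module SAPI, the php.ini actually used is controlled by PHPIniDir, and pinning it explicitly, then fully stopping and starting the Apache service (rather than a soft restart), rules out a second, stale configuration being picked up:

        LoadModule php5_module "C:/php5/php5apache2_2.dll"
        PHPIniDir "C:/php5"
        AddHandler application/x-httpd-php .php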

  • How to create a database deadlock using JDBC and JUnit

    - by Isawpalmetto
    I am trying to create a database deadlock, and I am using JUnit. I have two concurrent tests running which both update the same rows in a table over and over again in a loop. My idea is that one test updates row A in table A and then row B in table B, over and over, while at the same time the other test updates row B in table B and then row A in table A. From my understanding this should eventually result in a deadlock.

    Here is the code for the first test:

        public static void testEditCC() {
            try {
                int rows = 0;
                int counter = 0;
                int large = 10000000;
                Connection c = DataBase.getConnection();
                while (counter < large) {
                    int pid = 87855;
                    int cCode = 655;
                    String newCountry = "Egypt";
                    int bpl = 0;
                    stmt = c.createStatement();
                    rows = stmt.executeUpdate("UPDATE main " +    // create lock on main table
                            "SET BPL=" + cCode + " WHERE ID=" + pid);
                    rows = stmt.executeUpdate("UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602"); // create lock on BPL table
                    counter++;
                }
                assertTrue(rows == 1);
                //rows = stmt.executeUpdate("Insert into BPL (ID, DESCRIPTION) VALUES (" + cCode + ", '" + newCountry + "')");
            } catch (SQLException ex) {
                ex.printStackTrace();
                //ex.getMessage();
            }
        }

    And here is the code for the second test:

        public static void testEditCC() {
            try {
                int rows = 0;
                int counter = 0;
                int large = 10000000;
                Connection c = DataBase.getConnection();
                while (counter < large) {
                    int pid = 87855;
                    int cCode = 655;
                    String newCountry = "Jordan";
                    int bpl = 0;
                    stmt = c.createStatement();
                    //stmt.close();
                    rows = stmt.executeUpdate("UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602"); // create lock on BPL table
                    rows = stmt.executeUpdate("UPDATE main " +    // create lock on main table
                            "SET BPL=" + cCode + " WHERE ID=" + pid);
                    counter++;
                }
                assertTrue(rows == 1);
                //rows = stmt.executeUpdate("Insert into BPL (ID, DESCRIPTION) VALUES (" + cCode + ", '" + newCountry + "')");
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }

    I am running these two separate JUnit tests at the same time, connecting to an Apache Derby database that I am running in network mode within Eclipse. Can anyone help me figure out why a deadlock is not occurring? Perhaps I am using JUnit wrong.
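
    A hedged sketch of the likely missing ingredient: JDBC connections default to auto-commit, so each executeUpdate() commits and releases its row lock immediately - neither test ever waits while holding a lock, and no deadlock can form. Holding both updates in one transaction keeps the lock windows overlapping (DataBase is the asker's helper class):

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        class DeadlockDemo {
            static void updateBothTables() throws SQLException {
                Connection c = DataBase.getConnection();
                c.setAutoCommit(false);              // hold locks until commit
                try (Statement stmt = c.createStatement()) {
                    stmt.executeUpdate("UPDATE main SET BPL=655 WHERE ID=87855");
                    // The lock on main is now held; the sibling test takes its
                    // BPL lock first, so the two transactions can block each other.
                    stmt.executeUpdate("UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602");
                    c.commit();
                } catch (SQLException e) {
                    c.rollback();
                    throw e;
                }
            }
        }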

  • Handling Apache Thrift list/map Return Types in C++

    - by initzero
    First off, I'll say I'm not the most competent C++ programmer, but I'm learning, and enjoying the power of Thrift. I've implemented a Thrift service with some basic functions that return void, i32, and list. I'm using a Python client controlled by a Django web app to make RPC calls, and it works pretty well. The generated code is pretty straightforward, except for list returns:

        namespace cpp Remote

        enum N_PROTO {
          N_TCP,
          N_UDP,
          N_ANY
        }

        service Rcon {
          i32 ping()
          i32 KillFlows()
          i32 RestartDispatch()
          i32 PrintActiveFlows()
          i32 PrintActiveListeners(1:i32 proto)
          list<string> ListAllFlows()
        }

    The generated signatures from Rcon.h:

        int32_t ping();
        int32_t KillFlows();
        int32_t RestartDispatch();
        int32_t PrintActiveFlows();
        int32_t PrintActiveListeners(const int32_t proto);
        int64_t ListenerBytesReceived(const int32_t id);
        void ListAllFlows(std::vector<std::string> & _return);

    As you can see, the generated ListAllFlows() function takes a reference to a vector of strings, whereas I expected it to return a vector of strings as laid out in the .thrift description. I'm wondering if I am meant to provide the function a vector of strings to modify, and Thrift will then handle returning it to my client despite the function returning void. I can find absolutely no resources or example usages of Thrift list<> types in C++. Any guidance would be appreciated.
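
    To the usage question, a hedged sketch (RconClient and RconHandler are the conventional names the generator produces for this service): yes - for container return types the C++ generator uses an out-parameter, a common C++ idiom of the era for avoiding copies of large containers. The handler fills _return on the server, and the client's vector comes back populated:

        #include <string>
        #include <vector>

        // Client side: pass an empty vector; it returns populated.
        void print_flows(RconClient &client) {
            std::vector<std::string> flows;
            client.ListAllFlows(flows);
            for (size_t i = 0; i < flows.size(); ++i) {
                // flows[i] is one flow description
            }
        }

        // Server side: fill _return instead of returning a value.
        void RconHandler::ListAllFlows(std::vector<std::string> &_return) {
            _return.push_back("flow-1");
        }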

  • How to configure Apache for basic authentication, or to pass NTLM through, while proxying?

    - by trotzim
    Here is my setup:

        browser --- Apache proxy --- ISA server --- internet

    The ISA server requires authentication. The issue is to allow HTTPS through the two proxies. A configuration that works with HTTP is something like this (yes, I don't want to use ProxyPass but ProxyRequests):

        <VirtualHost *:8080>
            ...
            SetEnv auth-proxy-chain on
            ...
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            ...
            <Proxy *>
                AuthName "ISA server auth"
                AuthType Basic
                [here a module to authenticate]
                Require valid-user
                Allow from all
            </Proxy>
            ...
        </VirtualHost>

    The user can authenticate on the Apache proxy, and the authentication chain is then sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the Apache proxy. If I try to use the SSPI module (NTLM) with something like this:

        <Proxy *>
            AuthName "ISA server auth"
            AuthType ntlm
            [SSPI stuff]
            Require valid-user
            Allow from all
        </Proxy>

    the Apache server rejects the authentication (or the ISA server does; I don't really know). I used Wireshark to look at the nominal process while using the ISA server directly as the proxy: the first auth chain is of type Basic, then it switches to NTLM (and the challenge continues with NTLM). How should I configure Apache so that it forwards the NTLM authentication to the ISA proxy without checking it(*)? Or so that it rewrites headers to force basic authentication? (*) It seems not to be as easy as it sounds...

  • Serving a web application without Lighttpd/Apache

    - by Shyam
    Hi, as Rails applications run on port 3000 by default, would it be possible to start the application on port 80 directly? Is it really required to have a FastCGI/mod_proxy-enabled web server in front? My users won't number more than three at a time. If it's possible, how would I do it? Thanks!
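
    A hedged sketch (Rails 2-era command syntax; binding ports below 1024 requires root): for three concurrent users, the built-in server listening on port 80 directly is a plausible setup:

        sudo script/server -p 80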

  • Apache Solr auto-suggestions

    - by Pydev UA
    I use Solr with django-haystack. I set settings.HAYSTACK_INCLUDE_SPELLING = True and rebuilt the index. I'm trying to get any suggestion using:

        SearchQuerySet().auto_query('tryng ani word her').spelling_suggestion()

    But I always get None. What should I do to get at least one working suggestion? Maybe I need to add some configuration to the Solr config, or have some specific data indexed?
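
    A hedged solrconfig.xml sketch ("text" is an illustrative field name; it must be a field that actually holds the indexed content haystack writes): suggestions need a spellcheck component built from the index, and the component must also be wired into the request handler via <arr name="last-components"><str>spellcheck</str></arr>:

        <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
          <lst name="spellchecker">
            <str name="name">default</str>
            <str name="field">text</str>
            <str name="buildOnCommit">true</str>
          </lst>
        </searchComponent>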

  • Ajax gets nothing back from the PHP

    - by ShaMun
    In jQuery I don't get the alert, and in Firefox I don't see anything in the response. The code was working before, and the database query returns records successfully too. What am I missing?

    The jQuery Ajax call:

        $.ajax({
            type : "POST",
            url : "include/add_edit_del.php?model=teksten_display",
            data : "oper=search&ids=" + _id,
            dataType: "json",
            success : function(msg){
                alert(msg);
            }
        });

    The PHP:

        case 'teksten_display':
            $id = $_REQUEST['ids'];
            $res = $_dclass-_query_sql( "select a,b,id,wat,c,d from tb1 where id='" . $id . "'" );
            $_rows = array();
            while ( $rows = mysql_fetch_array($res) ) {
                $_rows = $rows;
            }
            //header('Cache-Control: no-cache, must-revalidate');
            //header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
            header('Content-type: application/json');
            echo utf8_encode( json_encode($_rows) );
            //echo json_encode($_rows);
            //var_dump($_rows);
            //print_r($res);
            break;

    The Firefox response headers:

        Date: Sat, 24 Apr 2010 22:34:55 GMT
        Server: Apache/2.2.3 (CentOS)
        X-Powered-By: PHP/5.1.6
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Content-Length: 0
        Connection: close
        Content-Type: application/json

    And the request headers:

        Host: www.xxxx.be
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-2.fc12 Firefox/3.5.9
        Accept: application/json, text/javascript, */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With: XMLHttpRequest
        Referer: http://www.xxxx.be/xxxxx
        Content-Length: 17
        Cookie: csdb=2; codb=5; csdbb=1; codca=1.4; csdca=3; PHPSESSID=benunvkpecqh3pmd8oep5b55t7; CAKEPHP=3t7hrlc89emvg1hfsc45gs2bl2
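
    A hedged debugging sketch: a 200 response with Content-Length: 0 usually means the script died before producing any output (a fatal error with display_errors off), so the first step is to surface the error rather than guess:

        <?php
        // Temporarily, at the very top of include/add_edit_del.php:
        ini_set('display_errors', '1');
        error_reporting(E_ALL);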

  • Apache Digester: Unexpected NoSuchMethodException on addSetNestedProperties

    - by qstorm
    Hi, I have a problem using Digester and I hope you can help me. I have the following bean:

        public class MyEntry {
            private String entityID;

            public String getEntityID() {
                return this.entityID;
            }

            public void setEntityID(final String entityID) {
                this.entityID = entityID;
            }
        }

    And the following XML structure:

        <entries>
            <entry>
                <MyID>
                    24309LAGH1
                </MyID>
            </entry>
        </entries>

    I use the addSetNestedProperties(...) method of the Digester API:

        digester.addSetNestedProperties("entries/entry", "MyID", "entryID");

    The following exception occurs:

        java.lang.NoSuchMethodException: Bean has no property named MyID

    Why is Digester searching for a property named "MyID"? I specified "entryID" as the bean property according to the Digester API. Thanks :) Best regards, QStorm
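
    One detail visible in the question itself (hedged, since only a fragment of the code is shown): the bean's property is entityID (per getEntityID/setEntityID), but the rule aliases the <MyID> element to "entryID", which MyEntry does not have - and Digester reports such failures in terms of the element it could not map. Aligning the alias with the real property name:

        // "MyID" is the XML element; the third argument must name an actual
        // bean property - here that is "entityID", not "entryID":
        digester.addSetNestedProperties("entries/entry", "MyID", "entityID");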

  • Controlling new features in Spring/Apache Tomcat

    - by HeretoLearn
    Is there a good way to manage the rollout of new software in a Spring/Tomcat configuration? The problem I am trying to address is being able to roll back newly deployed software, provided there are no hard dependencies on other moving pieces. For example: having changed the guts of an algorithm, I want to deploy the new software with the change off by default and then turn it on on one of the web servers. An obvious way is to build a session object which holds a map of (property, bool) key-value pairs, populated from a database at session startup, with the values then persisting for that session; that way, if a value is reverted, existing sessions that started with the original value retain it, giving application consistency. Is there a more obvious in-built mechanism to do this?
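
    A hedged sketch of the session-scoped holder the question describes (names illustrative): the flags are snapshotted once per session, so a later rollback cannot flip behaviour mid-session:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class FeatureFlags {
            private final Map<String, Boolean> snapshot;

            // Called once at session start (e.g. from an HttpSessionListener),
            // with the flag rows read from the database at that moment.
            public FeatureFlags(Map<String, Boolean> fromDb) {
                this.snapshot = new ConcurrentHashMap<String, Boolean>(fromDb);
            }

            public boolean isEnabled(String feature) {
                Boolean value = snapshot.get(feature);
                return value != null && value;
            }
        }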

  • Apache .htaccess Zend redirecting, excepting some folders

    - by Frederick Marcoux
    Last week I remade all of my website using the famous Zend Framework, and now I'm starting to worry about it... I'm trying to make an administration zone within a subfolder (also ZF) and an API Zend application for my Android mobile application. The problem is: I rewrote all routes in my principal website, so now it always searches for a route when I go to a subfolder. Here's my root folder .htaccess:

        RewriteEngine On
        RewriteRule ^.htaccess$ - [F]
        RewriteCond %{REQUEST_URI}!^/api/
        RewriteCond %{REQUEST_URI}!^/admin/
        RewriteRule ^public/.*$ /public/index.php [NC,L]
        RewriteRule ^(.*)$ /public/$1 [NC,L]

    The way I want it:

        URL: {domain}/             => ./public/index.php (my current ZF app)
        URL: {domain}/[admin|api]  => ./[admin|api]/public/index.php (the other apps)

    where {domain} is my TLD and [admin|api] is the requested folder. So, in simple terms:

        Request = /api             => /api
        Request = /admin           => /admin
        Request = {anything else}  => /public/index.php

    I searched a lot on SO and also on Google, but I didn't find anything that works.
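
    Two details stand out in the file as posted (hedged, since the full directory layout isn't shown): a RewriteCond needs a space between the test string and its pattern ("%{REQUEST_URI}!^/api/" is parsed as one token), and each RewriteCond governs only the single RewriteRule that immediately follows it, so the final catch-all runs unconditioned. A pass-through rule sidesteps both:

        RewriteEngine On
        # Hand /admin and /api to their own applications and stop rewriting:
        RewriteRule ^(admin|api)(/.*)?$ - [L]
        # Everything else goes to the main ZF front controller:
        RewriteRule ^public/.*$ /public/index.php [NC,L]
        RewriteRule ^(.*)$ /public/$1 [NC,L]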

  • MySQL, PHP, and Apache errors while connecting to DB

    - by cypherdelton
    I'm getting two errors when I test the mysql_connect() function in PHP. I just installed PHP and MySQL from scratch. These errors may be related, so I will post them both here:

        Warning: mysql_connect() [function.mysql-connect]: Headers and client library minor version mismatch. Headers:50158 Library:50518 in /usr/local/apache2/htdocs/index.php on line 6

        Warning: mysql_connect() [function.mysql-connect]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) in /usr/local/apache2/htdocs/index.php on line 6

    Many websites say to start the MySQL server if you are getting error #2. Whenever I execute the command

        mysqld start --user=mysql

    I get the error "mysqld: Too many arguments (first extra is 'start')". Adding an "&" to the end of the command makes no difference (I don't know if it is supposed to). Thanks, Cypher Delton
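
    A hedged sketch for the startup side (paths and service names vary by distribution): mysqld itself takes no "start" verb; it is launched through its wrapper or the init system. Separately, the first warning suggests PHP's MySQL client headers (5.1.58) don't match the installed client library (5.5.18), i.e. PHP was built against a different MySQL than is now installed:

        sudo mysqld_safe --user=mysql &
        # or, via the init system:
        sudo /etc/init.d/mysql start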

  • Can I have both Apache and nginx serve my websites?

    - by Marconi
    I have an existing Apache serving a couple of sites. Now I have a new Django site that's Ajax-intensive, so I'm planning to run it on Apache's mod_wsgi, but with nginx as a reverse proxy in front of it. Is it possible to have nginx be a reverse proxy for this new Django site while Apache serves the other sites directly, without going through nginx? Also, if you can, give me a rough idea of how I might set it up, if it's possible.
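
    A hedged sketch (names and addresses illustrative): on a single IP, two daemons cannot both bind port 80, so Apache serving the other sites "directly" requires a second IP (nginx listening on one, Apache's Listen bound to the other). Otherwise, move Apache to 8080 and let nginx front everything, proxying the Django host back to it:

        server {
            listen 80;
            server_name django.example.com;
            location / {
                proxy_pass http://127.0.0.1:8080;   # Apache + mod_wsgi
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }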

  • Apache FileUtils listFiles

    - by Marquinio
    Hey everyone, I'm trying to get a list of directories. I'm using FileUtils' listFiles(). I want to do something like this: listFiles(File, IOFileFilter, false). My real question is how I can implement accept() from the IOFileFilter so I can check whether the current File is a directory. Thank you in advance.
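
    A hedged sketch (the path is illustrative): note that FileUtils.listFiles() collects files, not directories, so a custom accept() there won't yield directories. Commons IO already ships the needed filter - DirectoryFileFilter implements accept() as an isDirectory() test, and IOFileFilter doubles as java.io.FileFilter:

        import java.io.File;
        import java.io.FileFilter;
        import org.apache.commons.io.filefilter.DirectoryFileFilter;

        public class ListDirs {
            public static void main(String[] args) {
                File root = new File("/some/path");   // illustrative path
                File[] dirs = root.listFiles(
                        (FileFilter) DirectoryFileFilter.DIRECTORY);
                // A hand-rolled equivalent of the filter's two accept() methods:
                //   public boolean accept(File f) { return f.isDirectory(); }
                //   public boolean accept(File dir, String name) {
                //       return new File(dir, name).isDirectory();
                //   }
            }
        }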

  • NullPointerException at org.hibernate.tuple.AbstractEntityTuplizer.createProxy

    - by saurabh
    I am using Hibernate 3.2 with the Struts 1.2 framework. I get this exception when I try to load the object. I am using this code to load it:

        public Currentprofile findById(java.lang.String id) {
            log.debug("getting Currentprofile instance with id: " + id);
            try {
                Currentprofile instance = (Currentprofile) getSession().get(
                        "com.hibermappings.Currentprofile", id);
                return instance;
            } catch (RuntimeException re) {
                log.error("get failed", re);
                throw re;
            }
        }

    My hbm file is this:

        <one-to-one name="referenceDb" lazy="proxy"
            class="com.hibermappings.ReferenceDb" cascade="all" constrained="false" />
        <one-to-one name="registration" lazy="proxy"
            class="com.hibermappings.Registration" cascade="all" constrained="false" />
        <one-to-one name="jobseekerpackagedetails" lazy="proxy"
            class="com.hibermappings.Jobseekerpackagedetails" cascade="all" constrained="false" />
        <property name="keyWords" type="java.lang.String">
            <column name="keyWords" length="5000" />
        </property>
        <property name="totalExp" type="java.lang.String">
            <column name="totalExp" length="100" />
        </property>
        <property name="hqualification" type="java.lang.String">
            <column name="hQualification" length="100" />
        </property>
        <property name="preferedLocation" type="java.lang.String">
            <column name="preferedLocation" length="100" />
        </property>
        <property name="functionalArea" type="java.lang.String">
            <column name="functionalArea" length="1000" />
        </property>
        <property name="expSalary" type="java.lang.String">
            <column name="expSalary" length="100" />
        </property>
        <property name="designation" type="java.lang.String">
            <column name="designation" length="100" />
        </property>
        <property name="resumeTitle" type="java.lang.String">
            <column name="resumeTitle" length="500" />
        </property>
        <property name="profileDetails" type="java.lang.String">
            <column name="profileDetails" length="65535" />
        </property>
        <property name="requiredProfile" type="java.lang.String">
            <column name="requiredProfile" length="65535" />
        </property>
        <property name="activatedOn" type="java.util.Date">
            <column name="activatedOn" length="0" />
        </property>
        <set name="resumes" inverse="true" cascade="save-update">
            <key>
                <column name="jobseekerId" length="50" />
            </key>
            <one-to-many class="com.hibermappings.Resume" />
        </set>
        </class>

    The same code runs well when used from a simple Java class within a main method. The full stack trace of the exception is:

        java.lang.NullPointerException
            at org.hibernate.tuple.AbstractEntityTuplizer.createProxy(AbstractEntityTuplizer.java:372)
            at org.hibernate.persister.entity.AbstractEntityPersister.createProxy(AbstractEntityPersister.java:3121)
            at org.hibernate.event.def.DefaultLoadEventListener.createProxyIfNecessary(DefaultLoadEventListener.java:232)
            at org.hibernate.event.def.DefaultLoadEventListener.proxyOrLoad(DefaultLoadEventListener.java:173)
            at org.hibernate.event.def.DefaultLoadEventListener.onLoad(DefaultLoadEventListener.java:87)
            at org.hibernate.impl.SessionImpl.fireLoad(SessionImpl.java:862)
            at org.hibernate.impl.SessionImpl.load(SessionImpl.java:781)
            at org.hibernate.impl.SessionImpl.load(SessionImpl.java:774)
            at com.DAOs.CurrentprofileDAO.getLoad(CurrentprofileDAO.java:71)
            at com.action.JobSekeerManage.viewProfile(JobSekeerManage.java:447)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:270)
            at org.apache.struts.actions.DispatchAction.execute(DispatchAction.java:187)
            at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
            at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
            at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
            at com.filter.HibernateFilter.doFilter(HibernateFilter.java:24)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:825)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:731)
            at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:526)
            at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
            at java.lang.Thread.run(Thread.java:595)
        Error::null

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for "Big Data." In fact, you need many tools - there's no product called "Big Data Solution" in a shrink-wrapped box - and if you find one, you probably shouldn't buy it. It's tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn't a reality. You'll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called "Distributed File and Compute". Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation's release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There's a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx; I'll cover the options at a general level below.)

    First Option: Windows Azure HDInsight Service

    Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There's nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done - and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that's another post.) This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection. You can read more about this option here: http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx

    Second Option: Microsoft HDInsight Server

    Your second option is to use the Hadoop distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure. This option is useful if you want to have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly - something that isn't going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx

    Third Option (unsupported): Installation on Windows Azure Virtual Machines

    Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself - it's open source, so there's nothing preventing you from doing that. Aside from being unsupported, there are other issues you'll run into with this approach - primarily involving performance and the amount of configuration you'll need to do to access the data nodes properly. But for a single-node installation (where all components run on one system), such as learning, demos, training and the like, this isn't a bad option. Did I mention that's unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/; more about Hadoop and its installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop; and more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW

    Choosing the right option

    Since you have two or three routes you can go, the best thing to do is evaluate the need you have and place the workload where it makes the most sense. My suggestion is to install HDInsight Server locally on a test system and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there's another great reference on Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx

    Read the article

  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! Actually, I have two virtual hosts running, each located in a different folder. Whether I display .gif images by navigating the two sites or access them directly by their URL, their size and quality are invariably degraded. I tried with three different browsers: same problem. Using those browsers on other sites on the Web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.
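
    A hedged debugging sketch (paths and host illustrative): DEFLATE/gzip is lossless, so mod_deflate could not change pixels even when enabled - visible quality loss means the bytes themselves differ. Comparing checksums of the file on disk and the file as served shows whether the server, or some intermediary (proxy, ISP "optimizer"), is recompressing:

        md5sum /var/www/site/img/logo.gif
        curl -s http://example.com/img/logo.gif | md5sum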
