Search Results

Search found 23233 results on 930 pages for 'feature request'.


  • Java HttpURLConnection class Program

    - by pandu
    I am learning Java. Here is the sample code for HttpURLConnection class usage from a textbook:

        import java.net.*;
        import java.io.*;
        import java.util.*;

        class HttpURLDemo {
            public static void main(String args[]) throws Exception {
                URL hp = new URL("http://www.google.com");
                HttpURLConnection hpCon = (HttpURLConnection) hp.openConnection();

                // Display request method.
                System.out.println("Request method is " + hpCon.getRequestMethod());

                // Display response code.
                System.out.println("Response code is " + hpCon.getResponseCode());

                // Display response message.
                System.out.println("Response Message is " + hpCon.getResponseMessage());

                // Get a list of the header fields and a set
                // of the header keys.
                Map<String, List<String>> hdrMap = hpCon.getHeaderFields();
                Set<String> hdrField = hdrMap.keySet();

                System.out.println("\nHere is the header:");

                // Display all header keys and values.
                for (String k : hdrField) {
                    System.out.println("Key: " + k + " Value: " + hdrMap.get(k));
                }
            }
        }

    My question is: why is the hpCon object declared this way:

        HttpURLConnection hpCon = (HttpURLConnection) hp.openConnection();

    instead of being declared like this?

        HttpURLConnection hpCon = new HttpURLConnection();

    The author provided the following explanation, which I can't understand: "Java provides a subclass of URLConnection that provides support for HTTP connections. This class is called HttpURLConnection. You obtain an HttpURLConnection in the same way just shown, by calling openConnection() on a URL object, but you must cast the result to HttpURLConnection. (Of course, you must make sure that you are actually opening an HTTP connection.) Once you have obtained a reference to an HttpURLConnection object, you can use any of the methods inherited from URLConnection."
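
    A minimal, self-contained sketch of what that cast is doing (this is not from the textbook; the CastDemo class name is made up for illustration):

        import java.net.*;

        class CastDemo {
            public static void main(String[] args) throws Exception {
                URL hp = new URL("http://www.google.com");
                // openConnection() is declared to return the general URLConnection type.
                // HttpURLConnection is an abstract subclass, so it cannot be created
                // directly with "new HttpURLConnection()"; instead the URL object builds
                // the appropriate subclass and we downcast to reach the HTTP-specific methods.
                URLConnection raw = hp.openConnection();
                if (raw instanceof HttpURLConnection) {   // true for http:// URLs
                    HttpURLConnection hpCon = (HttpURLConnection) raw;
                    System.out.println("Response code is " + hpCon.getResponseCode());
                }
            }
        }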

    Read the article

  • Visual Studio 2010 plus Help Index : have your cake and eat it too

    - by Adrian Hara
    Although the team's intentions might have been good, the new help system in Visual Studio 2010 is a huge step backwards (more like a cannonball-shot kind of leap, really) from the one we all know (and love?) in Visual Studio 2008 and 2005 (and heck, even VS6).

    Its biggest problem, from my point of view, is the total and complete lack of the Help Index feature: you know... the thing where you just go and type in what you're looking for and it filters down the list of results automatically. For me this was the number one productivity feature in the "old" help system, allowing me to find stuff very quickly.

    Number two is that it's entirely web based and runs, by default, in the browser. So imagine: when you press F1, a new tab opens in your default browser pointing to the help entry. While this is wrong in many ways, it's also extremely annoying; cleaning up tabs in the browser becomes a chore, which represents a serious productivity hit.

    These and many other problems were discussed extensively (and rather vocally) on Connect, but it seems MS chose to ignore them and release the new help system anyway, with the promise that more features will be added in a later release. Again, it kind of amazes me that they chose to ship a product with LESS features than the previous one and, what's worse, missing KEY features, just so it's "standards based" and "extensible". To be honest, I couldn't care less about the help system's implementation, I just want it to be usable, and I would've thought that by now the software community, and especially MS, would've learned this lesson.

    In the end, what kind of saddens me is that MS regards these basic features as ones for the "power help user". I mean, come on! a) it's not like my aunt's using Visual Studio 2010 and she represents the regular user, b) all software developers are, by definition, power users, and c) it's a freakin' help system, not rocket science! As you can tell, I'm pretty pissed. Even more so because I really feel that the VS2010 & co. release is a great one, with a lot of effort going into the various platforms and frameworks, most (if not all) of them being really REALLY good products. And then they go and screw up the help! How lame is that?!

    Anyway, it's not all gloom-and-doom. Luckily there is a desktop app by Robert Chandler (to whom I hereby declare eternal gratitude) that presents a UI over the new help system very close to what was there in VS2008. It still has some minor issues, but I'll take it over the browser version of the help any day. It's free, pretty quick (on my machine ;)) and nicely usable. So, if you hate the new help system (passionately) like I do, download H3Viewer now.

    Read the article

  • links for 2010-06-04

    - by Bob Rhubart
    @biemond: JEJB Transport and manipulating the Java Response in OSB 11g - "JEJB Transport works like the EJB Transport," says Oracle ACE Edwin Biemond, "but the request and response objects are not translated to XML, so you can't use XQuery etc. To make things not too hard, OSB 11g makes an XML presentation of the request method and its parameters, which you can use in the Proxy Service." (tags: oracleace soa oracle jejb java)

    @bex: Oracle UCM jQuery Plugin - "This connector allows you to use jQuery to make UCM Service calls through AJAX, and easily display the results," says Oracle ACE Director Bex Huff. "This is 100% pure JavaScript; no Java, Idoc, or ADF required!" (tags: oracleace ucm oracle otn enterprise2.0)

    Oracle Solaris Studio Express 6/10 and its Customer Feedback Program are now available (Oracle Developer Tools Blog) - "Oracle Solaris Studio Express 6/10 is available on Solaris 10 (SPARC, x86), OEL 5 (x86), RHEL 5 (x86), SuSE 11 (x86) today and will be available for OpenSolaris in the near future," says Pieter Humphrey. (tags: oracle otn solaris sparc linux)

    @soatoday: EA and SOA Should Report to COO - "So, who gets EA: the CIO or the VP of a Business? I argue neither! After all, a typical EA goal is to connect the Business and IT together to impart better structure and visibility across the enterprise. I firmly believe that neither should own EA, so that neither imparts too much of their organization (i.e. bias) on the EA process and deliverables. EA needs to be independent, and it's for all the right reasons." -- Oracle ACE Director Jordan Braunstein (tags: oracleace entarch soa)

    Read the article

  • As a web developer, are the legal issues the same when building a sex dating site?

    - by YumYumYum
    I have created many other normal sites, none related to any dating/sexual content. Do the same rules and regulations apply to a developer building a sex-related dating site - where people meet, get to know each other, and arrange sexual relationships (you know what I mean), with a webcam-sex feature, but not explicitly a porn site? Do those sites have any special legal terms and conditions for the developers, compared with the legal terms and conditions of non-sexual/dating sites?

    Read the article

  • Using Lighttpd: apache proxy or direct connection?

    - by Halfgaar
    Hi, I'm optimizing a site by using lighttpd for the static media. I've found that a recommended solution is to use an Apache proxy pointing to the lighttpd server. But does that use up an Apache thread/process per request? In my setup, I've noticed that all my processes are used up, even though they aren't doing anything CPU-wise. To free up Apache processes, I've configured lighttpd, and Munin shows that the number of processes needed has dropped significantly. However, I've set it up so that clients connect directly to lighty, to prevent Apache workers from being tied up serving static media. My question is: when using an Apache proxy, does that also use up a process/worker per request?
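
    For reference, a minimal sketch of the Apache-proxy variant being weighed here (this is not from the original question; the port 81 backend and the /media/ path are assumptions):

        # Hypothetical httpd.conf fragment: hand every /media/ request to a local
        # lighttpd instance assumed to be listening on port 81. While a request is
        # being proxied, it still occupies an Apache worker for its duration.
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        ProxyPass        /media/ http://127.0.0.1:81/media/
        ProxyPassReverse /media/ http://127.0.0.1:81/media/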

    Read the article

  • 403 Forbidden

    - by demas
    Here is my Nginx config:

        user pass users;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    Here is the Nginx log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error and how can I fix it?
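
    For what it's worth, a commonly suggested adjustment for this symptom with Passenger (not from the original question, and assuming the Redmine code itself lives in /www/public/redmine) is to point root at the application's public/ subdirectory; when root is not the app's public folder Passenger does not take over the request, and nginx falls back to a forbidden directory listing:

        server {
            listen 80;
            server_name some.another.ru;
            # Passenger detects a Rails app by looking one level above "root",
            # so root should be the app's public/ directory, not the app itself.
            root /www/public/redmine/public;
            passenger_enabled on;
            rails_env development;
        }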

    Read the article

  • Danger from the Deep

    Linux Journal: "If you remember my December Linux Journal column, I was excited about a particularly cool-looking submarine simulator, Danger from the Deep. This month, I'm proud to feature it."

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote it gives this error:

        401 - Unauthorized: Access is denied due to invalid credentials.

    (Remote = desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly. However, I have not tried Firebug on the server desktop; I would assume it's still issuing a 401 status code yet returning the content anyway. See Update #2.

    The application is using Anonymous Authentication and is written in .NET 4.0 ASP.NET using the MVC framework. Static content works fine, for example: http://server.com/content/image.jpg. Sysinternals procmon returns these 2 results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing shows up in the Event log on the server related to this. Pulling my hair out here, any troubleshooting tips?

    UPDATE #1: This just gets more WTF as I dig. If I click on the application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. I'm not leaving this on, so the problem is not solved yet; it's just more confusing.

    UPDATE #2: Using Firebug, I see that the status is still 401 Unauthorized, but the response is returning the application's correct HTML.

    UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING request trace that is causing the 401:

        ModuleName           ManagedPipelineHandler
        Notification         128
        HttpStatus           401
        HttpReason           Unauthorized
        HttpSubStatus        0
        ErrorCode            0
        ConfigExceptionInfo
        Notification         EXECUTE_REQUEST_HANDLER
        ErrorCode            The operation completed successfully. (0x0)

    UPDATE #4: The regular IIS log is showing this:

        #Software: Microsoft Internet Information Services 7.5
        #Version: 1.0
        #Date: 2010-07-20 19:17:22
        #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
        2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414

    Read the article

  • How do I get nginx to issue 301 requests to HTTPS location, when SSL handled by a load-balancer?

    - by growse
    I've noticed that there's functionality enabled in nginx by default, whereby a url request without a trailing slash for a directory which exists in the filesystem automatically has a slash added through a 301 redirect. E.g. if the directory css exists within my root, then requesting http://example.com/css will result in a 301 to http://example.com/css/. However, I have another site where the SSL is offloaded by a load-balancer. In this case, when I request https://example.com/css, nginx issues a 301 redirect to http://example.com/css/, despite the fact that the HTTP_X_FORWARDED_PROTO header is set to https by the load balancer. Is this an nginx bug? Or a config setting I've missed somewhere?
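
    One workaround that is sometimes suggested for this situation (a hypothetical sketch, not from the original question; the server name and root path are placeholders) is to issue the trailing-slash redirect explicitly, using the scheme the load balancer reports in X-Forwarded-Proto, rather than relying on nginx's built-in redirect:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            # Default to the scheme nginx saw, but trust the load balancer's header
            # when it says the original request was HTTPS.
            set $redirect_scheme $scheme;
            if ($http_x_forwarded_proto = "https") {
                set $redirect_scheme "https";
            }

            location / {
                # Reproduce the trailing-slash redirect for existing directories,
                # but with the externally visible scheme.
                if (-d $request_filename) {
                    rewrite ^(.*[^/])$ $redirect_scheme://$host$1/ permanent;
                }
            }
        }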

    Read the article

  • Evolution of an Application: how to manage and improve core engine?

    - by Phil Carter
    The web application I work on has been live for a year now, but it's time for it to evolve, and one of the ways in which it is evolving is into a multi-brand application - in this case several different companies using the application, with different templates/content and some slight business-logic changes between them.

    The problem I'm facing is implementing a best practice across the site where there are differences in business logic for each brand. These will mostly be very superficial, such as using an alternative mailing-list provider or capturing some extra data in a form. I don't want to have if(brand === x) { ... } else { ... } all over the site, especially as most of what needs to be changed can be handled by extending the existing class. I've thought of several methods that could be used to instantiate the correct class, but I'm just not sure which is going to be best, especially as some seem to lead to duplication of more code than should be necessary. Here's what I've considered:

    1) Use a static loader, similar to Zend_Loader, which takes the class being requested, knows the brand, and can then return the correct object (see the sketch below): $class = App_Loader::getObject('User', $brand);

    2) Factory classes. We use these in the application already for Products, but we could utilise them here also to provide a transparent interface to the class.

    3) Routing the page request to a specific brand controller. This, however, seems like it would duplicate a lot of code/logic.

    Is there a pattern or something else I should be considering to solve this problem?

    4) How do you manage a growing project that has multiple custom instances in production?

    Update: This is a PHP application, so the decisions on which class to load are made per request. There could be upwards of 100+ different 'brands' running.
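
    A minimal sketch of what option 1 could look like (this is not from the original post; the class names and the "acme" brand are hypothetical):

        <?php
        class App_User {}                          // default implementation
        class App_Acme_User extends App_User {}    // brand-specific override for the "acme" brand

        // Hypothetical static loader: map a logical name plus the current brand to a
        // concrete class, falling back to the default implementation when no
        // brand-specific subclass exists.
        class App_Loader
        {
            public static function getObject($class, $brand)
            {
                $brandClass   = 'App_' . ucfirst($brand) . '_' . $class;  // e.g. App_Acme_User
                $defaultClass = 'App_' . $class;                          // e.g. App_User
                $name = class_exists($brandClass) ? $brandClass : $defaultClass;
                return new $name();
            }
        }

        // Usage: calling code never needs to know which brand it is running as.
        $user = App_Loader::getObject('User', 'acme');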

    Read the article

  • 403 in Response to OPTIONS when updating working copy having full access

    - by user23419
    There is an SVN repository (single repository):

        http://example.net/svn

    The repository contains several projects (directories):

        http://example.net/svn/Project1
        http://example.net/svn/Project2

    The user has full access to the Project1 directory and has no access to the root or to Project2. Everything works fine for a while: the user checks out http://example.net/svn/Project1, commits and updates it successfully. But sometimes trying to update leads to the following error:

        Command: Update
        Error: Server sent unexpected return value (403 Forbidden) in response to OPTIONS
        Error: request for 'http://example.net/svn'
        Finished!

    Why does TortoiseSVN request something in the root? I have noticed that this happens after somebody else has committed a copy or move operation. Checking out http://example.net/svn/Project1 again helps till the next time...

    The main question: how do I set up access rights for the user to avoid these errors? Note: it's not an option to grant the user any read or write access on the root directory, for security reasons.

    Read the article

  • Ganglia and how it communicates

    - by MikeKulls
    I'm a little confused about how Ganglia sends information around, and I have found conflicting information on the web. I would have thought that the gmond process would either send info to gmetad at a regular interval, or gmetad would request info from the various instances of gmond. At least one online article states this is how it works, but from what I understand this is incorrect. It appears that you configure all gmond processes to send their info to a central gmond process, and gmetad will request info from that central gmond. Is that correct?

    In my case I have 6 servers sending their information to 1 central server. If I set gmetad to get its information from the central server, then I get information from all 6 servers, all good. The weird thing is that if I point gmetad to one of the 6 servers, then I still get info from all 6 servers. How is it that server 1 in my cluster is getting stats from all the other servers?
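
    For context, a rough sketch of the topology being described (this is not from the original question; the hostnames and cluster name are placeholders, and it assumes Ganglia 3.x-style config files):

        # gmond.conf fragment on each of the 6 reporting nodes:
        # every node sends its metrics to the central collector.
        udp_send_channel {
          host = central.example.com
          port = 8649
        }

        # gmetad.conf fragment on the monitoring host:
        # poll the central gmond for the whole cluster's state.
        data_source "my cluster" central.example.com:8649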

    Read the article

  • LDAP installed, running, but can't connect remotely [Ubuntu 10.10]

    - by Casey Jordan
    Hi all, I installed LDAP on my Ubuntu 10.10 system, using the tutorial found here: https://help.ubuntu.com/10.10/serverguide/C/openldap-server.html

    Everything seems to be working well; when logged into the server via SSH I can run commands like:

        > ldapsearch -xLLL -b "dc=easydita,dc=com" uid=john sn givenName cn
        dn: uid=john,ou=people,dc=easydita,dc=com
        sn: Doe
        givenName: John
        cn: John Doe

    So I think that's a good sign that things are working well. However, I have had zero luck connecting to the server remotely via GUI tools or the command line. I have tried JXplorer and an LDAP administration tool. Running commands like this:

        > ldapsearch -xLLL -W -H ldap://ice.rit.edu -d1 "dc=easydita,dc=com"
        ldap_url_parse_ext(ldap://ice.rit.edu)
        ldap_create
        ldap_url_parse_ext(ldap://ice.rit.edu:389/??base)
        Enter LDAP Password:
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP ice.rit.edu:389
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 127.0.0.1:389
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({i) ber:
        ber_flush2: 34 bytes to sd 3
        ldap_result ld 0xb8940170 msgid 1
        wait4msg ld 0xb8940170 msgid 1 (infinite timeout)
        wait4msg continue ld 0xb8940170 msgid 1 all 1
        ** ld 0xb8940170 Connections:
        * host: ice.rit.edu port: 389 (default)
        refcnt: 2 status: Connected
        last used: Thu Mar 17 19:42:29 2011
        ** ld 0xb8940170 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0xb8940170 request count 1 (abandoned 0)
        ** ld 0xb8940170 Response Queue:
        Empty
        ld 0xb8940170 response count 0
        ldap_chkResponseList ld 0xb8940170 msgid 1 all 1
        ldap_chkResponseList returns ld 0xb8940170 NULL
        ldap_int_select
        read1msg: ld 0xb8940170 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 16 contents:
        read1msg: ld 0xb8940170 msgid 1 message type bind
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0xb8940170 0 new referrals
        read1msg: mark request completed, ld 0xb8940170 msgid 1
        request done: ld 0xb8940170 msgid 1
        res_errno: 49, res_error: <>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_err2string
        ldap_bind: Invalid credentials (49)

    I am pretty sure that I set up the admin password correctly, but the tutorial was not very specific about that. (Also, I could not find instructions on how to reset the admin password.)

    Additional info: I was told that this file might hold important information, so I will post it. /etc/ldap/slapd.d/cn=config/olcDatabase={0}config.ldif:

        dn: olcDatabase={0}config
        objectClass: olcDatabaseConfig
        olcDatabase: {0}config
        olcAccess: {0}to * by dn.exact=cn=localroot,cn=config manage by * break
        olcRootDN: cn=admin,cn=config
        structuralObjectClass: olcDatabaseConfig
        entryUUID: eca09490-e524-102f-87c5-17d7a82e8985
        creatorsName: cn=config
        createTimestamp: 20110317205733Z
        entryCSN: 20110317205733.193089Z#000000#000#000000
        modifiersName: cn=config
        modifyTimestamp: 20110317205733Z

    Given that I seem to have this almost set up correctly, are there any steps I can take to correct this? Thanks, Casey

    Read the article

  • xinet vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java-based web server on port 80. The options are:

    - Web proxy (Apache, nginx etc.)
    - xinet
    - iptables
    - setuid

    The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives, so new connections are made for every proxied request. xinet is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.
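
    For reference, a minimal sketch of the iptables option (this is not from the original question; the backend port 8080 is an assumed example):

        # Redirect incoming TCP port 80 to the unprivileged port the Java server
        # actually listens on (assumed here to be 8080), so the JVM never needs root.
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
        # Optionally do the same for connections originating on the host itself.
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080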

    Read the article

  • VB like with keywords in C#

    - by Tanzim Saqib
    AspectF is an open source utility which offers separation of concerns in a fluent way. I am personally a big fan as well as a contributor to this project. It is very simple, easy to implement, and an excellent way to incorporate regular everyday logic into your business code from one single class, AspectF. I have added a couple of new features to it, which are yet to be committed to source control. However, one feature that I have introduced today is the ability to write a VB-like "with" keyword...(read more)

    Read the article

  • Apache redirecting: reason unknown

    - by Sinan
    I have a simple PHP script. The script is not important; it just prints out $_SERVER. When I request a URL like www.server.com/?ref=bar everything is fine. However, if the request contains something like www.server.com/?ref=http://www.test.com (?ref=http%3A%2F%2Fwww.test.com), the server redirects to 403.shtml. There is no redirect for http://x, but it redirects for http://x.y. As far as I can understand, somehow the server doesn't like "http://x". It always redirects to 403.shtml when there is a valid query string in the form of a valid URL.

    My .htaccess file is the same both on my server and on my local test server, and the local test server behaves as expected (no redirects). So I don't think it is related to .htaccess. I'm on a shared host at Hostgator. Can anyone help?

    Edit: Here's the .htaccess file:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?$1 [L,QSA]

    When there is an http://xx.x in the query string it redirects to 403 even if there is a physical file. However, if I remove the .htaccess, the redirect to 403 also disappears. But I need the above .htaccess file. Is there a way to get around this?

    Read the article

  • Why can't I configure my touchpad?

    - by Jorge Castro
    I want to disable my touchpad's "tap to click" feature. Searching for this on the internet comes up with suggestions to shut it off in the "Mouse" preferences; however, I am missing the touchpad tab that people keep talking about. The closest official documentation I've been able to find is here. The touchpad also doesn't show up in gpointing-device-settings, and unchecking tap_to_click in the /desktop/gnome/peripherals/touchpad/ gconf key doesn't seem to do anything.

    Read the article

  • Using SET NULL and SET DEFAULT with Foreign Key Constraints

    Cascading updates and deletes, introduced with SQL Server 2000, were such an important feature that it is hard to imagine providing referential integrity without them. One of the new features in SQL Server 2005 that hasn't gotten a lot of press, from what I've read, is the new set of options for the ON DELETE and ON UPDATE clauses: SET NULL and SET DEFAULT. Let's take a look!
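
    For illustration (this example is not from the article; the table names are made up), the new referential actions look like this in T-SQL:

        -- Deleting a customer now sets Orders.CustomerID to NULL instead of failing
        -- or cascading the delete; SET DEFAULT works the same way, but assigns the
        -- column's default value instead.
        CREATE TABLE Customers (
            CustomerID int PRIMARY KEY
        );

        CREATE TABLE Orders (
            OrderID    int PRIMARY KEY,
            CustomerID int NULL
                CONSTRAINT FK_Orders_Customers
                FOREIGN KEY REFERENCES Customers (CustomerID)
                ON DELETE SET NULL
        );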

    Read the article

  • How to convince my boss that quality is a good thing to have in code?

    - by Kristof Claes
    My boss came to me today to ask me if we could implement a certain feature in 1.5 days. I had a look at it and told him that 2 to 3 days would be more realistic. He then asked me: "And what if we do it quick and dirty?" I asked him to explain what he meant by "quick and dirty".

    It turns out he wants us to write code as quickly as humanly possible by (for example) copying bits and pieces from other projects, putting all code in the code-behind of the WebForms pages, no longer caring about DRY and SOLID, and assuming that the code and functionality will never ever have to be modified or changed. What's even worse, he doesn't want us to do it for just this one feature, but for all the code we write.

    "We can make more profit when we do things quick and dirty. Clients don't want to pay for you taking into account that something might change in the future. The profits for us are in delivering code as quickly as possible. As long as the application does what it needs to do, the quality of the code doesn't matter. They never see the code."

    I have tried to convince him that this is a bad way to think as the manager of a software company, but he just wouldn't listen to my arguments:

    - Developer motivation: I explained that it is hard to keep developers motivated when they are constantly under the pressure of unrealistic deadlines and budgets to write sloppy code very quickly.
    - Readability: when a project gets passed on to another developer, cleaner and better-structured code will be easier to read and understand.
    - Maintainability: it is easier, safer and less time-consuming to adapt, extend or change well-written code.
    - Testability: it is usually easier to test and find bugs in clean code.

    My co-workers are as baffled as I am by my boss's standpoint, but we can't seem to get through to him. He keeps saying that by doing things more quickly, we can sell more projects, ask a lower price for them and still make a bigger profit. And in the end these projects pay the developers' salaries.

    What more can I say to make him see he is wrong? I want to buy him copies of Peopleware and The Mythical Man-Month, but I have a feeling they won't change his mind either. A lot of you will probably say something like "Run! Get out of there now!" or "I'd quit!", but that's not really an option since .NET web development jobs are rather rare in the region where I live...

    Read the article

  • Google Drive SDK: Publishing your website on Google Drive

    Google Drive SDK: Publishing your website on Google Drive - In this session we'll show you a new feature of the Google Drive SDK: the ability to publish a website from a Google Drive folder. We'll show you a brief demo involving cute animals, but don't worry, no kittens will be harmed in the process! From: GoogleDevelopers - Time: 03:30:00

    Read the article

  • Can't log in via SSH to any accounts set to use /bin/bash as a default shell

    - by Gui Ambros
    I'm trying to install bash as the default shell on an ARM Linux running on an embedded device (a Synology DS212+ NAS). But there's something really wrong, and I can't figure out what it is. Symptoms:

    1) Root has /bin/bash as the default shell, and can log in normally via SSH:

        $ grep root /etc/passwd
        root:x:0:0:root:/root:/bin/bash
        $ ssh root@NAS
        root@NAS's password:
        Last login: Sun Dec 16 14:06:56 2012 from desktop
        #

    2) joeuser has /bin/bash as the default shell, and receives "Permission denied" when trying to log in via SSH:

        $ grep joeuser /etc/passwd
        joeuser:x:1029:100:Joe User:/home/joeuser:/bin/bash
        $ ssh joeuser@localhost
        joeuser@NAS's password:
        Last login: Sun Dec 16 14:07:22 2012 from desktop
        Permission denied, please try again.
        Connection to localhost closed.

    3) Changing joeuser's shell back to /bin/sh:

        $ grep joeuser /etc/passwd
        joeuser:x:1029:100:Joe User:/home/joeuser:/bin/sh
        $ ssh joeuser@localhost
        Last login: Sun Dec 16 15:50:52 2012 from localhost
        $

    To make things even more strange, I can log in as joeuser using /bin/bash via the serial console (!). Also, su - joeuser as root works fine, so the bash binary itself is working fine. In an act of despair, I changed joeuser's uid to 0 in /etc/passwd, but that also didn't work, so it doesn't seem to be anything permission related. It seems that bash is doing some extra checking that sshd doesn't like, and blocking the connections for non-root users. Maybe some sort of sanity checking - or terminal emulation - that is triggering the SIGCHLD, but only when called via ssh.

    I already went through every single item in sshd_config, and also put sshd in debug mode, but didn't find anything strange. Here's my /etc/ssh/sshd_config:

        LogLevel DEBUG
        LoginGraceTime 2m
        PermitRootLogin yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile %h/.ssh/authorized_keys
        ChallengeResponseAuthentication no
        UsePAM yes
        AllowTcpForwarding no
        ChrootDirectory none
        Subsystem sftp internal-sftp -f DAEMON -u 000

    And here's the output from /usr/syno/sbin/sshd -d, showing the failed attempt of joeuser trying to log in with /bin/bash as the shell:

        debug1: Config token is loglevel
        debug1: Config token is logingracetime
        debug1: Config token is permitrootlogin
        debug1: Config token is rsaauthentication
        debug1: Config token is pubkeyauthentication
        debug1: Config token is authorizedkeysfile
        debug1: Config token is challengeresponseauthentication
        debug1: Config token is usepam
        debug1: Config token is allowtcpforwarding
        debug1: Config token is chrootdirectory
        debug1: Config token is subsystem
        debug1: HPN Buffer Size: 87380
        debug1: sshd version OpenSSH_5.8p1-hpn13v11
        debug1: read PEM private key done: type RSA
        debug1: private host key: #0 type 1 RSA
        debug1: read PEM private key done: type DSA
        debug1: private host key: #1 type 2 DSA
        debug1: read PEM private key done: type ECDSA
        debug1: private host key: #2 type 3 ECDSA
        debug1: rexec_argv[0]='/usr/syno/sbin/sshd'
        debug1: rexec_argv[1]='-d'
        Set /proc/self/oom_adj from 0 to -17
        debug1: Bind to port 22 on ::.
        debug1: Server TCP RWIN socket size: 87380
        debug1: HPN Buffer Size: 87380
        Server listening on :: port 22.
        debug1: Bind to port 22 on 0.0.0.0.
        debug1: Server TCP RWIN socket size: 87380
        debug1: HPN Buffer Size: 87380
        Server listening on 0.0.0.0 port 22.
        debug1: Server will not fork when running in debugging mode.
        debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9
        debug1: inetd sockets after dupping: 4, 4
        Connection from 127.0.0.1 port 52212
        debug1: HPN Disabled: 0, HPN Buffer Size: 87380
        debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1-hpn13v11
        SSH: Server;Ltype: Version;Remote: 127.0.0.1-52212;Protocol: 2.0;Client: OpenSSH_5.8p1-hpn13v11
        debug1: match: OpenSSH_5.8p1-hpn13v11 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1-hpn13v11
        debug1: permanently_set_uid: 1024/100
        debug1: MYFLAG IS 1
        debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: AUTH STATE IS 0
        debug1: REQUESTED ENC.NAME is 'aes128-ctr'
        debug1: kex: client->server aes128-ctr hmac-md5 none
        SSH: Server;Ltype: Kex;Remote: 127.0.0.1-52212;Enc: aes128-ctr;MAC: hmac-md5;Comp: none
        debug1: REQUESTED ENC.NAME is 'aes128-ctr'
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: expecting SSH2_MSG_KEX_ECDH_INIT
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user joeuser service ssh-connection method none
        SSH: Server;Ltype: Authname;Remote: 127.0.0.1-52212;Name: joeuser
        debug1: attempt 0 failures 0
        debug1: Config token is loglevel
        debug1: Config token is logingracetime
        debug1: Config token is permitrootlogin
        debug1: Config token is rsaauthentication
        debug1: Config token is pubkeyauthentication
        debug1: Config token is authorizedkeysfile
        debug1: Config token is challengeresponseauthentication
        debug1: Config token is usepam
        debug1: Config token is allowtcpforwarding
        debug1: Config token is chrootdirectory
        debug1: Config token is subsystem
        debug1: PAM: initializing for "joeuser"
        debug1: PAM: setting PAM_RHOST to "localhost"
        debug1: PAM: setting PAM_TTY to "ssh"
        debug1: userauth-request for user joeuser service ssh-connection method password
        debug1: attempt 1 failures 0
        debug1: do_pam_account: called
        Accepted password for joeuser from 127.0.0.1 port 52212 ssh2
        debug1: monitor_child_preauth: joeuser has been authenticated by privileged process
        debug1: PAM: establishing credentials
        User child is on pid 9129
        debug1: Entering interactive session for SSH2.
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 65536 max 16384
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype [email protected] want_reply 0
        debug1: server_input_channel_req: channel 0 request pty-req reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req pty-req
        debug1: Allocating pty.
        debug1: session_new: session 0
        debug1: session_pty_req: session 0 alloc /dev/pts/1
        debug1: server_input_channel_req: channel 0 request shell reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req shell
        debug1: Setting controlling tty using TIOCSCTTY.
        debug1: Received SIGCHLD.
        debug1: session_by_pid: pid 9130
        debug1: session_exit_message: session 0 channel 0 pid 9130
        debug1: session_exit_message: release channel 0
        debug1: session_by_tty: session 0 tty /dev/pts/1
        debug1: session_pty_cleanup: session 0 release /dev/pts/1
        Received disconnect from 127.0.0.1: 11: disconnected by user
        debug1: do_cleanup
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: PAM: closing session
        debug1: PAM: deleting credentials

    Here you have the full output of sshd -dd, together with ssh -vv.

    Bash:

        # bash --version
        GNU bash, version 3.2.49(1)-release (arm-none-linux-gnueabi)
        Copyright (C) 2007 Free Software Foundation, Inc.

    The bash binary was cross-compiled from source. I also tried using a pre-compiled binary from the Optware distribution, but had the exact same problem. I checked for missing shared libraries using objdump -x, but they're all there. Any ideas what could be causing this "Permission denied, please try again."? I'm almost diving into the bash source code to investigate, but I'm trying to avoid hours chasing something that may be silly.

    Read the article

  • Reduce Repetitive Initialization Code in C++ Applications by Using Delegating Constructors

    You're often required to repeat identical pieces of initialization code in every constructor of a class that declares multiple constructors. That's because, unlike a few other programming languages, C++ doesn't allow a constructor to call another constructor of the same class. Luckily, this problem is about to disappear with the recent approval of a new C++0x feature called delegating constructors, which is explained in this C++ tutorial.
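
    By way of illustration (this example is not from the article; the Logger class is made up), a delegating constructor lets the shorter constructors forward to the one that does the real work:

        #include <string>
        #include <utility>

        class Logger {
            std::string name_;
            int level_;
        public:
            Logger(std::string name, int level)              // the "real" constructor
                : name_(std::move(name)), level_(level) {}

            // Delegating constructors (C++0x/C++11): forward to the two-argument
            // constructor instead of repeating the initialization code.
            explicit Logger(std::string name) : Logger(std::move(name), 0) {}
            Logger() : Logger("default") {}
        };

        int main() {
            Logger a;                    // delegates: () -> (name) -> (name, level)
            Logger b("network", 3);      // calls the full constructor directly
        }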

    Read the article

  • What is the technical reason that so many social media sites don't allow you to edit your text?

    - by Edward Tanguay
    A common complaint I hear about Facebook, Twitter, Ning and other social sites is that once a comment or post is made, it can't be edited. I think this goes against one of the key goals of user experience: giving the user agency, the ability to control what he does in the software. Even on Stack Exchange sites, you can only edit comments for a certain amount of time. Is the fact that so many web apps don't allow users to edit their writing a technical shortcoming or a "feature by design"?

    Read the article

  • 2D game editor with SDK or open format (Windows)

    - by Edward83
    I need a 2D editor (Windows) for an RPG-like game. The most important features for me:

    - Load tiles as classes with attributes, for example "tile1 with coordinates [25,30] is an object of class FlyingMonster with speed=1.0f";
    - Export the map to my own format (SDK) or to an open format which I can convert to my own.

    A nice extra feature would be a multi-tile brush: I want to select one or more tiles into a single brush and paint them across the canvas.

    Read the article

  • serving static file from cookieless domain: alternative cookieless directory

    - by Simone Nigro
    I'm trying to follow all the guidelines of Google Page Speed. The directive "Minimize request overhead" requires static content (images, JS, CSS, etc.) to be served from a static, i.e. cookieless, server: https://developers.google.com/speed/docs/best-practices/request

    I do not want to buy a new server, and I was thinking of just making one directory of my site cookieless with .htaccess (www.mysite.com/static/.htaccess):

        Header unset Cookie
        Header unset Set-Cookie

    I do not know if this can be problematic. Looking on Google, it seems that no one has ever adopted this type of solution, so I suspect it is incorrect. What do you think? Alternatively, I could do this in www.mysite.com/.htaccess:

        <FilesMatch "\.(css|js|jpg|png|gif)$">
            Header unset Cookie
            Header unset Set-Cookie
        </FilesMatch>

    Read the article
