Search Results

Search found 11306 results on 453 pages for 'methods'.

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes, and that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server lives, and essentially have a second instance pick up the checking load (active/passive, active/active, so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping or a timeout as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where a worker reports a service check as critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical, before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?
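
    On the "two vantage points must agree" part, one option worth sketching is the stock check_cluster plugin on the central server: define one ping service per location against the same target (with notifications disabled) and a third "consensus" service that only goes critical when both underlying checks are non-OK. This is only a sketch under assumed host/service names, not DNX-specific configuration; how each location-specific check ends up on the right worker is left to your distributed setup:

        # two location-specific pings against the same target; no alerts from these directly
        define service {
            use                   generic-service
            host_name             webserver01
            service_description   PING-from-eu
            check_command         check_ping!200.0,20%!600.0,60%
            notifications_enabled 0
        }
        # (a second service, PING-from-us, defined the same way but executed from the other AZ)

        # consensus service: warning when 1 ping is non-OK, critical only when both are
        define command {
            command_name  check_service_cluster
            command_line  $USER1$/check_cluster --service -l $ARG1$ -w $ARG2$ -c $ARG3$ -d $ARG4$
        }
        define service {
            use                  generic-service
            host_name            webserver01
            service_description  PING-consensus
            check_command        check_service_cluster!"webserver01 ping"!1!2!$SERVICESTATEID:webserver01:PING-from-eu$,$SERVICESTATEID:webserver01:PING-from-us$
        }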

  • Running computer from separate room

    - by Dan
    I want my computer to be in the basement but to use it on the first floor. Which cables should I run through the floor? Can some connections be wireless, or are there other methods? Here are some of the options I've thought of:
    - Basic: run DVI, USB (mouse), USB (keyboard), and an audio cable (4 cables).
    - USB hub option: run DVI and 1 USB cable, then use a USB hub to split it into mouse, keyboard, maybe even audio (2-3 cables).
    - HDMI option: if I get a new video card and monitor that support HDMI, would I be able to run both audio and video through it? Would the monitor have to have an audio out? Also, there is a lot of extra bandwidth in HDMI cables; could I send two monitors over 1 cable, or would I have to use 2 cables? How about sending mouse/keyboard through the HDMI cable? I see a lot of monitors with USB hubs built in, but I assume I'd still have to wire HDMI + 1 USB cable to use the USB hub?
    - X terminal machine/thin client: I don't really know much about this option. Not sure if it would allow me to run graphics acceleration and watch movies; does anyone know more about what this would let me do?
    - Other options: any other ways to do this? Can any of this be wireless?

  • Updating ASUS BIOS on Windows 7 64bit

    - by joesavage
    I've recently decided that I want to try and utilize my two graphics cards so that I can have a dual monitor setup. Unfortunately Windows only seems to notice my most recent graphics card installation - and so I've been told that I should look into my BIOS and try to enable two graphics cards. I could not find this setting anywhere in my M2N68-AM Plus v0210 BIOS. After some further research I figured that I should perhaps upgrade my BIOS, so I searched and managed to download the latest version (v1804) as a ROM file. However I am having difficulty figuring out how to install it. I've tried using the Asus EZFlash feature built into my BIOS, but when trying to load up a variety of different ROMs that are for my motherboard/BIOS I get the error: Boot block in file is not valid! I'm not totally sure what I should do to fix this, so I'm looking into other methods of upgrading my BIOS - however I can't really find any solutions that seem to work. Asus Update is for 32-bit only, AFUDOS doesn't appear to work on my Windows 7 64-bit system (I think it's supposed to run in DOS or something - but that just sounds confusing since I know nothing about DOS). Could anybody help me with this?

  • Can I nest a command string within another command string?

    - by Zach L
    Whenever I run the following command in an elevated command prompt, I get the 0x80070005 Access Denied error code. I'm assuming it's a permissions error for the child shell. I'm running the command in an elevated prompt on Windows 7 Pro SP1.

        FORFILES /P %WINDIR%\servicing\Packages /M Microsoft-Windows-InternetExplorer-* 9.*.mum /c "cmd /c echo Uninstalling package @fname && start /w pkgmgr /up:@fname /norestart"

    Can I place the "Runas" command within the already nested command in order to run the child shell as an admin? I don't think I can, because of conflicts with quotation mark locations. If there's another way to do this, such as via a batch file, I'm open to alternative methods, although I do prefer running it as a single string. Sidenote 1: ignore the space after the first asterisk in the command string; it was added only for aesthetics & accuracy. Sub-question: could I use this "fix" to circumvent the problem entirely: Prompt as Administrator? Reference for Runas #1, Reference for Runas #2.
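
    One way around the quoting conflict - a sketch, not a tested recipe - is to move the inner command into a small helper batch file, so FORFILES only has to pass @fname as an argument and no nested quotes are needed. The helper's name and path are made up:

        rem C:\scripts\remove-ie9-pkg.cmd  (hypothetical helper)
        @echo off
        echo Uninstalling package %1
        start /w pkgmgr /up:%1 /norestart

        rem then, from the elevated prompt, still a single string:
        FORFILES /P %WINDIR%\servicing\Packages /M Microsoft-Windows-InternetExplorer-*9.*.mum /c "cmd /c C:\scripts\remove-ie9-pkg.cmd @fname"

    Since the helper inherits the elevation of the prompt that launched FORFILES, runas shouldn't be needed just to keep admin rights.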

  • Must have local user to authenticate Samba to AD?

    - by Phil
    I've got a CentOS 5.3 server with Samba running. I've joined this server to my domain in the hopes of allowing AD users some access to my Samba shares. I've found that this works, but only as long as the AD username that I'm trying to authenticate with is also a local user on the server. In other words, if I'm trying to access a share and try to authenticate with the AD username "joe", I get errors unless I create a user named 'joe' on the server. I don't have to create a matching password or anything - the local user's password is always blank - so I do know that the authentication is actually happening against the AD. Here's my smb.conf file:

        [global]
        workgroup = <mydomain>
        server string = <snip>
        netbios name = HOME
        security = ADS
        realm = <mydomain.com>
        password server = <snip>
        auth methods = winbind
        log level = 1
        log file = /var/log/samba/%m.log

        [amore]
        path = /var/www/amore
        browseable = yes
        writable = yes
        valid users = DOMAIN\user1 DOMAIN\user2 DOMAIN\user3 DOMAIN\user4

    I would assume that my kerberos settings are fine, as I've joined the domain and can use wbinfo to see users and groups. However, I can provide that info if necessary. Anyone have any ideas?
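
    For what it's worth, this symptom (domain auth works, but only for names that also exist locally) is usually down to winbind not mapping domain accounts to Unix IDs. A sketch of the usual additions for Samba 3.0.x on CentOS 5 follows - the ID ranges and the "winbind use default domain" choice are assumptions, adjust to taste:

        # /etc/samba/smb.conf - [global] additions
        idmap uid = 10000-20000
        idmap gid = 10000-20000
        winbind use default domain = yes
        winbind enum users = yes
        winbind enum groups = yes
        template shell = /bin/false

        # /etc/nsswitch.conf
        passwd: files winbind
        group:  files winbind

        # then restart and confirm the mapping works:
        #   service winbind restart && service smb restart
        #   getent passwd MYDOMAIN\\user1

    If getent can resolve the domain user, Samba no longer needs a matching local account.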

  • New AD user request form and workflow

    - by user66390
    I'm wondering if anyone is providing a solid solution for creating New Network User Account Request forms, and attaching workflows to them to automate account creation? I'm currently investigating a number of options, but am surprised that such a ubiquitous task hasn't been solved a dozen times over and thoroughly documented. Or at least isn't integrated into current off-the-shelf change management and ticketing systems. Ideally, I'd like for our current ticketing system, ServiceDesk+ to present a standard 'New User' form to department heads, which they can fill in with the required new user details. This triggers a workflow that submits the request as a ticket that can be reviewed and actioned. Actioning the ticket triggers a workflow that creates a user in AD with the details provided, and notifies the department head upon completion. All told, a pretty standard requirement that I'm sure most organizations have. What are other people doing to accomplish this? Edit: I should add, I'm more looking for "supported" methods. As is, I've submitted a number of scripted solutions, none of which have met with manager approval.
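
    For the "actioning" step, most ticketing systems that can run a script on approval can hand the form fields to something like the PowerShell sketch below (ActiveDirectory module from RSAT). The OU, UPN suffix and mail server are placeholders, not anything ServiceDesk+-specific:

        # new-user.ps1 - minimal sketch of the provisioning step (OU/domain names are hypothetical)
        param(
            [Parameter(Mandatory=$true)][string]$GivenName,
            [Parameter(Mandatory=$true)][string]$Surname,
            [Parameter(Mandatory=$true)][string]$SamAccountName,
            [Parameter(Mandatory=$true)][string]$ManagerEmail
        )
        Import-Module ActiveDirectory

        $password = Read-Host -Prompt "Initial password" -AsSecureString

        New-ADUser -Name "$GivenName $Surname" `
            -GivenName $GivenName -Surname $Surname `
            -SamAccountName $SamAccountName `
            -UserPrincipalName "$($SamAccountName)@corp.example.com" `
            -Path "OU=Staff,DC=corp,DC=example,DC=com" `
            -AccountPassword $password `
            -ChangePasswordAtLogon $true `
            -Enabled $true

        # notify the requesting department head (SMTP server is an assumption)
        Send-MailMessage -To $ManagerEmail -From "it@corp.example.com" `
            -Subject "Account $SamAccountName created" `
            -Body "The requested account has been created." `
            -SmtpServer "mail.corp.example.com"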

  • Simulate a DFS share for a user not on domain with a folder in path

    - by user223655
    I have a consultant whose computer is not on the domain and needs to access various network resources. Adding his computer to the domain is a difficult bureaucratic process (and would stop much of his development software from even running, given the domain restrictions), but we can give him credentials to access network resources. As such, he accesses various network resources via NET USE etc. without using DFS. There is one piece of software which requires him to have the same hardcoded path as other domain users, but that path is a DFS path which he can't map (i.e., the software checks the path at runtime and will only run if it matches the registered path; it rejects a conventional machine path in place of the DFS one). I was wondering if there's some method to simulate the DFS path without actually using DFS. For example, the path the software needs to see is "\\ABC\DFS\software\app.exe", whereas the non-DFS path is "\\DEF\Software\app.exe". While I could use his hosts file to make the name ABC resolve to DEF, I'm not sure if I can somehow make it point there with the DFS "folder" as well. Are there any methods for this, short of making changes to AD to allow him to use DFS or adding him to the domain (both of which are politically/technically challenging, sadly)? Thanks guys
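
    If changing AD really is off the table, one thing to experiment with is exactly that alias idea: make ABC resolve to DEF, and publish a share called DFS on DEF whose contents mimic the expected layout via a junction. This is only a sketch - SMB can be picky about aliased names (strict name checking / Kerberos SPN issues, sometimes worked around with the LanmanServer DisableStrictNameChecking registry value), and every path and address below is invented:

        rem on server DEF (as admin): build a folder that looks like the DFS root and share it as "DFS"
        mkdir D:\FakeDFSRoot
        mklink /J D:\FakeDFSRoot\software D:\Shares\Software
        net share DFS=D:\FakeDFSRoot /grant:EVERYONE,READ

        rem on the consultant's PC: add a hosts entry so ABC resolves to DEF's address
        rem   (C:\Windows\System32\drivers\etc\hosts)   10.0.0.5   ABC
        net use \\ABC\DFS /user:DOMAIN\consultant *

    With that in place, "\\ABC\DFS\software\app.exe" resolves to the real files on DEF without DFS being involved.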

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total: three of them are Macs, and the rest are Windows (1 Vista, 4 Win7 and 2 XPs). I'm very open to what the method should be, but you should also consider the following:
    - very limited resources;
    - quite "small" bandwidth (4 MBs download for everyone and 0.4 MBs upload - yep, that's it - though this might get a little bit better).
    One of the main things to back up would be the mail: all the Windows computers use Outlook, mainly 2003, and one Mac uses Outlook too (for Mac of course - not 2011 yet). We also have to back up the files: not a huge amount, very few very big files, very organized (by machine). What I would like is to hear your opinions as to which would be the best method (or combination of methods - preferably one, of course). We are not sure what we need and I'm open to suggestions; an online (cloud-based) option would be great, but remember the bandwidth is unbearable. Last thing to consider is that we would like to do weekly backups (unless the method is very easy, of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update - please ask for any clarification! Please avoid answers like "upgrade everything to Windows 7 and throw away your Macs" :) Ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it for a lot of reasons.

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/MSI. Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise environment on a very short notice and delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate', 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavours with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn the Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question: the contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practice for deploying, configuring and updating Microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third-party stuff like PolicyPak. Are there solid reasons that I've not found yet why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on it. Thoughts? Or weblinks? Thanks!

  • How to short-circuit Eclipse AutoComplete when "by relevance" Sorting is enabled?

    - by JRtim
    I know there are a number of autocomplete articles scattered around Stack Overflow and Eclipse-related communities. I haven't seen a solution that I have been able to make work. Essentially it goes like this: I like sorting by relevance (since it usually has a decent priority), but sometimes I want to short-circuit the autocomplete without having to hit x 2. If I have a class that has a couple of methods like getThis(), getThat() and get(), and Eclipse picks up that I usually use getThis(), then every once in a while I want the get() method. So I type "Class", "period", "get(" -- whoops, Eclipse just filled in getThis(). When the sorting is alphabetical, this isn't an issue, but "by relevance" is a nice default. Just wondering if I'm missing the boat on something. Obviously one solution is to increase the latency on the autocomplete popup, but get() might just be an example; there might be a situation where the method I want takes a few parameters and is maybe more than 3 keystrokes away. Thank you.

  • postfix email gateway

    - by k-h
    I am setting up a postfix email gateway. It will not hold any mail, but will accept email for my domain and forward it to another internal mail server, and relay mail out from the internal server. One of the main problems is that I am working on a live running system and this will be an upgrade, so I am using a test domain which I will change at some point to the real domain. I tried various methods, but found the simplest way (that worked) was to use a script to create an aliases file (from LDAP entries). There are various problems with this method, the main one being that the entries can't be of the simple form [email protected], because the gateway doesn't know where to send them; they have to be of the form [email protected]. What I would like doesn't seem hard, but I can't get my head around the postfix documentation. There seem to be various ways, but none of them seem to work. Most of the examples I have found on the web assume the mail is going to end up on the server. I want a list of users somewhere, preferably of the form user1, user2, etc. rather than [email protected] (I can easily generate this list), and I would like postfix to forward all email for example.com to a particular server, i.e. realmailserver.example.com. Can anyone suggest clues as to how I might do this?
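
    For reference, the usual relay-only-gateway setup uses relay_domains plus a transport map and relay_recipient_maps - and the recipient map is exactly where a generated user list fits. A sketch with placeholder addresses:

        # /etc/postfix/main.cf (gateway)
        relay_domains = example.com
        transport_maps = hash:/etc/postfix/transport
        relay_recipient_maps = hash:/etc/postfix/relay_recipients
        # let the internal server relay outbound mail through this box
        mynetworks = 127.0.0.0/8, 192.168.1.10/32

        # /etc/postfix/transport
        example.com    smtp:[realmailserver.example.com]

        # /etc/postfix/relay_recipients (generated from the user1, user2, ... list)
        user1@example.com    OK
        user2@example.com    OK

        # rebuild the maps and reload
        #   postmap /etc/postfix/transport /etc/postfix/relay_recipients && postfix reload

    The square brackets in the transport entry stop postfix from doing an MX lookup on the internal hostname.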

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "drop folder" on a Windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all the methods I've found to do this use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at that point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file. The last access time is updated too frequently; it seems that just opening a directory in Windows Explorer will update the last access time. Anyone know of a solution to this? I'm thinking that cataloguing the hash of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution, but taking hashes of files can be time-consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here - File Server Resource Manager, PowerShell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time, which, as described, do not fit the above needs.
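
    A cheaper variant of the cataloguing idea is to record when each file is first seen (by path) and expire on that timestamp, so nothing ever needs hashing. A rough PowerShell sketch meant for a daily scheduled task - the share, catalog path and 7-day limit are placeholders:

        # expire-dropfolder.ps1 - delete files N days after they first appear in the share
        $DropFolder  = '\\server\DropFolder'              # placeholder
        $CatalogPath = 'C:\Scripts\dropfolder-seen.csv'   # placeholder
        $MaxAgeDays  = 7

        $now  = Get-Date
        $seen = @{}
        if (Test-Path $CatalogPath) {
            Import-Csv $CatalogPath | ForEach-Object { $seen[$_.Path] = [datetime]$_.FirstSeen }
        }

        Get-ChildItem -Path $DropFolder -Recurse -File | ForEach-Object {
            if (-not $seen.ContainsKey($_.FullName)) {
                $seen[$_.FullName] = $now                     # first sighting: start the clock
            } elseif (($now - $seen[$_.FullName]).TotalDays -gt $MaxAgeDays) {
                Remove-Item -LiteralPath $_.FullName -Force   # older than the limit: delete
                $seen.Remove($_.FullName)
            }
        }

        # forget entries whose files are already gone, then persist the catalog
        foreach ($path in @($seen.Keys)) {
            if (-not (Test-Path -LiteralPath $path)) { $seen.Remove($path) }
        }
        $seen.GetEnumerator() |
            ForEach-Object { [pscustomobject]@{ Path = $_.Key; FirstSeen = $_.Value } } |
            Export-Csv $CatalogPath -NoTypeInformation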

  • How to control/check CheckPoint rules changes (and another System events)

    - by user35115
    I need to check/control all system events on many CheckPoint FW1 firewalls - don't misunderstand, not rule triggering, but events such as admins logging on, rule changes and so on. I found out that I can export the logs using 2 methods: grab the logs directly, or use a special script that redirects CheckPoint log entries to syslog (FW1-Loggrabber). But it's not clear to me whether such logs also contain the information I need (admin logons, rule changes), and if yes, is it possible to filter the events? I also suppose that if the system is based on a *nix platform, there must be a way to use the system's own functions to do what I want. Unfortunately I don't know where to "dig" - maybe you know? Updated: new info - "FW-1 can pipe its logs to syslog via Unix's logger command, and there are third party log-reading utilities". So, the main question is how to do this in the best way. Has anybody already solved such a problem? P.S. I'm new to CheckPoint, so any information will be useful for me. Thank you.

  • How to prevent the command prompt from closing after execution?

    - by Sk8erPeter
    My problem is that in Windows, there are command line windows that close immediately after execution. To solve this, I want the default behavior to be that the window is kept open. Normally, this can be achieved with three methods that come to my mind:
    - putting a pause line after batch programs to prompt the user to press a key before exiting;
    - running these batch files or other command line tools (even service starting/restarting etc. with net start xy or anything similar) from within cmd.exe (Start - Run - cmd.exe);
    - running these programs with cmd /k, like this: cmd /k myprogram.bat
    But there are some other cases in which the user:
    - runs the program for the first time and doesn't know that the given program will run in the Command Prompt (Windows Command Processor), e.g. when running a shortcut from the Start menu (or from somewhere else), OR
    - finds it a little uncomfortable to run cmd.exe all the time and doesn't have the time/opportunity to rewrite the commands everywhere to put a pause after them or otherwise avoid exiting.
    I've read an article about changing the default behavior of cmd.exe when opening it explicitly, by creating an AutoRun entry and manipulating its content in these locations:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor\AutoRun
    HKEY_CURRENT_USER\SOFTWARE\Microsoft\Command Processor\AutoRun
    (The AutoRun items are String values.) I put cmd /d /k as its value to give it a try, but this didn't change the behaviour of the cases mentioned above at all; it only changed the behaviour of the command line window when opening it explicitly (Start - Run - cmd.exe). So how does it work? Can you give me any ideas to solve this problem?
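
    One blunt option, if the goal is specifically double-clicked .bat/.cmd files (including Start menu shortcuts to them), is to change the file-type handler itself so every such script runs under cmd /k. This is only a sketch and a global change - the stock value it replaces is "%1" %*, and it does nothing for console .exe programs - so export the keys before trying it:

        Windows Registry Editor Version 5.00

        ; run double-clicked batch scripts with /k so the window stays open (global change)
        [HKEY_CLASSES_ROOT\batfile\shell\open\command]
        @="cmd.exe /k \"%1\" %*"

        [HKEY_CLASSES_ROOT\cmdfile\shell\open\command]
        @="cmd.exe /k \"%1\" %*"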

  • iptables port redirection on Ubuntu

    - by Xi.
    I have an Apache server running on port 8100. When I open http://localhost:8100 in a browser, I see the site running correctly. Now I would like to redirect all requests on 80 to 8100 so that the site can be accessed without the port number. I am not familiar with iptables, so I searched for solutions online. This is one of the methods that I have tried:

        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 8100 -j ACCEPT
        user@ubuntu:~$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8100

    It's not working: the site works on 8100 but not on 80. If I print out the rules using "iptables -t nat -L -n -v", this is what I see:

        user@ubuntu:~$ sudo iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target     prot opt in     out     source               destination
            0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:80 redir ports 8100

        Chain INPUT (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain OUTPUT (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain POSTROUTING (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target     prot opt in     out     source               destination

    The OS is Ubuntu running on VMware. I thought this would be a simple task, but I have been working on it for hours without success. :( What am I missing?
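
    One detail worth checking: PREROUTING is only traversed by packets that arrive on a network interface, so tests run from the VM itself (e.g. hitting http://localhost) never touch that rule - which would also explain the 0 packet counter on it. Locally generated traffic goes through the nat OUTPUT chain instead; a sketch:

        # redirect locally generated requests to port 80 as well (for tests from the VM itself)
        sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8100

        # re-test, then confirm the counters on both REDIRECT rules are increasing
        sudo iptables -t nat -L -n -v

    Requests coming from other machines should then be handled by the existing PREROUTING rule.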

  • Linux that restores itself on each reboot

    - by jettero
    I'm looking for methods and software to help create a variant of lubuntu that will restore itself to an install state and/or update on every boot. I'm thinking of doing things like putting the root filesystem on a squashfs and using unionfs and tmpfs to make root writable, but automagically restorable. I'm thinking of updating the squashfs with rsync. Perhaps there are other ways to approach the problem. Perhaps root needn't be writable at all. All thoughts welcome. The home dir would be writable in the usual way. The goal, if it matters, is a Linux that's simple to maintain from the home office, but that functions correctly for customers. We have some custom software that we wish for customers to be able to run trivially on equipment we provide. Ideally these devices would have a "restore to factory" function that would put it back the way we intended. If this is part of the normal boot cycle, so much the better. Why lubuntu? Personal preference for this application. It has a usable desktop, but doesn't take up much ram.
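
    For the squashfs + union idea, the core of it is just three mounts done early in boot (typically from an initramfs hook). A sketch of the mount sequence under assumed paths, on a kernel with overlayfs; older kernels would use aufs/unionfs instead:

        #!/bin/sh
        # sketch of the mount dance an initramfs hook would perform (paths are assumptions)
        mkdir -p /mnt/ro /mnt/rw /mnt/root

        # read-only "factory" image
        mount -t squashfs -o loop /images/factory.squashfs /mnt/ro

        # writable layer that evaporates on every reboot
        mount -t tmpfs tmpfs /mnt/rw
        mkdir -p /mnt/rw/upper /mnt/rw/work

        # merged view becomes the root filesystem
        mount -t overlay overlay \
            -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
            /mnt/root

        # persistent home stays on its own partition
        mount /dev/sda3 /mnt/root/home

    "Restore to factory" is then simply a reboot, and an update is an rsync (or wholesale replacement) of the squashfs image while /home stays untouched.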

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86) and ProgramData (at the root of the C: drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD. I have done a bit of research on this and read up on things like booting into Audit Mode, using the RoboCopy command to copy folders after booting from my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, and accomplishing this automatically with an unattend (answer) file which Windows Setup processes before the user is ever booted in for the first time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open and certain files can't be found or opened even if I click on them directly (Regedit, for example). Also, I can't run the Command Prompt as Administrator (or as any other user), can't activate the real Administrator account, and can't open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found a blog article which describes how to do this for Windows 7. So, where should I go from here and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to - do I tell them to install to C:\Program Files / Program Files (x86), or to the junctioned/symlinked partition? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.
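
    For reference, the post-install variant of the RoboCopy/junction approach usually looks like the sketch below, run from the install media's recovery command prompt (Shift+F10) rather than from the running OS. Drive letters are assumptions, and ProgramData is deliberately left alone because of the Metro-app problem mentioned above:

        rem run from WinPE / the recovery command prompt, not the live system
        robocopy "C:\Program Files" "D:\Program Files" /E /COPYALL /XJ /DCOPY:T
        robocopy "C:\Program Files (x86)" "D:\Program Files (x86)" /E /COPYALL /XJ /DCOPY:T

        rmdir /S /Q "C:\Program Files"
        rmdir /S /Q "C:\Program Files (x86)"

        mklink /J "C:\Program Files" "D:\Program Files"
        mklink /J "C:\Program Files (x86)" "D:\Program Files (x86)"

    Once the junctions exist, installers can keep targeting the usual C:\Program Files paths; the data transparently lands on the other partition.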

  • Debian on HP ProLiant server hangs (disk i/o is my guess)

    - by Martin
    I installed Debian (2.6.32-5-amd64) on my HP ProLiant MicroServer (purchased recently). I also added 3 2 TB hard drives in ZFS. I've experienced several server freezes. Sometimes it showed "soft lockup - CPU stuck for 61s!". Today I experienced a different problem (I think) and the message looked like this:

        [431336.200002] Call Trace:
        [431336.200002] [<ffffffff812fcc7c>] ? _write_lock+0xe/0xf
        [431336.200002] [<ffffffff810d7a86>] ? __vmalloc_node+0x99/0xe2
        :
        :

    and (on a different screen):

        [431354.222318] Node 0 DMA32 free: 2064kB min:5520kB low:69900kB high:8280kB active_anon:181648kB inactive_anon:61728kB active_file:313152kB inactive_file:832456kB unevictable: 0kB isolated(anon): 0kB isolated(file):0kB present:1922596kB mlocked:0kB dirty:72kB writeback:0kB mapped:25620kB shmem:344kB slab_reclaimable:34460kB slab_unreclaimable:31400kB kernel_stack:2288kB pagetables:7556kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
        [431354.222431] lowmem_reserve[]: 0 0 0 0
        :
        :

    Is this a hardware problem? What tools/methods can help me find the source of the problem? I've used Debian for years but never had a problem like this.
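
    As a starting point for narrowing it down, a few non-destructive checks can help separate hardware from kernel/ZFS trouble. This is only a sketch - device names are assumptions, and smartmontools/mcelog may need installing first:

        # disk health on each pool member
        smartctl -H -a /dev/sda

        # state of the ZFS pool and any logged I/O errors
        zpool status -v

        # machine-check / ECC errors, plus anything the kernel logged around the hang
        mcelog 2>/dev/null; cat /var/log/mcelog 2>/dev/null
        dmesg | egrep -i 'error|fail|ata|mce'
        grep -i -A2 'soft lockup' /var/log/kern.log

    If those come back clean, a pass of memtest86+ and a check of the MicroServer's BIOS/firmware level would be reasonable next steps.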

  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
    We've got a 12-core Xeon server with 24 GB of RAM running Server 2012. We've recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on our new 12-core server. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, accessing it from any other computer connected to the domain over a 1 Gbps connection takes 10-15 seconds to load. Also running slow are the public calendars that people in my place need to access - again taking 10-15 seconds to open, which can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to the Exchange server so people can get work emails when they are out of the office. Guess what - this is also running slow. I have searched for many solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I should also mention that the Server 2012 machine that Exchange 2013 is installed on is also the DC. Update: it would appear that any connection via HTTPS is slow. It took more than 15 minutes for an Outlook client to download 50 MB of emails (Outlook Anywhere).

  • Adding user groups from a remote domain server to permissions of a remote desktop terminal server fails. why?

    - by doveyg
    I have 3 computers: two are servers running Windows Server 2008, and the other runs Windows 7. One of the servers has the following roles installed: Active Directory, DHCP and DNS. The other server has the Terminal Server role installed. I am trying to log on to the Terminal Server via Remote Desktop from the Windows 7 machine with credentials from the Active Directory server. Sounds simple enough, right? Well, no. Whenever I try to add users or groups from the Active Directory domain server to the Terminal Server's permissions for RDP, it seems to ignore, or forget, them. Through the various methods I was able to find, it either adds a strange string of numbers after the user group, or the icon to the left gets a question mark on it; reopening the dialogue box replaces the user group with the name of the domain. I am confident I have the domain set up correctly, as I am able to log on as users from the Active Directory on other computers I have put in the domain, and when I attempt to browse the user objects from the domain I am prompted with a username/password field and am able to view the structure of Active Directory objects. Please advise.

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented only by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path; then configuring a new system compatible with the old one is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process will require a lot of effort and will be fragile. Are there some better methods or tools that I'm missing?
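
    On the /etc-under-git idea specifically, etckeeper is an existing tool built for exactly that pain: it records file ownership and permissions alongside the repository and hooks into apt/yum so package operations get committed automatically. A sketch of the Debian-style setup (the commit messages are obviously just examples):

        # one-time setup
        apt-get install etckeeper git
        etckeeper init          # puts /etc under version control, recording owners/modes
        etckeeper commit "initial import of /etc"

        # day-to-day: package installs/removals are auto-committed by the apt hook;
        # manual edits are committed by hand with a meaningful message
        vi /etc/ssh/sshd_config
        etckeeper commit "sshd: disable password authentication"

    It doesn't address the "which packages are installed" or whole-system-rebuild questions - that is more the territory of configuration-management tools - but it covers the /etc history cleanly.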

  • Java process eating CPU; Why?

    - by Camran
    I have a Linux server on which I have installed Java. Sometimes, and only sometimes when a large number of visitors visit my website, the site hangs. When I open the terminal and enter the "top" command to see what's going on, I can see that the "java" process is eating CPU - like 400%. I have also tried the ps aux command, and can see that the command is from /usr/bin/java. I have little experience in troubleshooting this kind of thing, so I turn to you guys for help. I have a Java servlet container installed (Jetty), which I must have in order to use SOLR (the search engine), which is integrated into my website. I can start and stop SOLR with: /etc/init.d/solr stop. But this didn't remove the java process from the "top" output - java was still eating 400% CPU. Are there other methods to restart only Java? This has happened twice to me, and each time I have restarted the entire server and everything is fine. If you need more input, let me know! Thanks
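
    Before restarting anything, it is worth pinning down which JVM it is and what it is doing. A sketch of the usual sequence - the init script name depends on how Jetty was installed, so treat it as a placeholder, and jstack requires a JDK rather than just a JRE:

        # which java process is it, and which of its threads are hot?
        ps aux | grep [j]ava
        top -H -p <java_pid>            # note the TID of the busiest thread

        # match that thread against a stack dump (jstack prints thread ids in hex)
        printf '%x\n' <tid>
        jstack <java_pid> | grep -A 20 'nid=0x<tid_hex>'

        # restart only the servlet container instead of the whole machine
        /etc/init.d/jetty restart       # or: kill <java_pid> once you are sure it is Jetty's PID

    If /etc/init.d/solr stop leaves a java process behind, compare the PID before and after - it may simply be a second JVM, or the script may not be stopping Jetty at all.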

  • InternalsVisibleTo attribute and security vulnerability

    - by Sergey Litvinov
    I found one issue with InternalsVisibleTo attribute usage. The idea of InternalsVisibleTo attribute to allow some other assemblies to use internal classes\methods of this assembly. To make it work you need sign your assemblies. So, if other assemblies isn't specified in main assembly and if they have incorrect public key, then they can't use Internal members. But the issue in Reflection Emit type generation. For example, we have CorpLibrary1 assembly and it has such class: public class TestApi { internal virtual void DoSomething() { Console.WriteLine("Base DoSomething"); } public void DoApiTest() { // some internal logic // ... // call internal method DoSomething(); } } This assembly is marked with such attribute to allow another CorpLibrary2 to make inheritor for that TestAPI and override behaviour of DoSomething method. [assembly: InternalsVisibleTo("CorpLibrary2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100434D9C5E1F9055BF7970B0C106AAA447271ECE0F8FC56F6AF3A906353F0B848A8346DC13C42A6530B4ED2E6CB8A1E56278E664E61C0D633A6F58643A7B8448CB0B15E31218FB8FE17F63906D3BF7E20B9D1A9F7B1C8CD11877C0AF079D454C21F24D5A85A8765395E5CC5252F0BE85CFEB65896EC69FCC75201E09795AAA07D0")] The issue is that I'm able to override this internal DoSomething method and break class logic. My steps to do it: Generate new assembly in runtime via AssemblyBuilder Get AssemblyName from CorpLibrary1 and copy PublikKey to new assembly Generate new assembly that will inherit TestApi class As PublicKey and name of generated assembly is the same as in InternalsVisibleTo, then we can generate new DoSomething method that will override internal method in TestAPI assembly Then we have another assembly that isn't related to this CorpLibrary1 and can't use internal members. We have such test code in it: class Program { static void Main(string[] args) { var builder = new FakeBuilder(InjectBadCode, "DoSomething", true); TestApi fakeType = builder.CreateFake(); fakeType.DoApiTest(); // it will display: // Inject bad code // Base DoSomething Console.ReadLine(); } public static void InjectBadCode() { Console.WriteLine("Inject bad code"); } } And this FakeBuilder class has such code: /// /// Builder that will generate inheritor for specified assembly and will overload specified internal virtual method /// /// Target type public class FakeBuilder { private readonly Action _callback; private readonly Type _targetType; private readonly string _targetMethodName; private readonly string _slotName; private readonly bool _callBaseMethod; public FakeBuilder(Action callback, string targetMethodName, bool callBaseMethod) { int randomId = new Random((int)DateTime.Now.Ticks).Next(); _slotName = string.Format("FakeSlot_{0}", randomId); _callback = callback; _targetType = typeof(TFakeType); _targetMethodName = targetMethodName; _callBaseMethod = callBaseMethod; } public TFakeType CreateFake() { // as CorpLibrary1 can't use code from unreferences assemblies, we need to store this Action somewhere. // And Thread is not bad place for that. 
It's not the best place as it won't work in multithread application, but it's just a sample LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot(_slotName); Thread.SetData(slot, _callback); // then we generate new assembly with the same nameand public key as target assembly trusts by InternalsVisibleTo attribute var newTypeName = _targetType.Name + "Fake"; var targetAssembly = Assembly.GetAssembly(_targetType); AssemblyName an = new AssemblyName(); an.Name = GetFakeAssemblyName(targetAssembly); // copying public key to new generated assembly var assemblyName = targetAssembly.GetName(); an.SetPublicKey(assemblyName.GetPublicKey()); an.SetPublicKeyToken(assemblyName.GetPublicKeyToken()); AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(an, AssemblyBuilderAccess.RunAndSave); ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule(assemblyBuilder.GetName().Name, true); // create inheritor for specified type TypeBuilder typeBuilder = moduleBuilder.DefineType(newTypeName, TypeAttributes.Public | TypeAttributes.Class, _targetType); // LambdaExpression.CompileToMethod can be used only with static methods, so we need to create another method that will call our Inject method // we can do the same via ILGenerator, but expression trees are more easy to use MethodInfo methodInfo = CreateMethodInfo(moduleBuilder); MethodBuilder methodBuilder = typeBuilder.DefineMethod(_targetMethodName, MethodAttributes.Public | MethodAttributes.Virtual); ILGenerator ilGenerator = methodBuilder.GetILGenerator(); // call our static method that will call inject method ilGenerator.EmitCall(OpCodes.Call, methodInfo, null); // in case if we need, then we put call to base method if (_callBaseMethod) { var baseMethodInfo = _targetType.GetMethod(_targetMethodName, BindingFlags.NonPublic | BindingFlags.Instance); // place this to stack ilGenerator.Emit(OpCodes.Ldarg_0); // call the base method ilGenerator.EmitCall(OpCodes.Call, baseMethodInfo, new Type[0]); // return ilGenerator.Emit(OpCodes.Ret); } // generate type, create it and return to caller Type cheatType = typeBuilder.CreateType(); object type = Activator.CreateInstance(cheatType); return (TFakeType)type; } /// /// Get name of assembly from InternalsVisibleTo AssemblyName /// private static string GetFakeAssemblyName(Assembly assembly) { var internalsVisibleAttr = assembly.GetCustomAttributes(typeof(InternalsVisibleToAttribute), true).FirstOrDefault() as InternalsVisibleToAttribute; if (internalsVisibleAttr == null) { throw new InvalidOperationException("Assembly hasn't InternalVisibleTo attribute"); } var ind = internalsVisibleAttr.AssemblyName.IndexOf(","); var name = internalsVisibleAttr.AssemblyName.Substring(0, ind); return name; } /// /// Generate such code: /// ((Action)Thread.GetData(Thread.GetNamedDataSlot(_slotName))).Invoke(); /// private LambdaExpression MakeStaticExpressionMethod() { var allocateMethod = typeof(Thread).GetMethod("GetNamedDataSlot", BindingFlags.Static | BindingFlags.Public); var getDataMethod = typeof(Thread).GetMethod("GetData", BindingFlags.Static | BindingFlags.Public); var call = Expression.Call(allocateMethod, Expression.Constant(_slotName)); var getCall = Expression.Call(getDataMethod, call); var convCall = Expression.Convert(getCall, typeof(Action)); var invokExpr = Expression.Invoke(convCall); var lambda = Expression.Lambda(invokExpr); return lambda; } /// /// Generate static class with one static function that will execute Action from Thread NamedDataSlot /// private MethodInfo 
CreateMethodInfo(ModuleBuilder moduleBuilder) { var methodName = "_StaticTestMethod_" + _slotName; var className = "_StaticClass_" + _slotName; TypeBuilder typeBuilder = moduleBuilder.DefineType(className, TypeAttributes.Public | TypeAttributes.Class); MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, MethodAttributes.Static | MethodAttributes.Public); LambdaExpression expression = MakeStaticExpressionMethod(); expression.CompileToMethod(methodBuilder); var type = typeBuilder.CreateType(); return type.GetMethod(methodName, BindingFlags.Static | BindingFlags.Public); } } remarks about sample: as we need to execute code from another assembly, CorpLibrary1 hasn't access to it, so we need to store this delegate somewhere. Just for testing I stored it in Thread NamedDataSlot. It won't work in multithreaded applications, but it's just a sample. I know that we use Reflection to get private\internal members of any class, but within reflection we can't override them. But this issue is allows anyone to override internal class\method if that assembly has InternalsVisibleTo attribute. I tested it on .Net 3.5\4 and it works for both of them. How does it possible to just copy PublicKey without private key and use it in runtime? The whole sample can be found there - https://github.com/sergey-litvinov/Tests_InternalsVisibleTo UPDATE1: That test code in Program and FakeBuilder classes hasn't access to key.sn file and that library isn't signed, so it hasn't public key at all. It just copying it from CorpLibrary1 by using Reflection.Emit

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind a Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty. But for some reason it does not work. $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); What is the best approach and place to implement basic authentication in Kibana? /*! elastic.js - v1.1.1 - 2013-05-24 * https://github.com/fullscale/elastic.js * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */ /*jshint browser:true */ /*global angular:true */ 'use strict'; /* Angular.js service wrapping the elastic.js API. This module can simply be injected into your angular controllers. */ angular.module('elasticjs.service', []) .factory('ejsResource', ['$http', function ($http) { return function (config) { var // use existing ejs object if it exists ejs = window.ejs || {}, /* results are returned as a promise */ promiseThen = function (httpPromise, successcb, errorcb) { return httpPromise.then(function (response) { (successcb || angular.noop)(response.data); return response.data; }, function (response) { (errorcb || angular.noop)(response.data); return response.data; }); }; // check if we have a config object // if not, we have the server url so // we convert it to a config object if (config !== Object(config)) { config = {server: config}; } // set url to empty string if it was not specified if (config.server == null) { config.server = ''; } /* implement the elastic.js client interface for angular */ ejs.client = { server: function (s) { if (s == null) { return config.server; } config.server = s; return this; }, post: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); console.log($http.defaults.headers); path = config.server + path; var reqConfig = {url: path, data: data, method: 'POST'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, get: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on get request, data will be request params var reqConfig = {url: path, params: data, method: 'GET'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, put: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'PUT'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, del: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'DELETE'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, head: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on HEAD request, data will be request params var reqConfig = {url: path, params: data, method: 'HEAD'}; return 
$http(angular.extend(reqConfig, config)) .then(function (response) { (successcb || angular.noop)(response.headers()); return response.headers(); }, function (response) { (errorcb || angular.noop)(undefined); return undefined; }); } }; return ejs; }; }]); UPDATE 1: I implemented Matts suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact, that I am running Kibana on port 81 and elasticsearch on 8181? OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.46.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.46.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache This is the response HTTP/1.1 401 Authorization Required Date: Fri, 08 Nov 2013 23:47:02 GMT WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 UPDATE 2: Updated all instances with the modified headers in these Kibana files root@localhost:/var/www/kibana# grep -r 'ejsResource(' . ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); And modified my vhost conf for the reverse proxy like this <VirtualHost *:8181> ProxyRequests Off ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/cake2.2.4/.htpasswd Require valid-user Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT" Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization" Header always set Access-Control-Allow-Credentials "true" Header always set Cache-Control "max-age=0" Header always set Access-Control-Allow-Origin * </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost> Apache sends back the new response headers but the request header still seems to be wrong somewhere. Authentication just doesn't work. 
Request Headers OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.26.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.26.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Response Headers HTTP/1.1 401 Authorization Required Date: Sat, 09 Nov 2013 08:48:48 GMT Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization Access-Control-Allow-Credentials: true Cache-Control: max-age=0 Access-Control-Allow-Origin: * WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic but it appears that in order to solve my problem, it would be necessary to to make some very granular configurations on apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution. Just modify the vhost reverse proxy config to move the elastisearch server AND kibana on the same http port. This also adds even better security to Kibana. This is what I did: <VirtualHost *:8181> ProxyRequests Off ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/.htpasswd Require valid-user </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost>

  • HTTP Builder/Groovy - lost 302 (redirect) handling?

    - by Misha Koshelev
    Dear All: I am reading here http://groovy.codehaus.org/modules/http-builder/doc/handlers.html "In cases where a response sends a redirect status code, this is handled internally by Apache HttpClient, which by default will simply follow the redirect by re-sending the request to the new URL. You do not need to do anything special in order to follow 302 responses." This seems to work fine when I simply use the get() or post() methods without a closure. However, when I use a closure, I seem to lose 302 handling. Is there some way I can handle this myself? Thank you p.s. Here is my log output showing it is a 302 response [java] FINER: resp.statusLine: "HTTP/1.1 302 Found" Here is the relevant code: // Copyright (C) 2010 Misha Koshelev. All Rights Reserved. package com.mksoft.fbbday.main import groovyx.net.http.ContentType import java.util.logging.Level import java.util.logging.Logger class HTTPBuilder { def dataDirectory HTTPBuilder(dataDirectory) { this.dataDirectory=dataDirectory } // Main logic def logger=Logger.getLogger(this.class.name) def closure={resp,reader-> logger.finer("resp.statusLine: \"${resp.statusLine}\"") if (logger.isLoggable(Level.FINEST)) { def respHeadersString='Headers:'; resp.headers.each() { header->respHeadersString+="\n\t${header.name}=\"${header.value}\"" } logger.finest(respHeadersString) } def text=reader.text def lastHtml=new File("${dataDirectory}${File.separator}last.html") if (lastHtml.exists()) { lastHtml.delete() } lastHtml<<text new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(text) } def processArgs(args) { if (logger.isLoggable(Level.FINER)) { def argsString='Args:'; args.each() { arg->argsString+="\n\t${arg.key}=\"${arg.value}\"" } logger.finer(argsString) } args.contentType=groovyx.net.http.ContentType.TEXT args } // HTTPBuilder methods def httpBuilder=new groovyx.net.http.HTTPBuilder () def get(args) { httpBuilder.get(processArgs(args),closure) } def post(args) { args.contentType=groovyx.net.http.ContentType.TEXT httpBuilder.post(processArgs(args),closure) } } Here is a specific tester: #!/usr/bin/env groovy import groovyx.net.http.HTTPBuilder import groovyx.net.http.Method import static groovyx.net.http.ContentType.URLENC import java.util.logging.ConsoleHandler import java.util.logging.Level import java.util.logging.Logger // MUST ENTER VALID FACEBOOK EMAIL AND PASSWORD BELOW !!! def email='' def pass='' // Remove default loggers def logger=Logger.getLogger('') def handlers=logger.handlers handlers.each() { handler->logger.removeHandler(handler) } // Log ALL to Console logger.setLevel Level.ALL def consoleHandler=new ConsoleHandler() consoleHandler.setLevel Level.ALL logger.addHandler(consoleHandler) // Facebook - need to get main page to capture cookies def http = new HTTPBuilder() http.get(uri:'http://www.facebook.com') // Login def html=http.post(uri:'https://login.facebook.com/login.php?login_attempt=1',body:[email:email,pass:pass]) assert html==null // Why null? html=http.post(uri:'https://login.facebook.com/login.php?login_attempt=1',body:[email:email,pass:pass]) { resp,reader-> assert resp.statusLine.statusCode==302 // Shouldn't we be redirected??? // http://groovy.codehaus.org/modules/http-builder/doc/handlers.html // "In cases where a response sends a redirect status code, this is handled internally by Apache HttpClient, which by default will simply follow the redirect by re-sending the request to the new URL. You do not need to do anything special in order to follow 302 responses. 
" } Here are relevant logs: FINE: Receiving response: HTTP/1.1 302 Found Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << HTTP/1.1 302 Found Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Cache-Control: private, no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Expires: Sat, 01 Jan 2000 00:00:00 GMT Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Location: http://www.facebook.com/home.php? Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << P3P: CP="DSP LAW" Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Pragma: no-cache Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Set-Cookie: datr=1275687438-9ff6ae60a89d444d0fd9917abf56e085d370277a6e9ed50c1ba79; expires=Sun, 03-Jun-2012 21:37:24 GMT; path=/; domain=.facebook.com Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Set-Cookie: lxe=koshelev%40post.harvard.edu; expires=Tue, 28-Sep-2010 15:24:04 GMT; path=/; domain=.facebook.com; httponly Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Set-Cookie: lxr=deleted; expires=Thu, 04-Jun-2009 21:37:23 GMT; path=/; domain=.facebook.com; httponly Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Set-Cookie: pk=183883c0a9afab1608e95d59164cc7dd; path=/; domain=.facebook.com; httponly Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Content-Type: text/html; charset=utf-8 Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << X-Cnection: close Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Date: Fri, 04 Jun 2010 21:37:24 GMT Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader FINE: << Content-Length: 0 Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies FINE: Cookie accepted: "[version: 0][name: datr][value: 1275687438-9ff6ae60a89d444d0fd9917abf56e085d370277a6e9ed50c1ba79][domain: .facebook.com][path: /][expiry: Sun Jun 03 16:37:24 CDT 2012]". Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies FINE: Cookie accepted: "[version: 0][name: lxe][value: koshelev%40post.harvard.edu][domain: .facebook.com][path: /][expiry: Tue Sep 28 10:24:04 CDT 2010]". Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies FINE: Cookie accepted: "[version: 0][name: lxr][value: deleted][domain: .facebook.com][path: /][expiry: Thu Jun 04 16:37:23 CDT 2009]". Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies FINE: Cookie accepted: "[version: 0][name: pk][value: 183883c0a9afab1608e95d59164cc7dd][domain: .facebook.com][path: /][expiry: null]". 
Jun 4, 2010 4:37:22 PM org.apache.http.impl.client.DefaultRequestDirector execute FINE: Connection can be kept alive indefinitely Jun 4, 2010 4:37:22 PM groovyx.net.http.HTTPBuilder doRequest FINE: Response code: 302; found handler: post302$_run_closure2@7023d08b Jun 4, 2010 4:37:22 PM groovyx.net.http.HTTPBuilder doRequest FINEST: response handler result: null Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.SingleClientConnManager releaseConnection FINE: Releasing connection org.apache.http.impl.conn.SingleClientConnManager$ConnAdapter@605b28c9 You can see there is clearly a location argument. Thank you Misha
