Search Results

Search found 16467 results on 659 pages for 'request filtering'.


  • What is the best appliance you've used?

    - by phuzion
    Post your favorite appliances or "all-in-one" programs. Whether it runs in a virtual machine or on its own hardware, it all goes. My submission is Untangle. It's an open source network gateway (their term). Essentially, it can run a plethora of things that you may otherwise end up buying another appliance for: web filtering, logging, mail spam filtering, phishing monitoring, spyware blocking, VPN. You name it, it's all there. Best of all, it's mostly free. A few appliances have annual costs due to inherent licensing or subscription costs. If you are looking for a new network perimeter device, definitely check it out. The underlying OS doesn't matter, because it's the application we want to praise, not the OS beneath it.

    Read the article

  • What I don’t like about WIF’s Claims-based Authorization

    - by Your DisplayName here!
    In my last post I wrote about what I like about WIF’s proposed approach to authorization – I also said that I would definitely build upon that infrastructure for my own systems. But implementing such a system is a little harder than it should be. Here’s why (and that’s purely my perspective): First of all, WIF’s authorization comes in two “modes”: per-request authorization and imperative authorization. Per-request authorization: when an ASP.NET/WCF request comes in, the registered authorization manager gets called. For SOAP, the SOAP action gets passed in; for HTTP requests (ASP.NET, WCF REST), the URL and verb. Imperative authorization: this happens when you explicitly call the claims authorization API from within your code. There you have full control over the values for action and resource. In ASP.NET, per-request authorization is optional (it depends on whether you have added the ClaimsAuthorizationHttpModule). In WCF you always get the per-request checks as soon as you register the authorization manager in configuration. I personally prefer imperative authorization because, first of all, I don’t believe in URL-based authorization. Especially in the times of MVC and routing tables, URLs can be easily changed – but then you also have to adjust your authorization logic every time. Also, you typically need more knowledge than a simple “is user x allowed to invoke operation y”. One problem I have is that both the per-request calls and the standard WIF imperative authorization APIs wrap actions and resources in the same claim type. This makes it hard to distinguish between the two authorization modes in your authorization manager, but you typically need that feature to structure your authorization policy evaluation in a clean way. The second problem (which is somewhat related to the first one) is the standard API for interacting with the claims authorization manager. The API comes as an attribute (ClaimsPrincipalPermissionAttribute) as well as a class to use programmatically (ClaimsPrincipalPermission). Both only allow you to pass in simple strings (which results in the wrapping with standard claim types mentioned earlier), and both throw a SecurityException when the check fails. The attribute is a code access permission attribute (like PrincipalPermission), which means it will always be invoked regardless of how you call the code. This may be exactly what you want, or not. In a unit-testing situation (like an MVC controller) you typically want to test the logic in the function, not the security check. The good news is that the WIF API is flexible enough that you can build your own infrastructure around its core. For my own projects I implemented the following extensions: a way to invoke the registered claims authorization manager with more overloads, e.g. with different claim types or a complete AuthorizationContext; a new CAS attribute (with the same calling semantics as the built-in one) with custom claim types; an MVC authorization attribute with custom claim types; and a way to use branching – as opposed to catching a SecurityException. I will post the code for these various extensions here – so stay tuned.
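    To make the branching extension concrete before the author posts his code, here is a minimal sketch of what such a check could look like, assuming WIF 1.0's Microsoft.IdentityModel API. This is not the author's actual implementation: the AuthorizationCheck class and the claim-type URIs are illustrative names only.

        // Invoke the registered ClaimsAuthorizationManager directly with custom
        // claim types, returning a bool to branch on instead of letting a
        // SecurityException be thrown on failure.
        using System.Collections.ObjectModel;
        using System.Threading;
        using Microsoft.IdentityModel.Claims;
        using Microsoft.IdentityModel.Web;

        public static class AuthorizationCheck
        {
            public static bool TryCheckAccess(string action, string resource)
            {
                var principal = Thread.CurrentPrincipal as IClaimsPrincipal;

                // Custom claim types keep imperative checks distinguishable from
                // the per-request checks that use the standard claim types.
                var context = new AuthorizationContext(
                    principal,
                    new Collection<Claim> { new Claim("urn:myapp/authorization/resource", resource) },
                    new Collection<Claim> { new Claim("urn:myapp/authorization/action", action) });

                var manager = FederatedAuthentication.ServiceConfiguration.ClaimsAuthorizationManager;
                return manager.CheckAccess(context);
            }
        }

    Callers can then write if (AuthorizationCheck.TryCheckAccess("Delete", "Customer")) { ... } and branch on the result rather than wrapping the call in a try/catch.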

    Read the article

  • Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark (TOTD #184)

    - by arungupta
    TOTD #183 explained how to build a WebSocket-driven application using GlassFish 4. This Tip Of The Day (TOTD) will explain how to view/debug on-the-wire messages, or frames as they are called in WebSocket parlance, over this upgraded connection. This blog will use the application built in TOTD #183. First of all, make sure you are using a browser that supports WebSocket. If you recall from TOTD #183, WebSocket is a combination of a protocol and a JavaScript API; a browser supports WebSocket if it understands web pages that use the WebSocket JavaScript API. caniuse.com/websockets provides the current status of WebSocket support in different browsers. Most of the major browsers, such as Chrome, Firefox, and Safari, have supported WebSocket for the past few versions. As of this writing, IE still does not support WebSocket; however, it's planned for a future release. Viewing WebSocket frames requires special setup because all the communication happens over an upgraded HTTP connection, on a single TCP connection. If you are building your application using Java, then there are two common ways to debug WebSocket messages today. Other language libraries provide different mechanisms to log the messages. Let's get started! Chrome Developer Tools provide information about the initial handshake only. This can be viewed in the Network tab by selecting the endpoint hosting the WebSocket endpoint. You can also click on "WebSockets" on the bottom-right to show only the WebSocket endpoints. Click on "Frames" in the right panel to view the actual frames being exchanged between the client and server. The frames are not refreshed when new messages are sent or received; you need to refresh the panel by clicking on the endpoint again. To see more detailed information about the WebSocket frames, you need to type "chrome://net-internals" in a new tab. Click on "Sockets" in the left navigation bar and then on "View live sockets" to see the page. Select the box with the address of your WebSocket endpoint to see some basic information about the connection and the bytes exchanged between the client and the endpoint. Clicking on the blue text "source dependency ..." shows more details about the handshake. If you are interested in viewing the exact payload of WebSocket messages, then you need a network sniffer. These tools are used to snoop network traffic and provide a lot more detail about the raw messages exchanged over the network. However, because they provide so much information, they need to be configured to show only what is relevant. Wireshark (née Ethereal) is a pretty standard tool for sniffing network traffic and will be used here. For the purposes of this blog we'll assume that the WebSocket endpoint is hosted on the local machine; these tools do allow you to sniff traffic across the network, though. Wireshark is quite a comprehensive tool and we'll capture traffic on the loopback address. Start Wireshark, select "loopback", and click on "Start". By default, all traffic on the loopback address is displayed. That includes tons of TCP protocol messages, applications running on your local machine (like GlassFish or Dropbox on mine), and many others. Specify "http" as the filter in the top-left. Invoke the application built in TOTD #183 and click on the "Say Hello" button once.
    The Wireshark capture then shows the exchanged messages (screenshot omitted). Here is a description of the messages exchanged: Message #4: initial HTTP request for the JSP page. Message #6: response returning the JSP page. Message #16: HTTP Upgrade request. Message #18: Upgrade request accepted. Message #20: request for favicon. Message #22: response that favicon was not found. Message #24: browser making a WebSocket request to the endpoint. Message #26: WebSocket endpoint responding back. You can also use Fiddler to debug your WebSocket messages. How are you viewing your WebSocket messages? Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds); TOTD #183 - Getting Started with WebSocket in GlassFish. Subsequent blogs will discuss the following topics (not necessarily in that order): binary data as payload, custom payloads using encoder/decoder, error handling, interface-driven WebSocket endpoints, Java client API, client and server configuration, security, subprotocols, extensions, and other topics from the API.
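    One refinement worth trying, assuming your Wireshark build includes the WebSocket dissector (older builds may not): the display filter can be narrowed further than "http". A sketch:

        http || websocket

    The "http" part matches the upgrade handshake and the "websocket" part matches the individual frames, so the unrelated loopback traffic disappears from the view.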

    Read the article

  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program. I have the parameters set as follows: gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage); I'm using a 32x32 pixel image (so it's a power of 2). Its properties say it has 32-bit depth, so that should take care of the gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB in case it's not reading the transparency. I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the Chrome console: INVALID_VALUE: texImage2D: invalid image (index):101 WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled. The drawArrays call is simply "gl.drawArrays(gl.TRIANGLES, 0, 6);", using 6 vertices to make a rectangle.
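    A hedged guess at the cause, not confirmed in the question: "invalid image" from texImage2D is typically reported when the Image object hasn't finished loading at upload time, and the "not renderable" warning follows because the texture never became complete. Deferring the upload to the image's onload handler and setting a non-mipmap minification filter would address both; names below are illustrative:

        var texture = gl.createTexture();
        var textureImage = new Image();
        textureImage.onload = function () {
            // Upload only after the pixels actually exist.
            gl.bindTexture(gl.TEXTURE_2D, texture);
            gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);
            // The default min filter expects mipmaps; use LINEAR (or call
            // gl.generateMipmap) so the texture is "texture complete".
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
            gl.drawArrays(gl.TRIANGLES, 0, 6);
        };
        textureImage.src = "texture32.png";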

    Read the article

  • Ubuntu 12.04 taking too much time to boot

    - by adarshdinesh
    Ubuntu 12.04 is taking a long time to boot. Here are the kernel messages from boot; they show that the anacron process was killed by a TERM signal. Why, and how do I fix the problem? [ 2.241047] scsi6 : usb-storage 2-1.6:1.0 [ 2.241501] usbcore: registered new interface driver usb-storage [ 2.241895] USB Mass Storage support registered. [ 3.240670] scsi 6:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0 [ 3.241791] sd 6:0:0:0: Attached scsi generic sg2 type 0 [ 3.243083] sd 6:0:0:0: [sdb] Attached SCSI removable disk [ 12.568641] Adding 4037904k swap on /dev/sda3. Priority:-1 extents:1 across:4037904k [ 12.615014] udevd[462]: starting version 175 [ 12.651334] mei: module is from the staging directory, the quality is unknown, you have been warned. [ 12.655283] [drm] Initialized drm 1.1.0 20060810 ................... [ 14.118369] init: alsa-restore main process (982) terminated with status 19 [ 14.252595] init: anacron main process (1033) killed by TERM signal [ 14.285763] HDMI status: Codec=3 Pin=5 Presence_Detect=0 ELD_Valid=0 [ 14.285841] input: HDA Intel PCH HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [ 14.285925] input: HDA Intel PCH Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9 [ 14.285991] input: HDA Intel PCH Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10 [ 14.615073] init: plymouth-stop pre-start process (1222) terminated with status 1 [ 16.447287] wlan0: authenticate with c0:8a:de:7c:60:e8 (try 1) [ 16.448858] wlan0: authenticated [ 16.453405] wlan0: associate with c0:8a:de:7c:60:e8 (try 1) [ 16.456392] wlan0: RX AssocResp from c0:8a:de:7c:60:e8 (capab=0x431 status=0 aid=2) [ 16.456398] wlan0: associated [ 16.457014] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 16.457017] ieee80211 phy0: brcmsmac: brcms_ops_bss_info_changed: associated [ 16.457019] ieee80211 phy0: changing basic rates failed: -22 [ 16.457021] ieee80211 phy0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement) [ 16.457226] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 16.654196] ieee80211 phy0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement) [ 17.823565] ieee80211 phy0: wl0: brcms_c_d11hdrs_mac80211: txop exceeded phylen 180/256 dur 1946/1504 [ 18.220865] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 26.881422] wlan0: no IPv6 routers present [ 68.228293] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 73.240133] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 76.574490] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 102.180006] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 103.100984] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 124.171624] ieee80211 phy0: brcms_ops_bss_info_changed: qos enabled: true (implement)

    Read the article

  • Top 10 solution documents for Weblogic Server J2EE Feb 2014 - May 2014

    - by jhpierce -Oracle
    The following are the top 10 documents linked to SRs as solutions, for WebLogic Server J2EE issues, from Feb 2014 through May 2014. 1163020.1 How to configure Filtering class loader in weblogic.xml: to configure the Filtering Class Loader to specify that a certain package is loaded from an application, add a prefer-application-packages descriptor element. 1276593.1 WLS - How to suppress servlet/JSP version details in WebLogic HTTP response header: the string "X-Powered-By: Servlet/2.4 JSP/2.0" is showing up in the servlet response header; how to stop WebLogic from including servlet/JSP version details in the X-Powered-By HTTP response header. 1490080.1 WebLogic Server 12.1.1.0 in a Cluster Environment Throws NotSerializableException for CDI Applications at com.sun.jersey.server.impl.cdi.CDIExtension: when running in a clustered environment, server start-up is not clean when you have CDI applications deployed. 1268138.1 Sample TwoWay SSL implementation for JAX-WS Webservice: in the sample provided, the recipient checks the initiator's public certificate; note that the client certificate can be used for authentication. 1584779.1 Socket Leaks When Calling Web-Service Over SSL: this is known bug 16810786. 1598617.1 Secure WebService call throwing CANNOT RESOLVE URL FOR PROTOCOL HTTP/HTTPS through web server (Apache) plug-in. 1056121.1 How to Timeout Weblogic Webservice Client: how to time out a WebService client with and without using Stubs. 1568638.1 When packaging Jersey JAX-RS libraries into webapp throws NoSuchMethodError(): when attempting to include custom Jersey implementation libraries into a web application in an OSB domain. 1118264.1 WLS 10.3: Intermittent XA error: XAResource.XAER_RMERR: in WebLogic 10.3, a CMP EJB sometimes throws this exception. 1608951.1 How to get More Details About Error BEA-101215 Malformed Request. Request parsing failed Code: -1, as seen when accessing the application via a load balancer.
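    As an illustration of the first document's advice, a filtering class loader entry in weblogic.xml looks roughly like this (the package name is an example, not from the document):

        <weblogic-web-app>
          <container-descriptor>
            <prefer-application-packages>
              <!-- Load this package from WEB-INF/lib instead of the server classpath -->
              <package-name>org.slf4j.*</package-name>
            </prefer-application-packages>
          </container-descriptor>
        </weblogic-web-app>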

    Read the article

  • Free 48 Page PDF Guide to Mastering Social Networking with Gwibber [Linux]

    - by Asian Angel
    Are you using Gwibber on your Linux system but not making full use of its potential? This free 48-page PDF guide will show you how to use and tweak Gwibber for the best performance when it comes to working with your social networks. Photo courtesy of Gwibber Blog. Examples of sections included in the guide are: installing and getting started with Gwibber, becoming familiar with Gwibber's UI, broadcasting and interacting on your social networks through Gwibber, filtering the flow of information, customizing the interface, and more. Here is an excerpt from the section on Filtering the Flow of Information. The step-by-step instructions combined with helpful, labeled screenshots make this a nice guide to have for anyone wanting to get the most out of Gwibber for their social networking needs. Note: Gwibber works with Twitter, identi.ca, StatusNet, Facebook, FriendFeed, Digg, Flickr, and Qaiku. Download the Master Social Networking with Gwibber PDF Guide [via OMG! Ubuntu!] *Note: In this instance this is a direct download of the PDF Guide itself. Visit the Gwibber Homepage

    Read the article

  • Doing powerups in a component-based system

    - by deft_code
    I'm just starting to really get my head around component-based design. I don't know what the "right" way to do this is. Here's the scenario: the player can equip a shield. The shield is drawn as a bubble around the player, it has a separate collision shape, and it reduces the damage the player receives from area effects. How is such a shield architected in a component-based game? Where I get confused is that the shield obviously has three components associated with it: damage reduction/filtering, a sprite, and a collider. To make it worse, different shield variations could have even more behaviors, all of which could be components: boost player maximum health, health regen, projectile deflection, etc. Am I overthinking this? Should the shield just be a super component? I really think this is the wrong answer, so if you think this is the way to go, please explain. Should the shield be its own entity that tracks the location of the player? That might make it hard to implement the damage filtering, and it also kind of blurs the lines between attached components and entities. Should the shield be a component that houses other components? I've never seen or heard of anything like this, but maybe it's common and I'm just not deep enough yet. Should the shield just be a set of components that get added to the player? Possibly with an extra component to manage the others, e.g. so they can all be removed as a group (accidentally leave behind the damage reduction component; now that would be fun). Something else that's obvious to someone with more component experience?
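    One way to picture the last option: the shield is an ordinary set of components plus one small "bundle" component that remembers the group, so it can be attached and removed atomically. This is a hedged sketch against a made-up minimal engine; Entity, Component, and the concrete part types are illustrative, not from any particular framework.

        using System.Collections.Generic;

        public abstract class Component
        {
            public virtual void OnAttach(Entity owner) { }
            public virtual void OnDetach(Entity owner) { }
        }

        public class Entity
        {
            private readonly List<Component> components = new List<Component>();
            public void AddComponent(Component c) { components.Add(c); c.OnAttach(this); }
            public void RemoveComponent(Component c) { components.Remove(c); c.OnDetach(this); }
        }

        // Empty stand-ins for the shield's behaviors.
        public class DamageFilter : Component { public DamageFilter(float reduction) { } }
        public class Sprite : Component { public Sprite(string name) { } }
        public class Collider : Component { public Collider(float radius) { } }

        // The shield: plain components grouped by a bundle so nothing is left behind.
        public class ShieldBundle : Component
        {
            private readonly Component[] parts =
            {
                new DamageFilter(0.5f),       // reduce area damage by half
                new Sprite("shield_bubble"),  // the bubble drawn around the player
                new Collider(1.5f)            // the shield's own collision shape
            };

            public override void OnAttach(Entity player)
            {
                foreach (var part in parts) player.AddComponent(part);
            }

            public override void OnDetach(Entity player)
            {
                // Removing the bundle removes every part as a group.
                foreach (var part in parts) player.RemoveComponent(part);
            }
        }

    Equipping the shield is then player.AddComponent(new ShieldBundle()), and unequipping removes everything in one call, so the damage-reduction component can't accidentally be left behind.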

    Read the article

  • Dlink DWA-643 ExpressCard / Atheros AR5008 can't connect to wifi networks

    - by Justin Kelly
    I've just purchased a D-Link DWA-643 Xtreme N ExpressCard Notebook Adapter, but it can't connect to my wireless network. The card is listed on the FSF website; refer to the links below: http://www.fsf.org/resources/hw/index_html/net/wireless/index_html/cards.html http://www.dlink.com.au/products/?pid=550 Ubuntu sees the card as using the Atheros AR5008 chipset (see the lspci output below). The card lights up and I can see the available wifi networks using this card, so it seems to 'just work' on Ubuntu 12.04, but when I try to connect to my networks, it fails. I've tried setting the network to all the different options (WEP, WPA2, no encryption, etc., b/g/n) but Ubuntu still can't connect. I've also installed wicd but still couldn't connect. Has anyone got a DWA-643 to work in Ubuntu? Or does anyone have any suggestions on how to get it to connect? Any help would be greatly appreciated. Note: the laptop has built-in wifi, but it's Broadcom; it works, but at dial-up speed, and I've had nothing but trouble using the Broadcom drivers, so I purchased the FSF-recommended ExpressCard hoping it would 'just work' on the latest Ubuntu. I have tried to disable the built-in Broadcom wifi, but even with the Broadcom driver uninstalled and unavailable it didn't help the D-Link connect. Previously I had MAC address filtering on the router; I've added the D-Link's MAC and also disabled MAC address filtering, still no luck. lspci output below: 18:00.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) Subsystem: D-Link System Inc Device 3a6f Flags: bus master, fast devsel, latency 0, IRQ 18 Memory at e4000000 (64-bit, non-prefetchable) [size=64K] Capabilities: [40] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [60] Express Legacy Endpoint, MSI 00 Capabilities: [90] MSI-X: Enable- Count=1 Masked- Capabilities: [100] Advanced Error Reporting Capabilities: [140] Virtual Channel Kernel driver in use: ath9k Kernel modules: ath9k

    Read the article

  • Missing X-Spam-Status header

    - by Walt Stoneburner
    I recently upgraded to Ubuntu 14.04.1 LTS (trusty), have followed the directions in https://help.ubuntu.com/14.04/serverguide/mail-filtering.html, and am sending and receiving mail just fine. While I do see X-Virus-Scanned headers in my messages, which suggests mail is indeed being processed, I do not see any X-Spam-Level or X-Spam-Score headers being added to messages. This makes downstream procmailrc and client-side filtering more difficult. While having $final_spam_destiny = D_DISCARD in /etc/amavis/conf.d/20-debian_defaults does greatly reduce spam to my inbox, I had concerns about false positives prior to tuning and didn't know where they were going, so I have set it to D_PASS for the time being. This exposed the problem. I'm not sure where to look to start diagnosing the problem (otherwise I'd post a suspect configuration file). /etc/amavis/conf.d/15-content_filter_mode has the lines uncommented to enable virus and spam checks, and virus checking appears to be working according to the headers. SpamAssassin certainly seems to be starting just fine, too. SpamAssassin debug facilities: info SA info: zoom: able to use 360/360 'body_0' compiled rules (100%) SpamAssassin loaded plugins: AskDNS, AutoLearnThreshold, Bayes, BodyEval, Check, DKIM, DNSEval, FreeMail, HTMLEval, HTTPSMismatch, Hashcash, HeaderEval, ImageInfo, MIMEEval, MIMEHeader, Pyzor, Razor2, RelayEval, ReplaceTags, Rule2XSBody, SPF, SpamCop, URIDNSBL, URIDetail, URIEval, VBounce, WLBLEval, WhiteListSubject SpamControl: init_pre_fork on SpamAssassin done I've also set $log_level = 2; in /etc/amavis/conf.d/50-user and don't see any obvious errors rolling by in the logs. Q: Any recommendations on what to try next? UPDATE (it appears that I have the right setting already): /etc/amavis/conf.d$ grep sa_tag_level_deflt * 20-debian_defaults:# $sa_tag_level_deflt = 2.0; # add spam info headers if at, or above that level 20-debian_defaults:$sa_tag_level_deflt = -999; # add spam info headers if at, or above that level
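    One avenue worth checking (an assumption on my part, not something the question confirms): amavisd-new only inserts X-Spam-* headers into mail for recipients it considers local, so if the destination domain isn't covered by the local-domains lookup, the headers are skipped even with $sa_tag_level_deflt at -999. A sketch for /etc/amavis/conf.d/50-user, with the domain as a placeholder:

        @local_domains_acl = ( "example.com", ".example.com" );  # substitute your real domain

    After editing, restart amavis and send a test message to see whether the X-Spam headers appear.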

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walkthrough of creating the ASP.NET MVC application and data access code used throughout this series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more!
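    For a flavor of the "simple grid" case the series starts with, here is a minimal sketch in C#; the controller, repository, and Order type are stand-ins for whatever the article actually builds, not its real code:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        public interface IOrderRepository
        {
            IEnumerable<Order> GetAll();
        }

        public class OrdersController : Controller
        {
            private readonly IOrderRepository repository;  // hypothetical data access

            public OrdersController(IOrderRepository repository)
            {
                this.repository = repository;
            }

            // Fetch the records and hand them to the view as the model.
            public ActionResult Index()
            {
                return View(repository.GetAll());
            }
        }

    The view then iterates the model (e.g. a foreach over the orders) and writes the <table> markup itself, which is exactly the part the WebForms GridView used to generate for you.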

    Read the article

  • Cannot get ATI Drivers installed

    - by bittoast67
    I am trying to install the Catalyst driver. The best I can get is a strange resolution problem and firefox acts all wonkt. The worst I have gotten is low graphics mode in which I just reinstall Ubuntu. I have a HP Pavilion Dv7 laptop. With Radeon 3200 HD. I plan to try again with a fresh install of Ubuntu 12.4.3 as I have heard its the most compatible. This is what I have done: I have tried just the easy way of going to synaptic and installing the drivers that way. the fglrx package (not the fglrx update). And if memory serves I think that boots me into low graphics mode. So, fresh install of Ubuntu and tried again. I have done everything a couple times from this site (http://wiki.cchtml.com/index.php/Ubuntu_Precise_Installation_Guide) following every instruction to a T. That gets me something, such as a lowered fan speed and a much cooler computer, but I also lose most of my resolution. And displays says its the best resolution I can get. I also have a very screwy firefox. Using this method I can see AMD Catalyst Control Center in my dash (two of them really one administrator and one not) but when I try to open it it says no amd driver detected. So again, ubuntu reinstall. I have tried the GUI method from the Legacy driver I got from AMD's site. It runs through smoothly and at the very end after I exit the installer it gives me an error. I have also tried various other methods using terminal, as well as various different drivers (the one from the amd's site and the one suggested in the above link for my graphics card) both to no avail. When I try the method in the link on number 2, and I get the super low res and screwy fire fox. I type in, fglrxinfo ,and get a badrequest error. I have yet to type in fglrxinfo and get anything like what I am supposed to. UPDATE: I am now currently reinstalling Ubuntu 12.4. I tried the above mentioned link - thank you very much!- just to see on the previously failed driver attempt by following the purge commands. And to no avail when typing fglrxinfo I still get the badrequest thing. I will update again after a try with a true fresh install. Thanks again!! UPDATE: Alright everyone. Still no go. I have done everything word per word in the provided tutorial. I have rebooted my computer again to a fucked up resolution and this is what I get when typing fglrxinfo: $ fglrxinfo X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 153 (GLX) Minor opcode of failed request: 19 (X_GLXQueryServerString) Serial number of failed request: 12 Current serial number in output stream: 12 I would like to add that when installing this file: fglrx_8.970-0ubuntu1_amd64.deb I got this: Building initial module for 3.8.0-29-generic Error! Bad return status for module build on kernel: 3.8.0-29-generic (x86_64) Consult /var/lib/dkms/fglrx/8.970/build/make.log for more information. update-initramfs: deferring update (trigger activated) Processing triggers for ureadahead ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.8.0-29-generic Processing triggers for libc-bin ... ldconfig deferred processing now taking place Any ideas? Anyone? I cant for the life of me figure out what I am doing wrong.

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question but a design consideration that I often run across in my day-to-day development work. Let's say you have a class that represents some kind of collection: Public Class ModifiedCustomerOrders Public Property Orders as List(Of ModifiedOrders) End Class Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the Modified Customer Orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you: (1) add filtering calls to the ModifiedCustomerOrders class so that you can say MyOrdersClass.RemoveCanceledOrders(); (2) create a Static/Shared "tooling" class that allows you to call OrdersFilters.RemoveCanceledOrders(MyOrders); (3) create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders(); (4) create a "Service" method that handles the getting of Orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA(); (5) something else? I tend to prefer the tooling/extension method approaches, as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having it as part of the class makes it a little bit more complicated to test. Typically, I choose to use extension methods where I am doing parameterless transformations/filters. As they get more complex, I will move them into a static class instead. Thoughts on this approach? How would you approach it?
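    To make option 3 concrete, a hedged VB.NET sketch matching the class above (ModifiedOrders and its IsCanceled property are assumptions about the model, not from the question):

        Imports System.Collections.Generic
        Imports System.Linq
        Imports System.Runtime.CompilerServices

        Public Module OrderFilters
            ' Pure transformation: easy to unit test without building the full
            ' ModifiedCustomerOrders aggregate and all of its data sources.
            <Extension()>
            Public Function RemoveCanceledOrders(orders As List(Of ModifiedOrders)) As List(Of ModifiedOrders)
                Return orders.Where(Function(o) Not o.IsCanceled).ToList()
            End Function
        End Module

    Usage is then simply Dim active = MyOrdersClass.Orders.RemoveCanceledOrders().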

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also three consumers of UserRepository: an admin interface which may list all users; a public front-end which may list all users; and an account-authenticated API which should only list its own users. Assuming UserRepository is something like this: class UsersRepository extends DatabaseAbstraction { private function query() { return $this->database()->select('users.*'); } public function getAll() { return $this->query()->exec(); } // IMPORTANT: // Tons of other methods for searching, filtering, // joining of other tables, ordering and such... } Keeping in mind the comment above, and the necessity to abstract user querying conditions, how should I handle querying of users filtered by account_id? I can picture three possible roads: 1. Should I create an AccountUsersRepository? class AccountUsersRepository extends UserRepository { public function __construct(Account $account) { $this->account = $account; } private function query() { return parent::query() ->where('account_id', '=', $this->account->id); } } This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way). 2. Should I put it as a method on AccountsRepository? class AccountsRepository extends DatabaseAbstraction { public function getAccountUsers(Account $account) { return $this->database() ->select('users.*') ->where('account_id', '=', $account->id) ->exec(); } } This requires the duplication of all UserRepository methods and may need another UserQuery layer that implements that querying logic in a chainable way. 3. Should I query UserRepository from within my Account entity? class Account extends Entity { public function getUsers() { return UserRepository::findByAccountId($this->id); } } This feels more like an aggregate root to me, but it introduces a dependency of UserRepository on the Account entity, which may violate a few principles. 4. Or am I missing the point completely? Maybe there's an even better solution? Footnote: besides permissions being a Service concern, in my understanding they shouldn't implement SQL queries but should leave that to repositories, since those may not even be SQL-driven.

    Read the article

  • KnpLabs / DoctrineBehaviors Translatable - currentLocale = null

    - by Ruben
    Using the trait \Knp\DoctrineBehaviors\Model\Translatable\Translation inside an entity, I see that the property currentLocale is never set, so we always obtain the default locale ('en') in all the calls to $this->translate(). If I change getDefaultLocale, all the translations are made correctly, so I think there is no problem with the fallback. I tried debugging the currentLocaleCallable. I see that if I put a var_dump($this->container->get('request')); in the constructor of currentLocaleCallable, the request has a locale set to null, while outside it the request has the correct locale. It seems that the container is not in the request scope; I don't know how I can get it to work. I posted an issue on GitHub: https://github.com/KnpLabs/DoctrineBehaviors/issues/71 EDITED This service is defined in vendor/knplabs/doctrine-behaviors/config/orm-services.yml and is: knp.doctrine_behaviors.reflection.class_analyzer: class: "%knp.doctrine_behaviors.reflection.class_analyzer.class%" public: false knp.doctrine_behaviors.translatable_listener: class: "%knp.doctrine_behaviors.translatable_listener.class%" public: false arguments: - "@knp.doctrine_behaviors.reflection.class_analyzer" - "%knp.doctrine_behaviors.reflection.is_recursive%" - "@knp.doctrine_behaviors.translatable_listener.current_locale_callable" tags: - { name: doctrine.event_subscriber } knp.doctrine_behaviors.translatable_listener.current_locale_callable: class: "%knp.doctrine_behaviors.translatable_listener.current_locale_callable.class%" arguments: - "@service_container" # lazy request resolution public: false EDIT 2: My composer.json "php": ">=5.3.3", "symfony/symfony": "2.3.*", "doctrine/orm": ">=2.2.3,<2.4-dev", "doctrine/doctrine-bundle": "1.2.*", "twig/extensions": "1.0.*", "symfony/assetic-bundle": "2.3.*", "symfony/swiftmailer-bundle": "2.3.*", "symfony/monolog-bundle": "2.3.*", "sensio/distribution-bundle": "2.3.*", "sensio/framework-extra-bundle": "2.3.*", "sensio/generator-bundle": "2.3.*", "incenteev/composer-parameter-handler": "~2.0", "friendsofsymfony/user-bundle": "1.3.*", "avalanche123/imagine-bundle": "v2.1", "raulfraile/ladybug-bundle": "~1.0", "genemu/form-bundle": "2.2.*", "friendsofsymfony/rest-bundle": "0.12.0", "stof/doctrine-extensions-bundle": "dev-master", "sonata-project/admin-bundle": "dev-master", "a2lix/translation-form-bundle": "1.*@dev", "sonata-project/user-bundle": "dev-master", "psliwa/pdf-bundle": "dev-master", "jms/serializer-bundle": "dev-master", "jms/di-extra-bundle": "dev-master", "knplabs/doctrine-behaviors": "dev-master", "sonata-project/doctrine-orm-admin-bundle": "dev-master", "knplabs/knp-paginator-bundle": "dev-master", "friendsofsymfony/jsrouting-bundle": "~1.1", "zendframework/zend-validator": ">=2.0.0-rc2", "zendframework/zend-barcode": ">=2.0.0-rc2"

    Read the article

  • DataContractJsonSerializer ReadObject Exception

    - by Dan Appleyard
    I am following the accepted answer of ASP.NET MVC How to pass JSON object from View to Controller as Parameter. Like the original question, I have a simple POCO. Everything works fine for me up until the DataContractJsonSerializer.ReadObject method. I am getting the following exception: Expecting element 'root' from namespace ''.. Encountered 'None' with name '', namespace ''. Public Overrides Sub OnActionExecuting(ByVal filterContext As ActionExecutingContext) If filterContext.HttpContext.Request.ContentType.Contains("application/json") Then Dim s As System.IO.Stream = filterContext.HttpContext.Request.InputStream Dim o = New DataContractJsonSerializer(RootType).ReadObject(s) filterContext.ActionParameters(Param) = o Else Dim xmlRoot = XElement.Load(New StreamReader(filterContext.HttpContext.Request.InputStream, filterContext.HttpContext.Request.ContentEncoding)) Dim o As Object = New XmlSerializer(RootType).Deserialize(xmlRoot.CreateReader) filterContext.ActionParameters(Param) = o End If End Sub Any ideas? Thanks
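    A hedged guess, since that exception means the serializer saw an empty stream: if anything earlier in the pipeline (another filter, model binding, tracing) has already read Request.InputStream, its position is at the end by the time ReadObject runs. Rewinding the stream first may be all that's needed:

        ' Rewind in case the request body was already consumed upstream.
        Dim s As System.IO.Stream = filterContext.HttpContext.Request.InputStream
        s.Position = 0
        Dim o = New DataContractJsonSerializer(RootType).ReadObject(s)

    It is also worth confirming in the debugger that s.Length is non-zero; if it is zero even after rewinding, the JSON body never arrived in the first place.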

    Read the article

  • C# SOCKS proxy service for HTTP requests

    - by Ed
    I'm trying to build a service that will forward HTTP requests from agents like a browser to the Tor service. The problem is, the Tor service only accepts SOCKS4a connections, so my solution is to listen for HTTP requests, get the URL they're requesting, make the request via Tor with the help of the Starksoft.Net.Proxy library, and then return the response. The library kind of works, but I'm not happy: it returns the HTTP headers with the response body, and it can't handle images, so the responses are messed up. How could I improve my code? I'm very new to network programming. Sorry for the long example. public AnonymiserService(ILogger logger) { try { _logger = logger; _logger.Log("Listening on port {0}...", Properties.Settings.Default.ListeningPort); StartListener(new string[] { string.Format("http://*:{0}/", Properties.Settings.Default.ListeningPort) }); } catch (Exception ex) { _logger.LogError("Exception!", ex); } } private void StartListener(string[] prefixes) { if (!HttpListener.IsSupported) { _logger.LogError("HttpListener isn't supported on this machine!"); return; } HttpListener listener = new HttpListener(); foreach (string s in prefixes) listener.Prefixes.Add(s); while (true) { listener.Start(); IAsyncResult result = listener.BeginGetContext(new AsyncCallback(ListenerCallback), listener); result.AsyncWaitHandle.WaitOne(); } } private void ListenerCallback(IAsyncResult result) { try { // Get HTTP request HttpListener listener = (HttpListener)result.AsyncState; HttpListenerContext context = listener.EndGetContext(result); _logger.Log("Retrieving [{0}]", context.Request.RawUrl); // Create connection // Use Tor as proxy IProxyClient proxyClient = new Socks4aProxyClient("localhost", 9050); TcpClient tcpClient = proxyClient.CreateConnection(context.Request.UserHostName, 80); // Create message // Need to set Connection: close to close the connection as soon as it's done byte[] data = Encoding.UTF8.GetBytes(String.Format("GET {0} HTTP/1.1\r\nHost: {1}\r\nConnection: close\r\n\r\n", context.Request.Url.PathAndQuery, context.Request.UserHostName)); // Send message NetworkStream ns = tcpClient.GetStream(); ns.Write(data, 0, data.Length); // Pass on HTTP response HttpListenerResponse responseOut = context.Response; if (ns.CanRead) { byte[] buffer = new byte[32768]; int read = 0; string responseString = string.Empty; // Read response while ((read = ns.Read(buffer, 0, buffer.Length)) > 0) { responseString += Encoding.UTF8.GetString(buffer, 0, read); } // Remove headers if (responseString.IndexOf("HTTP/1.1 200 OK") > -1) responseString = responseString.Substring(responseString.IndexOf("\r\n\r\n")); // Forward response byte[] byteArray = Encoding.UTF8.GetBytes(responseString); responseOut.OutputStream.Write(byteArray, 0, byteArray.Length); } // Close streams responseOut.OutputStream.Close(); ns.Close(); // Close connection tcpClient.Close(); _logger.Log("Retrieved [{0}]", context.Request.RawUrl); } catch (Exception ex) { _logger.LogError("Exception in ListenerCallback!", ex); } }
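    The image corruption comes from round-tripping the response through UTF-8 strings. A sketch of a binary-safe alternative for the read/forward portion: buffer the raw bytes, locate the end of the headers at the byte level, and forward only the body unchanged. (This replaces the "Read response" / "Remove headers" block above; it is an illustration, not the questioner's code.)

        using (var bufferStream = new System.IO.MemoryStream())
        {
            byte[] chunk = new byte[32768];
            int read;
            while ((read = ns.Read(chunk, 0, chunk.Length)) > 0)
                bufferStream.Write(chunk, 0, read);

            byte[] raw = bufferStream.ToArray();
            int bodyStart = FindHeaderEnd(raw); // index just past "\r\n\r\n"
            responseOut.OutputStream.Write(raw, bodyStart, raw.Length - bodyStart);
        }

        // Locates the CRLFCRLF header/body separator; returns 0 if none found,
        // in which case everything is forwarded untouched.
        private static int FindHeaderEnd(byte[] data)
        {
            for (int i = 0; i + 3 < data.Length; i++)
            {
                if (data[i] == '\r' && data[i + 1] == '\n' &&
                    data[i + 2] == '\r' && data[i + 3] == '\n')
                    return i + 4;
            }
            return 0;
        }

    Because no text decoding ever happens, JPEGs, PNGs, and gzipped bodies pass through byte-for-byte.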

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it. Note: These instructions will change slightly as the bits get updated for the eventual Beta release; I will update this post as soon as I get a chance to run this setup on a more recent build. I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images: DC (domain controller), ExchangeUM (Exchange Server 2010), CS-SE (Microsoft Communications Server 2010 Standard Edition), and TS (development machine). I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server. I'm making a couple of assumptions here: you have an existing CS 2010 site and cluster configured (we'll look at this in a future post), and you're starting with a fully patched Windows Server 2008 R2 machine that is joined to your domain. This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment. Installing the UCMA 3.0 SDK: Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK. Install Communications Server Core Components: UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer's point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning: all you need in your app.config is your ApplicationId, e.g.: urn:application:MyApplication. How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA also queries a replicated copy of the Central Management Database to retrieve the application's configuration data and also the configuration data for any endpoints. In this step, we'll run Bootstrapper.exe to install the CS Core components; this checks for the following components and installs them if they are not already present: VcRedist, Sqlexpress, Sqlnativeclient, Sqlbackcompat, Ucmaredist, OcsCore.msi. Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command: Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache" Create a New Trusted Application Pool: The next step is to create a new trusted application pool for the new server.
    Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command: New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name> Verify that the new server was added to the CS topology by running the following PowerShell command: (Get-CsTopology -AsXml).ToString() > Topology.xml This creates a file called Topology.xml in the directory that you ran the command from. Open the file, find the Clusters section, and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is a part of. <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true"> <ClusterId SiteId="UcMarketing2" Number="5" /> <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com"> <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" /> </Machine> </Cluster> Configure CS Management Store Replication: At this point, we have the CS Core components installed and the server configured as a trusted application pool. We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server: Enable-CSReplica The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology: Get-CSManagementStoreReplicationStatus In the cmdlet output, the UpToDate property of the new server is still False. Run the following PowerShell command to force the replication to run: Invoke-CSManagementStoreReplicationStatus Run Get-CSManagementStoreReplicationStatus again to verify that the new server is now up to date. Request and Set a New Certificate: The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate: Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority> Setting the -Verbose switch on the cmdlet creates an XML file with its output. Open the XML file and copy the thumbprint of the generated certificate.
<?xml version="1.0" encoding="utf-8"?> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info> <Action Time="20100512T212258"> <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info> <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info> <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info> <Info Time="20100512T212259">The certificate was issued.</Info> <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info> <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259"> <Info Time="20100512T212259">0 updates</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command has completed</Info> </Action> </Action> Run the following PowerShell command to set the certificate: Set-CsCertificate -Type Default -Thumbprint <Thumbprint> Wrapping Up You now have a new UCMA 3.0 application server in your Communications Server 2010 server topology.  You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell.  Well take a look at how to do that in another post. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • What's the correct way to POST a compressed JSON string with RestSharp?

    - by Steve Dunn
    I want to use RestSharp to POST something somewhere. I'm posting straight JSON (and not POCOs). Because I'm posting plain JSON, I believe I need to use this workaround instead of setting Body: request.AddParameter( "application/json", myJsonString, ParameterType.RequestBody); This works fine when I'm not compressing the JSON. When I do, using this: request.Headers.Add("Content-Encoding", "gzip"); request.AddParameter( "application/json", GZipStream.CompressString(myJsonString), ParameterType.RequestBody); it doesn't work. I stepped through the code, and in RestClient::ConfigureHttp I see: http.RequestBody = body.Value.ToString(); Since I'm giving it a byte array, body.Value.ToString() just yields "System.Byte[]". Is there a way for RestSharp to handle a gzipped JSON string in a POST request?
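    One workaround while RestSharp stringifies byte-array bodies: drop down to HttpWebRequest for this particular call. A hedged sketch (assuming using System.Net and System.IO); the URL is a placeholder, and GZipStream.CompressString is the same helper the question uses:

        byte[] body = GZipStream.CompressString(myJsonString);

        var req = (HttpWebRequest)WebRequest.Create("http://example.com/endpoint");
        req.Method = "POST";
        req.ContentType = "application/json";
        req.Headers["Content-Encoding"] = "gzip";  // not a restricted header, so assignable
        req.ContentLength = body.Length;

        using (var stream = req.GetRequestStream())
        {
            // Write the compressed bytes untouched; no string round-trip.
            stream.Write(body, 0, body.Length);
        }

        using (var resp = (HttpWebResponse)req.GetResponse())
        {
            // Inspect resp.StatusCode / read resp.GetResponseStream() as needed.
        }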

    Read the article

  • git on HTTP with gitolite and nginx

    - by Arnaud
    I am trying to set up a server where my git repo would be accessible over HTTP(S). I am using gitolite and nginx (and GitLab for the web interface, but I doubt it makes any difference). I have searched the whole afternoon and I think I'm stuck. I think I have understood that nginx needs fcgiwrap to work with gitolite, so I tried several configurations, but none of them work. My repositories are at /home/git/repositories. Here are the three nginx configurations I have tried. 1: location ~ /git(/.*) { gzip off; root /usr/lib/git-core; fastcgi_pass unix:/var/run/fcgiwrap.socket; include /etc/nginx/fcgiwrap.conf; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/; fastcgi_param SCRIPT_NAME git-http-backend; fastcgi_param GIT_HTTP_EXPORT_ALL ""; fastcgi_param GIT_PROJECT_ROOT /home/git/repositories; fastcgi_param PATH_INFO $1; #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } Result: > git clone http://myservername/projectname.git test/ Cloning into test... fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server? and > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed 2: location ~ /git(/.*) { fastcgi_pass localhost:9001; include /etc/nginx/fcgiwrap.conf; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param GIT_HTTP_EXPORT_ALL ""; fastcgi_param GIT_PROJECT_ROOT /home/git/repositories; fastcgi_param PATH_INFO $1; } Result: > git clone http://myservername/projectname.git test/ Cloning into test... fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server? and > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed 3: location ~ ^.*\.git/objects/([0-9a-f]+/[0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx))$ { root /home/git/repositories/; } location ~ ^.*\.git/(HEAD|info/refs|objects/info/.*|git-(upload|receive)-pack)$ { root /home/git/repositories; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param PATH_INFO $uri; fastcgi_param GIT_PROJECT_ROOT /home/git/repositories; include /etc/nginx/fcgiwrap.conf; } Result: > git clone http://myservername/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/projectname.git/info/refs fatal: HTTP request failed and > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed Also note that with any of those configurations, when I try to clone with a project name that actually doesn't exist, I get a 502 error. Has anyone already succeeded in doing this? What am I doing wrong? Thanks.
    UPDATE: the nginx error log said: 2012/04/05 17:34:50 [crit] 21335#0: *50 connect() to unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while connecting to upstream, client: 192.168.12.201, server: myservername, request: "GET /git/oct_editor.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername" So I changed the permissions on /var/run/fcgiwrap.socket, and now I have: > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 403 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed Here is the error.log entry I have now: 2012/04/05 17:36:52 [error] 21335#0: *78 FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/git-core/git/projectname.git/info)" while reading response header from upstream, client: 192.168.12.201, server: myservername, request: "GET /git/projectname.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername" I'll keep investigating.

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
    A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it into a ClaimsPrincipal that the WCF services could use. The thing is, that code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected into any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected into the WCF authorization context. public class SamlAuthenticationInterceptor : RequestInterceptor {   SecurityTokenHandlerCollection handlers;   public SamlAuthenticationInterceptor()     : base(false)   {     this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;   }   public override void ProcessRequest(ref RequestContext requestContext)   {     SecurityToken token = ExtractCredentials(requestContext.RequestMessage);     if (token != null)     {       ClaimsIdentityCollection claims = handlers.ValidateToken(token);       var principal = new ClaimsPrincipal(claims);       InitializeSecurityContext(requestContext.RequestMessage, principal);     }     else     {       DenyAccess(ref requestContext);     }   }   private void DenyAccess(ref RequestContext requestContext)   {     Message reply = Message.CreateMessage(MessageVersion.None, null);     HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };     responseProperty.Headers.Add("WWW-Authenticate",           String.Format("Basic realm=\"{0}\"", ""));     reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;     requestContext.Reply(reply);     requestContext = null;   }   private SecurityToken ExtractCredentials(Message requestMessage)   {     HttpRequestMessageProperty request = (HttpRequestMessageProperty)  requestMessage.Properties[HttpRequestMessageProperty.Name];     string authHeader = request.Headers["Authorization"];     if (authHeader != null && authHeader.Contains("<saml"))     {       XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));       var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();       SecurityToken token = col.ReadToken(xmlReader);                                        return token;     }     return null;   }   private void InitializeSecurityContext(Message request, IPrincipal principal)   {     List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();     policies.Add(new PrincipalAuthorizationPolicy(principal));     ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());     if (request.Properties.Security != null)     {       request.Properties.Security.ServiceSecurityContext = securityContext;     }     else     {       request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };      }    }    class PrincipalAuthorizationPolicy : IAuthorizationPolicy    {      string id = Guid.NewGuid().ToString();      IPrincipal user;      public PrincipalAuthorizationPolicy(IPrincipal user)      {        this.user = user;      }      public ClaimSet Issuer      { 
       get { return ClaimSet.System; }      }      public string Id      {        get { return this.id; }      }      public bool Evaluate(EvaluationContext evaluationContext, ref object state)      {        evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));        evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });        evaluationContext.Properties["Principal"] = user;        return true;      }    } A WCF Data Service, as any other WCF Service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the “svc” file. <%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService"                 Factory="AppServiceHostFactory" %> using System; using System.ServiceModel; using System.ServiceModel.Activation; using Microsoft.ServiceModel.Web; class AppServiceHostFactory : ServiceHostFactory {    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)   {     WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);     result.Interceptors.Add(new SamlAuthenticationInterceptor());                 return result;   } } WCF Data Services includes an specific WCF host of out the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one as I am doing in the code above (WebServiceHost2 is part of the REST Starter kit). Finally, the client application needs to pass the SAML token somehow to the data service. In case you are using any Http client library for consuming the data service, that’s easy to do, you only need to include the SAML token as part of the “Authorization” header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject a SAML token into the DataServiceContext instance. That class provides an event “SendingRequest” that any client application can leverage to include custom code that modified the Http request before it is sent to the service. So, you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS, and adds that token as part of the “Authorization” header. public static class DataServiceContextExtensions {        public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)   {     string address = string.Format(STSAddressFormat, baseStsAddress, realm);                  string token = NegotiateSecurityToken(address);     context.SendingRequest += (source, args) =>     {       args.RequestHeaders.Add("Authorization", token);     };   } private string NegotiateSecurityToken(string address) { } } I left the NegociateSecurityToken method empty for this extension as it depends pretty much on how you are negotiating tokens from an existing STS. In case you want to end-to-end REST solution that involves an Http endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project in codeplex.
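    Purely as illustration, here is one way that method could be filled in - a minimal sketch, not the author's implementation, assuming the STS exposes a WS-Trust 1.3 username endpoint and that you are using WIF's WSTrustChannelFactory. The binding choice, the hard-coded credentials, and the AppliesTo address are all placeholder assumptions you would replace:

    using System.IdentityModel.Tokens;
    using System.ServiceModel;
    using System.ServiceModel.Security;
    using Microsoft.IdentityModel.Protocols.WSTrust;
    using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;

    private static string NegotiateSecurityToken(string address)
    {
      // Username endpoint over HTTPS; swap the binding for whatever your STS exposes.
      var binding = new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential);
      var factory = new WSTrustChannelFactory(binding, new EndpointAddress(address))
      {
        TrustVersion = TrustVersion.WSTrust13
      };

      // Hypothetical credentials - replace with real ones or another credential type.
      factory.Credentials.UserName.UserName = "username";
      factory.Credentials.UserName.Password = "password";

      var rst = new RequestSecurityToken(WSTrust13Constants.RequestTypes.Issue)
      {
        // The relying party the token is issued for; this URL is a placeholder.
        AppliesTo = new EndpointAddress("http://localhost/ContactsDataService.svc")
      };

      var channel = (WSTrustChannel)factory.CreateChannel();
      var token = (GenericXmlSecurityToken)channel.Issue(rst);

      // The raw SAML XML is what the interceptor's ExtractCredentials
      // method looks for in the Authorization header.
      return token.TokenXml.OuterXml;
    }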

    Read the article

  • HTTP client - HTTP 405 error "Method not allowed". I send a HTTP Post but for some reason HTTP Get i

    - by Shino88
    Hey, I am using the Apache HTTP client library. I have created a class which sends a POST request to a servlet. I have set up the parameters for the client and I have created an HttpPost object to be sent, but for some reason when I execute the request I get a response that says the GET method is not supported (which is true, because I have only implemented a doPost method in my servlet). It seems that a GET request is being sent, but I don't know why. The POST method worked before, but then I started getting HTTP error 417 "Expectation Failed", which I fixed by adding parameters. Below is my class with the POST method. P.S. I am developing for Android. import java.io.IOException; import java.io.UnsupportedEncodingException; import org.apache.http.HttpException; import org.apache.http.HttpRequest; import org.apache.http.HttpRequestInterceptor; import org.apache.http.HttpVersion; import org.apache.http.ProtocolVersion; import org.apache.http.client.ResponseHandler; import org.apache.http.client.methods.HttpPost; import org.apache.http.conn.scheme.PlainSocketFactory; import org.apache.http.conn.scheme.Scheme; import org.apache.http.conn.scheme.SchemeRegistry; import org.apache.http.conn.ssl.SSLSocketFactory; import org.apache.http.entity.StringEntity; import org.apache.http.impl.client.DefaultHttpClient; import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager; import org.apache.http.message.BasicHttpResponse; import org.apache.http.params.BasicHttpParams; import org.apache.http.params.CoreConnectionPNames; import org.apache.http.params.CoreProtocolPNames; import org.apache.http.params.HttpParams; import org.apache.http.protocol.HTTP; import org.apache.http.protocol.HttpContext; import android.util.Log; public class HTTPrequestHelper { private final ResponseHandler<String> responseHandler; private static final String CLASSTAG = HTTPrequestHelper.class.getSimpleName(); private static final DefaultHttpClient client; static { HttpParams params = new BasicHttpParams(); params.setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); params.setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, HTTP.UTF_8); // params.setParameter(CoreProtocolPNames.USER_AGENT, "Android-x"); params.setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, 15000); params.setParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK, false); SchemeRegistry schemeRegistry = new SchemeRegistry(); schemeRegistry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80)); schemeRegistry.register(new Scheme("https", SSLSocketFactory.getSocketFactory(), 443)); ThreadSafeClientConnManager cm = new ThreadSafeClientConnManager(params, schemeRegistry); client = new DefaultHttpClient(cm, params); } public HTTPrequestHelper(ResponseHandler<String> responseHandler) { this.responseHandler = responseHandler; } public void performrequest(String url, String para) { HttpPost post = new HttpPost(url); StringEntity parameters; try { parameters = new StringEntity(para); post.setEntity(parameters); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } BasicHttpResponse errorResponse = new BasicHttpResponse( new ProtocolVersion("HTTP_ERROR", 1, 1), 500, "ERROR"); try { client.execute(post, this.responseHandler); } catch (Exception e) { errorResponse.setReasonPhrase(e.getMessage()); try { this.responseHandler.handleResponse(errorResponse); } catch (Exception ex) { Log.e("ouch", "!!! IOException " + ex.getMessage()); } } } } I tried adding the Allow header to the request, but that did not work either, and I am not sure I was doing it right. Below is the code. client.addRequestInterceptor(new HttpRequestInterceptor() { @Override public void process(HttpRequest request, HttpContext context) throws HttpException, IOException { // request.addHeader("Allow", "POST"); } });

    Read the article

  • PYTHON: ntlm authentication

    - by Svetlana
    Hello! I'm trying to implement NTLM authentication against IIS (Windows Server 2003) from Windows 7 with Python. LAN Manager Authentication Level: Send NTLM response only. The client machine and the server are in the same domain. The domain controller (AD) is on another server (also running Windows Server 2003). I receive "401.1 - Unauthorized: Access is denied due to invalid credentials." Could you please help me find out what is wrong with this code, and/or show me other possible directions to solve this problem (using NTLM or Kerberos)? import sys, httplib, base64, string import urllib2 import win32api import sspi import pywintypes import socket class WindoewNtlmMessageGenerator: def __init__(self, user=None): import win32api, sspi if not user: user = win32api.GetUserName() self.sspi_client = sspi.ClientAuth("NTLM", user) def create_auth_req(self): import pywintypes output_buffer = None error_msg = None try: error_msg, output_buffer = self.sspi_client.authorize(None) except pywintypes.error: return None auth_req = output_buffer[0].Buffer auth_req = base64.encodestring(auth_req) auth_req = string.replace(auth_req, '\012', '') return auth_req def create_challenge_response(self, challenge): import pywintypes output_buffer = None input_buffer = challenge error_msg = None try: error_msg, output_buffer = self.sspi_client.authorize(input_buffer) except pywintypes.error: return None response_msg = output_buffer[0].Buffer response_msg = base64.encodestring(response_msg) response_msg = string.replace(response_msg, '\012', '') return response_msg fname = 'request.xml' request = file(fname).read() ip_host = '10.0.3.112' ntlm_gen = WindoewNtlmMessageGenerator() auth_req_msg = ntlm_gen.create_auth_req() auth_req_msg_dec = base64.decodestring(auth_req_msg) auth_req_msg = string.replace(auth_req_msg, '\012', '') webservice = httplib.HTTPConnection(ip_host) webservice.putrequest("POST", "/idc/idcplg") webservice.putheader("Content-length", "%d" % len(request)) webservice.putheader('Authorization', 'NTLM' + ' ' + auth_req_msg) webservice.endheaders() resp = webservice.getresponse() resp.read() challenge = resp.msg.get('WWW-Authenticate') challenge_dec = base64.decodestring(challenge.split()[1]) msg3 = ntlm_gen.create_challenge_response(challenge_dec) webservice = httplib.HTTP(ip_host) webservice.putrequest("POST", "/idc/idcplg?IdcService=LOGIN&Auth=Intranet") webservice.putheader("Host", SHOD) webservice.putheader("Content-length", "%d" % len(request)) webservice.putheader('Authorization', 'NTLM' + ' ' + msg3) webservice.putheader("Content-type", "text/xml; charset=\"UTF-8\"") webservice.putheader("SOAPAction", "\"\"") webservice.endheaders() webservice.send(request) statuscode, statusmessage, header = webservice.getreply() res = webservice.getfile().read() res_file = file('result.txt', 'wb') res_file.write(res) res_file.close() sspi.py is available here: http://www.koders.com/python/fidF3B0061A07CD13BA35FF263E3E45252CFABFAA3B.aspx?s=timer Thanks!

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate, here's a simple example. If you have a Web API method like this:

[HttpGet]
public HttpResponseMessage Authenticate(string username, string password)
{ … }

and then hit it with a URL like this:

http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit

it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead, like this:

[HttpPost]
public HttpResponseMessage Authenticate(string username, string password)
{ … }

and hit it with a POST HTTP request like this:

POST http://localhost:88/samples/authenticate HTTP/1.1
Host: localhost:88
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Content-type: application/x-www-form-urlencoded
Content-Length: 30

Username=ricks&Password=sekrit

you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null, and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago, I got a lot of responses back asking why I'd want to do this in the first place - after all, HTML form submissions are the domain of MVC and not Web API, which is a valid point. However, the more common use case is using POST variables with AJAX calls. The following is quite common for passing simple values:

$.post(url, { Username: "Rick", Password: "sekrit" }, function(result) { … });

but alas that doesn't work.

How ASP.NET Web API handles Content Bodies

Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter, which is responsible for de-serializing the content into whatever value the method requires. The restriction comes from the async nature of Web API, where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read only once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data:

- Model Binding - object property mapping to bind POST values
- FormDataCollection - a collection of POST keys/values

ModelBinding POST Values - Binding POST data to Object Properties

The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter.
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated, including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with model binding is that you need a model for it to work: you have to provide a strongly typed object that can receive the data, and this object has to map the inbound data. To rewrite the example above to use model binding, I have to create a class that maps the properties I need as parameters:

public class LoginData
{
    public string Username { get; set; }
    public string Password { get; set; }
}

and then accept the data like this in the API method:

[HttpPost]
public HttpResponseMessage Authenticate(LoginData login)
{
    string username = login.Username;
    string password = login.Password;
    …
}

This works fine, mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this:

POST http://localhost:88/samples/authenticate HTTP/1.1
Host: localhost:88
Accept: application/json
Content-type: application/json
Content-Length: 40

{"Username":"ricks","Password":"sekrit"}

it works as well, and transparently, courtesy of the nice content negotiation features of Web API. There's nothing wrong with using model binding, and in fact it's a common practice to use (view) model objects for inputs coming back from the client and to map them into these models. But it can be kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. You don't always have to pass structured data; sometimes there are just a couple of simple values that need to be sent back. If all you need is to pass a couple of operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't, then you can often end up with a plethora of 'message objects' that serve no further purpose than to make model binding work. Note that you can accept multiple parameters with model binding, so the following would still work:

[HttpPost]
public HttpResponseMessage Authenticate(LoginData login, string loginDomain)

but only the object will be bound to POST data. As long as loginDomain comes from the query string or route data, this will work.

Collecting POST values with FormDataCollection

Another, more dynamic approach to handling POST values is to collect the POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general), and you read the values out individually by querying for each key:

[HttpPost]
public HttpResponseMessage Authenticate(FormDataCollection form)
{
    var username = form.Get("Username");
    var password = form.Get("Password");
    …
}

The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters yourself (a small helper sketch follows below), and it gets a bit more complicated to test such a setup, as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use, and especially with string parameters it is easy to deal with.
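As an aside, one way to take the edge off those manual type conversions is a small extension method. This is purely a sketch, not part of Web API itself; the GetValue<T> name and the fallback behavior are made up:

using System;
using System.Net.Http.Formatting;

public static class FormDataCollectionExtensions
{
    // Hypothetical helper: reads a POST value and converts it to T,
    // falling back to default(T) when the key is missing.
    public static T GetValue<T>(this FormDataCollection form, string key)
    {
        string raw = form.Get(key);
        if (raw == null)
            return default(T);

        return (T)Convert.ChangeType(raw, typeof(T));
    }
}

An action could then read typed values with, say, form.GetValue<int>("LoginCount") instead of repeating the parse-and-check boilerplate at every call site.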
The FormDataCollection approach is also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values.

What about [FromBody]?

Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature, you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how [FromBody] works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. I'm not really sure where the Web API team thought [FromBody] would be a good fit, other than as a down and dirty way to send a full string buffer.

Extending Web API to make multiple POST Vars work? Don't think so

Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad, because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and the Web API community, I didn't hear anything that even suggests that this is possible. From what I could find, I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work, there's not much that can be done. If somebody has an idea how this could be accomplished, I would love to hear about it.

Do we really need multi-value POST mapping?

I think that POST value mapping is a feature one would expect of any API tool. If you look at common APIs out there, like Flickr and Google Maps, they all work with POST data. POST data is very prominent - much more so than JSON inputs - so supporting as many options as possible would seem to be crucial. All that aside, Web API does provide very nice features with model binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes, as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many, many questions like this one: "How do I bind a simple POST value in ASP.NET WebAPI RC?" - with no clear answer.
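For the record, here is roughly what the [FromBody] pattern described above looks like on the wire - a minimal sketch with a made-up action name, shown only to illustrate the non-standard "=content" body format:

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class SamplesController : ApiController
{
    [HttpPost]
    public HttpResponseMessage PostRawValue([FromBody] string value)
    {
        // With the request below, value arrives as "sekrit".
        return Request.CreateResponse(HttpStatusCode.OK, value);
    }
}

and the matching request, with the unusual key-less body:

POST http://localhost:88/samples/postrawvalue HTTP/1.1
Host: localhost:88
Content-type: application/x-www-form-urlencoded
Content-Length: 7

=sekrit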
I hope that for V.next of Web API, Microsoft will consider this a feature that's worth adding.

Related Articles

- Passing multiple POST parameters to Web API Controller Methods
- Mike Stall's post: How Web API does Parameter Binding
- Where does ASP.NET Web API Fit?

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

< Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >