Search Results

Search found 5578 results on 224 pages for 'transport rules'.


  • SQL Server 2008 R2 Enterprise won't install on Windows 2008 R2 Enterprise

    - by Carlos Paulino
    I've been trying to install SQL Server on a new Windows Server 2008. I have tried everything, but I haven't been able to narrow down the problem. When the installation fails I get "Exit code (Decimal): -2068643839". The problem with this is that, according to Microsoft, this is a generic error code. I followed their guide and looked into the detail.txt inside C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\ but I can't find anything that specifies the exact error. Any suggestions? Thanks in advance. I uploaded the detail.txt to http://www.megaupload.com/?d=0MV46SZH because it is too big to paste here. Below is the summary.txt:

        Overall summary:
          Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Exit code (Decimal): -2068643839
          Exit facility code: 1203
          Exit error code: 1
          Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Start time: 2011-02-28 11:29:56
          End time: 2011-02-28 11:34:45
          Requested action: Install

        Machine Properties:
          Machine name: SA-SERVER
          Machine processor count: 8
          OS version: Windows Server 2008 R2
          OS service pack: Service Pack 1
          OS region: United States
          OS language: English (United States)
          OS architecture: x64
          Process architecture: 64 Bit
          OS clustered: No

        Product features discovered:
          Product Instance Instance ID Feature Language Edition Version Clustered

        Package properties:
          Description: SQL Server Database Services 2008 R2
          ProductName: SQL Server 2008 R2
          Type: RTM
          Version: 10
          SPLevel: 0
          Installation location: F:\x64\setup\
          Installation edition: ENTERPRISE

        User Input Settings:
          ACTION: Install
          ADDCURRENTUSERASSQLADMIN: True
          AGTSVCACCOUNT: NT AUTHORITY\SYSTEM
          AGTSVCPASSWORD: *****
          AGTSVCSTARTUPTYPE: Manual
          ASBACKUPDIR: Backup
          ASCOLLATION: Latin1_General_CI_AS
          ASCONFIGDIR: Config
          ASDATADIR: Data
          ASDOMAINGROUP: <empty>
          ASLOGDIR: Log
          ASPROVIDERMSOLAP: 1
          ASSVCACCOUNT: <empty>
          ASSVCPASSWORD: *****
          ASSVCSTARTUPTYPE: Automatic
          ASSYSADMINACCOUNTS: <empty>
          ASTEMPDIR: Temp
          BROWSERSVCSTARTUPTYPE: Disabled
          CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini
          CUSOURCE:
          ENABLERANU: False
          ENU: True
          ERRORREPORTING: False
          FARMACCOUNT: <empty>
          FARMADMINPORT: 0
          FARMPASSWORD: *****
          FEATURES: SQLENGINE,BIDS,CONN,IS,BC,SDK,SSMS,ADV_SSMS,SNAC_SDK,OCS
          FILESTREAMLEVEL: 0
          FILESTREAMSHARENAME: <empty>
          FTSVCACCOUNT: <empty>
          FTSVCPASSWORD: *****
          HELP: False
          IACCEPTSQLSERVERLICENSETERMS: False
          INDICATEPROGRESS: False
          INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
          INSTALLSHAREDWOWDIR: C:\Program Files (x86)\Microsoft SQL Server\
          INSTALLSQLDATADIR: <empty>
          INSTANCEDIR: D:\SQLServer
          INSTANCEID: MSSQLSERVER
          INSTANCENAME: MSSQLSERVER
          ISSVCACCOUNT: NT AUTHORITY\SYSTEM
          ISSVCPASSWORD: *****
          ISSVCSTARTUPTYPE: Automatic
          NPENABLED: 0
          PASSPHRASE: *****
          PCUSOURCE:
          PID: *****
          QUIET: False
          QUIETSIMPLE: False
          ROLE: AllFeatures_WithDefaults
          RSINSTALLMODE: FilesOnlyMode
          RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
          RSSVCPASSWORD: *****
          RSSVCSTARTUPTYPE: Automatic
          SAPWD: *****
          SECURITYMODE: SQL
          SQLBACKUPDIR: <empty>
          SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
          SQLSVCACCOUNT: NT AUTHORITY\SYSTEM
          SQLSVCPASSWORD: *****
          SQLSVCSTARTUPTYPE: Automatic
          SQLSYSADMINACCOUNTS: SA-SERVER\Administrator
          SQLTEMPDBDIR: <empty>
          SQLTEMPDBLOGDIR: <empty>
          SQLUSERDBDIR: <empty>
          SQLUSERDBLOGDIR: <empty>
          SQMREPORTING: False
          TCPENABLED: 1
          UIMODE: Normal
          X86: False
          Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini

        Detailed results:
          Feature: Database Engine Services                 Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: SQL Client Connectivity SDK              Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Integration Services                     Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Client Tools Connectivity                Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Management Tools - Complete              Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Management Tools - Basic                 Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Client Tools SDK                         Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Client Tools Backwards Compatibility     Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Business Intelligence Development Studio Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed
          Feature: Microsoft Sync Framework                 Status: Failed: see logs for details  MSI status: Passed  Configuration status: Passed

        Rules with failures:
          Global rules:
          Scenario specific rules:
          Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\SystemConfigurationCheck_Report.htm
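
    One way to make the "generic" exit code slightly less opaque: Setup composes it HRESULT-style, so the facility and error fields it prints in the summary can be recovered from the decimal value. A small illustration in C# (just bit arithmetic, not an official Microsoft decoding tool):

        using System;

        class DecodeExitCode
        {
            static void Main()
            {
                int exitCode = -2068643839;                // from summary.txt
                uint hresult = unchecked((uint)exitCode);  // 0x84B30001
                uint facility = (hresult >> 16) & 0x7FF;   // 1203, matches "Exit facility code"
                uint error = hresult & 0xFFFF;             // 1, matches "Exit error code"
                Console.WriteLine("0x{0:X8} facility={1} error={2}", hresult, facility, error);
            }
        }

    So the summary's facility 1203 / error 1 pair is just the exit code split apart; the actual failure reason still has to come out of detail.txt.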


  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP flood which is application dependent. I run a game server, and an attacker is flooding me with "getstatus" queries. The GameServer replies to each query, which drives output to the attacker's IP as high as 30 Mb/s and causes server lag. Here are the packet details: the packet starts with 4 bytes of 0xff and then getstatus, so theoretically the packet looks like "\xff\xff\xff\xffgetstatus ". I've tried a lot of iptables variations, like state matching alongside rate limiting, but those didn't work. Rate limiting works well, but only when the server is not started. As soon as the server starts, no iptables rule seems to block it. Anyone else got more solutions? Someone asked me to contact the provider and get it done at the network/router level, but that looks very odd and I believe they might not do it, since it would also affect other clients. Responding to all those answers, I'd say: Firstly, it's a VPS, so they can't do it for me. Secondly, I don't care if something is coming in, but since the traffic is application generated, there has to be an OS-level solution to block the outgoing packets; at least the outgoing ones must be stopped. Thirdly, it's not DDoS, since just 400 Kb/s of input generates 30 Mb/s of output from my GameServer; that never happens in a DDoS. Asking for a provider/hardware-level solution would make sense in that case, but this one is different. And yes, banning his IP stops the flood of outgoing packets, but he has many more IP addresses, as he spoofs his original one, so I just need something to block him automatically. I've even tried a lot of firewalls, but as you know they are just front-ends to iptables, so if something doesn't work in iptables, what would the firewalls do? These were the rules I tried:

        iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource
        iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP

    They work for attacks on unused ports, but when the server is listening and responding to the attacker's incoming queries, they never work. Okay Tom.H, your rules were working when I modified them somewhat, like this:

        iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource
        iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP

    They worked very well for about 3 days: the string "xxxxxxxxxx" would be rate-limited and blocked if someone flooded, and the rules didn't affect the clients. But just today I tried updating the chain to remove a previously blocked IP; for that I had to flush the chain and restore this rule (iptables -X and iptables -F), while some clients were already connected to servers, including me. Restoring the rules now blocks some clients' strings completely, while others are not affected. So does this mean I need to restart the server? Why else would this happen, given that the last time the rules were applied and working, no one was connected?
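
    Since the complaint is about the outgoing replies, one direction worth trying is policing the server's own responses in the OUTPUT chain rather than the queries in INPUT. A rough, untested sketch: it assumes a Quake3-style server on UDP port 27960 whose reply packets contain the string "statusResponse" - the port, string, and limits are placeholders to adapt, and --hashlimit-above needs a reasonably recent iptables/xt_hashlimit:

        # Throttle the game server's own getstatus replies on the way out.
        iptables -N STATUS_OUT
        iptables -A OUTPUT -p udp --sport 27960 -j STATUS_OUT
        # Allow a modest number of status replies per destination IP, drop the excess.
        iptables -A STATUS_OUT -m string --string "statusResponse" --algo bm --to 64 \
                 -m hashlimit --hashlimit-above 10/sec --hashlimit-mode dstip \
                 --hashlimit-name statusflood -j DROP
        iptables -A STATUS_OUT -j ACCEPT

    Because the throttle sits on the reply path, it keeps working while the game server is up, which is exactly where the INPUT-side recent rules stop helping; legitimate clients querying at a normal rate stay under the per-destination cap.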


  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet but rather just for the sake of learning iptables and *nix security more thoroughly. Everything is well commented - so if anyone sees something I missed please let me know! The *nat table's "--to-ports" point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons. Layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine but it's all or nothing. :) Many thanks, Peter H.

        *nat
        #Flush previous rules, chains and counters for the 'nat' table
        -F
        -X
        -Z
        #Redirect traffic to alternate internal ports
        -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
        -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
        -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
        -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
        COMMIT

        *filter
        #Flush previous settings, chains and counters for the 'filter' table
        -F
        -X
        -Z
        #Set default behavior for all connections and protocols
        -P INPUT DROP
        -P OUTPUT DROP
        -A FORWARD -j DROP
        #Only accept loopback traffic originating from the local NIC
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
        #Accept all outgoing non-fragmented traffic having a valid state
        -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        #Drop fragmented incoming packets (Not always malicious - acceptable for use now)
        -A INPUT -f -j DROP
        #Allow ping requests rate limited to one per second (burst ensures reliable results for high latency connections)
        -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT
        #Declaration of custom chains
        -N INSPECT_TCP_FLAGS
        -N INSPECT_STATE
        -N INSPECT
        #Drop incoming tcp connections with invalid tcp-flags
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
        #Accept incoming traffic having either an established or related state
        -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT
        #Drop new incoming tcp connections if they aren't SYN packets
        -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP
        #Drop incoming traffic with invalid states
        -A INSPECT_STATE -m state --state INVALID -j DROP
        #INSPECT chain definition
        -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
        -A INSPECT -j INSPECT_STATE
        #Route incoming traffic through the INSPECT chain
        -A INPUT -j INSPECT
        #Accept redirected HTTP traffic via HA reverse proxy
        -A INPUT -p tcp --dport 8080 -j ACCEPT
        #Accept redirected HTTPS traffic via STUNNEL SSH gateway (As well as tunneled HTTPS traffic destined for other services)
        -A INPUT -p tcp --dport 8443 -j ACCEPT
        #Accept redirected DNS traffic for NSD authoritative nameserver
        -A INPUT -p udp --dport 8053 -j ACCEPT
        #Accept redirected SSH traffic for OpenSSH server
        #Temp solution:
        -A INPUT -p tcp --dport 8022 -j ACCEPT
        #Ideal solution:
        #Limit new ssh connections to max 10 per 10 minutes while allowing an "unlimited" (or better reasonably limited?) number of established connections.
        #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
        #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
        COMMIT

        *mangle
        #Flush previous rules, chains and counters in the 'mangle' table
        -F
        -X
        -Z
        COMMIT
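
    On the two commented-out lines: --state is an option of the state match, so it needs a preceding -m state, and the recent match's --set and --update steps are conventionally split into separate rules sharing a --name. A possible shape for the "ideal solution", untested, and assuming established SSH sessions are already accepted earlier by INSPECT_STATE:

        #Police only NEW connections; ESTABLISHED ones were accepted in INSPECT_STATE
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --set --name SSH --rsource
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --update --seconds 600 --hitcount 11 --name SSH --rsource -j DROP
        -A INPUT -p tcp --dport 8022 -m state --state NEW -j ACCEPT

    The --update rule refreshes the source's timestamp on every new attempt, so an eleventh new connection inside any rolling 600-second window gets dropped, while the first ten fall through to the final ACCEPT.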


  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required. For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and Cross Platform Command line tools. The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images.
    - MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads. Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
    The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio. For example, I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I was able to debug the app remotely using Visual Studio. Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In that debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

    New support for using tag expressions to enable advanced customer segmentation

    With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    - Push content to multiple segments in a single message—e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan
    - Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: To “all my group except me” - group:id && !user:id
    - Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    - Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    - Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The steps below demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
    I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. In the dialog this brings up, I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”. Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”). When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever. Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed-up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
    The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region)
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    - Error information including call stack details for exceptions that have occurred at runtime
    - SQL Server profiling information – including which queries executed against your database and what their performance was
    - And a whole bunch more…

    The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert first sign-up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
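
    Circling back to the partitioned queues for a moment: besides the portal checkbox described above, partitioning can also be switched on when the queue is created from code. A minimal sketch using the Windows Azure Service Bus SDK (the connection string and queue name below are placeholders, not from the post):

        using Microsoft.ServiceBus;
        using Microsoft.ServiceBus.Messaging;

        class CreatePartitionedQueue
        {
            static void Main()
            {
                // Placeholder connection string - take the real one from the portal.
                string connStr = "Endpoint=sb://contoso.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=key";
                NamespaceManager ns = NamespaceManager.CreateFromConnectionString(connStr);

                // EnablePartitioning spreads the queue across multiple brokers/stores.
                QueueDescription queue = new QueueDescription("ordersqueue") { EnablePartitioning = true };
                if (!ns.QueueExists(queue.Path))
                {
                    ns.CreateQueue(queue);
                }
            }
        }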
    Summary

    Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Axis Fault - axis (401)Unauthorized

    - by jani
    Hi all, I am trying to create a simple Axis web service. I am using Axis 1.2.1, JDK 6, and WebLogic. Everything seems to be fine except invoking the web service: when I try to invoke the service it gives me a '(401)Unauthorized' error. Any ideas of what I am doing wrong? Thanks in advance.

        AxisFault
          faultCode: {http://xml.apache.org/axis/}HTTP
          faultSubcode:
          faultString: (401)Unauthorized
          faultActor:
          faultNode:
          faultDetail:
            {}:return code: 401
            {http://xml.apache.org/axis/}HttpErrorCode:401
        (401)Unauthorized
          at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:744)
          at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:144)
          at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
          at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
          at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
          at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
          at org.apache.axis.client.Call.invokeEngine(Call.java:2765)
          at org.apache.axis.client.Call.invoke(Call.java:2748)
          at org.apache.axis.client.Call.invoke(Call.java:2424)
          at org.apache.axis.client.Call.invoke(Call.java:2347)
          at org.apache.axis.client.Call.invoke(Call.java:1804)
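
    Since the fault comes straight from the HTTP transport, the endpoint (presumably protected by WebLogic security) is rejecting a request that carries no credentials. A sketch of supplying HTTP Basic credentials on an Axis 1.x call - the URL, operation name, and credentials here are placeholders:

        import java.net.URL;
        import org.apache.axis.client.Call;
        import org.apache.axis.client.Service;

        public class AuthenticatedClient {
            public static void main(String[] args) throws Exception {
                Service service = new Service();
                Call call = (Call) service.createCall();
                call.setTargetEndpointAddress(new URL("http://host:7001/myapp/services/MyService"));
                call.setOperationName("echo");
                // HTTP Basic credentials sent with the request.
                call.setUsername("weblogicUser");
                call.setPassword("weblogicPassword");
                Object result = call.invoke(new Object[] { "hello" });
                System.out.println(result);
            }
        }

    If the service should not require authentication at all, the other place to look is the web.xml/weblogic.xml security constraints on the deployed web app, which can impose a login requirement on the services path.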


  • wcf - maximum array length quota

    - by dav.evans
    I'm writing a small WCF/WPF app to resize images, but WCF is giving me grief when I try to send an image of size 28K to my service from the client. The service works fine when I send it smaller images. I immediately assumed that this was a configuration issue, and I've trawled the web looking at posts regarding the MaxArrayLength property in my binding configuration. I've upped the limits on these settings on both the client and server to the maximum 2147483647, but still I get the following error:

        {"The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://mywebsite.com/services/servicecontracts/2009/01:OriginalImage. The InnerException message was 'There was an error deserializing the object of type System.Drawing.Image. The maximum array length quota (16384) has been exceeded while reading XML data. This quota may be increased by changing the MaxArrayLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.'. Please see InnerException for more details."}

    I've made my client and server configs the same and they look like the following. Server:

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="NetTcpBinding_ImageResizerServiceContract" closeTimeout="00:01:00"
                       openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                       transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions"
                       hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                       maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxConnections="10"
                       maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647"
                              maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Transport">
                  <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                  <message clientCredentialType="Windows" />
                </security>
              </binding>
            </netTcpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="ServiceBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="LogoResizer.WCF.ServiceTypes.ImageResizerService" behaviorConfiguration="ServiceBehavior">
              <host>
                <baseAddresses>
                  <add baseAddress="http://localhost:900/mex/" />
                  <add baseAddress="net.tcp://localhost:9000/" />
                </baseAddresses>
              </host>
              <endpoint binding="netTcpBinding" contract="LogoResizer.WCF.ServiceContracts.IImageResizerService" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>

    And my client config looks like:

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="NetTcpBinding_ImageResizerServiceContract" closeTimeout="00:01:00"
                       openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                       transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions"
                       hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                       maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxConnections="10"
                       maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647"
                              maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Transport">
                  <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                  <message clientCredentialType="Windows" />
                </security>
              </binding>
            </netTcpBinding>
          </bindings>
          <client>
            <endpoint address="net.tcp://localhost:9000/" binding="netTcpBinding"
                      bindingConfiguration="NetTcpBinding_ImageResizerServiceContract"
                      contract="ImageResizerService.ImageResizerServiceContract"
                      name="NetTcpBinding_ImageResizerServiceContract">
              <identity>
                <userPrincipalName value="[email protected]" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    It seems no matter what I set these values to, I still get an error saying WCF cannot serialize my file because it's greater than 16384. Any ideas? Edit: the email address in the userPrincipalName tag has been altered for my privacy.
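
    One thing that stands out in the server config: the net.tcp endpoint never references the named binding configuration, so WCF serves it with a default netTcpBinding - whose default maxArrayLength is exactly the 16384 in the error. If that is the cause, pointing the endpoint at the configuration should fix it; a sketch using the names from the config above:

        <endpoint binding="netTcpBinding"
                  bindingConfiguration="NetTcpBinding_ImageResizerServiceContract"
                  contract="LogoResizer.WCF.ServiceContracts.IImageResizerService" />

    The client endpoint already carries bindingConfiguration, which would explain why raising the quotas only ever seemed to help on one side.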


  • How can I use WCF with only basichttpbinding, SSL and Basic Authentication in IIS?

    - by Tim
    Hello, Is it possible to set up a WCF service with SSL and Basic Authentication in IIS using only the BasicHttpBinding binding? (I can’t use the wsHttpBinding binding.) The site is hosted on IIS 7, with the following authentication set up:

    - Anonymous access: off
    - Basic authentication: on
    - Integrated Windows authentication: off

    Service config:

        <services>
          <service name="NameSpace.SomeService">
            <host>
              <baseAddresses>
                <add baseAddress="https://hostname/SomeService/" />
              </baseAddresses>
            </host>
            <!-- Service Endpoints -->
            <endpoint address="" binding="basicHttpBinding"
                      bindingNamespace="http://hostname/SomeMethodName/1"
                      contract="NameSpace.ISomeInterfaceService" name="Default" />
            <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior>
              <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
              <serviceMetadata httpsGetEnabled="true" />
              <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
              <serviceDebug includeExceptionDetailInFaults="false" />
              <exceptionShielding />
            </behavior>
          </serviceBehaviors>
        </behaviors>

    I tried 2 types of bindings, with two different errors.

    1 - IIS error: 'Could not find a base address that matches scheme http for the endpoint with binding BasicHttpBinding. Registered base address schemes are [https].'

        <bindings>
          <basicHttpBinding>
            <binding>
              <security mode="TransportCredentialOnly">
                <transport clientCredentialType="Basic" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>

    2 - IIS error: 'Security settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service.'

        <bindings>
          <basicHttpBinding>
            <binding>
              <security mode="Transport">
                <transport clientCredentialType="Basic" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>

    Does somebody know how to configure this correctly? (If possible?)
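
    For the second error, one commonly suggested reading is that it is the metadata plumbing, not the service endpoint, that demands anonymous access: with Anonymous disabled in IIS, the HTTPS metadata GET and the mex endpoint can no longer be served. A sketch of the Transport/Basic variant with metadata disabled, assuming the WSDL can be distributed some other way:

        <bindings>
          <basicHttpBinding>
            <binding>
              <security mode="Transport">
                <transport clientCredentialType="Basic" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>
        <behaviors>
          <serviceBehaviors>
            <behavior>
              <!-- No anonymous metadata GET; also remove the mex endpoint from <services> -->
              <serviceMetadata httpsGetEnabled="false" />
              <serviceDebug includeExceptionDetailInFaults="false" />
            </behavior>
          </serviceBehaviors>
        </behaviors>

    The first error, by contrast, is expected: TransportCredentialOnly is an http-scheme mode, and the only registered base address is https.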


  • Debugging Messaging Exception

    - by rizza
    We have a batch program that uses JavaMail 1.2 to send emails. In our development environment we haven't encountered the above-mentioned exception, but in the client's environment they have experienced it many times, with the following error trace:

        javax.mail.MessagingException: 550 Requested action not taken: NUL characters are not allowed.
          at com.sun.mail.smtp.SMTPTransport.issueCommand (SMTPTransport.java:879)
          at com.sun.mail.smtp.SMTPTransport.finishData (SMTPTransport.java:820)
          at com.sun.mail.smtp.SMTPTransport.sendMessage (SMTPTransport.java:322)
          ...

    I'm not sure if this is connected to my problem: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4697158. But trying JavaMail 1.4.2, I see that the content transfer encoding of the email is still 7bit, so I'm not sure if using JavaMail 1.4.2 could solve the problem. Please take note that I can only do testing in our development environment, which hasn't been able to replicate this. With the above exception, how would I know if this is from the sender or the receiver side? What debugging steps could you suggest? EDIT: Here is a DEBUG of the actual sending (masked some information):

        DEBUG: not loading system providers in <java.home>/lib
        DEBUG: not loading optional custom providers file: /META-INF/javamail.providers
        DEBUG: successfully loaded default providers
        DEBUG: Tables of loaded providers
        DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]}
        DEBUG: Providers Listed By Protocol: {imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]}
        DEBUG: not loading optional address map file: /META-INF/javamail.address.map
        DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]
        DEBUG SMTP: useEhlo true, useAuth false
        DEBUG: SMTPTransport trying to connect to host "nnn.nnn.n.nnn", port nn
        DEBUG SMTP RCVD: 220 xxxx.xxxxxxxxxxx.xxx SMTP; Mon, 23 Mar 2009 15:18:57 +0800
        DEBUG: SMTPTransport connected to host "nnn.nnn.n.nnn", port: nn
        DEBUG SMTP SENT: EHLO xxxxxxxxx
        DEBUG SMTP RCVD: 250 xxxx.xxxxxxxxxxx.xxx Hello
        DEBUG SMTP: use8bit false
        DEBUG SMTP SENT: MAIL FROM:<[email protected]>
        DEBUG SMTP RCVD: 250 <[email protected]>... Sender ok
        DEBUG SMTP SENT: RCPT TO:<[email protected]>
        DEBUG SMTP RCVD: 250 <[email protected]>... Recipient ok
        Verified Addresses
          [email protected]
        DEBUG SMTP SENT: DATA
        DEBUG SMTP RCVD: 354 Enter mail, end with "." on a line by itself
        DEBUG SMTP SENT: .
        DEBUG SMTP RCVD: 550 Requested action not taken: NUL characters are not allowed.
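
    The 550 in that trace is the receiving server rejecting the DATA because the message contains NUL (0x00) bytes, so the message content on the sender side is the first thing to check. A defensive sketch - the helper below is hypothetical, not part of the batch program, and sticks to pre-1.4 String APIs since the client runs JavaMail 1.2 on an unknown JDK:

        import java.util.Properties;
        import javax.mail.Session;

        public final class MailDebugHelper {

            // Strip the NUL (0x00) characters that trigger the server's 550 response.
            public static String stripNuls(String body) {
                if (body == null) return null;
                StringBuffer sb = new StringBuffer(body.length());
                for (int i = 0; i < body.length(); i++) {
                    char c = body.charAt(i);
                    if (c != '\u0000') sb.append(c);
                }
                return sb.toString();
            }

            // Session with protocol tracing on, to capture the SMTP dialogue in production logs.
            public static Session debugSession(String smtpHost) {
                Properties props = new Properties();
                props.put("mail.smtp.host", smtpHost);
                props.put("mail.debug", "true");
                return Session.getInstance(props, null);
            }
        }

    Running the body through stripNuls (or just logging whenever it actually removes something) would also answer the sender-vs-receiver question: if NULs are present before sending, the data source feeding the batch program is the culprit, not the mail server.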


  • Authentication settings in IIS Manager versus web.config versus system.serviceModel

    - by Joe
    I'm new to ASP.NET :) I have a WCF web service, and I want to use Basic authentication. I am getting lost in the authentication options:

    - In IIS 6 Manager, I can go into the properties of the web site and set authentication options.
    - In the web site's web.config file, under system.web, there is an <authentication mode="Windows"/> tag.
    - In the web site's web.config file, under system.serviceModel, I can configure:

        <wsHttpBinding>
          <binding name="MyBinding">
            <security mode="Transport">
              <transport clientCredentialType="Basic" />
            </security>
          </binding>
        </wsHttpBinding>

    What is the difference between these three? How should each be configured? Some context: I have a simple web site project that contains a single .svc web service, and I want it to use Basic authentication over SSL. (Also, I want it to not use Windows accounts, but maybe that is another question.)


  • WCF using Spring.NET woes

    - by demius
    Hi everyone, I've torn out all but two hairs on my head trying to get my WCF services hosted in IIS 7.5. I'm using Spring.NET to create my service instances, but I'm having no luck getting it up and running. I encounter the following exception: 'Could not find a base address that matches scheme http for the endpoint with binding MetadataExchangeHttpBinding. Registered base address schemes are [].' My WCF configuration is as follows:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="secureBinding" allowCookies="false">
                <security mode="Transport">
                  <transport clientCredentialType="None">
                    <extendedProtectionPolicy policyEnforcement="Never" />
                  </transport>
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior>
                <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="TestService">
              <host>
                <baseAddresses>
                  <add baseAddress="https://ws.local.com/TestService.svc" />
                </baseAddresses>
              </host>
              <endpoint name="secureEndpoint" contract="Services.Interfaces.ITestService"
                        binding="wsHttpBinding" bindingConfiguration="secureBinding"
                        address="https://ws.local.com/TestService.svc" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    What am I missing here?
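
    Possibly the whole story here: the only base address uses the https scheme, but the mex endpoint is declared with mexHttpBinding (and httpGetEnabled="true" likewise asks for an http address), which matches the complaint about scheme http. A sketch of the https-only variant:

        <!-- metadata over https only -->
        <serviceMetadata httpsGetEnabled="true" />
        <!-- mex endpoint matching the https base address -->
        <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />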


  • How to get a PerSession context with WCF?

    - by christophe31
    Hi, I have a WCF service running with a custom binding; for now it uses httpTransport.

        <customBinding>
          <binding name="myHttpBindingConf">
            <context contextManagementEnabled="true" protectionLevel="None"
                     contextExchangeMechanism="ContextSoapHeader" />
            <textMessageEncoding />
            <httpTransport useDefaultWebProxy="false" />
          </binding>
        </customBinding>

    I've made a custom IExtension<OperationContext> to store my data in a specific context by following these instructions: http://hyperthink.net/blog/a-simple-ish-approach-to-custom-context-in-wcf/ I would like to use a ContextMode.PerSession context. Which transport should I choose to get session management? How do I set the new transport in place while keeping object discovery enabled? How do I force a PerSession context?
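
    One possible direction, not verified against this exact setup: plain httpTransport is sessionless, so for a PerSession context the binding needs a session-capable element layered above the transport; reliableSession is the usual candidate when staying on HTTP. A sketch:

        <customBinding>
          <binding name="myHttpBindingConf">
            <context contextManagementEnabled="true" protectionLevel="None"
                     contextExchangeMechanism="ContextSoapHeader" />
            <!-- supplies the session channel that raw httpTransport lacks -->
            <reliableSession />
            <textMessageEncoding />
            <httpTransport useDefaultWebProxy="false" />
          </binding>
        </customBinding>

    The alternative would be switching to an inherently sessionful transport such as tcpTransport, at the cost of losing plain-HTTP reachability.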


  • WCF digest Authentication

    - by dudia
    What should be specified on the client side? Is this enough?

        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Digest;
        ...
        cf.Credentials.HttpDigest.ClientCredential = new NetworkCredential("myuser", "mypass", "mydomain");
        cf.Credentials.HttpDigest.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

    What should be specified on the server side? Obviously one needs:

        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Digest;

    But where does one specify on the server the digest username/password to validate the client against? In addition, when Microsoft says that Digest Authentication uses the Domain Controller, what does that mean? Does it validate the username/password against it?


  • WCF. Https on basicHttpBinding

    - by Andrew Kalashnikov
    Hello, colleagues. I've written a WCF service. Unfortunately I have to use basicHttpBinding for PHP callers, but I need security, so I've decided to use transport security with HTTPS. I host the service on IIS 6.0, where I have enabled SSL and assigned a certificate. But when I try to open it in a browser I get the standard error page. What's wrong? Please help; I haven't been able to solve this for several hours.

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BindingConfiguration1" maxBufferPoolSize="2147483647"
                       maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647"/>
                <security mode="Transport">
                  <transport clientCredentialType="None" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <services>
            <service name="RegistratorService.Registrator"
                     behaviorConfiguration="RegistratorService.Service1Behavior">
              <endpoint address="https://192.168.0.8/MyService.svc"
                        binding="basicHttpBinding"
                        contract="RegistratorService.IRegistrator"
                        bindingConfiguration="BindingConfiguration1">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="RegistratorService.Service1Behavior">
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    Read the article

  • Dealing with ISO-encoding in AJAX requests (prototype)

    - by acme
    I have an HTML page encoded in ISO-8859-1 and a Prototype AJAX call built like this:

        new Ajax.Request('api.jsp', {
          method: 'get',
          parameters: {...},
          onSuccess: function(transport) {
            var ajaxResponse = transport.responseJSON;
            alert(ajaxResponse.msg);
          }
        });

    The api.jsp returns its data in ISO-8859-1. The response contains special characters (German umlauts) that are not displayed correctly, even if I add "encoding: 'ISO-8859-1'" to the AJAX request. Does anyone know how to fix this? If I call api.jsp in a new browser window by itself, the special characters are also corrupt, and I can't get any information about the encoding used from the response header, which looks like this:

        Server          Apache-Coyote/1.1
        Content-Type    application/json
        Content-Length  208
        Date            Thu, 29 Apr 2010 14:40:24 GMT

    Note: please don't advise switching to UTF-8. I have to deal with ISO-8859-1.

    Read the article

  • parse search string

    - by Benjamin Ortuzar
    I have search strings similar to the one below:

        energy food "olympics 2010" Terrorism OR "government" OR cups NOT transport

    and I need to parse them with PHP5 to sort each term into one of the following clusters:

        AllWords array
        AnyWords array
        NotWords array

    These are the rules I have set:

    1. If a word or quoted phrase has OR before or after it, it belongs to AnyWords.
    2. If a word or quoted phrase has NOT before it, it belongs to NotWords.
    3. Otherwise (only whitespace before it), it belongs to AllWords.

    So the end result should be something similar to:

        AllWords: (energy, food, "olympics 2010")
        AnyWords: (terrorism, "government", cups)
        NotWords: (transport)

    What would be a good way to do this?
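
    A minimal sketch of those three rules, written here in C# (the same token-and-neighbour scan ports directly to PHP5 with preg_match_all over the identical pattern); all names are illustrative:

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        // Tokenise into quoted phrases or bare words, then classify each token
        // by its OR/NOT neighbours, per the three rules above.
        static void Classify(string query,
                             List<string> allWords, List<string> anyWords, List<string> notWords)
        {
            var tokens = new List<string>();
            foreach (Match m in Regex.Matches(query, "\"[^\"]*\"|\\S+"))
                tokens.Add(m.Value);

            for (int i = 0; i < tokens.Count; i++)
            {
                string t = tokens[i];
                if (t == "OR") continue;                      // consumed by its neighbours
                if (t == "NOT")
                {
                    if (i + 1 < tokens.Count) notWords.Add(tokens[++i]);
                    continue;
                }
                bool orBefore = i > 0 && tokens[i - 1] == "OR";
                bool orAfter = i + 1 < tokens.Count && tokens[i + 1] == "OR";
                if (orBefore || orAfter) anyWords.Add(t);
                else allWords.Add(t);
            }
        }

    Run against the sample string, this yields AllWords = (energy, food, "olympics 2010"), AnyWords = (Terrorism, "government", cups), NotWords = (transport), matching the expected clusters (modulo case).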

    Read the article

  • Spring/RMI server error

    - by 4herpsand7derpsago
    We have a Spring MVC web app (WAR) deployed to Tomcat (6.0.35) that launches a thread inside a separate JVM at deploy time (don't ask why; not my design) and then communicates with that thread via RMI over port 8888. Despite being totally convoluted, this was working perfectly fine up until yesterday. Now the thread is failing at startup, and despite our best efforts to add logging into the mix, we are hitting a wall. This is the only exception we are able to find in the logs:

        Jun 12, 2012 3:11:36 AM com.ourapp.ImageController destroy
        SEVERE: Shutdown Error: Lookup of RMI stub failed; nested exception is
            java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
            java.net.ConnectException: Connection refused
        Jun 12, 2012 3:11:37 AM org.apache.catalina.core.StandardContext listenerStop
        SEVERE: Exception sending context destroyed event to listener instance of class
            org.springframework.web.context.ContextLoaderListener
        java.lang.NoClassDefFoundError: org/springframework/web/context/ContextCleanupListener
            at org.springframework.web.context.ContextLoaderListener.contextDestroyed(ContextLoaderListener.java:80)
            at org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:3973)
            at org.apache.catalina.core.StandardContext.stop(StandardContext.java:4577)
            at org.apache.catalina.startup.HostConfig.checkResources(HostConfig.java:1165)
            at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1271)
            at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:296)
            at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
            at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1337)
            at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1601)
            at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610)
            at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1590)
            at java.lang.Thread.run(Thread.java:662)
        Caused by: java.lang.ClassNotFoundException: org.springframework.web.context.ContextCleanupListener
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1387)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1233)
            ... 12 more

    The ImageController is the Spring MVC controller responsible for kicking off this daemon/spawned RMI thread. Based on the wording of this error, does anybody have any idea what might be causing the "connection refused"? Running netstat -an | grep 8888 (this is a Linux machine) produces no output, which means nothing is listening on that port. Thanks in advance for any ideas/suggestions that lead to a fix.

    Edit: Here's another ConnectException we're seeing:

        Caused by: java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:529)
            at java.net.Socket.connect(Socket.java:478)
            at java.net.Socket.<init>(Socket.java:375)
            at java.net.Socket.<init>(Socket.java:189)
            at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
            at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
            at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
            ... 74 more

    Read the article

  • Unable to make 2 parallel TCP requests to the same TCP Client

    - by soldieraman
    Error: Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall.

    Situation: there is a TCP server, and my web application connects to it using the code below:

        TcpClientInfo = new TcpClient();
        _result = TcpClientInfo.BeginConnect(<serverAddress>, <portNumber>, null, null);
        bool success = _result.AsyncWaitHandle.WaitOne(20000, true);
        if (!success)
        {
            TcpClientInfo.Close();
            throw new Exception("Connection Timeout: Failed to establish connection.");
        }
        NetworkStreamInfo = TcpClientInfo.GetStream();
        NetworkStreamInfo.ReadTimeout = 20000;

    Two users use the same application from two different locations to access information from this server at the SAME TIME, and the server takes around 2 seconds to reply. Both connect, but one of the users gets the error above when trying to read data from the stream. How can I resolve this issue? Should I use a better way of connecting to the server? Or, if it's a server issue, how should the server handle requests to avoid this problem?
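
    The shared TcpClient and NetworkStream are the usual suspect here: two requests reading one socket will interrupt each other. A minimal C# sketch giving each request its own connection; the single-read reply handling and UTF-8 decoding are assumptions about the protocol:

        using System;
        using System.Net.Sockets;
        using System.Text;

        static string QueryServer(string host, int port, byte[] request)
        {
            // One TcpClient per request, so parallel users never share a stream.
            using (var client = new TcpClient())
            {
                IAsyncResult ar = client.BeginConnect(host, port, null, null);
                if (!ar.AsyncWaitHandle.WaitOne(20000, true))
                    throw new TimeoutException("Failed to establish connection.");
                client.EndConnect(ar);                 // surfaces any connect error

                NetworkStream stream = client.GetStream();
                stream.ReadTimeout = 20000;
                stream.Write(request, 0, request.Length);

                var buffer = new byte[8192];
                int read = stream.Read(buffer, 0, buffer.Length);
                return Encoding.UTF8.GetString(buffer, 0, read);
            }
        }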

    Read the article

  • WCF. BasicHttpBinding Certificates.

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a problem. I've created a WCF service with basicHttpBinding, hosted on IIS 6.0:

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BindingConfiguration1" maxBufferPoolSize="2147483647"
                       maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647"/>
                <security mode="Transport">
                  <transport clientCredentialType="None" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <services>
            <service name="RegistratorService.Registrator"
                     behaviorConfiguration="RegistratorService.Service1Behavior">
              <endpoint address="" binding="basicHttpBinding"
                        contract="RegistratorService.IRegistrator"
                        bindingConfiguration="BindingConfiguration1">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="RegistratorService.Service1Behavior">
                <serviceCredentials>
                  <clientCertificate>
                    <authentication certificateValidationMode="PeerOrChainTrust" revocationMode="NoCheck"/>
                  </clientCertificate>
                  <serviceCertificate storeLocation="LocalMachine" storeName="My" findValue="CN=Server" />
                </serviceCredentials>
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    I also run a certificate authority on this server and have issued certificates for the server and the client: the server certificate is installed on the server, the client certificate on the client. When I try to consume the service from the client I get the famous:

        Could not establish trust relationship for the SSL/TLS secure channel with authority

    All sites recommend overriding ServicePointManager.ServerCertificateValidationCallback so that it always returns true, but I want to resolve this issue the right way. My client config:

        <system.serviceModel>
          <behaviors>
            <endpointBehaviors>
              <behavior name="ClientBehavior">
                <clientCredentials>
                  <serviceCertificate>
                    <authentication certificateValidationMode="ChainTrust" revocationMode="NoCheck"/>
                  </serviceCertificate>
                  <clientCertificate findValue="CN=PharmPortal" storeLocation="LocalMachine" storeName="My"/>
                </clientCredentials>
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpBinding_IRegistrator" closeTimeout="00:01:00"
                       openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                       allowCookies="false" bypassProxyOnLocal="false"
                       hostNameComparisonMode="StrongWildcard" maxBufferSize="65536"
                       maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                       messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                       useDefaultWebProxy="true">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <security mode="Transport">
                  <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://aurit-server2/Registrator.svc"
                      binding="basicHttpBinding" behaviorConfiguration="ClientBehavior"
                      bindingConfiguration="BasicHttpBinding_IRegistrator"
                      contract="ServiceReference1.IRegistrator"
                      name="BasicHttpBinding_IRegistrator">
              <identity>
                <dns value="Server" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    I have set up the client certificate. Why do I get the error?
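
    One detail worth checking in the configs above: the client calls https://aurit-server2/... while the service certificate is issued to CN=Server, and HTTPS trust fails whenever the URL's host name doesn't match the certificate's subject. Reissuing the certificate for aurit-server2 (or addressing the service by the name on the certificate) is the clean fix. If a custom callback is truly unavoidable, a narrower hedged alternative to blindly returning true is to accept exactly one known certificate; the thumbprint below is a placeholder:

        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        static void TrustKnownServerCertificate()
        {
            // Accept the one expected server certificate instead of disabling
            // validation wholesale.
            ServicePointManager.ServerCertificateValidationCallback =
                (sender, certificate, chain, sslPolicyErrors) =>
                {
                    if (sslPolicyErrors == SslPolicyErrors.None)
                        return true;               // normal chain/name validation passed
                    var cert = new X509Certificate2(certificate);
                    return cert.Thumbprint == "PUT-SERVER-CERT-THUMBPRINT-HERE";
                };
        }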

    Read the article

  • How do I create a run-time element? Where am I going wrong?

    - by vicky
    new Ajax.Request('Handler.ashx', {
        method: 'get',
        onSuccess: function(transport) {
            var response = transport.responseText || "no response text";
            //alert("Success! \n\n" + response);
            var obj = response.evalJSON(true);
            alert(obj[0].Nam);
            alert(obj[0].IM);
            for (var i = 0; i < 4; i++) {
                $('MyDiv').insert(
                    new Element('checkbox', {
                        'id': "Img" + obj[i].Nam,
                        'value': obj[i].IM
                    })
                );
                return ($('MyDiv').innerHTML);  // note: this returns on the first pass through the loop
            }
        },
        onFailure: function() {
            alert('Something went wrong...');
        }
    });

    Read the article

  • Not getting output from a paramiko/ssh command

    - by Matt
    I am using paramiko/ssh/python to attempt to run a command on a remote server. When I ssh in manually and run the command in question, I get the results I want. But if I use the Python below (co-opted from another thread on this site), there is no returned data. If I modify the command to be something more basic like 'pwd' or 'ls', I do get the output. Any help is appreciated. Thanks, Matt

        import select
        import time

        import paramiko

        username = 'medelman'
        password = 'Ru5h21iz'
        hostname = '10.15.27.166'
        hostport = 22

        cmd = 'tail -f /x/web/mlog.txt'  # works
        cmd = ''                         # doesn't work

        client = paramiko.SSHClient()
        client.load_system_host_keys()
        client.connect(hostname=hostname, username=username, password=password)
        transport = client.get_transport()
        channel = transport.open_session()
        channel.exec_command(cmd)
        while True:
            rl, wl, xl = select.select([channel], [], [], 0.0)
            if len(rl) > 0:
                # Must be stdout
                print channel.recv(1024)
            time.sleep(1)

    Read the article

  • Huge amount of time sending data with suds and proxy

    - by Roman
    Hi everyone, I have the following code to send data through a proxy using suds:

        import urllib2

        import suds

        t = suds.transport.http.HttpTransport()
        proxy = urllib2.ProxyHandler({'http': 'http://192.168.3.217:3128'})
        opener = urllib2.build_opener(proxy)
        t.urlopener = opener
        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL', transport=t)
        req = ws.factory.create('ActionRequest.request')
        req.SerialNumber = 'asdf'
        req.HostName = 'hola'
        res = ws.service.ActionRequest(req)

    I don't know why, but sending the data can take two or three minutes, or even more, and it sometimes raises a "Gateway timeout" exception. If I don't use the proxy, it takes about two seconds or less. Here is the SOAP reply:

        (ActionResponse){
            Id = None
            Action = "Action.None"
            Objects = ""
        }

    The proxy works fine with other requests through urllib2, and with normal web browsers like Firefox. Does anyone have any idea what's happening here with suds? Thanks a lot in advance!

    Read the article

  • Sending mail through a JSP page

    - by sourabhtaletiya
    Hi friends, I have tried a lot to send mail via a JSP page but have not succeeded. I get this error: javax.servlet.ServletException: 530 5.7.0 Must issue a STARTTLS command first. x1sm5029316wbx.19

        <html>
        <head>
        <title>JSP JavaMail Example</title>
        </head>
        <body>
        <%@ page import="java.util.*" %>
        <%@ page import="javax.mail.*" %>
        <%@ page import="javax.mail.internet.*" %>
        <%@ page import="javax.activation.*" %>
        <%
            java.security.Security.addProvider(new com.sun.net.ssl.internal.ssl.Provider());
            Properties props = System.getProperties();

            String host = "smtp.gmail.com";
            String to = request.getParameter("to");
            String from = request.getParameter("from");
            String subject = request.getParameter("subject");
            String messageText = request.getParameter("body");
            boolean sessionDebug = false;

            props.put("mail.smtp.host", "smtp.gmail.com");
            props.put("mail.transport.protocol", "smtp");
            // Note: Gmail expects STARTTLS on submission port 587; port 25 here is
            // the likely trigger of the 530 error.
            props.put("mail.smtp.port", "25");
            props.put("mail.smtp.socketFactory.port", "25");
            props.put("mail.smtp.auth", "true");
            props.put("mail.debug", "true");
            props.put("mail.smtp.starttls.enable", "true");
            props.put("mail.smtp.starttls.required", "true");

            Session mailSession = Session.getDefaultInstance(props, null);
            mailSession.setDebug(sessionDebug);

            Message msg = new MimeMessage(mailSession);
            msg.setFrom(new InternetAddress(from));
            InternetAddress[] address = {new InternetAddress(to)};
            msg.setRecipients(Message.RecipientType.TO, address);
            msg.setSubject(subject);
            msg.setSentDate(new Date());
            msg.setText(messageText);

            Transport tr = mailSession.getTransport("smtp");
            tr.connect(host, "sourabh.web7", "june251989");
            msg.saveChanges(); // don't forget this
            tr.sendMessage(msg, msg.getAllRecipients());
            tr.close();
            // Transport.send(msg);
            /* out.println("Mail was sent to " + to);
               out.println(" from " + from);
               out.println(" using host " + host + "."); */
        %>
        </body>
        </html>

    Read the article

  • how to view internal jaxws logs in tomcat

    - by prmatta
    I have a web service deployed in Tomcat, and it is rejecting a SOAP request over HTTPS. However, I can't see any logs as to why it is doing so. I have the following set in my service endpoint implementation file:

        System.setProperty("javax.net.debug", "all");
        System.setProperty("java.security.debug", "all");

    And I pass the following parameters to Tomcat:

        -Dcom.sun.xml.ws.transport.http.HttpAdapter.dump=true
        -Dcom.sun.xml.ws.transport.http.client.HttpTransportPipe.dump=true

    Is there anything else I need to do to see the internal JAX-WS logs? Are there some other loggers I need to enable?

    Read the article

  • Make element visible on ajax in JSF2

    - by amorfis
    I have a dataTable in my page. Initially I want it to be hidden, and then shown after fetching data via an AJAX request. I know how to fetch the data and put it into the table, but I don't know how to show the table if it is hidden. Here is the code:

        <h:commandButton value="aa">
          <f:ajax execute="from to validTo" render="transportOffers"/>
        </h:commandButton>

        <p:dataTable id="transportOffers" value="${cargoOffer.transportsForCargo}" var="transport">
          <p:column>
            <h:outputText value="${transport.company}"/>
          </p:column>
        </p:dataTable>

    The table is initially visible, even if it is empty. If I set rendered="false" it is invisible, and it remains invisible after the AJAX request. How can I make it hidden initially, and have it show up after being populated with data?

    Read the article

  • Please help me find where I am making a mistake in this code; I want to retrieve the selected value

    - by user309381
    function reload(form) {
        var val = $('seltab').getValue();
        new Ajax.Request('Website.php?cat=' + escape(val), {
            method: 'get',
            onSuccess: function(transport) {
                var response = transport.responseText;
                $("MyDivDB").innerHTML = transport.responseText;
                alert("Success! \n\n" + response);
            },
            onFailure: function() {
                alert('Something went wrong...');
            }
        });
    }
    </script>
    </head>

    <table>
      <tr><th>title</th><th>author</th><th>pages</th></tr>
    <?php
    $con = mysql_connect($dbhostname, $dbuserid, $dbpassword);
    if (!$con) {
        die("connection failed" . mysql_error());
    }
    $db = mysql_select_db($dbname, $con);
    if (!$db) {
        die("Database is not selected" . mysql_error());
    }
    $query = "SELECT * FROM books NATURAL JOIN authors";
    $result = mysql_query($query);
    if (!$result) {
        die("Database is not query" . mysql_error());
    }
    while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
        $title = $row["title"];
        $author = $row["author"];
        $page = $row["pages"];
        echo "<tr>";
        echo "<td>$title</td>";
        echo "<td>$author</td>";
        echo "<td>$page</td>";
        echo "</tr>";
    }
    print "</table>";

    echo "<select id = seltab onchange = 'reload(this.form)'>";
    $querysel = "SELECT title_id,author FROM authors NATURAL JOIN books";
    $result1 = mysql_query($querysel);
    while ($rowID = mysql_fetch_assoc($result1)) {
        $TitleID = $rowID['title_id'];
        $author = $rowID['author'];
        print "<option value = $author>$author\n";
        print "</option>";
    }
    print "</select>";
    ?>

    Website.php

    Read the article
