Search Results

Search found 17054 results on 683 pages for 'jms request reply'.


  • Can't connect to VPN in Windows XP mode

    - by darkstar13
    I have Windows 7 x32 installed on my laptop, with Windows XP Mode installed as well. My remote-work programs run in Windows XP Mode because my VPN installer only runs on Windows XP. Lately I have been having trouble getting on / logging in to the VPN. I can access the internet in WinXP Mode, but when I ping the target IP address of my VPN network (or even just Google.com), I always get a 'Request timed out'. However, when I ping the same IP address from a command prompt in Windows 7, 100% of the packets are sent and received. Is there anything I need to adjust? I used to be able to connect instantly; now it is trial and error, or I have to wait for hours just to be able to enter logon credentials in the Cisco VPN dialer. The network adapter in XP Mode is set to NAT.

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run a DBCC CHECKDB on the database to check everything except those couple of tables due to time constraints. I posted a request on the Connect site some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option

    The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't overly like this as a workaround, as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects, and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Hosting multiple sites on a single webapp in tomcat

    - by satish
    Scenario: I have a website - www.mydomain.com. Registered users will be given the choice of getting a permanent URL to their account on mydomain.com as a subdomain (username.mydomain.com), or they can opt to use their own domain like www.userdomain.com. So the user can access his/her account through the subdomain URL or through their own hostname, and the request should be forwarded to a specific URL on mydomain.com. For example: xyz.mydomain.com or www.xyz.com should serve the user's account from www.mydomain.com/webapp/account?id=xyz. The user should be completely unaware of where the content is coming from. Setup: My website is running as a webapp in Tomcat 5.5.28 with Apache as the web server. I am using a VPS, which means I have control over all the configuration files (Apache, Tomcat and DNS server). Can you tell me what configuration is needed to achieve the above scenario?
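    One way to approach the subdomain case (a rough sketch only; the port, domains and flags below are assumptions, not your actual setup) is to let Apache capture the account name from the Host header and proxy the request to the webapp URL with mod_rewrite and mod_proxy:

        # Hypothetical vhost: map <user>.mydomain.com to the account URL on the Tomcat webapp
        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias *.mydomain.com
            RewriteEngine On
            # %1 captures the subdomain, e.g. "xyz" from xyz.mydomain.com
            RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$ [NC]
            RewriteRule ^/?$ http://localhost:8080/webapp/account?id=%1 [P,L]
        </VirtualHost>

    Customer-owned domains (www.xyz.com) would need a hostname-to-account lookup, which mod_rewrite can do via a RewriteMap text file generated from your user database.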

    Read the article

  • How can I avoid repeating DocumentRoot in this Apache virtual host?

    - by David Faux
    I have an Apache virtual host configured for a website powered by WordPress.

        <VirtualHost *:80>
            ServerName 67.178.132.253
            DocumentRoot /home/david/wordpressWebsite
            # BEGIN WordPress
            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteRule ^index\.php$ - [L]
                RewriteCond /home/david/wordpressWebsite%{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule . /index.php [L]
            </IfModule>
            # END WordPress
        </VirtualHost>

    How can I avoid hard-coding /home/david/wordpressWebsite twice? I don't want to use REQUEST_URI since that involves an extra request.
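    One possible way to avoid the repetition (a sketch only, assuming Apache 2.4 or later, where the Define directive and ${VAR} substitution are available) is to declare the path once as a config-time variable:

        <VirtualHost *:80>
            Define docroot /home/david/wordpressWebsite
            ServerName 67.178.132.253
            DocumentRoot ${docroot}
            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteRule ^index\.php$ - [L]
                # ${docroot} is substituted when the config is parsed, so the path lives in one place
                RewriteCond ${docroot}%{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule . /index.php [L]
            </IfModule>
        </VirtualHost>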

    Read the article

  • Configure IIS 7 Reverse Proxy to connect to TeamCity Tomcat

    - by Cynicszm
    We have an IIS 7 web server configured and would like to create a reverse proxy to a TeamCity installation running on Tomcat on the same machine. The IIS site is https://somesite and I would like TeamCity to appear as https://somesite/teamcity, redirecting to http://localhost:portnumber. I have installed the IIS URL Rewrite extension from http://www.iis.net/download/URLRewrite and Application Request Routing from http://www.iis.net/download/ApplicationRequestRouting to try to set up a reverse proxy, but I can't get it working. The closest answer I found is an old Stack Overflow question, http://stackoverflow.com/questions/331755/how-do-i-setup-teamcity-for-public-access-over-https, which unfortunately doesn't have a working example. I've searched quite a bit but can't seem to find a relevant example. Any help appreciated (apologies for the bold, but the spam prevention won't let me post more than one hyperlink).
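    For reference, the usual shape of such a setup is: enable the proxy in Application Request Routing (server node, "Application Request Routing Cache", "Server Proxy Settings", tick "Enable proxy"), then add a rewrite rule on the site. A rough, untested sketch of such a rule in web.config, assuming TeamCity's Tomcat listens on port 8111:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="TeamCity reverse proxy" stopProcessing="true">
                <match url="^teamcity/(.*)" />
                <!-- forward /teamcity/* on the HTTPS site to the local Tomcat instance -->
                <action type="Rewrite" url="http://localhost:8111/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>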

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all of that information, and more, is available in TFS.

    Although all of this traceability is available within TFS, you do need to understand that it is not free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other...

    Requirements – The root of all knowledge

    Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
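    For illustration, a complete query of this form might look like the following (the field names are standard WIQL reference names; the date is made up):

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '2010-06-01'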
    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version in your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. A better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, associate the Changesets with the Build, and change the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its results, that was written to prove the existence of a Bug and then guard against its re-occurrence.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Work Item Keyboard Shortcuts, Resolving Mercurial Work Items, WikiPlex 2.0

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed the latest version of the CodePlex software yesterday.

    Keyboard Shortcuts

    With this release, we have added a set of keyboard shortcuts for common tasks in the Issue Tracker. This feature was a popular request in the CodePlex Issue Tracker. The CodePlex team visits the issue tracker frequently when researching and considering new features. If you haven't visited it recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex. To view the available shortcuts, type ? from any page within the issue tracker to see this help dialog. You can see what each shortcut invokes below. Please give us feedback on this feature and let us know what additional shortcuts would be useful.

    Resolve Work Items When Pushing Mercurial Changes

    Another feature we added is the ability to resolve work items when pushing changes to your Mercurial repository, which has been available to our TFS / SVN users for quite some time. The required format is identical to the SVN format listed here. When committing your changes locally, add "Work Items: Id, AnotherId" to your commit message. When you push, CodePlex will detect this comment, add the commit message to the work item, and resolve it.

    WikiPlex Goes 2.0!

    CodePlex continues to improve WikiPlex, our open source wiki engine. WikiPlex hit another major milestone today with the release of version 2.0! We have added several new features, including: interleaving ordered and unordered lists, specifying the height and width for images, a multi-line indentation macro, and a restructuring of some of the API. Visit Matt's announcement for more information on the release, or grab the binaries via NuGet or CodePlex.
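    For example, a Mercurial session using this convention might look like the following (the work item IDs are made up):

        hg commit -m "Fix date parsing in exporter. Work Items: 1234, 5678"
        hg push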

    Read the article

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who has done some work on intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification.
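    As a small illustration of one item above: the HttpOnly flag is set per cookie in the HTTP response header (the cookie name and value here are made up), which keeps the cookie out of reach of client-side script:

        Set-Cookie: SESSIONID=abc123; Path=/; Secure; HttpOnly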

    Read the article

  • Squid traffic tunneled through VPN

    - by NerdyNick
    So what I'm trying to do is have a Squid proxy run on one machine alongside a VPN connection. What I want is for all traffic passing through the Squid proxy to go out over the VPN, i.e. Desktop -> (Squid Proxy -> VPN). The goal is to allow my desktop selective tunneling through the VPN, so that instant messaging and the like, which do not need to run through the VPN, can use my normal connection. Typically I would go through an SSH proxy, but currently I am forced to use the VPN to gain entry into the office, and a Squid proxy seemed like the easiest fit for what I need. EDIT: I realize I forgot to actually state the problem I'm running into. I have Squid set up and have verified that it works, but once I connect to the VPN, all requests to Squid are accepted but Squid is unable to make the request over the VPN, so the client just sits there.
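    If the tunnel shows up as a normal interface with its own address, one Squid knob worth knowing about is tcp_outgoing_address, which pins the source address of Squid's outbound connections. A sketch only; the address below is made up and would be whatever the VPN assigns:

        # squid.conf: originate outbound requests from the VPN-assigned address
        tcp_outgoing_address 10.8.0.6

    Whether this helps will depend on the routes the Cisco client installs when it connects.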

    Read the article

  • Possible to use LVM partitions inside a vmbuilder created KVM virtual machine?

    - by Tauren
    I have an Ubuntu 9.10 host system with LVM partitions running KVM. I've been creating VMs using vmbuilder using LVM partitions for each VM instead of files for the VMs. When I configure a VM using vmbuilder --part, the partitions in the file I'm using are created as regular partitions (sda1, sda2, etc.). What I'd like to do is use LVM inside of the VM in case I need to resize the partitions at some point. But I don't see any options for doing that using the vmbuilder tool. It seems like this might be a common request to avoid using kpartx, etc. Is there something I'm missing, or is this just not possible with vmbuilder?

    Read the article

  • TFS2010 - Correctly setting up Host Header

    - by Keith Barrows
    We have a TFS 2010 install on a Win2008R2 server running IIS 7. I've created a host header for TFS and want to use that instead of machineName:8080. I am getting weird behavior from it now. Every other time I log in I get:

        TF31002: Unable to connect to this Team Foundation Server: http://web2/tfs.
        Team Foundation Server Url: http://web2/tfs.
        Possible reasons for failure include:
        - The name, port number, or protocol for the Team Foundation Server is incorrect.
        - The Team Foundation Server is offline.
        - The password has expired or is incorrect.
        Technical information (for administrator): The request failed with HTTP status 404: Not Found.

    I force a reconnect and voila - there it is. Also, connecting to the web site rarely works, but connecting via VS2010 works 50% of the time. What do I need to change to stabilize this?

    Read the article

  • Belkin router issue

    - by walr1
    Hi, my cousin and I bought a wireless Belkin router for testing purposes. Please keep in mind that for all of our tests there is no Ethernet cable plugged in, just the router's power cord. We have been trying to "flood" it with PING requests on its default address 192.168.2.1, but it isn't doing a thing; it isn't even logging the excess requests. I've disabled the firewall, disabled the PING request block, etc. Any idea why this thing isn't being affected? We sent 4 million packets and it hasn't done a thing. Quite odd! Thanks.

    Read the article

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where Squid sits in front of a Java server and acts as a reverse proxy. Recently I load tested the site, and if I fire 100 threads at it, each making a request using JMeter, I start getting errors in my load test tool like 'no route to host', even though the load test tool and the server are on the same machine. If I run the following command, where port 82 is the port my Squid server is running on:

        netstat -ann | grep 82 | wc -l

    I get 22000 or so, and most of them are in TIME_WAIT. I'm thinking that maybe the huge number of sockets in the TIME_WAIT state are starving the box of resources.
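    A quick way to confirm the state breakdown (rather than counting everything that merely contains "82") is to group sockets on port 82 by TCP state; the commented sysctls are possible Linux-side mitigations to experiment with, not a verified fix:

        # count sockets whose local or remote port is 82, grouped by state
        netstat -ant | awk '$4 ~ /:82$/ || $5 ~ /:82$/ {print $6}' | sort | uniq -c

        # possible mitigations to try (Linux):
        # sysctl -w net.ipv4.tcp_tw_reuse=1
        # sysctl -w net.ipv4.ip_local_port_range="15000 65000"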

    Read the article

  • Setting up routing for MS DirectAccess to a VMWare EsXi Host

    - by Paul D'Ambra
    I'm trying to set up DirectAccess on a virtual machine so I can demonstrate its value, and then, if need be, add a physical machine to host it. I'm hitting a problem because the DirectAccess machine (DA01) needs to have two public addresses actually configured on the external adapter, but there is a Zyxel ZyWALL USG300 between the VMware ESXi host and the outside world. I've summarised my setup in this diagram. If I ping 212.x.y.89 from the LAN I get a response, but if I ping from the VM I get 'destination host unreachable'. I used "route add 212.x.y.89 192.c.d.1" and then get 'request timed out'. At that point I see outbound traffic allowed on the Zyxel firewall but nothing coming back. I'm past my understanding of routing and VMware, so I am not sure how to pin down where my problem lies (or even if this setup is possible). Any help massively appreciated. Paul

    Read the article

  • Apache Reverse Proxy server and SSL NTLM SharePoint

    - by user50211
    Hi, I'm trying to set up Apache as a proxy server in front of an internal SharePoint server. I have previously configured Apache as a proxy server to expose internal web pages and web applications. However, the SharePoint server uses SSL and NTLM authentication, and this is new to me :( I have tried many options; the traffic seems to be forwarded, as I get the authentication popup window, but when I enter the user/pass I just get the same popup window again. Has anybody configured Apache to do this? Here is the relevant part of my httpd.conf:

        <VirtualHost *:443>
            ServerName repository.out.com
            SSLProxyEngine On
            RequestHeader set Front-End-Https "On"
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / https://sharepoint.in.com
            ProxyPassReverse / https://sharepoint.in.com
            CacheDisable *
            SetEnv force-proxy-request-1.0 1
            SetEnv proxy-nokeepalive 1
            ErrorLog logs/jlanza_log
            CustomLog logs/jlanza_log common
        </VirtualHost>

    Read the article

  • Apache: how to set custom 401 error page and save original behaviour

    - by petRUShka
    I have Kerberos-based authentication with Apache/2.2.3 (Linux/SUSE). When a user tries to open a URL, the browser asks them for a domain login and password, as in HTTP Basic Auth. If the user cancels that request three times, Apache returns the 401 Authorization Required error page. My current virtual host config is:

        <Directory /home/user/www/current/public/>
            Options -MultiViews +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
            AuthType Kerberos
            AuthName "Domain login"
            KrbAuthRealms DOMAIN.COM
            KrbMethodK5Passwd On
            Krb5KeyTab /etc/httpd/httpd.keytab
            require valid-user
        </Directory>

    I want to set up a nice custom 401 error page with some instructions for users, so I added this line to the virtual host config:

        ErrorDocument 401 /pages/401

    It works: when the user can't authorize, Apache redirects them to my nice page. But Apache no longer asks the user for a login and password as it did before. I want both this functionality and the nice error page! Is it possible to make it work properly?

    Read the article

  • Learning from jQuery - Solid fundament for experienced jQuery developers

    Frankly speaking, I had to sleep on it for a night before typing this review. And even now it is not an easy, straightforward task to write it. I'm not sure whether I'm the right kind of audience this title is actually addressed to. It clearly states that this book is for web developers who are very familiar with the jQuery library but would like to extend their knowledge to vanilla JavaScript. Not being part of this particular group, it felt strange to go through the various chapters after all. This title is clearly addressed to experienced jQuery users and developers, especially those looking for improvements in performance and better ways of optimisation: sometimes just to simplify the existing jQuery code in order to avoid loading the complete jQuery library, and sometimes for a better understanding of JavaScript and its syntax. Callum's style of writing is clear, and the numerous code samples used to emphasize the various techniques are good ones and easy to understand. Quite interestingly, it put a light smile on my face when I compared his sample code for sending an AJAX request with some code in one of my own blog articles written back in 2006 (in German). JavaScript is clearly a mature language, and certain requirements are simply done this way. And Callum explains the nuts and bolts of JavaScript very well. Personally, I gained the most from chapter 5 - JavaScript Conventions. The paragraphs and code snippets on Optimizations and Common Antipatterns gave me a better understanding of various aspects of JavaScript development, and I definitely have to revise a couple of code fragments I have written in the past. Overall the book provides solid information on JavaScript for jQuery developers and is worth the money spent. Just be sure that you're part of the targeted audience.

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that those libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on. Recently a client made us question one of our fundamental assumptions about the .NET Framework and Web Services by asking, "Why should we use proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right? Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test my client's hypothesis. Read More >
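    For context, the hand-rolled approach under test looks roughly like the C# sketch below (the service URL, XSLT path and class name are placeholders rather than the client's actual code):

        using System;
        using System.IO;
        using System.Net;
        using System.Xml;
        using System.Xml.Xsl;

        class ManualServiceCall
        {
            // Call the web service over HTTP, parse the XML reply, transform it to HTML.
            static string GetHtml(string serviceUrl, string xsltPath)
            {
                var request = (HttpWebRequest)WebRequest.Create(serviceUrl);
                using (var response = request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    var doc = new XmlDocument();
                    doc.Load(stream);                       // raw XML from the service

                    var xslt = new XslCompiledTransform();
                    xslt.Load(xsltPath);                    // stylesheet that emits HTML

                    using (var writer = new StringWriter())
                    {
                        xslt.Transform(doc, null, writer);  // XML -> HTML
                        return writer.ToString();
                    }
                }
            }
        }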

    Read the article

  • Items Affecting Performance of the MySQL Database

    - by Antoinette O'Sullivan
    To learn about the many factors that can affect the performance of the MySQL Database, take the MySQL Performance Tuning course. You will learn:

    - How your hardware and operating system can affect performance
    - How to set up logging to improve performance
    - Best practices for backup and recovery
    - And much more

    You can take this 4-day instructor-led course in the following formats:

    Training-on-Demand: Start training within 24 hours of registering, following lectures at your own pace through streaming video and booking time on a lab environment to suit your schedule.
    Live-Virtual Event: Attend a live event from your own desk, no travel required. Choose from a selection of events on the schedule to suit different time zones.
    In-Class Event: Travel to an education center to attend this course.

    Below is a selection of events already on the schedule:

    Location                   Date               Delivery Language
    Brussels, Belgium          10 November 2014   English
    Sao Paulo, Brazil          25 August 2014     Brazilian Portuguese
    London, England            20 October 2014    English
    Milan, Italy               20 October 2014    Italian
    Rome, Italy                1 December 2014    Italian
    Riga, Latvia               29 September 2014  Latvian
    Petaling Jaya, Malaysia    22 September 2014  English
    Utrecht, Netherlands       10 November 2014   English
    Warsaw, Poland             1 September 2014   Polish
    Barcelona, Spain           14 October 2014    Spanish

    To register for an event, request an additional event, or learn more about the authentic MySQL Curriculum, go to http://education.oracle.com/mysql.

    Read the article

  • Redirect 'host-based' requests to a port (inside a docker container)

    - by Disco
    I'm trying to achieve this fun project of having multiple 'postfix/dovecot' instances in docker containers. I'm searching for 'something' that would redirect any incoming request on port 25 (and maybe later 143, 993) to the right container on a different port. Here's the idea:

        (internet) --- port 25 ---> +---------+
                                    | mainbox |
                                    +---------+
                                     |       \
                          port 52032 |        \ port 52033
                                     v         v
                              +------------+  +------------+
                              | container1 |  | container2 |
                              | (postfix)  |  | (postfix)  |
                              +------------+  +------------+

    So the idea is to 'redirect' requests coming in on port 25 to the right internal port, based on hostname; ideally, it would be great to manage this 'mapping' with a database or text file. Any ideas? Directions?

    Read the article

  • How can I diagnose a "502 Bad Gateway" response from an Apache/Tomcat configuration?

    - by Structure
    I just finished up configuring a fairly default configuration of Tomcat. My Apache configuration was pre-existing and post-tomcat it still has no issues. I am using mod_jk to (if I am saying this correctly) interface between Apache and Tomcat and have my conf files setup for my workers, etc. I put my test file (Simply: http://tomcat.apache.org/tomcat-4.1-doc/appdev/sample/web/hello.jsp) into my tomcat/webapps/ directory and then call it via http://localhost/test/hello.jsp. From here Apache returns a "502 Bad Gateway" response. I confirmed this via the Apache logs, but beyond that I have no idea how to diagnose the issue. I assume the 502 is because Tomcat did not respond. I'd like to confirm if Tomcat received the request, but cannot locate the log file. At this point I had thought my installation was complete, so not sure where to go from here. Any input would be appreciated.
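    For comparison, a minimal mod_jk wiring usually amounts to something like the sketch below (the worker name is arbitrary, and port 8009 assumes Tomcat's default AJP connector is enabled in server.xml):

        # workers.properties
        worker.list=tomcat1
        worker.tomcat1.type=ajp13
        worker.tomcat1.host=localhost
        worker.tomcat1.port=8009

        # in the Apache vhost / httpd.conf
        #   JkMount /test/* tomcat1

    A 502/503 from Apache when fronting Tomcat this way typically points at the AJP connection, so Tomcat's catalina.out and the AJP connector definition in server.xml are worth checking next.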

    Read the article

  • NodeJS Supervisord Hashlib

    - by enedebe
    I have a problem with my NodeJS app. The problem is including the library Hashlib. I've followed the install instructions more than 10 times: get a clone of the repo, do make and make install. NodeJS is installed in the default path, and that's the tricky point: when I launch node app.js by hand it works perfectly. The problem starts when I configure Supervisord to run the app with the same user and the same config file I have working on other systems, and then NodeJS can't find hashlib:

        module.js:337
            throw new Error("Cannot find module '" + request + "'");
            ^
        Error: Cannot find module 'hashlib'

    I'm going crazy, what can I do?! Why does launching node from the console as my user work fine, but not under Supervisord? Thanks!

    Read the article

  • how to change document root to public_html from root directory

    - by manish
    For testing I hosted my website on a free server from 000webhost.com. They have a directory structure of:

        (root folder)
            \public_html   (public folder)

    This directory structure lets me keep all the library files in the root folder and all public files in /public_html, so I developed my website accordingly, and my final structure looked like:

        /
        /include        (this folder contains library files)
        /logs           (log files)
        /public_html
        /public_html/index.php
        /public_html/home.php
        /public_html/   (and other public files)

    000webhost makes only the public_html folder accessible via URL, so my URLs looked neat and clean, like www.example.com/index.php or www.example.com/home.php. But after finishing development I moved the website to a shared host purchased from go-daddy.com. They do not have this kind of directory arrangement; all the files are kept in the root folder and are accessible via URL, so my URLs have become www.example.com/public_html/home.php or www.example.com/public_html/index.php. How should I redirect URL requests to the public_html folder again, so as to make the library files unavailable to public access and keep the URLs neat and clean?
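    One commonly used approach (a sketch, not tested on this particular host) is an .htaccess file in the new root that silently rewrites every request into public_html and blocks the library folders:

        # .htaccess in the site root
        RewriteEngine On
        # keep library and log folders out of reach
        RewriteRule ^(include|logs)(/|$) - [F,L]
        # serve everything else from public_html without showing it in the URL
        RewriteCond %{REQUEST_URI} !^/public_html/
        RewriteRule ^(.*)$ /public_html/$1 [L]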

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there exists an SSIS Dataflow Destination component that enables one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE Destination will be available. That being said, the SSIS team have made a MERGE destination component available via CodePlex, which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary.

    In the past it has occurred to me that a built-in way to perform a MERGE from the SSIS pipeline would be highly valuable. I assume this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect:

    BULK MERGE (111 votes at the time of writing)
    [SSIS] BULK MERGE Destination (15 votes)

    If you think these would be useful, feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet
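    For context, this is the sort of statement such a destination would need to issue against the target table; a generic T-SQL sketch with made-up table and column names:

        MERGE dbo.TargetTable AS tgt
        USING dbo.StagingTable AS src
            ON tgt.BusinessKey = src.BusinessKey
        WHEN MATCHED THEN
            UPDATE SET tgt.SomeValue = src.SomeValue
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (BusinessKey, SomeValue)
            VALUES (src.BusinessKey, src.SomeValue);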

    Read the article

  • How to enable Jetty to support cometd/reverse ajax while let it listen to port 80?

    - by janetsmith
    Hi, I would like to use the cometd / reverse Ajax capability of Jetty 7. I tried to configure it to listen on port 80 instead of 8080. However, according to http://jetty.mortbay.org/jetty5/faq/faq%5Fs%5F200-General%5Ft%5Fapache.html, Apache can be configured as an HTTP/1.1 proxy to pass selected requests to Jetty using the HTTP/1.1 protocol. This is simple to configure and use, but current versions of the Apache mod_proxy do not support persistent connections. As far as I know, reverse Ajax in Jetty depends on continuations (which I take to mean persistent connections). So how can I let Jetty support reverse Ajax while coexisting with the Apache server? Thanks.

    Read the article
