Search Results

Search found 18210 results on 729 pages for 'website promotion'.


  • String length difference between Ruby 1.8 and 1.9

    - by Raghu
    I have a website that's running on Ruby 1.8.7. I have a validation on an incoming post that checks that we allow up to a maximum of 12,000 characters. Spaces are counted as characters, and tabs and carriage returns are stripped off before the post is subjected to the validation. Here is the post that is subjected to validation: http://pastie.org/5047582 In Ruby 1.9 the string length shows up as 11909, which is correct. But when I check the length on Ruby 1.8.7 it turns out to be 12044. I used codepad.org to run this Ruby code, which gives me http://codepad.org/OxgSuKGZ (which outputs the length as 12044, which is wrong), but when I run this same code in the console at codeacademy.org the string length is 11909. Can anybody explain why this is happening? Thanks
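
    The discrepancy is almost certainly the byte-versus-character distinction: Ruby 1.8's String#length counts bytes, while Ruby 1.9 is encoding-aware and counts characters, so any multibyte UTF-8 text (curly quotes, accents, non-ASCII punctuation) inflates the 1.8 count. As a language-neutral illustration (my own C# sketch, not the original Ruby code), the snippet below contrasts the character count with the UTF-8 byte count of the same string:

        using System;
        using System.Text;

        class LengthDemo
        {
            static void Main()
            {
                // A string containing multibyte UTF-8 characters (a curly quote and accents).
                string text = "It\u2019s a r\u00e9sum\u00e9";

                // Character count - what Ruby 1.9's String#length reports.
                Console.WriteLine("Characters: " + text.Length);

                // UTF-8 byte count - what Ruby 1.8's String#length effectively reports.
                Console.WriteLine("UTF-8 bytes: " + Encoding.UTF8.GetByteCount(text));
            }
        }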

    Read the article

  • ASP.NET 4 - IIS 7 - Request timed out

    - by sharru
    My website is running on ASP.NET 4, IIS 7, Windows Server 2008. CPU usage is at 20-30% and the site is responding quickly. Every 2-5 minutes I'm receiving the following error:

        Event code: 3001
        Event message: The request has been aborted.
        Exception type: HttpException
        Exception message: Request timed out.
        Request information:
        Request URL: http://www.xxxx.com/Services/AxRefresh.asmx/AxUpdate
        Request path: /Services/AxRefresh.asmx/AxUpdate
        User host address: 84.110.251.198
        User:
        Is authenticated: False
        Authentication Type:
        Thread account name: NT AUTHORITY\NETWORK SERVICE

    I read that the error is related to the maximum concurrent requests limit (http://support.microsoft.com/kb/821268), but then I found out that on IIS 7 this limitation has changed and is no longer relevant: http://msdn.microsoft.com/en-us/library/dd560842(VS.100).aspx Any other ideas about what the problem could be or where to start looking? Thanks!

    Read the article

  • localhost lookup fails, browser tries www.localhost.com instead

    - by Maen
    I used to run web applications all the time on my laptop with no problems. I am using VWD 2008 Express, I have the latest framework, Windows Vista Home Basic, etc. Now, whenever I try to run a website, or even choose to show a page in the browser from within VWD, the browser (both IE and Firefox) keeps looking for www.localhost.com. I tried to copy the address and paste it directly into the address bar: nothing, same problem. I tried to get that address from the balloon notification (the one that pops up when you run any ASP.NET project): still nothing happens. My colleague is facing the same problem, but he can simply copy and paste the URL into the address bar; that is not working for me. Help!

    Read the article

  • How to avoid "Page Expired" messages on logout with Wicket after redirect

    - by John
    Hi there. I'm taking my first steps into the world of Wicket. I've set up a login form and a logout page. The logout page looks like this:

        public class LogoutPage extends WebPage {
            public LogoutPage(final PageParameters pageParams) {
                getSession().invalidate();
                setResponsePage(getApplication().getHomePage());
            }
        }

    When a user clicks the Logout link they're redirected to the homepage; so far so good. The problem is that when the user then tries to click on any of the navigable Links they get a "Page Expired" message. I'd like to log the user out gracefully and allow them to continue to browse the website without receiving the "Page Expired" message. Any help is greatly appreciated. By the way, I've tried to follow the Wicket examples, but they seem to be using org.apache.wicket.authentication and I haven't figured out how to make this available to my project (I'm using Eclipse and Maven 2).

    Read the article

  • Accessing Squid Proxy over internet

    - by prateekdayal
    Hi, I recently finished installing Squid on a VPS I have in the US and it's working fine locally (I verified this by setting the http_proxy variable and using lynx). I want to access this proxy over the internet (as an anonymizer) so that I can see how some ads show up for US traffic on my website. I have set up authentication, so abuse is not a problem. However, I am not able to access the proxy over the internet. I have set the following rule in squid.conf:

        http_access allow all

    Is what I want not possible, or am I missing something? Port 3128 is open in the firewall, so that is not the issue. Squid is listening on 0.0.0.0. Thanks, Prateek

    Read the article

  • php, curl, php curl, multipart/form-data, upload picture redirect

    - by Michael
    I'm trying to upload some pictures using PHP cURL on a classified-ad website. I think I set all the parameters properly, but I see that there is a kind of redirect after I post the picture. The issue is that the URL I get redirected to returns a 404 error instead of the HTML it returns when I make the post with a normal browser. Here is the PHP code that I have so far:

        $URL = "http://api.classistatic.com/api/image/upload";
        $s = "PAD001";
        $v = "2";
        $n = "k";
        $a = "1:a126581b8150ddc1337cabce28f2feb53849fd143bd6e42649f90175c0e023e3";
        $u = "@/var/www/html/artwork/tmp/!BszBLV!EGk~$(KGrHqEOKicEvMi8HVg(BL5ZbWvs0g~~_1.JPG";

        $htmlContent = $baseClass->processPicturerequest($URL, $s, $v, $b, $n, $a, $u);

    The log from the server is here: http://pastebin.com/gZqPgsFX

    Read the article

  • Ruby on Rails vs. Grails vs. Spring Roo vs. Spring App

    - by lizdev
    Hi, I'm planning on writing a simple web application that will be used by lots of users (no more complicated than a simple bookmarking app) and I'm trying to decide which framework/language to use. I'm very experienced with Spring/Hibernate and Java in general, but new to both Grails and RoR (and Spring Roo). The only reason I'm considering RoR is that Java hosting is MUCH more expensive than RoR hosting (which is supported by almost any hosting vendor for $5 per month). Assuming price weren't an issue, which one of the frameworks/languages mentioned above would you recommend to a Java developer (who knows how to configure Spring/Hibernate etc.)? I'm afraid that by using RoR I won't be able to easily support many users who are using the website at the same time. Thanks

    Read the article

  • Installing Xuggler Java libraries in Tomcat on Linux

    - by Dilip Shah
    I'm trying to install the Xuggler Java libraries in Tomcat (version 5.5) on fedora-release-7-3. Should I install the binaries available for download on the Xuggler website, or build my own (http://www.xuggle.com/xuggler/downloads/build.jsp)? I took the easy route first and installed the ready-made binaries downloaded from http://www.xuggle.com/xuggler/downloads/ into the /usr/local/xuggler folder on my Linux server, and then copied the jar files from the share/java/jars folder to Tomcat's $CATALINA_HOME/shared/lib directory (as recommended by http://wiki.xuggle.com/Frequently_Asked_Questions#I_get_an_.22UnsatisfiedLinkError.22_when_I_run_Xuggler-based_Applications_in_Tomcat). There are about six .jar files, including xuggle-xuggler.jar. After restarting Tomcat, I'm still getting a "java.lang.UnsatisfiedLinkError: no xuggle-xuggler in java.library.path" exception when my Java code attempts to invoke some Xuggler method, such as the one to find the video duration of an FLV file. What am I doing wrong? Any help is very much appreciated!

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010 - Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

    Test Results - I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    1. Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results' memory requirements from growing unbounded, up to 200 samples are maintained for each performance counter. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

    2. After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screenshot of the summary view; it provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

    3. Analyse the load test results of a previously run load test - We'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over a point to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screenshots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts

    While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but this might just escalate the problem. The better suggestion is to try to drill down to the actual root cause of the problem.

    Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting and, remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen 0, and the long-living objects eventually move to Gen 2 as garbage collection goes through.

    As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen 0. When Gen 0 fills up and a new object comes in and finds Gen 0 full, the garbage collection process is started; the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen 0 to Gen 1. So in the future, whenever you clean up Gen 1 you have to clean up Gen 0 as well. When you fill up Gen 0 again, all of Gen 1's dead objects are collected and the rest are moved to Gen 2, and Gen 0's objects are moved to Gen 1 to free up Gen 0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run, all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more. What is really a crisis is when objects live long enough to make it to Gen 2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache, and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. That is a great way to free up all the memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
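
    To make the generation story above concrete, here is a minimal C# sketch (my own illustration, not code from the original post) that uses GC.GetGeneration and GC.CollectionCount to watch a small object get promoted through the generations, and to show that a large allocation lands straight on the large object heap:

        using System;

        class GcGenerationsDemo
        {
            static void Main()
            {
                // A small object starts life in Gen 0.
                var small = new byte[1024];
                Console.WriteLine("Small object generation: " + GC.GetGeneration(small));   // typically 0

                // Each forced collection the object survives promotes it one generation.
                GC.Collect();
                Console.WriteLine("After 1 collection: " + GC.GetGeneration(small));        // typically 1
                GC.Collect();
                Console.WriteLine("After 2 collections: " + GC.GetGeneration(small));       // typically 2

                // Objects of roughly 85 KB or more go straight to the large object heap,
                // which is reported as the highest generation.
                var large = new byte[100 * 1024];
                Console.WriteLine("Large object generation: " + GC.GetGeneration(large));   // typically 2

                // Per-generation collection counts show how much rarer Gen 2 collections are.
                Console.WriteLine("Gen 0/1/2 collections: " + GC.CollectionCount(0) + "/" +
                                  GC.CollectionCount(1) + "/" + GC.CollectionCount(2));
            }
        }
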
    How can you address this? Well, there are two ways:

    1. Workaround - An x86 (32-bit) processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'Any CPU' mode, the application is built such that it runs in 32-bit mode on an x86 processor and in 64-bit mode on an x64 processor. The problem is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on an x64 machine in 32-bit mode (WOW - Windows on Windows), and that means you can use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution - Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you going in the right direction. Let's have a look at how.

    Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screenshot below, check the checkboxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

    Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen 0, Gen 1, Gen 2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your application's objects die right after making it to Gen 2 storage, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing IntelliTrace data you can drill in further, you can narrow down to the line of code that is the root cause of the problem.

    So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

    Caching

    One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
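
    As a companion to the profiler discussion, here is a minimal C# sketch (my own illustration with hypothetical names, not code from the post) of the classic retention pattern the Object Lifetime view tends to surface: a static collection that only ever grows, so its contents are promoted to Gen 2 and never reclaimed:

        using System;
        using System.Collections.Generic;

        class LeakyRequestLog
        {
            // Static field: everything added here stays reachable for the life of the process,
            // so these entries get promoted to Gen 2 and are never collected.
            private static readonly List<byte[]> History = new List<byte[]>();

            public static void HandleRequest(int requestId)
            {
                // Simulate per-request data that is "logged" but never trimmed.
                History.Add(new byte[16 * 1024]);
            }

            static void Main()
            {
                for (int i = 0; i < 10000; i++)
                {
                    HandleRequest(i);
                }

                // The working set keeps climbing; a profiler's Object Lifetime view would show
                // these byte[] instances piling up in Gen 2 instead of dying in Gen 0.
                Console.WriteLine("Entries retained: " + History.Count);
                Console.WriteLine("Approx. managed memory: " +
                                  GC.GetTotalMemory(false) / (1024 * 1024) + " MB");
            }
        }
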
    Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache; no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in and fetch the results again. The application will still be faster, because the next set of 10 users, and so on, will continue to get data from the cache. BUT if we add another null check after taking the lock and before the actual call to the database, then the 9 users who follow me will not make the extra trip to the database at all, and that really increases the performance. But didn't I say the code won't be very intuitive? You should probably leave a comment; you don't want another developer to come in and wonder why you are checking the cache key for null twice!

    The downside of caching is that you are storing the data outside of the database, and that data can become wrong, because updates applied to the database put the data cached at the web server out of sync. So how do you invalidate the cache? Well, if you only had one way of updating the data, say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The practical solution is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will have immense performance gains and will cut hits to the database for searching by 90% or more.

    Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click the Book button because the fare seems too exciting, you get an error message telling you that the fare is no longer valid? Yes, exactly: that is a cache failure! These travel sites and price comparison engines are not going to hit the database every time you hit the Compare button; instead the results are served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gains huge performance benefits, but every once in a while it annoys a customer because the fare has expired.
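
    The snippet the post refers to is not reproduced here, so below is a minimal C# sketch of the pattern described: a double null check around a lock, plus a micro-cache expiry. The cache key format, the 60-second window and GetSearchResultsFromDb are placeholders of my own, not the original code:

        using System;
        using System.Web;
        using System.Web.Caching;

        public class SearchService
        {
            private static readonly object CacheLock = new object();

            public object GetSearchResults(string searchTerm)
            {
                string cacheKey = "search:" + searchTerm;

                // First check: the common case, no locking needed when the item is already cached.
                object results = HttpRuntime.Cache[cacheKey];
                if (results == null)
                {
                    lock (CacheLock)
                    {
                        // Second check: a request that was queued behind the lock may find the
                        // cache already populated and must not hit the database again.
                        results = HttpRuntime.Cache[cacheKey];
                        if (results == null)
                        {
                            results = GetSearchResultsFromDb(searchTerm);

                            // Micro caching: keep the results for a short, fixed window only.
                            HttpRuntime.Cache.Insert(cacheKey, results, null,
                                DateTime.UtcNow.AddSeconds(60), Cache.NoSlidingExpiration);
                        }
                    }
                }
                return results;
            }

            private object GetSearchResultsFromDb(string searchTerm)
            {
                // Placeholder for the real database query.
                return new[] { searchTerm };
            }
        }
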
    But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second, which means they cater to the site traffic, and even if they lose a customer every once in a while to a competitor who is using a similar caching technique, what are the odds that the user will not come back to their site sooner or later?

    Recap

    Resources

    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead

    Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up, 'Load Testing in the Cloud': I'll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank you!

    Read the article

  • Save PowerPoint 2007 as PowerPoint 2003 using the Open XML SDK 2.0

    - by user299592
    Is there any way to use the Open XML SDK 2.0 to save a PowerPoint presentation that you created using OOXML as an Office 2003 PowerPoint presentation? I know that if you open a 2007 file and click Save As, you have the option to save it as a PowerPoint 97-2003 document, and I wondered whether I could do this programmatically using this SDK. The reason I am asking is that I need to give the user the option to save data on a website in either Office 2007 or 2003 format. I would much rather use the same code to produce the document instead of having two code paths for PowerPoint 2003 and PowerPoint 2007.

    Read the article

  • LinkDemand error on webserver when using TraceSource

    - by robertpnl
    Hi, on a web server (shared hosting provider) I published a website with an ADO.NET framework model using MySQL Connector 6.3.1. When I request a page, a SecurityException occurs with this error message: "LinkDemand. The type of the first permission that failed was: System.Security.Permissions.SecurityPermission. The Zone of the assembly that failed was: MyComputer." This exception is raised when the code collects the listeners of a TraceSource:

        public class MySqlTrace
        {
            private static TraceSource source = new TraceSource("mysql");

            static MySqlTrace()
            {
                foreach (TraceListener listener in source.Listeners) // <-- Exception thrown here
                {
                    // ...
                }
            }
        }

    The web.config doesn't have any trace data or system.diagnostics section. My question is: why do I get a LinkDemand security exception while collecting the source listeners? What could be wrong here?

    Read the article

  • CodeIgniter benchmarking: where are these ms coming from?

    - by ropstah
    I'm in the process of benchmarking my website.

        class Home extends Controller {

            function Home() {
                parent::Controller();
                $this->benchmark->mark('Constructor_start');
                $this->output->enable_profiler(TRUE);
                $this->load->library('MasterPage');
                $this->benchmark->mark('Constructor_end');
            }

            function index() {
                $this->benchmark->mark('Index_start');
                $this->masterpage->setMasterPage('master/home');
                $this->masterpage->addContent('home/index', 'page');
                $this->masterpage->show();
                $this->benchmark->mark('Index_end');
            }
        }

    These are the results:

        Loading Time Base Classes: 0.0076
        Constructor: 0.0007
        Index: 0.0440
        Controller Execution Time ( Home / Index ): 0.4467
        Total Execution Time: 0.4545

    I understand the following: Loading Time Base Classes (0.0076), Constructor (0.0007) and Index (0.0440). But where is the rest of the time coming from?

    Read the article

  • CSS positioning breaks in Safari

    - by user298638
    Hi all, I have a website in which I am trying to position (using CSS) a certain element on the page. The element is absolutely positioned and is located inside a relatively positioned parent element. On Firefox and even IE it looks OK, but on Safari things get messy and the element shows 5 pixels lower than it should. I have been trying for days now to figure out what is wrong, but cannot seem to see it. You can find a link to the problematic page here: http://yaronattar.com/index.php?option=com_content&view=article&id=117:the-new-lovers-2010&catid=51:the-new-lovers-2010&Itemid=242 The problematic element is the one containing the "previous/next" navigation at the bottom right corner of the page. Does anyone see what is causing the trouble here? Thanks

    Read the article

  • Mobile Device emulator cannot access localhost

    - by Diana
    I am using the Windows Mobile 6 Professional Emulator and Windows Mobile Device Center. I connected and cradled the emulator to my computer. I am trying to connect from the emulator's browser to a web service that is deployed in the IIS of my computer (the same machine where the emulator is installed). If I connect my computer to the internet, I can access any website, including my local web service (using the IP returned by ipconfig). The problem is when I disconnect the computer from the internet: I cannot access my local web service using the IP (the internal one returned by ipconfig) or the machine name. Do you have any idea what settings I am missing? I am sure this is possible somehow, I just don't know how... PS: The goal is to access the web service from a mobile application, but until I can access it from the browser, I cannot access it from the application either. Thank you!

    Read the article

  • Serving static content from a cookie-less domain and mod_deflate

    - by Saif Bechan
    I have two domains: one domain with my main website and the other with static content (js/css/etc.). mod_deflate is enabled for both domains, but when I run YSlow in Firefox it says none of my static content is compressed. When I bring the js or css file back to my normal domain it gets compressed fine; only when it's served from the other domain is it not compressed. Do I have to do some more configuration for this to work? I am using this line in my .htaccess file:

        AddOutputFilterByType DEFLATE application/x-javascript text/css text/html text/xml text/plain application/x-httpd-php

    I tried to put the line in my httpd.conf file instead, but it gives me the same results. PS: If this is more of a Server Fault question, I am sorry, but I see a lot of questions here concerning mod_deflate and YSlow.

    Read the article

  • Joomla: editing the MVC package for Joomla component development

    - by PROFESSOR
    Hi! I'm new to Joomla component development. I have just downloaded a bunch of files from a Joomla MVC generator website (MVC Generator version 1.0.5), which look something like this:

        hello.xml
        frontend/index.html
        frontend/hello.php
        frontend/controller.php
        frontend/models/index.html
        frontend/models/hello.php
        frontend/views/index.html
        frontend/views/hello/index.html
        frontend/views/hello/view.html.php
        frontend/views/hello/metadata.xml
        frontend/views/hello/tmpl/index.html
        frontend/views/hello/tmpl/default.php
        frontend/views/hello/tmpl/default.xml
        frontend/assets/index.html
        frontend/assets/images/index.html
        backend/index.html
        backend/admin.hello.php
        backend/controller.php
        backend/CHANGELOG.php
        backend/views/index.html
        backend/views/hello/index.html
        backend/views/hello/view.html.php
        backend/views/hello/tmpl/index.html
        backend/views/hello/tmpl/default.php
        backend/models/index.html
        backend/models/hello.php
        backend/assets/index.html
        backend/assets/images/index.html
        languages-front/en-GB/en-GB.com_hello.ini
        languages-admin/en-GB/en-GB.com_hello.ini

    But I don't know how or where to edit those files. Please help; I am trying to turn my existing PHP-based application into a Joomla component.

    Read the article

  • How to best develop web crawlers

    - by Fernando Barrocal
    Hey all, I often create crawlers to compile information, and when I come to a website that has the info I need, I start a new crawler specific to that site, using shell scripts most of the time and sometimes PHP. The way I do it is with a simple for loop to iterate over the page list, wget to download each page, and sed, tr, awk or other utilities to clean the page and grab the specific info I need. The whole process takes some time, depending on the site, and more time still to download all the pages. And I often run into an AJAX site that complicates everything. I was wondering if there are better or faster ways to do this, or even some applications or languages that would help with such work.
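
    For comparison with the shell-script approach described above, here is a minimal C# sketch of the same fetch-and-clean loop; the URL pattern, the page range and the regular expression are hypothetical placeholders, and a real crawler should also respect robots.txt and throttle its requests:

        using System;
        using System.Net.Http;
        using System.Text.RegularExpressions;
        using System.Threading.Tasks;

        class SimpleCrawler
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    // Hypothetical listing pages, crawled the same way the shell for-loop would.
                    for (int page = 1; page <= 5; page++)
                    {
                        string url = "http://example.com/listing?page=" + page;
                        string html = await client.GetStringAsync(url);

                        // Stand-in for the sed/awk cleanup: pull out item titles with a regex.
                        foreach (Match m in Regex.Matches(html, @"<h2 class=""title"">(.*?)</h2>",
                                                          RegexOptions.Singleline))
                        {
                            Console.WriteLine(m.Groups[1].Value.Trim());
                        }

                        // Be polite: pause between requests.
                        await Task.Delay(TimeSpan.FromSeconds(1));
                    }
                }
            }
        }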

    Read the article

  • Header and footer in Prawn PDF

    - by Nik
    Hello all, I have read through all the relevant posts on Prawn but found no mention (even in Prawn's own documentation) of headers and footers. However, I did see a demo on Prawnto's own website about headers and footers. I copied the entire source of that demo just to see if it works, but it complains about an undefined method "header". Am I to understand that Prawn recently took header and footer out of the gem, or is there something else I need to do first to use them? The demo page: http://cracklabs.com/prawnto/code/prawn_demos/source/text/flowing_text_with_header_and_footer The part of the code in question:

        Prawn::Document.generate("flow_with_headers_and_footers.pdf") do
          header margin_box.top_left do
            text "Here's My Fancy Header", :size => 25, :align => :center
          end
          text "hello world!"
        end

    And by header, just in case, I mean the snippets of words that usually appear at a corner of every page of a document, like your account number on the pages of your bills. Thanks!

    Read the article

  • WCF Security Transport Security Questions

    - by shyneman
    I'm writing a set of WCF services that rely on transport security with Windows Authentication, using the trusted subsystem model. However, I want to perform authorization based on the original client user that initiated the request (e.g. a user from a website with a username/password). I'm planning to achieve this by adding the original user's credentials to the header before the client sends the message; the service will then use the supplied credentials to authorize the user. So I have a few questions about this implementation: 1) Using transport security with Windows auth, I do NOT need to worry about additionally encrypting the passed credentials to ensure their validity; WCF automatically takes care of this. Is this correct? 2) How does this implementation prevent a malicious service, running under some Windows account within the domain, from sending a message tagged with spoofed credentials, e.g. replacing the credentials with an admin user to do something bad? Thanks for any help.
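
    One common way to carry the original caller, as described above, is a custom message header added on the client side and read inside the service. Below is a minimal C# sketch of that idea; the header name, namespace, contract and types are placeholders of my own, not from the question, and the approach still relies on transport security for confidentiality:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            void DoWork();
        }

        public static class OriginalCallerHeader
        {
            private const string Name = "OriginalUser";
            private const string Ns = "http://schemas.example.com/security";

            // Client side: attach the original web user's name, then call the service
            // inside the same OperationContextScope so the header goes out with the message.
            public static void CallWithOriginalUser(IMyService proxy, string originalUser)
            {
                using (new OperationContextScope((IContextChannel)proxy))
                {
                    var header = MessageHeader.CreateHeader(Name, Ns, originalUser);
                    OperationContext.Current.OutgoingMessageHeaders.Add(header);
                    proxy.DoWork();
                }
            }

            // Service side: read the header back out and use it for authorization decisions.
            public static string ReadOriginalUser()
            {
                MessageHeaders headers = OperationContext.Current.IncomingMessageHeaders;
                int index = headers.FindHeader(Name, Ns);
                return index >= 0 ? headers.GetHeader<string>(index) : null;
            }
        }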

    Read the article

  • ColdFusion 9 cluster with IIS 7.5

    - by Adam Winter
    Does anyone know of a step-by-step process for setting up a ColdFusion 9 cluster using IIS 7.5? Either failover or network load balancing would be nice. With IIS not being clusterable in Windows 2008 R2, I'm not sure of the best way to configure the web server and the ColdFusion service. Some of the things I'm looking for are: With load balancing, what do you use in front of the servers as a load balancer? If you're using a network share on a clustered file server for the website's data files, how do you configure the ColdFusion service so that it has network access instead of running as Local System?

    Read the article

  • Page posting issue when screen scraping

    - by Muhammad Akhtar
    Hi, I am working on screen scraping and have done it successfully on 3 websites, but I have an issue with the last website. Here is my URL: when I hit it with my parameters, it shows the result on the next page; it simply posts to another page and shows the result fine there ("Here is My Test"). However, when I hit it from my application, since I don't have an option to post, it only fetches the HTML of the requested page, i.e. the HTML test link mentioned above, which actually has the parameters in the URL to get the result. How can I handle this situation? Please give me a hint. Thanks. Here is my C# code; I am using the Html Agility Pack:

        String url;
        HtmlWeb hw = new HtmlWeb();
        HtmlDocument doc;
        url = "http://mysampleURL";
        doc = hw.Load(url);
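
    When the target page only renders results for a POST, one option is to send the form fields yourself with HttpWebRequest and then load the returned HTML into an Html Agility Pack document. The sketch below assumes a hypothetical URL and form field names (the real field names would have to be taken from the site's own form HTML):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;
        using HtmlAgilityPack;

        class PostScrapeExample
        {
            static void Main()
            {
                string url = "http://mysampleURL/results";           // hypothetical target page
                string postData = "searchField=myParameter&page=1";  // hypothetical form fields

                byte[] body = Encoding.UTF8.GetBytes(postData);

                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                request.ContentLength = body.Length;

                using (Stream requestStream = request.GetRequestStream())
                {
                    requestStream.Write(body, 0, body.Length);
                }

                string html;
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    html = reader.ReadToEnd();
                }

                // Load the POSTed result page into the Html Agility Pack for parsing.
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                Console.WriteLine(doc.DocumentNode.SelectNodes("//table") != null
                    ? "Found result tables."
                    : "No result tables found.");
            }
        }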

    Read the article

  • PHP Doctrine last identifier issue

    - by mike
    I'm running into the issue below when trying to run the following:

        $store = new Store();
        $store->url = $this->form_validation->set_value('website');
        $store->save();
        $store_id = $store->identifier();

    The error:

        Fatal error: Uncaught exception 'Doctrine_Connection_Exception' with message 'Couldn't get last insert identifier.' in /home/yummm/public_html/system/application/plugins/doctrine/lib/Doctrine/Connection/UnitOfWork.php:932
        Stack trace:
        #0 /home/yummm/public_html/system/application/plugins/doctrine/lib/Doctrine/Connection/UnitOfWork.php(632): Doctrine_Connection_UnitOfWork->_assignIdentifier(Object(Category_store_assn))
        #1 /home/yummm/public_html/system/application/plugins/doctrine/lib/Doctrine/Connection/UnitOfWork.php(562): Doctrine_Connection_UnitOfWork->processSingleInsert(Object(Category_store_assn))
        #2 /home/yummm/public_html/system/application/plugins/doctrine/lib/Doctrine/Connection/UnitOfWork.php(81): Doctrine_Connection_UnitOfWork->insert(Object(Category_store_assn))
        #3 /home/yummm/public_html/system/application/plugins/doctrine/lib/Doctrine/Record.php(1691): Doctrine_Connection_UnitOfWork->saveGraph(Object(Category_store_assn))
        #4 /home/yummm/public_html/system/application/controllers/auth.php(375): Doctrine_Reco in /home/yummm/public_html/system/application/plugins/doctrine/lib/Doctrine/Connection/UnitOfWork.php on line 932

    When I echo $store_id, it seems to be grabbing the last ID without any issues. Any idea why this error keeps coming up even though the ID is coming through correctly?

    Read the article

  • Silverlight 4 HttpWebRequest user agent string is null

    - by Dan B
    The problem: I have a page with a Silverlight object. It attempts to retrieve XML from the page, but I am struggling with a security exception. I have this code working brilliantly in WPF. When a website hosts a Silverlight application with the same code, the user agent string of the HttpWebRequest object is null (and seemingly cannot be set). In fact there is no header information at all; this causes a security exception when attempting to make my asynchronous call. The question: why is the user-agent string (and header information) null in my Silverlight 4 application when making an asynchronous call using HttpWebRequest? Thanks in advance!
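
    One likely factor (my assumption, not confirmed in the question) is that Silverlight's default browser HTTP stack delegates header handling to the hosting browser, so many headers are neither visible nor settable from managed code. Silverlight 4 lets you opt a URI prefix into the client HTTP stack instead, as in the sketch below (the URL is a placeholder); even then, which individual headers can be set remains restricted, so treat this as a direction to test rather than a guaranteed fix:

        using System;
        using System.Net;
        using System.Net.Browser;

        public static class HttpStackSetup
        {
            public static void UseClientHttpStack()
            {
                // Route all http:// requests through Silverlight's client HTTP stack
                // instead of the hosting browser's stack.
                bool registered = WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

                // RegisterPrefix returns false if the prefix was already registered.
                if (registered)
                {
                    var request = (HttpWebRequest)WebRequest.Create(new Uri("http://example.com/data.xml"));
                    request.Method = "GET";
                    request.BeginGetResponse(ar =>
                    {
                        using (var response = (HttpWebResponse)request.EndGetResponse(ar))
                        {
                            // Inspect response.Headers here; header behaviour differs
                            // between the browser stack and the client stack.
                        }
                    }, null);
                }
            }
        }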

    Read the article

  • How to use the jQuery UI datepicker with bgIframe on IE6

    - by Steve Davies
    I am trying to use the jQuery UI datepicker (latest stable version 1.5.2) on an IE6 website, but I am having the usual problems with combo boxes (selects) on IE6, where they float above other controls. I have tried adding the bgIframe plugin after declaring the datepicker, with no luck. My guess is that the .ui-datepicker-div element to which I am attaching bgIframe doesn't exist until the calendar is shown. I am wondering if I can put the .bgIframe() call directly into the datepicker .js file, and if so, where? (The similar control by Kelvin Luck uses this approach.) Current code:

        $(".DateItem").datepicker({
            showOn: "button",
            ... etc ...
        });
        $(".ui-datepicker-div").bgIframe();

    Read the article

  • jQuery Cycle "allowPagerClickBubble: true" not quite working

    - by Shelagh
    Hi all, I want my page anchors to work as menu links, so I used this code:

        $(function() {
            $('#slideshow').cycle({
                slideExpr: 'img',
                fx: 'fade',
                speed: 2000,
                timeout: 4000,
                pager: '#nav',
                pagerEvent: 'mouseover',
                pauseOnPagerHover: true,
                pagerAnchorBuilder: function(idx, slide) {
                    // return selector string for existing anchor
                    return '#nav li:eq(' + (idx) + ') a';
                },
                allowPagerClickBubble: true
            });
        });

    If you click the right mouse button you can choose to open the links, and they open just fine, but a normal left-button click does nothing. The thing is, they WERE working; then I went to work on supposedly unrelated parts of the website, and at some point the left mouse button click stopped working. Any idea what I might have done? Left-button clicking still works for everything else on the site, so it's not some global setting. The cycle is here if my explanation is not clear: http://www.mcguirenaturals.ca/index.php Thanks for any help

    Read the article
