Search Results

Search found 5346 results on 214 pages for 'sender rewriting'.

Page 67/214 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Browser History ASP.Net AJAX: Microsoft.Web.Preview

    - by Narendra Tiwari
    I remember that in 2006 we were working on a portal for our client Venetian, Las Vegas, and the portal was full of AJAX features. One of my friends faced a challenge: retaining browser history across all the AJAX operations. In terms of user experience it was an important aspect that could not be avoided in that scenario. At the time we made some workarounds to achieve it, but they were not a perfect solution. Now, with Microsoft AJAX, a lot of such features can be achieved with optimum efficiency. Microsoft AJAX has grown its feature set over the past few years. Microsoft.Web.Preview.dll is an add-on used in conjunction with ASP.Net AJAX, and it contains a control named "History" for exactly this purpose. Source code: http://download.microsoft.com/download/8/3/1/831ffcd7-c571-4075-b8fa-6ff678794f60/CS-ASP-ASPBrowserHistoryinAJAX_cs.zip Below is a small sample to demonstrate the control.
    1/ Get the dll from the bin folder of the source code above and add a reference to it in your web application.
    2/ Right-click on the toolbox panel, select Choose Items, and browse to the assembly. You will now be able to see the History control.
    3/ Add the section group below in web.config under <configSections>:

        <sectionGroup name="microsoft.web.preview" type="Microsoft.Web.Preview.Configuration.PreviewSectionGroup, Microsoft.Web.Preview">
          <section name="search" type="Microsoft.Web.Preview.Configuration.SearchSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
          <section name="searchSiteMap" type="Microsoft.Web.Preview.Configuration.SearchSiteMapSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
          <section name="diagnostics" type="Microsoft.Web.Preview.Configuration.DiagnosticsSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
        </sectionGroup>

    4/ Now create a simple web page with a textbox (txt1) and a button (btn1) in an UpdatePanel, plus a History control (History1). We will fill in the textbox and post the form by clicking the button a few times, then verify that the browser history is retained. Remember: the button and textbox must be inside the UpdatePanel and the History control outside the UpdatePanel.

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="History.aspx.cs" Inherits="History" %>
        <%@ Register Assembly="Microsoft.Web.Preview" Namespace="Microsoft.Web.Preview.UI.Controls" TagPrefix="cc1" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true"></asp:ScriptManager>
                <div>
                    <cc1:History ID="History1" runat="server" OnNavigate="History1_Navigate">
                    </cc1:History>
                    <asp:UpdatePanel ID="up1" runat="server">
                        <ContentTemplate>
                            <asp:TextBox ID="txt1" runat="server"></asp:TextBox><br />
                            <asp:Button ID="btn1" runat="server" Text="Test" OnClick="btn1_Click" />
                        </ContentTemplate>
                        <Triggers>
                            <asp:AsyncPostBackTrigger ControlID="History1" />
                        </Triggers>
                    </asp:UpdatePanel>
                </div>
            </form>
        </body>
        </html>

    5/ The code below adds the textbox value to the history every time we post back via the btn1 click:

        protected void btn1_Click(object sender, EventArgs e)
        {
            History1.AddHistoryPoint("txtState", txt1.Text);
        }

    6/ And finally the Navigate event of the History control:

        protected void History1_Navigate(object sender, Microsoft.Web.Preview.UI.Controls.HistoryEventArgs args)
        {
            string strState = string.Empty;
            if (args.State.ContainsKey("txtState"))
            {
                strState = (string)args.State["txtState"];
            }
            txt1.Text = strState;
        }

    Now all set to go :) Reference: http://www.dotnetglobe.com/2008/08/using-asp.html

    Read the article

  • htaccess rewrite and auth conflict

    - by Michael
    I have 2 directories, each with a .htaccess file: html/.htaccess - there is a rewrite in this file to send almost everything to url.php RewriteCond %{REQUEST_URI} !(exported/?|\.(php|gif|jpe?g|png|css|js|pdf|doc|xml|ico))$ RewriteRule (.*)$ /url.php [L] and html/exported/.htaccess AuthType Basic AuthName "exported" AuthUserFile "/home/siteuser/.htpasswd" require valid-user If I remove html/exported/.htaccess the rewriting works fine and the exported directory can be accessed. If I remove html/.htaccess the authentication works fine. However, when I have both .htaccess files, exported/ is being rewritten to /url.php. Any ideas how I can prevent it?
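    A sketch of one possible fix (an assumption to test, not a verified answer): the parent directory's rewrite rules appear to be applied to requests under exported/ because that directory has no mod_rewrite configuration of its own, so giving it one that simply passes every request through should keep /url.php out of the picture while the Basic auth still applies.

        # html/exported/.htaccess
        AuthType Basic
        AuthName "exported"
        AuthUserFile "/home/siteuser/.htpasswd"
        require valid-user

        # own rewrite config for this directory: match everything, rewrite nothing
        RewriteEngine on
        RewriteRule ^ - [L]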

    Read the article

  • ISPConfig 3 SSL automatic rewrite

    - by lol
    I was wondering how you could get apache2 to redirect http://server.com:8080 to https://server.com:8080. I have an ISPConfig 3 setup, and the http://server.com:8080 virtual host currently returns a 400 Bad Request error. I've tried adding RewriteEngine on RewriteCond %{HTTPS} !^on$ [NC] RewriteRule . https://%{HTTP_HOST}:8080%{REQUEST_URI} [L] to the ispconfig.vhost file (and reloading the conf) with no success. --edit!-- I've been playing around with it, and adding an 'always redirect to google' rule into the ispconfig vhost works once you've already started talking SSL to it. This means the non-SSL connections are getting 'bad request' errors before the vhost rules are even reached... but where...? --edit 2!-- Nope, the SSL is handled exclusively by the virtual host - if I turn off the SSL engine then the rewriting works perfectly (but obviously there is no SSL at https://) thanks!
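    For what it's worth, that 400 usually means a plain-HTTP request hit a port where Apache expects an SSL handshake, and the error is raised before any RewriteRule runs. A frequently suggested workaround (a sketch only, not verified against ISPConfig, and server.com is a placeholder) is to catch that 400 with an ErrorDocument pointing at the HTTPS URL, which Apache turns into a redirect:

        # inside the <VirtualHost *:8080> that has SSLEngine on
        ErrorDocument 400 https://server.com:8080/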

    Read the article

  • How to check for the existence of a response header in Nginx rules?

    - by Victor Welling
    Setting up the rewriting rules for the request proved to be quite easy in Nginx. For the response, not so much (at least, not for me). I want to strip the Content-Type header from the response if the Content-Length header of the response isn't set. I have the NginxHttpHeadersMoreModule installed, so that should allow me to remove the header, but I can't seem to find a way to check for the existence of the Content-Length header of the response using a rule in Nginx's configuration. Any suggestions on how to do this would be most appreciated!
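    A sketch of one possible approach that avoids per-header conditionals entirely (untested, and it assumes the responses come from a proxied upstream): map the upstream's Content-Length into a variable, hide the upstream Content-Type, and re-add it only when a Content-Length was present, relying on the fact that add_header emits nothing when its value is an empty string.

        map $upstream_http_content_length $resp_content_type {
            ""       "";                               # no Content-Length: emit no Content-Type either
            default  $upstream_http_content_type;     # otherwise pass the upstream value through
        }

        server {
            location / {
                proxy_pass http://backend;             # placeholder upstream name
                proxy_hide_header Content-Type;
                add_header Content-Type $resp_content_type;
            }
        }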

    Read the article

  • Caching without file extensions

    - by Sigurs
    I'm trying to use Varnish to show non-logged-in users a cached version of my website. I'm able to detect perfectly whether the user is logged in or out, but I can't cache pages without extensions. There is no file extension because nginx is rewriting the URL to a PHP script (so caching .php does not work). For example I'd like Varnish to cache: example.com example.com/forum/ example.com/contact/ I have tried if (req.request == "GET" && req.url ~ "^/") { return(lookup); } if (req.request == "GET" && req.url ~ "") { return(lookup); } if (req.request == "GET" && req.url ~ "/") { return(lookup); } but nothing seems to work... any help?
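    All three of those regexes match every URL, so the pattern is unlikely to be what blocks caching; by default Varnish refuses to cache requests that carry a Cookie header and responses that set one. A minimal VCL sketch, assuming Varnish 2.x/3.x syntax and a "logged_in" cookie as a placeholder for however login state is actually detected:

        sub vcl_recv {
            # anonymous visitors: drop cookies so the request becomes cacheable
            if (req.request == "GET" && req.http.Cookie !~ "logged_in") {
                unset req.http.Cookie;
                return (lookup);
            }
            return (pass);   # logged-in users always go to the backend
        }

        sub vcl_fetch {
            # anonymous requests had their cookies stripped above; also drop any
            # Set-Cookie from PHP so the object can actually be stored
            if (!req.http.Cookie) {
                unset beresp.http.Set-Cookie;
            }
        }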

    Read the article

  • passing URL vars to a wordpress page and pretty-fying it with .htaccess

    - by Jonah
    I have WordPress installed in a directory called welcome, and /welcome/samples is a "page" (created via WordPress). It has a PHP template waiting for $_REQUEST['category']. When a user goes to /welcome/samples/fun, I want "fun" passed to the samples PHP template in the form welcome/samples/?category=fun, but I want the URL to remain in its original form - it's currently being replaced with the ugly "?cat=..." version. # Outside the wordpress block so it won't be overwritten Options +FollowSymlinks RewriteEngine On RewriteRule ^samples/([^/]+)$ /welcome/samples?cat=$1 [R,L] # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /welcome/ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /welcome/index.php [L] </IfModule> # END WordPress I tried rewriting with simply samples?cat=$1 but I was getting a 404. I tried putting the RewriteBase /welcome/ in the first block. Without the [R] flag it doesn't work at all. I keep trying different permutations... and failing :( Perhaps I'm missing some basic concepts... thanks if you take the time to even read through this :) ciao
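    A sketch of one way this is often handled (assumptions: the page slug really is "samples" and the template reads $_REQUEST['category']): rewrite internally, without the R flag, straight to WordPress's index.php and identify the page via the pagename query variable, so the browser URL never changes. Registering the same rule through WordPress's add_rewrite_rule() API would be the more WordPress-native alternative.

        # placed above the "# BEGIN WordPress" block in /welcome/.htaccess
        RewriteEngine On
        RewriteBase /welcome/
        RewriteRule ^samples/([^/]+)/?$ index.php?pagename=samples&category=$1 [L,QSA]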

    Read the article

  • Custom 403 Error page not showing

    - by Rahul Sekhar
    I want to restrict access to certain folders (includes, xml and logs for example) and so I've given them 700 permissions, and all files within them 600 permissions. Firstly, is this the right approach to restrict access? I have a .htaccess file in my root that handles rewriting and error documents. There are two pages in the root - 403.php and 404.php - for 403 and 404 errors. And I have these rules added to my .htaccess file: ErrorDocument 404 /404.php ErrorDocument 403 /403.php Now, the 404 page works just fine. The 403 page does not show when I try to access the 'includes' folder - I get the standard apache 403 error page instead, saying 'Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.' However, when I try going to the .htaccess file (in the web root) in my browser, I get my custom 403 error page. Why is this happening?
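    One hedged alternative to consider: if the goal is only to stop direct HTTP access, a per-directory deny keeps the files readable by Apache (and by PHP includes running as the Apache user) while refusing to serve them over the web, whereas 700/600 filesystem permissions can lock Apache out entirely. Whether it also changes the ErrorDocument behaviour is worth testing separately, since the "Additionally, a 404 Not Found error..." message specifically means the subrequest for /403.php itself failed. A sketch in Apache 2.2 syntax:

        # includes/.htaccess - block all HTTP requests to this folder while
        # scripts on the server can still read/include the files normally
        Order allow,deny
        Deny from all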

    Read the article

  • Packages not showing up in created APT repository

    - by David
    I created an APT repository using dpkg-scanpackages, and it seemed to go well. When I did an apt-get update on another server, the Packages.gz file was retrieved and all seemed well - until I went to search for the packages contained in that repository (all packages are created locally). Several recommendations suggested reprepro; I tried that. Same result - except I had to rebuild the packages with the Priority and Section lines in the control file (nothing documents this anywhere). The reprepro utility also generates a complicated directory structure which required rewriting the repository entry on the requesting server. I then found that the arch directory referenced i386 and not amd64 (which is what the requesting server asked for). Is it possible that the AMD64 system isn't seeing packages compiled for i386? Searching the *Packages files in /var/lib/apt/lists shows that the only packages for i386 are those I added (the other files are for the server - Ubuntu 10.04.2 LTS). The server the packages were built on is Ubuntu 10.04.2 LTS i686; the requesting server is x86_64. I found some discussion at the Debian AMD64FAQ but it claims to be obsolete. It makes mention of an extended syntax for repository listings for APT, and a command dpkg-subarchitecture - neither of which work on the local AMD64 server. Do I have to build two different sets of packages?
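    A hedged note and sketch: on Ubuntu 10.04 (which predates multiarch) an amd64 apt will only consider packages whose Architecture field is amd64 or all, so i386-only entries in the Packages file are effectively invisible to it. Checking what the existing .deb declares is a quick way to confirm (the file name below is a placeholder):

        # what architecture does the built package claim?
        dpkg-deb --field mypackage_1.0-1_i386.deb Architecture

        # packages that contain no compiled code can be declared
        # architecture-independent in debian/control, so both i386 and amd64
        # clients will pick them up:
        #     Architecture: all
        # packages with compiled binaries do need to be built on (or for) amd64 as well.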

    Read the article

  • Inheriting projects - General Rules? [closed]

    - by pspahn
    Possible Duplicate: When is a BIG Rewrite the answer? Software rewriting alternatives Are there any actual case studies on rewrites of software success/failure rates? When should you rewrite? We're not a software company. Is a complete re-write still a bad idea? Have you ever been involved in a BIG Rewrite? This is an area of discussion I have long been curious about, but overall I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • JQuery / JSON + .Net Service Layer - to WCF or Not to WCF?

    - by hanzolo
    I recently had a discussion with a colleague of mine about the pros / cons of WCF. He mentioned how much code is generated to support WCF, and also the overhead required. It was mentioned that a simple jQuery / Ajax post to a .aspx page (or a handler, for that matter) that returns JSON would work more efficiently and take much less code to implement. I am also aware of the new WCF Web API and feel that technology may solve the "bloated"-ness involved in obtaining a proxy etc. by just outputting JSON. So when developing a relational DB (MSSQL) storage model, with a fairly complex Business Layer (C#) and Data Access Layer (EntityFW), what's a good technology for creating a "service layer" which will spit out View Models represented in JSON, with a CQRS (Command Query...) approach in mind? The app would use the service layer to support its required UI, as well as provide an available subset of services (outputting JSON data) for service subscribers. In other words, an admin panel to support the admin UI, and service endpoints that return JSON to access the configurations made from the administration UI. What are some potential technologies to use as the transport / communication layer? I'd like to use a pure RESTful approach, but am not against doing some URL rewriting with IIS. Obviously some of the available technologies are: WCF WCF Web API (should this even be separate?) Straight request / response (query string to .aspx / handler) Would using MVC .Net solve this entire problem? Maybe their single page app approach? Any suggestions / feedback from developing this type of application? Thanks,

    Read the article

  • Apache mod_proxy_html Substitute: how to re-use part of regex match? (regex variables?)

    - by goober
    Hi all, Have a unique URL-rewriting situation in Apache. I need to be able to take a URL that starts with "\u002f[X]" or '\u002f[X]" Where X is the rest of some URL, and substitute the text "\u002fmeis2\u002f[X] I'm not sure how the Regex works in Apache -- I think it's the same as Perl 5? -- but even then I'm a little unsure how this would be done. My hunch is that it has to do with Regex grouping and then using $1 to pull the variable out, but I'm entirely unfamiliar with this process in Apache. Hoping someone can help -- thanks!
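    A sketch of how the capture-and-reuse part usually looks with mod_substitute (assumptions: the literal \u002f text appears in proxied HTML response bodies, the /app location and backend hostname are placeholders, and the exact backslash escaping may need adjusting since the backslash is special both to the Apache config parser and to the regex engine):

        <Location /app>
            ProxyPass        http://backend.example.com/app
            ProxyPassReverse http://backend.example.com/app
            AddOutputFilterByType SUBSTITUTE text/html

            # group everything that follows the escaped slash and re-emit it with
            # the meis2 prefix; $1 refers to the first capture group in the pattern
            Substitute "s|\\u002f([^\"']+)|\\u002fmeis2\\u002f$1|"
        </Location>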

    Read the article

  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this: procedure TForm1.Button1Click(Sender: TObject); var RegEx: TRegEx; Match: TMatch; begin RegEx := TRegex.Create('\w+'); Match := RegEx.Match('One two three four'); while Match.Success do begin Memo1.Lines.Add(Match.Value); Match := Match.NextMatch; end end; Or you could save yourself two lines of code by using the static TRegEx.Match call: procedure TForm1.Button2Click(Sender: TObject); var Match: TMatch; begin Match := TRegEx.Match('One two three four', '\w+'); while Match.Success do begin Memo1.Lines.Add(Match.Value); Match := Match.NextMatch; end end; Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn’t work. Depending on your exact code, you may get fewer matches or blank matches than you should, or your application may crash with an access violation. The RegularExpressions unit defines TRegEx and TMatch as records. That way you don’t have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you’ll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management. The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the instance of the interface that is responsible for destroying TPerlRegEx. This is not a problem in our first code sample. TRegEx stays in scope until we’re done with TMatch. The interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. This local variable goes out of scope when TRegEx.Match returns. Thus the reference count on the interface reaches zero and TPerlRegEx is destroyed when TRegEx.Match returns. When we call MatchAgain the TMatch record tries to use a TPerlRegEx instance that has already been destroyed. To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit: Declare FNotifier: IInterface; in the private section. Add the parameter ANotifier: IInterface; to the Create constructor. Assign FNotifier := ANotifier; in the constructor’s implementation. You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor. Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier parameter, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won’t be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .Net webservice, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshall the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service - the work performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this: public function createUserReturnsValidUserId() { // we're assuming a global connection to the service $newUserId = $client->CreateUser("user1"); assertNotNull($newUserId); assertNotNull($client->FindUser($newUserId)); $client->DeleteUser($newUserId); } So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
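    A small sketch (assuming PHPUnit 3.x and a hypothetical getClient() helper that returns the SOAP proxy): moving the cleanup into tearDown() means the delete runs even when an assertion fails mid-test, which removes one of the ways a failing test can poison the ones that follow.

        <?php
        class UserServiceIntegrationTest extends PHPUnit_Framework_TestCase
        {
            private $createdUserIds = array();

            // hypothetical helper; in practice this might build the SOAP client
            // from a test-only endpoint configured in phpunit.xml
            private function getClient()
            {
                return $GLOBALS['client'];
            }

            protected function tearDown()
            {
                // runs whether the test passed or failed, so later tests see a clean state
                foreach ($this->createdUserIds as $id) {
                    $this->getClient()->DeleteUser($id);
                }
                $this->createdUserIds = array();
            }

            public function testCreateUserReturnsValidUserId()
            {
                $client = $this->getClient();
                $newUserId = $client->CreateUser('user1');
                $this->createdUserIds[] = $newUserId;

                $this->assertNotNull($newUserId);
                $this->assertNotNull($client->FindUser($newUserId));
            }
        }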

    Read the article

  • Suddenly internet is not accessible

    - by user189708
    I am going crazy here. One day everything was working fine; I turned the PC off and went to sleep. The next day I turned the PC on and could not access the internet (from any browser). The situation is: I cannot open any webpage from a browser (tried Firefox and Epiphany) and cannot receive emails in Thunderbird. BUT if I run Firefox from the console as sudo, I can use it as usual. I can access Skype and pretty much any other network stuff (like installing software with apt-get etc.); also, if I use the Astrill VPN software I can access webpages even when running without sudo. I haven't installed any software or anything like that for several days, so I have not a clue what could cause this. Just by the way, the other Windows PC in our home has no issue. Here is what I have tried to fix this: restarting my PC, router and modem multiple times; changing permissions on my Firefox profile; completely re-installing Firefox and starting with a blank profile, thus no addons; changing /etc/resolv.conf to point to my router's IP (it was 127.0.1.1); changing my hostname (from tomino-NB to tominoNB). I think I might try even more stuff. None of it works. Can someone please try to help me? Thank you. UPDATE 1: I have tried removing resolv.conf - didn't help. Also the "ping" and "dig" commands cannot resolve hosts. UPDATE 2: I have tried editing the nameservers in resolv.conf but still no effect. I can ping the router as well as an outside IP, so it is definitely just some DNS issue. Is it possible that something is rewriting the path to resolv.conf and using a different file? UPDATE 3: I have just restarted the PC and everything works now... resolv.conf went back to nameserver 127.0.1.1. I have no clue why it works again...

    Read the article

  • faster ( squid + apache httpd + apache tomcat )

    - by letronje
    We have a production setup where we have Squid in the front (caching images, js, css, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few PHP pages), and Apache Tomcat 5.5 at the end serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx/lighttpd will help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate) and serving some low-traffic PHP pages. What would be an ideal replacement for httpd given that we need these features? Is there a way to replace (squid + apache) with a single entity that does caching well (like Squid) for static stuff, rewrites URLs, compresses responses and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and am wondering if it can help.
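    As a rough illustration only (hostnames, ports and paths below are placeholders, and it assumes Tomcat can also be reached over plain HTTP on 8080 rather than AJP): nginx can cover the caching, rewriting and compression roles in one process and proxy dynamic requests straight to Tomcat, collapsing the three-hop chain to two.

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=staticcache:50m inactive=1d;

        server {
            listen 80;
            server_name www.example.com;

            gzip on;                                    # replaces mod_deflate
            gzip_types text/css application/javascript application/json;

            location ~* \.(gif|jpe?g|png|css|js|ico)$ { # replaces Squid for static assets
                proxy_pass http://127.0.0.1:8080;
                proxy_cache staticcache;
                proxy_cache_valid 200 1d;
            }

            location / {
                rewrite ^/old/(.*)$ /app/$1 last;       # clean-URL rewrites formerly in mod_rewrite
                proxy_pass http://127.0.0.1:8080;       # dynamic requests go straight to Tomcat
            }
        }

    The few PHP pages would still need something like FastCGI/php-fpm behind nginx, which is the main piece Apache currently provides for free in this setup.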

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.net). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working because they fetch the rewritten URL. Thanks for any help!
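    One hedged option, assuming the internal calls are server-to-server HTTP requests that originate from the web server itself: scope an IP restriction to just that file so only loopback requests are allowed. This relies on the "IP and Domain Restrictions" role feature being installed; the sketch below is a web.config fragment to adapt, not a verified drop-in.

        <location path="apis/httpapi.asp">
          <system.webServer>
            <security>
              <ipSecurity allowUnlisted="false">
                <!-- only the server itself may call this endpoint -->
                <add ipAddress="127.0.0.1" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </location>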

    Read the article

  • Infrastructure to effectively set up experiements and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product: a place on the web where one can ask lawyers questions at a fraction of their normal cost. An early stage front page can be found here. I got inspired by this video, which is recommended by Jeff Atwood and which talks about getting feedback faster - that is the reason for this question. The problem: needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (like moving an image slightly, rewriting a sentence etc.). We then want to present the modified page to a random subset of the users. After that we will compare the conversion rates of the experiment with another version. I could very well imagine that we want to run 10-100 experiments simultaneously, and it would be nice to have features where experiments that obviously have worse results are ended ahead of schedule. My question: does infrastructure to support the whole process exist? A short description of our infrastructure... We use EC2 and PHP and have a script to automatically start up new instances with all needed software. Still, starting up a new server for every experiment seems like a bit of overkill, so I am wondering what other options exist. Btw. if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
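    What the described setup needs at minimum is deterministic bucketing plus per-experiment logging, and that does not require a server per experiment. A small PHP sketch of the assignment half (the experiment name, variant names and cookie lifetime are illustrative assumptions):

        <?php
        // assign a visitor to a variant of an experiment and keep them in that
        // bucket for subsequent requests via a cookie
        function assignVariant($experiment, array $variants)
        {
            $cookie = 'exp_' . $experiment;
            if (isset($_COOKIE[$cookie]) && in_array($_COOKIE[$cookie], $variants, true)) {
                return $_COOKIE[$cookie];   // returning visitor: stay in the same bucket
            }
            $variant = $variants[mt_rand(0, count($variants) - 1)];
            setcookie($cookie, $variant, time() + 60 * 60 * 24 * 30, '/');
            return $variant;
        }

        // usage: pick the headline to render, and log the variant name together
        // with the signup/conversion event so rates can be compared per variant
        $variant = assignVariant('headline_copy', array('control', 'shorter_headline'));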

    Read the article

  • sSMTP Configuration Question

    - by SevenCentral
    I've installed sSMTP on Ubuntu 10.04 via: sudo apt-get install ssmtp My configuration file is: # # Config file for sSMTP sendmail # # The person who gets all mail for userids < 1000 # Make this empty to disable rewriting. [email protected] # The place where the mail goes. The actual machine name is required no # MX records are consulted. Commonly mailhosts are named mail.domain.com mailhub=smtp.gmail.com:587 # Where will the mail seem to come from? #rewriteDomain= # The full hostname hostname=somedomain.com # Are users allowed to set their own From: address? # YES - Allow the user to specify their own From: address # NO - Use the system generated From: address #FromLineOverride=YES [email protected] authpass=**** usestarttls=yes Am I transmitting my credentials in clear text? Is calling ssmtp a secure operation? Thanks.

    Read the article

  • Order of mod_rewrite rules in .htaccess not being followed

    - by user39461
    We're trying to enforce HTTPS on certain URLs and HTTP on others. We are also rewriting URLs so all requests go through our index.php. Here is our .htaccess file. # enable mod_rewrite RewriteEngine on # define the base url for accessing this folder RewriteBase / # Enforce http and https for certain pages RewriteCond %{HTTPS} on RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC] RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] RewriteCond %{HTTPS} off RewriteCond %{REQUEST_URI} ^/(en|fr)/(customer|checkout)(.*)$ [NC] RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] # rewrite all requests for files and folders that do not exist RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?query=$1 [L,QSA] If we don't include the last rule (RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]), the HTTPS and HTTP rules work perfectly. However, when we add the last three lines, our other rules stop working properly. For example, if we try to go to https:// www.domain.com/en/customer/login, it redirects to http:// www.domain.com/index.php?query=en/customer/login. It's as if the last rule is being applied before the redirection is done, even though the [L] flag should make the redirection the last rule to apply.
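    A plausible explanation and a sketch of one fix (treat it as a hypothesis to test, not a verified answer): in a per-directory (.htaccess) context the whole rule set runs again after an internal rewrite, so the second pass sees /index.php?query=en/customer/login, which no longer matches the customer/checkout condition, and the HTTP-enforcement rule then redirects it. Keying the protocol rules to %{THE_REQUEST} (the original request line, which internal rewrites never change) avoids that:

        RewriteEngine on
        RewriteBase /

        # force HTTP everywhere except customer/checkout, matching the original request only
        RewriteCond %{HTTPS} on
        RewriteCond %{THE_REQUEST} !\s/(en|fr)/(customer|checkout) [NC]
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # force HTTPS for customer/checkout
        RewriteCond %{HTTPS} off
        RewriteCond %{THE_REQUEST} \s/(en|fr)/(customer|checkout) [NC]
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # front controller
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]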

    Read the article

  • Can I save an Apache environment variable value with SetEnv?

    - by Nicholas Tolley Cottrell
    I am running Apache 2.2 with Tomcat 6 and have several layers of URL rewriting going on, in both Apache (with RewriteRule) and in Tomcat. I want to pass through the original REQUEST_URI that Apache sees so that I can log it properly for "page not found" errors etc. In httpd.conf I have a line: SetEnv ORIG_URL %{REQUEST_URI} and in mod_jk.conf I have: JkEnvVar ORIG_URL which I thought should make the value available via request.getAttribute("ORIG_URL") in servlets. However, all that I see is "%{REQUEST_URI}", so I assume that SetEnv doesn't interpret the %{...} syntax. What is the right way to get the URL the user requested in Tomcat?
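    That assumption is correct: SetEnv assigns a literal string and never expands %{...}. A sketch of two alternatives that do evaluate at request time (untested in this exact mod_jk setup; the example rewrite path is a placeholder):

        # option 1: mod_setenvif can capture from the request URI with a regex
        SetEnvIf Request_URI "^(.*)$" ORIG_URL=$1

        # option 2: set the variable from an existing RewriteRule with the E= flag
        # RewriteRule ^/old/(.*)$ /app/$1 [E=ORIG_URL:%{REQUEST_URI},PT]

        # then forward it to Tomcat as before
        JkEnvVar ORIG_URL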

    Read the article

  • How to Create SharePoint List and Insert List Item programmatically from a Windows Forms Application.

    - by Michael M. Bangoy
    In this post I'm going to demonstrate how to create a SharePoint List and insert items into it from a Windows Forms application.
    1. Open Visual Studio and create a new project. In the project templates, select Windows Forms Application under C#.
    2. In order to communicate with SharePoint from a Windows Forms application we need to add the 2 SharePoint Client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.
    3. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. (Your solution should look like the one below.)
    4. Open Form1 in design view and, from the Toolbox, add a button to the form surface. Your form should look like the one below.
    5. Double click the button to open the code view. Add a using statement to reference the SharePoint Client Library, then create a method for creating the List. Your code should look like the code below.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Security;
        using System.Windows.Forms;
        using SP = Microsoft.SharePoint.Client;

        namespace ClientObjectModel
        {
            public partial class Form1 : Form
            {
                // url of the Sharepoint site
                const string _context = "urlofthesharepointsite";

                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                }

                private void cmdcreate_Click(object sender, EventArgs e)
                {
                    try
                    {
                        // declare the ClientContext object
                        SP.ClientContext _clientcontext = new SP.ClientContext(_context);
                        SP.Web _site = _clientcontext.Web;

                        // declare a ListCreationInformation
                        SP.ListCreationInformation _listcreationinfo = new SP.ListCreationInformation();

                        // set the Title and the Template of the List to be created
                        _listcreationinfo.Title = "NewListFromCOM";
                        _listcreationinfo.TemplateType = (int)SP.ListTemplateType.GenericList;

                        // call the Add method with the ListCreationInformation
                        SP.List _list = _site.Lists.Add(_listcreationinfo);

                        // add a Description field to the List
                        SP.Field _Description = _list.Fields.AddFieldAsXml(@"
                            <Field Type='Text'
                                DisplayName='Description'>
                            </Field>", true, SP.AddFieldOptions.AddToDefaultContentType);

                        // declare the ListItemCreationInformation for creating a List Item
                        SP.ListItemCreationInformation _itemcreationinfo = new SP.ListItemCreationInformation();

                        // call the AddItem method of the list to insert a new List Item
                        SP.ListItem _item = _list.AddItem(_itemcreationinfo);
                        _item["Title"] = "New Item from Client Object Model";
                        _item["Description"] = "This item was added by a Windows Forms Application";

                        // call the Update method
                        _item.Update();

                        // execute the query of the ClientContext
                        _clientcontext.ExecuteQuery();

                        // dispose the ClientContext
                        _clientcontext.Dispose();

                        MessageBox.Show("List Creation Successful");
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show("Error creating list" + ex.ToString());
                    }
                }
            }
        }

    6. Hit F5 to run the application. A message will be displayed on the screen if the operation is successful, and also if it fails.
    7. To verify that our Windows Forms application has really created the List and inserted an item into it, let's open our SharePoint site. Once the site is open, click Site Actions, then View All Site Content.
    8. Click the List to open it and check that an item was inserted. That's it. Hope this helps.

    Read the article

  • How to display Sharepoint Data in a Windows Forms Application

    - by Michael M. Bangoy
    In this post I'm going to demonstrate how to retrieve SharePoint data and display it in a Windows Forms application.
    1. Open Visual Studio 2010 and create a new project.
    2. In the project templates, select Windows Forms Application.
    3. In order to communicate with SharePoint from a Windows Forms application we need to add the 2 SharePoint Client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.
    4. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. (Your solution should look like the one below.)
    5. Open Form1 in design view and, from the Toolbox, add a Button, a TextBox, a Label and a DataGridView to the form.
    6. Next, double click the Load button; this will open the code view of the form. Add a using statement to reference the SharePoint Client Library, then create two methods, one to load the site title and one to load the lists. See below:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Security;
        using System.Windows.Forms;
        using SP = Microsoft.SharePoint.Client;

        namespace ClientObjectModel
        {
            public partial class Form1 : Form
            {
                // url of the Sharepoint site
                const string _context = "theurlofthesharepointsite";

                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                }

                private void getsitetitle()
                {
                    SP.ClientContext context = new SP.ClientContext(_context);
                    SP.Web _site = context.Web;
                    context.Load(_site);
                    context.ExecuteQuery();
                    txttitle.Text = _site.Title;
                    context.Dispose();
                }

                private void loadlist()
                {
                    using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                    {
                        SP.Web _web = _clientcontext.Web;
                        SP.ListCollection _lists = _clientcontext.Web.Lists;
                        _clientcontext.Load(_lists);
                        _clientcontext.ExecuteQuery();

                        DataTable dt = new DataTable();
                        DataColumn column;
                        DataRow row;

                        column = new DataColumn();
                        column.DataType = Type.GetType("System.String");
                        column.ColumnName = "List Title";
                        dt.Columns.Add(column);

                        foreach (SP.List listitem in _lists)
                        {
                            row = dt.NewRow();
                            row["List Title"] = listitem.Title;
                            dt.Rows.Add(row);
                        }

                        dataGridView1.DataSource = dt;
                    }
                }

                private void cmdload_Click(object sender, EventArgs e)
                {
                    getsitetitle();
                    loadlist();
                }
            }
        }

    7. That's it. Hit F5 to run the application, then click the Load button. Your screen should look like the one below. Hope this helps.

    Read the article

  • Change Envelope From to match From header in Postfix

    - by lid
    I am using Postfix as a gateway for my domain and need it to change or rewrite the Envelope From address to match the From header. For example, the From: header is "[email protected]" and the Envelope From is "[email protected]". I want Postfix to make the Envelope From "[email protected]" before relaying it on. I took a look at the Postfix Address Rewriting document but couldn't find anything that matched my use case. (In case you're curious why I need to do this: Gmail uses the same Envelope From when sending from a particular account, no matter which From: address you choose to use. I would prefer not to disclose the account being used to send the email. Also, it messes with SPF/DMARC domain alignment - see 4.2.2 of the DMARC draft spec.)
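    Postfix has no single knob that copies the From: header into the envelope sender, but when the unwanted envelope address is a fixed, known account (as in the Gmail case described) a sender_canonical map restricted to the envelope sender achieves the same effect. A sketch with placeholder addresses, assuming Postfix 2.2 or later:

        # /etc/postfix/main.cf
        sender_canonical_classes = envelope_sender
        sender_canonical_maps    = hash:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical
        # (run "postmap /etc/postfix/sender_canonical" after editing)
        real-account@example.com    public-address@example.com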

    Read the article

  • Mod_rewrite pretty url when domain/foo is a directory

    - by ModRewriter
    Starting with something as simple as: RewriteRule ^(.*)$ index.php?page=$1 What if I also want the following to work: RewriteRule ^/foo$ /index.php?page=foo #/foo IS a directory This seems to work ONLY if the R flag is set, but then the full non-pretty URL is written. Thus it seems I can REDIRECT existing directories, but not rewrite them... Maybe with an .htaccess inside the directory itself? Or some PHP magic in /foo/index.php like header(/index.php?page=foo)? Will it work? Will it be HTTP standard/search engine optimized? Please help! PS: The oddest idea occurred to me: redirecting /foo to /not-a-dir, and then rewriting /not-a-dir to /index.php?p=foo should theoretically work... But... Come on... Really?!?
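    One detail worth ruling out first (a sketch under the assumption these rules live in a .htaccess file rather than the main server config): in a per-directory context mod_rewrite strips the leading slash before matching, so a pattern written as ^/foo$ never matches at all, and real directories also pick up a trailing slash from mod_dir. Something like this may already be enough:

        RewriteEngine On

        # don't loop once the request has been rewritten to the front controller
        RewriteCond %{REQUEST_URI} !^/index\.php
        # no leading slash in the pattern, and tolerate the trailing slash mod_dir adds
        RewriteRule ^foo/?$ index.php?page=foo [L,QSA]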

    Read the article

< Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >