Search Results

Search found 6362 results on 255 pages for 'django urls'.


  • Yum through http proxy

    - by eodchop
    I have several Fedora 13 servers that have to connect through an http proxy for yum updates. All port 80 traffic has to be routed through this proxy. I have set up the proxy server in the network settings GUI, and I can browse the internet just fine. I have also set up my proxy information in /etc/yum.conf as follows:

        proxy=http:proxy.largecorp.corp/accelerated_pac_base.pac
        proxy_user=user
        proxy_password=password

    I then added export HTTP_PROXY="http:proxy.largecorp.corp/accelerated_pac_base.pac" to /etc/bashrc and sourced the file. When I run yum update:

        Loaded plugins: presto, refresh-packagekit
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: fedora. Please verify its path and try again.

    All of the repo urls are the defaults, as this is a fresh install.
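    A note on the config above: that address points to a proxy auto-config (.pac) file, which yum cannot evaluate; yum's proxy option needs a plain http://host:port URL, and the option names are proxy_username/proxy_password rather than proxy_user. A possible /etc/yum.conf, with the port a placeholder you would need to confirm with your network team:

        proxy=http://proxy.largecorp.corp:8080
        proxy_username=user
        proxy_password=password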

    Read the article

  • Is there any way to automatically prevent running out of memory?

    - by NoahY
    I am often running out of memory on my VPS ubuntu server. I wish there was a way to simply restart apache2 when it starts running out of memory, as that seems to solve the problem. Or am I just too lazy to fix the problem? I do have limited memory on the server... Okay, more information: I'm running apache2 prefork, and here are my memory settings (I've been tweaking them...):

        StartServers          3
        MinSpareServers       1
        MaxSpareServers       5
        MaxClients            150
        MaxRequestsPerChild   1000

    The VPS has 1 GB of ram, running ubuntu 11.04 32-bit. As for scripts, I have a wordpress network with 5 blogs, an install of AskBot (a python/django stackexchange clone), and an install of MediaWiki that isn't really used. There is also a homebrewed mp3 script that accesses the getid3 library to display information on lists of podcasts, and it seems to be throwing some php errors; not sure if that's the culprit...
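    A likely factor: with 1 GB of RAM, MaxClients 150 lets prefork spawn far more PHP-laden children than the box can hold (even at a modest 30 MB per child, 150 children is roughly 4.5 GB). Lowering MaxClients is the real fix; as a stopgap for the "just restart it" wish, a watchdog along these lines could run from cron every minute. This is a minimal sketch, assuming the psutil package is installed, root privileges, and an arbitrary 90% threshold:

        #!/usr/bin/env python
        # restart_apache_if_low_mem.py - hypothetical watchdog sketch
        import subprocess
        import psutil

        THRESHOLD = 90.0  # percent of physical memory in use before restarting

        if psutil.virtual_memory().percent > THRESHOLD:
            # "graceful" lets in-flight requests finish before workers recycle
            subprocess.call(["apache2ctl", "graceful"])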

    Read the article

  • freebsd dev server on virtualbox over windows

    - by g_kaya
    I need a unixy environment for development purposes. I hate doing things on windows but it is more stable for daily use and I don't have a mac, so I'm having to use windows (7). I want to run freebsd in a virtual machine, configure it to be the localhost server, be able to connect using ssh (within my home network) and be able to install vbox guest additions. If guest additions aren't the best, I can use solaris or linux flavours. I need no gui. I don't know anything about network stuff, so I need a detailed explanation from wise people here, or a nice doc to read. Edit: To be more specific as requested, I use the following on unices:

        - django 1.4
        - apache
        - python (2.7)
        - emacs
        - mysql
        - probably node.js
        - bash scripting

    I use windows to be able to do daily things easily, like connecting to my tablet, browsing and learning java. And I don't want to use linux as my desktop os, because it gets broken a lot, it's annoying to deal with wlan problems and some more.
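    For the networking piece, if the VM keeps VirtualBox's default NAT adapter, one common approach is port forwarding, so ssh and a dev server inside the guest are reachable from Windows. A sketch, where "FreeBSD-dev" and the host ports are placeholders:

        VBoxManage modifyvm "FreeBSD-dev" --natpf1 "ssh,tcp,,2222,,22"
        VBoxManage modifyvm "FreeBSD-dev" --natpf1 "http,tcp,,8080,,80"
        # then, from the Windows host:  ssh -p 2222 user@localhost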

    Read the article

  • Apache httpd + FreeTDS hangs until restarted

    - by Jordan Reiter
    Every so often, requests to a Linux server (say, linux.example.org) where the web app (Django) pulls in data from a SQL Server database via FreeTDS will hang. Requests on other servers pointing to the database still work, as do requests on linux.example.org that use local MySQL databases. Only the server plus FreeTDS appear to be affected. Restarting httpd makes the database connections work correctly again. What could cause this problem? Using:

        Centos 5.9
        freetds 0.91
        Apache httpd 2.2.3

    /etc/odbc.ini:

        [DSN]
        Description = SQL Server 2005
        Driver = FreeTDS
        ;Database = dbname
        Servername = SERVERNAME
        ;TDS_Version = 8.0

    /etc/freetds.conf:

        [SERVERNAME]
        driver = /usr/lib64/libtdsodbc.so
        host = db.example.org
        port = 1433
        tds version = 8.0
        client charset = UTF-8
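    One avenue worth checking, offered as an assumption rather than a confirmed diagnosis: FreeTDS connections have no timeouts by default, so a connection silently dropped by a firewall between Apache and SQL Server can leave a worker hung until httpd is restarted. freetds.conf accepts timeout settings that turn such a hang into a fast failure:

        [SERVERNAME]
        # existing settings as above, plus:
        timeout = 10          # abandon a query that hangs for 10s
        connect timeout = 10  # abandon connection attempts after 10s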

    Read the article

  • Firewall blocks outgoing email

    - by Martin Trigaux
    On my Debian server running a Django website, I have an error when I need to send an email. The error received is:

        Exception Type: gaierror
        Exception Value: [Errno -2] Name or service not known
        Exception Location: /usr/lib/python2.6/socket.py in create_connection, line 547

    You can see the full error log here. After testing, it seems it is my firewall that blocks the request. You can see my iptables file (/etc/init.d/firewall). I think the problem comes from the two commented lines that were supposed to accept all established connections. When I uncomment them, I have an error: iptables: No chain/target/match by that name. Thank you
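    Two hedged observations, since the actual rule file isn't shown: gaierror -2 is a DNS failure, so outbound lookups (port 53) are probably being dropped before the SMTP connection is even attempted; and "No chain/target/match by that name" usually means the state match module isn't available in the kernel, which is common on VPS containers. Assuming a default-deny OUTPUT chain, rules along these lines would cover both:

        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT   # DNS lookups
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT   # outgoing SMTP
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT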

    Read the article

  • Hyperlinks on images in PDF from Word 2010

    - by Bristol
    I've got a Word 2010 document that I'm trying to convert to a PDF with "Save As...", preserving hyperlinks. Something odd is going on: Hyperlinks on inline text, or images that are inline, work fine. Hyperlinks on images with layout "in front of" text don't work in the PDF, same for hyperlinked drawing shapes. What I'm trying to do is make a "clickmap" image by putting an image on the page and overlaying parts of it with transparent shapes that hyperlink to different URLs. This isn't working, and the transparency has nothing to do with it - hyperlinks in the PDF seem only to work on "in line with text" elements. Am I missing something, or is there a better way to do this?

    Read the article

  • Chrome Residual Redirect to Login Page

    - by Shadow503
    My college redirects people in the dorms to a login page when using an ethernet (or wifi) connection. I am now at home, and certain domains keep redirecting to this login page. I've tried running ipconfig /flushdns, and I flushed Chrome's local DNS cache as described here: How to clear/flush the DNS cache in Google Chrome?. Interestingly enough, while http://www.reddit.com redirects to the login page, http://www.reddit.com/r/funny works. Firefox works fine for both urls. Is there a way to fix this without deleting all of my cookies? Thanks!

    Read the article

  • Tweaking "Most visited sites" button in Firefox

    - by Mehper C. Palavuzlar
    It is sometimes practical to use the "Most Visited" sites button, which sits on the left-hand side of the Firefox window. When I click on it, I can see a maximum of 10 URLs. At that point, I have 2 questions: Is it possible to increase the maximum number of most visited sites (say, to 30)? Let's say example.com is one of the most visited domains. In the most visited list, there are other pages from this domain, like example.com/intro, example.com/info, example.com/help etc. So those sub-addresses are also in the list, but I just want to see at most 1 (or maybe 2) pages from the same domain in the list. Is it possible to arrange the list this way?

    Read the article

  • How do I get `set show-all-if-ambiguous on` in my .inputrc to play nice with the Python interpreter?

    - by ysim
    I noticed that after I added the set show-all-if-ambiguous on line to my ~/.inputrc, whenever I pressed tab to indent a block, it would show me the bash Display all ... possibilities? (y or n) prompt, and leave me unable to indent the actual code. Is there any way to keep that line in my .inputrc but still have the tab key work as expected in the Python interpreter? This is in my VirtualBox Ubuntu 12.04 VM, if it matters. EDIT: Curiously, I now have a different issue with the Python shell that comes with Django -- when I press tab, I get Python tab completion, but only with one Tab press. I've opened a separate question here for it.
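    A hedged fix using readline's per-application conditionals: the Python interpreter registers with readline under the application name "python", so a $if guard in ~/.inputrc should scope the setting away from it while leaving bash untouched:

        set show-all-if-ambiguous on
        $if Python
        set show-all-if-ambiguous off
        $endif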

    Read the article

  • Redirect URL to a Tomcat webapp

    - by phs
    I have a Tomcat server with two webapps, app1 and app2 (the app part is really the same). Each app has an independent group of users. I would like the groups to be able to access their respective app using group1.domain.com/app and group2.domain.com/app URLs, meaning that the numbers should be hidden from the URL displayed in browser. I suppose there needs to be a mechanism that would return the correct app based on the group# part of the URL. I have a vague understanding of URL rewrites. Is there a way to do this with only Tomcat? Or do I need Apache HTTP server? I would rather not use Apache if possible, but have no problem going that way if necessary.
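    If the app can be deployed once per group, Tomcat alone can do this with name-based virtual hosts in server.xml: each hostname gets its own appBase, and app1/app2 are deployed (or symlinked) as just "app" inside the matching directory, so no URL rewriting and no Apache is needed. A sketch, with the directory names as placeholders:

        <Host name="group1.domain.com" appBase="webapps-group1"
              unpackWARs="true" autoDeploy="true" />
        <Host name="group2.domain.com" appBase="webapps-group2"
              unpackWARs="true" autoDeploy="true" />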

    Read the article

  • Which free open source CPanel and WHM alternatives do you recommend/use?

    - by Keyframe
    I have been using webmin for some time now, however I miss the elegance and ease of WHM/CPanel combo I've had on shared hosting (and later dedicated hosting) platform. Looking around the web, all I have found that is somewhat at the level of WHM/CPanel was webmin - but WHM/CPanel it is not. Since I'm using this only for our projects, it doesn't matter in the end really. However, we do put our new customers on our servers too, so some sort of CPanel might be an easier thing for them to cope with (mostly going about Email accounts stuff and such). Currently my stack is LAMP (CentOS and Ubuntu Server - several machines, probably ditching CentOS soon in favor of Ubuntu). There is a prospect of Python/Django instead of PHP, but it might take awhile.

    Read the article

  • Make Chrome always open PDFs itself

    - by jdm
    Hi, I'm looking for a way to make Google Chrome always open PDFs with its internal viewer when I click a link, as opposed to downloading them to the default location. It works with most URLs, but some servers set a special header to force the file to be downloaded ("Content-Disposition: attachment;", e.g. http://www.uni-goettingen.de/en/46260.html). What I want is the opposite of this question: Stop PDFs from displaying inside Google Chrome, or what is asked for here, but applied to Chrome: How to ignore “Content-Disposition: attachment” in Firefox. Btw., I'm running Chrome 8.0.552.0 dev on Ubuntu 10.04.

    Read the article

  • Windows/global setting to allow only SSL when on public Wifi?

    - by hungry
    Rather than going through each of my apps and modifying settings, or tweaking individual browser settings (I use three different browsers) or just being careful not to type non-SSL URLs into the web address bar, is there a solution at the Windows level that will prevent anything from connecting to the web from my laptop unless it's using SSL? I also have mini apps installed like Gmail checker, etc that connect to the web of their own volition using my usernames, passwords and such, so it goes beyond just web browsers. The reason I'm asking is I want to work securely on the general Internet when on public Wifi (e.g. coffee shops) without a lot of hassle or having to remember everything that needs to be locked down. When I'm back home I want to go back to full access mode using any kind of protocol on the web. If a website doesn't support SSL when I'm out in public then I just don't surf it - that's not a worry to me.
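    The closest Windows-level lever is probably the built-in firewall: block outbound port 80 on the public profile, so anything that is not HTTPS on 443 simply fails to connect, and the rule switches off automatically on the home network's private profile. A sketch from an elevated prompt; the rule name is arbitrary, and apps speaking plain HTTP on nonstandard ports would need extra rules:

        netsh advfirewall firewall add rule name="NoPlainHTTP" dir=out action=block protocol=TCP remoteport=80 profile=public
        rem to remove it manually later:
        netsh advfirewall firewall delete rule name="NoPlainHTTP"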

    Read the article

  • Running a cronjob

    - by Ed01
    I've been puzzling over cronjobs for the last few hours. I've read documentation and examples. I understand the basics and concepts, but haven't gotten anything to work. So I would appreciate some help with this total noob dilemma. The ultimate goal is to schedule the execution of a django function every day. Before I get that far, I want to know that I can schedule any old script to run, first once, then on a regular basis. So I want to: 1) write a simple script (perhaps a bash script) that will allow me to determine that yes, it did indeed run successfully, or that it failed; 2) schedule this script to run at the top of the hour. I tried writing a bash script that simply outputs some text to the terminal:

        #!/bin/bash
        echo "The script ran"

    Then I dropped this into a .txt file:

        MAILTO = *****.******@gmail.com
        05 * * * * /home/vadmin/development/test.sh

    But nothing happened. I'm sure I did many things wrong. Where do I start to fix all of this?
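    A few pointers on the attempt above: cron never reads loose .txt files, so the entry has to be installed with crontab -e; MAILTO must have no spaces around the =; 05 * * * * fires at five past the hour, not on it; the script needs to be executable (chmod +x test.sh); and cron has no terminal, so echo output only shows up if it is mailed or redirected. Put together, a crontab along these lines should behave (the log path is just an example):

        MAILTO=*****.******@gmail.com
        # top of every hour; also append stdout/stderr to a log
        0 * * * * /home/vadmin/development/test.sh >> /tmp/test-cron.log 2>&1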

    Read the article

  • Why does my Google Chat get blocked (by corporate firewall) some days but not others? [closed]

    - by Peter
    I have noticed that some days I am able to chat while using Gmail, and other days I am not. It would make sense to me that I would either always be blocked, or never. But I can't figure out why it seems to change daily or weekly. Is Google constantly changing the URLs involved so that the censoring companies (they use Websense where I work) have to play catch-up? Or is there some other reason I'm missing? I am more interested in the technical reason it might be happening than in an actual workaround.

    Read the article

  • How do you host images using Windows Server so that they are accessible over the internet? [closed]

    - by nairware
    I was trying to figure out a way to host images (picture images, not disk images) such that they are accessible over the internet via URLs, in a way similar to a web service like Photobucket or ImageShack. I have a whole bunch of Windows Servers (Windows Server 2008 R2) available in the cloud. Instead of hosting images using Photobucket or ImageShack, I wanted to host these images directly on my own Windows cloud. This could be really complicated or really simple. I have no idea, as I know very little about IIS 6 (which is what I am using) or web servers. If this is too broad of a question (as there are probably multiple ways of implementing this), is there at least some guide or documentation of how someone else has set up image hosting? Perhaps a step-by-step guide of at least one way to do it?
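    For what it's worth, Windows Server 2008 R2 actually ships with IIS 7.5 rather than IIS 6, and the simple path is plain static file serving: point a virtual directory at a folder, and every file in it gets a URL. A sketch, with C:\images as a placeholder folder:

        %systemroot%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/images /physicalPath:"C:\images"
        rem C:\images\cat.jpg is then served at http://yourserver/images/cat.jpg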

    Read the article

  • Restarting nginx backends without losing requests

    - by Oli
    I'm sure it's been asked before in different words, but I run several Django sites via uwsgi (emperor mode) behind nginx. It's all a fairly standard configuration, but I find that if I restart the central uwsgi process, nginx just bombs out 502s rather than waiting for the socket to become available. I recognise that most of this is probably for a reason, but people seeing 502 errors really stings me. It's certainly not something I want a client to see. So... Can I beg nginx to wait/retry backends? Or is there anything (other than the obvious) I can do to minimise commercial damage from uwsgi restarts?
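    One hedged avenue on the uwsgi side: when a master process is running, a graceful reload recycles the workers while the listening socket stays bound, so nginx queues requests on the socket instead of hitting a dead backend. Under emperor mode the per-app equivalent is touching the vassal's config file (the paths below are placeholders):

        # graceful reload of one site; the socket stays open throughout
        touch /etc/uwsgi/vassals/mysite.ini
        # or, for a stand-alone master process:
        kill -HUP $(cat /tmp/mysite.pid)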

    Read the article

  • Fetching data (responsebody) with a HttpClient in an AsyncTask and returning the data outside the AsyncTask

    - by Peter Warbo
    Basically I'm wondering how I'm able to do what I've written in the topic. I've looked through many tutorials on AsyncTask but I can't get it to work. I have a little form (EditText) that will take what the user inputs there and turn it into a url query for the application to look up and then display the results. What I think would seem to work is something like this: in my main activity I have a string called responseBody. When the user clicks on the search button it will go to my search function, and from there call the grabURL method with the url, which will start the AsyncTask; when that process is finished, the onPostExecute method will use the function activity.this.setResponseBody(content). This is what my code looks like, simplified with the most important parts (I think):

        public class activity extends Activity {
            private String responseBody;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                initControls();
            }

            public void initControls() {
                fieldSearch = (EditText) findViewById(R.id.EditText01);
                buttonSearch = (Button) findViewById(R.id.Button01);
                buttonSearch.setOnClickListener(new Button.OnClickListener() {
                    public void onClick(View v) {
                        search();
                    }
                });
            }

            public void grabURL(String url) {
                new GrabURL().execute(url);
            }

            private class GrabURL extends AsyncTask<String, Void, String> {
                private final HttpClient client = new DefaultHttpClient();
                private String content;
                private boolean error = false;
                private ProgressDialog dialog = new ProgressDialog(activity.this);

                protected void onPreExecute() {
                    dialog.setMessage("Getting your data... Please wait...");
                    dialog.show();
                }

                protected String doInBackground(String... urls) {
                    try {
                        HttpGet httpget = new HttpGet(urls[0]);
                        ResponseHandler<String> responseHandler = new BasicResponseHandler();
                        content = client.execute(httpget, responseHandler);
                    } catch (ClientProtocolException e) {
                        error = true;
                        cancel(true);
                    } catch (IOException e) {
                        error = true;
                        cancel(true);
                    }
                    return content;
                }

                protected void onPostExecute(String content) {
                    dialog.dismiss();
                    if (error) {
                        Toast toast = Toast.makeText(activity.this, getString(R.string.offline), Toast.LENGTH_LONG);
                        toast.setGravity(Gravity.TOP, 0, 75);
                        toast.show();
                    } else {
                        activity.this.setResponseBody(content);
                    }
                }
            }

            public void search() {
                String query = fieldSearch.getText().toString();
                // this is just an example url, I have a "real" url in my application
                // but for privacy reasons I've replaced it
                String url = "http://example.com/example.php?query=" + query;
                grabURL(url); // the method that will start the AsyncTask
                // process the responseBody and display stuff on the ui-thread with the data
                // that I would like to get from the AsyncTask, but doesn't, obviously
                processData(responseBody);
            }
        }
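    A note on the flow above: grabURL() returns as soon as the task is kicked off, so the processData(responseBody) call in search() runs on the UI thread before doInBackground has fetched anything. The usual pattern is to trigger the follow-up work from the callback instead, along these lines:

        protected void onPostExecute(String content) {
            dialog.dismiss();
            if (!error) {
                activity.this.setResponseBody(content);
                processData(content); // the response actually exists by now
            }
        }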

    Read the article

  • Why is my code returning a Null Object Reference error when using WatiN?

    - by Fuzz Evans
    I keep getting a Null Object Reference error, but can't tell why. I have a CSV file that contains 100 urls. The file is read into an array called "lines".

        public partial class Form1 : System.Windows.Forms.Form
        {
            string[] lines;

            public Form1() ...

            private void ReadLinksIntoMemory()
            {
                // this reads the chosen csv file into our "lines" array
                // and splits on commas and new lines to create new objects within the array
                using (StreamReader sr = new StreamReader(@"C:\temp.csv"))
                {
                    // reads everything in our csv into 1 long line
                    string fileContents = sr.ReadToEnd();
                    // splits the 1 long line read in into multiple objects of the lines array
                    lines = fileContents.Split(new string[] { ",", Environment.NewLine },
                                               StringSplitOptions.RemoveEmptyEntries);
                    sr.Dispose();
                }
            }

    The next part is where I get the null object error. When I try to use WatiN to go to the first item in the lines array, it says I'm referencing a null object.

        private void GoToEditLinks()
        {
            for (int i = 0; i < lines.Length; i++)
            {
                // go to each link sequentially
                myIE.GoTo(lines[i].ToString());
                // sleep so we can make sure the page loads
                System.Threading.Thread.Sleep(5000);
            }
        }

    When I debug the code it says that the GoTo request calls lines, which is null. It seems like I need to declare the array, but don't I need to tell it an exact size to do that? Example: lines = new string[10]. I thought I could use lines.Length to tell it how big to make the array, but that didn't work. What is weird to me is I can use the following code without problem:

        // returns the accurate number of urls that were in the CSV we read in earlier
        txtbx1.text = lines.Length;
        // or: this returns the last entry in the csv, as well as the last entry in the array
        TextBox2.Text = lines[lines.Length - 1];

    I am confused why the array clearly has items in it (they can be called to fill a text box), but when I try to call them in my for loop it says it's a null reference?

    UPDATE: By placing my cursor on both calls to lines and pressing F12, I find they both go to the same instance. The next thought is that I am not calling ReadLinksIntoMemory in time; below is my code:

        private void button1_Click(object sender, EventArgs e)
        {
            button1.Enabled = false;
            ReadLinksIntoMemory();
            GoToEditLinks();
            button1.Enabled = true;
        }

    Unless I'm mistaken, the code says that the ReadLinksIntoMemory method must complete before GoToEditLinks can be called. If ReadLinksIntoMemory didn't finish in time, I shouldn't be able to fill my text boxes with the lines array length and/or last entry.

    UPDATE: Stepping into the method GoToEditLinks() I see that lines is null before it calls myIE.GoTo(lines[i]); but when it hits the GoTo command the value changes from null to the url it is supposed to go to, yet at that same time it gives me the null object error.

    UPDATE: I added an IsNullOrEmpty check method and the lines array passes it without any issue. I'm beginning to think it is an issue with WatiN and the myIE.GoTo command. I think this is the stack trace/call stack:

        Program.exe!Program.Form1.GoToEditLinks() Line 284  C#
        Program.exe!Program.Form1.button1_Click(object sender, System.EventArgs e) Line 191 + 0x8 bytes  C#
        [External Code]
        Program.exe!Program.Program.Main() Line 18 + 0x1d bytes  C#
        [External Code]

    Read the article

  • Unable to post via HTTP POST

    - by jihbvsdfu
    I am trying to post data via HTTP POST using name-value key pairs, but I am unable to post. The post url is http://mastercp.openweb.co.za/api/dbg_dump.asp. Should I include some header also while posting? Thanks

        public class MainActivity extends Activity {
            Button ok;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.profile);
                ok = (Button) findViewById(R.id.but_signup_login);
                ok.setOnClickListener(new OnClickListener() {
                    public void onClick(View arg0) {
                        System.out.println("Clicked");
                        DownloadWebPageTask task = new DownloadWebPageTask();
                        task.execute(new String[] { "http://mastercp.openweb.co.za/api/dbg_dump.asp" });
                    }
                });
            }

            public void postData() {
                // Create a new HttpClient and Post Header
                HttpClient httpclient = new DefaultHttpClient();
                HttpPost httppost = new HttpPost("http://mastercp.openweb.co.za/api/dbg_dump.asp");
                System.out.println("Clicked again");
                try {
                    // Add your data
                    List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(34);
                    String amount = "Ashish";
                    nameValuePairs.add(new BasicNameValuePair("User_Type", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Email", "[email protected]"));
                    nameValuePairs.add(new BasicNameValuePair("User_Email_In", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Pass", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Mobile", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Mobile_In", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_ADSL", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Org", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_VAT", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Name", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Surname", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_RegNo", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Address", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Town", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Code", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_State", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_Country", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_ADSL", amount));
                    nameValuePairs.add(new BasicNameValuePair("User_ADSL_Address", amount));
                    nameValuePairs.add(new BasicNameValuePair("Payment_CC_Alt", amount));
                    nameValuePairs.add(new BasicNameValuePair("Payment_Type", amount));
                    nameValuePairs.add(new BasicNameValuePair("CProfile", amount));
                    nameValuePairs.add(new BasicNameValuePair("COrder", amount));
                    nameValuePairs.add(new BasicNameValuePair("Debit_Name", amount));
                    nameValuePairs.add(new BasicNameValuePair("Debit_Bank", amount));
                    nameValuePairs.add(new BasicNameValuePair("Debit_Number", amount));
                    nameValuePairs.add(new BasicNameValuePair("Debit_Code", amount));
                    nameValuePairs.add(new BasicNameValuePair("Debit_Type", amount));
                    nameValuePairs.add(new BasicNameValuePair("TOS_Agree", amount));
                    nameValuePairs.add(new BasicNameValuePair("Code", amount));
                    nameValuePairs.add(new BasicNameValuePair("package_activation", amount));
                    nameValuePairs.add(new BasicNameValuePair("session", amount));
                    nameValuePairs.add(new BasicNameValuePair("OnceOff", amount));
                    nameValuePairs.add(new BasicNameValuePair("submit-button", amount));
                    try {
                        httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                    } catch (UnsupportedEncodingException e) {
                        System.out.println("Unsupported Exception " + e);
                        e.printStackTrace();
                    }
                } catch (Exception e) {
                    System.out.println("\nException last" + e); // TODO Auto-generated catch block
                }
            }

            private class DownloadWebPageTask extends AsyncTask<String, Void, String> {
                @Override
                protected String doInBackground(String... urls) {
                    String response = "";
                    for (String url : urls) {
                        postData();
                    }
                    return response;
                }

                @Override
                protected void onPostExecute(String result) {}
            }
        }
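    An observation on the pasted code rather than a confirmed fix: postData() builds the request and sets the entity, but no execute call ever sends it. The missing step, inside the outer try after setEntity, would look something like:

        HttpResponse response = httpclient.execute(httppost);
        System.out.println("Status: " + response.getStatusLine());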

    Read the article

  • Can this be improved? Scrubbing of dangerous html tags.

    - by chobo2
    I've been finding that, for something I consider pretty important, there is very little information or libraries on how to deal with this problem. I found this while searching. I really don't know all the million ways that a hacker could try to insert the dangerous tags. I have a rich html editor, so I need to keep non-dangerous tags but strip out bad ones. So is this script missing anything? It uses Html Agility Pack.

        public string ScrubHTML(string html)
        {
            HtmlDocument doc = new HtmlDocument();
            doc.LoadHtml(html);

            // Remove potentially harmful elements
            HtmlNodeCollection nc = doc.DocumentNode.SelectNodes("//script|//link|//iframe|//frameset|//frame|//applet|//object|//embed");
            if (nc != null)
            {
                foreach (HtmlNode node in nc)
                {
                    node.ParentNode.RemoveChild(node, false);
                }
            }

            // remove hrefs to java/j/vbscript URLs
            nc = doc.DocumentNode.SelectNodes("//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]");
            if (nc != null)
            {
                foreach (HtmlNode node in nc)
                {
                    node.SetAttributeValue("href", "#");
                }
            }

            // remove img with refs to java/j/vbscript URLs
            nc = doc.DocumentNode.SelectNodes("//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]");
            if (nc != null)
            {
                foreach (HtmlNode node in nc)
                {
                    node.SetAttributeValue("src", "#");
                }
            }

            // remove on<Event> handlers from all tags
            nc = doc.DocumentNode.SelectNodes("//*[@onclick or @onmouseover or @onfocus or @onblur or @onmouseout or @ondoubleclick or @onload or @onunload]");
            if (nc != null)
            {
                foreach (HtmlNode node in nc)
                {
                    node.Attributes.Remove("onFocus");
                    node.Attributes.Remove("onBlur");
                    node.Attributes.Remove("onClick");
                    node.Attributes.Remove("onMouseOver");
                    node.Attributes.Remove("onMouseOut");
                    node.Attributes.Remove("onDoubleClick");
                    node.Attributes.Remove("onLoad");
                    node.Attributes.Remove("onUnload");
                }
            }

            // remove any style attributes that contain the word expression (IE evaluates this as script)
            nc = doc.DocumentNode.SelectNodes("//*[contains(translate(@style, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'expression')]");
            if (nc != null)
            {
                foreach (HtmlNode node in nc)
                {
                    node.Attributes.Remove("stYle");
                }
            }

            return doc.DocumentNode.WriteTo();
        }

    Edit: Two people have suggested whitelisting. I actually like the idea of whitelisting, but never actually did it because no one can actually tell me how to do it in C#, and I can't even really find tutorials for how to do it in C# (the last time I looked; I will check it out again). How do you make a whitelist? Is it just a list collection? How do you actually parse out all html tags, script tags and every other tag? Once you have the tags, how do you determine which ones are allowed? Compare them to your list collection? But what happens if the content coming in has, say, 100 tags and you have 50 allowed? You've got to compare each of those 100 tags against 50 allowed tags. That's quite a bit to go through and could be slow. Once you find an invalid tag, how do you remove it? I don't really want to reject a whole set of text if one tag was found to be invalid. I'd rather remove it and insert the rest. Should I be using Html Agility Pack?
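    A hedged sketch of the whitelist approach the edit asks about, again with Html Agility Pack (requires System.Collections.Generic and System.Linq); the allowed-tag set is an illustrative placeholder, and attributes (href, src, style, on*) would still need their own pass:

        private static readonly HashSet<string> Allowed = new HashSet<string>
        {
            "#text", "p", "br", "b", "i", "em", "strong", "ul", "ol", "li", "a", "img"
        };

        public string WhitelistHtml(string html)
        {
            HtmlDocument doc = new HtmlDocument();
            doc.LoadHtml(html);
            // ToList() snapshots the tree so nodes can be removed while iterating
            foreach (HtmlNode node in doc.DocumentNode.Descendants().ToList())
            {
                if (!Allowed.Contains(node.Name.ToLowerInvariant()))
                {
                    // the second argument keeps the children, so stripping an unknown
                    // wrapper tag does not throw away the text inside it
                    node.ParentNode.RemoveChild(node, true);
                }
            }
            return doc.DocumentNode.WriteTo();
        }

    On the 100-tags-versus-50-allowed worry: a HashSet lookup is O(1), so this is a single linear walk over the document rather than 100 x 50 comparisons.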

    Read the article

  • Why is parsing a JSON response to a jQuery ajax request not working?

    - by Ankur
    I know there are several similar questions on SO; I have used some of them to get this far. I am trying to list a set of urls that match my input values. I have a servlet which takes some input, e.g. "aus" in the example below, and returns some output using out.print(), e.g. the two urls I have shown below.

    EXAMPLE
    Input: "aus"
    Output: [{"url":"http://dbpedia.org/resource/Stifel_Nicolaus"},{"url":"http://sempedia.org/ontology/object/Australia"}]

    Which is exactly what I want. I have seen that firebug doesn't seem to have anything in the response section despite having called out.print(jsonString); and it seems that out.print(jsonString); is working as expected, which suggests that the variable 'jsonString' contains the expected values. However, I am not exactly sure what is wrong.

    -------- The jQuery ---------

        $(document).ready(function() {
            $("#input").keyup(function() {
                var input = $("#input").val();
                //$("#output").html(input);
                ajaxCall(input);
            });
        });

        function ajaxCall(input) {
            // alert(input);
            $.ajax({
                url: "InstantSearchServlet",
                data: "property=" + input,
                beforeSend: function(x) {
                    if (x && x.overrideMimeType) {
                        x.overrideMimeType("application/j-son;charset=UTF-8");
                    }
                },
                dataType: "json",
                success: function(data) {
                    for (var i = 0, len = datalength; i < len; ++i) {
                        var urlData = data[i];
                        $("#output").html(urlData.url);
                    }
                }
            });
        }

    ------ The Servlet that calls the DAO class - and returns the results -------

        public class InstantSearchServlet extends HttpServlet {
            private static final long serialVersionUID = 1L;

            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                System.out.println("You called?");
                response.setContentType("application/json");
                PrintWriter out = response.getWriter();
                InstantSearch is = new InstantSearch();
                String input = (String) request.getParameter("property");
                System.out.println(input);
                try {
                    ArrayList<String> urllist;
                    urllist = is.getUrls(input);
                    String jsonString = convertToJSON(urllist);
                    out.print(jsonString);
                    System.out.println(jsonString);
                } catch (SQLException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            private String convertToJSON(ArrayList<String> urllist) {
                Iterator<String> itr = urllist.iterator();
                JSONArray jArray = new JSONArray();
                int i = 0;
                while (itr.hasNext()) {
                    i++;
                    JSONObject json = new JSONObject();
                    String url = itr.next();
                    try {
                        json.put("url", url);
                    } catch (JSONException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    jArray.put(json);
                }
                String results = jArray.toString();
                return results;
            }
        }
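    A hedged observation on the success callback above: unless datalength is defined somewhere else, "len = datalength" throws a ReferenceError, which would leave #output empty even when the JSON arrives intact; and .html() overwrites the element on every pass, so only the last url would ever survive. A corrected loop would look like:

        success: function(data) {
            for (var i = 0, len = data.length; i < len; ++i) {
                // append rather than replace, so every url survives the loop
                $("#output").append(data[i].url + "<br/>");
            }
        }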

    Read the article

  • Adding a hyperlink in a client report definition file (RDLC)

    - by rajbk
    This post shows you how to add a hyperlink to your RDLC report. In a previous post, I showed you how to create an RDLC report. We have been given the requirement, for the report we created earlier (the Northwind Product report), to add a column that will contain hyperlinks which are unique per row. The URLs will be RESTful, with the ProductID at the end. Clicking on the URL will take them to a website like so: http://localhost/products/3, where 3 is the primary key of the product row clicked on. To start off, open the RDLC and add a new column to the product table. Add text to the header (Details) and row (Product Website). Right click on the row (not header) and select "TextBox properties". Select Action – Go to URL. You could hard code a URL here, but what we need is a URL that changes based on the ProductID. Click on the expression button (fx). The expression builder gives you access to several functions and constants, including the fields in your dataset. See this reference for more details: Common Expressions for ReportViewer Reports. Add the following expression:

        = "http://localhost/products/" & Fields!ProductID.Value

    Click OK to exit the Expression Builder. The report will not render, because hyperlinks are disabled by default in the ReportViewer control. To enable them, add the following in your page load event (where rvProducts is the ID of your ReportViewer control):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                rvProducts.LocalReport.EnableHyperlinks = true;
            }
        }

    We want our links to open in a new window, so set the HyperLinkTarget property of the ReportViewer control to "_blank". We are done adding hyperlinks to our report. Clicking on the links for each product pops open a new window. The URL has the ProductID added at the end. Enjoy!

    Read the article

  • Releasing Shrinkr – An ASP.NET MVC Url Shrinking Service

    - by kazimanzurrashid
    A few months back, I started blogging about developing a Url Shrinking Service in ASP.NET MVC, but could not complete it due to my engagement with my professional projects. Recently, I was able to manage some time for this project to complete the remaining features that we planned for the initial release. So I am announcing the official release; the source code is hosted on CodePlex, and you can also see it live in action over here. The features that we have implemented so far:

    Public:
    - OpenID Login.
    - Base 36 and 62 based Url generation (a quick sketch of the idea appears at the end of this post).
    - 301 and 302 Redirect.
    - Custom Alias.
    - Maintaining Generated Urls of User.
    - Url Thumbnail.
    - Spam Detection through Google Safe Browsing.
    - Preview Page (with google warning).
    - REST based API for URL shrinking (json/xml/text).

    Control Panel:
    - Application Health monitoring.
    - Marking Url as Spam/Safe.
    - Block/Unblock User.
    - Allow/Disallow User API Access.
    - Manage Banned Domains.
    - Manage Banned Ip Address.
    - Manage Reserved Alias.
    - Manage Bad Words.
    - Twitter Notification when spam submitted.

    Behind the scenes it is developed with:
    - Entity Framework 4 (Code Only)
    - ASP.NET MVC 2
    - AspNetMvcExtensibility
    - Telerik Extensions for ASP.NET MVC (yes, you can use it freely in your open source projects)
    - DotNetOpenAuth
    - Elmah
    - Moq
    - xUnit.net
    - jQuery

    We will also be releasing a minor update in a few weeks which will contain some of the popular twitter client plug-ins and samples of how to use the REST API; we will also try to include the nHibernate + Spark version in that release. In the next release (not sure about the timeline yet) we will include Geo-Coding and some rich reporting for both the User and the Administrators. Enjoy!!!
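    As an aside on the Base 36/62 feature above, here is a minimal sketch of the idea (not Shrinkr's actual code): the identity column of a stored url row is encoded into a short alias token.

        private const string Alphabet =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

        // 125 -> "21", 1000000 -> "4c92"; base 36 is the same loop
        // restricted to the first 36 characters of the alphabet.
        public static string ToBase62(long id)
        {
            if (id == 0) return "0";
            var sb = new System.Text.StringBuilder();
            while (id > 0)
            {
                sb.Insert(0, Alphabet[(int)(id % 62)]);
                id /= 62;
            }
            return sb.ToString();
        }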

    Read the article
