Search Results

Search found 4618 results on 185 pages for 'websites'.


  • jQuery/JSONP widget and jQuery version conflict

    - by geraud
    I would like to create a widget that my visitors can display on their blog or website, and I plan to develop it with jQuery and JSONP. I know how to avoid conflicts between jQuery and other libraries (like Prototype), but what happens if jQuery is already loaded on a visitor's site and their version is different from mine? What if, for example, they run a script that depends on an older jQuery version which is not compatible with my copy? Does it stop working? Is there any workaround?
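
    A common answer to this (worth verifying against the jQuery versions you need to support) is to load your own copy of jQuery and immediately call jQuery.noConflict(true), which hands the global $ and jQuery back to whatever the host page had loaded and returns your instance under a private name. A minimal sketch; the script URL and widget id are placeholders, and older IE needs onreadystatechange instead of onload:

        (function() {
            var script = document.createElement('script');
            script.src = 'http://example.com/widget/jquery.min.js'; // your bundled version
            script.onload = function() {
                // true restores both window.$ and window.jQuery for the host page
                var widgetJQ = jQuery.noConflict(true);
                widgetJQ(function($) {
                    // inside here, $ is guaranteed to be your version
                    $('#my-widget').text('widget loaded');
                });
            };
            document.getElementsByTagName('head')[0].appendChild(script);
        })();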

  • Question about "ASP.NET 3.5 Social Networking" by Andrew Siemer (from Packt Publishing)

    - by user287745
    I am currently reading a book that explains how to build a social website: ASP.NET 3.5 Social Networking (https://www.packtpub.com/expert-guide-for-social-networking-with-asp-.net-3.5/book). On page 41 I noticed that the Solution Explorer screenshots in the text indicate that a Windows Forms Application project has been used instead of Web Forms (Create New Website). There are no web forms, so how would the end result be a site? What is happening here? Note: I have always built websites that need hosting and must be accessible from other computers using Web Forms (Create New Website), which gives you a web.config file, App_Data, and so on. Please help. Thank you.

  • Page posting issue when screen scraping

    - by Muhammad Akhtar
    Hi, I am working on screen scraping and have done it successfully on three websites, but I have an issue with the last one. When I hit its URL with my parameter, it shows the result on the next page: it simply posts to another page and shows the result fine there. However, when I request the page from my application I don't have an option to post; it only fetches the HTML of the requested page (my test link, which has the parameter in the URL). How can I handle this situation? Please give me a hint. Thanks. Here is my C# code; I am using HtmlAgilityPack:

        String url;
        HtmlWeb hw = new HtmlWeb();
        HtmlDocument doc;
        url = "http://mysampleURL";
        doc = hw.Load(url);
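
    Since HtmlWeb.Load issues a plain GET, one way to reproduce what the browser does is to send the form fields yourself as a POST and hand the returned HTML to HtmlAgilityPack. A rough sketch; the field name and URL are placeholders - take the real ones from the action attribute and <input> names of the <form> on the search page:

        using System.Collections.Specialized;
        using System.Net;
        using System.Text;
        using HtmlAgilityPack;

        // hypothetical form field; inspect the page's <form> for the real names
        var fields = new NameValueCollection();
        fields["searchTerm"] = "my parameter";

        using (var client = new WebClient())
        {
            // post to the form's action URL and capture the result page
            byte[] response = client.UploadValues("http://mysampleURL/results", "POST", fields);
            string html = Encoding.UTF8.GetString(response);

            var doc = new HtmlDocument();   // parse the result page as before
            doc.LoadHtml(html);
        }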

  • Question about the MIT License and open/closed source

    - by stck777
    Hello, I have a small, useful application (a website management tool developed in Java) that I want to publish online for free (everyone can use it for free), but I do not want to make it open source yet. It would take additional effort to clean up the code, write documentation, and so on, and this is just a small tool. Can I use the MIT license in this case, or does the MIT license obligate me to distribute the source code? Which license is best in this case? Thanks.

  • Single Sign On with Forms Authentication

    - by Christo Fur
    I am trying to set up single sign-on for two websites that reside on the same domain, e.g. http://mydomain (the top-level site, which contains a forms-auth login page) and http://mydomain/admin (a separately developed website residing in a virtual application within the parent website). I have read a few articles on single sign-on, e.g. http://www.codeproject.com/KB/aspnet/SingleSignon.aspx, and they seem to suggest it is just a case of having the same machineKey section in each web.config so that cookie encryption and decryption are the same for each application. I have set this up, but I never get prompted for credentials in the sub-site (the virtual application); I always get prompted in the parent site. In addition to having the same machineKey, I've also tried adding the same <authentication> and <authorization> elements. Any idea what I could be missing?
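
    For reference, same-domain single sign-on with forms auth usually requires the forms cookie settings to match as well as the machineKey: both applications must issue and read a ticket with the same cookie name, path, and protection. A sketch of what both web.config files might share (the key values here are placeholders, not usable keys):

        <system.web>
          <!-- must be byte-for-byte identical in both apps; generate real keys -->
          <machineKey validationKey="PLACEHOLDER..." decryptionKey="PLACEHOLDER..."
                      validation="SHA1" decryption="AES" />
          <!-- cookie name, path, and protection must also match in both apps -->
          <authentication mode="Forms">
            <forms name=".SSOAUTH" loginUrl="~/login.aspx" path="/" protection="All" timeout="60" />
          </authentication>
        </system.web>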

  • Getting BeautifulSoup to find a specific <p>

    - by Ryan
    I'm trying to put together a basic HTML scraper for a variety of scientific journal websites, specifically trying to get the abstract or introductory paragraph. The current journal I'm working on is Nature, and the article I've been using as my sample can be seen at http://www.nature.com/nature/journal/v463/n7284/abs/nature08715.html. I can't get the abstract out of that page, however. I'm searching for everything between the <p class="lead"> and </p> tags, but I can't seem to figure out how to isolate them. I thought it would be something simple like:

        from BeautifulSoup import BeautifulSoup
        import re
        import urllib2

        address = "http://www.nature.com/nature/journal/v463/n7284/full/nature08715.html"
        html = urllib2.urlopen(address).read()
        soup = BeautifulSoup(html)
        abstract = soup.find('p', attrs={'class': 'lead'})
        print abstract

    Using Python 2.5 and BeautifulSoup 3.0.8, running this returns None. I have no option of using anything else that needs to be compiled/installed (like lxml). Is BeautifulSoup confused, or am I?
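
    One quick sanity check before blaming the parser: confirm that the HTML you actually downloaded contains the tag at all - journal sites often serve scripts different markup than browsers get, and the code above fetches the /full/ page while the linked sample is the /abs/ page. A two-line check in the same Python 2 style:

        print 'class="lead"' in html   # False means the fetched page simply lacks the tag
        print len(html)                # a suspiciously small number suggests an error page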

  • Accessing the Internet From My BlackBerry App

    - by Ankit
    Hi all, this is my first attempt to code a BlackBerry app, so please bear with me. I am developing an app to make it easy to access certain information from certain websites using screen scraping. I am now done with the UI part of the application and am onto the internet access part. My question is: how do I access the internet from my app? I see that BlackBerry offers HTTP, Wi-Fi, and some other ways to access the internet. Does my app need to worry about which mode is being used? Or, as far as my app is concerned, is there a general API to access the net, with the logic of connecting to the internet handled by the device itself? Any pointers with some sample code would be much appreciated. Thank you, Ankit
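
    As a starting point: the low-level API is javax.microedition.io.Connector, but on BlackBerry the transport (BES/MDS, direct TCP, Wi-Fi) is chosen by appending suffixes to the URL, so on older OS versions the app unfortunately does need to care about the connection mode (OS 5.0+ adds a ConnectionFactory that can pick one for you). A minimal sketch, assuming direct TCP (";deviceside=true") is available on the device:

        import java.io.InputStream;
        import javax.microedition.io.Connector;
        import javax.microedition.io.HttpConnection;

        public class Fetcher {
            public static String fetch(String url) throws Exception {
                // try ";interface=wifi" or ";deviceside=false" (MDS) if direct TCP fails
                HttpConnection conn = (HttpConnection) Connector.open(url + ";deviceside=true");
                InputStream in = conn.openInputStream();
                StringBuffer sb = new StringBuffer();
                for (int ch; (ch = in.read()) != -1; ) {
                    sb.append((char) ch);
                }
                in.close();
                conn.close();
                return sb.toString();
            }
        }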

  • SphinxSearch or a spider - which one to choose?

    - by r2b2
    Hello, here is my problem: we own SiteA and SiteB, which share the same server and database, where we have full control. SiteC, SiteD, and SiteE are also sites we own, but they reside on different web hosts. The goal is to create a unified search across all of the sites mentioned above: if somebody searches for a term on SiteA, the results should automatically include matches from SiteB, SiteC, SiteD, and SiteE, grouped under the website they were found in. All of these websites store their content in their own databases. If I use Sphinx to index the above sites, I would then need the sites we don't fully control to set up a web service where I can download a database dump or CSV file for indexing. I'm not quite sure how a spider would come into play here, so I need your opinion: Sphinx or a spider? Thanks!
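
    If you go the Sphinx route, note that the remote sites don't necessarily have to ship full dumps: Sphinx's xmlpipe2 source type runs any command that writes XML to stdout, so a small script that pulls each remote site's content (over HTTP, from their web service, etc.) can feed the indexer directly. A sketch of the sphinx.conf side; the script and paths are placeholders:

        source site_c
        {
            type            = xmlpipe2
            # any program that prints xmlpipe2-format XML to stdout
            xmlpipe_command = /usr/bin/php /path/to/export_site_c.php
        }

        index site_c
        {
            source = site_c
            path   = /var/data/sphinx/site_c
        }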

  • Randomly getting a 500 error on my website

    - by randylahey
    I am randomly getting a 500 error on my websites, all of which are hosted on a shared server. It doesn't happen all the time; just randomly, when I refresh the page, I will get a 500 error, and it usually comes back after I refresh a few times. I've been reading about it and have heard that .htaccess files can cause this. I did recently start using an .htaccess file to tell my server to use PHP 5, with a directive I got straight from my hosting company. This is what is in the .htaccess file:

        AddType x-mapp-php5 .php

    If anyone has any ideas as to what could be causing this, that would really help. Thanks!

  • Google Charts Through cURL

    - by swt83
    I have a PHP class that helps me generate URLs for custom charts using the Google Chart service. These URLs work fine when I load them in my browser, but I'm trying to pull them using cURL so I can serve the charts on secure https websites. Whenever I try to pull a chart via cURL, I get an Error 400 Bad Request. Any idea how to get around this? Everything I have tried has failed.

        $url = urldecode($_GET['url']);
        $session = curl_init($url);                           // open the cURL session
        curl_setopt($session, CURLOPT_HEADER, false);         // don't return HTTP headers
        curl_setopt($session, CURLOPT_RETURNTRANSFER, true);  // do return the contents of the call
        $image = curl_exec($session);                         // make the call
        #header("Content-Type: image/png");                   // set the content type appropriately
        curl_close($session);                                 // and close the session
        die($image);
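
    One likely culprit is the urldecode(): chart URLs are full of characters (pipes, commas, plus signs) that the Chart API expects to stay encoded, so decoding the whole URL before the request can produce a 400. One approach worth trying - the Chart API also accepts POST - is to send the chart parameters as the request body, which sidesteps both the re-encoding and URL-length issues. A sketch, assuming the incoming URL contains a query string:

        $url   = urldecode($_GET['url']);
        $parts = explode('?', $url, 2);             // endpoint + chart parameters

        $session = curl_init($parts[0]);            // e.g. http://chart.apis.google.com/chart
        curl_setopt($session, CURLOPT_HEADER, false);
        curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($session, CURLOPT_POST, true);
        curl_setopt($session, CURLOPT_POSTFIELDS, $parts[1]);  // parameters go in the body
        $image = curl_exec($session);

        if ($image === false) {
            die(curl_error($session));              // surface the real cURL error
        }
        curl_close($session);

        header('Content-Type: image/png');
        echo $image;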

  • Why isn't Google Web Toolkit more popular?

    - by gerdemb
    I've recently become intrigued with Google Web Toolkit and have started playing with it on some personal projects, but I've noticed that it doesn't seem to be very popular. For example, two major freelancing job boards (www.elance.com and www.odesk.com) list no jobs for GWT, and the gallery of projects using it on Google's official site (http://code.google.com/webtoolkit/app_gallery.html) is pretty slim compared to, say, the Django equivalent (http://www.djangosites.org/). This seems odd to me, as GWT has been around since 2006 and is backed by the Google brand name, and it neatly solves the problem of creating cross-browser, completely dynamic websites in a way I haven't seen from any other tool. So why the lack of acceptance?

  • "port forwarding": redirect calls to webservice at port 8081 to port 80

    - by niba
    Hi, a colleague of mine wrote a web service that runs on port 8081 of our Windows 2008 server. He uses the ServiceHost class; as far as I know, this means it is a standalone host (no IIS or ASP.NET involvement). Note: I'm new to WCF. Now some clients are behind firewalls that block requests to remote port 8081 on our server (where the web service runs). The easiest solution would be to run the web service host on port 80, but there is also an Apache 2.2 web server running on the Windows server, hosting some websites, and by default it runs on port 80. My solution after some research: use an Apache virtual host (let's say http://webservice.[hostname]:80) and route its requests to the web service host (http://[hostname]:8081). Is this a good idea? Can Apache handle forwarding to standalone web service hosts? It would be nice if someone could lead me onto the right track :) Best regards, Niels
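
    Yes - this is what mod_proxy's reverse-proxy mode is for, and Apache doesn't care that the backend is a standalone WCF host rather than another web server. A sketch of the virtual host (the hostname is a placeholder; mod_proxy and mod_proxy_http must be loaded). One caveat worth testing: the WSDL the service publishes may still advertise port 8081 unless the WCF binding is adjusted:

        <VirtualHost *:80>
            ServerName webservice.example.com

            ProxyPass        / http://localhost:8081/
            ProxyPassReverse / http://localhost:8081/
        </VirtualHost>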

  • JavaScript Multidimensional Arrays

    - by JasonS
    This wasn't the question I was going to ask, but I have unexpectedly run aground with JavaScript arrays. I come from a PHP background, and after looking at a few websites I am none the wiser. I am trying to create a multidimensional array:

        var photos = new Array;
        var a = 0;
        $("#photos img").each(function(i) {
            photos[a]["url"] = this.src;
            photos[a]["caption"] = this.alt;
            photos[a]["background"] = this.css('background-color');
            a++;
        });

    Error message: photos[a] is undefined. How do I do this? Thanks.
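
    For reference, two separate things go wrong in the snippet: photos[a] is never initialized to an object before keys are assigned to it, and inside each() this is a raw DOM element, which has no .css() method. A sketch of a working version:

        var photos = [];
        $("#photos img").each(function() {
            photos.push({
                url:        this.src,
                caption:    this.alt,
                background: $(this).css('background-color')  // wrap the element for jQuery methods
            });
        });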

  • 404 Error Hosting WCF Service via IIS 7.5 Shared Content

    - by Chad Gruka
    We're attempting to host a WCF service (.NET 3.5 SP1) using Shared Content on IIS 7.5. At the moment it returns a 404 error. My assumption at this point is that WCF cannot be hosted via a UNC path (see the workaround in "Hosting WCF service in IIS6 using UNC"). Steps I've taken:
    - Established a FullTrust to/with the UNC path.
    - Verified the service works when hosted on a local disk.
    - Verified a basic HTML page renders without issue from the UNC path.
    - Verified an ASPX page renders without issue from the UNC path.
    - Explicitly set "Full Control" permissions for the user running the service.
    The reason for using Shared Content in IIS 7.5 is to host this WCF service, and several other websites, in a web farm; Shared Content avoids the need for file replication between the nodes in the farm. (Note we are also using Shared Configuration to support this environment.)

  • Anybody have any success getting around IIS7 issues with WiX 3.5?

    - by Will
    WiX 3.5 still has issues creating websites in IIS7. I can get around most of them, but I'm getting hosed by the inability to configure the website authentication mode. Traditionally, you would just use WebDirProperties to, for instance, turn on Windows authentication:

        <iis:WebDirProperties Id="OMFG3.5BUGSUX" WindowsAuthentication="yes" />

    Well, this doesn't work. So now, once my nice lovely installer exits, you get a big fat screw-you-unauthorized-jerk message. Not exactly professional looking. Does anybody have any suggestions or tips on working around these shortcomings in WiX?
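
    Until the WiX iis extension handles this on IIS7, one workaround people use is to flip the authentication flags after the files are installed, e.g. from a deferred custom action that shells out to appcmd. Roughly - the site and app names are placeholders, and the section path is worth verifying on your IIS version:

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/MyApp" ^
            /section:system.webServer/security/authentication/windowsAuthentication ^
            /enabled:true /commit:apphost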

  • Adding rows to UITableView and passing data between view controllers

    - by Jonathan
    I have a list of websites in a plist; when the app loads, this populates a table view (which is inside a navigation controller). I have added an Add button to the navigation bar and created another view controller to handle inputting the new website (title and URL). It is very similar to how the Contacts app looks: there is a table view, and when you tap Add, the add UI slides up. I have got all this working great so far. My problem is what happens when the user taps Done. I can add the website to the plist (each website is a dictionary in the plist with two keys at the moment), but then how do I tell the table view to update? The table view has not been removed from the main window; the add view has just been added on top of the screen. Another way of asking: when you tap Save on the Add Contact screen in Contacts, how does the new contact's data get shown in the table view?
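
    The usual pattern is a delegate protocol: the add controller never touches the table; it hands the new record back to the list controller, which appends it to its backing array and reloads. A rough Objective-C sketch with illustrative names:

        // declared in the add controller's header
        @protocol AddSiteDelegate
        - (void)didAddSite:(NSDictionary *)site;
        @end

        // in the add controller, when Done is tapped
        - (IBAction)done:(id)sender {
            NSDictionary *site = [NSDictionary dictionaryWithObjectsAndKeys:
                                  titleField.text, @"title", urlField.text, @"url", nil];
            [delegate didAddSite:site];               // hand the record back
            [self dismissModalViewControllerAnimated:YES];
        }

        // in the table view controller, which is the delegate
        - (void)didAddSite:(NSDictionary *)site {
            [sites addObject:site];                   // the NSMutableArray backing the table
            [self.tableView reloadData];              // or insertRowsAtIndexPaths: for a row animation
        }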

  • What is a good Java crawler library?

    - by DrDee
    Hi, I am about to develop a crawler in Java but don't feel like reinventing the wheel, and a quick Google search gives a whole bunch of Java libraries for building one. Nutch is of course a very robust package, but it seems a bit too advanced for my needs: I only need to crawl a handful of websites a week, each containing a couple of thousand pages. Which open source Java library would you recommend, considering:
    - speed
    - multithreading (or even distributed crawling)
    - ease of extending it with new functionality
    - active maintenance and documentation
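
    For calibration, at that scale (a few sites, a few thousand pages a week) the JDK alone nearly suffices; the libraries mainly buy politeness delays, robots.txt handling, retry logic, and distribution. A minimal single-threaded sketch of the core loop, with a placeholder seed and a deliberately naive link extractor:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.util.HashSet;
        import java.util.LinkedList;
        import java.util.Queue;
        import java.util.Set;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class TinyCrawler {
            private static final Pattern LINK = Pattern.compile("href=\"(http[^\"]+)\"");

            public static void main(String[] args) {
                Queue<String> frontier = new LinkedList<String>();
                Set<String> seen = new HashSet<String>();
                frontier.add("http://example.com/");      // seed URL (placeholder)

                while (!frontier.isEmpty() && seen.size() < 1000) {
                    String url = frontier.poll();
                    if (!seen.add(url)) continue;         // already visited
                    try {
                        StringBuilder page = new StringBuilder();
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(new URL(url).openStream()));
                        for (String line; (line = in.readLine()) != null; ) {
                            page.append(line).append('\n');
                        }
                        in.close();
                        // ... index/process the page here ...
                        Matcher m = LINK.matcher(page);
                        while (m.find()) {
                            frontier.add(m.group(1));     // no same-host filter or crawl delay
                        }
                    } catch (Exception e) {
                        // dead link, timeout, or non-HTML content: skip it
                    }
                }
            }
        }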

  • Django - How to do CSRF on public pages? Or, better yet, how should it be used, period?

    - by orokusaki
    After reading http://docs.djangoproject.com/en/dev/ref/contrib/csrf/#how-to-use-it, I came to the conclusion that it is not valid to use this except when you trust the person using the page that includes it. Is that correct? I guess I don't really understand when it's safe to use, because of this statement: "This should not be done for POST forms that target external URLs, since that would cause the CSRF token to be leaked, leading to a vulnerability." The reason it's confusing is that, to me, an "external URL" would be one that isn't part of my domain (i.e., I own www.example.com and put up a form that posts to www.spamfoo.com). That obviously can't be what is meant, since people wouldn't use Django to generate forms that post to other people's websites; but then how could it be true that you can't use CSRF protection on public forms (like a login form)?
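
    Concretely, CSRF protection is meant precisely for public forms such as logins; with the middleware enabled, the template tag is all a same-domain form needs. The "external URLs" warning only says: don't render your token inside a form whose action posts to someone else's site, because that site would then receive your users' tokens. A minimal sketch:

        <!-- fine: posts back to your own domain, so include the token -->
        <form action="/accounts/login/" method="post">
          {% csrf_token %}
          <input type="text" name="username">
          <input type="password" name="password">
          <input type="submit" value="Log in">
        </form>

        <!-- not fine: omit {% csrf_token %} here, or it leaks to spamfoo.com -->
        <form action="http://www.spamfoo.com/submit" method="post">
          ...
        </form>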

  • Is there a format or service for resume/CV data?

    - by Ben Dauphinee
    I have noticed, through the process of signing up for various freelance, job-seeking, and professional networking sites, that they all want your resume/CV data, and I am really getting tired of copy-pasting it, especially since I have a website. Is there a standard format or service for this data that I don't know about? If not, does anyone want to help me build something like this? I'm thinking of a service similar to OpenID that lets you maintain a central resume for your data to be pulled from: no more filling in the same data over and over, or maintaining copies on every one of the plethora of websites that hold it. Takers?

  • Unable to cast object of type 'System.Object[]' to type 'System.String[]'

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server website application, and I am a newbie to ASP.NET. I am getting the error above on the last line of the following code. Can you give me advice on how to fix it? This compiles correctly, but I encounter the error at runtime:

        DataTable dt;
        Hashtable ht;
        string[] SingleRow;
        ...
        SqlConnection conn2 = new SqlConnection(connString);
        SqlCommand cmd = conn2.CreateCommand();
        cmd.CommandText = "dbo.AppendDataCT";
        cmd.Connection = conn2;
        SingleRow = (string[])dt.Rows[1].ItemArray;

    My error:

        System.InvalidCastException was caught
        Message="Unable to cast object of type 'System.Object[]' to type 'System.String[]'."
        Source="App_Code.g68pyuml"
        StackTrace:
            at ADONET_namespace.ADONET_methods.AppendDataCT(DataTable dt, Hashtable ht) in c:\Documents and Settings\Admin\My Documents\Visual Studio 2008\WebSites\Jerry\App_Code\ADONET methods.cs:line 88
        InnerException:
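
    For reference, ItemArray is typed object[], and an array cast cannot convert the element type, which is why this compiles but fails at runtime. Converting each element works; a sketch (the DBNull mapping is a choice you may want to adjust):

        // convert each element, mapping database NULLs to an empty string
        string[] singleRow = Array.ConvertAll(dt.Rows[1].ItemArray,
            item => item == DBNull.Value ? "" : item.ToString());

        // or, with .NET 3.5 LINQ:
        // string[] singleRow = dt.Rows[1].ItemArray.Select(o => Convert.ToString(o)).ToArray();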

  • svn merge - moved repository to a different server, and now getting 'has different repository root'

    - by HorusKol
    This is kind of similar to http://stackoverflow.com/questions/1601021/subversion-merge-has-different-repository-root-than, but appears to have a very different cause (especially as the answer to that question didn't resolve my problem). A while back we moved our SVN repositories to a different server, but we've been using an alias so that the old server name points to the new server. I've gotten into the habit of using the new server name whenever I check out a new working copy, but most of the current working copies were checked out under the old server name and have been modified in place, as they are live websites. Until now this hasn't been a problem, except that this morning I merged some changes from my development branch into a working copy of the release version and got the message "has different repository root", and the merge stops dead. I know this is because I'm using the new server name while the working copy was checked out via the old server name, but is there a simple way to fix this? Or, if not a simple way, a well-documented one?
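
    Assuming the repository contents are identical and only the hostname changed, the standard fix is to rewrite the URLs stored in each working copy rather than checking anything out again:

        # run at the root of each working copy (SVN 1.6-era syntax)
        svn switch --relocate http://oldserver/svn/repo http://newserver/svn/repo

        # then confirm the new URL took
        svn info

    After relocating, make sure the URL passed to svn merge uses the same server name the working copy now records, or the "different repository root" check will trip again.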

  • Using the terminal vs. KDE in Linux?

    - by Ke
    Hi, I'm used to using Nautilus within CentOS but have recently got a VPS, and I am quickly realising that running a desktop environment like KDE is unacceptable in this environment. Still, I do find it so much quicker to do things like folder permissions in KDE rather than typing it all out in the terminal. Everyone I speak to says to use the terminal, and that I should learn this way as opposed to using KDE, but there are certain things I just don't get. How is it possible to make quick changes to scripts and view them in a browser using only a terminal, without a mouse or KDE? I am wondering how to develop websites using just the terminal. And how can it be quicker to type out and view permissions in the terminal when it's instant and just a few clicks in KDE? Any thoughts are much appreciated; I would love to understand the benefits but just can't seem to see them right now. Cheers, Ke.

  • Using Timthumb to Resize Images from CKEditor

    - by Edward Coleridge Smith
    I am using CKEditor for basic text and image input on my website. I have noticed that it is quite sporadic in its method of generating HTML for images when you add them (sometimes it uses height and width attributes; other times it uses CSS). I use TimThumb for on-the-fly image resizing on a number of other websites and find it very useful: a mod_rewrite rule in my .htaccess file lets me create URLs like http://localhost/images/800x600/image.jpg to achieve resizing. I would like to somehow incorporate this into CKEditor. I cannot find how to do this in the documentation, so I have tried post-processing the data produced by CKEditor using regex, but as mentioned, CKEditor seems too inconsistent to get this to work all the time. Has anyone done this before? How did you achieve it?
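
    One way to sidestep CKEditor's inconsistent markup is to not parse its sizing at all and only rewrite the src attribute, routing every content image through the existing resize URLs. A sketch of both halves, assuming images live under /images/ and a hypothetical default size of 800x600; TimThumb's src/w/h parameters are its standard query string:

        // rewrite local image sources that aren't already routed through a size
        $html = preg_replace(
            '#src="/images/(?!\d+x\d+/)([^"]+)"#',
            'src="/images/800x600/$1"',
            $html
        );

    and the corresponding .htaccess rule:

        RewriteEngine On
        RewriteRule ^images/(\d+)x(\d+)/(.+)$ timthumb.php?src=/images/$3&w=$1&h=$2 [L]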
