Search Results

Search found 1083 results on 44 pages for 'mysite'.

Page 27/44

  • lightbox dynamic image retrieval

    - by GSTAR
    I am constructing a lightbox gallery, currently experimenting with FancyBox (http://fancybox.net) and ColorBox (http://colorpowered.com/colorbox). By default you have to include a link to the large version of the image so that the lightbox can display it. However, I want the image link URLs to point to a script rather than directly to the image file. So for example, instead of: <a href="mysite/images/myimage.jpg"> I want to do: <a href="mysite/photos/view/abc123"> The above URL points to a function: public function actionPhotos($view) { $photo=Photo::model()->find('name=:name', array(':name'=>$view)); if(!empty($photo)) { $user=$photo->user; $this->renderPartial('_photo', array('user'=>$user, 'photo'=>$photo, true)); } } At some point in the future the function will also update the view count of the image. Now this approach is working to an extent - most images load up, but some do not (the lightbox gets displayed in a malformed state). I think the reason is that it is not processing the function quickly enough. For example, when I click the "next" button it needs to go to the URL, process the function and retrieve/output the response. Does anybody know how I can get this working properly?
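    One possible direction, sketched below rather than taken from the post: have the script stream the image bytes with an image Content-Type, so the lightbox always receives a real image response instead of a rendered partial. The $photo->filename property, the file location, and the Yii 1.x helpers are assumptions for illustration only.

      public function actionPhotos($view)
      {
          $photo = Photo::model()->find('name=:name', array(':name' => $view));
          if ($photo === null) {
              throw new CHttpException(404, 'Photo not found');
          }

          // TODO: increment the photo's view counter here before streaming it

          // assumed location of the original file on disk
          $path = Yii::getPathOfAlias('webroot') . '/images/' . $photo->filename;

          header('Content-Type: image/jpeg'); // adjust if other formats are stored
          header('Content-Length: ' . filesize($path));
          readfile($path);
          Yii::app()->end();
      }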

    Read the article

  • Single-entry implementation gone wrong

    - by user745434
    I'm doing my first single-entry site and, based on the result, I can't see the benefit. I've implemented the following: .htaccess redirects all requests to index.php at the root; the URL is parsed and each /segment/ is stored as an element in an array; the first segment indicates which folder to include (e.g. "users" » "/pages/users/index.php"); the index.php file of each folder parses the remaining elements in the segments array until the array is empty; the content.php file of each folder is included if there are no more elements in the segments array, indicating that the destination file is reached. Sample file structure (folders in []): [root] index.php [pages] [users] index.php content.php [profile] index.php content.php [edit] index.php content.php [other-page] index.php content.php. Request: http://mysite.com/users/profile/ .htaccess redirects the request to http://mysite.com/index.php; the URL is parsed and the segments array contains: [1] users, [2] profile; index.php maps [1] to "pages/users/index.php", so includes that file; pages/users/index.php maps [2] to pages/users/profile/index.php, so includes that file; since there are no other elements in the segments array, the content.php file in the current folder (pages/users/profile) is included. I'm not really seeing the benefit of doing this over having functions that include components of the site (e.g. include_header(), include_footer(), etc.), so I conclude that I'm doing something terribly wrong. I'm just not sure what it is.
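    As a point of comparison, the whole chain of per-folder index.php includes can collapse into one loop in the root index.php. A minimal sketch, with the folder layout taken from the question and everything else assumed:

      <?php
      // index.php at the web root (front controller sketch)
      $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
      $segments = array_values(array_filter(explode('/', $path)));

      $dir = dirname(__FILE__) . '/pages';
      foreach ($segments as $segment) {
          // only allow plain folder names, which also blocks directory traversal
          if (!preg_match('/^[A-Za-z0-9-]+$/', $segment) || !is_dir($dir . '/' . $segment)) {
              header('HTTP/1.1 404 Not Found');
              exit('Not found');
          }
          $dir .= '/' . $segment;
      }

      // destination reached: render that folder's content file
      require $dir . '/content.php';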

    Read the article

  • Modify url in browser using javascript?

    - by user246114
    Hi, is it possible to change the URL in the user's browser without actually loading a page, using JavaScript? I don't think it is (it could lead to unwanted behavior), but I'm in a situation where this would be convenient: I have a web app which displays reports generated by users. The layout is roughly two columns: Column 1 lists the report names (Report A, Report B, Report C) and Column 2 shows the currently selected report's contents. Right now the user would be looking at a URL like www.mysite.com/user123 to see the above page. When the user clicks the report names in column 1, I load the contents of that report in column 2 using Ajax. This is convenient for the user, but the URL in their browser remains unchanged. The users want to copy the URL for a report to share with friends, so I suppose I could provide a button to generate a URL for them, but it would be more convenient for them to have it already as the URL in their browser, something like www.mysite.com/user123/reportb. The alternative is to not load the contents of the report in column 2 using Ajax, but rather do a full page refresh. That would at least leave a linkable URL ready in the user's URL bar, but it is not as convenient as using Ajax. Thanks
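    Browsers that implement the HTML5 History API can do exactly this; a hedged sketch (loadReport() and the URL scheme are placeholders, not the asker's code):

      // update the address bar when a report is loaded via AJAX
      function showReport(reportId) {
          loadReport(reportId); // existing AJAX call that fills column 2
          if (window.history && window.history.pushState) {
              // changes the visible URL without reloading the page
              window.history.pushState({ report: reportId }, '', '/user123/' + reportId);
          }
      }

      // let the back/forward buttons restore the matching report
      window.onpopstate = function (event) {
          if (event.state && event.state.report) {
              loadReport(event.state.report);
          }
      };

    In older browsers the usual fallback is changing only location.hash (e.g. #reportb), which is also copyable without triggering a page load.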

    Read the article

  • Hacking the WordPress Category Widget

    - by Scott B
    The default WordPress categories widget does not allow excluding named categories. I've created a plugin which adds a customized category widget to the "Available Widgets" listing, which gives me some control over the items I want to exclude. Code is below... <?php /* Plugin Name: Custom Categories Widget Plugin URI: http://mysite.com Description: Removes the Specified Categories from the Default Categories Listing Author: Me Version: 1.0 Author URI: http://mysite.com */ function widget_my_categories() { wp_list_categories('exclude=1'); } function my_categories_init() { register_sidebar_widget(__('Custom Categories Widget'), 'widget_my_categories'); } add_action("plugins_loaded", "my_categories_init"); ?> However, I want the generated code to emulate the same look and feel as the default categories widget (i.e., the word "Categories" appears as a bullet in my widget, but as an h4-level heading element in the default categories widget). I want the same structure to be applied to my custom widget as the default categories widget has. I'd also like to give the user the option to specify the title of the categories listing (just as they can do in the default categories widget). By the way, I'm using id 1, which is the default "uncategorized" category, and assigning items to that category that I don't want to appear in the listing. Any help much appreciated! :)
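    A hedged sketch of how the callback could pick up the theme's widget markup, which is what makes the stock widget render an h4 heading: the old register_sidebar_widget API passes the sidebar's $args to the callback, and title_li= stops wp_list_categories printing its own "Categories" list item. The option name used for the title is an assumption.

      <?php
      function widget_my_categories($args) {
          extract($args); // provides $before_widget, $before_title, etc. from the theme

          $title = get_option('my_categories_title'); // assumed option for a user-set title
          if (empty($title)) {
              $title = __('Categories');
          }

          echo $before_widget;
          echo $before_title . $title . $after_title;
          echo '<ul>';
          wp_list_categories('exclude=1&title_li=');
          echo '</ul>';
          echo $after_widget;
      }

    Letting the user edit that title from the admin screen would also need a control callback registered with register_sidebar_widget_control, or a move to the newer WP_Widget class.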

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch.

    Architecture

    One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation.

    Consuming the service

    Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much, but they are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

      svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

    I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
    If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABCs (address, binding, and contract), you can get away with just this:

      <system.serviceModel>
        <client>
          <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />
        </client>
      </system.serviceModel>

    By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective and, theoretically, less code I have to change if and when Microsoft fixes this behavior.

    User interface

    The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax against an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object. $.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier.
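    A call along the lines described here might look like the following sketch; the action URL, payload fields, and the handleResult/showError helpers are illustrative assumptions rather than code from the post.

      // getReportUrl would be emitted server-side, e.g. from Url.Action
      $('#saveButton').click(function () {
          $.ajax({
              url: getReportUrl,
              type: 'POST',
              data: JSON.stringify({ id: 42, name: 'test' }),
              dataType: 'json',
              contentType: 'application/json; charset=utf-8',
              success: function (result) {
                  // "success" only means the HTTP request returned 200 OK,
                  // so the result still has to be inspected (see below)
                  handleResult(result);
              },
              error: function (xhr, status, err) {
                  showError(status);
              }
          });
      });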
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
    Your C# code calls into the Exception Handling module, referencing the policy you pass to the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET's Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both conserve bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

      if (filterContext.HttpContext.Request.IsAjaxRequest())
      {
          filterContext.Result = Json(new ErrorModel(...));
          filterContext.ExceptionHandled = true;
      }

    My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

    Summary

    As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • Apache: Illegal override option FileInfo

    - by Kave
    I have installed a new Ubuntu 12.04 Server and set up Apache and MySQL. I am just trying to replicate what I have on my current server and came across a single problem: FileInfo. Within these two files below: /etc/apache2/sites-available/default-ssl /etc/apache2/sites-available/default I need to add some overrides for the Apache server. Original: <Directory /var/www/MySite> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> New: <Directory /var/www/MySite> Options Indexes FollowSymLinks MultiViews AllowOverride FileInfo, Indexes Order allow,deny allow from all </Directory> I have installed the following mods for Apache: sudo apt-get install lamp-server^ -y sudo apt-get install apache2.2-common apache2-utils openssl openssl-blacklist openssl-blacklist-extra -y sudo apt-get install curl libcurl3 libcurl3-dev php5-curl -y sudo apt-get install php5-tidy -y sudo apt-get install php5-gd -y sudo apt-get install php-apc -y sudo apt-get install memcached -y sudo apt-get install php5-memcache -y sudo a2enmod ssl sudo a2enmod rewrite sudo a2enmod headers sudo a2enmod expires sudo a2enmod php5 So when I do a restart with AllowOverride None, it's all OK: sudo /etc/init.d/apache2 restart * Restarting web server apache2 ... waiting [OK] But as soon as I change the AllowOverride to FileInfo, Indexes I get: Syntax error on line 11 of /etc/apache2/sites-enabled/000-default: Illegal override option FileInfo, Action 'configtest' failed. The Apache error log may have more information. ...fail! I can't see anything unusual in the error.log: [Wed Jun 06 08:23:51 2012] [notice] caught SIGTERM, shutting down [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!? [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!? [Wed Jun 06 08:23:52 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.1 with Suhosin-Patch mod_ssl/2.2.22 OpenSSL/1.0.1 configured -- resuming normal operations I get that warning because it's a test server; nonetheless, I get the same warning with AllowOverride None and yet it restarts the Apache server correctly. Therefore this warning should be harmless. Have I missed something? Thanks,
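    For what it's worth, AllowOverride takes a space-separated list of keywords, not a comma-separated one, which is exactly why the error message quotes "FileInfo," with the comma attached. A sketch of the intended block:

      <Directory /var/www/MySite>
          Options Indexes FollowSymLinks MultiViews
          # no comma between override options
          AllowOverride FileInfo Indexes
          Order allow,deny
          allow from all
      </Directory>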

    Read the article

  • Set up FTP user with ProFTPD on Ubuntu

    - by kidrobot
    I want to set up a user "ftp" so they can upload and download files in my /home/httpd/mysite/public_html directory. All files in public_html are owned by user ftp and in group www-data so the ftp user looks like so: uid=108(ftp) gid=33(www-data) groups=33(www-data),65534(nogroup) When I try to connect via an FTP client I get 530 Login incorrect. ftp: Login failed. What do I need to uncomment/add to the proftpd.conf file to make this work?
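    Two hedged things to check with a stock Ubuntu ProFTPD install: logins are rejected for accounts whose shell is not listed in /etc/shells, and the username "ftp" is normally reserved for the anonymous login block in the default config. A minimal proftpd.conf sketch for a dedicated upload account (paths and group taken from the question, the rest assumed):

      # /etc/proftpd/proftpd.conf
      RequireValidShell off                               # allow an account with /bin/false as its shell
      DefaultRoot /home/httpd/mysite/public_html www-data
      # if the default config contains an <Anonymous ~ftp> block, comment it out
      # or use a user name other than "ftp" for this account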

    Read the article

  • How to move SharePoint authentication from AD to LDAP without breaking user profiles?

    - by Dan
    We have a bunch of users in a local Active Directory OU that access the SharePoint portal. We've just added LDAP authentication and pointed it at the organisation's global LDAP server, so our AD accounts are now redundant. Is there a way to re-map the authentication for a SharePoint (MOSS 2007) user/profile? That is, can we manually change a lot of users so that they log in with their LDAP credentials and get the same SharePoint MySite, groups, etc. as when they were authenticating via AD?
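    The usual tool for re-pointing existing accounts in MOSS 2007 is stsadm's migrateuser operation; a hedged sketch, where the provider prefix "ldapmember" and the account names are assumptions that depend on how the LDAP membership provider is registered in web.config:

      REM run on a farm server, once per account (easily scripted from a CSV)
      stsadm -o migrateuser -oldlogin DOMAIN\jsmith -newlogin ldapmember:jsmith -ignoresidhistory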

    Read the article

  • 500 internal server error running php file in cgi-bin

    - by vvvvvvv
    A 500 internal server error is shown when I access http://mysite.com/cgi-bin/test.php. test.php contains: <p> title here</p> <?php echo "hi"; ?> The error log shows: (8)Exec format error: exec of '/var/www/cgi-bin/test.php' failed. Premature end of script headers: test.php. Solved it by adding AddHandler application/x-httpd-php .php
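    For reference, the "Exec format error" appears because mod_cgi tried to execute the file directly and it has no interpreter line; the AddHandler fix hands .php files to mod_php instead. If the file really had to run as a CGI from cgi-bin, a hedged alternative would be a shebang plus execute permission (the php-cgi path varies by distribution):

      #!/usr/bin/php-cgi
      <?php
      // test.php executed as a CGI: the shebang names the interpreter,
      // and the file must be executable (chmod 755 test.php)
      echo "hi";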

    Read the article

  • Wordpress: Upload has failed to upload due to an error Missing a temporary folder

    - by JL.
    I've installed WordPress on IIS, running on Windows 7 (64-bit). The rest of the site is working perfectly, except when I try to upload images into media. I then get this error message: "sample.jpg" has failed to upload due to an error Missing a temporary folder I have edited the php.ini file to point to a directory: D:\work\websites\mySite Saved that and did an IIS reset. The problem still persists. Any ideas what is wrong?
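    "Missing a temporary folder" generally means PHP's upload_tmp_dir is unset or not writable by the IIS worker process. A hedged php.ini sketch; the folder path is an assumption, and it must exist and be writable by the application pool identity (e.g. IIS_IUSRS):

      ; php.ini
      upload_tmp_dir = "D:\work\websites\php-temp"

    WordPress then also needs wp-content\uploads to be writable so it can move the finished upload out of the temp folder.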

    Read the article

  • htpasswd not working when set up in the httpd.conf file

    - by Shamoon
    My httpd.conf file looks like: <Directory "/path/to/mysite"> AuthType Basic AuthName "Restricted Files" AuthUserFile "/path/to/.htpasswd" Require user valid-user Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> I generated my .htpasswd file using the htpasswd command: $ htpasswd ~/.htpasswd myuser So now when I restart Apache, it prompts for a username and password; however, when I type them in, it just prompts again. Any help would be appreciated. Thanks. My .htpasswd file looks like: myuser:$aaa1$rsU3A8zu$1xiIou2elcL3QLIPhzsaj0
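    One thing worth checking in the block above: Require expects either valid-user on its own or user followed by specific usernames, so "Require user valid-user" tells Apache to accept only a literal account named "valid-user". The two valid forms look like this:

      # accept any account listed in the password file
      Require valid-user

      # or accept only a specific account
      Require user myuser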

    Read the article

  • How to bind a domain for MS Project Server 2010?

    - by Gk
    I've installed MS Project Server 2010 and have to connect via a URL like this one: http://mysite/pwa/ I want to connect using a new domain like this: http://newsite/. I can use redirection settings in IIS, but then I cannot connect with the Project client. Is there any way to do that? Thanks.
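    Because Project Server 2010 sits on SharePoint 2010, the supported route is usually an Alternate Access Mapping plus an IIS host-header binding for the new name, rather than an IIS redirect (which, as noted, the Project client doesn't handle). A hedged PowerShell sketch; the zone and URLs are assumptions:

      # SharePoint 2010 Management Shell
      New-SPAlternateURL -Url "http://newsite" -WebApplication "http://mysite" -Zone Intranet
      # then add an IIS binding for the "newsite" host header on the PWA web application
      # and connect Project Professional to http://newsite/pwa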

    Read the article

  • Set HTTP condition for redirect rule

    - by Török Gábor
    I have a redirect rule in my .htaccess that forwards the user agent from A.html to B.html using the following pattern: Redirect 301 /A.html http://mysite.com/B.html Since the Redirect directive requires the target host to be set, is it possible to have this rule apply only on a specific host? I have both a test and a deploy domain, and only want it on the deploy domain. I can set conditions for rewrite rules, but how can I do the same for Redirects?
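    mod_alias's Redirect has no notion of a host condition, so the usual workaround is to express the same redirect with mod_rewrite and a RewriteCond on the host; a sketch, with the deploy host name assumed:

      RewriteEngine On
      # only redirect on the deploy domain; the test domain is left alone
      RewriteCond %{HTTP_HOST} ^(www\.)?mysite\.com$ [NC]
      RewriteRule ^A\.html$ http://mysite.com/B.html [R=301,L]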

    Read the article

  • Hide .php add a slash

    - by Matthew
    This script works perfectly: it forces the trailing slash and hides the .php extension, but it does not redirect people going directly to the .php extension. How can I also force people going directly to file.php to /file/? RewriteEngine On RewriteRule ^(.*)/$ /$1.php [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_URI} !(.*)/$ RewriteRule ^(.*)$ http://www.mysite.com/$1/ [R=301,L]
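    One hedged way to add that: match the client's original request line (THE_REQUEST) rather than the rewritten URI, so the internal /file/ to file.php rewrite can't loop back into the redirect. Sketch:

      RewriteEngine On
      # externally redirect direct /file.php requests to /file/
      RewriteCond %{THE_REQUEST} \s/+([^.?\s]+)\.php[\s?] [NC]
      RewriteRule ^ /%1/ [R=301,L]

      # existing rules from the question
      RewriteRule ^(.*)/$ /$1.php [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_URI} !(.*)/$
      RewriteRule ^(.*)$ http://www.mysite.com/$1/ [R=301,L]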

    Read the article

  • Apache log - file does not exist

    - by Ivan
    I have quite a few of these piling up in the Apache logs every day: [Mon Jun 09 20:42:58 2014] [error] [client 180.153.214.181] File does not exist: /home/user/public_html/ajax.googleapis.com, referer: http://www.mysite.com//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js I have over 200k visitors per day, but only a few of them, a dozen or so, are generating the above error. I can't figure out what may be causing it. I've checked the HTML code and it's all good, so I've run out of ideas.
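    A likely (though hedged) explanation: a protocol-relative reference such as src="//ajax.googleapis.com/..." being treated as a site-relative path by a handful of old clients or bots, which then fetch it from your own host. If the log noise is worth silencing, those requests can be bounced to the real CDN; an .htaccess sketch:

      RewriteEngine On
      # send mistaken /ajax.googleapis.com/... requests to the actual CDN
      RewriteRule ^ajax\.googleapis\.com/(.*)$ https://ajax.googleapis.com/$1 [R=301,L]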

    Read the article

  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm basically working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with VPS, I'm planning to move from VPS to Cloud when I hit the 1000 subscribers barrier. I'm looking at Windows Azure but I'm ok with other suggestions. So here are my questions: Will it really cost me a kidney? Every subscriber needs to download around 4-5MB of static resources each day. Bandwidth is free on the VPS but here I see costs can easily get to $800.00/mo; this makes me very insecure about the whole thing, I mean VPS is just $2,000/yr. Do I need another VM or is PHP included in the Web Sites? I have basic sysadmin skills, I think I can handle setting up a PHP install, but will I have to do this? If yes, what other service do I need to setup manually? What about Memcached, MySQL, etc? What security protections does it include? For example I have some basic protection included, like directory traversals and executable files upload; I also have CloudFlare on my other websites for DDoS protection; will I need to do the same thing here too, can it even be installed, can I edit my DNS records, etc? How are e-mails, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); do I have a similar management interface here? Do I get SSH access? Or at least FTP, remote MySQL access and maybe some incremental back-ups or something? Can I see my quotas and advanced traffic info? I must mention that I really like the idea of the whole "cloud" concept, the added reliability and everything but I really need maybe a parallel to regular hosting or something so I know what to expect.

    Read the article

  • Append symbolic link to served media

    - by Hellnar
    Hello, I have a non-served folder, nonserved/ (containing folder1/ and folder2/), and a folder served via Apache, media/ (containing js/, css/ and img/). In the end, I want to include/append the contents of /nonserved to /media so that www.mysite.com/media contains: /js /css /img /folder1 /folder2. I am running Ubuntu Server, and I am up for either an Apache config or a symbolic link based answer :) Plus, the nonserved folder is rather dynamic, so manually symlinking each folder is impossible.
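    A hedged sketch of one approach: keep the symlinks, but have a small script regenerate them (from cron, or whenever nonserved changes) so new folders are picked up automatically. The paths are assumptions, and Apache needs Options +FollowSymLinks on /media:

      #!/bin/sh
      # refresh a symlink under media/ for every top-level folder in nonserved/
      NONSERVED=/var/www/nonserved
      MEDIA=/var/www/media
      for dir in "$NONSERVED"/*/; do
          name=$(basename "$dir")
          ln -sfn "$dir" "$MEDIA/$name"
      done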

    Read the article

  • Allowing Apache in Ubuntu to access files in NTFS hard drive

    - by lyrae
    I have LAMP running in Ubuntu. However, my files are located on a separate NTFS hard drive (/media/shared/mysite/). Going to http://localhost gives me a 403. How can I, securely, allow Apache to read/write the NTFS disk? 'shared' is currently being mounted when the system boots. Here's the entry in fstab: /dev/sda1 /media/shared ntfs-3g quiet,defaults,locale=en_US.utf8,umask=000 0 0
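    NTFS has no native Unix permissions, so ownership is decided entirely by the mount options; a hedged fstab sketch that hands the files to www-data (uid/gid 33 on stock Ubuntu) instead of leaving them world-writable via umask=000:

      # /etc/fstab
      /dev/sda1  /media/shared  ntfs-3g  defaults,locale=en_US.utf8,uid=33,gid=33,umask=007  0  0

    The 403 can also come from the Apache side: make sure the <Directory> block covering /media/shared/mysite (or whatever the vhost's DocumentRoot is) allows access, e.g. with Order allow,deny and allow from all on Apache 2.2.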

    Read the article

  • Subdomain redirect

    - by dfilkovi
    When I enter a dummy subdomain like test.mysite.com I always get my site, even though that subdomain does not exist. Where do I tweak this in WHM so that if a subdomain does not exist, the user gets a "not found" error or something similar?

    Read the article

  • .htaccess help required for Apache server

    - by mathew
    I'm searching for redirection code for my URL. What I want is that when someone searches on my site, it should redirect. For example, if someone searches for google.com on my site, then the address line should look like www.mydomain.com/google.com. The search term can come in via $_POST or $_GET. How do I do that?
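    A hedged PHP sketch of the redirect side; the form field name "q" is an assumption, and a matching .htaccess rewrite would still be needed to route www.mydomain.com/google.com back to the search script:

      <?php
      // search.php (sketch): send the browser to a URL containing the search term
      $term = isset($_POST['q']) ? $_POST['q'] : (isset($_GET['q']) ? $_GET['q'] : '');
      $term = trim($term);
      if ($term !== '') {
          header('Location: http://www.mydomain.com/' . rawurlencode($term), true, 301);
          exit;
      }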

    Read the article

  • jQuery autocomplete not always working on elements

    - by PoweRoy
    I'm trying to create a greasemonkey script (for Opera) to add autocomplete to input elements found on a webpage but it's not completely working. I first got the autocomplete plugin working: // ==UserScript== // @name autocomplete // @description autocomplete // @include * // ==/UserScript== // Add jQuery var GM_JQ = document.createElement('script'); GM_JQ.src = 'http://jquery.com/src/jquery-latest.js'; GM_JQ.type = 'text/javascript'; document.getElementsByTagName('head')[0].appendChild(GM_JQ); var GM_CSS = document.createElement('link'); GM_CSS.rel = 'stylesheet'; GM_CSS.href = 'http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.css'; document.getElementsByTagName('head')[0].appendChild(GM_CSS); var GM_JQ_autocomplete = document.createElement('script'); GM_JQ_autocomplete.type = 'text/javascript'; GM_JQ_autocomplete.src = 'http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js'; document.getElementsByTagName('head')[0].appendChild(GM_JQ_autocomplete); // Check if jQuery's loaded function GM_wait() { if(typeof window.jQuery == 'undefined') { window.setTimeout(GM_wait,100); } else { $ = window.jQuery; letsJQuery(); } } GM_wait(); function letsJQuery() { $("input[type='text']").each(function(index) { $(this).val("test autocomplete"); }); $("input[type='text']").autocomplete("http://mysite/jquery_autocomplete.php", { dataType: 'jsonp', parse: function(data) { var rows = new Array(); for(var i=0; i<data.length; i++){ rows[i] = { data:data[i], value:data[i], result:data[i] }; } return rows; }, formatItem: function(row, position, length) { return row; }, }); } I see the 'test autocomplete' but using the Opera debugger(firefly) I don't see any communication to my php page. (yes mysite is fictional, but it works here) Trying it on my own page: <body> no autocomplete: <input type="text" name="q1" id="script_1"><br> autocomplete on: <input type="text" name="q2" id="script_2" autocomplete="on"><br> autocomplete off: <input type="text" name="q3" id="script_3" autocomplete="off"><br> autocomplete off: <input type="text" name="q4" id="script_4" autocomplete="off"><br> </body> This works, but when trying on another pages it sometimes won't: e.g. http://spitsnieuws.nl/ works but http://nu.nl and http://dumpert.nl don't work. 
Trying the autocomplete of jquery ui has more problems: // ==UserScript== // @name autocomplete // @description autocomplete // @include * // ==/UserScript== // Add jQuery var GM_JQ = document.createElement('script'); GM_JQ.src = 'http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js'; GM_JQ.type = 'text/javascript'; document.getElementsByTagName('head')[0].appendChild(GM_JQ); var GM_CSS = document.createElement('link'); GM_CSS.rel = 'stylesheet'; GM_CSS.href = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css'; document.getElementsByTagName('head')[0].appendChild(GM_CSS); var GM_JQ_autocomplete = document.createElement('script'); GM_JQ_autocomplete.type = 'text/javascript'; GM_JQ_autocomplete.src = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js'; document.getElementsByTagName('head')[0].appendChild(GM_JQ_autocomplete); // Check if jQuery's loaded function GM_wait() { if(typeof window.jQuery == 'undefined') { window.setTimeout(GM_wait,100); } else { $ = window.jQuery; letsJQuery(); } } GM_wait(); // All your GM code must be inside this function function letsJQuery() { $("input[type='text']").each(function(index) { $(this).val("test autocomplete"); }); $("input[type='text']").autocomplete({ source: function(request, response) { $.ajax({ url: "http://mysite/jquery_autocomplete.php", dataType: "jsonp", success: function(data) { response($.map(data, function(item) { return { label: item, value: item } })) } }) } }); } This will work on my html page, http://spitsnieuws.nl and http://dumpert.nl but not on http://nu.nl. (dumpert didn't work on the plugin autocomplete) //http://spitsnieuws.nl <input class="frmtxt ac_input" type="text" id="zktxt" name="query" autocomplete="off"> //http://dumpert.nl <input type="text" name="srchtxt" id="srchtxt"> //http://nu.nl <input id="zoekfield" name="q" type="text" value="Zoek nieuws" onfocus="this.select()" type="text"> Anyone know why the autocomplete functionality doesn't work? Why the request to the php page is not being made? And why I can't add my autocomplete to google.com?
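    One hedged avenue when an injected script works on some sites but not others: pages like these often load their own jQuery (or Prototype), and sharing the global $ / window.jQuery between the page and the userscript can leave the autocomplete plugin attached to the wrong instance. Taking a private reference with noConflict isolates the userscript; a sketch of the change inside letsJQuery():

      // after the injected jQuery and the autocomplete plugin have loaded
      var gmjQuery = window.jQuery.noConflict(true); // hand $ and jQuery back to the page
      gmjQuery(function ($) {
          $("input[type='text']").autocomplete({
              // same source/options as in the question
          });
      });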

    Read the article
