Search Results

Search found 3773 results on 151 pages for 'paul james'.


  • Tinyurl API Example - Am I doing it right :D

    - by Paul Weber
    Hi ... we use very long hashes for the registration of new users in our application. The problem is that these hashes break in some email clients, making the links unusable. I tried implementing the TinyURL API with a simple call, but I think it sometimes times out, and sometimes the mail does not reach the user. I updated the code, but now the URL is never converted. Is TinyURL really so slow, or am I doing something wrong? (I mean, hey, 5 seconds is a lot these days.) Can anybody recommend a more reliable service?

    Update: all my fault, I forgot the false argument in the fopen call. But I will leave this snippet here, because I often see this one-liner, and I don't think it works very reliably:

        return file_get_contents('http://tinyurl.com/api-create.php?url='.$u);

    Here is the, I think, fully working sample. I would like to hear about improvements:

        static function gettinyurl( $url ) {
            $context = stream_context_create( array(
                'http' => array(
                    'timeout' => 5  // 5 seconds should be enough
                )
            ) );
            // open (read) api-create.php with the long URL as a GET parameter;
            // the third argument (use_include_path) must be false so that the
            // context can be passed as the fourth argument
            $fp = fopen( 'http://tinyurl.com/api-create.php?url=' . $url, 'r', false, $context );
            if ( $fp ) {                               // check that the open succeeded
                $tinyurl = fgets( $fp );               // read the response
                if ( $tinyurl && !empty( $tinyurl ) )  // check that the response is non-empty
                    $url = $tinyurl;                   // use the shortened URL
                fclose( $fp );                         // close the connection
            }
            return $url;  // return the (tiny) URL, or the original URL on failure
        }


  • C# WebClient OpenRead URL

    - by Octopus-Paul
    So I have this program that fetches a page using a short link (I used a Google URL shortener). To build my example I used the code from "Using WebClient in C# is there a way to get the URL of a site after being redirected?":

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Net;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    MyWebClient client = new MyWebClient();
                    client.OpenRead("http://tinyurl.com/345yj7x");
                    Uri uri = client.ResponseUri;
                    Console.WriteLine(uri.AbsoluteUri);
                    Console.Read();
                }
            }

            class MyWebClient : WebClient
            {
                Uri _responseUri;

                public Uri ResponseUri
                {
                    get { return _responseUri; }
                }

                protected override WebResponse GetWebResponse(WebRequest request)
                {
                    WebResponse response = base.GetWebResponse(request);
                    _responseUri = response.ResponseUri;
                    return response;
                }
            }
        }

    One thing I do not understand: when I call client.OpenRead("http://tinyurl.com/345yj7x"), does this download the page that the URL points to? If this method downloads the page, I need something that gets me only the URL. If there is a method to get only some headers, or only the URL, please let me know.
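    One hedged way to do exactly that, sketched with standard System.Net calls only (untested against this particular shortener): issue a HEAD request, which transfers headers but no body, let HttpWebRequest follow the redirects as it does by default, and read the final ResponseUri.

        using System;
        using System.Net;

        class ShortUrlResolver
        {
            // Resolve a short URL to its final destination without downloading
            // the body: HEAD asks the server for headers only, and HttpWebRequest
            // follows redirects automatically (AllowAutoRedirect defaults to true).
            static Uri Resolve(string shortUrl)
            {
                var request = (HttpWebRequest)WebRequest.Create(shortUrl);
                request.Method = "HEAD";
                using (WebResponse response = request.GetResponse())
                {
                    return response.ResponseUri; // the URI after all redirects
                }
            }

            static void Main()
            {
                Console.WriteLine(Resolve("http://tinyurl.com/345yj7x"));
            }
        }

    Note that some servers reject HEAD requests; the fallback is an ordinary GET whose response you close without reading the stream.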


  • How to stop .Net HttpWebRequest.GetResponse() raising an exception

    - by James
    Surely, surely, surely there is a way to configure the .Net HttpWebRequest object so that it does not raise an exception when HttpWebRequest.GetResponse() is called and a 300- or 400-range status code is returned? Jon Skeet does not think so, so I almost dare not even ask, but I find it hard to believe there is no way around this. 300 and 400 response codes are valid responses in certain circumstances, so why should we always be forced to incur the overhead of an exception? Perhaps there is some obscure configuration setting that evaded Jon Skeet? Perhaps there is a completely different type of request object that does not have this behavior? (And yes, I know you can just catch the exception and get the response from that, but I would like to find a way not to have to.) Thanks for any help.
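    For reference, a sketch of the catch-and-unwrap helper the question hopes to avoid; as far as the research above goes there is no configuration flag, so the best available option is to pay for the exception once, in a single place:

        using System.Net;

        static class HttpHelper
        {
            // Unwrap the WebException that GetResponse() throws for 3xx/4xx
            // status codes and hand back the response it carries, so callers
            // see those responses as ordinary return values.
            public static HttpWebResponse GetResponseNoThrow(HttpWebRequest request)
            {
                try
                {
                    return (HttpWebResponse)request.GetResponse();
                }
                catch (WebException ex)
                {
                    var response = ex.Response as HttpWebResponse;
                    if (response != null)
                        return response; // the server did respond; return it
                    throw; // no response attached (e.g. DNS failure): rethrow
                }
            }
        }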


  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from the Management Data Warehouse functionality built into SQL Server 2008, through commercial products such as Confio Ignite 8, to rolling my own solution using perfmon, performance counters, and information collected from the dynamic management views and functions. What I'm finding is that whilst each of these approaches has its own strengths, each has associated weaknesses too. I feel that to get people within the organisation to take the monitoring of SQL Server performance seriously, whatever solution we roll out has to be very simple and quick to use, must provide some form of dashboard, and the act of monitoring must have minimal impact on the production databases (and, perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task. Any recommendations?


  • Efficiently making web pages from multiple servers

    - by james.bcn
    I want to create a service that allows diverse web site owners to integrate material from my web servers into content served from their own servers. Ideally the resulting web page would be delivered only from the web site owner's server, and the included content would be seen by Google as part of that site, which I think rules out iframes or client-side JavaScript to pull the content from my server (although I may be wrong about that?). Also, the data wouldn't actually be updated very often, say once a day, so it would be inefficient to fetch it from my web servers on every request. Finally, the method needs to be as simple as possible, so that it is easy for web site owners to integrate into their own sites. Are there any good methods for doing this sort of thing? If not, then I guess the simple way is with iframes or JavaScript.
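    One pattern that fits those constraints, sketched here with a hypothetical URL and cache path: the owner's server fetches the provider's content at most once a day, stores it on disk, and inlines the cached copy into its own pages, so search engines see it as first-party content served from the owner's domain.

        using System;
        using System.IO;
        using System.Net;

        class CachedInclude
        {
            // Fetch remote content at most once per maxAge; serve the cached
            // copy otherwise. The URL and cache path are placeholders.
            static string GetContent(string remoteUrl, string cachePath, TimeSpan maxAge)
            {
                if (File.Exists(cachePath) &&
                    DateTime.UtcNow - File.GetLastWriteTimeUtc(cachePath) < maxAge)
                {
                    return File.ReadAllText(cachePath); // cache still fresh
                }
                using (var client = new WebClient())
                {
                    string content = client.DownloadString(remoteUrl);
                    File.WriteAllText(cachePath, content); // refresh the cache
                    return content;
                }
            }

            static void Main()
            {
                Console.WriteLine(GetContent("http://provider.example/widget.html",
                                             "widget.cache.html",
                                             TimeSpan.FromHours(24)));
            }
        }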


  • JavaScript AOP advice on statements

    - by Paul
    Some JavaScript libraries, such as jQuery and Dojo, provide AOP APIs that can apply before, after, or around advice to a function. I'm just wondering whether there are any JavaScript AOP libraries that can apply such advice to an individual statement?


  • Using IvyDE with different workspaces on different branches

    - by James Woods
    I am having problems using IvyDE when I have different workspaces for different branches. I have "Resolve dependencies in workspace" switched on, but every time I change to a different workspace I have to remember to clean the caches out manually. This is because IvyDE always uses the default cache for resolving dependencies within a workspace, so when switching between workspaces the cache can be polluted by different versions. It seems to be impossible to work with two different workspaces at the same time. I cannot find a way to configure the location that IvyDE uses to cache the project dependencies; it does not appear to use the caches defined in ivysettings.xml.


  • Block users using auto-clickers

    - by James Simpson
    I'm having some problems with users cheating in my online game by using macros to automatically click certain spots on the screen in a certain order, automating various tasks without actually having to play the game. Are there any methods that can be used to block this kind of activity without plastering CAPTCHAs all over the site and ruining the experience for honest users?
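    Not a complete defense, but one server-side heuristic that avoids CAPTCHAs, sketched under the assumption that click events are timestamped on the server: humans click with jitter, macros with near-constant intervals, so sessions whose inter-click timing variance is implausibly low can be flagged for review. The window size and threshold below are invented for illustration and would need tuning against real traffic.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ClickMonitor
        {
            readonly Queue<DateTime> recentClicks = new Queue<DateTime>();

            // Record a click and report whether the last 20 intervals look machine-like.
            public bool LooksAutomated(DateTime clickTime)
            {
                recentClicks.Enqueue(clickTime);
                if (recentClicks.Count > 21) recentClicks.Dequeue();
                if (recentClicks.Count < 21) return false; // not enough data yet

                double[] intervals = recentClicks
                    .Zip(recentClicks.Skip(1), (a, b) => (b - a).TotalMilliseconds)
                    .ToArray();
                double mean = intervals.Average();
                double variance = intervals.Average(i => (i - mean) * (i - mean));
                // Hypothetical threshold: a standard deviation under 5 ms
                // across 20 clicks is hard for a human to produce.
                return Math.Sqrt(variance) < 5.0;
            }
        }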


  • WordPress Navigation

    - by James Pellerano
    I am working on a WordPress theme and am having a little trouble creating the navigation. The navigation I am looking for looks like the one at www.neu.edu/humanities. I have gotten this far:

        if (is_front_page()) {
            wp_list_pages('title_li=&exclude=12&depth=1');
        } else {
            // display the subpages of the current page while
            // display all of the main pages and all of the
            // and display the parent pages while on the subpages
        }


  • SQL foreign keys

    - by Paul Est
    I created these tables with the following syntax in phpMyAdmin:

        DROP TABLE IF EXISTS users;
        DROP TABLE IF EXISTS info;

        CREATE TABLE users (
            user_id int unsigned NOT NULL auto_increment,
            email varchar(100) NOT NULL default '',
            pwd varchar(32) NOT NULL default '',
            isAdmin int(1) unsigned NOT NULL,
            PRIMARY KEY (user_id)
        ) TYPE=INNODB;

        CREATE TABLE info (
            info_id int unsigned NOT NULL auto_increment,
            first_name varchar(100) NOT NULL default '',
            last_name varchar(100) NOT NULL default '',
            address varchar(300) NOT NULL default '',
            zipcode varchar(100) NOT NULL default '',
            personal_phone varchar(100) NOT NULL default '',
            mobilephone varchar(100) NOT NULL default '',
            faxe varchar(100) NOT NULL default '',
            email2 varchar(100) NOT NULL default '',
            country varchar(100) NOT NULL default '',
            sex varchar(1) NOT NULL default '',
            birth varchar(1) NOT NULL default '',
            email varchar(100) NOT NULL default '',
            PRIMARY KEY (info_id),
            FOREIGN KEY (email) REFERENCES users(email)
                ON UPDATE CASCADE ON DELETE CASCADE
        ) TYPE=INNODB;

    But it shows the error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'TYPE=INNODB' at line 11

    If I remove the TYPE=INNODB at the end of the CREATE TABLE statements, it shows the error:

        #1005 - Can't create table 'curriculo.info' (errno: 150)


  • More efficient approach to XSLT for-each

    - by Paul
    I have an XSLT which takes a '.'-delimited string and splits it into two fields for a SQL statement:

        <xsl:for-each select="tokenize(Path,'\.')">
          <xsl:choose>
            <xsl:when test="position() = 1 and position() = last()">SITE = '<xsl:value-of select="."/>' AND PATH = ''</xsl:when>
            <xsl:when test="position() = 1 and position() != last()">SITE = '<xsl:value-of select="."/>' </xsl:when>
            <xsl:when test="position() = 2 and position() = last()">AND PATH = '<xsl:value-of select="."/>' </xsl:when>
            <xsl:when test="position() = 2">AND PATH = '<xsl:value-of select="."/></xsl:when>
            <xsl:when test="position() > 2 and position() != last()">.<xsl:value-of select="."/></xsl:when>
            <xsl:when test="position() > 2 and position() = last()">.<xsl:value-of select="."/>' </xsl:when>
            <xsl:otherwise>zxyarglfaux</xsl:otherwise>
          </xsl:choose>
        </xsl:for-each>

    The results are as follows:

        INPUT: North         OUTPUT: SITE = 'North' AND PATH = ''
        INPUT: North.A       OUTPUT: SITE = 'North' AND PATH = 'A'
        INPUT: North.A.B     OUTPUT: SITE = 'North' AND PATH = 'A.B'
        INPUT: North.A.B.C   OUTPUT: SITE = 'North' AND PATH = 'A.B.C'

    This works, but is very lengthy. Can anyone see a more efficient approach? Thanks!


  • Custom keys with NERDComment plugin and remapped Leader?

    - by Paul Wicks
    I'm trying to set up the NERDComment plugin in vim, but I'm having some trouble with the keys. I'd like to set the basic toggle functionality (comment a line if it's uncommented, uncomment it if it's commented) to be c. The problem is that I've remapped the Leader to ',', which is the same key that NERD wants for all of its hotkeys. Does anyone have any idea how to set this up?


  • Ruby socket server thread question: how to send to all clients?

    - by Paul
    I'm making a TCP socket server in Ruby (1.8.7). A thread is created for each connected client. At some point I try to send data to all connected clients, and the thread aborts on an exception while trying:

        require 'socket' # I test it at home right now

        server = TCPServer.new('localhost', 12345);

        while (session = server.accept)
          # Here is the thread being created
          Thread.new(session) do |s|
            while (msg = s.gets)
              # Here is the part that causes the error
              Thread.list.each { |aThread|
                if aThread != Thread.current
                  # What I want is to echo the message from one client to all
                  # others, but for some reason it aborts on the following line
                  aThread.print "#{msg}\0"
                end
              }
            end
          end
          Thread.abort_on_exception = true
        end

    What am I doing wrong?
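    The immediate cause of the exception is most likely that Ruby threads have no public print method; the broadcast has to go to the client sockets, not the threads. A sketch of that socket-list pattern (expressed in C# as a neutral illustration; the structure maps directly back to Ruby by keeping an array of sessions instead of using Thread.list):

        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading;

        class BroadcastServer
        {
            // One shared list of connected clients, guarded by a lock.
            static readonly List<TcpClient> clients = new List<TcpClient>();

            static void Main()
            {
                var listener = new TcpListener(IPAddress.Loopback, 12345);
                listener.Start();
                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();
                    lock (clients) clients.Add(client);
                    new Thread(() => Serve(client)).Start(); // one thread per client
                }
            }

            static void Serve(TcpClient client)
            {
                using (var reader = new StreamReader(client.GetStream()))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        byte[] data = Encoding.UTF8.GetBytes(line + "\n");
                        lock (clients)
                        {
                            // Broadcast to every socket except the sender's own.
                            foreach (TcpClient other in clients)
                                if (other != client)
                                    other.GetStream().Write(data, 0, data.Length);
                        }
                    }
                }
                lock (clients) clients.Remove(client);
            }
        }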


  • Execute javascript on IIS server

    - by James Westgate
    I have the following situation: a customer uses JavaScript with jQuery to create a complex website. We would like to use JavaScript and jQuery on the server (IIS) for the following reasons:

    - Skills transfer: we would like to use JavaScript and jQuery on the server rather than, e.g., VBScript / classic ASP. The .NET framework, Java, etc. are ruled out because of this.
    - Improved options for search/accessibility: we would like to use jQuery as a templating system, but this isn't viable for search engines and users with JS turned off, unless we can selectively run this code on the server.
    - There is significant investment in IIS and Windows Server, so changing that is not an option.

    I know you can run JScript on IIS using Windows Script Host, but I am unsure of the scalability and the process surrounding this. I am also unsure whether this would have access to the DOM. Here is a diagram that hopefully explains the situation. I was wondering if anyone has done anything similar?


  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin; even the character encoding is set up correctly! There still seems to be something wrong with my configuration: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently I have the debug settings as follows:

        Debugger: "Cygwin gdb Debugger"
        GDB debugger: gdb
        GDB command file: .gdbinit
        Protocol: Default

    I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

    Debugging: when attempting to terminate the application in debug mode, Eclipse displays the error "Target request failed: failed to interrupt." I can't kill the process either; I have to kill its parent gdb.exe, which in turn kills my application.

    Running: when running normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the error "Terminate failed." I can kill FaceDetector.exe from the task manager.

    Process Explorer: this is what it looks like in Process Explorer (debugging on the left, running on the right).


  • Code to strip diacritical marks using ICU

    - by Paul J. Lucas
    Can somebody please provide some sample code to strip diacritical marks (i.e., replace characters having accents, umlauts, etc. with their unaccented, un-umlauted character equivalents, e.g., every accented é would become a plain ASCII e) from a UnicodeString using the ICU library in C++? E.g.:

        UnicodeString strip_diacritics( UnicodeString const &s ) {
            UnicodeString result;
            // ...
            return result;
        }

    Assume that s has already been normalized. Thanks.
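    The standard algorithm here is library-independent: decompose to NFD, remove the nonspacing (combining) marks, recompose to NFC. A sketch of that pipeline, written in C# rather than ICU C++, purely to illustrate the shape of the solution the question is after:

        using System.Globalization;
        using System.Text;

        static class Diacritics
        {
            // Decompose (NFD), drop combining marks, recompose (NFC):
            // "é" decomposes to "e" + U+0301, and the mark is then removed.
            public static string Strip(string s)
            {
                string decomposed = s.Normalize(NormalizationForm.FormD);
                var sb = new StringBuilder(decomposed.Length);
                foreach (char c in decomposed)
                {
                    if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
                        sb.Append(c); // keep base characters, skip accents/umlauts
                }
                return sb.ToString().Normalize(NormalizationForm.FormC);
            }
        }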


  • Downloading a file in ASP.NET (through the server) while streaming it to the user

    - by James Teare
    My ASP.NET website currently downloads a file from a remote server to a local drive. When users access the site, they have to wait for the server to finish downloading the file before they can download it from my site. Is it possible to stream the download from the remote website through my ASP.NET website directly to the user (a bit like a proxy)? My current code is as follows:

        using (var client = new WebClientEx())
        {
            client.DownloadFile(downloadURL, "outputfile.zip");
        }

    WebClient class:

        public class WebClientEx : WebClient
        {
            public CookieContainer CookieContainer { get; private set; }

            public WebClientEx()
            {
                CookieContainer = new CookieContainer();
            }

            protected override WebRequest GetWebRequest(Uri address)
            {
                var request = base.GetWebRequest(address);
                if (request is HttpWebRequest)
                {
                    (request as HttpWebRequest).CookieContainer = CookieContainer;
                }
                return request;
            }
        }
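    A sketch of the proxy-style streaming the question asks about, assuming an ASP.NET HTTP handler (the handler name, URL, and buffer size are illustrative): open the remote response stream and relay it to the client in chunks, so the user's download begins as soon as the first bytes arrive rather than after the server-side download completes.

        using System.IO;
        using System.Net;
        using System.Web;

        // Hypothetical handler for illustration; register it in web.config as usual.
        public class StreamingDownloadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string downloadUrl = "http://remote.example/outputfile.zip"; // placeholder
                var request = (HttpWebRequest)WebRequest.Create(downloadUrl);
                using (WebResponse response = request.GetResponse())
                using (Stream remote = response.GetResponseStream())
                {
                    context.Response.ContentType = "application/zip";
                    context.Response.AddHeader("Content-Disposition",
                        "attachment; filename=outputfile.zip");
                    byte[] buffer = new byte[64 * 1024];
                    int read;
                    // Relay each chunk as it arrives from the remote server.
                    while ((read = remote.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        context.Response.OutputStream.Write(buffer, 0, read);
                        context.Response.Flush(); // push bytes to the user now
                    }
                }
            }
        }

    Cookies from the WebClientEx above would still need to be attached to the HttpWebRequest if the remote server requires them.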


  • How do chains work in Rainbow tables?

    - by James Moore
    Hello, I was wondering if someone could explain in detail how chains work in rainbow tables, as you would to a complete novice, but with relevance to programming. I understand that a chain is 16 bytes long: 8 bytes mark the starting point and 8 mark the end. I also understand that the filename carries the chain length, e.g. 2400. Does that mean that between our starting point and end point, in just 16 bytes, we have 2400 possible clear texts? What? How does that work? In those 16 bytes, how do I get my 2400 hashes and clear texts, or am I misunderstanding this? Your help is greatly appreciated. Thanks.

    P.S. I have read the related papers and googled this topic a fair bit. I think I'm just missing something important to make these gears turn.
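    To make the numbers concrete, here is a toy chain computation (a sketch only: real tables use a position-dependent reduction tuned to the character set and hash in question, and this one is deliberately simplistic). The table stores just the 8-byte start and 8-byte end; the 2400 intermediate plaintexts are not stored anywhere. They are recomputed on demand by replaying hash-then-reduce from the start point, and that recomputability is exactly how 16 bytes "cover" 2400 plaintext/hash pairs.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class ChainDemo
        {
            // Map a hash back into the plaintext space (here: 8 lowercase letters).
            // Real tables use a different reduction per chain position.
            static string Reduce(byte[] hash, int position)
            {
                var sb = new StringBuilder(8);
                for (int i = 0; i < 8; i++)
                    sb.Append((char)('a' + (hash[i] + position) % 26));
                return sb.ToString();
            }

            static void Main()
            {
                using (MD5 md5 = MD5.Create())
                {
                    string start = "aaaaaaaa";        // 8-byte start point (stored)
                    string current = start;
                    for (int i = 0; i < 2400; i++)    // chain length from the filename
                    {
                        byte[] hash = md5.ComputeHash(Encoding.ASCII.GetBytes(current));
                        current = Reduce(hash, i);    // each step yields a new plaintext
                    }
                    // Only these two values go into the table.
                    Console.WriteLine("start = {0}, end = {1}", start, current);
                }
            }
        }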


  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as follows. This query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
          ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
          ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows:

    1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and
    2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) forcing a HASH JOIN. In both cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.

