Search Results

Search found 285 results on 12 pages for 'pablo santos'.

Page 10/12 | < Previous Page | 6 7 8 9 10 11 12  | Next Page >

  • Cloud Apps and Single Sign-On (AD integration)

    - by Pablo Alvim
    I've been investigating some cloud vendors and the ability to implement single sign-on with them, especially when it comes to AD (Active Directory) integration. So far I've learned that with Azure this is possible through ADFS and the AppFabric Access Control offering. In AWS, since it is possible to create a VPN and treat EC2 instances as a natural extension of a private datacenter, I believe implementing SSO would be rather simple (not sure if I'm right on this one... please correct me if I'm wrong). With App Engine, though, even though there is some documentation on AD synchronization (not full integration) for Google Apps, I'm struggling to find out whether AD integration is possible at all... Is there any strategy for that? Any bit of information on cloud apps and AD integration will be appreciated!

    Read the article

  • Creating an http proxy

    - by Pablo Fernandez
    This might be a beginner question, but I did some googling and couldn't find an answer, so bear with me. I was wondering if there is a resource, tutorial, wiki, etc. that explains how to create an HTTP proxy. It would be good to see it in Ruby, but any language will do if that's not possible (see the sketch after this entry). Thanks a lot.

    Read the article
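
    Since the asker says any language will do, here is a minimal sketch of a forward HTTP proxy in Node.js, using only the built-in http module: it parses the absolute URL from each incoming request, relays the request to the origin server, and streams the response back to the client.

        const http = require('http');

        const proxy = http.createServer((clientReq, clientRes) => {
            // A forward proxy receives the absolute URL in the request line.
            const target = new URL(clientReq.url);
            const upstream = http.request({
                hostname: target.hostname,
                port: target.port || 80,
                path: target.pathname + target.search,
                method: clientReq.method,
                headers: clientReq.headers,
            }, (upstreamRes) => {
                // Relay the origin's status, headers and body back to the client.
                clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
                upstreamRes.pipe(clientRes);
            });
            upstream.on('error', () => clientRes.destroy());
            clientReq.pipe(upstream); // forward the request body, if any
        });

        proxy.listen(8080); // point a browser's HTTP proxy setting at localhost:8080

    This only covers plain HTTP; proxying HTTPS requires handling the CONNECT method and tunnelling, which is left out of the sketch.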

  • Javascript and Twitter API rate limitation? (Changing variable values in a loop)

    - by Pablo
    Hello, I have adapted a script from an example at http://github.com/remy/twitterlib. It's a script that queries my Twitter timeline every 10 seconds and shows only the messages that begin with a musical notation. It's already working, but I don't know whether this is the best way to do it... The Twitter API has a rate limit of 150 requests per IP per hour (queries from the same user). Right now my Twitter API access gets blocked after about 25 minutes because of the 10-second interval between queries. If I use a 25-second interval, I stay below the hourly rate limit, but the first 10 posts are shown very slowly. I think this way I can stay below the Twitter API rate limit and still show the first 10 posts at normal speed: for the first 10 posts, use a 5-second interval between queries; for the rest of the posts, use a 25-second interval. I think that changing the "frecuency" value from 5000 to 25000 somewhere in the code after the 10th query (or after 50 seconds, which is the same) should do it... Can you help me modify the code below to make it work? Thank you in advance. (See the sketch after this entry.)

        var Queue = function (delay, callback) {
            var q = [],
                timer = null,
                processed = {},
                empty = null,
                ignoreRT = twitterlib.filter.format('-"RT @"');

            function process() {
                var item = null;
                if (q.length) {
                    callback(q.shift());
                } else {
                    this.stop();
                    setTimeout(empty, 5000);
                }
                return this;
            }

            return {
                push: function (item) {
                    var green = [], i;
                    if (!(item instanceof Array)) {
                        item = [item];
                    }
                    if (timer == null && q.length == 0) {
                        this.start();
                    }
                    for (i = 0; i < item.length; i++) {
                        if (!processed[item[i].id] && twitterlib.filter.match(item[i], ignoreRT)) {
                            processed[item[i].id] = true;
                            q.push(item[i]);
                        }
                    }
                    q = q.sort(function (a, b) { return a.id > b.id; });
                    return this;
                },
                start: function () {
                    if (timer == null) {
                        timer = setInterval(process, delay);
                    }
                    return this;
                },
                stop: function () {
                    clearInterval(timer);
                    timer = null;
                    return this;
                },
                empty: function (fn) {
                    empty = fn;
                    return this;
                },
                q: q,
                next: process
            };
        };

        // Custom ":below" selector; the comparison operator was stripped from the
        // original post (most likely "$(a).offset().top > y").
        $.extend($.expr[':'], {
            below: function (a, i, m) {
                var y = m[3];
                return $(a).offset().top y;
            }
        });

        // The HTML fragments in this function appear to have been stripped from the
        // original post, leaving empty string concatenations.
        function renderTweet(data) {
            var html = '';
            html += '';
            html += twitterlib.ify.clean(data.text);
            html += '';
            since_id = data.id;
            return html;
        }

        function passToQueue(data) {
            if (data.length) {
                twitterQueue.push(data.reverse());
            }
        }

        var frecuency = 10000; // the lapse between each new Queue
        var since_id = 1;

        var run = function () {
            twitterlib.timeline('twitteruser', { filter: "'?'", limit: 10 }, passToQueue);
        };

        var twitterQueue = new Queue(frecuency, function (item) {
            var tweet = $(renderTweet(item));
            var tweetClone = tweet.clone().hide().css({ visibility: 'hidden' })
                .prependTo('#tweets').slideDown(1000);
            tweet.css({ top: -200, position: 'absolute' }).prependTo('#tweets')
                .animate({ top: 0 }, 1000, function () {
                    tweetClone.css({ visibility: 'visible' });
                    $(this).remove();
                });
            $('#tweets p:below(' + window.innerHeight + ')').remove();
        }).empty(run);

        run();

    Read the article
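
    One way to get the varying delay described above is to drive the polling with a self-rescheduling setTimeout instead of a single fixed-interval timer. A minimal sketch, reusing the run() function from the code above (the rest of the original code is assumed unchanged):

        var queryCount = 0;

        function poll() {
            run();             // the existing function that calls twitterlib.timeline(...)
            queryCount++;
            // 5 seconds between the first 10 queries, 25 seconds afterwards.
            // Note: 10 fast queries plus ~142 slow ones is about 152 per hour,
            // so a slightly longer slow delay (e.g. 26000) stays safely under 150.
            var delay = queryCount < 10 ? 5000 : 25000;
            setTimeout(poll, delay);
        }

        poll(); // replaces the plain run() call at the end of the original code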

  • Real pagination vs Next and Previous buttons

    - by Pablo
    By real pagination I mean something like this when on page 3: <<Previous 1 | 2 | {3} | 4 | 5 |...| 15 | Next>>. By Next and Previous buttons I mean something like this when on page 3: <<Previous Next>>. Performance-wise I'm sure the Previous and Next buttons are better, since unlike real pagination they don't require over-querying the database. By over-querying the database I mean getting more information from the database than you need to display on the page. My theory is that Previous and Next buttons can drastically improve a site's performance, as they only require exactly the information you need to display on a page; please correct me if I'm wrong on this (see the sketch after this entry). So, do users really have a preference between these two options, or is it just developer preference and convenience? Which one do you prefer, and why? *Note: Previous and Next buttons are usually labeled Newer and Older.

    Read the article
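
    For what it's worth, Next/Previous-only paging is usually implemented by asking the data layer for one row more than the page size: if the extra row comes back, a "Next" link is needed, and no COUNT(*) over the whole table is required. A rough sketch in JavaScript, where fetchRows is a hypothetical data-access helper that returns rows ordered by id:

        // fetchRows({ afterId, limit }) is a hypothetical helper that runs something
        // like "... WHERE id > ? ORDER BY id LIMIT ?" against the database.
        async function getPage(fetchRows, pageSize, afterId) {
            // Request one extra row; its presence tells us whether to render "Next".
            const rows = await fetchRows({ afterId: afterId, limit: pageSize + 1 });
            return {
                items: rows.slice(0, pageSize),
                hasNext: rows.length > pageSize
            };
        }

    Numbered ("real") pagination additionally needs the total row count up front to know how many page links to draw, which is the extra querying the asker is worried about.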

  • Receiving SQL Server events from a CLR function

    - by Pablo Lerner
    I wrote a CLR class with several methods, which are linked as functions in a SQL Server 2005 database. When several of these functions are used within the scope of one transaction or connection, I need another one to be executed automatically to clean up some stuff at the time the transaction or connection closes (either time is good for now; later I will decide which is best). I figure that receiving events in another new CLR function could do it, but I don't know how to achieve that. Can anyone point me to information on modules, documents, or whatever else can help me understand how to receive transaction or connection closing events in a CLR class, or how to execute a particular function when these events occur?

    Read the article

  • git divergent renaming

    - by pablo
    Hi, I'd like to know how you handle a situation like this in Git: (1) create branch task001 from master; (2) on master, modify foo.c and rename it to bar.c; (3) on task001, modify foo.c and rename it to moo.c; (4) merge task001 into master. What Git tells me is: CONFLICT (rename/rename): Rename "foo.c"->"bar.c" in branch "HEAD" rename "foo.cs"->"moo.c" in "task001" Automatic merge failed; fix conflicts and then commit the result. How should I solve it? I mean, I still want to merge the two files once the name conflict is resolved. Thanks.

    Read the article

  • WPF: How do I bind a Control to a formula composed of several dependency properties?

    - by Pablo
    Hi all, I'm working in Expression Blend and I'm currently designing a custom control that has a Grid with 5 rows inside, plus two dependency properties: "Value" and "Maximum". Three of the rows have fixed heights, and what I'm trying to do is set the remaining rows' heights to "Value/Maximum" and "1-Value/Maximum" respectively. How do I go about doing that? When I set the height to "Value" it seems to react, but when I set it to "Value/Maximum" it stops working. I'm still a bit new to WPF, so there must be another way to achieve what I'm intending, but after searching I couldn't find my problem discussed elsewhere. Code:

        <Grid x:Name="LayoutRoot" Width="Auto" Background="Transparent">
            <Grid.RowDefinitions>
                <RowDefinition Height="32"/>
                <RowDefinition Height="{Binding Path=(Value/Maximum), ElementName=UserControl, Mode=Default}"/>
                <RowDefinition Height="16"/>
                <RowDefinition Height="{Binding Path=(1-Value/Maximum), ElementName=UserControl, Mode=Default}"/>
                <RowDefinition Height="32"/>
            </Grid.RowDefinitions>
            (...)

    By the way, Value is always a non-negative double less than or equal to Maximum, so the result of the division will be a number between 0.0 and 1.0. I want a "star" row height instead of a "pixel" one.

    Read the article

  • Java Custom exception throw behaves differently between different Projects

    - by Pablo
    I am attempting to call the following in my code:

        public void checkParticleLightRestriction(Particle parent) throws LightException {
            if (parent == null) {
                throw new LightException("quantum-particle-restrict.23", this);
            }
        }

    In one project the exception is thrown and the effect is similar to calling "return", whereby I am returned to the point immediately after where this method was called. However, in another project I get thrown completely out of the current package, to a point well before the call to this method. It's like instead of being kicked out of a bar I am being deported all the way out of the country. My option is to wrap the throw in a try/catch, but I am wondering why there is this difference in behaviour between the two projects?

    Read the article

  • Visual Studio 2008 XML Documentation

    - by Pablo
    Hello, I can't seem to find the "Build" option under the "Configuration Properties" folder in the Property Pages of my project, and I've been looking everywhere trying to figure out how to do it. I followed the directions here: http://msdn.microsoft.com/en-us/magazine/cc302121.aspx and, more specifically, here: http://msdn.microsoft.com/en-us/library/azt1z1eh.aspx ("To build the XML Documentation sample within Visual Studio" section). I want to generate XML comments for my project in the build, but my "Configuration Properties" window does not have "folders" or a "Build" option under it. I'm using Visual Studio Team System 2008; any ideas? EDIT - This is a website [project-less] and it is written in both C# and VB.NET. Do I need to download any tools? I found GhostDoc but that's not helping me very much.

    Read the article

  • odd behavior setting timeouts inside a function with global references in javascript

    - by Pablo
    Here are the function and the globals:

        $note_instance = Array();
        $note_count = 0;

        function create(text) {
            count = $note_count++;
            time = 5000;
            $note_instance[count] = $notifications.notify("create", text);
            setTimeout(function () {
                $note_instance[count].close();
            }, time);
        }

    The function simply opens a notification and sets a timeout to close it in 5 seconds. So if I call this:

        create("Good Note 1");
        create("Good Note 2");
        create("Good Note 3");

    each note should close 5 seconds after its creation; however, always and only the last note closes, in this case "Good Note 3". Each note object has its own entry in the $note_instance global array, so the timeouts should not be overwriting each other. What am I missing here, folks? Thanks in advance. (See the sketch after this entry.)

    Read the article
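
    What is happening above: count and time are assigned without var, so they are globals shared by every call to create(); by the time the timeouts fire, count holds the index of the last note created, and every closure closes that one note. A minimal sketch of the fix, which is the same code with the two variables declared locally ($notifications is the notification object from the question):

        var $note_instance = [];
        var $note_count = 0;

        function create(text) {
            // "var" gives each call its own count, so each timeout's closure
            // captures the index of the note it was created for.
            var count = $note_count++;
            var time = 5000;
            $note_instance[count] = $notifications.notify("create", text);
            setTimeout(function () {
                $note_instance[count].close();
            }, time);
        }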

  • Storing an encrypted cookie with Rails

    - by J. Pablo Fernández
    I need to store a small piece of data (less than 10 characters) in a cookie in Rails, and I need it to be secure. I don't want anybody to be able to read that piece of data or to inject their own piece of data (as that would open up the app to many kinds of attacks). I think encrypting the contents of the cookie is the way to go (should I also sign it?). What is the best way to do it? Right now I'm doing this, which looks secure, but many things have looked secure to people who knew much more about security than I do, and then turned out not to be. I'm saving the secret this way:

        encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token)
        cookies[:secret] = {
          :value => encryptor.encrypt(secret),
          :domain => "example.com",
          :secure => !(Rails.env.test? || Rails.env.development?)
        }

    and then I'm reading it like this:

        encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token)
        secret = encryptor.decrypt(cookies[:secret])

    Is that secure? Are there better ways of doing it? Update: I know about Rails' session and how it is secured, both by signing the cookie and by optionally storing the contents of the session server side, and I do use the session for what it is for. But my question here is about storing a cookie: a piece of information I do not want in the session but still need to be secure.

    Read the article

  • Relating categories with tags using SQL

    - by Pablo
    I want to be able to find the tags of items under a certain category. The following is an example of my database design:

        images
        +----------+-----+-------------+-----+
        | image_id | ... | category_id | ... |
        +----------+-----+-------------+-----+
        | 1        | ... | 11          | ... |
        | 2        | ... | 12          | ... |
        | 3        | ... | 11          | ... |
        | 4        | ... | 11          | ... |
        +----------+-----+-------------+-----+

        images_tags
        +----------+--------+
        | image_id | tag_id |
        +----------+--------+
        | 1        | 53     |
        | 3        | 54     |
        | 2        | 55     |
        | 1        | 56     |
        | 4        | 57     |
        +----------+--------+

    tags and categories each have their own table relating the id to an actual name (text). So my question is: how will I find out that the images with category_id=11 have the tag_ids 53 54 55 56 57? In other words, how do I find the tags that images in a certain category have?

    Read the article

  • PHP image resize on the fly vs storing resized images

    - by Pablo
    I'm building an image sharing site and would like to know the pros and cons of resizing images on the fly with PHP versus storing the resized images. Which is faster? Which is more reliable? How big is the gap between the two methods in speed and performance? Please note that either way the images go through a PHP script for statistics such as views, or to check whether hotlinking is allowed, etc., so it's not as if there will be a direct link to the images if I opt to store the resized images. I'd appreciate your comments or any helpful links on the subject. Thanks.

    Read the article

  • How to Split a Big Postscript file (3000 pages) into one individual file per page (using Windows 7)?

    - by Pablo
    Hi, I'm having trouble doing the following: I have a big PDF file that I converted to PostScript (for commercial printing). The resulting file is too big to be processed by the printer (the machine). I've been trying to find a way to either: (1) convert the original multi-page PDF file into many PostScript files (one PostScript file per page of the original PDF), or (2) convert from PDF to PS (or even EPS) - I managed to do this - and then split the PS file into a collection of smaller files. I've tried using Ghostscript, but it is all gibberish to me. Thanks. PS: If you have a good GS tutorial (for dummies?), please share the link.

    Read the article
