Search Results

Search found 26427 results on 1058 pages for 'google scripts'.


  • Does git clone work through NTLM proxies?

    - by AndreaG
    I've tried both export http_proxy=http://[username]:[pwd]@[proxy] and git config --global http.proxy http://[username]:[pwd]@[proxy], but I couldn't make it work. It looks like git uses Basic authentication:

      Initialized empty Git repository in /home/.../.git/
      * Couldn't find host github.com in the .netrc file, using defaults
      * About to connect() to github.com port 8080 (#0)
      * Trying 10....
      * Connected to github.com (10....) port 8080 (#0)
      * Proxy auth using Basic with user '...'
      > GET http://github.com/sunlightlabs/fiftystates.git/info/refs HTTP/1.1
      Proxy-Authorization: Basic MD...
      User-Agent: git/1.6.1.2
      Host: github.com
      Pragma: no-cache
      Accept: */*
      Proxy-Connection: Keep-Alive
      < HTTP/1.1 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )
      < Via: 1.1 ...
      < Proxy-Authenticate: Negotiate
      < Proxy-Authenticate: Kerberos
      < Proxy-Authenticate: NTLM
      < Connection: Keep-Alive
      < Proxy-Connection: Keep-Alive
      < Pragma: no-cache
      < Cache-Control: no-cache
      < Content-Type: text/html
      < Content-Length: 4118
      * The requested URL returned error: 407
      * Closing connection #0
      fatal: http://github.com/sunlightlabs/fiftystates.git/info/refs download error - The requested URL returned error: 407

    A Google search returned mixed and probably outdated results. Somewhere it says that curl is (was?) used under the hood, but its options are (were?) hardwired into the code. For example, curl --proxy-ntlm --proxy ...:8080 google.com works, and I'd like to use the same option with git. I need some more definite answers here: has anybody succeeded in using git through Windows proxies? Which version? Thanks.

    Read the article

  • How do I specify an OpenID realm in Spring Security?

    - by Salvin Francis
    We are using Spring Security in our application with support for username/password based authentication as well as OpenID based authentication. The issue is that Google gives a different OpenID for each return URL specified, and we have at least 2 different entry points in our application from which OpenID is configured into our system. Hence we decided to use an OpenID realm. http://blog.stackoverflow.com/2009/0...ue-per-domain/ http://groups.google.com/group/googl...unts-api?pli=1 How is it possible to integrate the realm into our Spring configuration/code? This is how we are doing it in traditional OpenID library code:

      AuthRequest authReq = consumerManager.authenticate(discovered, someReturnToUrl, "http://www.example.com");

    This works and gives the same OpenID for different URLs from our site. Our configuration:

      ...
      <http auto-config="false">
        <!-- <intercept-url> tags are here -->
        <remember-me user-service-ref="someRememberedService" key="some key" />
        <form-login login-page="/Login.html"
                    authentication-failure-url="/Login.html?error=true"
                    always-use-default-target="false"
                    default-target-url="/MainPage.html"/>
        <openid-login authentication-failure-url="/Login.html?error=true"
                      always-use-default-target="true"
                      default-target-url="/MainPage.html"
                      user-service-ref="someOpenIdUserService"/>
      </http>
      ...
      <beans:bean id="someOpenIdUserService" class="com.something.MyOpenIDUserDetailsService">
      </beans:bean>
      <beans:bean id="openIdAuthenticationProvider" class="com.something.MyOpenIDAuthenticationProvider">
        <custom-authentication-provider />
        <beans:property name="userDetailsService" ref="someOpenIdUserService"/>
      </beans:bean>
      ...

    Read the article

  • How to handle refunds or rebates via a payment processor?

    - by Tai Squared
    I need to handle online payments and am trying to choose a payment processor. One requirement is to handle refunds and rebates to the customer. These won't always be at the time of sale, and not for the entire amount of the purchase. Is this something all payment processors handle? I don't want to have to do this manually as there may be many rebates, and they may be for relatively small amounts. I see PayPal has a refund API, but other parts of their site talk about sending a refund within 60 days. Is this something also required by the API? Amazon FPS also has a refund API that seems a bit more flexible. The Google Checkout refund has an amount field, but it's unclear to me if you can do a partial refund as the description reads "The refund-order command instructs Google Checkout to refund the buyer for a particular order." What are some things to look out for when looking for a payment processor that can handle rebates and refunds? Is there always a time limit in issuing these refunds? Is using a merchant account better for this type of process? I was hoping to avoid that due to the increased cost and complexity, but would consider it if it meets all of my requirements.

    Update: It appears the refund process is fairly simple and handled by all processors. Is there any additional information on rebates? I would like to avoid a process of sending live checks to customers, but I will have to send rebates in some small amounts that may be a few months after the initial purchase.

    Read the article

  • Transferring websites from x64 to x86 server

    - by Ke
    Hi, I run an x64 staging server here along with the following: Solr, Java, etc. However, I am about to get a Linode VPS for production and am quickly realising that x86 is the way to go for their lowest RAM package (thinking to upgrade later). My staging server is x64 with 12GB RAM, so going down to 300MB RAM is going to feel devilishly slow ;/ Here are my questions:

    1) Will I have problems transferring my scripts, dbs etc. from an x64 to an x86 server? e.g. Solr indexes

    2) Is it worth going for the x86 package? I am probably going to upgrade later down the line and x64 might be better for the servers with more RAM; should I stick with x64 instead, as there isn't much difference when using it with low RAM?

    Cheers, Ke

    Read the article

  • How important is it to use SSL?

    - by Mark
    Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work with HTTP, but after you log in, it has to remain in HTTPS to prevent session hijacking, doesn't it? Unfortunately, this causes some problems with Google Maps; I get warnings in IE saying "this page contains insecure content". I don't think we can afford Google Maps Premier right now to get their secure service. It's sort of an auction site so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments are done through PayPal though, so I'm not saving any sort of credit card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that. What do you guys suggest I do? Should I take the bulk of the site off HTTPS and just secure certain pages, like wherever you enter your password, and that's it? That's what our competition seems to do.

    Read the article

  • MySQL on a laptop for remote workers - MyISAM keeps corrupting

    - by Jonathon
    We have an application that is used by remote, mobile workers. It installs WAMP (Server2Go) on a laptop and uses MySQL to store data locally. All tables are MyISAM. Once a day, the workers sync the database to our central server via HTTP scripts that query the data and post it to our site. The problem is that many of these laptop database tables are corrupting continually. It appears that MySQL acts like it saves the information (I don't get any query errors), but the table gets corrupted. I have to repair the table constantly (which removes several rows of data in the process). Does anyone have any ideas about how to work around this problem? Would it be wise to switch to InnoDB on the laptops? How about a different database system altogether? I have looked at MySQL Embedded, but it appears to be the same engine as the regular MySQL.
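    As a stopgap rather than a fix, some setups run a repair pass right before the daily sync so the corruption at least doesn't block the upload. This is only a sketch, not the application's actual sync code: it assumes mysqlcheck is on the laptop's PATH, and the account name and password shown are placeholders.

      # Hypothetical pre-sync step: check and auto-repair all MyISAM tables,
      # then let the existing HTTP sync scripts run as usual.
      import subprocess

      subprocess.check_call([
          "mysqlcheck",
          "--check",
          "--auto-repair",
          "--all-databases",
          "--user=sync_user",     # placeholder account
          "--password=secret",    # placeholder; read from a config file in practice
      ])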

    Read the article

  • [Zend YouTubeAPP] failed to upgrade a token

    - by kwokwai
    Hi all, I was testing Zend Gdata 1.10.1 on my localhost. I downloaded Zend Gdata from this link: http://framework.zend.com/download/webservices Inside the Zend Gdata zip file, there was a folder called demos. I extracted it and used the YouTubeVideoApp to upload a sample video to YouTube. But every time after I logged into YouTube, before it redirected me to my localhost, I received a warning message like this:

      localhost: This website is registered with Google to make authorization requests, but has not been configured to send requests securely. We recommend that you continue the process only if you trust the following destination: localhost:8080/youtube/operations.php

    So I googled how to resolve the problem of getting this warning message and saw some people suggest changing the value of $secure to true in operations.php. Here is the script mentioned:

      function generateAuthSubRequestLink($nextUrl = null)
      {
          $scope = 'http://gdata.youtube.com';
          $secure = true;
          $session = true;
          if (!$nextUrl) {
              generateUrlInformation();
              $nextUrl = $_SESSION['operationsUrl'];
          }
          $url = Zend_Gdata_AuthSub::getAuthSubTokenUri($nextUrl, $scope, $secure, $session);
          echo '<a href="' . $url . '"><strong>Click here to authenticate with YouTube</strong></a>';
      }

    After I altered the value of $secure to true, I found that the warning message changed to this:

      localhost: Registered, secure. This website is registered with Google to make authorization requests

    The new warning message is somewhat shorter and looks better than the previous one. But once I pressed the Allow Access button, it turned out to be this:

      ERROR - Token upgrade for CI3M6_Q3EOGkxoL-___wEYjffToQQ failed : Token upgrade failed. Reason: Invalid AuthSub header. Error 401
      ERROR - Unknown search type - ''

    I don't know why this happened. Could you help me solve the problem, please?

    Read the article

  • Colorbox does not refresh its contents when I change URL

    - by Josef Sábl
    I am trying to make use of Colorbox on my webpage. One specific feature does not work as expected, though. In their example ("Outside Webpage - Iframe", specifically), Google appears in the Colorbox when clicked. When you change the href on that link (using Firebug or JavaScript, e.g.) to, let's say, Yahoo, it works as expected and Yahoo is displayed after the click. But not in my case. Once I click the link, I have no way to change the URL. The href, as displayed in the browser, changes (to Yahoo), but a click always opens the first page (Google). What might be the problem? I almost copy-pasted their example:

      <script src="/lib/jquery.js"></script>
      <script src="/lib/jquery.colorbox.js"></script>
      <p><a class='example7' href="http://yahoo.com">Outside Webpage (Iframe)</a></p>
      <script>
        $(document).ready(function(){
          $(".example7").colorbox({width:"80%", height:"80%", iframe:true});
        });
      </script>

    My jQuery version is 1.4.1, but I also tried the same version their example uses (1.3.2).

    Read the article

  • How to macro-ify ant targets?

    - by Jonas Byström
    I want to be able to have different targets doing nearly the same thing, like so:

      ant build   <- this would be a normal (default) build
      ant safari  <- building the safari target

    The targets look like this:

      <target name="build" depends="javac" description="GWT compile to JavaScript">
        <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
          <classpath>
            <pathelement location="src"/>
            <path refid="project.class.path"/>
          </classpath>
          <jvmarg value="-Xmx256M"/>
          <arg value="${lhs.target}"/>
        </java>
      </target>

      <target name="safari" depends="javac" description="GWT compile to Safari/JavaScript">
        <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
          <classpath>
            <pathelement location="src"/>
            <path refid="project.class.path"/>
          </classpath>
          <jvmarg value="-Xmx256M"/>
          <arg value="${lhs.safari.target}"/>
        </java>
      </target>

    (Never mind the first thought that strikes: throw out ant! That's not an option just yet.) I tried using macrodef, but got a strange error message (even though the message didn't imply it, I think it had to do with putting a target inside sequential). I don't want to do ant -Dwhatever=nevermind. Any ideas?

    Read the article

  • Need to run a command against lines in a file that start with ica-tcp

    - by Nick Parsells
    I want to run a command on each line of a file I have; however, it's a bit more complicated than I originally thought. The file contents typically look like this (however, there are sometimes more connections):

      SESSIONNAME       USERNAME       ID      STATE    TYPE     DEVICE
      services                         0       Disc
      console                          1       Conn
                        t-rpal         48      Disc
      ica-tcp#0         bpofiretest    50      Active   wdica
      rdp-tcp#2         a-nparsells    51      Active   rdpwd
      ica-tcp                          65536   Listen
      rdp-tcp                          65537   Listen

    The command I want to run is reset session ica-tcp#0. I also want to run the same command on any additional connections starting with ica-tcp that the script finds in the file. How can I write a script like that in PowerShell? Thanks!
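    A rough sketch of the parse-and-reset loop, written here in Python rather than the requested PowerShell: it assumes the session listing has been saved to sessions.txt, that the reset command is on the PATH, and that only the ica-tcp#N entries (not the ica-tcp listener) should be touched. The same idea maps onto Get-Content plus a foreach in PowerShell.

      # Sketch: reset every session whose name starts with "ica-tcp#".
      import subprocess

      with open("sessions.txt") as listing:
          for line in listing:
              fields = line.split()
              if fields and fields[0].startswith("ica-tcp#"):
                  # e.g. runs: reset session ica-tcp#0
                  subprocess.call(["reset", "session", fields[0]])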

    Read the article

  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip-compress HTML pages before they are sent to the client browser. However, some of our pages are slowish to generate, and it seems that Apache is holding on until it has the complete page, compressing it, and then sending it to the browser. There are big chunks of the page (the main important bits) that are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.

    Read the article

  • When a python process is killed on OSX, why doesn't it kill the child processes?

    - by Hugh
    I found myself getting very confused a while back by some changes that I found when moving Python scripts from Linux over to OSX... On Linux, if a python script has called os.system(), and the calling process is killed, the called process will be killed at the same time. On OSX, however, if the main process is killed, anything that it launched is left behind. Is there something somewhere in OSX/Python where I can change this behaviour? This is causing problems on our render farm, where the processes can be killed from the management GUI, but the top level process is really just a wrapper, so, while the render farm management might think that the process has gone and the machine is freed up for another task, the actual processor-intensive task is still running, which can lead to huge blockages. I know that I could write more logic to catch the kill signal and pass it on to the child processes, but I was hoping that it might be something that could be enabled at a lower level.
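    One way to get the Linux-like behaviour back is to make the wrapper forward its own termination signal to everything it started. The sketch below assumes the wrapper can switch from os.system() to subprocess; the command name is just a placeholder.

      # Sketch: launch the child in its own process group and forward SIGTERM/SIGINT
      # to that whole group when the wrapper itself is asked to die.
      import os
      import signal
      import subprocess
      import sys

      child = subprocess.Popen(["/bin/sh", "-c", "exec some_render_task"],  # placeholder command
                               preexec_fn=os.setsid)

      def forward(signum, frame):
          os.killpg(os.getpgid(child.pid), signum)   # signal the child's process group
          sys.exit(1)

      signal.signal(signal.SIGTERM, forward)
      signal.signal(signal.SIGINT, forward)

      child.wait()

    This only helps with catchable signals such as SIGTERM and SIGINT; a SIGKILL aimed at the wrapper will still leave the children behind on any platform.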

    Read the article

  • What permissions / ownership to set on PHP Sessions Folder when running FastCGI / PHP-FPM (as user "nobody")?

    - by Professor Frink
    I'm having trouble getting a number of scripts running because PHP-FPM can't write to my session folder:

      2009/10/01 23:54:07 [error] 17830#0: *24 FastCGI sent in stderr: "PHP Warning: Unknown: open(/var/lib/php/session/sess_cskfq4godj4ka2a637i5lq41o5, O_RDWR) failed: Permission denied (13) in Unknown on line 0
      PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0" while reading upstream

    Obviously this is a permission issue; my session folder's owner/group is the webserver's user, nginx. PHP-FPM runs as nobody, though, and hence adding it to the nginx group is not so trivial. A temporary solution is to set the permissions of /var/lib/php/session to 777 - I have a feeling that's not the "best practice", though. What is the best practice when you need to give a daemon write access to a folder, but it is running as nobody?

    Read the article

  • Recursively resize images from one directory tree to another?

    - by davr
    I have a large, complex directory tree full of JPG images. I would like to create a second directory tree that exactly mirrors the first, but with all the images resized down to a set size (say 2000x1500 or something) and quality (perhaps 85%). Is there any tool that would allow me to easily do this on Windows? I could write some scripts to automate it with bash and ImageMagick, but first want to see if it's already been done. Faster is better too, as I have thousands of images, so something like Photoshop is probably not a good solution as it might take a couple of seconds per image.
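    If no ready-made tool turns up, the script version is short. Here is a sketch using Python and Pillow rather than bash/ImageMagick; SRC, DST and the size/quality values are placeholders taken from the question.

      # Sketch: mirror SRC into DST, resizing every JPG to fit within 2000x1500 at 85% quality.
      import os
      from PIL import Image

      SRC = r"C:\photos\original"    # placeholder source tree
      DST = r"C:\photos\resized"     # placeholder destination tree

      for root, dirs, files in os.walk(SRC):
          out_dir = os.path.join(DST, os.path.relpath(root, SRC))
          os.makedirs(out_dir, exist_ok=True)
          for name in files:
              if name.lower().endswith((".jpg", ".jpeg")):
                  img = Image.open(os.path.join(root, name))
                  img.thumbnail((2000, 1500))   # keeps aspect ratio, never enlarges
                  img.save(os.path.join(out_dir, name), "JPEG", quality=85)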

    Read the article

  • How do I create a Launcher in Ubuntu 9.10 that runs a shell script?

    - by mkelley33
    Here's my situation:

    - New to Ubuntu (just installed 9.10 Karmic Koala 64-bit)
    - Purpose: to easily run PyCharm without too much typing (i.e. cd... ; ./pycharm.sh)
    - Want to create a desktop Launcher instead of opening a terminal and typing (without resorting to the "Run in Terminal" option)
    - Tried to create a Launcher that executes a .sh script in my Documents directory:
      Right-clicked Desktop > Create Launcher
        a. Type == Application; Browse [insert absolute path to .sh script]; no luck
        b. Type == Application in Terminal; Browse ... ditto

    I'm open to any other alternatives that involve as little typing as possible. I would like to just start Ubuntu, click Launcher icons, and have terminals spring to life, running the intended scripts. Crazy? No. Lazy? Probably. Productive? Hopefully :)
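    One detail that often trips this up is that pycharm.sh needs to be started from its own directory, and the Create Launcher dialog does not set a working directory. A sketch of a .desktop launcher that wraps the cd into the Exec line, written out here by a small Python helper; the PyCharm path is an assumption.

      # Sketch: write ~/Desktop/pycharm.desktop pointing at the pycharm.sh script.
      import os

      entry = """[Desktop Entry]
      Type=Application
      Name=PyCharm
      Exec=/bin/bash -c "cd /home/me/apps/pycharm/bin && ./pycharm.sh"
      Terminal=false
      """

      launcher = os.path.expanduser("~/Desktop/pycharm.desktop")
      with open(launcher, "w") as f:
          f.write(entry)
      os.chmod(launcher, 0o755)   # some desktops expect launcher files to be executable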

    Read the article

  • How to share files between cPanel accounts?

    - by Darren
    I am setting up a multi-site/multi-store Magento installation, and I want each site to have its own cPanel account so I can set up the SSL and dedicated IP properly. I have tried to create a Linux group called 'magento' and changed the files I need to share to that group (and even added the users to that group); however, when I try to access those files through my scripts on the other accounts, it doesn't acknowledge that the files exist. I first made a soft symbolic link, which didn't work, and then tried including them by their real location, but that didn't work either. Am I missing a step in controlling which users can access which files? I added the users to the magento group and, like I said, changed the group of the files I need to share to it, but it's still not working. Thanks, Darren

    Read the article

  • Can I nohup/screen an already-started process?

    - by ojrac
    I'm doing some test-runs of long-running data migration scripts, over SSH. Let's say I start running a script around 4 PM; now, 6 PM rolls around, and I'm cursing myself for not doing this all in screen. Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to/nohup a process I've already started, then why? Something to do with how parent/child processes interact? (I won't accept a "no" answer that doesn't at least address the question of 'why' -- sorry ;) )

    Read the article

  • Robustly disabling specific cron.{hourly,daily,weekly} script

    - by benizi
    On various systems that I administer, there are cron scripts that get run via the commonly-used /etc/cron.{hourly,daily,weekly} layout. What I want to know is whether there's any common 'disable this script' functionality. Obviously, simply deleting something out of a given directory will disable it, but I'm looking for a more permanent solution. Deleting /etc/cron.daily/slocate will work to disable the nightly updatedb on my home machine (where I never use slocate), but next time I upgrade the slocate package, I'm pretty sure it'll reappear. The two distributions I'm most interested in are Gentoo and OpenSUSE, but I'm hoping there's a widely-implemented mechanism. Both distros as I have them use vixie-cron (not sure it matters).

    Read the article

  • udhcpc doesn't assign an IP address

    - by Diab
    I have a board running Linux 2.6.28 with one Ethernet interface (eth0), and I want DHCP to assign a dynamic IP to this interface. I have BusyBox with udhcpc in the file system and the kernel has the "Packet socket" option enabled, so I copied the scripts from "busybox-1.14.1/examples/udhcp" to my board in "/etc/udhcpc/" (I created this directory). When I run:

      ifconfig eth0 up

    the interface is up but without an IP address. Then, running

      udhcpc -i eth0 -s /etc/udhcpc/sample.script

    (note: sample.script contains "exec /etc/udhcpc/sample.$1") I get the following:

      # udhcpc -i eth0 -s /etc/udhcpc/sample.script
      udhcpc (v1.14.1) started
      Sending discover...
      Sending select for 192.168.10.198...
      Lease of 192.168.10.198 obtained, lease time 691200

    But when I check with ifconfig, I can see that it didn't assign the IP address to eth0. Anyone have an idea why udhcpc didn't assign the IP? Thanx

    Read the article

  • Redirecting to a different exe for download based on user agent

    - by Ra
    I own a Linux/Apache site where I host exe files for download. Now, when a user clicks this link to my site (published on another site):

      http://mysite.com/downloads/file.exe

    I need to dynamically check their user agent and redirect them to either

      http://mysite.com/downloads/file-1.exe

    or

      http://mysite.com/downloads/file-2.exe

    It seems to me that I have two options:

    1. Put a .htaccess file stating that .exe files should be considered scripts. Then write a script that checks the user agent and redirects to a real exe placed in another folder. Call this script file.exe.
    2. Use Apache mod_rewrite to point file.exe to redirect.php.

    Which of these is better? Any other considerations? Thanks.
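    For option 1, the "script named file.exe" can be tiny. Below is a sketch of it as a CGI script in Python; the User-Agent test and the /downloads/real/ folder holding the actual exe files are placeholder assumptions, not anything from the original setup.

      #!/usr/bin/env python
      # Sketch of file.exe as a CGI script: pick a download based on the User-Agent
      # and send a redirect. The WOW64/Win64 check is only an illustrative guess.
      import os

      agent = os.environ.get("HTTP_USER_AGENT", "")
      if "WOW64" in agent or "Win64" in agent:
          target = "file-1.exe"
      else:
          target = "file-2.exe"

      print("Status: 302 Found")
      print("Location: http://mysite.com/downloads/real/" + target)
      print()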

    Read the article

  • System information shown when booting Debian

    - by WebDevHobo
    When booting Debian, you'll see it printing a lot of information about the system variables and such. I don't really need to see all that, so I'd like to modify some scripts to make sure that on boot, it just does what it has to do, without printing it all on the screen. Just something I fancy. Of course, still seeing errors would be nice, but that long slur of text I could do without. I've tried looking it up, but I can't find documentation on this specific thing anywhere.

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem

    Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.

    XML Analogues

    I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

      <foo user="me">
        <baz key="zoidberg" value="squid" />
        <baz key="leela" value="cyclops" />
        <baz key="fry" value="rube" />
      </foo>

    And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future.

    JSON Solutions?

    So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

      {
        "firstName": "Bender",
        "lastName": "Robot",
        "age": 200,
        "address": {
          "streetAddress": "123",
          "city": "New York",
          "state": "NY",
          "postalCode": "1729"
        },
        "phoneNumber": [
          { "type": "home", "number": "666 555-1234" },
          { "type": "fax", "number": "666 555-4567" }
        ]
      }

    and wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

      fn:avg(/fn:count(phoneNumber))

    Questions

    - Are there any command-line tools that can "query" JSON files in this way?
    - If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
    - Heck, is there even work being done to make a query language like this for JSON?
    - If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?

    I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed.

    Related Questions

    - Grep and Sed Equivalent for XML Command Line Processing
    - Is there a query language for JSON?
    - JSONPath or other XPath-like utility for JSON/Javascript; or jQuery JSON
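    For the specific average-phone-numbers query, the "language library" approach the question mentions really is only a few lines. A throwaway Python filter, assuming one JSON object per file and file names passed on the command line:

      # Sketch: average number of phoneNumber entries across the JSON files given
      # as arguments (one object per file, as in the example above).
      import json
      import sys

      counts = []
      for path in sys.argv[1:]:
          with open(path) as f:
              person = json.load(f)
          counts.append(len(person.get("phoneNumber", [])))

      if counts:
          print(sum(counts) / float(len(counts)))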

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working as they fetch the rewritten URL. Thanks for any help!

    Read the article

  • Which tool / technology: System management for databases and dependent services

    - by Filburt
    I'm looking for advice on how to enable our team to take down and restart our company systems for maintenance purposes. The scenario includes:

    - several Oracle db machines
    - several MS SQL Server machines with multiple instances
    - Windows services (IIS etc.)
    - a BizTalk EAI solution
    - Apache and Tomcat instances
    - lots of scheduled tasks on Win2003 and Win2008 machines (physical and virtual)

    The main focus is on capturing all dependencies between said databases and the services and tasks connecting to them. At the moment an enterprise-class solution is not an option. We are considering developing a solution driven by PowerShell scripts, but I hope for some more suggestions.

    Read the article

  • Script for running a script

    - by user31568
    Hello everyone. There is a script:

      Dim WSHShell, WinDir, Value, wshProcEnv, fso, Spath
      Set WSHShell = CreateObject("WScript.Shell")
      Dim objFSO, objFileCopy
      Dim strFilePath, strDestination
      Const OverwriteExisting = True
      Set objFSO = CreateObject("Scripting.FileSystemObject")
      Set windir = objFSO.getspecialfolder(0)
      objFSO.CopyFile "\\dv.rt.ru\SYSVOL\DV.RT.RU\scripts\shutdown.vbs", windir & "\", OverwriteExisting

      strComputer = "."
      Set objWMIService = GetObject("winmgmts:" _
          & "{impersonationLevel=impersonate}!\\" _
          & strComputer & "\root\cimv2")
      JobID = "1"
      Set colScheduledJobs = objWMIService.ExecQuery _
          ("Select * from Win32_ScheduledJob")
      For Each objJob in colScheduledJobs
          objJob.Delete
      Next
      Set objNewJob = objWMIService.Get("Win32_ScheduledJob")
      errJobCreate = objNewJob.Create _
          (windir & "\shutdown.vbs", "**093000.000000+660", _
          True, 1 OR 2 OR 4 OR 8 OR 16 OR 32 OR 64, , True, JobId)

    How can I make shutdown.vbs run not just once at 9:00, but repeatedly from 9:00 to 12:00? Thanks

    Read the article
