Search Results

Search found 10107 results on 405 pages for 'remote backups'.

Page 323/405 | < Previous Page | 319 320 321 322 323 324 325 326 327 328 329 330  | Next Page >

  • Problem with date returning wrong day although the timestamp is correct!

    - by Spiros
    I have a bizarre problem with the PHP date function. Code:

        $numDays = 8;
        $date = strtotime('2010-11-06');
        for ($i = 1; $i <= $numDays; $i++) {
            $thisDay = date("D, d M Y", $date);
            print ($thisDay.'<br>');
            $date += 86400; // add one day to timestamp
        }

    Result on my server (localhost, Windows):

        Sat, 06 Nov 2010
        Sun, 07 Nov 2010
        Mon, 08 Nov 2010
        Tue, 09 Nov 2010
        Wed, 10 Nov 2010
        Thu, 11 Nov 2010
        Fri, 12 Nov 2010
        Sat, 13 Nov 2010

    Result on my web server (Linux):

        Sat, 06 Nov 2010
        Sun, 07 Nov 2010
        Sun, 07 Nov 2010
        Mon, 08 Nov 2010
        Tue, 09 Nov 2010
        Wed, 10 Nov 2010
        Thu, 11 Nov 2010
        Fri, 12 Nov 2010

    Notice how Sun, 07 Nov 2010 appears twice on the remote server? Why is this happening? Can anyone explain this behavior?
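
    This looks like a daylight-saving artifact rather than a date() bug: if the Linux server's timezone is one where DST ended on Sun, 07 Nov 2010 (as in the US), that local day is 25 hours long, so adding a flat 86400 seconds lands inside the same calendar day. A DST-safe sketch of the loop, letting strtotime do the day arithmetic:

        <?php
        // Same output as the original loop, but "+1 day" is DST-aware,
        // so a 25-hour day is still skipped over correctly.
        $numDays = 8;
        $date = strtotime('2010-11-06');
        for ($i = 1; $i <= $numDays; $i++) {
            echo date("D, d M Y", $date) . '<br>';
            $date = strtotime('+1 day', $date); // move to the next calendar day
        }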

    Read the article

  • Git Pull works; Git push fails

    - by Michael
    I thought I set up my key pairs correctly -- I can do git pulls, I can do git commits. But when I do a git push, it counts objects, decompresses, then says: fatal: the remote end hung up unexpectedly. What's the issue here? I'm a super user, so it's not a folder read/write permission problem -- it must be the way I set up the encryption key pair. How do I debug this, since git pull works?
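
    A few hedged things to check from the command line (the remote and host names below are the usual defaults, not taken from the question). A common cause is that the pull URL is a read-only one (git:// or https) while pushing needs a working SSH URL:

        git remote -v                        # is the push URL an SSH one (git@host:repo.git) or a read-only URL?
        ssh -T git@your.git.server           # does key-based authentication succeed outside of git?
        GIT_TRACE=1 git push origin master   # verbose trace showing where the push actually dies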

    Read the article

  • Git to SVN trouble

    - by Kevin
    My boss has a Perforce repository for which he wants to make a read-only copy available on SourceForge via Subversion. He had a Perl script which would do this, but it's no longer functioning (we don't want to try debugging it yet) and it's really not that great anyway. So an alternate solution is to pull the Perforce repo into git as a remote ref, which I have already done successfully (including all the proper commit details and authors). Now the trouble I'm having is pushing it out to a separate SVN repository. I can make it start the commit process with "git svn dcommit --add-author-from", but the problem is that even though the correct author appears at the end of the commit message, the "real" committing author is my machine's user. I want to preserve the real author with the commit, and I'd also like to preserve the original timestamps as well. Is anyone familiar with how I could accomplish this?

    Read the article

  • jQuery Tabs Cookies

    - by user342391
    I am trying to use the jQuery cookie plugin to remember the last selected tab. I can't seem to get it to work. Do I need anything else apart from the jQuery lib and the cookie plugin? This is the code:

        <script type="text/javascript">
        $(document).ready(function() {
            $("#tabletabscampaigns > ul").tabs({ remote: true, cache: true });
            $("#tabletabscampaigns").tabs({ selected: 0, cookie: { expires: 30 } });
        });
        </script>
        <div id="tabletabscampaigns" style="float:left; width:895px; margin-top:20px;">
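
    One thing worth checking: tabs() is being initialised twice, on two different elements, and in the jQuery UI 1.8-era API the cookie option only does anything when jquery.cookie.js is loaded. A minimal sketch of a single initialisation (assumes jquery.js, the jQuery UI tabs widget and jquery.cookie.js are all included; keep any other options you need in the same call):

        <script type="text/javascript">
        // Initialise tabs once, on the element that owns the <ul>, and pass the
        // cookie option in that same call so the last selected tab is remembered.
        $(document).ready(function() {
            $("#tabletabscampaigns").tabs({
                cookie: { expires: 30 }   // persisted via $.cookie for 30 days
            });
        });
        </script>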

    Read the article

  • How can I "git log" only code published to trunk?

    - by Russell Silva
    At my workplace we have a "master" trunk branch that represents published code. To make a change, I check out a working copy, create a topic branch, commit to the topic branch, merge the topic branch into master, and push. For small changes, I might commit directly to master, then push. My problem is that when I use "git log", I don't care about my topic branches in my local working copy. I only want to see the changes to the master branch on the remote, shared git server. What's more, if I use --stat or -p or one of their friends, I want to see the files and changes associated with the merge commit to master, not with their original branch commits (which, like I said, I don't want to see at all). How do I go about doing this?
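
    One hedged approach, assuming the shared server is the "origin" remote and the published branch is "master": --first-parent walks only the commits that landed directly on master, and -m makes merge commits show the changes they brought in.

        git fetch origin
        git log --first-parent origin/master             # only commits along master's first-parent line
        git log --first-parent -m --stat origin/master   # merges show the files they introduced on master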

    Read the article

  • Web services Authentication Jungle

    - by redben
    I have been doing some research lately about the best approaches to authenticating web service calls (REST, SOAP or whatever), but none of the approaches convinced me and I still can't make a choice. Some talk about SSL and HTTP basic authentication (login/password), which just seems weird for a machine (I mean, having to assign a login/password to a machine - or is it not?). Some others say API keys (it seems this scheme is used more for tracking than really for securing). Some say tokens (like session IDs), but shouldn't we stay stateless (especially in REST style)? In my use case, when a remote app is calling one of our web services, I obviously have to authenticate the calling application, and the call must - if applicable - tell me which user it impersonates so I can deal with authorization later. Any thoughts?

    Read the article

  • PayPal Fetch Token

    - by arik-so
    Hello, I am using the PayPal API for Express Checkout integration. Upon setting up the Express Checkout, one gets to a page with a token, like this page: https://api-3t.sandbox.paypal.com/nvp The token looks more or less like this:

        ACK=Failure&L_ERRORCODE0=81002&L_SHORTMESSAGE0=Unspecified%20Method&L_LONGMESSAGE0=Method%20Specified%20is%20not%20Supported&L_SEVERITYCODE0=Error

    My question is: how do I fetch this token by means of PHP? I do not want to be redirected to that page beforehand. How do I just fetch the contents of a remote file after passing certain POST parameters? Thanks in advance!
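
    A hedged sketch of fetching the NVP response from PHP with cURL, so the response body ends up in a string instead of the browser being redirected. Credential values are placeholders, and the parameter names should be checked against PayPal's classic NVP documentation:

        <?php
        $params = array(
            'METHOD'    => 'SetExpressCheckout',
            'VERSION'   => '64.0',
            'USER'      => 'your_sandbox_api_user',       // placeholder
            'PWD'       => 'your_sandbox_api_password',   // placeholder
            'SIGNATURE' => 'your_sandbox_api_signature',  // placeholder
            'AMT'       => '10.00',
            'RETURNURL' => 'http://www.example.com/return.php',
            'CANCELURL' => 'http://www.example.com/cancel.php',
        );
        $ch = curl_init('https://api-3t.sandbox.paypal.com/nvp');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body as a string
        $response = curl_exec($ch);
        curl_close($ch);
        parse_str($response, $nvp);   // e.g. $nvp['ACK'], $nvp['TOKEN'] on success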

    Read the article

  • Why can't I save CSS changes in FireBug?

    - by Dean
    Firebug is the most convenient tool I've found for editing CSS - so why isn't there a simple "Save" option? I am always finding myself making tweaks in Firebug, then going back to my original .css file and replicating the tweaks. Has anyone come up with a better solution? EDIT: I'm aware the code is stored on a server (in most cases not my own), but I use it when building my own websites. Firebug is just using the .css file Firefox downloaded from the server; it knows precisely what lines in which files it's editing, so I can't see why there isn't an "Export" or "Save" option that allows you to store the new .css file (which I could then replace the remote one with). I have tried looking in temporary locations, and choosing File > Save and experimenting with the output options in Firefox, but I still haven't found a way. Here's hoping someone has a nice solution... EDIT 2: The official group has a lot of questions, but no answers.

    Read the article

  • Simulating a mouse button click in Windows

    - by Ncarlson
    Hi everyone, I'm writing a Remote Desktop clone in C++ using Qt. So far I'm able to move the mouse cursor around fine; Qt has a nice setPos function for that. However, I'm a bit lost as to what API/library to use for simulating mouse button clicks. One method I'm aware of is to send the WM_(event) to a window using the window's HWND. However, I was hoping there was a more general method for taking complete control over the mouse. Is there any other way to tell the operating system that the left mouse button has been clicked? Thanks.
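
    One hedged option on Windows is SendInput, which injects a button press and release at the current cursor position without needing a target HWND. A minimal sketch (assumes the cursor has already been positioned, e.g. with Qt's setPos or SetCursorPos):

        #include <windows.h>

        // Injects a left-button click wherever the cursor currently is.
        void clickLeftButton()
        {
            INPUT input[2] = {};
            input[0].type       = INPUT_MOUSE;
            input[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;   // press
            input[1].type       = INPUT_MOUSE;
            input[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;     // release
            SendInput(2, input, sizeof(INPUT));
        }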

    Read the article

  • OCIError (ruby on rails)

    - by swingfuture
    I am using rails freeze 1.2.3 to run a rails app. Because the app is on a remote machine, I used ssh tunnel (ssh -l -L) to show the app on my screen. When I ran it, it correctly prompted the login page, after I put in the info, I got this error: OCIError in ServiceController Error while trying to retrieve text for error ORA-12154 I have tried the same app on a different machine w/o using freeze (because that machine has rails version 1.2.3 while current one has 2.0.2). Is that where the error comes from? Thanks.

    Read the article

  • validates_uniqueness_of...limiting scope - How do I restrict someone from creating a certain number

    - by bgadoci
    I have the following code:

        class Like < ActiveRecord::Base
          belongs_to :site
          validates_uniqueness_of :ip_address, :scope => [:site_id]
        end

    which limits a person from "liking" a site more than one time, based on a remote IP request. Essentially, when someone "likes" a site, a record is created in the likes table, and I use a hidden field to request and pass their IP address to the :ip_address column in the like table. With the above code I am limiting the user to one "like" per IP address. I would like to limit this to a certain number, for instance 10. My initial thought was to do something like this:

        validates_uniqueness_of :ip_address, :scope => [:site_id, :limit => 10]

    But that doesn't seem to work. Is there a simple syntax here that will allow me to do such a thing?
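
    As far as I know validates_uniqueness_of has no :limit option, so one possible direction is a custom validation that counts existing rows for that IP and site. A sketch in Rails 2/3-era syntax to match the question (constant and method names are illustrative, not from the original app):

        class Like < ActiveRecord::Base
          belongs_to :site

          LIKE_LIMIT = 10  # illustrative cap per IP per site

          validate :ip_address_within_limit

          private

          def ip_address_within_limit
            existing = Like.count(:conditions => { :site_id => site_id, :ip_address => ip_address })
            if new_record? && existing >= LIKE_LIMIT
              errors.add(:ip_address, "has already liked this site #{LIKE_LIMIT} times")
            end
          end
        end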

    Read the article

  • Get CruiseControl to talk to github with the correct public key.

    - by Danny Lister
    Hi All, Has anybody installed git and CruiseControl and got CruiseControl to pull from GitHub on a Windows 2003 server? I keep getting public key errors (access denied) - which is good, I suppose, as it confirms git is talking to GitHub. However, what is not good is that I do not know where to install the RSA keys so they will be picked up by the running process (git in the context of cc.net). Any help would save me a lot of hair! I have tried installing the keys into:

        C:\Program Files\Git\.ssh

    whereby running git bash and cd ~ takes me to:

        C:\Program Files\Git

    The current error from CC.net is:

        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly.
        Process command: C:\Program Files\Git\bin\git.exe fetch origin

    Thanks in advance

    Read the article

  • How do I parse an XML file that's on a different web server?

    - by Tim
    I have a list of training dates saved into an XML file, and I have a little JavaScript file that parses all of the training dates and spits them out into a neatly formatted page. This solution was fine until we decided that we wanted another web page on another server to access the same XML file. Since I cannot use JavaScript to parse an XML file that's located on another server, I figured I'd just use an ASP script. However, when I run the following, I get a response that there are 0 nodes matching a tag that should have several:

        <%
        Dim URL, objXML
        URL = "http://www.site.com/feed.xml"
        Set objXML = Server.CreateObject("MSXML2.DOMDocument.3.0")
        objXML.setProperty "ServerHTTPRequest", True
        objXML.async = False
        objXML.Load(URL)
        If objXML.parseError.errorCode <> 0 Then
            Response.Write(objXML.parseError.reason)
            Response.Write(objXML.parseError.errorCode)
        End If
        Response.Write(objXML.getElementsByTagName("era").length)
        %>

    My question is two-fold: (1) Is there a way I can use JavaScript to parse a remote XML file? (2) If not, why doesn't my code give me the proper response?

    Read the article

  • What goes between SQL Server and Client?

    - by worlds-apart89
    This question is an updated version of a previous question I have asked on here. I am new to client-server model with SQL Server as the relational database. I have read that public access to SQL Server is not secure. If direct access to the database is not a good practice, then what kind of layer should be placed between the server and the client? Note that I have a desktop application that will serve as the client and a remote SQL Server database that will provide data to the client. The client will input their username and password in order to see their data. I have heard of terms like VPN, ISA, TMG, Terminal Services, proxy server, and so on. I need a fast and secure n-tier architecture. P.S. I have heard of web services in front of the database. Can I use WCF to retrieve, update, insert data? Would it be a good approach in terms of security and performance?
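
    Regarding the WCF question at the end: a WCF service can act as that middle layer - the desktop client calls service operations, and only the service machine talks to SQL Server, so the connection string never leaves the server. A hedged sketch of what such a contract might look like (type and member names are illustrative, and the choice of binding/security is a separate decision):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class CustomerDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            List<CustomerDto> GetCustomers();       // read

            [OperationContract]
            void UpdateCustomer(CustomerDto customer);  // update

            [OperationContract]
            int InsertCustomer(CustomerDto customer);   // insert, returns new Id
        }

        // The implementation of ICustomerService runs on the server, authenticates the
        // caller (e.g. username/password message credentials over HTTPS) and performs
        // the actual SQL Server access.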

    Read the article

  • SQL/Schema comparison and upgrade

    - by Workshop Alex
    I have a simple situation. A large organisation is using several different versions of some (desktop) application and each version has its own database structure. There are about 200 offices and each office will have its own version, which can be one of 7 different ones. The company wants to upgrade all applications to the latest version, which will be version 8.

    The problem is that they don't have a separate database for each version, nor do they have a separate database for each office. They have one single database which is handled by a dedicated server, thus keeping things like management and backups easier. Every office has its own database schema, and within the schema there's the whole database structure for their specific application version. As a result, I'm dealing with 200 different schemas which need to be upgraded, each with 7 possible versions. Fortunately, every schema knows the proper version, so checking the version isn't difficult. But my problem is that I need to create upgrade scripts which can upgrade from version 1 to version 2 to version 3, etc. Basically, all schemas need to be bumped up one version until they're all at version 8. Writing the code that will do this is no problem; the challenge is how to create the upgrade script from one version to the next, preferably with some automated tool.

    I've examined RedGate's SQL Compare and Altova's DatabaseSpy, but they're not practical. Altova is way too slow. RedGate requires too much processing afterwards, since the generated SQL script still has a few errors and it refers to the schema name. Furthermore, the code needs to become part of a stored procedure and the code generated by RedGate doesn't really fit inside a single procedure. (Plus, it's doing too much transaction handling, while I need everything within a single transaction.) I have been considering using another SQL comparison tool, but it seems to me that my case is just too different from what standard tools can deliver. So I'm going to write my own comparison tool. To do this, I'll be using ADOX with Delphi to read the catalogues for every schema version in the database, then use this to write the SQL statements that will upgrade these schemas to their next version. (Comparing 1 with 2, 2 with 3, 3 with 4, etc.) I'm not unfamiliar with generating SQL-script generators, so I don't expect too many problems. And I'll only be upgrading the table structures, not any of the other database objects.

    So, does anyone have some good tips and tricks to apply when doing this kind of comparison? Things to be aware of? Practical tips to increase speed?

    Read the article

  • How to use FTP's APPEND command in a script?

    - by btelles
    Hi there, For some reason when I try to use "append" while inside an FTP script, the FTP client appears to hang. I've tried all sorts of different variations (for example, including the destination file and not, using quotes and not), and all I ever get is a "No such file or directory" error (and I KNOW it's there), or it hangs on a 200 Request OK and never does anything.

        ftp> open ibm.some_server
        Connected to ibm.some_server
        230 USER1 is logged on. Working directory is "USER1.".
        Remote system type is MVS.
        ftp> cd 'Z.TABS.'
        250 "Z.TABS." is the working directory name prefix.
        ftp> append 'SAMASCPY'
        'SAMASCPY': No such file or directory
        ftp> append SAMASCPY
        200 Port request OK.

    Anyone know what could be going on?

    Read the article

  • simple EJB jar deployed in jboss with its own log4j configuration

    - by user309281
    Hi All, I have a simple EJB jar with a stateless session bean, deployed in JBoss AS 4.2.2 under /server/default/deploy. The bean is registered under the JNDI tree as viewed from the JBoss JMX console, and I am able to access it through a remote Java client outside JBoss. Inside the EJB jar, I have added some logging that should be written to a separate log file, using the Apache log4j jar and a log4j.xml. But I am not able to view any of the logs. Also, I do not wish to use jboss-log4j.xml, since there will be many other EJBs to be deployed and I wish to have a separate log4j configuration for each EJB application. Here are the contents of one of my EJB jars (EJB_DS.jar): log4j.xml, classes. The Apache log4j jar is added to the /server/default/lib path. Kindly highlight if I have missed any points for enabling the log4j configuration. With Regards, Krishna

    Read the article

  • What does it mean when git pull causes a conflict but git pull --rebase doesn't?

    - by Jason Baker
    I'm pulling from a repository that only I have access to. As far as I know, I've only pushed to it from one repository. A couple of times, I've pulled from it and gotten this:

        To [email protected]:tsched_dev.git
         ! [rejected]        master -> master (non-fast-forward)
        error: failed to push some refs to '[email protected]:tsched_dev.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again.  See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    Generally, that just means that I have to do a git pull (although all the changes should be fast-forwardable). When I do a git pull, I get conflicts. If I do a git pull --rebase, it works fine. What am I doing wrong?

    Read the article

  • killing a separate thread having a socket

    - by user311906
    Hi All, I have a separate thread ListenerThread with a socket listening to info broadcast by some remote server. This is created in the constructor of one class I need to develop. Because of requirements, once the separate thread is started I need to avoid any blocking function on the main thread. Once it comes to the point of calling the destructor of my class, I cannot perform a join on the listener thread, so the only thing I can do is to KILL it. My questions are: (1) What happens to the network resources allocated by the function passed to the thread? Is the socket closed properly, or might there be something pending? (most worried about this) (2) Is this procedure fast enough, i.e. is the thread killed immediately? (3) I am working with Linux - what command or what can I check to ensure that there is no networking resource left pending or that something went wrong for the operating system? I thank you very much for your help. Regards, MNSTN. NOTE: I am using boost::thread in C++.

    Read the article

  • Need help using the Windows IP Helper API & ParseNetworkString in C#.

    - by JohnnyNoir
    I'm attempting to rewrite some C# web service code that uses the Windows IP Helper API call "SendARP" to retrieve a remote system's MAC address. SendARP works great - until you cross a network segment, as ARP requests aren't routed. I've been looking at the "ParseNetworkString" documentation after happening across its existence on StackOverflow. The quick & dirty algorithm I have in mind is:

        public static string GetMacAddress(string strHostOrIP)
        {
            if (strHostOrIP is IPAddress) {
                // parse results of nbtstat -A strHostOrIP
                return macAddress;
            }
            if (strHostOrIP is Hostname) {
                IPHostEntry hostEntry = null;
                try {
                    hostEntry = Dns.GetHostEntry(strHostOrIP);
                } catch {
                    return null;
                }
                if (hostEntry.AddressList.Length == 0) {
                    return null;
                }
                foreach (IPAddress ip in hostEntry.AddressList) {
                    if (ip.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork) {
                        ipAddress = ip;
                        break;
                    }
                }
            }
            return GetMACAddress(ipAddress);
        }

    "ParseNetworkString" is new with Vista/Win2008, so I haven't been able to find example C# code demonstrating its use, and I'm no C++ coder, so if anyone can point me in the right direction...
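
    For the "parse the results of nbtstat -A" branch, a hedged sketch is below. The output format varies by Windows version and locale, so the pattern may need adjusting:

        using System.Diagnostics;
        using System.Text.RegularExpressions;

        public static string GetMacFromNbtstat(string ipAddress)
        {
            var psi = new ProcessStartInfo("nbtstat", "-A " + ipAddress)
            {
                UseShellExecute = false,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };
            using (Process p = Process.Start(psi))
            {
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                // The MAC typically appears as "MAC Address = 00-11-22-33-44-55".
                Match m = Regex.Match(output, @"([0-9A-F]{2}-){5}[0-9A-F]{2}", RegexOptions.IgnoreCase);
                return m.Success ? m.Value : null;
            }
        }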

    Read the article

  • Receiving a File via RFCOMM on Android

    - by poeschlorn
    Hey guys, does someone know how to receive a file on Android via RFCOMM? I'm a newbie to Bluetooth issues, so please have patience with me. I'm looking for an approach to receive data via RFCOMM as a stream and store it somewhere on my phone. Saving data is not the problem; that works quite fine. The main issue is the implementation of the connection and the reliable retrieval of the data. This whole procedure should be implemented as an Android service (so that no activity has to be launched while receiving data). What would you suggest: a local or a remote service? greetz, poeschlorn
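
    A hedged sketch of the receiving side is below (standard SPP UUID, placeholder file path, error handling trimmed). It assumes the BLUETOOTH permission and should run off the UI thread, e.g. on a worker thread inside the Service:

        import android.bluetooth.BluetoothAdapter;
        import android.bluetooth.BluetoothServerSocket;
        import android.bluetooth.BluetoothSocket;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.util.UUID;

        public class RfcommReceiver {
            // Standard Serial Port Profile UUID; the sender must use the same one.
            private static final UUID SPP_UUID =
                    UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

            public void receiveToFile(String path) throws Exception {
                BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
                BluetoothServerSocket server =
                        adapter.listenUsingRfcommWithServiceRecord("FileReceiver", SPP_UUID);
                BluetoothSocket socket = server.accept();   // blocks until a client connects
                server.close();                             // only one connection needed
                InputStream in = socket.getInputStream();
                FileOutputStream out = new FileOutputStream(path);
                byte[] buffer = new byte[4096];
                int read;
                while ((read = in.read(buffer)) != -1) {    // stream until the sender closes
                    out.write(buffer, 0, read);
                }
                out.close();
                socket.close();
            }
        }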

    Read the article

  • [PHP] Using cURL to download large XML files

    - by ndg
    I'm working with PHP and need to parse a number of fairly large XML files (50-75MB uncompressed). The issue, however, is that these XML files are stored remotely and will need to be downloaded before I can parse them. Having thought about the issue, I think using a system() call in PHP in order to initiate a cURL transfer is probably the best way to avoid timeouts and PHP memory limits. Has anyone done anything like this before? Specifically, what should I pass to cURL to download the remote file and ensure it's saved to a local folder of my choice?
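
    An alternative worth considering before shelling out with system(): PHP's own curl extension can stream the download straight to disk with CURLOPT_FILE, so the 50-75MB never has to fit in PHP's memory. A sketch (URL and paths are placeholders):

        <?php
        $url   = 'http://example.com/feeds/large-feed.xml';
        $local = '/path/to/downloads/large-feed.xml';

        $fp = fopen($local, 'w');
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_FILE, $fp);             // write the body directly to the file handle
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 600);          // allow up to 10 minutes for the transfer
        $ok = curl_exec($ch);
        curl_close($ch);
        fclose($fp);

        // If the command-line client is still preferred, something like this would do:
        // system(escapeshellcmd("curl -sS -o $local $url"));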

    Read the article

  • Designing a frontend/backend architecture

    - by wrp
    What are some good information sources on designing programs with a client/server architecture? This is for development of a desktop application, not a Web service. The only books I have found on client/server apps deal with the case of a thin client connecting to a remote database. Two good examples of what I mean are Mathematica and SuperCollider. I'm looking for platform- and language-agnostic discussion of the issues in developing a frontend/backend system. Especially useful topics would be allocation of responsibilities and options for message passing.

    Read the article

  • Can you update a file in the application bundle?

    - by ian1971
    Is it possible to update a file stored in an application's bundle programmatically? Basically I want to get a remote file and overwrite one of the bundle files with it (a sqlite database, in fact). This works fine on the simulator, but on the device it does not work, though it does not error either (it just doesn't seem to actually overwrite). I know I can work around it by copying it to the user folder instead and then getting the code to check there first for the file before using the bundle one, but I was interested to know whether it is possible to update a bundle file at all, or am I just doing something wrong? Thanks

    Read the article

  • window.onbeforeunload ajax request problem with Chrome

    - by lang2
    Hello, I have a web page that handles remote control of a machine through Ajax. When the user navigates away from the page, I'd like to automatically disconnect from the machine. So here is the code:

        window.onbeforeunload = function () {
            bas_disconnect_only();
        }

    The disconnection function simply sends an HTTP GET request to a PHP server-side script, which does the actual work of disconnecting:

        function bas_disconnect_only() {
            var xhr = bas_send_request("req=10", function() {
            });
        }

    This works fine in Firefox. But with Chrome, the Ajax request is not sent at all. There is an unacceptable workaround: adding an alert to the callback function:

        function bas_disconnect_only() {
            var xhr = bas_send_request("req=10", function() {
                alert("You're been automatically disconnected.");
            });
        }

    After adding the alert call, the request is sent successfully. But as you can see, it's not really a workaround at all. Could somebody tell me if this is achievable with Chrome? What I'm doing looks completely legit to me. Thanks,
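
    One hedged direction: asynchronous requests fired from onbeforeunload are often cancelled when the page is torn down. navigator.sendBeacon (in browsers that have it) queues a small POST that survives navigation; a synchronous XHR is the older fallback. The URL below is a placeholder for whatever bas_send_request("req=10") actually calls on the server:

        window.onbeforeunload = function () {
            var url = "/control.php?req=10";            // placeholder endpoint
            if (navigator.sendBeacon) {
                navigator.sendBeacon(url);              // fire-and-forget POST, survives unload
            } else {
                var xhr = new XMLHttpRequest();
                xhr.open("GET", url, false);            // false = synchronous, blocks unload briefly
                xhr.send(null);
            }
        };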

    Read the article
