Search Results

Search found 23079 results on 924 pages for 'local variables'.


  • Where does Picasa store albums?

    - by Dan
    For people searching, the question might also be phrased: How do I restore Picasa albums from backup? When I reinstalled my computer and restored my photos from backup, some of my albums showed up, but many didn't. I've found the following info: Picasa on Windows stores (stored?) album info in these places: Vista: C:\Users\<myaccount>\AppData\Local\Google\Picasa2Albums\ XP: C:\Documents and Settings\<myaccount>\Local Settings\Application Data\google\Picasa2Albums\ I restored that folder and was still missing many of my albums. That folder also contained a folder of backups, but the most recent one was from a long time ago and I've created albums since then. According to https://support.google.com/picasa/bin/picasa.google.com/support/bin/static.py?hl=en&page=release_notes.cs, since the Dec 8, 2011 build, Picasa saves album info in .ini file(s). This probably explains the albums that I do see. http://katelharrison.blogspot.com/2012/01/how-to-restore-picasa-albums-mac.html has some great info on restoring albums on Macs, but the folder structure seems to be different there than on Windows.

    Read the article

  • Chalk Talk, Glenn Block – Leith, Edinburgh 12th March 2011

    - by David Christiansen
    Exciting news! I am proud to announce that Glenn Block from Microsoft will be coming all the way from Seattle to Scotland on the 12th of March to talk to you. Glenn is a PM on the WCF team working on Microsoft's future HTTP and REST stack, and has been involved in some pretty exciting and ground-breaking Microsoft development mind-shifts in recent times. Don't miss the chance to hear him speak and ask him questions.

    Brief history of Glenn: Prior to WCF he was a PM on the new Managed Extensibility Framework in .NET 4.0. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. He has also been very active in involving folks from the community in the development of software at Microsoft; this has included shipping several products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a frequent speaker at local and international events and user groups. When he's not working and playing with technology, he spends his time with his wife and daughter, either at their home in Seattle or at one of the local coffee shops.

    Glenn Block on the web:

    - mvcConf 2 - Glenn Block: Take some REST with WCF (Feb 2011)
    - @gblock on Twitter
    - My Technobabble - Glenn's Blog

    Sponsored by Storm ID, an award-winning full-service digital agency in Edinburgh.

    Read the article

  • X forwarding over SSH from Mac to a Linux box

    - by Checkers
    I need to run Mac applications on a remote Mac machine and display them on a local Linux machine's X server (a lot of articles on the Internet seem to detail how to do it the opposite way).

        $ ssh -X mac-box
        $ cd /Developer/Applications/Xcode.app
        $ ./Contents/MacOS/Xcode
        Sat Oct 3 20:41:26 mac-box.local Xcode[15634] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        _RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.
        ^C

    My $DISPLAY variable appears to be empty. What should it look like so that forwarding works correctly? Can I run OS X applications this way at all?
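
    For reference, with X11 forwarding active, ssh itself sets DISPLAY on the remote side, so it should never need to be set by hand. A minimal check of the setup, assuming the Mac runs a stock OpenSSH sshd (and noting that native Cocoa apps such as Xcode talk to the macOS WindowServer rather than to X11, so only X11 clients can be forwarded this way):

        # On the Mac, /etc/sshd_config must contain:  X11Forwarding yes
        $ ssh -X user@mac-box      # "user" is a placeholder
        $ echo $DISPLAY            # should print something like localhost:10.0
        $ /usr/X11/bin/xeyes       # an X11 client (if installed) should appear on the Linux display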

    Read the article

  • Missing eth0 configuration file

    - by Godric Seer
    I have two servers, both running Scientific Linux 6, on the same network. Since I want SSH access to both of them, I want to give them both static IPs so I can set up port forwarding and not worry about how my router assigns local IPs. I found that I need to edit the configuration file /etc/network-scripts/ifcfg-eth0, however that file does not exist. The network card works fine, and I am able to ssh in as long as I access the router and find the local IP. Can I simply make my own configuration file, or did I miss some step in configuring the system that I need to complete?
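
    For what it's worth, on RHEL-family systems such as Scientific Linux 6 the canonical path is /etc/sysconfig/network-scripts/ifcfg-eth0, and the file can simply be created by hand. A minimal static-IP sketch, with placeholder addresses:

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (example values)
        DEVICE=eth0
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=192.168.1.50
        NETMASK=255.255.255.0
        GATEWAY=192.168.1.1

    Then apply the change with "service network restart".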

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js on Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, then used Git for Windows to commit my changes and sync them to the WAWS git repository, which triggers a new deployment to host my Node.js application. Someone may notice that an NPM package can contain many files and be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6 MB, and the popular "express" package, a rich MVC framework for Node.js, is about 1 MB. When I first push my code to Windows Azure, all of these files must be uploaded to the cloud. Is it possible to have Windows Azure download and install these packages for us? In this post, I will show how to make WAWS install all required packages during deployment.

    Let's Start with a Demo

    A demo is the most straightforward approach. Let's create a new WAWS and clone it to my local disk, then drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with Windows Azure Web Site, Git deployment, git clone or Git for Windows. Then open a command window and install a package in our code folder; let's say I want to install "express". Next, create a new Node.js file named "server.js" and paste in the code below.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now, we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. All we should need to upload is our source code, but the bulky package files would be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud side.

    First, we need to add a special file named ".gitignore". This cannot easily be done from the file explorer, since the name consists only of an extension, so we do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore"; if the command window asks for input, just press Enter.

        echo > .gitignore

    Now open this file, copy in the content below and save.

        node_modules

    If we switch to Git for Windows, we will find that the packages under "node_modules" are no longer in the change list. So if we now commit and push, the "express" package will not be uploaded to Windows Azure.

    Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into it and save.

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. If we then open the WAWS in the developer portal, we will see that a new deployment has finished. Clicking the arrow on the right side of this deployment shows how WAWS handled it; in particular, we can see that WAWS executed NPM, and if we open the log we can review which commands WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I didn't need to upload the whole package to Azure.
    Open the website and we can see the result, which proves that "express" was installed successfully.

    What Happened Under the Hood

    Now let's explain what ".gitignore" and "package.json" do. ".gitignore" is an ignore configuration file for a git repository: all files and folders listed in it are skipped when git pushes. In the example above I put "node_modules" into this file in my local repository, which means: do not track or upload anything under the "node_modules" folder. So by using ".gitignore" I kept all packages from being uploaded to Windows Azure. ".gitignore" can list files and folders, and it can also name files and folders that we do NOT want to ignore; in the next section we will use this un-ignore syntax to include the SQL package.

    "package.json" is the package definition file for a Node.js application. We can define the application name, version, description, author and other information in it, in JSON format, and we can also list the dependent packages this Node.js application needs. In WAWS, name and version are required. When a deployment happens, WAWS looks into this file, finds the dependent packages, and executes NPM to install them one by one. So in the demo above I put "express" into this file and WAWS installed it for me automatically.

    I updated the dependencies section of "package.json" manually, but this can be partially automated. If we have a valid "package.json" in our local repository, we can pass the "--save" parameter to "npm install" so that NPM updates the dependencies section for us. For example, when I want to install the "azure" package I can execute the command below; note the "--save" at the end.

        npm install azure --save

    Once it finishes, my "package.json" is updated automatically, with each dependent package listed there. The JSON key is the package name and the value is the version range. Below is a brief list of the version range formats; for more information about "package.json" please refer here.

        Format      Description                                                        Example
        version     Must match the version exactly.                                    "azure": "0.6.7"
        >=version   Must be greater than or equal to the version.                      "azure": ">=0.6.0"
        1.2.x       Must start with the supplied digits; any digit may replace the x.  "azure": "0.6.x"
        ~version    Must be at least as high as the range, and less than the
                    next major revision above it.                                      "azure": "~0.6.7"
        *           Matches any version.                                               "azure": "*"

    WAWS will install the proper version of each package based on what is defined here. That is the whole process of WAWS git deployment plus NPM installation.

    But Some Packages…

    As we know, when we specify dependencies in "package.json", WAWS downloads and installs them on the cloud. For most packages this works very well, but some special packages may not: if the package installation has special environment requirements, it can fail. For example, the SQL Server driver for Node.js needs "node-gyp", Python and C++ 2010 installed on the target machine during NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail because there is no "node-gyp", Python or C++ 2010 in the WAWS virtual machine. For example, the "server.js" file:
        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    The "package.json" file:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    And it fails to deploy to WAWS; from the NPM log we can see it's because "msnodesql" cannot be installed on WAWS. The solution is to ignore all packages in ".gitignore" except "msnodesql", and upload that package ourselves. This can be done with the content below: we first un-ignore the "node_modules" folder itself, then ignore everything inside it, and finally un-ignore the one subfolder named "msnodesql", which is the SQL Server driver for Node.js.

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the ".gitignore" syntax please refer to this thread. Now if we go to Git for Windows, we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS, and we can see the deployment complete successfully. We can then use Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can keep the dependent packages out of our Node.js repository and let Windows Azure Web Site download and install them during deployment. Special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", can be put into the publish payload instead. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker to develop and deploy Node.js applications to the cloud.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Conditional styles and templates with RadGridView for Silverlight and WPF

    I'm happy to announce that with our upcoming Q1 2010 Service Pack 1 (mid-April) you will be able to conditionally apply styles and templates to RadGridView easily, using DataTemplateSelectors and StyleSelectors, for both Silverlight and WPF. You can test the new functionality with our upcoming latest internal build this Friday; in the meantime, here is an example:

    XAML

        <Grid x:Name="LayoutRoot">
            <Grid.Resources>
                <local:MyStyleSelector x:Key="styleSelector" />
                <local:MyDataTemplateSelector x:Key="templateSelector" />
            </Grid.Resources>
            <telerik:RadGridView AutoGenerateColumns="False"
                                 ItemsSource="{Binding}"
                                 RowStyleSelector="{StaticResource styleSelector}">
                <telerik:RadGridView.Columns>
                    <telerik:GridViewDataColumn DataMemberBinding="{Binding ID}"
                                                CellTemplateSelector="{StaticResource templateSelector}" />
                </telerik:RadGridView.Columns>
            </telerik:RadGridView>
        </Grid>

    C#

        public class MyStyleSelector : StyleSelector
        {
            public override ...

    Read the article

  • No package rrdtool-perl available

    - by Pentium10
    On CentOS release 5.10 (Final) I am trying to install rrdtool to get RRDs.pm, but I have no luck.

        yum install rrdtool-perl
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * rpmforge: mirror.team-cymru.org
        Excluding Packages in global exclude list
        Finished
        Setting up Install Process
        No package rrdtool-perl available.
        Nothing to do

    I also tried librrds-perl, but that was not found either. Then I tried:

        yum whatprovides "*/RRDs.pm"
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * rpmforge: mirror.team-cymru.org
        Excluding Packages in global exclude list
        Finished
        cpanel-perl-514-Log-Log4perl-1.37-1.cp1136.x86_64 : CPAN module - Log4j implementation for Perl
        Repo        : installed
        Matched from:
        Filename    : /usr/local/cpanel/3rdparty/perl/514/lib64/perl5/cpanel_lib/Log/Log4perl/Appender/RRDs.pm
        cpanel-perl-514-RRDs-v1.4.7-1.cp1136.x86_64 : CPAN module - unknown
        Repo        : installed
        Matched from:
        Filename    : /usr/local/cpanel/3rdparty/perl/514/lib64/perl5/cpanel_lib/x86_64-linux-64int/RRDs.pm

    Then I tried installing those, but I got "No package cpanel-perl available", and the same for the variants (tried the full name, tried both repos listed).
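
    One hedged guess: repositories name this package differently. "rrdtool-perl" is the EPEL name, while in the rpmforge repository (which this machine is clearly using) the Perl bindings are packaged as "perl-rrdtool". So, depending on which repos are configured, one of these may work:

        yum install perl-rrdtool                       # rpmforge naming
        yum --enablerepo=epel install rrdtool-perl     # EPEL naming, if EPEL is set up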

    Read the article

  • Windows 7 Group Policy blocking Adobe Reader

    - by Danny Chia
    A few weeks ago, my company blocked Adobe Reader due to an unpatched security issue. However, we recently moved one of our computers to a project that didn't require access to the corporate network, and IT gave us the green light to override Group Policy and re-enable Adobe Reader. This is something we've been unable to achieve. We've tried the following (in no particular order), all to no avail:

    - Ran the program as administrator
    - Renamed the program (the blocking is likely signature-based)
    - Deleted registry.pol
    - Changed the value of "Start" in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\gpsvc to "4" (to prevent Group Policy from applying, even though the machine is no longer on the corporate domain)
    - Checked SRP settings under Local Security Policy: nothing was there
    - Checked AppLocker settings under Local Security Policy: nothing there either. Incidentally, I found a few registry keys with descriptions referring to Adobe Reader being blocked; I deleted all of them, but it didn't help.
    - Changed the permission settings of the program
    - Re-installed Adobe Reader

    Is there anything I missed, short of doing a clean install?
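
    Two more places worth checking, offered as hedged suggestions since the block sounds like Software Restriction Policies: cached SRP rules live under a Policies registry key that can survive leaving the domain, and gpresult shows which policy objects still apply to the machine:

        C:\> gpresult /r
        C:\> reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /s

    If rules show up under CodeIdentifiers, exporting and then deleting that subtree should remove the cached SRP block after a restart.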

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route-finding routine in VB.NET for an online game I play, and I'm searching for the fastest pathfinding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches; it's more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and Dijkstra's; both find the best path, but are too slow for my purposes: a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. That website claims to use D*, which I have looked into, but that algorithm seems more appropriate for dynamic maps than for one that does not change, unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A*, maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick Euclidean calculation from the node to the goal.

    In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

        Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            'hCount += 1
            'If hCount Mod 0 = 0 Then
            '    Return hCache
            'End If

            'Initialize variables to be filled
            Dim x1, x2, y1, y2, z1, z2 As Integer

            'LINQ queries for solar system data
            Dim systemFromData = From result In jumpDataDB.coordinateDatas
                                 Where result.systemId = startSystem
                                 Select result.x, result.y, result.z
            Dim systemToData = From result In jumpDataDB.coordinateDatas
                               Where result.systemId = endSystem
                               Select result.x, result.y, result.z

            'LINQ execute
            'Fill variables with solar system data for from and to system
            For Each solarSystem In systemFromData
                x1 = (solarSystem.x)
                y1 = (solarSystem.y)
                z1 = (solarSystem.z)
            Next
            For Each solarSystem In systemToData
                x2 = (solarSystem.x)
                y2 = (solarSystem.y)
                z2 = (solarSystem.z)
            Next

            Dim x3 = Math.Abs(x1 - x2)
            Dim y3 = Math.Abs(y1 - y2)
            Dim z3 = Math.Abs(z1 - z2)

            'Calculate distance and round
            'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
            Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
            'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)

            'hCache = distance
            Return distance
        End Function

    And the main loop, the other 30%:

        'Begin search
        While openList.Count() <> 0
            'Set current system and move node to closed
            currentNode = lowestF()
            move(currentNode.id)

            For Each neighborNode In neighborNodes
                If Not onList(neighborNode.toSystem, 0) Then
                    If Not onList(neighborNode.toSystem, 1) Then
                        Dim newNode As New nodeData()
                        newNode.id = neighborNode.toSystem
                        newNode.parent = currentNode.id
                        newNode.g = currentNode.g + neighborNode.distance
                        newNode.h = findDistance(newNode.id, endSystem)
                        newNode.f = newNode.g + newNode.h
                        newNode.security = neighborNode.security
                        openList.Add(newNode)
                        shortOpenList(OLindex) = newNode.id
                        OLindex += 1
                    Else
                        Dim proposedG As Integer = currentNode.g + neighborNode.distance
                        If proposedG < gValue(neighborNode.toSystem) Then
                            changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                        End If
                    End If
                End If
            Next

            'Check to see if done
            If currentNode.id = endSystem Then
                Exit While
            End If
        End While

    If clarification is needed on my spaghetti code, I'll try to explain.

    Read the article

  • Remote File Copy - Win Server 2008

    - by Scott
    I'd like to copy backup archives from a remote server to my client machine. In the past, I've installed an FTP server on the remote machine and directed local server backups to dump into that directory, then FTPed in from my client machine. Just wondering if there is a simpler way to do this with Win 7 (client) and Win Server 2008. Robocopy? RDC command-line options? For example, I can easily remote desktop in and drag the files from the server to my local machine. If there is an easy command-line way to do this, then I don't have to set up an FTP server, which is ideal. Thanks.
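
    A sketch of the robocopy route, assuming the server exposes the backup directory over a standard file share (the share and path names below are placeholders):

        C:\> net use \\server\Backups /user:DOMAIN\username *
        C:\> robocopy \\server\Backups C:\LocalBackups /E /Z
        rem /E copies subdirectories, including empty ones; /Z uses restartable mode,
        rem which helps with large archives over an unreliable link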

    Read the article

  • cannot add a user to sysadmin role in SQL Server

    - by George2
    I am using SQL Server 2008 Management Studio. The current logon account belongs to the machine's local Administrators group, and I am using Windows Integrated Security mode in SQL Server 2008. My issue is: after logging into SQL Server Management Studio, I select my login name under Security/Logins, select the Server Roles tab, then check the last item, sysadmin, to make myself a member of this role, but it says I do not have enough permission. Any ideas what is wrong? I think a local administrator should be able to do anything. :-)
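
    For context: since SQL Server 2008, members of the local Administrators group are no longer granted sysadmin automatically, so this is expected behavior rather than a fault. One documented recovery path is to restart the instance in single-user mode (where local administrators do get sysadmin) and add the login from there; a sketch, with placeholder instance and account names:

        C:\> net stop MSSQLSERVER
        C:\> sqlservr.exe -m
        rem then, from a second console:
        C:\> sqlcmd -S . -E
        1> EXEC sp_addsrvrolemember 'MYDOMAIN\myaccount', 'sysadmin';
        2> GO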

    Read the article

  • MOSS 2007 authentication

    - by Dante
    Hi, I have a MOSS web site configured with Windows Integrated Authentication. I added a couple of local users on the server and added them to SharePoint groups, and I can log into my site (as long as the local user is part of the Administrators group... odd). If I add a domain user to the Owners group, I can't access the site with it. Does anybody know what must be done to open access to domain users on a site configured with Windows Authentication or Basic Authentication? Thanks in advance.

    Read the article

  • Domain connection shows as "unauthenticated"

    - by gareth89
    I have seen various questions about this problem floating around, but either the circumstances aren't the same or the solution doesn't work, so I thought I would post it to see if anybody has any suggestions. Various domain PCs and laptops randomly show the connection name "lewis.local 2 (Unauthenticated)" (lewis.local being our domain), with an exclamation mark where the network-type logo is normally shown. This also appears to happen every time we connect via VPN. Our setup is:

    - 2 servers, both running Windows Server 2003 R2 (x32)
    - the main server has AD, DNS and DHCP installed
    - IPv4 on approx. 30 client machines (some wired, some wireless)

    If anybody has any thoughts on solutions I would appreciate it. I have tried removing all but the AD server roles and resetting all of the systems: nothing. It doesn't prevent anything from working; the machines behave like they're on a normal domain connection most of the time, but it is getting frustrating! Also, I don't know if it could be related, but the DHCP server seems to have quite a long lead time on issuing IP addresses to clients.

    Read the article

  • Ubuntu to Ubuntu VNC over SSH tunnel

    - by rxt
    I have a Linux Ubuntu desktop at home: ssh enabled, VNC server installed, router rule configured. It all works, and at home I can connect over the local network from my Mac. From the outside I can log in via ssh. I've configured PuTTY as follows:

        session: host name and port number
        connection / SSH / tunnels, forwarded ports: L5900|192.168.0.23

    The local address is 192.168.1.45. When I make the connection I can log in to the remote machine. Then I open Remote Desktop Viewer and click connect:

        protocol: vnc
        host: ?
        use host as ssh tunnel: ?

    I don't know what to use for the last two options. Which IP addresses should I use?
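
    For reference, here is the equivalent tunnel with plain OpenSSH and the values that would go into the viewer; a sketch assuming the home machine's VNC server listens on the default display :0 (port 5900) and the addresses are as described above:

        # run on the laptop; home-public-ip is the router's external address
        $ ssh -L 5900:192.168.0.23:5900 user@home-public-ip
        # then, in the VNC viewer:
        #   protocol: vnc
        #   host:     localhost:5900   (the local end of the tunnel, not the remote IP)

    With a manual tunnel like this, the viewer's own "use host as ssh tunnel" option can be left unset.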

    Read the article

  • Best approach for a clinic database

    - by user18013
    As a practical assignment for the database course I'm taking, I've been instructed to create a database for a local clinic. I've met with the doctors a couple of times and discussed the information that needs to be stored in the database, from personal to medical. Now I'm facing a tough decision, because I've been given two choices: either implement the database as a "local website" which only operates inside the clinic via WiFi, or implement the front-end as a regular desktop application connecting to a shared database. Note: I have a 40-day deadline to deliver the first prototype and meet with my client. My questions are:

    1. Which approach should I go with, given that I have more experience with desktop application programming than with web development?
    2. If I go with desktop front-ends, what would be the best way to synchronize the database between all clients? I have no experience here, and having searched a lot for an answer I came up with nothing detailed on this matter.
    3. If I go with the web solution, which combination would be best: PHP & MySQL, ASP.NET & SQL Server, or something else? (My knowledge of PHP and ASP.NET is nearly the same.)

    Read the article

  • Rsyslog mail module not working

    - by Henry-Nicolas Tourneur
    I would like to email snort alerts from my Debian Lenny firewall. Syslog is sending log messages from the firewalls to a central rsyslog. On my central rsyslog I have something like:

        $ModLoad ommail
        $ActionMailSMTPServer server.company.local
        $ActionMailFrom rsyslog@company.local
        $ActionMailTo [email protected]
        $ActionExecOnlyOnceEveryInterval 1
        $template mailSubject,"[SNORT] Alert from %hostname%"
        $template mailBody,"Snort message\r\nmsg='%msg%'"
        $ActionMailSubject mailSubject
        if $msg regexp 'snort\[[0-9]+\]: \[[0-9]+:[0-9]+:[0-9]+\].*' then :ommail:;mailBody

    But I don't get any mail. I can even trigger snort with something like ping -s 1400; it logs entries like the following, but still no mail!

        2010-01-08T09:25:58+00:00 Hostname snort[4429]: [1:499:4] ICMP Large ICMP Packet [Classification: Potentially Bad Traffic] [Priority: 2]: {ICMP} ip_dest - ip_src

    Any idea?

    Read the article

  • Track IP Messenger's chatting by wireshark

    - by Kumar P
    We have a Linux server (RHEL 5) and some client machines (Windows XP) on a local area network, and we use the server as a proxy server running squid. My Windows machines reach the internet through the proxy. The client machines also use IP Messenger for chatting and sharing files within the local network. How can I trace what they are doing or chatting via IP Messenger from my server with the Wireshark packet sniffer? And if I can't do it with Wireshark, what would you suggest instead?
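
    One caveat worth stating up front: IP Messenger chats peer-to-peer on the LAN, so that traffic never crosses the proxy; the server will only see it if it sits on the same broadcast segment or if a switch port is mirrored to it. IP Messenger conventionally uses port 2425, so a capture sketch would be:

        # capture IP Messenger traffic on the LAN-facing interface
        tcpdump -i eth0 -w ipmsg.pcap port 2425
        # or, in Wireshark, apply the display filter:
        #   udp.port == 2425 || tcp.port == 2425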

    Read the article

  • How to enable services Discovery API in GoogleCL?

    - by Marcos
    There are bits and pieces of information all over the place, but I'm trying to put it all together so that GoogleCL finally accesses more than the initial 7 services. Does anyone know of a step-by-step guide? Right now any attempt outside these services results in the error message:

        google tasks list
        Did you specify the service correctly? Must be one of 'picasa', 'blogger', 'youtube', 'docs', 'contacts', 'calendar', 'finance'

    I installed GoogleCL from the Ubuntu repos and authenticated a few bundled services like contacts and docs; those work great, giving me access to certain operations like upload from the command line. I would really like to get it supporting tasks and all the other eligible Google services shown at https://code.google.com/apis/explorer/#_s=tasks Here are some guides/partial steps I've found:

        http://code.google.com/p/googlecl/wiki/DiscoveryManual (indicates you need to check out the updated GoogleCL from the Subversion repository)
        http://code.google.com/p/google-api-python-client/wiki/Installation
            easy_install --upgrade google-api-python-client
        http://code.google.com/p/googlecl/wiki/Install
        http://code.google.com/p/googlecl/source/checkout
            sudo -i
            cd /usr/local/src/
            svn checkout http://googlecl.googlecode.com/svn/trunk/ googlecl-read-only
            cat googlecl-read-only/INSTALL.txt
            cd /usr/local/src/googlecl-read-only/
            python setup.py install

    Result:

        $ google discovery list
        Traceback (most recent call last):
          File "/usr/bin/google", line 488, in run_interactive
            run_once(options, args)
          File "/usr/bin/google", line 540, in run_once
            options.config)
          File "/usr/bin/google", line 364, in import_service
            force_gdata_v1 = config.lazy_get(package.SECTION_HEADER,
        AttributeError: 'module' object has no attribute 'SECTION_HEADER'

    Read the article

  • Using FTP to update files on a server

    - by Neville
    I know the FTP username and password for a site we own and need to know how we can update some files on the server. It seems quite a small thing to do and I'd like to have a go at doing it myself. A few years ago a friendly local guy help set up a website for my wife's floristry business. The site has a "contact us" page, and messages are forwarded to our home email address. We've now just changed our home email, and so I now need to reset the forwarding function on the website. The helpful local guy seems to have moved away, or retired - there's no way I can find him now. I tried to get help on how to change the forwarding address from the hosting people, but they say they can't help me. How do I go about updating the pages on the site? A step-by-step guide on how to do it would be great.
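
    For what it's worth, the mechanics from the command line look like the sketch below (the host, directory and file names are placeholders; a graphical client such as FileZilla does the same thing with drag and drop). The flow is: connect with the FTP credentials, download the page that contains the forwarding address, edit it locally, and upload it back.

        $ ftp ftp.example.com        # placeholder hostname
        ftp> cd public_html          # the web root; the directory name varies by host
        ftp> get contact.php         # fetch the page to edit
        ftp> put contact.php         # upload the edited copy
        ftp> quit

    One caution: if the "contact us" form's destination address is set in the hosting control panel rather than in a file, no FTP edit will change it.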

    Read the article

  • Upgraded from VS 2008 -> VS 2010. Can't Connect to SQL Server in Staging Environment

    - by Bob Kaufman
    I have a test application written in C#/ASP.NET that I've developed using Visual Studio 2008 Professional/.NET 3.5 which connects to a local SQL Server 2008 Express instance. I upgraded the development machine to Visual Studio 2010 Professional maintaining .NET 3.5 and everything in the development environment continues to work correctly. Upon deployment of the new app to an internal staging machine, that app cannot connect to its local SQL Server 2008 Express database. I get the customary "server not found" error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible... Does something need to be upgraded on the staging machine to be able to host a Visual Studio 2010/.NET 3.5 application?
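
    A couple of hedged checks, since "server not found" usually points at the connection string or the SQL Express service on the staging machine rather than at the compiler upgrade: confirm the instance name used by the web.config connection string (SQL Server 2008 Express installs as .\SQLEXPRESS by default, and the VS 2010 project conversion can rewrite web.config), then confirm the instance is running and reachable locally:

        C:\> sc query "MSSQL$SQLEXPRESS"     rem is the Express service running?
        C:\> sqlcmd -S .\SQLEXPRESS -E       rem can the default Express instance be reached?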

    Read the article

  • zip being too nice (osX)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date) >> ~/backuplog.txt
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 \
                    | grep -v "zip info: local extra (21 bytes)" >> ~/backuplog.txt
            done
        else
            echo "backup volume not mounted" >> ~/backuplog.txt
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20, but that didn't make any difference. Is it just the network or disc speeds bottlenecking the process, or am I doing something wrong?

    Read the article

  • Why does my name resolution hit the DNS even with a hosts file entry?

    - by Volomike
    I'm running Ubuntu 10.04.2 LTS Desktop. Being a web developer, naturally I created a "me.com" entry in my /etc/hosts file. Unfortunately, my name resolution goes out to DNS before checking my local hosts entry, and I can't figure out why. The end result is that if my /etc/resolv.conf lists "nameserver 127.0.0.1" first, then I get a response back in my web browser from me.com (local) in less than a second; without that entry, the response sometimes takes as much as 5 seconds if my ISP is a little slow. The problem was so troublesome that I actually had to file a question here (which someone resolved) on how to automatically insert that entry into /etc/resolv.conf. But one of the users here (@shellaholic) highly recommended (and commented back and forth with me about it) that I file this question. Do you know why my workstation's name resolution hits the DNS server before checking my /etc/hosts file entry? For now, I'm using the resolv.conf trick (see link above).
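
    For reference, the lookup order for libc name resolution on Linux is controlled by the "hosts" line in /etc/nsswitch.conf; "files" must come before "dns" for /etc/hosts to win:

        $ grep ^hosts /etc/nsswitch.conf
        hosts:          files dns    # "files" first means /etc/hosts is consulted before DNS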

    Read the article

  • Configure VPN to access remote LAN network on Windows7

    - by PiotrK
    Situation: I have two Windows 7 machines (a PC and a laptop). I've set up the PC as a VPN server and the laptop as a VPN client using the default built-in Windows 7 networking tools. I've disabled "Use default gateway on remote network" on the client machine, so the client doesn't try to route all communication through the VPN. I've forwarded port 1723 (TCP/UDP) on the NAT to my server and enabled IPSec/PPTP/L2TP passthrough. I've put my laptop on an independent network (basically, connected it via 3G), connected to the VPN server and checked ipconfig /all, which gives:

        IP Address: 192.168.1.101
        Mask:       255.255.255.255
        Gateway:    (none)

    The LAN mask on the server's network is 255.255.255.0. I am surely missing something obvious, but Google doesn't give me any good advice. How can I access the local LAN network from the remote VPN client? How can I access local shared documents?
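
    With "Use default gateway on remote network" disabled, Windows adds no route to the remote subnet, so one has to be added by hand. A sketch, assuming the server's LAN is 192.168.1.0/24 and the client's VPN interface received 192.168.1.101 (run from an elevated prompt on the laptop while the VPN is up):

        C:\> route add 192.168.1.0 mask 255.255.255.0 192.168.1.101
        C:\> ping 192.168.1.1
        rem shared folders should then be reachable, e.g. \\192.168.1.x\share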

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that

        Generally, you should find that Samba performs similarly to ftp at raw transfer speed.

    In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details:

    - Both computers are running Ubuntu 10.10 (using Samba because I have a Mac as well)
    - The Samba share is on a local home network, mounted as:
          //server.local/share/ on /mnt/share type cifs (rw,mand)
    - Samba performance was tested by copying (cp) a single file of ~4 GB to and from the share, using time for timing and calculating transfer speed by hand
    - ftp performance numbers come from the ftp client for get/put of the same file
    - iperf gives network speed ~900 Mbit/s
    - bonnie++ gives disk speeds of 200 MB/s on both sides, for block reads as well as block writes
    - Tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options); most of them made little to no difference (the one that made a difference caused write speed to drop 50%)
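
    For readers wondering what those HOWTO parameters look like, they go in the [global] section of smb.conf; these are the commonly suggested starting values, not a recommendation (as noted above, on this setup they made little difference):

        [global]
            socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
            read raw = yes
            write raw = yes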

    Read the article
