Search Results

Search found 3789 results on 152 pages for 'git diff'.


  • shell script fun! how to perform an action on each subdirectory from a given path?

    - by pocketfullofcheese
    I am writing a shell script (which I suck at) and I need some help. It's a script that is moving things from git to CVS (not important). The thing is, I have a file path, controllers/listbuilder/setup/SubmissionRolesListbuilderHandler.inc.php, and I need to be able to do: cvs add controllers; cvs add controllers/listbuilder; cvs add controllers/listbuilder/setup; cvs add controllers/listbuilder/setup/SubmissionRolesListbuilderHandler.inc.php. Can someone help me out? The best I've come up with so far is to recursively add ALL files in my working tree, but that seems overly inefficient. A sketch of the prefix loop is below.
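
    The core of this is just walking the cumulative prefixes of the path. A minimal sketch of that loop in Python (the cvs binary, and the assumption that re-adding an already-added directory only produces a warning, are things to verify in your setup):

        import subprocess
        from pathlib import PurePosixPath

        def cvs_add_with_parents(relpath):
            """Run `cvs add` on each directory prefix of relpath, then on the file itself."""
            parts = PurePosixPath(relpath).parts
            for i in range(1, len(parts) + 1):
                prefix = str(PurePosixPath(*parts[:i]))
                # Assumed safe to repeat: cvs warns on already-added entries.
                subprocess.run(["cvs", "add", prefix], check=False)

        cvs_add_with_parents(
            "controllers/listbuilder/setup/SubmissionRolesListbuilderHandler.inc.php")

    The same idea in shell is a loop that appends one path component at a time to an accumulator variable and calls cvs add on it.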

    Read the article

  • Best option for using the GData APIs on Android?

    - by nyenyec
    What's the least painful and most size-efficient way to use the Google Data APIs in an Android application? After a few quick searches, it seems that there is an android-gdata project on Google Code that appears to be the work of a single author. I didn't find any documentation for it and don't even know if it's production-ready yet. An older option, the com.google.wireless.gdata package, seems to have been removed from the SDK. It's still available in the Git repository. Before I invest too much time in either approach, I'd like to know which is the best supported and least painful.

    Read the article

  • Google responds differently to two identical nginx setups and 200 codes; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which has been cloned recently, so the settings are the same between nginx servers. One lives on a dev subdomain, one on a www. I'm trying to run a Google experiment on my live server, which claims:

        Web server rejects utm_expid. Your server doesn't support added query arguments in URLs.

    My logs show on the dev server, where it works:

        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments
        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments

    My production server shows Google making a second request:

        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments   <--- a second request, for some reason without the query string

    I'm not sure how Google determines that it needs to send a second request without the query string. The original request was clearly answered with a 200 OK status. Does anybody have any suggestions where to look next? The HTML (compared by diff) on the two pages is exactly the same.

    Read the article

  • Backup software for incremental swapped-out drives?

    - by user13743
    We're using Acronis Home 11 to back up our main Windows machine at the office. We have a set of portable hard drives that we swap out each week, for redundancy. We have incremental sets (a new diff of the entire series each night) building on each drive. However, from time to time Acronis gets confused and makes a new full backup, which eats up a lot of space on the disks. Also, I have to trick the Acronis script each time I swap out a drive and point it to the new incremental backup set. Finally, if a drive gets full, there's no way to partition the backup set on a drive. I found this out the hard way, and now one drive is full with one backup set. So now, on the other drive, I have three folders of backup sets; when one starts to get full, I delete the oldest one and start a new set. That way a single drive never gets filled up with one single backup set. I'm looking for backup software that can back up Windows in incremental sets and doesn't get tripped up by swapped-out drives. Is there a better solution?

    Read the article

  • Weird Facebooker Plugin & Phusion Passenger ModRails Production Error

    - by Ranknoodle
    I have an application (Rails 2.3.5) that I'm deploying to a production Linux/Apache server using the latest Phusion Passenger/Apache module, version 2.2.11. After deploying my original application, it returns a 500 error with nothing logged to the production log. So I created a minimal test Rails application with some ActiveRecord calls to the database to print out a list of objects on the home controller/my index page, and I cleared out all plugins. That works fine in the production environment. Then I introduced each plugin that I'm using, one at a time. Every plugin works fine EXCEPT facebooker. Every time I load the facebooker plugin into my app's vendor/plugins directory (via script, git, etc.) my test application breaks (500 error, no error logging). Every time I remove the facebooker plugin, my test application works. Has anyone seen this before / have any solutions? I saw one proposed solution but didn't see it reflected in the facebooker code.

    Read the article

  • .apk signing fails even with Sun JDK (java.lang.NoClassDefFoundError: com.android.jarutils.DebugKeyProvider)

    - by ianweller
    I'm having an interesting problem signing my Android application, whether or not I'm using a debug key. Regardless of the JDK I have installed to /usr/bin/{java,keytool,jarsigner} (OpenJDK or Sun's JDK), it always gives the following output after compiling successfully:

        -package-debug-sign:
        [apkbuilder] Creating RemoteNotify-debug-unaligned.apk and signing it with a debug key...

        BUILD FAILED
        /home/ianweller/AndroidSDK/platforms/android-7/templates/android_rules.xml:281: The following error occurred while executing this line:
        /home/ianweller/AndroidSDK/platforms/android-7/templates/android_rules.xml:152: java.lang.NoClassDefFoundError: com.android.jarutils.DebugKeyProvider

    The application was built and signed just fine by Eclipse with the ADT plugin (even without Sun's JDK installed). I'm on Fedora 12. I want to get my code out of Eclipse and into a git repository, but being unable to build from ant is blocking that.

    Read the article

  • TTRequestLoader always raises "TTDASSERT failed: _cacheKey == request.cacheKey"

    - by Hoang Pham
    I ran the TTCatalog application from the Three20 library and encountered this error when clicking on "Photo Thumbnails" in the "Three20 Catalog":

        TTDASSERT failed: _cacheKey == request.cacheKey

    I looked at the breakpoint and see that it is on line 119 of TTRequestLoader.m, in the addRequest method. I know that it failed the assertion on the cacheKey, but why does this error appear even in the sample application? Has anyone encountered the same error? What is the workaround for this? Thanks. P.S.: I downloaded Three20 from the git repository just yesterday, so I assume it is the newest version.

    Read the article

  • Problem installing RVM...

    - by Cody
    I have executed the commands as prescribed in the instructions on the RVM website, but things don't seem to work. Fetching the code from the git repository runs smoothly, but when I try to run rvm notes, the error

        /usr/local/bin/rvm: line 73: /home/cody/.rvm/scripts/rvm: No such file or directory

    flashes in multiple lines and doesn't stop till I hit Ctrl+C. I am running Ubuntu 8.04 and currently running ruby 1.9.2. Sorry if I am missing out any necessary information. Thanks in advance.

    Read the article

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, and the partition where the delete happened is ext3; the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. The first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me and suggest a way to: a) get the repo back, or b) get the files back, with filenames. For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:

        :!hg rm %

    This complained that the file is in a subrepository, so I specified the following:

        :!hg rm % -R engine

    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:

        :!rm -rf % -R engine

    Somehow, seeing "force" makes me do a rm -rf by reflex.
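
    For part (b), one way to put names back on the recovered files is to score each one against the files in the current checkout and keep the best match. A rough sketch of that idea (the directory names scalpel_output and repo_checkout are placeholders; difflib over a few thousand small files is slow but tolerable for a one-off rescue):

        import difflib
        from pathlib import Path

        def best_match(recovered_text, reference_files):
            """Return (score, path) of the reference file most similar to the text."""
            scored = []
            for ref in reference_files:
                ref_text = ref.read_text(errors="replace")
                score = difflib.SequenceMatcher(None, recovered_text, ref_text).quick_ratio()
                scored.append((score, ref))
            return max(scored)

        reference_files = list(Path("repo_checkout").rglob("*.php"))  # placeholder path
        for rec in Path("scalpel_output").rglob("*.php"):             # placeholder path
            text = rec.read_text(errors="replace")
            score, ref = best_match(text, reference_files)
            if score > 0.6:  # arbitrary threshold; tune by inspecting a few matches
                print(f"{rec} looks like {ref} (similarity {score:.2f})")

    Anything above the threshold can be renamed after the matching repo file; the leftovers are candidates for the unpushed modifications.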

    Read the article

  • Can I get build warnings from a custom build step in Qt Creator?

    - by Derick
    I have the following script that I run as a custom build step in Qt Creator:

        git ls-files . | egrep "\.cpp$|\.h$" | xargs vera++

    which then gives output:

        foo/bar.cpp:1: no copyright notice found

    Another script I also use is:

        cppcheck . --template gcc -q --enable=style,unusedFunctions

    with the output:

        apple.h:8: style: The class 'MyPie' has no constructor. Member variables not initialized.

    I would love to double-click on the error and go to the source in the Compile Output window. It seems that only gcc errors are detected and these custom ones are ignored, even though they have the same format.
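
    One possibility worth testing (an assumption about the output parser, not a documented Qt Creator contract): gcc diagnostics carry a "warning:" or "error:" tag after the line number, which the lines above lack. A small wrapper that reprints the tool's output in that shape is cheap to try:

        import re
        import subprocess
        import sys

        # Re-emit tool output in gcc style: file:line: warning: message
        LINE = re.compile(r"^(?P<file>[^:\n]+):(?P<line>\d+):\s*(?P<msg>.*)$")

        proc = subprocess.run(
            ["vera++"] + sys.argv[1:],   # pass file names through to vera++
            capture_output=True, text=True,
        )
        for raw in proc.stdout.splitlines():
            m = LINE.match(raw)
            if m:
                print(f"{m['file']}:{m['line']}: warning: {m['msg']}")
            else:
                print(raw)
        sys.exit(proc.returncode)

    Hooking the wrapper in as the custom build step, instead of calling vera++ directly, would confirm or rule out the missing-tag theory.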

    Read the article

  • Test flash notice in layout view spec (rspec2, rails3)

    - by jbpros
    Hi! I'd like to spec the fact that my application layout view prints out flash notices. However, the following code does not run; the flash method does not exist in view specs (as opposed to controller specs, where it works perfectly):

        describe 'layouts/application' do
          it "renders flash notices" do
            flash[:notice] = "This is a notice!"
            render
            response.should contain "This is a notice!"
          end
        end

    Is my code wrong, or is it a "not-yet-implemented feature" in RSpec 2? I'm on Rails 3 and RSpec 2 from its master branch on Git. Thanks!

    Read the article

  • ESXi - change to thin - virtual disk filesize is the same

    - by sven
    running ESXi 5.5 here with a datastore on a single SSD. Now, I thought about changing from thick to thin disks and found that I could use a tool on the ESXi host to do that. However, the file size of the newly created virtual disk is not changing. I run:

        vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk
        Destination disk format: VMFS thin-provisioned
        Cloning disk 'loader.vmdk'...
        Clone: 100% done.

    After that I compared the virtual disk sizes:

        ls -la *.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk
        -rw------- 1 root root         467 May 21 17:04 loader.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk
        -rw------- 1 root root         520 Jun 10 08:33 thinloader.vmdk

    Stats on the original file:

        stat loader.vmdk
          File: loader.vmdk  Size: 467  Blocks: 0  IO Block: 131072  regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 419443780  Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-01-25 10:17:34.000000000
        Modify: 2014-05-21 17:04:06.000000000
        Change: 2014-05-21 17:04:06.000000000

    and on the thin file:

        stat thinloader.vmdk
          File: thinloader.vmdk  Size: 520  Blocks: 0  IO Block: 131072  regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 432026692  Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-06-10 08:27:45.000000000
        Modify: 2014-06-10 08:33:30.000000000
        Change: 2014-06-10 08:33:30.000000000

    Anyone an idea why the disk is not providing any more space (tried with multiple VMs already - all the same)? Also, I have noticed that the clone automatically appends "-flat" to the disk name... Thanks, Sven

    Update - diff of the vmdk descriptor files:

        --- loader.vmdk
        +++ thinloader.vmdk
        @@ -7,15 +7,17 @@
         createType="vmfs"
        -RW 62914560 VMFS "loader-flat.vmdk"
        +RW 62914560 VMFS "thinloader-flat.vmdk"
         ddb.adapterType = "lsilogic"
        +ddb.deletable = "true"
         ddb.geometry.cylinders = "3916"
         ddb.geometry.heads = "255"
         ddb.geometry.sectors = "63"
         ddb.longContentID = "6d95855805dfa0079327dfee29b48dca"
        -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5"
        +ddb.thinProvisioned = "1"
        +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e"
         ddb.virtualHWVersion = "8"
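
    One thing worth checking before blaming the clone: on many filesystems a directory listing reports the provisioned size of a sparse/thin file, while the allocated block count reports what it actually occupies on disk. A small sketch of that comparison (assuming POSIX-ish stat semantics; on VMFS these may differ, so treat this as a diagnostic idea rather than a verdict):

        import os

        def report(path):
            st = os.stat(path)
            apparent = st.st_size              # what ls -la reports
            allocated = st.st_blocks * 512     # 512-byte blocks actually allocated
            print(f"{path}: apparent {apparent} bytes, allocated {allocated} bytes")

        for name in ("loader-flat.vmdk", "thinloader-flat.vmdk"):
            report(name)

    If the thin flat file shows a much smaller allocated figure than its apparent size, the conversion worked and only the reporting is misleading.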

    Read the article

  • how to write mute logic when mute state is unknown

    - by Delan Azabani
    I'm writing an indicator-sound clone for OSS4. Setting the volume works fine now, but I'm having trouble with the muting aspect of my program. A couple of facts about muting in OSS4: vmix doesn't have a mute (and we use vmix for volume control); also, the 'media keys' way of controlling volume doesn't set a mute control, but rather volume = 0. The problem with this is that when reading the vmix volume and encountering zero, we don't know if the user has actually set it to zero, or has it set to some other value but with mute on. How should I write my muting logic? (git code, if that helps)
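
    One common pattern when the mixer has no real mute control: define "muted" as volume 0, and remember the last non-zero level yourself so unmute can restore it. A minimal sketch of that state machine (get_hw_volume/set_hw_volume are hypothetical stand-ins for whatever vmix calls you use):

        class MuteModel:
            """Tracks mute state on top of a mixer that only exposes a volume level."""

            def __init__(self, get_hw_volume, set_hw_volume, default=50):
                self._get = get_hw_volume      # callable returning 0..100
                self._set = set_hw_volume      # callable taking 0..100
                self._last_nonzero = default   # level restored on unmute

            @property
            def muted(self):
                # Volume 0 is the only observable "mute" on vmix, so define mute as 0.
                return self._get() == 0

            def set_volume(self, level):
                if level > 0:
                    self._last_nonzero = level
                self._set(level)

            def toggle_mute(self):
                if self.muted:
                    self._set(self._last_nonzero)       # unmute: restore remembered level
                else:
                    self._last_nonzero = max(self._get(), 1)
                    self._set(0)                        # mute: drive volume to zero

    The trade-off is that a genuine user-set zero becomes indistinguishable from mute, which matches what the media keys already do.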

    Read the article

  • Is there any reason to use TFS 2010 in a micro ISV?

    - by kyrisu
    Yesterday I was checking VS2010 editions here and noticed that with VS2010 with MSDN we get TFS 2010 with 1 CAL. I'm a micro ISV (basically a sole developer, many clients). I just want to save time - has anyone tried it in a similar scenario? Are there any features worth looking into for such a small implementation? P.S. Right now I'm using Git with Git Extensions - I'm happy with it, but I would like something more integrated with project management and bug tracking, so I can show it to my clients when I'm working on their projects.

    Read the article

  • What software development process do you use and how do you implement it?

    - by clyfe
    Post only what you do use, not what you would like to use, so we can see what is the most popular in real life. I am interested only in these issues:

        - Project model (waterfall, agile...)
        - How are requirements gathered (and stored)?
        - Revision control - what software, what workflow?
        - Build automation - what software, where does it fit?
        - How is the testing done?
        - How is the documentation done?
        - How is the quality assurance done?

    Please provide short, objective answers; don't speak from the books. EXAMPLE: In my company we are a small team of 5 people and we develop webapps using Ruby.

        - agile PM
        - cucumber requirements
        - git SCM - Integration Manager workflow
        - Integrity CI
        - rspec automated tests
        - the project lead creates the documentation skeleton, then it is filled in by the developers
        - quality ensured by peer-reviewing code and manual peer-testing

    Read the article

  • Action Controller: Exception - ID not found

    - by Danny McClelland
    Hi everyone, I am slowly getting the hang of Rails, and thanks to a few people I now have a basic grasp of the database relations and associations etc. You can see my previous questions here: http://stackoverflow.com/questions/2714621/rails-database-relationships I have set up my application's models with all of the necessary has_one and has_many :through etc., but when I go to add a kase and choose a company from the drop-down list, it doesn't seem to be assigning the company ID to the kase. You can see a video of the application and the error here: http://screenr.com/BHC You can see a full breakdown of the application and relevant source code at the Git repo here: http://github.com/dannyweb/surveycontrol If anyone could shed some light on my mistake I would appreciate it very much! Thanks, Danny

    Read the article

  • Rails: restful authentication setup help

    - by SuperString
    Hi, I downloaded the plugin from http://github.com/techweenie/restful-authentication.git and then ran:

        rails generate plugin authenticated user session

    This is the result I got:

        create vendor/plugins/authenticated
        create vendor/plugins/authenticated/MIT-LICENSE
        create vendor/plugins/authenticated/README
        create vendor/plugins/authenticated/Rakefile
        create vendor/plugins/authenticated/init.rb
        create vendor/plugins/authenticated/install.rb
        create vendor/plugins/authenticated/uninstall.rb
        create vendor/plugins/authenticated/lib
        create vendor/plugins/authenticated/lib/authenticated.rb
        invoke test_unit
        inside vendor/plugins/authenticated
        create test
        create test/authenticated_test.rb
        create test/test_helper.rb

    Then I tried to run rake db:migrate, but I got an error saying that rake tasks in restful-authentication/tasks/auth.rake are deprecated; use lib/tasks instead. I am new to Rails and tried looking online, but things seem to be outdated. Please help!

    Read the article

  • Stop duplicate icmp echo replies when bridging to a dummy interface?

    - by mbrownnyc
    I recently configured a bridge br0 with members eth0 (real if) and dummy0 (dummy.ko if). When I ping this machine, I receive duplicate replies:

        # ping SERVERA
        PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data.
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!)
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!)

    Using tcpdump on SERVERA, I was able to see ICMP echo replies being sent from eth0 and br0 itself as follows (oddly, two echo request packets arrive "from" my Windows box myhost):

        23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40

    It's worth noting that testing reveals hosts on the same physical switch do not see DUP icmp echo responses (a host on the same VLAN on another switch does see a dup icmp echo response). I've read that this could be due to the ARP table of a switch, but I can't find any info directly related to bridges, just bonds. I have a feeling my problem lies in the stack on Linux, not the switch, but am open to any suggestions. The system is running centos6/el6 kernel 2.6.32-71.29.1.el6.i686. How do I stop ICMP echo replies from being sent in duplicate when dealing with bridged interfaces? Thanks, Matt

    [edit] Quick note: it was recommended in #linux to:

        [08:53] == mbrownnyc [gateway/web/freenode/] has joined ##linux
        [08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface?
        [08:59] <lkeijser> also set arp_announce to 2 for that interface
        [09:24] <mbrownnyc> lkeijser: I set arp_annouce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present

    I did this and came up empty. Same dup problem. I will be moving away from including the dummy interface in the bridge as:

        [09:31] == mbrownnyc [gateway/web/freenode/] has joined #Netfilter
        [09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications?
        [09:31] <whaffle> What would you make think so?
        [09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, only as reported in it's documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc
        [09:32] <mbrownnyc> but I do not know if PROMISC will do the same job
        [09:33] <mbrownnyc> I was referred here from #linux. any assistance is appreciated
        [09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function).
        [09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table.
        [09:35] <mbrownnyc> thanks whaffle PROMISC must be set manually for ipt_netflow to function, but
        [09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you.
        [09:36] <whaffle> When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot.
        [09:36] <mbrownnyc> whaffle: I speak without the bridge
        [09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it.
        [09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table, does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge?
        [09:37] <whaffle> Promisc mode will cause packets with {a dst MAC address that does not equal the interface's MAC address} to be delivered from the NIC into the kernel nevertheless.
        [09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC?
        [09:38] <mbrownnyc> whaffle: from your last answer I feel that the answer to my question is "none," is this correct?
        [09:39] <whaffle> Furthermore, the linux kernel itself has a check for {packets with a non-local MAC address}, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC.
        [09:46] <mbrownnyc> whaffle: so, this last bit of information is quite clearly why I would need and want a bridge in my situation
        [09:46] <mbrownnyc> okay, the ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernels inner-workings
        [09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes
        [09:54] <mbrownnyc> thanks whaffle

    [edit2] After removing the bridge and removing the dummy kernel module, I only had a single interface chilling out, lonely. I still received duplicate icmp echo replies... in fact I received a random amount: http://pastebin.com/2LNs0GM8 The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again.

    [edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. Must be the network infrastructure, although the ARP tables do not contain multiple entries.

    [edit4] How ridiculous. The machine was a network probe, so I was (ingress and egress) mirroring an uplink port to a node that was the NIC. So the flow must have gone like this:

        1. An ICMP echo request comes in through the mirrored uplink port.
        2. The (real) ICMP echo request is received by the NIC.
        3. The (mirrored) ICMP echo request is received by the NIC.
        4. An ICMP echo reply is sent for both.

    I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.
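
    For anyone hitting something similar: a quick way to confirm duplicates on the wire, before suspecting the stack, is to count echo replies per (source, id, seq) tuple. A sketch with scapy (an extra install, needs root; the field names are scapy's):

        from collections import Counter
        from scapy.all import ICMP, IP, sniff   # pip install scapy; run as root

        seen = Counter()

        def note(pkt):
            if pkt.haslayer(IP) and pkt.haslayer(ICMP) and pkt[ICMP].type == 0:  # echo reply
                key = (pkt[IP].src, pkt[ICMP].id, pkt[ICMP].seq)
                seen[key] += 1
                if seen[key] > 1:
                    print(f"duplicate reply #{seen[key]} for {key}")

        sniff(filter="icmp", prn=note, store=False)

    Running this on the pinging host versus on the server helps localize whether the duplication happens on the box itself or somewhere in the network path.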

    Read the article

  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a CentOS 5.7 64-bit machine with 24 GB of RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and Xcache as extra applications. For about 3 weeks the memory behavior on this machine has changed, and I cannot explain why. There are no crons running which flush anything like this, and no large numbers of files being deleted/changed during these drops. The 'cached' memory gets dropped about every few hours, but it's never a set gap between flushes, which indicates to me that some bottleneck gets reached instead. It also always seems to happen when total memory usage gets to about 18 GB, but again, not always exactly 18 GB. In a graph of my memory usage, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped. Running vmstat -m, I have output the memory usage just before and just after a memory drop: http://pastebin.com/diff.php?i=hJqZqztm ('old version' being before, 'new version' being after a drop). About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine, this odd behavior started. I have checked a bunch of logs and restarted the machine again, and cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behavior? Clearly my RAM isn't full, so I am curious why this could be happening.
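
    A low-effort way to narrow down a drop like this is to sample /proc/meminfo on a short interval and log Cached with a timestamp, so each drop can be lined up against cron entries, log rotations, or application events. A minimal sketch (field names as they appear in /proc/meminfo; written for a current Python, so adapt if the box only has an older interpreter):

        import time

        FIELDS = ("MemFree", "Buffers", "Cached")

        def meminfo():
            """Parse /proc/meminfo into {field: kilobytes}."""
            out = {}
            with open("/proc/meminfo") as fh:
                for line in fh:
                    name, value = line.split(":", 1)
                    if name in FIELDS:
                        out[name] = int(value.strip().split()[0])  # values are in kB
            return out

        while True:
            snap = meminfo()
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            print(stamp, " ".join(f"{k}={v}kB" for k, v in snap.items()), flush=True)
            time.sleep(60)

    Redirecting this to a file and correlating the timestamps of the drops against system activity is often enough to catch the culprit.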

    Read the article

  • Requirements of an issue/bug tracker

    - by James Brooks
    I've been looking at various issue/bug trackers available on the net. There are some very good ones, but I'm unable to use them as my server does not support Perl/Ruby (for example). I'm not too bothered, however, because I am able to write code in PHP and as such would prefer something in that language. So I've taken it upon myself to write a custom issue tracker system. As of now it's in the early planning stages, and before I continue, I'd like to find out what people need from such an application. My current list of things to add includes:

        - Creating/editing/deleting issues - both on user and admin level
        - Related issues (similar to that of STO)
        - Admins will be able to create builds/milestones and version control of projects
        - Admins will be able to assign users/groups to a project
        - Roadmap of projects
        - Possible SVN integration with Git?

    What do you think? There are a couple more things I'd like to add, but I'm sure you'll think of a better way of adding such features. What would you like to see from an issue tracker?

    Read the article

  • Where can I find the gtk-builder-convert script?

    - by Marty
    I've built a small GUI app for work that uses some .glade files for pop-up windows. Recently the ground beneath me shifted - my environment was upgraded. Newer PyGTK versions require GtkBuilder and .xml files instead of Glade and .glade files, and now my poor app is broken. I need to convert the .glade file to the newer .xml format. Problem is, Glade-3 is not on our system, and I can't find gtk-builder-convert on the web. I've looked at the GNOME git browser, but I don't know where to start looking or how to search it. Would anyone be kind enough to point me to the gtk-builder-convert Python script?

    Read the article

  • Are there such things as Email Hooks?

    - by viatropos
    After hearing about git commit hooks, I was thinking maybe there are such things as email hooks... Is it possible for me to build a program that says "hey, you just received an email, now run this ruby script"? Something like a Gmail web hook. Is there anything out there like that? I mean, I could build a cron job that checked my email all the time, but maybe there's a more formal way. I'm looking for an online email system to do this with, not, say, my Mac Mail.
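
    Absent a real push hook, the poll-and-dispatch version is only a few lines over IMAP. A sketch using Python's standard imaplib (the host, credentials, and handler are placeholders; Gmail additionally needs IMAP enabled and an app-specific password):

        import imaplib
        import time

        HOST, USER, PASSWORD = "imap.example.com", "me@example.com", "app-password"  # placeholders

        def handle(raw_message):
            # Stand-in for "run this ruby script": do whatever the hook should do.
            print(f"new message, {len(raw_message)} bytes")

        while True:
            with imaplib.IMAP4_SSL(HOST) as imap:
                imap.login(USER, PASSWORD)
                imap.select("INBOX")
                _, data = imap.search(None, "UNSEEN")
                for num in data[0].split():
                    _, parts = imap.fetch(num, "(RFC822)")  # fetching marks it \Seen
                    handle(parts[0][1])
            time.sleep(60)  # poll interval; a true hook would be push-driven instead

    IMAP IDLE is the closer-to-push variant of the same idea, but it needs a library beyond the standard imaplib.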

    Read the article

  • Python not Working in Vim

    - by jdg
    I have a new install of Vim from the automatic Windows installer gvim73_46.exe, and Python 2.7 (32-bit) installed. If I open gvim and type:

        :set python?

    I get E518: Unknown option. If I try typing:

        :python 'hello'

    Vim crashes. What could be wrong? Here are the contents of :version in case they are helpful, although Python is installed and it is using Python 2.7. I also checked, and C:\Windows\System32\python27.dll is where it should be... I am really lost here. Does anyone have any ideas as to what is going wrong?

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:59:02)
        MS-Windows 32-bit GUI version with OLE support
        Included patches: 1-46
        Compiled by Bram@KIBAALE
        Big version with GUI. Features included (+) or not (-):
        +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent
        +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff
        +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi
        +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input
        +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak
        +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse
        +mouseshape +multi_byte_ime/dyn +multi_lang -mzscheme +netbeans_intg +ole
        -osfiletype +path_extra +perl/dyn +persistent_undo -postscript +printer
        -profile +python/dyn +python3/dyn +quickfix +reltime +rightleft +ruby/dyn
        +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop
        +syntax +tag_binary +tag_old_static -tag_any_white +tcl/dyn -tgetent
        -termresponse +textobjects +title +toolbar +user_commands +vertsplit
        +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu
        +windows +writebackup -xfontset -xim -xterm_save +xpm_w32
        system vimrc file: "$VIM\vimrc"
        user vimrc file: "$HOME\_vimrc"
        2nd user vimrc file: "$VIM\_vimrc"
        user exrc file: "$HOME\_exrc"
        2nd user exrc file: "$VIM\_exrc"
        system gvimrc file: "$VIM\gvimrc"
        user gvimrc file: "$HOME\_gvimrc"
        2nd user gvimrc file: "$VIM\_gvimrc"
        system menu file: "$VIMRUNTIME\menu.vim"
        Compilation: cl -c /W3 /nologo -I. -Iproto -DHAVE_PATHDEF -DWIN32 -DFEAT_CSCOPE -DFEAT_NETBEANS_INTG -DFEAT_XPM_W32 -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 /Fo.\ObjGOLYHTR/ /Ox /GL -DNDEBUG /Zl /MT -DFEAT_OLE -DFEAT_MBYTE_IME -DDYNAMIC_IME -DFEAT_GUI_W32 -DDYNAMIC_ICONV -DDYNAMIC_GETTEXT -DFEAT_TCL -DDYNAMIC_TCL -DDYNAMIC_TCL_DLL=\"tcl83.dll\" -DDYNAMIC_TCL_VER=\"8.3\" -DFEAT_PYTHON -DDYNAMIC_PYTHON -DDYNAMIC_PYTHON_DLL=\"python27.dll\" -DFEAT_PYTHON3 -DDYNAMIC_PYTHON3 -DDYNAMIC_PYTHON3_DLL=\"python31.dll\" -DFEAT_PERL -DDYNAMIC_PERL -DDYNAMIC_PERL_DLL=\"perl512.dll\" -DFEAT_RUBY -DDYNAMIC_RUBY -DDYNAMIC_RUBY_VER=191 -DDYNAMIC_RUBY_DLL=\"msvcrt-ruby191.dll\" -DFEAT_BIG /Fd.\ObjGOLYHTR/ /Zi
        Linking: link /RELEASE /nologo /subsystem:windows /LTCG:STATUS oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib gdi32.lib version.lib winspool.lib comctl32.lib advapi32.lib shell32.lib /machine:i386 /nodefaultlib libcmt.lib oleaut32.lib user32.lib /nodefaultlib:python27.lib /nodefaultlib:python31.lib e:\tcl\lib\tclstub83.lib WSock32.lib e:\xpm\lib\libXpm.lib /PDB:gvim.pdb -debug

    Read the article
