Search Results

Search found 11288 results on 452 pages for 'git status'.


  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has Apache and WSGI working. I also have a Python script that works just fine using the make_server command:

        if __name__ == '__main__':
            from wsgiref.simple_server import make_server
            srv = make_server('', 8080, display_status)
            srv.serve_forever()

    Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing and found these lines in apache2.conf:

        WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi
        WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup moin

    And moin.wsgi is as listed:

        import sys, os
        sys.path.insert(0, '/usr/local/share/moin')
        from MoinMoin.web.serving import make_application
        application = make_application(shared=True)

    QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this:

        WSGIScriptAlias /status /mypath/status.wsgi
        WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup status

    And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
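
    Editor's note: mod_wsgi only requires that the .wsgi file expose a module-level callable named "application"; the serve_forever() loop belongs to wsgiref and is not needed under Apache, which manages the daemon processes itself. A minimal sketch of what status.wsgi could look like, assuming display_status is the same WSGI callable from the script above and that it lives in a module named status_app (both names are placeholders):

        import sys
        sys.path.insert(0, '/mypath')            # wherever the module lives

        from status_app import display_status   # hypothetical module name

        # mod_wsgi looks for a module-level callable named "application";
        # no serve_forever() loop is needed -- Apache drives the process.
        application = display_status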


  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and log output in elasticsearch / kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest

        Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter {
          if [program] == "nginx-access" {
            grok {
              match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }
        output {
          stdout { }
          elasticsearch {
            embedded = false
            host = "

    Here is my logstash config file:

        input {
          syslog {
            type => syslog
            port => 5544
          }
        }
        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }
        output {
          stdout { }
          elasticsearch {
            embedded => false
            host => "localhost"
            cluster => "cluster01"
          }
          email {
            from => "[email protected]"
            match => [ "Error 504 Gateway Timeout", "status,504", "Error 404 Not Found", "status,404" ]
            subject => "%{matchName}"
            to => "[email protected]"
            via => "smtp"
            body => "Here is the event line that occured: %{@message}"
            htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
          }
        }

    I've checked line 23, which is referenced in the error, and it looks fine. I've tried taking out the filter, and everything works, without changing that line. Please help.
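
    Editor's note: a parse error at a precise byte offset in a line that "looks fine" is often an invisible non-ASCII character. Notably, the grok match string above ends with a curly closing quote (”) rather than a straight quote ("), which the logstash config parser will not treat as a string delimiter; that is worth ruling out. A small sketch to surface any non-ASCII characters in the file (path taken from the question):

        # Scan a config file for non-ASCII characters such as smart quotes.
        with open('/opt/logstash-1.4.1/logstash.conf', encoding='utf-8') as f:
            for lineno, line in enumerate(f, 1):
                for col, ch in enumerate(line, 1):
                    if ord(ch) > 127:
                        print("line %d, col %d: %r" % (lineno, col, ch))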


  • Permission Denied for FTP User

    - by Alasdair
    I have an FTP user whose default directory is /root/ftpuser. This user can log in fine. The user is the owner of the directory & the directory is even set to 777 permissions. But the user can't upload anything; the display is:

        Status: Connecting to xx.xxx.xxx.xx:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 2 of 50 allowed.
        Response: 220-Local time is now 05:12. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER ftpuser
        Response: 331 User ftpuser OK. Password required
        Command: PASS *********
        Response: 230 OK. Current restricted directory is /
        Command: OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status: Connected
        Status: Starting upload of test.html
        Command: CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command: MKD /
        Response: 550 Can't create directory: Permission denied
        Command: CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command: SIZE /btn.png
        Response: 550 Can't check for file existence
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (66,232,106,33,52,218)
        Command: STOR /test.html
        Response: 553 Can't open that file: Permission denied
        Error: Critical file transfer error

    It's a Linux CentOS 6 server. Any ideas?
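
    Editor's note: 777 on the home directory is not enough on its own; to reach /root/ftpuser, the FTP user's process must also have search (x) permission on every parent component, and /root is typically mode 0700, which blocks everyone but root. A likely fix is moving the home out of /root (e.g. to /home/ftpuser). A sketch that walks the path and reports where traversal breaks, assuming "ftpuser" is a real system account (Pure-FTPd virtual users would map to some other uid):

        import os
        import pwd
        import stat

        def report_traversal(path, username):
            """Print whether `username` has search (x) permission on each
            component of `path`. Supplementary groups ignored for brevity."""
            user = pwd.getpwnam(username)
            current = os.sep
            for part in os.path.abspath(path).split(os.sep)[1:]:
                current = os.path.join(current, part)
                st = os.stat(current)
                if st.st_uid == user.pw_uid:
                    ok = bool(st.st_mode & stat.S_IXUSR)
                elif st.st_gid == user.pw_gid:
                    ok = bool(st.st_mode & stat.S_IXGRP)
                else:
                    ok = bool(st.st_mode & stat.S_IXOTH)
                print("%-20s %s %s" % (current, oct(st.st_mode & 0o777),
                                       "ok" if ok else "BLOCKED"))

        report_traversal('/root/ftpuser', 'ftpuser')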


  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end
          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install
        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
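
    Editor's note, offered as a guess: the git clone runs *on the deploy server*, so the key being refused is the server's key against the git host, not your workstation's. Being able to ssh from your machine to the server doesn't cover that second hop. The usual remedies are SSH agent forwarding (ssh_options[:forward_agent] = true in Capistrano) or a deploy key on the server. A sketch that reproduces the failing hop outside Capistrano, hostnames and paths being placeholders from the question, with -A enabling agent forwarding just for the test:

        import subprocess

        # From the workstation, ask the deploy server to reach the repository
        # the way Capistrano's remote_cache strategy will.
        result = subprocess.run(
            ["ssh", "-A", "root@domainname.com",
             "git ls-remote ssh://user@gitserver/~/git/appname.git"],
            capture_output=True, text=True,
        )
        print(result.returncode)
        print(result.stderr)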


  • Discrepancy in file size on disk and ls output

    - by smokinguns
    I have a script that checks for gzipped file sizes greater than 1MB and outputs files along with their sizes as a report. This is the code:

        myReport=`ls -ltrh "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`

        # Count files that exceed 1MB
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1M -iname "*.gz" -print0 | xargs -0 ls -lh | wc -l`

        if [ $oversizeFiles -eq 0 ];then
          status="PASS"
        else
          status="CHECK FAILED. FOUND FILES GREATER THAN 1MB"
        fi

        echo -e $status"\n"$myReport

    The problem is that the ls command outputs the file sizes as 1.0MB in the report, but the status is "FAIL" as the $oversizeFiles variable's value is 2. I checked the file sizes on disk and 2 files are 1.1MB. Why the discrepancy? How should I modify the script so that I can generate an accurate report? BTW, I'm on a Mac. Here is what the man page for "find" says on my Mac OS X:

        -size n[ckMGTP]
            True if the file's size, rounded up, in 512-byte blocks is n. If n is
            followed by a c, then the primary is true if the file's size is n bytes
            (characters). Similarly if n is followed by a scale indicator then the
            file's size is compared to n scaled as:
                  k    kilobytes (1024 bytes)
                  M    megabytes (1024 kilobytes)
                  G    gigabytes (1024 megabytes)
                  T    terabytes (1024 gigabytes)
                  P    petabytes (1024 terabytes)
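
    Editor's note: the likely explanation is display rounding, not a measurement bug. ls -h rounds to two significant digits, so a file of, say, 1.04 MiB prints as "1.0M", while find -size +1M compares the real size and correctly counts it as over 1 MiB. Reporting exact byte counts removes the ambiguity; a sketch (the directory path is a placeholder for $somePath):

        import glob
        import os

        some_path = '/path/to/files'       # placeholder for $somePath
        one_mib = 1024 * 1024

        for path in sorted(glob.glob(os.path.join(some_path, '*.gz'))):
            size = os.path.getsize(path)
            if size > one_mib:
                print("%s => %d bytes (%.2f MiB)" % (path, size, size / one_mib))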


  • Use preforker (ruby gem) with supervisor

    - by user1548832
    I also asked the same question on stackoverflow.com (http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor), but superuser.com might be of more help to me. Can anyone answer this? I want to run a server program using the preforker ruby gem with supervisor, but an error has occurred. I wrote the following test program using preforker:

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'preforker'

        Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master|
          while master.wants_me_alive? do
            puts "hello"
            sleep 10
          end
        end.run

    And the following supervisor config:

        [program:test-preforker]
        command=/home/tkono/tmp/test-preforker.rb
        stdout_logfile_maxbytes=1MB
        stderr_logfile_maxbytes=1MB
        stdout_logfile=/var/log/%(program_name)s.log
        stderr_logfile=/var/log/%(program_name)s.log
        autorestart=true

    Then, reload supervisor:

        # supervisorctl reload
        Restarted supervisord

    Here is the log file of supervisor:

        2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file)
        2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing
        2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized
        2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking
        2012-12-13 17:50:47,215 INFO supervisord started with pid 12437
        2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440
        2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441
        2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442
        2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443
        2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly

    Please tell me what is wrong. Can a program using preforker not run with supervisor?

    preforker: https://github.com/dcadenas/preforker
    supervisor: http://supervisord.org/index.html
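
    Editor's note: supervisord execs the command directly, without a login shell, so two cheap things to rule out are the script's execute bit and which interpreter "#!/usr/bin/env ruby" resolves to in supervisord's environment (often different from your login shell, e.g. under rvm). A diagnostic sketch that checks the bit and runs the script with a deliberately minimal PATH (the PATH value is illustrative, not supervisord's actual environment) to surface the real error on stderr:

        import os
        import subprocess

        cmd = "/home/tkono/tmp/test-preforker.rb"

        print("executable bit set:", os.access(cmd, os.X_OK))

        result = subprocess.run(
            [cmd], env={"PATH": "/usr/sbin:/usr/bin:/sbin:/bin"},
            capture_output=True, text=True)
        print(result.returncode)
        print(result.stderr)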


  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory in which there is a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. This project should only be writeable to a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writeable permissions. I can solve the permissions problem with umask:

        $ cd project
        $ umask 002
        $ touch test.txt

    File "test.txt" is now group-writeable, but still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know if there is a way to set some group implicitly, like umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux.

    Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACL, but the solution below combined with umask 002 in the current session is good enough for me and is much more simple.
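
    Editor's note: the "solution below" the update refers to is not included in this excerpt, but the classic mechanism it most likely means is the setgid bit on the directory. On Linux, files and subdirectories created inside a setgid directory inherit the directory's group, which composes neatly with umask 002. The equivalent of chgrp -R mygroup project followed by chmod g+s, sketched in Python (path is a placeholder):

        import os
        import stat

        project = '/path/to/project'   # placeholder

        # Equivalent of `chmod g+s project`: new files created inside will
        # inherit the directory's group regardless of the creator's default
        # group (run `chgrp -R mygroup project` first).
        st = os.stat(project)
        os.chmod(project, stat.S_IMODE(st.st_mode) | stat.S_ISGID)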


  • LightTPD and PHP not working if outside of LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
          LightTPD/
            htdocs/
          PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow but I don't want to bother with FastCGI for now :) ). My problem is when I try to put the htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
          LightTPD/
          PHP/

    I update the document root to server_root + "/../../htdocs" and while static HTML pages work fine, PHP pages stop working (they return a "No input file specified"). I literally just change the document root; I didn't change anything in the php.ini or anywhere else. Please also note that I left all doc_root, user_dir and cgi.force_redirect to the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

        server.modules = (
          "mod_access",
          "mod_accesslog",
          "mod_alias",
          "mod_cgi",
          "mod_status",
        )

        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml", "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )

        $HTTP["url"] =~ "\.pdf$" {
          server.range-requests = "disable"
        }

        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not neccesary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )

        status.status-url = "/server-status"
        status.config-url = "/server-config"
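
    Editor's note: "No input file specified" is php-cgi reporting that it could not open the SCRIPT_FILENAME it was handed. One hedged guess is that the unresolved ".." components in server_root + "/../../htdocs" survive into the path given to PHP and trip it up on Windows; hard-coding the absolute, normalized path is a cheap way to rule that out. A trivial sketch of the normalization, with a hypothetical install path:

        import os.path

        server_root = "C:/tools/LightTPD"          # hypothetical install path
        docroot = server_root + "/../../htdocs"

        # the literal value you could hard-code in lightTPD.conf instead
        print(os.path.normpath(docroot))           # C:\htdocs on Windows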


  • Troubleshooting DTCPing Errors

    - by JimmyP
    So I am running DTCping between 2 machines on our network and am getting the following error:

        ++++++++++++++++++++++++++++++++++++++++++++++
        DTCping 1.9 Report for WEB2
        ++++++++++++++++++++++++++++++++++++++++++++++
        RPC server is ready
        ++++++++++++Validating Remote Computer Name++++++++++++
        03-03, 13:39:45.099-->Start DTC connection test
        Name Resolution: internal-->10.20.3.236-->internal.something
        03-03, 13:39:45.114-->Start RPC test (WEB2-->internal)
        Problem:fail to invoke remote RPC method
        Error(0x6BA) at dtcping.cpp @303
        -->RPC pinging exception
        -->1722(The RPC server is unavailable.)
        RPC test failed

    I have also run RPC ping, where I get what I believe is the same error:

        C:\Program Files\Windows Resource Kits\Tools>rpcping -s internal
        Exception 1722 (0x000006BA)
        Number of records is: 4
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 1722
        Detection location is 323
        Flags is 0
        NumberOfParameters is 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 1237
        Detection location is 313
        Flags is 0
        NumberOfParameters is 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 10060
        Detection location is 311
        Flags is 0
        NumberOfParameters is 3
        Long val: 135
        Pointer val: 0
        Pointer val: 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 10060
        Detection location is 318
        Flags is 0
        NumberOfParameters is 0

    I'm pretty sure that the exception number 1722 is the key, but I can't find any info about it. There may be a firewall with ports that need opening between the machines, which I am checking with our sys admins now. But I can do a regular ping between the machines. Other than that, I am reading a lot of articles talking about OS services and components I know nothing about and am having trouble finding any info on. Can anyone shed any light on this? FYI the machine is running Windows Server 2003 R2 SP2.
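
    Editor's note: 1722 (0x6BA) is the Win32 error RPC_S_SERVER_UNAVAILABLE; its text already appears in the DTCping output ("The RPC server is unavailable"). The Status 10060 records are TCP connection timeouts, and the "Long val: 135" points at the RPC endpoint mapper port, so a firewall blocking TCP 135 (plus the dynamic RPC port range DTC negotiates afterwards) fits the symptoms: ICMP ping can succeed while RPC traffic is dropped. A quick reachability probe, hostname taken from the question:

        import socket

        def reachable(host, port, timeout=3):
            """Return True if a TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # 135 is the RPC endpoint mapper; DTC also needs the dynamic RPC range
        print("internal:135 reachable:", reachable("internal", 135))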


  • How to decouple development server from Internet?

    - by intoxicated.roamer
    I am working in a small set-up where there are 4 developers (might grow to 6 or 8 in a couple of years). I want to set up an environment in which developers get internet access but can not share any data from the company on the internet. I have thought of the following plan:

      - Set up a centralized git server (Debian). The server will have internet access. A developer will only have a git account on that server, and won't have any other account on it.
      - Do not give internet access to a developer's individual machine (Windows XP/Windows 7).
      - Run a virtual machine (any multi-user OS) on the centralized server (the same one on which git is hosted). A developer will have an account on this virtual machine. He/she can access the internet via this virtual machine. Any data movement between this virtual machine and the underlying server, as well as any of the developers' machines, is prohibited.
      - All developers require a USB port on their local machine, so that they can burn their code into a microcontroller. This port will be made available only to the associated software that dumps the code into a microcontroller (MPLAB in the current case). All other software will be prohibited from accessing the port.

    As more developers get added, providing internet support for them will become difficult with this plan, as it will slow down the virtual machine running on the server. Can anyone suggest an alternative? Are there any obvious flaws in the above plan?

    Some key details of the server are as below:

      1) OS: Debian
      2) RAM: 8GB
      3) CPU: Intel Xeon E3-1220v2 4C/4T


  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect.

    Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment.

    Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process will require a lot of effort and will be fragile.

    Are there some better methods or tools that I'm missing?


  • XNA Notes 009

    - by George Clingerman
    This past week the MVPs (myself included) were on Microsoft campus for the MVP summit, so I apologize in advance if you did something cool or heard of something cool happening with XNA and XBLIGs and it's not in my notes. I did my best to stay on top of things, but honestly this community is fast and furious with what it's doing and creating. I really can't keep up, and that's fantastic! But here's what I *did* notice while I was there on Microsoft campus (and I did make sure to point out to the XNA team several of these very cool happenings while I had their ears).

    Time Critical XNA News:
      - The XNA team wants you to know that Dream Build Play registration is now open! http://blogs.msdn.com/b/xna/archive/2011/02/28/registration-now-open-for-dream-build-play-2011-challenge.aspx
      - Join the XNA-UK crew on March 24, 2011 at the Microsoft Tech Days Conference http://xna-uk.net/blogs/darkgenesis/archive/2011/02/27/join-the-xna-uk-crew-at-the-microsoft-tech-days-conference-on-24th-march-2011.aspx

    XNA Team:
      - Shawn Hargreaves shares one of the coolest things that's happened in the XNA community http://blogs.msdn.com/b/shawnhar/archive/2011/03/02/xbox-indies-pivot-view.aspx
      - Nick Gravelyn continues his unique marketing/work prioritization strategy as he tries to get to 5,000 Pixel Man users before he makes Pixel Man 2 (and he's almost there!) http://nickgravelyn.com/pixelman2/

    XNA MVPs:
      - A lot of the XNA MVPs were at the Microsoft MVP Summit 2011. Due to NDAs, most things can't be shared, but I'm sure if you're curious you could ask them about the general vibe and feeling they got from the team and the future of XNA/XBLIG and more.
      - Catalin Zima and team release the free WP7 game Chickens Can Dream http://twitter.com/CatalinZima/statuses/41174062923390976 http://www.amusedsloth.com/2011/02/chickens-can-dream-is-live/
      - Charles Humphrey (NemoKrad) posts his March talk source and PowerPoint http://xna-uk.net/blogs/randomchaos/archive/2011/03/04/march-2011-talk-post-processing-framework.aspx

    XNA Developers:
      - Michael B. McLaughlin posts about ANTS Memory Profiler and creates a CheckMemoryAllocationGame sample (extremely useful if you're looking to see how much memory some operation allocates!) http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx http://geekswithblogs.net/mikebmcl/archive/2011/03/01/checkmemoryallocationgame-sample.aspx
      - Andy Schatz (2009 IGF winner for Monaco) talking XNA at GDC 2011 http://www.gamasutra.com/view/news/33313/GDC_2011_Andy_Schatz_Ill_Make_My_Last_Game_When_I_Die.php

    Xbox LIVE Indie Games (XBLIG):
      - Clover: A Curious Tale by BinaryTweed is coming as a Deal of the Week during St. Patrick's Day http://majornelson.com/archive/2011/03/03/comingsoontothexboxlivemarketplacemarchthird.aspx
      - Ska Studios away at GDC but still very post happy as always http://www.ska-studios.com/2011/03/02/swamped-picture-pack/ http://www.ska-studios.com/2011/02/28/the-february-showcase/ http://www.ska-studios.com/2011/02/25/good-morning-gato-51-smelling-the-roses/
      - Just Press Start interviews Matthew Mikuszewski of Darkwind Media about Blocks Indie http://justpressstart.net/?p=516
      - Gamergeddon Xbox Indie Game Round Up - February 27th http://www.gamergeddon.com/2011/02/27/xbox-indie-game-round-up-february-27th/ http://www.gamergeddon.com/category/xbox-360/indie-games/
      - GameMarx does a round up of all the Xbox Live Indie Game podcasts that are currently available http://www.gamemarx.com/news/2011/02/27/xbox-live-indie-game-podcasts.aspx
      - GameMarx episode 11 http://www.gamemarx.com/video/the-show/26/ep-11-february-25-2011.aspx
      - In perhaps what I feel is the most exciting news I've heard all week, Michael C. Neel (ViNull of GameMarx fame) re-launched XboxIndies.com! http://www.gamemarx.com/news/2011/03/01/the-relaunch-of-xboxindies-com.aspx http://xboxindies.com/
      - Armless Octopus shares a little of what they heard from Luke Schneider of Radiangames during his GDC 2011 talk http://www.armlessoctopus.com/2011/03/02/gdc-2011-luke-schneider-offers-insight-into-radiangames-success/
      - VVGindiecast Episode 1 with guests Derek Strickland (Mr_Deeke), Kris Steele (Kriswd40 from FunInfused Games) and Dave Voyles (from armlessoctopus.com) http://vvgtv.com/2011/02/25/vvgindiecast-xblig-podcast/
      - If you're doing Xbox LIVE Indie Game reviews, get in touch with XboxIndies.com to get into their aggregated feed http://forums.create.msdn.com/forums/p/76931/467189.aspx#467189
      - B.U.T.T.O.N and Flotilla represented XNA very well at the Independent Games Festival (are there any more games entered that were created using XNA? Stand up and be heard!) http://www.igf.com/php-bin/entry2011.php?id=374
      - Armless Octopus interview at GDC 2011 with Soulcaster creator Ian Stocker http://www.armlessoctopus.com/2011/03/04/gdc-2011-interview-with-soulcaster-creator-ian-stocker/
      - MommysBestGames gets a nod in the DarkBasic newsletter where it features the Explosionade Editor (just do a search for Explosionade to get to the interesting bits!) http://www.thegamecreators.com/pages/newsletters/newsletter_issue_98.html
      - You may be hearing the cries of FortressCraft (coming soon to XBLIG) being so wrong for stealing the idea from Minecraft. But did you know the game Minecraft started from was an XNA game called Infiniminer? XNA is getting its fingers into EVERYTHING! http://www.minecraftwiki.net/wiki/Infiniminer

    XNA Development:
      - TorqueX is NOT dead, thanks to the tremendous efforts of the XNA community working on the CEV (special thanks to @PinoEire for all his hard work on making that happen!) http://www.garagegames.com/community/blogs/view/20878 http://torquecev.com/
      - Dave Henry has posted XNA 3.x adding platformer starter kit to the network game state management on his new site http://twitter.com/#!/mort8088/status/43407715908853760 http://mort8088.com/2011/03/03/xna-3-x-adding-platformer-starter-kit-to-network-game-state-management/
      - Mark Bamford releases XNAViewer 4.0, great for running XNA games inside of a Windows Form (for building level editors, etc.) http://twitter.com/#!/xzodia04/status/43466830412660736 http://xnaviewer.codeplex.com/
      - Unit testing an XNA game with ReSharper and NUnit http://smnbss.wordpress.com/2011/02/28/planetx-unit-testing-an-xna-game-with-resharper-and-nunit-wp7-xbox-xna/
      - XNA for Silverlight developers: Part 5 - Input (touch + gestures) http://ht.ly/1bxwUE
      - Mike McLaughlin shares a link he stumbled across for those looking to understand vector and matrix math http://twitter.com/#!/mikebmcl/status/42587074725036032 http://chortle.ccsu.edu/VectorLessons/vectorIndex.html
      - DigitalRune Resources Pooling in XNA (Part 1) http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/84/DigitalRune-Helper-Library-Resource-Pooling-in-XNA-Part-1.aspx
      - JohnK "bobthecbuilder" released a new SunBurn update that lowers the requirements for Windows games http://twitter.com/#!/bobthecbuilder/status/43457306578522112 http://www.synapsegaming.com/blogs/johnk/archive/2011/03/03/sunburn-update-windows-redistributable.aspx
      - Quick update on the Indiefreaks Game Framework v0.4 development status http://indiefreaks.com/2011/03/04/quick-update-on-igf-v0-4-development/


  • System locking up with suspicious messages about hard disk

    - by Chris Conway
    My system has started behaving strangely, intermittently locking up. I see messages like the following in syslog:

        Nov 18 22:22:00 claypool kernel: [ 3428.078156] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        Nov 18 22:22:00 claypool kernel: [ 3428.078163] ata3.00: irq_stat 0x40000000
        Nov 18 22:22:00 claypool kernel: [ 3428.078167] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00
        Nov 18 22:22:00 claypool kernel: [ 3428.078182] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        Nov 18 22:22:00 claypool kernel: [ 3428.078184] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
        Nov 18 22:22:00 claypool kernel: [ 3428.078188] ata3.00: status: { DRDY }
        Nov 18 22:22:00 claypool kernel: [ 3428.080887] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        Nov 18 22:22:00 claypool kernel: [ 3428.080890] ata3.00: irq_stat 0x40000000
        Nov 18 22:22:00 claypool kernel: [ 3428.080893] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00
        Nov 18 22:22:00 claypool kernel: [ 3428.080905] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        Nov 18 22:22:00 claypool kernel: [ 3428.080906] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
        Nov 18 22:22:00 claypool kernel: [ 3428.080910] ata3.00: status: { DRDY }

    And then this:

        Nov 18 23:13:56 claypool kernel: [ 6544.000798] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        Nov 18 23:13:56 claypool kernel: [ 6544.000804] ata1.00: failed command: FLUSH CACHE EXT
        Nov 18 23:13:56 claypool kernel: [ 6544.000814] ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        Nov 18 23:13:56 claypool kernel: [ 6544.000815] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout)
        Nov 18 23:13:56 claypool kernel: [ 6544.000819] ata1.00: status: { DRDY }
        Nov 18 23:13:56 claypool kernel: [ 6544.000825] ata1: hard resetting link
        Nov 18 23:14:01 claypool kernel: [ 6549.360324] ata1: link is slow to respond, please be patient (ready=0)
        Nov 18 23:14:06 claypool kernel: [ 6554.008091] ata1: COMRESET failed (errno=-16)
        Nov 18 23:14:06 claypool kernel: [ 6554.008103] ata1: hard resetting link
        Nov 18 23:14:11 claypool kernel: [ 6559.372246] ata1: link is slow to respond, please be patient (ready=0)
        Nov 18 23:14:16 claypool kernel: [ 6564.020228] ata1: COMRESET failed (errno=-16)
        Nov 18 23:14:16 claypool kernel: [ 6564.020235] ata1: hard resetting link
        Nov 18 23:14:21 claypool kernel: [ 6569.380109] ata1: link is slow to respond, please be patient (ready=0)
        Nov 18 23:14:31 claypool kernel: [ 6579.460243] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Nov 18 23:14:31 claypool kernel: [ 6579.486595] ata1.00: configured for UDMA/133
        Nov 18 23:14:31 claypool kernel: [ 6579.486601] ata1.00: retrying FLUSH 0xea Emask 0x4
        Nov 18 23:14:31 claypool kernel: [ 6579.486939] ata1.00: device reported invalid CHS sector 0
        Nov 18 23:14:31 claypool kernel: [ 6579.486952] ata1: EH complete
        Nov 18 23:17:01 claypool CRON[3910]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Nov 18 23:17:01 claypool CRON[3908]: (CRON) error (grandchild #3910 failed with exit status 1)
        Nov 18 23:17:01 claypool postfix/sendmail[3925]: fatal: open /etc/postfix/main.cf: No such file or directory
        Nov 18 23:17:01 claypool CRON[3908]: (root) MAIL (mailed 1 byte of output; but got status 0x004b, #012)
        Nov 18 23:39:01 claypool CRON[4200]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm)

    There are no messages marked after 23:39. When I next tried to use the machine, it would not return from the screensaver (blank screen), nor switch to another terminal, and I had to hard reboot it.

    [UPDATE] The output of smartctl is here. I had trouble getting this, because / is being mounted read-only (?!), which prevents most applications from running. Also, it may not be related, but I have the following worrying messages in dmesg:

        [   10.084596] k8temp 0000:00:18.3: Temperature readouts might be wrong - check erratum #141
        [   10.098477] i2c i2c-0: nForce2 SMBus adapter at 0x600
        [   10.098483] ACPI: resource nForce2_smbus [io 0x0700-0x073f] conflicts with ACPI region SM00 [??? 0x00000700-0x0000073f flags 0x30]
        [   10.098486] ACPI: This conflict may cause random problems and system instability
        [   10.098487] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
        [   10.098509] i2c i2c-1: nForce2 SMBus adapter at 0x700
        [   10.112570] Linux agpgart interface v0.103
        [   10.155329] atk: Resources not safely usable due to acpi_enforce_resources kernel parameter
        [   10.161506] it87: Found IT8712F chip at 0x290, revision 8
        [   10.161517] it87: VID is disabled (pins used for GPIO)
        [   10.161527] it87: in3 is VCC (+5V)
        [   10.161528] it87: in7 is VCCH (+5V Stand-By)
        [   10.161560] ACPI: resource it87 [io 0x0295-0x0296] conflicts with ACPI region ECRE [??? 0x00000290-0x000002af flags 0x45]
        [   10.161562] ACPI: This conflict may cause random problems and system instability
        [   10.161564] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver

    [UPDATE 2] I swapped in a new SATA cable, per Phil's suggestion. The current output of smartctl is here, if it helps.

    [UPDATE 3] I don't think the cable fixed it. The system hasn't locked up yet, but my media player crashed a few minutes ago and I have the following in the syslog:

        Nov 20 16:07:17 claypool kernel: [ 2294.400033] ata1: link is slow to respond, please be patient (ready=0)
        Nov 20 16:07:47 claypool kernel: [ 2324.084581] ata1: COMRESET failed (errno=-16)
        Nov 20 16:07:47 claypool kernel: [ 2324.084588] ata1: limiting SATA link speed to 1.5 Gbps
        Nov 20 16:07:47 claypool kernel: [ 2324.084592] ata1: hard resetting link

    I get the following response from smartctl:

        $ sudo smartctl -a /dev/sda
        [sudo] password for chris:
        sudo: Can't open /var/lib/sudo/chris/0: Read-only file system
        smartctl 5.40 2010-03-16 r3077 [i686-pc-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
        Device: /0:0:0:0 Version:
        scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
        >> Terminate command early due to bad response to IEC mode page
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.


  • ASP.NET Web API - Screencast series Part 3: Delete and Update

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

    In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we'll start to modify data on the server using DELETE and POST methods.

    So far we've been looking at GET requests, and the difference between standard browsing in a web browser and navigating an HTTP API isn't quite as clear. Delete is where the difference becomes more obvious. With a "traditional" web page, to delete something you'd probably have a form that POSTs a request back to a controller that needs to know that it's really supposed to be deleting something even though POST was really designed to create things, so it does the work and then returns some HTML back to the client that says whether or not the delete succeeded. There's a good amount of plumbing involved in communicating between client and server. That gets a lot easier when we just work with the standard HTTP DELETE verb. Here's how the server side code works:

        public Comment DeleteComment(int id) {
            Comment comment;
            if (!repository.TryGet(id, out comment))
                throw new HttpResponseException(HttpStatusCode.NotFound);
            repository.Delete(id);
            return comment;
        }

    If you look back at the GET /api/comments code in Part 2, you'll see that they start the exact same because the use cases are kind of similar - we're looking up an item by id and either displaying it or deleting it. So the only difference is that this method deletes the comment once it finds it. We don't need to do anything special to handle cases where the id isn't found, as the same HTTP 404 handling works fine here, too.

    Pretty much all "traditional" browsing uses just two HTTP verbs: GET and POST, so you might not be all that used to DELETE requests and think they're hard. Not so! Here's the jQuery method that calls /api/comments with the DELETE verb:

        $(function() {
            $("a.delete").live('click', function () {
                var id = $(this).data('comment-id');
                $.ajax({
                    url: "/api/comments/" + id,
                    type: 'DELETE',
                    cache: false,
                    statusCode: {
                        200: function(data) {
                            viewModel.comments.remove(
                                function(comment) { return comment.ID == data.ID; }
                            );
                        }
                    }
                });
                return false;
            });
        });

    So in order to use the DELETE verb instead of GET, we're just using $.ajax() and setting the type to DELETE. Not hard. But what's that statusCode business? Well, an HTTP status code of 200 is an OK response. Unless our Web API method sets another status (such as by throwing the Not Found exception we saw earlier), the default response status code is HTTP 200 - OK. That makes the jQuery code pretty simple - it calls the Delete action, and if it gets back an HTTP 200, the server-side delete was successful so the comment can be deleted.

    Adding a new comment uses the POST verb. It starts out looking like an MVC controller action, using model binding to get the new comment from JSON data into a C# model object to add to the repository, but there are some interesting differences:

        public HttpResponseMessage<Comment> PostComment(Comment comment) {
            comment = repository.Add(comment);
            var response = new HttpResponseMessage<Comment>(comment, HttpStatusCode.Created);
            response.Headers.Location = new Uri(Request.RequestUri, "/api/comments/" + comment.ID.ToString());
            return response;
        }

    First off, the POST method is returning an HttpResponseMessage<Comment>. In the GET methods earlier, we were just returning a JSON payload with an HTTP 200 OK, so we could just return the model object and Web API would wrap it up in an HttpResponseMessage with that HTTP 200 for us (much as ASP.NET MVC controller actions can return strings, and they'll be automatically wrapped in a ContentResult). When we're creating a new comment, though, we want to follow standard REST practices and return the URL that points to the newly created comment in the Location header, and we can do that by explicitly creating that HttpResponseMessage and then setting the header information. And here's a key point - by using HTTP standard status codes and headers, our response payload doesn't need to explain any context - the client can see from the status code that the POST succeeded, the Location header tells it where to get it, and all it needs in the JSON payload is the actual content.

    Note: This is a simplified sample. Among other things, you'll need to consider security and authorization in your Web API's, and especially in methods that allow creating or deleting data. We'll look at authorization in Part 6. As for security, you'll want to consider things like mass assignment if binding directly to model objects, etc.

    In Part 4, we'll extend on our simple querying methods from Part 2, adding in support for paging and querying.
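
    Editor's note: one nice property of leaning on plain HTTP verbs is that any HTTP client can exercise the API, not just the browser. A quick hedged sketch using Python's third-party requests library against the sample's comments endpoint (host, port, and comment id are placeholders for wherever the sample app happens to be running):

        import requests  # third-party: pip install requests

        resp = requests.delete("http://localhost:8080/api/comments/1")
        print(resp.status_code)   # 200 on a successful delete, 404 if missing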


  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
    Did you know you can "print an outline"? You can print any outline or portion of an outline.

    Why might you want to "print an outline" in UPK? Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or even better, how many topics are complete? Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor.

    To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns refer to the Add and Remove Columns section of the Content Development.pdf guide.)

    To print an outline from the Outline Editor:

      1. Open a module or section document in the Outline Editor.
      2. Expand the documents to display the details that you want included in the report.
      3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file.

    Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel you can add columns of information, such as development notes, to your outline, or you can graph and chart your project status. As mentioned above, you can configure what columns you want to appear in the outline. When utilizing the Print an Outline feature in conjunction with the Managing Workflow features of the UPK multi-user instance, you as a team lead or author can better report project status. Read more about Managing Workflow below.

    Managing Workflow: The Properties toolpane contains special properties that allow authors to track document status or State as well as assign Document Ownership.

    Assign Content State: The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting or renaming the values.

    To assign a State value to a document:

      1. Make sure you are working online.
      2. Display the Properties toolpane.
      3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
      4. In the Workflow category, click in the State cell.
      5. Select a value from the list.

    Assign Document Ownership: In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane.

    To assign a document owner:

      1. Make sure you are working online.
      2. On the View menu, choose Properties.
      3. Select the document(s) to which you want to assign document responsibility. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
      4. In the Workflow category, click in the Owner cell.
      5. Select a name from the list.

    Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think.

    - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management


  • Entity Framework Batch Update and Future Queries

    - by pwelter34
    Entity Framework Extended Library - a library that extends the functionality of Entity Framework.

    Features:
      - Batch Update and Delete
      - Future Queries
      - Audit Log

    Project Package and Source:
      - NuGet Package: PM> Install-Package EntityFramework.Extended
      - NuGet: http://nuget.org/List/Packages/EntityFramework.Extended
      - Source: http://github.com/loresoft/EntityFramework.Extended

    Batch Update and Delete

    A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. Now in most scenarios this is just fine. There are however some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it.

    Deleting:

        //delete all users where FirstName matches
        context.Users.Delete(u => u.FirstName == "firstname");

    Update:

        //update all tasks with status of 1 to status of 2
        context.Tasks.Update(
            t => t.StatusId == 1,
            t => new Task {StatusId = 2});

        //example of using an IQueryable as the filter for the update
        var users = context.Users
            .Where(u => u.FirstName == "firstname");
        context.Users.Update(
            users,
            u => new User {FirstName = "newfirstname"});

    Future Queries

    Build up a list of queries for the data that you need, and the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is a great win. Using this feature is as simple as appending .Future() to the end of your queries. To use the Future Queries, make sure to import the EntityFramework.Extensions namespace. Future queries are created with the following extension methods:

      - Future()
      - FutureFirstOrDefault()
      - FutureCount()

    Sample:

        // build up queries
        var q1 = db.Users
            .Where(t => t.EmailAddress == "[email protected]")
            .Future();
        var q2 = db.Tasks
            .Where(t => t.Summary == "Test")
            .Future();

        // this triggers the loading of all the future queries
        var users = q1.ToList();

    In the example above, there are 2 queries built up; as soon as one of the queries is enumerated, it triggers the batch load of both queries.

        // base query
        var q = db.Tasks.Where(t => t.Priority == 2);
        // get total count
        var q1 = q.FutureCount();
        // get page
        var q2 = q.Skip(pageIndex).Take(pageSize).Future();
        // triggers execute as a batch
        int total = q1.Value;
        var tasks = q2.ToList();

    In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch together the queries to get all the data in one database call.

    Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query.

    Audit Log

    The Audit Log feature will capture the changes to entities anytime they are submitted to the database. The Audit Log captures only the entities that are changed and only the properties on those entities that were changed. The before and after values are recorded. AuditLogger.LastAudit is where this information is held, and there is a ToXml() method that makes it easy to turn the AuditLog into xml for easy storage. The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API.

    Fluent Configuration:

        // config audit when your application is starting up...
        var auditConfiguration = AuditConfiguration.Default;
        auditConfiguration.IncludeRelationships = true;
        auditConfiguration.LoadRelationships = true;
        auditConfiguration.DefaultAuditable = true;

        // customize the audit for Task entity
        auditConfiguration.IsAuditable<Task>()
            .NotAudited(t => t.TaskExtended)
            .FormatWith(t => t.Status, v => FormatStatus(v));

        // set the display member when status is a foreign key
        auditConfiguration.IsAuditable<Status>()
            .DisplayMember(t => t.Name);

    Create an Audit Log:

        var db = new TrackerContext();
        var audit = db.BeginAudit();
        // make some updates ...
        db.SaveChanges();
        var log = audit.LastLog;


  • Problem occurs during installation of moses scripts

    - by lenny99
    We got an error when compiling the moses scripts. The process is as follows:

        minakshi@minakshi-Vostro-3500:~/Desktop/monu/moses/scripts$ make release
        # Compile the parts
        make all
        make[1]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts'
        # Building memscore may fail e.g. if boost is not available.
        # We ignore this because traditional scoring will still work and memscore isn't used by default.
        cd training/memscore ; \
        ./configure && make \
        || ( echo "WARNING: Building memscore failed."; \
        echo 'training/memscore/memscore' >> ../../release-exclude )
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for g++... g++
        checking whether the C++ compiler works... yes
        checking for C++ compiler default output file name... a.out
        checking for suffix of executables...
        checking whether we are cross compiling... no
        checking for suffix of object files... o
        checking whether we are using the GNU C++ compiler... yes
        checking whether g++ accepts -g... yes
        checking for style of include used by make... GNU
        checking dependency style of g++... gcc3
        checking for gcc... gcc
        checking whether we are using the GNU C compiler... yes
        checking whether gcc accepts -g... yes
        checking for gcc option to accept ISO C89... none needed
        checking dependency style of gcc... gcc3
        checking for boostlib >= 1.31.0... yes
        checking for cos in -lm... yes
        checking for gzopen in -lz... yes
        checking for cblas_dgemm in -lgslcblas... no
        checking for gsl_blas_dgemm in -lgsl... no
        checking how to run the C++ preprocessor... g++ -E
        checking for grep that handles long lines and -e... /bin/grep
        checking for egrep... /bin/grep -E
        checking for ANSI C header files... yes
        checking for sys/types.h... yes
        checking for sys/stat.h... yes
        checking for stdlib.h... yes
        checking for string.h... yes
        checking for memory.h... yes
        checking for strings.h... yes
        checking for inttypes.h... yes
        checking for stdint.h... yes
        checking for unistd.h... yes
        checking n_gram.h usability... no
        checking n_gram.h presence... no
        checking for n_gram.h... no
        checking for size_t... yes
        checking for ptrdiff_t... yes
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: creating config.h
        config.status: config.h is unchanged
        config.status: executing depfiles commands
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        make all-am
        make[3]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        make[3]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore'
        touch release-exclude # No files excluded by default
        pwd=`pwd`; \
        for subdir in cmert-0.5 phrase-extract symal mbr lexical-reordering; do \
        make -C training/$subdir || exit 1; \
        echo "### Compiler $subdir"; \
        cd $pwd; \
        done
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5'
        ### Compiler cmert-0.5
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract'
        ### Compiler phrase-extract
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal'
        ### Compiler symal
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr'
        ### Compiler mbr
        make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering'
        make[2]: Nothing to be done for `all'.
        make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering'
        ### Compiler lexical-reordering
        ## All files that need compilation were compiled
        make[1]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts'
        /bin/sh: ./check-dependencies.pl: not found
        make: *** [release] Error 127

    We don't know why this error occurs. The check-dependencies.pl file exists in the scripts folder...
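
    Editor's note: Error 127 means the shell could not run the command at all. When the file demonstrably exists, /bin/sh's "not found" usually points at the interpreter named in the script's shebang line rather than at the script itself: a perl path that doesn't exist, a missing execute bit (fix with chmod +x check-dependencies.pl), or DOS line endings, where a trailing \r makes "#!/usr/bin/perl\r" a nonexistent interpreter. A quick hedged check of the first line:

        # Inspect the interpreter line of the script /bin/sh claims is missing
        with open('check-dependencies.pl', 'rb') as f:
            first = f.readline()
        print(first)
        # b'#!/usr/bin/perl\n'   -> interpreter path should exist on disk
        # b'#!/usr/bin/perl\r\n' -> CRLF endings would explain "not found"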


  • Where is android.os.SystemProperties

    - by Travis
    I'm looking at the Android Camera code, and when I try importing android.os.SystemProperties it cannot be found. Here is the file I'm looking at:

        http://android.git.kernel.org/?p=platform/packages/apps/Camera.git;a=blob;f=src/com/android/camera/VideoCamera.java;h=8effd3c7e2841eb1ccf0cfce52ca71085642d113;hb=refs/heads/eclair

    I created a new 2.1 project and tried importing this namespace again, but it still cannot be found. I checked developer.android.com and SystemProperties was not listed. Did I miss something?


  • installing by_star gem?

    - by keruilin
    Can someone shed light on how to set up the by_star gem (https://rubygems.org/gems/by_star)? I ran:

        ruby script/plugin install git://github.com/radar/by_star.git

    However, when I went to call a by_star method in one of my models from the console, I got an undefined method error. Do I need an include or require statement? Could I be missing dependent gems? Is it a Ruby version issue?


  • background-fu pluggin unable to run ./script/generate background

    - by Viola
    I'm following this rdoc (http://rdoc.info/projects/ncr/background-fu) and can't run ./script/generate background after installing background-fu as a Rails plugin:

        ./script/plugin install git://github.com/ncr/background-fu.git

    I'm getting the following error:

        Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- job (MissingSourceFile)

    Am I missing something?


  • Why am i getting these errors from GitHub?

    - by acidzombie24
    I followed these instructions and could not connect to GitHub for the life of me.

        >plink -ssh github.com
        FATAL ERROR: Disconnected: No supported authentication methods available

        >plink -ssh [email protected]
        You've successfully authenticated, but GitHub does not

    With TortoiseGit:

        git.exe push "origin" master
        ERROR: Permission to name/MyEmptyRepoOnGitHubHere denied to name.
        fatal: The remote end hung up unexpectedly

    What's going on? NOTE: I followed the instructions carefully. It was a lot worse before I followed them.


  • Rails OpenID Authentication Plugin No Longer Installs Rake Tasks?

    - by Rich Apodaca
    I'm following the Railscasts tutorial on using OpenID with AuthLogic. This command:

        $ script/plugin install git://github.com/rails/open_id_authentication.git

    installs the plugin, but I don't see any OpenID Rake tasks (rake -T). In particular, I can no longer run the task:

        $ rake open_id_authentication:db:create

    With previous applications, the Rake tasks were installed without a problem, so what's changed with the plugin? Which version of the plugin do I need to get the behavior I'm looking for? Using Rails 2.3.5.


  • NoMethodError: undefined method `has_attached_file'

    - by mirza
    Paperclip produces this error after checking out the plugin's rails3 branch. My Gemfile has the following line:

        gem 'paperclip', :git => 'http://github.com/thoughtbot/paperclip.git', :branch => 'rails3'

    And the error message is:

        NoMethodError: undefined method `has_attached_file' for #<Class:0x2a50530>


  • commons-logging-1.1.jar; cannot read zip file entry

    - by user1226162
    I have imported a GWT project from Git, but when I run maven install it says:

        .m2\repository\commons-logging\commons-logging\1.1\commons-logging-1.1.jar; cannot read zip file entry

    And if I simply run my application, I get this:

        \git\my-Search-Engine\qsse\war}: java.lang.NoClassDefFoundError: com/google/inject/servlet/GuiceServletContextListener

    I tried to find a way out. One solution I found was to move guice-servlet-3.0 from the build path to \qsse\war\WEB-INF\lib, but if I do that I start getting the exception:

        java.lang.NoClassDefFoundError: com/google/inject/Injector

    Any idea how I can resolve this?
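
    Editor's note: "cannot read zip file entry" usually means the jar in the local Maven repository is corrupt (e.g. a truncated download); jars are ordinary zip archives, so you can verify this directly, and the standard fix is to delete the bad artifact directory under ~/.m2 and let Maven re-download it. A quick integrity check, with the full path filled in as a placeholder since the question abbreviates it:

        import zipfile

        # placeholder path; the question abbreviates the ~/.m2 prefix
        jar = r"C:\Users\you\.m2\repository\commons-logging\commons-logging\1.1\commons-logging-1.1.jar"

        try:
            with zipfile.ZipFile(jar) as zf:
                print("first corrupt entry:", zf.testzip())   # None means intact
        except zipfile.BadZipFile:
            print("unreadable jar -- delete it and rerun Maven to re-download")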

