Search Results

Search found 3833 results on 154 pages for 'git noob'.


  • VIM "upgraded" to expandtab and tabstop=8 on Python files

    - by dotancohen
    After reinstalling my OS (Kubuntu 12.10 to Kubuntu 14.04), Vim has changed its behaviour when editing Python files. Before the reinstall, all file types had noexpandtab and tabstop=4 set; now, in Python files, those values are expandtab and tabstop=8, verified both via Vim's behaviour and by asking Vim with set foo?. Non-Python files retain the noexpandtab and tabstop=4 behaviour that I prefer. The .vim directory and .vimrc were not touched during the reinstall. It can be seen that no files in .vim have been touched in months (with the exception of the irrelevant .netrwhist):

        bruno():~$ ls -lat ~/.vim
        total 68
        drwxr-xr-x 85 dotancohen dotancohen 12288 Aug 25 13:00 ..
        drwxr-xr-x 12 dotancohen dotancohen  4096 Aug 21 11:11 .
        -rw-r--r--  1 dotancohen dotancohen   268 Aug 21 11:11 .netrwhist
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 plugin
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 doc
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 syntax
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 ftplugin
        drwxr-xr-x  4 dotancohen dotancohen  4096 Nov 29  2013 autoload
        drwxrwxr-x  5 dotancohen dotancohen  4096 May 27  2013 after
        drwxr-xr-x  2 dotancohen dotancohen  4096 Nov  1  2012 spell
        -rw-------  1 dotancohen dotancohen   138 Aug 14  2012 .directory
        -rw-rw-r--  1 dotancohen dotancohen   190 Jul  3  2012 .VimballRecord
        drwxrwxr-x  2 dotancohen dotancohen  4096 May 12  2012 colors
        drwxrwxr-x  2 dotancohen dotancohen  4096 Mar 16  2012 mytags
        drwxrwxr-x  2 dotancohen dotancohen  4096 Feb 14  2012 keymap

    Though .vimrc has been touched since the reinstall, that was only me testing to see where the problem is. How can I tell what is setting expandtab and tabstop? Side note: I'm not even sure what I should read in the built-in help for this issue. I started with ":h plugin", but that did not help other than showing me that the following plugins are loaded (possibly relevant):

        standard-plugin-list   Standard plugins
        pi_getscript.txt       Downloading latest version of Vim scripts
        pi_gzip.txt            Reading and writing compressed files
        pi_netrw.txt           Reading and writing files over a network
        pi_paren.txt           Highlight matching parens
        pi_tar.txt             Tar file explorer
        pi_vimball.txt         Create a self-installing Vim script
        pi_zip.txt             Zip archive explorer
        LOCAL ADDITIONS:       local-additions
        DynamicSigns.txt       Using Signs for different things
        NrrwRgn.txt            A Narrow Region Plugin (similar to Emacs)
        fugitive.txt           A Git wrapper so awesome, it should be illegal
        indent-object.txt      Text objects based on indent levels.
        taglist.txt            Plugin for browsing source code
        vimwiki.txt            A Personal Wiki for Vim
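    For reference, Vim can report which script last set an option; a minimal check, assuming a .py file triggers the change (the runtime path in the comment is only an example of what to expect):

        # open any Python file and ask where the options were last set
        vim -c 'verbose set expandtab? tabstop?' testfile.py
        # Vim prints the values plus a line such as:
        #   Last set from /usr/share/vim/vim74/ftplugin/python.vim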

    Read the article

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory out to the VM; the goal here is to run the Rails server on the VM but host the files on OS X (so they are automatically backed up, and so I can use tools like TextMate to develop with). Everything seems to work with the shared directory: my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc.) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This persists until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works. It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice... any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random; it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the "operation not permitted" error.
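    For reference, a quick way to watch the pattern from the Debian guest; a minimal sketch, assuming the commands run from the project root on the shared mount (the chmod line is a blunt workaround, not a fix):

        ls -ld tmp/cache/assets/*              # compare modes on the three-character dirs
        umask                                  # a restrictive umask on the share would strip write bits
        chmod -R u+rwX,g+rwX,o+rwX tmp/cache   # force the modes back until the share behaves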

    Read the article

  • installed mongo using brew but stuck at prompt

    - by user50946
    I have installed mongo using brew on my Mac. When I give the mongo command I see this:

        MongoDB shell version: 2.4.6
        connecting to: test

    but it stays there and never gives me the command prompt back. Has anyone else noticed something like this? I have reinstalled with no luck; the issue is persistent. Thanks. Logs:

        ***** SERVER RESTARTED *****
        Fri Oct 18 08:11:48.360 [initandlisten] MongoDB starting : pid=2081 port=27017 dbpath=/usr/local/var/mongodb 64-bit host=Asims-MacBook-Air.local
        Fri Oct 18 08:11:48.360 [initandlisten] db version v2.4.6
        Fri Oct 18 08:11:48.360 [initandlisten] git version: nogitversion
        Fri Oct 18 08:11:48.360 [initandlisten] build info: Darwin minimountain.local 12.5.0 Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
        Fri Oct 18 08:11:48.360 [initandlisten] allocator: tcmalloc
        Fri Oct 18 08:11:48.360 [initandlisten] options: { bind_ip: "127.0.0.1", config: "/usr/local/etc/mongod.conf", dbpath: "/usr/local/var/mongodb", logappend: "true", logpath: "/usr/local/var/log/mongodb/mongo.log" }
        Fri Oct 18 08:11:48.361 [initandlisten] journal dir=/usr/local/var/mongodb/journal
        Fri Oct 18 08:11:48.361 [initandlisten] recover : no journal files present, no recovery needed
        Fri Oct 18 08:11:48.398 [websvr] admin web console waiting for connections on port 28017
        Fri Oct 18 08:11:48.398 [initandlisten] waiting for connections on port 27017
        Fri Oct 18 08:12:03.279 [signalProcessingThread] got signal 1 (Hangup: 1), will terminate after current cmd ends
        Fri Oct 18 08:12:03.279 [signalProcessingThread] now exiting
        Fri Oct 18 08:12:03.279 dbexit:
        Fri Oct 18 08:12:03.279 [signalProcessingThread] shutdown: going to close listening sockets...
        Fri Oct 18 08:12:03.279 [signalProcessingThread] closing listening socket: 9
        Fri Oct 18 08:12:03.279 [signalProcessingThread] closing listening socket: 10
        Fri Oct 18 08:12:03.280 [signalProcessingThread] closing listening socket: 11
        Fri Oct 18 08:12:03.280 [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: going to flush diaglog...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: going to close sockets...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: waiting for fs preallocator...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: lock for final commit...
        Fri Oct 18 08:12:03.280 [signalProcessingThread] shutdown: final commit...
        Fri Oct 18 08:12:03.282 [signalProcessingThread] shutdown: closing all files...
        Fri Oct 18 08:12:03.282 [signalProcessingThread] closeAllFiles() finished
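    For reference, the log above shows the server receiving a hangup signal and shutting down, so it is worth checking whether mongod is still alive when the shell hangs; a minimal sketch, using the log path from the config shown in the log:

        ps aux | grep '[m]ongod'                         # is the server actually running?
        tail -n 20 /usr/local/var/log/mongodb/mongo.log  # watch what happens at connect time
        mongo --norc                                     # skip ~/.mongorc.js in case a startup file blocks the shell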

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2-based web servers, so my experience is many hours, but it is limited to CentOS, specifically Amazon's distribution. I'm installing Apache using yum, therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from the non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite, BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache.

    TEST CASE

    Launch an EC2 instance, e.g. Amazon Linux AMI 2013.03.1, SSH to the server, and run the commands:

        $ sudo yum install httpd
        $ sudo apachectl start
        $ sudo vi /etc/httpd/conf/httpd.conf
        $ sudo apachectl restart
        $ sudo vi /var/www/html/.htaccess

    In httpd.conf I changed the following, in the DOCROOT section / scope:

        AllowOverride All

    In .htaccess, I added (EDIT: I added RewriteEngine On later):

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    Permissions on .htaccess are correct, as far as I can tell:

        $ ls -al /var/www/html/.htaccess
        -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess

    Other info:

        $ httpd -v
        Server version: Apache/2.2.24 (Unix)
        Server built: May 20 2013 21:12:45
        $ httpd -M
        Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared)
        Syntax OK

    EXPECTED BEHAVIOR

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    ACTUAL BEHAVIOR

        $ curl -I domain.com
        HTTP/1.1 200 OK
        Date: Wed, 19 Jun 2013 12:34:10 GMT
        Server: Apache/2.2.24 (Amazon)
        Connection: close
        Content-Type: text/html; charset=UTF-8

    TROUBLESHOOTING STEPS

    In .htaccess, I added:

        BLAH BLAH BLAH ERROR
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an error log entry:

        [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration

    Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working:

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
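    For reference, a quick sanity pass on the override configuration; a minimal sketch, assuming the stock Amazon Linux paths used above (the grep pattern depends on how the Directory block is written in your httpd.conf):

        grep -n -i -A4 '<Directory "/var/www/html">' /etc/httpd/conf/httpd.conf  # is AllowOverride All inside this block?
        sudo apachectl -t                                                        # config syntax check after edits
        sudo apachectl restart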

    Read the article

  • My VPS Ubuntu server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my VPS because my old installation was very slow; unfortunately this did not fix the problem. By slow I mean requests for my PHP websites take a long time: very slow (30 sec per request) to slow (3+ sec per request). When it's really bad, SSH is also laggish. The websites are: askmike.org (pretty standard WordPress) and mvr.me (own PHP).

    Slow?
    Very slow: here is a picture of loading a clean install of WordPress. Slow: here is a picture of loading a small PHP-based website.

    The VPS
    The VPS has 256 MB RAM and a 25 GB HDD. Besides serving the 2 small websites it isn't doing anything, AFAIK.

    What have I installed
    A clean Ubuntu Server 12.04, a LAMP stack, a few things like git and nodejs (not using both), ossec (because I thought my server was getting hammered), and munin.

    What I already tried / done
    I installed munin so that I could watch I/O speed and such. The problem is that I don't know what to look for in the munin report. I checked logs and don't see anything strange (although I don't really know what to look for besides strange / repetitive errors and GET requests). I configured the Apache MPM to:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           40
            MaxRequestsPerChild   0
        </IfModule>

    (Apache is using prefork, the default.)

    Stats
    I copied the munin report as it appeared at 4:50 last night to a site hosted on a shared webhost. Note that tonight my MySQL crashed somewhere after 1:00 (which is a new problem altogether), so the graph for last night might look strange.

    Can anyone help me get my VPS up to normal speed?

    EDIT: Thanks for the replies. The VPS is 10 bucks a month and is from directvps.nl (a Dutch host; I'm also Dutch). I did two speed tests for disk I/O:

        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s
        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s

    Anyway: how can I prove to my VPS host that it is too slow? I can understand a server being busy slowing a website down. But 5-30 sec load time for a normal PHP webpage?
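    For reference, a quick way to see where a slow request spends its time; a minimal sketch, assuming standard tools on the VPS:

        vmstat 1 5               # a high 'wa' column points at disk I/O, high 'us'/'sy' at CPU
        free -m                  # 256 MB disappears fast under Apache prefork plus MySQL
        top -b -n 1 | head -n 20 # snapshot of the biggest memory and CPU consumers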

    Read the article

  • OpenSSH (Windows) does not forward X11

    - by Shulhi Sapli
    I'm running Ubuntu 13.04 in a VM and I wanted to do X11 forwarding to my host (Windows 8); so far it works fine using PuTTY and the XMing server for Windows. But I am curious why it doesn't work if I use the OpenSSH binaries (they come together with Git for Windows). This is what I've done so far: ssh -X [email protected] (also tried with -Y), then gedit, but I received an error of "Cannot open display". echo $DISPLAY came out empty. So I tried export DISPLAY=localhost:0.0, but it still won't work. The DISPLAY environment variable that I set is exactly as when it runs with PuTTY. I also tried changing the DISPLAY to 192.168.2.3:0.0 and other display numbers as well, but still it won't work. Of course I could just use PuTTY to make it work, but I was wondering why the OpenSSH binaries do not work. I have enabled all settings required in both /etc/ssh/ssh_config and /etc/ssh/sshd_config. If I run with the -v option, this is what I get:

        F:\SkyDrive\Projects> ssh -X -v [email protected]
        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to 192.168.2.3 [192.168.2.3] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/Shulhi/.ssh/identity type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_rsa type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4
        debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '192.168.2.3' is known and matches the RSA host key.
        debug1: Found key in /c/Users/Shulhi/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/Shulhi/.ssh/identity
        debug1: Trying private key: /c/Users/Shulhi/.ssh/id_rsa
        debug1: Next authentication method: password
        [email protected]'s password:

    It seems that there is no request for X11 (I'm not sure if there should be one here too). Any pointers why it doesn't work?
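    For reference, my understanding is that the OpenSSH client only requests X11 forwarding when it can see a local display itself, so DISPLAY has to be set on the Windows side before connecting; a minimal sketch, assuming XMing is listening as display 0 and an msys-style shell:

        export DISPLAY=localhost:0.0   # point the ssh client at the local X server first
        ssh -X -v [email protected]      # a working session logs a 'Requesting X11 forwarding' debug line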

    Read the article

  • ffmpeg - join / merge on top of each other

    - by AisIceEyes
    I'm trying to join two videos together, one on top of the other. I already ran these two ffmpeg commands:

        ffmpeg -i 2_Out_of_Control.VOB -aspect 16:9 \
          -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1280:720,boxblur=lp=13" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2(blurred)Out_of_Control.mp4'

        ffmpeg -i 2_Out_of_Control.VOB \
          -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1080:720" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2(clear)Out_of_Control.mp4'

    I'm currently stuck on putting the "clear" version on top of the "blurred" version; I'm not sure how to do that. Can anybody help, please? I've been googling for around 2 days already. I only achieved it by using OpenShot, but I would prefer an ffmpeg command that merges the two videos on top of each other.

    Edit: I want the "clear" video to be at the center, at the top of the "blurred" video.

    Edit 2: the console output would be the same as above:

        ffmpeg -i 2(blurred)Out_of_Control.mp4 \
          -i 2(clear)Out_of_Control.mp4 \
          -aspect 16:9 \
          -vf <just something that will join the two together: the blurred at the bottom, clear at top that is centered> \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2_Out_of_Control_VOB.mp4'

    Edit 3: here is the output when I use ffmpeg -i 2_Out_of_Control.VOB:

        $ ffmpeg -i 2_Out_of_Control.VOB
        ffmpeg version git-2013-10-03-c7fe2a3 Copyright (c) 2000-2013 the FFmpeg developers
          built on Oct 4 2013 05:22:06 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
          configuration: --prefix=/home/username/ffmpeg_build --extra-cflags=-I/home/username/ffmpeg_build/include --extra-ldflags=-L/home/username/ffmpeg_build/lib --bindir=/home/username/bin --extra-libs=-ldl --enable-gpl --enable-libass --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
          libavutil      52. 46.100 / 52. 46.100
          libavcodec     55. 34.100 / 55. 34.100
          libavformat    55. 19.100 / 55. 19.100
          libavdevice    55.  3.100 / 55.  3.100
          libavfilter     3. 88.101 /  3. 88.101
          libswscale      2.  5.100 /  2.  5.100
          libswresample   0. 17.103 /  0. 17.103
          libpostproc    52.  3.100 / 52.  3.100
        Input #0, mpeg, from '2_Out_of_Control.VOB':
          Duration: 00:05:00.01, start: 0.500000, bitrate: 4574 kb/s
            Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, smpte170m), 720x480 [SAR 8:9 DAR 4:3], max. 9334 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
            Stream #0:1[0x80]: Audio: ac3, 48000 Hz, stereo, fltp, 384 kb/s
        At least one output file must be specified
        $
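    For reference, compositing one video on another is a job for the overlay filter via -filter_complex rather than -vf; a minimal, untested sketch using the two intermediate files above (the clear input is already narrower than the blurred one, so only positioning is needed, with (W-w)/2 centering it horizontally and y=0 pinning it to the top):

        ffmpeg -i '2(blurred)Out_of_Control.mp4' -i '2(clear)Out_of_Control.mp4' \
          -filter_complex "[0:v][1:v]overlay=(W-w)/2:0" \
          -c:v libx264 -preset medium -c:a copy \
          '2_Out_of_Control_VOB.mp4'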

    Read the article

  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine and we have decided to buy a new server to support our business (our current server is an Asus EEE Box, working great but too limited :) ). I should mention that we are web developers, but occasionally we do small-office sysadmin work. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g. set up a temporary SQL Server 2008).

    Our requirements:

    Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500€ (before VAT).
    Robustness: we would like to set up a RAID array and maybe have an external disk where we can store backups.
    Virtualization: we need to be able to set up a few servers for development. The scenario is something like this (~8 appliances running in parallel): a Redmine + Git server, a Bacula server, an FTP server, and 3-4 virtual appliances that could be set up on demand to test our applications or support a customer. The appliances could be: LAMP, Tomcat+PostgreSQL, SQL Server.
    Support: if something breaks down it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify:

    Do you suggest buying a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
    What RAID configuration do you suggest? I was thinking of RAID 1 (probably cheaper) or RAID 5. Should we buy a hardware RAID controller or is it OK to use software RAID (mdadm)? If a controller, which one do you suggest?
    What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
    How much RAM? (I was thinking at least 8 GB, ~1 GB per appliance.)
    What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
    What OS do you recommend? We know Ubuntu, Debian, and Gentoo very well (we would like to use Ubuntu Server), however it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server, so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day
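    For reference on the mdadm question, software RAID 1 is only a couple of commands once the disks are partitioned; a minimal sketch, where the device names are assumptions:

        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        cat /proc/mdstat    # watch the initial resync before trusting the array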

    Read the article

  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren't useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use, especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I'll walk through enabling Automatic Code First Migrations on Azure. I'll use the Simple Membership provider for my example.

    First, we'll create a new Azure web site called "migrationstest", including creating a new SQL Database along with it. Next we'll go to the web site and download the publish profile. In the meantime, we've created a new MVC 4 website in Visual Studio 2012 using the "Internet Application" template. This template is automatically configured to use the Simple Membership provider. We'll do our initial publish to Azure by right-clicking our project and selecting "Publish…". From the "Publish Web" dialog, we'll import the publish profile that we downloaded in the previous step.

    Once the site is published, we'll just click the "Register" link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure). One interesting note is that these tables got created with the default Entity Framework initializer, which is to create the database if it doesn't already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn't contain any of the tables from the model.

    At this point, it's time to enable Migrations. We'll open the Package Manager Console and execute the command:

        PM> Enable-Migrations -EnableAutomaticMigrations

    This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations" switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
        }

    We'll now add our initial migration:

        PM> Add-Migration Initial

    This will create a migration class called "Initial" that contains the entire model. But we need to remove all of this code because our database already exists, so we are just left with empty Up() and Down() methods:

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
            }

            public override void Down()
            {
            }
        }

    If we don't remove this code, we'll get an exception the first time we attempt to run migrations that tells us: "There is already an object named 'UserProfile' in the database". This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:

        Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
        new UsersContext().Database.Initialize(false);

    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we're going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published, we can once again navigate to the Register page. At this point the database has not been changed, but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let's add 2 new properties to the UserProfile class, Email and DateOfBirth:

        [Table("UserProfile")]
        public class UserProfile
        {
            [Key]
            [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
            public int UserId { get; set; }
            public string UserName { get; set; }
            public string Email { get; set; }
            public DateTime DateOfBirth { get; set; }
        }

    At this point all we need to do is simply re-publish. We'll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio. Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical? Or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices

    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some action had to be taken. Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, 8b, 8c) have not been introduced. Actually, as the authors admitted, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them.

    I think I'm asking your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?

    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b. I'm about to have a go at: 5b,c. Not yet: 2b,c, 4a, 7c, 8a,b,c. (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)
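    For reference on the make/MATLAB question, the point of 2c is reproducibility: one command that re-runs the whole analysis. Even without GNU make, a plain driver script captures most of that; a minimal sketch, assuming MATLAB is on the PATH, where analysis/main.m and the process function are hypothetical names:

        #!/bin/sh
        # re-run the full pipeline with one command
        matlab -nodisplay -nosplash -r "run('analysis/main.m'); exit"
        # or batch the same step over every raw data file
        for f in data/*.mat; do
          matlab -nodisplay -nosplash -r "process('$f'); exit"
        done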

    Read the article

  • The Winds of Change are a Blowin'

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products, from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now. Not the fan part, but the paying customer part. I'm still a huge fan. I think that SourceGear does a great job with their product, and support has been fantastic when needed (which is not very often). I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through his blogging and books. I still think their products are high quality and do a fantastic job of what they do. But there's the rub: what they do is no longer enough for me.

    As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team. I need better project management tools, and this is where Vault Professional lags behind several other tools. Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more. (Note: I did contact SourceGear about my quest for more, but apparently the rest of their customer base has not been clamoring for this, and so they have not built it. Granted, I wasn't clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don't want to wait for them to build it into their system.)

    Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools. They built a limited integration with Axosoft OnTime, which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum). I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video). I fell in love with the capabilities of OnTime. Unfortunately, the integration with Vault for source control management was, as I mentioned, limited. I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up. So then I did what was previously unthinkable for me: I considered switching not just the work tracking tool, but also the source code management tool. This was really stepping outside my comfort zone, because source code is gold, and not to be trifled with. When you find a good weapon to protect your gold, stick with it.

    I looked at Git and Tortoise SVN, but the integration methods for those were pretty rough compared to what I was used to. The recommended tool from Axosoft's point of view appeared to be RocketSVN, but I really wasn't sure I wanted to go the "flavor of Subversion" route. Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn't afford: Team Foundation Server. And what do you know, Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now. So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again.

    I really went deep on checking out the tools. I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft's campus, and watched a lot of Channel 9 videos on the new ALM features (oooh… shiny). Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction. As an interesting twist of fate, one of my employees crossed paths with an ALM consultant from Northwest Cadence, a local Microsoft partner, and one of the companies that produced several of the webcasts that I had been watching. So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn't find a job so he called himself a consultant. It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: branching strategies and automated builds (that's probably a whole separate blog entry). As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company. This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again coincidentally, we had several that were just about to expire, so I put them to good use.

    The DTDPS sessions were great, and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with. We have just purchased a new server for our TFS rollout and are planning the steps and options right now. This is still a big project ahead of us, to not only install and configure TFS but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time. We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time.

    I would still wholeheartedly endorse SourceGear's products and Axosoft's OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • How to build the Darling project on Ubuntu 13.10?

    - by mirror27
    The Darling project is an open source Darwin/OS X emulation layer for Linux. I downloaded the source code with git and tried to build it with cmake, but it failed. The documentation says I need these packages:

        clang 3.1+
        GCC 4.6+ (yes, you still need GCC for header files)
        libkqueue
        libbsd
        gnustep-base ("Foundation")
        gnustep-gui ("Cocoa")
        gnustep-corebase ("CoreFoundation")
        libobjc2
        libudev
        openssl
        libasound
        libav
        libgc

    but I could not find them in apt or in the Software Center. Also, cmake showed this result:

        No build type selected, default to Debug
        This is a 64-bit build
        Building ObjC ABI 2
        You have called ADD_LIBRARY for library Carbon without any source files. This typically indicates a problem with your CMakeLists.txt file
        You have called ADD_LIBRARY for library AppKit without any source files. This typically indicates a problem with your CMakeLists.txt file
        You have called ADD_LIBRARY for library auto without any source files. This typically indicates a problem with your CMakeLists.txt file
        CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files:
        LIBGNUSTEPCOREBASE_INCLUDE_DIR
          used as include directory in directory /home/mirror/work/darling/darling/src/motool
          used as include directory in directory /home/mirror/work/darling/darling/src/util
          used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o
          used as include directory in directory /home/mirror/work/darling/darling/src/libdyld
          used as include directory in directory /home/mirror/work/darling/darling/src/dyld
          used as include directory in directory /home/mirror/work/darling/darling/src/dyld
          used as include directory in directory /home/mirror/work/darling/darling/src/libSystem
          used as include directory in directory /home/mirror/work/darling/darling/src/libltdl
          used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa
          used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin
          used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation
          used as include directory in directory /home/mirror/work/darling/darling/src/libncurses
          used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity
          used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices
          used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling
          used as include directory in directory /home/mirror/work/darling/darling/src/IOKit
          used as include directory in directory /home/mirror/work/darling/darling/src/Foundation
          used as include directory in directory /home/mirror/work/darling/darling/src/Carbon
          used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo
          used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL
          used as include directory in directory /home/mirror/work/darling/darling/src/thin
          used as include directory in directory /home/mirror/work/darling/darling/src/thin
          used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin
        LIBKQUEUE_INCLUDE_DIR
          (used as include directory in the same list of directories as above)
        LIBOBJC2_INCLUDE_DIR
          (used as include directory in the same list of directories as above)
        Configuring incomplete, errors occurred!

    How can I build the Darling project?
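    For reference, the three NOTFOUND variables correspond to the gnustep-corebase, libkqueue, and libobjc2 headers, so the first step is locating the development packages for your release; a minimal sketch, where the exact package names are assumptions that vary by Ubuntu version:

        apt-cache search libkqueue                   # e.g. libkqueue-dev
        apt-cache search gnustep | grep -i corebase  # find whatever provides gnustep-corebase headers
        sudo apt-get install clang libkqueue-dev libbsd-dev libudev-dev libssl-dev libasound2-dev libgc-dev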

    Read the article

  • Converting Creole to HTML, PDF, DOCX, ..

    - by Marko Apfel
    Challenge

    We documented a project on GitHub with the wiki there. For most articles we used Creole as the markup language. Now we have to deliver a lot of the content to our client in a usual format like PDF or DOCX. So we need an automated way to extract all relevant content, merge it together, and convert the result to a new format.

    Problem

    One of the most popular toolsets for converting between several formats is Pandoc. But unfortunately Pandoc does not support Creole (see the converting matrix).

    Approach

    So we need an intermediate step: converting from Creole to a supported Pandoc format. Creole/c is a Creole-to-Html converter and does exactly what we need. After converting our Creole content to Html, we can use Pandoc for all the subsequent tasks.

    Solution

    Getting the Creole stuff: first of all we need the Creole content on our local machines. This is easy: because the GitHub wiki is itself a Git repository, we can clone it to our machine. In the working copy we now see all the files, and the suffix gives us the hint about the markup language.

    Converting and merging Creole content to Html: because we would like all content from several Creole files in one Html file, we have to convert and merge all the input files into one output file. Creole/c has an option (-b) to generate only the Html stuff below a Html <body> tag. And this is the hook for us to start: we create the preceding Html tags (<html>, <head>, ...) manually, then we merge all needed Creole content into our output file, and finally we add the closing tags. This can be done in a straightforward way with a little bit of old DOS magic:

        REM === Generate the intro tags ===
        ECHO ^<html^> > %TMP%\output.html
        ECHO ^<head^> >> %TMP%\output.html
        ECHO ^<meta name="generator" content="creole/c"^> >> %TMP%\output.html
        ECHO ^</head^> >> %TMP%\output.html
        ECHO ^<body^> >> %TMP%\output.html

        REM === Mix in all interesting Creole stuff with creole/c ===
        .\Creole-C\bin\creole.exe -b .\..\datamodel+overview.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdCaptureMode.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdDamageReducingActivity.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+lookup+IncidentDamageCodes.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+table+Attachments.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+table+TrafficLights.creole >> %TMP%\output.html

        REM === Generate the outro tags ===
        ECHO ^</body^> >> %TMP%\output.html
        ECHO ^</html^> >> %TMP%\output.html

        REM === Convert the Html file to Docx with Pandoc ===
        .\Pandoc\bin\pandoc.exe -o .\Database-Schema.docx %TMP%\output.html

    Some explanation for this: the first ECHO call creates the file; the beginning <html> tag is sent via > to a temporary working file. All following calls append content to the existing file via >>. The tag characters < and > must be escaped; this is done with the caret sign (^). We use a file in the default temporary folder (%TMP%) to avoid writing to our current folders (better for continuous integration). Both toolsets (Creole/c and Pandoc) are copied to a versioned tools folder in the wiki. This is committable and no problem after pushing; GitHub does not do anything with it. In this folder is also the batch file (Export-Docx.bat) for all the steps. Pandoc recognizes the conversion by the suffixes of the file names, so it is enough to specify only the input and output files.
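    For reference, since Pandoc picks the target format from the output suffix, the same merged Html file can be converted to other formats by changing one argument; a minimal sketch (PDF output assumes a LaTeX engine is installed alongside Pandoc):

        pandoc -o Database-Schema.pdf output.html
        pandoc -o Database-Schema.epub output.html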

    Read the article

  • Announcing the New Windows Azure Web Sites Shared Scaling Tier

    - by Clint Edmonson
    Windows Azure Web Sites has added a new pricing tier that solves the #1 blocker for the web development community: the shared tier now supports custom domain names mapped to shared-instance web sites. This post will outline the plan changes and elaborate on how the new pricing model makes Windows Azure Web Sites an even richer option for web development shops of all sizes.

                    Free               Shared                          Reserved
        # of Sites  10                 100                             100
        Egress      165MB/Day          5GB/Month Included              5GB/Month Included
        Storage     1GB                1GB                             10GB
        Throttling  CPU/Memory/Egress  CPU/Memory                      Unlimited
        Price       Free               $.02/hr per site, per instance  $.08/hr per core

    Setting the Stage

    In June, we released the first public preview of Windows Azure Web Sites, which gave web developers a great platform on which to get web sites running using their web development framework of choice. PHP, Node.js, classic ASP, and ASP.NET developers can all utilize the Windows Azure platform to create and launch their web sites. Likewise, these developers have a series of data storage options using Windows Azure SQL Databases, MySQL, or Windows Azure Storage. The Windows Azure Web Sites free offer enabled startups to get their site up and running on Windows Azure with a minimal investment, and with multiple deployment and continuous integration features such as Git, Team Foundation Services, FTP, and Web Deploy. The response to the Windows Azure Web Sites offer has been overwhelmingly positive. Since the addition of the service on June 12th, tens of thousands of web sites have been deployed to Windows Azure, and the volume of adoption is increasing every week.

    Preview Feedback

    In spite of the growth and success of the product, the community has had questions about features lacking in the free preview offer. The main question web developers asked regarding Windows Azure Web Sites related to the lack of the free offer's support for domain name mapping. During the preview launch period, customer feedback made it obvious that the lack of domain name mapping support was an area of concern. We're happy to announce that this #1 request has been delivered as a feature of the new shared plan.

    New Shared Tier Portal Features

    In the screenshot below, the "Scale" tab in the portal shows the new tiers (Free, Shared, and Reserved) and gives the user the ability to quickly move any of their free web sites into the shared tier. With a single mouse click, the user can move their site into the shared tier. Once a site has been moved into the shared tier, a new Manage Domains button appears in the bottom action bar of the Windows Azure Portal, giving site owners the ability to manage their domain names for a shared site. This button brings up the domain-management dialog, which can be used to enter a specific domain name that will be mapped to the Windows Azure Web Site.

    Shared Tier Benefits

    Startups and large web agencies will both benefit from this plan change. Here are a few examples of scenarios which fit the new pricing model: startups no longer have to select the reserved plan to map domain names to their sites; instead, they can use the free option to develop their sites and choose on a site-by-site basis which sites they elect to move into the shared plan, paying only for the sites that are finished and ready to be domain-mapped. Agencies who manage dozens of sites will realize a lower cost of ownership over the long term by moving their sites into reserved mode. Once multi-site companies reach a certain price point in the shared tier, it is much more cost-effective to move sites to a reserved tier.

    Long-term, it's easy to see how the new Windows Azure Web Sites shared pricing tier makes Windows Azure Web Sites a great choice for both startups and agency customers, as it enables rapid growth and upgrades while keeping costs to a minimum. Large agencies will be able to have all of their sites in their own instances, and startups will have the capability to scale up to multiple shared instances for minimal cost and eventually move to reserved instances without worrying about the need to incur continually additional costs. Customers can feel confident they have the power of the Microsoft Windows Azure brand and our world-class support, at prices competitive in the market. Plus, in addition to realizing the cost savings, they'll have the whole family of Windows Azure features available.

    Continuous Deployment from GitHub and CodePlex

    Along with this announcement are two other exciting new features. I'm proud to announce that web developers can now publish their web sites directly from CodePlex or GitHub.com repositories. Once connections are established between these services and your web sites, Windows Azure will automatically be notified every time a check-in occurs. This will then trigger Windows Azure to pull the source and compile/deploy the new version of your app to your web site automatically. Walk-through videos on how to perform these functions are below: Publishing to an Azure Web Site from CodePlex; Publishing to an Azure Web Site from GitHub.com.

    These changes, as well as the enhancements to the reserved plan model, make Windows Azure Web Sites a truly competitive hosting option. It's never been easier or cheaper for a web developer to get up and running. Check out the free Windows Azure web site offering and see for yourself. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted

    Read the article

  • Ports do not open after rules appended in iptables

    - by user2699451
    I have a server that I am trying to set up for OpenVPN. I have followed all the steps, but I see that when I try to connect to it in Windows it doesn't allow me; it just hangs on "connecting". So I did an nmap scan and I saw that port 1194 is not open, so naturally I appended a rule to open 1194 with:

        iptables -A INPUT -i eth0 -p tcp --dport 1194 -j ACCEPT

    followed by service iptables save and service iptables restart, which both executed successfully. Then I tried again, but it doesn't work, and another nmap scan says that port 1194 is closed. Here is the iptables configuration (some lines were truncated with a trailing "$" when copied from the terminal):

        # Generated by iptables-save v1.4.7 on Thu Oct 31 09:47:38 2013
        *nat
        :PREROUTING ACCEPT [27410:3091993]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [5042:376160]
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        -A POSTROUTING -j SNAT --to-source 41.185.26.238
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Thu Oct 31 09:47:38 2013
        # Generated by iptables-save v1.4.7 on Thu Oct 31 09:47:38 2013
        *filter
        :INPUT ACCEPT [23571:2869068]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [27558:3656524]
        :vl - [0:0]
        -A INPUT -p tcp -m tcp --dport 5252 -m comment --comment "SSH Secure" -j ACCEPT
        -A INPUT -p icmp -m icmp --icmp-type 8 -m state --state NEW,RELATED,ESTABLISHED -$
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -m comment --comment "SSH" -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -m comment --comment "HTTP" -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8080 -m comment --comment "HTTPS" -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -m comment --comment "HTTP Encrypted" -j ACCEP$
        -A INPUT -i eth0 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A INPUT -i eth0 -p gre -j ACCEPT
        -A INPUT -p udp -m udp --dport 1194 -j ACCEPT
        -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        -A FORWARD -i eth0 -o ppp+ -j ACCEPT
        -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 10.8.0.0/24 -j ACCEPT
        -A FORWARD -j REJECT --reject-with icmp-port-unreachable
        -A OUTPUT -p icmp -m icmp --icmp-type 0 -m state --state RELATED,ESTABLISHED -j A$
        COMMIT
        # Completed on Thu Oct 31 09:47:38 2013

    And my nmap scans. From localhost:

        $ nmap localhost
        Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-31 09:53 SAST
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.000011s latency).
        Other addresses for localhost (not scanned): 127.0.0.1
        Not shown: 996 closed ports
        PORT     STATE SERVICE
        22/tcp   open  ssh
        25/tcp   open  smtp
        443/tcp  open  https
        1723/tcp open  pptp
        Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds

    From a remote PC:

        $ nmap [server ip]
        Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-31 09:53 SAST
        Nmap scan report for rla04-nix1.wadns.net (41.185.26.238)
        Host is up (0.025s latency).
        Not shown: 858 filtered ports, 139 closed ports
        PORT     STATE SERVICE
        22/tcp   open  ssh
        443/tcp  open  https
        8008/tcp open  http
        Nmap done: 1 IP address (1 host up) scanned in 15.70 seconds

    So, I do not know what is causing this; any assistance will be appreciated!

    UPDATE AFTER FIRST ANSWER:

        [root@RLA04-NIX1 ~]# iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        [root@RLA04-NIX1 ~]# iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
        [root@RLA04-NIX1 ~]# iptables -A FORWARD -j REJECT
        [root@RLA04-NIX1 ~]# iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        [root@RLA04-NIX1 ~]# service iptables save
        iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
        [root@RLA04-NIX1 ~]# service iptables restart
        iptables: Flushing firewall rules: [  OK  ]
        iptables: Setting chains to policy ACCEPT: filter nat [  OK  ]
        iptables: Unloading modules: [  OK  ]
        iptables: Applying firewall rules: [  OK  ]
        [root@RLA04-NIX1 ~]# lsof -i :1194
        -bash: lsof: command not found

    iptables -L -n now shows:

        Chain INPUT (policy ACCEPT)
        target     prot opt source        destination
        ACCEPT     tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:5252 /* SSH Secure */
        ACCEPT     icmp --  0.0.0.0/0     0.0.0.0/0    icmp type 8 state NEW,RELATED,ESTABLISHED
        ACCEPT     all  --  0.0.0.0/0     0.0.0.0/0
        ACCEPT     all  --  0.0.0.0/0     0.0.0.0/0    state RELATED,ESTABLISHED
        ACCEPT     tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:22 /* SSH */
        ACCEPT     tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:80 /* HTTP */
        ACCEPT     tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:8080 /* HTTPS */
        ACCEPT     tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:443 /* HTTP Encrypted */
        ACCEPT     tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:1723
        ACCEPT     47   --  0.0.0.0/0     0.0.0.0/0
        ACCEPT     udp  --  0.0.0.0/0     0.0.0.0/0    udp dpt:1194

        Chain FORWARD (policy ACCEPT)
        target     prot opt source        destination
        ACCEPT     all  --  0.0.0.0/0     0.0.0.0/0
        ACCEPT     all  --  0.0.0.0/0     0.0.0.0/0
        ACCEPT     all  --  0.0.0.0/0     0.0.0.0/0    state RELATED,ESTABLISHED
        ACCEPT     all  --  10.8.0.0/24   0.0.0.0/0
        REJECT     all  --  0.0.0.0/0     0.0.0.0/0    reject-with icmp-port-unreachable
        ACCEPT     all  --  0.0.0.0/0     0.0.0.0/0    state RELATED,ESTABLISHED
        ACCEPT     all  --  10.8.0.0/24   0.0.0.0/0
        REJECT     all  --  0.0.0.0/0     0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source        destination
        ACCEPT     icmp --  0.0.0.0/0     0.0.0.0/0    icmp type 0 state RELATED,ESTABLISHED

        Chain vl (0 references)
        target     prot opt source        destination

    New scans. From localhost:

        [root@RLA04-NIX1 ~]# nmap localhostt
        Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-31 11:13 SAST

    From a remote PC:

        $ nmap [server ip]
        Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-31 11:11 SAST
        Nmap scan report for rla04-nix1.wadns.net (41.185.26.238)
        Host is up (0.020s latency).
        Not shown: 858 filtered ports, 139 closed ports
        PORT     STATE SERVICE
        22/tcp   open  ssh
        443/tcp  open  https
        8008/tcp open  http
        Nmap done: 1 IP address (1 host up) scanned in 4.18 seconds

    From localhost:

        $ nmap localhost
        Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-31 11:13 SAST
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.000011s latency).
        Other addresses for localhost (not scanned): 127.0.0.1
        Not shown: 996 closed ports
        PORT     STATE SERVICE
        22/tcp   open  ssh
        25/tcp   open  smtp
        443/tcp  open  https
        1723/tcp open  pptp
        Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds

    UPDATE AFTER SCANNING UDP PORTS

    Sorry, I am a noob, I am still learning, but here is the output for nmap -sU [server ip]:

        Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-31 11:33 SAST
        Nmap scan report for [server address] ([server ip])
        Host is up (0.021s latency).
        Not shown: 997 open|filtered ports
        PORT      STATE  SERVICE
        53/udp    closed domain
        123/udp   closed ntp
        33459/udp closed unknown
        Nmap done: 1 IP address (1 host up) scanned in 8.57 seconds

    BTW, no changes have been made since the post started (except for the iptables changes).
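    For reference, OpenVPN usually listens on UDP 1194 (which matches the "-p udp --dport 1194" rule above), and a default nmap run only probes TCP ports, so a TCP scan saying 1194 is closed proves nothing about the VPN; a targeted probe might look like this:

        nmap -sU -p 1194 41.185.26.238          # check just the OpenVPN UDP port
        iptables -L INPUT -n -v | grep 1194     # confirm the rule exists and is matching packets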

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and instant deployments and instant scale for .NET? A while back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages, although some changes would have been needed for it to serve the entire community long term. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next?

    Heroku

    Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn't sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your Rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this "cloud computing" or PaaS (Platform as a Service).

    I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don't count Windows Azure, mostly because it is not simple and I don't believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language's specific needs (solution stack). So I tucked the idea in the back of my head and moved on.

    AppHarbor Enters The Scene

    I'm not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn't actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to appharbor. From there they build your code, run any unit tests you have and deploy it if everything succeeds. The dashboard is a simple and elegant UI for getting things done.

    The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder and, after playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku.

    Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it.

    From Zero To Deployed in 15 Minutes (Or Less)

    Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been From Zero to Deployed in less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that's the challenge: beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize.
    Ground rules:
    - .NET application with a valid database connection
    - Start from zero
    - Deployed with AppHarbor or an alternative
    - A timer displayed in the video that runs during the entire process
    - Video response published on YouTube or an acceptable alternative
    - Video(s) must be published by March 15th at 11:59PM CST
    Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th).

    From Zero To Deployed In 15 Minutes (Or Less) Part 1
    From Zero To Deployed In 15 Minutes (Or Less) Part 2
    From Zero To Deployed In 15 Minutes (Or Less) Part 3
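    For anyone curious what "as simple as a git push" looks like in practice, here is a minimal sketch. The remote URL is a placeholder (AppHarbor shows the real one when you create an application), but the flow is just adding a remote and pushing:

        # add the remote AppHarbor gives you for your application (placeholder URL)
        git remote add appharbor https://user@appharbor.com/myapp.git

        # push; AppHarbor builds the code, runs any unit tests, and deploys on success
        git push appharbor master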

    Read the article

  • Dealing with "I-am-cool-and-you-are-dumb" manager [closed]

    - by Software Guy
    I have been working with a software company for about 6 months now. I like the projects I work on there and I really like all the people there except for one guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't want to care about much), but things get tricky when he is your manager. In general I am okay, but there are times when I feel I am not being treated fairly:

    He doesn't give much thought to his own mistakes, but when I do something similar, he is super critical. Recently he went as far as to say "I am not sure if I can trust you with this feature". The details of this specific case are these: I was working on a feature, I was already a couple of hours over my normal working hours, and I decided to stop and continue tomorrow. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all changes to the central repo (in case my hard drive crashes). So I pushed the change, and the ticket was marked as "to be tested". The next day I came in, he sat next to me, started complaining, and said what I posted above. I really didn't know what to say; I tried to explain to him that the ticket was still being worked on, but he didn't seem to listen.

    He interrupts me in between while I am coding, which I do not mind, but when I do the same, his face turns like this :| and he reacts as if his work were super important and I were just wasting his time. He asks me to accumulate all my questions and then ask them all together, which is not always possible, as you sometimes need a clarification before you can continue on a feature implementation. And when I am coding, he talks on the phone with his customers right next to me (when he could go to the meeting room with his laptop) and doesn't care.

    He made me switch to a whole new IDE (from NetBeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in NetBeans as well!). I didn't make a big deal out of it, as I am equally comfortable working with the new IDE, but I couldn't see the science behind his obsession. He said the feature makes sure that if any method is updated by a programmer, the IDE turns the method name red in the places where it is used. I told him that I do not have a problem, since I always search for a method's usages in the project and make sure they are updated; IDEs even have refactoring features for exactly that, but...

    I recently implemented a feature for a project, and I was happy with it, so, considering him a senior, I asked him for his comments on the implementation quality. He thought long and hard, made a few funny faces, and when he couldn't find anything, he said "ummm, your program will crash if JS is disabled" - he was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that, and then he said "oh okay". BUT, the funny thing is, a few days back he implemented something, I objected with "but that would not run if JS is disabled", and his response was "we don't have to care about people who disable JS". :-/

    Once he asked me to investigate whether there was a way to modify a CMS-generated menu programmatically by extending the CMS. I did my research and told him that the only way is to inject a menu item using JavaScript / jQuery, and his reaction was "ah, that's ugly and hacky, not acceptable", and two days later I saw that feature implemented in exactly the way I had suggested.
    The point is, his reaction was not respectful at all. Even if what I proposed was hacky, he should be respectful and accept that I know what's hacky, and that if I am suggesting something hacky, there must be a reason for it. There are plenty of other reasons and examples where I feel I am not being treated fairly. I want your advice as to what I am doing wrong and how to deal with such a situation. The other guys in the team are actually very good people, and I do not want to leave the job either (although I could, if I wanted to). All I want is respect and equal treatment. I have thought about talking to this guy in a face-to-face meeting, but I worry that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the kind of guy who thinks he can be wrong too). I am also considering talking to the other co-founder, but I am not sure how he will take it (as the two founders have been friends forever). Thanks for reading the long message; I really appreciate your help.

    Read the article

  • OpenVPN and PPTP on XEN VPS

    - by amiv
    I have a Debian-based system (Ubuntu 11.10) on a XEN VPS. I've installed OpenVPN and it works great. I need to install PPTP too, so I did, and clients can connect, but they have no internet on the client side. If I connect to the VPN over PPTP I can ping and access only my VPS by its IP, but only that; there is no "internet" on the client side. It doesn't look like a DNS problem (I'm using 8.8.8.8) because I can't ping known IPs either. I bet the solution is simple, but I don't have any idea. Any guess?

    /etc/pptpd.conf:

        option /etc/ppp/pptpd-options
        logwtmp
        localip 46.38.xx.xx
        remoteip 10.1.0.1-10

    /etc/ppp/pptpd-options:

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        ms-dns 8.8.8.8
        ms-dns 8.8.4.4
        proxyarp
        nodefaultroute
        lock
        nobsdcomp

    /etc/ppp/ip-up:

        [...]
        ifconfig ppp0 mtu 1400

    /etc/sysctl.conf:

        [...]
        net.ipv4.ip_forward=1

    Command which I ran (46.38.xx.xx is the IP of my VPS):

        iptables -t nat -A POSTROUTING -j SNAT --to-source 46.38.xx.xx

    The client can connect; the first one gets IP 10.1.0.1 and DNS from Google. I bet it's an iptables problem, am I right? I'm an iptables noob and I don't have an idea what's wrong.

    Here are the ifconfig and route outputs before a client connects via PPTP:

        root@vps3780:~# route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         xx.xx.tel.ru    0.0.0.0         UG    100    0        0 eth0
        10.8.0.0        10.8.0.2        255.255.255.0   UG    0      0        0 tun0
        10.8.0.2        *               255.255.255.255 UH    0      0        0 tun0
        46.38.xx.0      *               255.255.255.0   U     0      0        0 eth0

        root@vps3780:~# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:16:3e:56:xx:xx
                  inet addr:46.38.xx.xx  Bcast:0.0.0.0  Mask:255.255.255.0
                  inet6 addr: fe80::216:xx:xx:dfb6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:22671 errors:0 dropped:81 overruns:0 frame:0
                  TX packets:2266 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1813358 (1.8 MB)  TX bytes:667626 (667.6 KB)
                  Interrupt:24

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:100 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:10778 (10.7 KB)  TX bytes:10778 (10.7 KB)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:602 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:612 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:90850 (90.8 KB)  TX bytes:418904 (418.9 KB)

    And here they are after a client connects via PPTP:

        root@vps3780:~# route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         xx.xx.tel.ru    0.0.0.0         UG    100    0        0 eth0
        10.1.0.1        *               255.255.255.255 UH    0      0        0 ppp0
        10.8.0.0        10.8.0.2        255.255.255.0   UG    0      0        0 tun0
        10.8.0.2        *               255.255.255.255 UH    0      0        0 tun0
        46.38.xx.0      *               255.255.255.0   U     0      0        0 eth0

        root@vps3780:~# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:16:3e:56:xx:xx
                  inet addr:46.38.xx.xx  Bcast:0.0.0.0  Mask:255.255.255.0
                  inet6 addr: fe80::216:xx:xx:dfb6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:22989 errors:0 dropped:82 overruns:0 frame:0
                  TX packets:2352 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1841310 (1.8 MB)  TX bytes:678456 (678.4 KB)
                  Interrupt:24

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:112 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:112 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:12102 (12.1 KB)  TX bytes:12102 (12.1 KB)

        ppp0      Link encap:Point-to-Point Protocol
                  inet addr:46.38.xx.xx  P-t-P:10.1.0.1  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1400  Metric:1
                  RX packets:66 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:3
                  RX bytes:10028 (10.0 KB)  TX bytes:660 (660.0 B)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:602 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:612 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:90850 (90.8 KB)  TX bytes:418904 (418.9 KB)

    And the (ugly) iptables --list output:

        root@vps3780:~# iptables --list
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
        ACCEPT     all  --  10.8.0.0/24          anywhere
        REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
        ACCEPT     all  --  10.1.0.0/24          anywhere
        ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
        ACCEPT     all  --  10.1.0.0/24          anywhere
        REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
        ACCEPT     all  --  10.8.0.0/24          anywhere
        REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    And the (ugly) iptables -t nat -L output:

        root@vps3780:~# iptables -t nat -L
        Chain PREROUTING (policy ACCEPT)
        target     prot opt source               destination

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source               destination
        SNAT       all  --  10.8.0.0/24          anywhere             to:46.38.xx.xx
        MASQUERADE all  --  10.1.0.0/24          anywhere
        SNAT       all  --  10.1.0.0/24          anywhere             to:46.38.xx.xx
        SNAT       all  --  10.8.0.0/24          anywhere             to:46.38.xx.xx
        SNAT       all  --  10.1.0.0/24          anywhere             to:46.38.xx.xx
        MASQUERADE all  --  anywhere             anywhere
        SNAT       all  --  anywhere             anywhere             to:46.38.xx.xx
        SNAT       all  --  10.8.0.0/24          anywhere             to:46.38.xx.xx
        MASQUERADE all  --  anywhere             anywhere
        MASQUERADE all  --  10.1.0.0/24          anywhere
        MASQUERADE all  --  anywhere             anywhere
        MASQUERADE all  --  10.1.0.0/24          anywhere

    As I said, OpenVPN works very well: 10.8.0.0/24 is for OpenVPN (on tun0). PPTP won't work: 10.1.0.0/24 is for PPTP (on ppp0). Clients can connect, but they have no "internet". Any suggestions will be appreciated; this is the second whole day of fighting with no results.

    EDIT: iptables -t filter -F resolved my problem :-)
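    A note on why the fix worked: flushing the filter table (iptables -t filter -F) removed the duplicated catch-all REJECT rules that sat in the FORWARD chain ahead of the later 10.1.0.0/24 ACCEPTs, so PPTP traffic was being rejected before it could be forwarded. A narrower sketch that keeps some firewalling in place (an outline only, not tested on this box; iptables -D removes one matching rule per call, so repeat it until all duplicate REJECTs are gone) would reorder instead of flushing:

        # remove the catch-all REJECTs that precede the PPTP subnet's ACCEPT
        iptables -D FORWARD -j REJECT --reject-with icmp-port-unreachable

        # accept the PPTP subnet, then re-add a single trailing REJECT
        iptables -A FORWARD -s 10.1.0.0/24 -j ACCEPT
        iptables -A FORWARD -d 10.1.0.0/24 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -j REJECT --reject-with icmp-port-unreachable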

    Read the article

  • Ubuntu cannot access internet, LAN is fine

    - by Kevin Southworth
    I have an Ubuntu 8.04 LTS server that is directly connected to our Comcast Business Gateway modem, and I have configured it with 1 of our 5 allotted static IPs. My other machines on the LAN can connect to this server (via ssh, web, ping, etc.), but I cannot access this server from outside our network, and this machine cannot get out to the internet either (ping google.com fails with "unknown host").

    Here is my /etc/network/interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 173.162.54.19
            netmask 255.255.255.248
            broadcast 173.162.54.23
            gateway 173.162.54.22

    and my /etc/resolv.conf:

        nameserver 68.87.77.130
        nameserver 68.87.72.130

    Output from sudo route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        173.162.54.16   0.0.0.0         255.255.255.248 U     0      0        0 eth0
        0.0.0.0         173.162.54.22   0.0.0.0         UG    100    0        0 eth0

    I have a Windows 2008 machine with an almost identical static IP / static DNS setup and it works correctly: I can access it from within the LAN and also from the public internet. The Windows machine and the Ubuntu machine are both directly connected to the Comcast Business Gateway. I have tried rebooting Ubuntu and rebooting my Comcast modem, but nothing seems to make it work. I'm an Ubuntu noob; is there some other config I need to apply to make this work?

    UPDATE: Yes, I am able to ping my default gateway 173.162.54.22. Output of iptables --list -n:

        Chain INPUT (policy DROP)
        target     prot opt source               destination
        ufw-before-input   all  --  0.0.0.0/0            0.0.0.0/0
        ufw-after-input    all  --  0.0.0.0/0            0.0.0.0/0

        Chain FORWARD (policy DROP)
        target     prot opt source               destination
        ufw-before-forward all  --  0.0.0.0/0            0.0.0.0/0
        ufw-after-forward  all  --  0.0.0.0/0            0.0.0.0/0

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        ufw-before-output  all  --  0.0.0.0/0            0.0.0.0/0
        ufw-after-output   all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-after-forward (1 references)
        target     prot opt source               destination
        LOG        all  --  0.0.0.0/0            0.0.0.0/0            limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK FORWARD]: '
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-after-input (1 references)
        target     prot opt source               destination
        RETURN     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:137
        RETURN     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:138
        RETURN     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:139
        RETURN     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:445
        RETURN     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:67
        RETURN     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:68
        LOG        all  --  0.0.0.0/0            0.0.0.0/0            limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK INPUT]: '
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-after-output (1 references)
        target     prot opt source               destination
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-before-forward (1 references)
        target     prot opt source               destination
        ufw-user-forward  all  --  0.0.0.0/0            0.0.0.0/0
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-before-input (1 references)
        target     prot opt source               destination
        ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
        ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
        DROP       all  --  0.0.0.0/0            0.0.0.0/0            ctstate INVALID
        ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 3
        ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 4
        ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 11
        ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 12
        ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 8
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp spt:67 dpt:68
        ufw-not-local  all  --  0.0.0.0/0            0.0.0.0/0
        ACCEPT     all  --  224.0.0.0/4          0.0.0.0/0
        ACCEPT     all  --  0.0.0.0/0            224.0.0.0/4
        ufw-user-input  all  --  0.0.0.0/0            0.0.0.0/0
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-before-output (1 references)
        target     prot opt source               destination
        ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW,RELATED,ESTABLISHED
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW,RELATED,ESTABLISHED
        ufw-user-output  all  --  0.0.0.0/0            0.0.0.0/0
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-not-local (1 references)
        target     prot opt source               destination
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type MULTICAST
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type BROADCAST
        LOG        all  --  0.0.0.0/0            0.0.0.0/0            limit: avg 3/min burst 10 LOG flags 0 level 4 prefix `[UFW BLOCK NOT-TO-ME]: '
        DROP       all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-user-forward (1 references)
        target     prot opt source               destination
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-user-input (1 references)
        target     prot opt source               destination
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:80
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:22
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

        Chain ufw-user-output (1 references)
        target     prot opt source               destination
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0
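    Since the gateway pings but hostnames fail, one quick way to separate a routing problem from a DNS problem (purely diagnostic; these commands change nothing) is:

        # test raw IP connectivity first, bypassing DNS entirely
        ping -c 3 8.8.8.8

        # if that fails, see where packets stop on the way out
        traceroute -n 8.8.8.8

        # if raw IP works but names don't, query a configured resolver directly
        nslookup google.com 68.87.77.130

    If pinging a raw IP dies at the modem hop, the static-IP routing on the Comcast gateway is the likelier suspect; if it succeeds but the nslookup times out, the resolver entries are.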

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:

        "Display.c", line 65: Function has no return statement : x_io_error_handler
        "hostx.c", line 341: Function has no return statement : x_io_error_handler

    from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but which called exit() instead of returning, and so hadn't listed a return value. These had been generating warnings for years, which we'd been ignoring, but X.Org has lately made enough progress in cleaning up code for compiler warnings and static analysis issues that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so these warnings became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out.

    Investigation showed this was due to the Solaris headers, which during Solaris 10 development gained a number of annotations when gcc was being used for the amd64 kernel bringup, before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.

    To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out, as that introduced unreachable-code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable, for Studio 12.0 and later compilers, the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so they may change in the future.

    Actually getting this integrated into Solaris took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, showed a large number of differences in the resulting binaries, due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle that it would get plenty of test exposure before the release.

    It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution.
    For instance:

        #include <stdlib.h>

        int callback(int status) {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler and lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function.

    If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers and code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors and warning messages.
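    To reproduce this class of error on your own code, the flags quoted in the post can be applied directly. A minimal sketch, assuming the example callback above lives in callback.c and both compilers are on PATH:

        # gcc: promote missing-return warnings to errors
        gcc -c -Werror=return-type callback.c

        # Solaris Studio: the equivalent error flag
        cc -c -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT callback.c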

    Read the article

  • Windows Server 2008 R2 + IIS 7.5 + ASP.NET 4.0 = HTTP Error 500.0

    - by Dave
    I am having an impossible time getting ASP.NET 4.0 to work in any fashion at all. In fact, I completely wiped my server, reinstalled with Server 2008 R2 Standard (running on a VMWare ESXi box, not that it should matter), and cannot even get a test .aspx page to work. Here is exactly what I did:

    1. Installed 2008 R2 Standard
    2. Activated Windows and enabled Remote Desktop
    3. Installed the Web Server role with the necessary role services (common HTTP, ASP.NET, logging, tracing, management service and FTP)
    4. Enabled the management service
    5. Installed .NET Framework 4.0 via the web executable
    6. Added FTP publishing to the default web site
    7. Switched the default web site's application pool to ASP.NET 4.0 (Integrated)
    8. Added a test.aspx file to the inetpub\wwwroot folder (contents below)
    9. Opened a browser to http://localhost/test.aspx and received a 500.0 error (also below)

    What am I missing? I haven't touched IIS in a while (3+ years), so it could be something stupid/trivial. Please point it out, call me a noob; my ego can take it. Thanks, Dave

    test.aspx:

        <%@ Page language="C#" %>
        <html>
        <head>
            <title>Test.aspx</title>
        </head>
        <body>
            <asp:label runat="server" text="This is an asp.net 4.0 label" />
        </body>
        </html>

    Error page:

        Module          AspNetInitClrHostFailureModule
        Notification    BeginRequest
        Handler         PageHandlerFactory-Integrated-4.0
        Error Code      0x80070002
        Requested URL   http://localhost:80/test.aspx
        Physical Path   C:\inetpub\wwwroot\test.aspx
        Logon Method    Not yet determined
        Logon User      Not yet determined

    And in my trace file I get:

        96. view trace  Warning - SET_RESPONSE_ERROR_DESCRIPTION
            ErrorDescription: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.
        97. view trace  Warning - MODULE_SET_RESPONSE_ERROR_STATUS
            ModuleName: AspNetInitClrHostFailureModule
            Notification: 1
            HttpStatus: 500
            HttpReason: Internal Server Error
            HttpSubStatus: 0
            ErrorCode: 2147942402
            ConfigExceptionInfo
            Notification: BEGIN_REQUEST
            ErrorCode: The system cannot find the file specified. (0x80070002)

    The application event log shows:

        Log Name:      Application
        Source:        Microsoft-Windows-IIS-W3SVC-WP
        Date:          5/28/2010 2:08:10 PM
        Event ID:      2299
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      win-ltfkdo1dnfp
        Description:
        An application has reported as being unhealthy. The worker process will now request a recycle. Reason given: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. The data is the error.

        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-IIS-W3SVC-WP" Guid="{670080D9-742A-4187-8D16-41143D1290BD}" EventSourceName="W3SVC-WP" />
            <EventID Qualifiers="49152">2299</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-05-28T21:08:10.000000000Z" />
            <EventRecordID>1663</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>Application</Channel>
            <Computer>win-ltfkdo1dnfp</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="Reason">An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.</Data>
            <Binary>02000780</Binary>
          </EventData>
        </Event>
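    For what it's worth, AspNetInitClrHostFailureModule together with 0x80070002 ("the system cannot find the file specified") is often a symptom of ASP.NET 4.0 not being registered with IIS. A commonly suggested first step, not guaranteed to be the fix in this particular case, is to re-register it from an elevated command prompt (assuming the default 64-bit framework path):

        cd %windir%\Microsoft.NET\Framework64\v4.0.30319
        aspnet_regiis.exe -i
        iisreset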

    Read the article

  • Comments show up in database, but only show up on my index page after a refresh.

    - by Truong
    Hi, I have AJAX, PHP, jQuery, and MySQL in this very simple website I'm trying to make. All there is is a textarea that sends data to the database and uses AJAX/jQuery to display that data on the index page. For some reason, though, I press submit and the data goes to the database, but I have to refresh the page myself to see that data on the page. I'm assuming the problem has to do with my AJAX/jQuery or even some mistake in the index. Also, when I type text into the textarea and press submit, the text remains in the textarea until I refresh the page. Haha, sorry if this is such a noob question... I'm trying to learn. Thanks so much.

    Here is the AJAX:

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
        <script type="text/javascript">
        $(function() {
            $(".submit").click(function() {
                var comment = $("#comment").val();
                var post_id = $("#post").val();
                var dataString = '&comment=' + comment
                if (comment == '') {
                    alert('Fill something in please!');
                } else {
                    $("#flash").show();
                    $("#flash").fadeIn(400).html('<img src="noworries.jpg" /> ');
                    $.ajax({
                        type: "POST",
                        url: "commentajax.php",
                        data: dataString,
                        cache: false,
                        success: function(html) {
                            $("ol#update").append(html);
                            $("ol#update li:last").fadeIn("slow");
                            $("#flash").hide();
                        }
                    });
                }
                return false;
            });
        });
        </script>

    Here is the index/form area:

        <body>
        <div id="container"><img src="banner.jpg" width="890" height="150" alt="title" /></div>
        <id="update" class="timeline">
        <div id="flash"></div>
        <div id="container">
            <form action="#" method="post">
                <textarea name="comment" id="comment" cols="35" rows="4"></textarea><br />
                <input name="submit" type="submit" class="submit" id="submit" value=" Submit Comment " /><br />
            </form>
        </div>
        <id="update" class="timeline">
        <?php
        include('config.php');
        //$post_id value comes from the POSTS table
        $prefix="I'm happy when";
        $sql=mysql_query("select * from comments order by com_id desc");
        while($row=mysql_fetch_array($sql))
        {
            $comment=$row['com_dis'];
        ?>
        <!--Displaying comments-->
        <div id="container">
            <class="box">
            <?php echo "$prefix $comment"; ?>
        </div>
        <?php } ?>

    Here is my commentajax.php:

        <?php
        include('config.php');
        if($_POST)
        {
            $comment=$_POST['comment'];
            $comment=mysql_real_escape_string($comment);
            mysql_query("INSERT INTO comments(com_id,com_dis) VALUES ('NULL', '$comment')");
        }
        ?>
        <li class="box"><br />
        <?php echo $comment; ?>
        </li>

    I'm sorry for so much code, but I just started learning this four days ago and this is probably one of the last bugs until the website is functional.
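    One way to narrow down where this breaks (a debugging suggestion, not part of the original question): the PHP endpoint can be exercised on its own with curl, independent of the jQuery. If the command below returns the expected <li class="box"> markup, the server side is fine and the problem is in the page's markup or selectors; note that the jQuery selector ol#update can only ever match an actual <ol id="update"> element, which the index markup above does not contain.

        # post a test comment straight at the endpoint and inspect the HTML it returns
        curl -d "comment=hello+world" http://localhost/commentajax.php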

    Read the article

  • Installing Rails, MySQL, etc. everything goes wrong

    - by Rits
    I've been struggling with this for a few hours. Everything just stopped working and I can't get it to work anymore. I'm a noob at Ruby, Ruby on Rails and the Terminal in general. This is really frustrating me, so I'll describe my problem in as much detail as possible, hoping someone can give me a solution. I'm on Mac OS X Snow Leopard. I couldn't get Rails working at all just now ("Could not find gem 'rails'" headaches), but after some reinstall attempts it suddenly worked again. Now I just can't get MySQL to work, and it sometimes even breaks the Rails installation again. This is what I do:

        sudo gem uninstall rails
        sudo gem uninstall mysql
        sudo gem uninstall mysql2

    After these commands, I check the installed gems with gem list. No MySQL gem is listed anymore, but I can still see rails (2.3.5, 2.2.2, 1.2.6). Is this normal? Does this mean I have 3 Rails installations? It doesn't make sense to me. Anyway, then I do this:

        sudo gem clean

    which fails completely. I get a bunch of errors like this:

        Attempting to uninstall fcgi-0.8.7
        Unable to uninstall fcgi-0.8.7:
        Gem::InstallError: cannot uninstall, check `gem list -d fcgi`

    It doesn't uninstall anything. At this point, I try to install everything again. I start with:

        sudo gem install rails

    which succeeds (I think):

        Successfully installed rails-3.0.3
        Successfully installed builder-2.1.2
        2 gems installed
        Installing ri documentation for rails-3.0.3...
        File not found: lib

    Then I update RubyGems:

        sudo gem update --system
        sudo gem install rubygems-update
        sudo update_rubygems

    Then it says I have 1.3.7 installed, so that succeeded, I think. So now I proceed with installing MySQL. I already have MySQL 5.5.8 installed on my machine. I did some research about installing MySQL on Snow Leopard, and it seems I have to use this command:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    I get a bunch of errors like this:

        No definition for time_set_neg
        No definition for time_set_second_part
        No definition for time_equal
        No definition for error_errno

    At this point, I assume I have both Rails and the MySQL gem installed, so I try to start a new project:

        rails new user_group -d mysql

    It works! Rails is installed correctly. Now I try generating a model:

        cd user_group
        rails generate model User

    It fails with this error:

        Could not find gem 'mysql2 (>= 0, runtime)' in any of the gem sources listed in your Gemfile.
        Try running bundle install.

    So I run bundle install, which installs a lot of gems. Then I try to generate my model again. I get this error:

        /Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle: dlopen(/Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle, 9): Library not loaded: libmysqlclient.16.dylib (LoadError)
        Referenced from: /Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle
        Reason: image not found - /Library/Ruby/Gems/1.8/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle

    This is as far as I can get. What should I do? And why should this be so hard...
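    The final LoadError means the mysql2 gem's native extension can't locate MySQL's client library at runtime. A commonly suggested workaround on Snow Leopard (a sketch only, assuming MySQL lives under /usr/local/mysql as the --with-mysql-config flag above implies) is to make the dylib visible to the dynamic linker and rebuild the gem:

        # let the dynamic linker find MySQL's client library
        sudo ln -s /usr/local/mysql/lib/libmysqlclient.16.dylib /usr/lib/libmysqlclient.16.dylib

        # rebuild mysql2 against the right mysql_config
        sudo env ARCHFLAGS="-arch x86_64" gem install mysql2 -- --with-mysql-config=/usr/local/mysql/bin/mysql_config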

    Read the article

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application, and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5.

    I suspect I've taken "completely the wrong approach"... I do not Application.Run(new TheForm()); instead I (new TheForm()).ShowDialog(). The Form is basically a modal dialogue with a few check-boxes, a text-box, and OK and Cancel Buttons. The user ticks a checkbox and types in a description (or whatever), then presses OK; the form disappears, and the process reads the user input from the Form, Disposes it, and continues processing. This works, except that when the form is shown it doesn't get the focus; instead it appears behind the "host" application until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they).

    So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work... even by quick-and-dirty methods. So please, does anyone know how to "force" a .NET 3.5 Form (by fair means or foul) to get the focus? I'm thinking "magic" Windows API calls (I know...).

    Twilight Zone: This only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home on Visual C# 2008 on Vista Ultimate... That works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when I F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating!

    It may or may not be relevant, but Visual Studio crashed-and-burned this morning when the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... So I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted.

        using System;
        using System.Windows.Forms;

        namespace ShowFormOnTop {
            static class Program {
                [STAThread]
                static void Main() {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);
                    //Application.Run(new Form1());
                    Form1 frm = new Form1();
                    frm.ShowDialog();
                }
            }
        }

    Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly", and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it.

    About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms wiping certificate, and I have been a professional programmer for 10 years, so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers. Keith.

    Read the article

  • Increase RGB components every Hour (r), Minute (g), Second (b) for digital clock

    - by TJ Fertterer
    So I am taking my first JavaScript class (total noob) and one of the assignments is to modify a digital clock by assigning the color red to hours, green to minutes, and blue to seconds, then increasing the respective color component when it changes. I have successfully assigned a decimal color value (e.g. "#850000") to each element (hours, minutes, seconds), but my brain is fried trying to figure out how to increase the brightness when hours, minutes, or seconds change, i.e. red goes up to "#870000" when changing from 1:00:00 pm to 2:00:00 pm. I've searched everywhere with no help on how to successfully do this. Here is what I have so far; any help on this would be greatly appreciated. TJ

        <script type="text/javascript">
        <!--
        function updateClock() {
            var currentTime = new Date();
            var currentHours = currentTime.getHours();
            var currentMinutes = currentTime.getMinutes();
            var currentSeconds = currentTime.getSeconds();

            // Pad the minutes with leading zeros, if required
            currentMinutes = ( currentMinutes < 10 ? "0" : "" ) + currentMinutes;

            // Pad the seconds with leading zeros, if required
            currentSeconds = ( currentSeconds < 10 ? "0" : "" ) + currentSeconds;

            // Choose either "AM" or "PM" as appropriate
            var timeOfDay = ( currentHours < 12 ) ? "AM" : "PM";

            // Convert the hours component to 12-hour format
            currentHours = ( currentHours > 12 ) ? currentHours - 12 : currentHours;

            // Convert an hours component of "0" to "12"
            currentHours = ( currentHours == 0 ) ? 12 : currentHours;

            // Get hold of the html elements by their ids
            var hoursElement = document.getElementById("hours");
            document.getElementById("hours").style.color = "#850000";
            var minutesElement = document.getElementById("minutes");
            document.getElementById("minutes").style.color = "#008500";
            var secondsElement = document.getElementById("seconds");
            document.getElementById("seconds").style.color = "#000085";
            var am_pmElement = document.getElementById("am_pm");

            // Put the clock sections' text into the elements' innerHTML
            hoursElement.innerHTML = currentHours;
            minutesElement.innerHTML = currentMinutes;
            secondsElement.innerHTML = currentSeconds;
            am_pmElement.innerHTML = timeOfDay;
        }
        // -->
        </script>
        </head>
        <body onload="updateClock(); setInterval( 'updateClock()', 1000 )">
        <h1 align="center">The JavaScript digital clock</h1>
        <h2 align="center">Thomas Fertterer - Lab 2</h2>
        <div id='clock' style="text-align: center">
            <span id="hours"></span>:
            <span id='minutes'></span>:
            <span id='seconds'></span>
            <span id='am_pm'></span>
        </div>
        </body>
        </html>

    Read the article

< Previous Page | 144 145 146 147 148 149 150 151 152 153 154  | Next Page >