Search Results

Search found 69390 results on 2776 pages for 'team work'.

  • Can't get port forwarding to work on Ubuntu

    - by Znarkus
    I'm using my home server as a NAT/router, which works well. But now I'm trying to forward port 3478, which I can't get to work. eth0 is the public interface and eth1 the private network.
        $ cat /proc/sys/net/ipv4/conf/eth0/forwarding
        1
        $ cat /proc/sys/net/ipv4/conf/eth1/forwarding
        1
    Then, to forward port 3478 to 10.0.0.7, I read somewhere that I should run:
        iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 3478 -j DNAT --to-destination 10.0.0.7:3478
        iptables -A FORWARD -p tcp -d 10.0.0.7 --dport 3478 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    I also ran:
        ufw allow 3478
    But testing port 3478 with http://www.canyouseeme.org/ doesn't work. Any idea what I have done wrong?
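
    A minimal sketch of the rule set usually needed for this kind of DNAT setup, assuming eth0/eth1 and 10.0.0.7 as described above; the FORWARD rule direction and a return-path NAT rule are the pieces most often missed. This is a hedged example, not a confirmed fix:

        # Sketch: forward TCP 3478 from the public interface to 10.0.0.7.
        # Interfaces and addresses are taken from the question; adjust as needed.

        # 1. Make sure forwarding is enabled globally (harmless if already on).
        sysctl -w net.ipv4.ip_forward=1

        # 2. Rewrite the destination of incoming connections on the public side.
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3478 \
          -j DNAT --to-destination 10.0.0.7:3478

        # 3. Let the forwarded traffic pass the FORWARD chain.
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 10.0.0.7 --dport 3478 \
          -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

        # 4. NAT outbound traffic so replies return through the router.
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # Note: "ufw allow 3478" only opens INPUT; ufw's default forward policy is deny,
        # so routed traffic may still be dropped while ufw is active. Also, 3478 is the
        # STUN port, so the service may need UDP as well (repeat steps 2-3 with -p udp).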

    Read the article

  • My raspberry pi server hostname doesn't work?

    - by xSpartanCx
    The people over on the rPi forums don't have any answers for me... I've got a Raspberry Pi running the Raspbian server edition. My problem is that the only way I can SSH into it with PuTTY is through the static IP. My router doesn't recognize the hostname; it shows the MAC address as the name. This causes the Pi not to show my Apache2 website online (I think). The only way I've gotten it to work is by using my other Linux server to forward using virtual hosts, and that has to use the IP address, too. However, now that I have my other server off, the website doesn't work.
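
    Two common workarounds, shown as a hedged sketch using stock Debian/Raspbian package and file names (nothing here is taken from the poster's setup): advertise the hostname over mDNS so .local lookups bypass the router entirely, or make sure the DHCP client sends the hostname so a router that registers DHCP names can resolve it.

        # Option A: mDNS - "hostname.local" then resolves from other LAN machines.
        sudo apt-get update
        sudo apt-get install avahi-daemon
        # afterwards, e.g.: ssh pi@raspberrypi.local   (use the Pi's actual hostname)

        # Option B: confirm the hostname is set and sent with DHCP requests.
        cat /etc/hostname
        grep -n "send host-name" /etc/dhcp/dhclient.conf   # typically: send host-name = gethostname();

    Note that neither change affects whether the Apache site is reachable from the internet; that depends on the router's port forwarding and public DNS rather than the LAN hostname.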

    Read the article

  • My server hostname doesn't work? [on hold]

    - by xSpartanCx
    I've got a Raspberry Pi running the Raspbian server edition. It's a modified Debian that runs well on the rPi. My problem is that the only way I can SSH into it with PuTTY is through the static IP. My router doesn't recognize the hostname; it shows the MAC address as the name. This causes the Pi not to show my website online (I think). The only way I've gotten it to work is by using my other Linux server to forward using virtual hosts, and that has to use the IP address, too. However, now that I have my other server off, the website doesn't work and I can't SSH in (or find the Pi anywhere on the network) using the hostname.

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: we are 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now, in a single TFS Team Project:
        $/MAIN/src/
        $/MAIN/src/ApplicationA/VSSOlution.sln
        $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
        $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
        $/MAIN/src/ApplicationB/...
        $/MAIN/src/ApplicationC
        $/MAIN/src/SharedInfrastructureA
        $/MAIN/src/SharedInfrastructureB
    My goal (a pretty typical promotion pipeline):
    1. When a code change is made to a given application, I want to build that application and auto-deploy the change to a DEV server. I may also need to build dependencies on shared infrastructure components, and I often have some database scripts or changes as well.
    2. If developer testing passes, I want a manually triggered but automated deploy of that build to a STAGING server, where end users will review the new functionality.
    3. Once it's approved by end users, I want a manually triggered auto-deploy to production.
    Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is single-application-specific; how is it best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3, promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?

    Read the article

  • Should I expect my team to have more than a basic proficiency with our source control system?

    - by Joshua Smith
    My company switched from Subversion to Git about three months ago. We had weeks of advance notice prior to the switch. Since I'd never used Git before (or any other DVCS), I read Pro Git and spent a little time spinning up my own repositories and playing around, so that when we switched I'd be able to keep working with minimal pain. Now I'm the 'Git guy' by default. With a couple of exceptions, most of my team still has no idea how Git works. For example, they still think of branches as complete copies of the source code, and even go so far as to clone the repo into multiple folders (one per branch). They generally look at Git as a scary black box. Given the fundamental nature of source control in our daily work (not to mention the ridiculous amount of power Git affords us), I'm of the opinion that any dev who doesn't achieve a certain level of proficiency with it is a liability. Should I expect my team to have at least some understanding of how Git works internally, and how to use it beyond the most basic pull/merge/push operations? Or am I just making something out of nothing?
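
    For illustration, a short sequence that often helps ex-Subversion users see why one clone per branch is unnecessary: branches are just movable pointers inside a single repository, and switching rewrites the one working tree in place. The remote URL and branch names below are made up.

        git clone git@example.com:team/project.git    # hypothetical remote
        cd project
        git branch -a                    # all local and remote branches live in this one clone
        git checkout -b feature/login    # create and switch to a topic branch (Git 2.23+: git switch -c)
        # ...edit, commit...
        git checkout master              # same folder, working tree rewritten in place
        git merge feature/login
        git log --oneline --graph --all  # branches are just labels on the commit graph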

    Read the article

  • How do you QA and release software quickly (some call it agile) with a large team?

    - by sadadasd
    My work used to be a smaller team. We had less than 13 devs for a while. We are now growing rapidly, and are over 20 with plans to be over 30 in a few months (triple the dev size!). Our process for QA'ing and releasing each build is no longer working. We currently have everyone develop the new code and stick it onto a staging environment. A few days before our weekly release, we would freeze the staging environment and QA everything, new and old. By our normal release time everything was usually deemed acceptable and pushed out the door to the main site. We reached a point where our code got too big, so we could no longer regress the entire site each week in QA. We were OK with that; we just made a list of everything important and only covered that and the new stuff. Now we are reaching a point where all the new stuff each week is becoming too big and too unstable. Our staging environment is really buggy week after week, and we are usually 1-2 hours behind the normal release time. As the team grows further, we are going to drown with this same process. We are re-evaluating everything, and I personally am looking for suggestions / success stories. Many companies have been here before and progressed beyond it; we need to do the same.

    Read the article

  • When am I ready for contracting work?

    - by Kirk Broadhurst
    What skills are required to work on a temp / contracting basis, and how does a developer know when they are ready to work in these circumstances? I have some colleagues who suggest contracting work is 'the way to go'; the pay is significantly better. It appears that when a permanent position pays (X times $10k) per year, the corresponding contract position pays almost $X per hour - which is close to twice as much. I look forward to doing this type of work as an experienced expert, but worry that by doing so right now I'd be turning away learning and development opportunities. My assumptions about contract work are the following:
    - less / zero money invested in training and development
    - less concern for job satisfaction and learning ("they won't be here in 12 months")
    - possibly less concern about the overall quality of the project ("we won't be here in 12 months")
    There are similar questions on Stack Overflow at the moment, but what I'm really looking for is:
    - What does the developer give up by moving to contract work?
    - Is it preferable to have structured learning in a permanent position rather than seat-of-the-pants learning in a contract position?
    - What skill level should the contractor have before making the move?
    - Is there still the same kind of growth in contracting that there is in permanent positions?
    Rightly or wrongly, I see this as a choice between $ (contracting) and learning / development (permanent). Is this fair?

    Read the article

  • How to find Part Time development/IT work?

    - by Jonathan
    I've been working in the IT field now for 10 years. Originally trained as an engineer, I started out with C++ and have been a .NET specialist since the beta. Currently seconded to a major city and working freelance in the finance industry, I really feel like I've hit the glass ceiling. I've been contracting for 5 years now; the company politics, the frustration of not being promoted, and poor pay rises for excellent work during the last decade of corporate cost cutting took their toll on my morale. Freelancing made all the difference, and I've had a very decorated career with good clients - what any engineering student could dream of when starting out. The problem is, it doesn't particularly make me happy. It's good work, and I enjoy the problem-solving aspects of it and having something to do each day. However, there is always a large overhead of non-technical work and dealing with poor managers, etc. I guess engineering was always a bit of a mistake that I made the best of, and ten years behind a computer hasn't done wonders for my health or eyesight. In a nutshell, I am in the process of retraining as a therapist and would like to open my own clinic. However, never having done this before, with IT skills that quickly go out of date and the fact that all my experience and skills are non-transferable, I am a little worried. Any ideas how I can find part-time IT work while I build up my business? (It's incredibly hard to find freelance work that doesn't require long hours and overtime.) Or other ideas to make the transition easier, and perhaps back out if it doesn't work out financially or my marketing skills aren't enough? I'd be interested to hear from people who have made a similar transition, successfully or unsuccessfully. Many thanks.

    Read the article

  • IKImageView, Buttons work intermittently in 10.5 but ok in 10.6

    - by markhunte
    Hi all. In IB, I have connected some buttons to an IKImageView using its received actions 'ZoomIn:', 'ZoomOut:' and 'ZoomImageToActualSize:'. When I build for 10.6 I have no issues; these work as expected. But on 10.5 (PPC), they sometimes work as expected and sometimes do not. I have even tried it programmatically, but I get the same results. It is intermittent with each build after I have made some small change in IB that I need - but not changes that should affect how the view acts. The only buttons that seem to work throughout are 'ZoomImageToFit' and the rotate ones. When they do not work as expected, I see no change to the current image in the view, but when my app changes the image (I have had this issue before in different apps), the view reflects the last request from a button. For example, when it does not work: an image is displayed, I click the 'ZoomImageToActualSize:' button, and nothing happens; then I load a new image and it is displayed at actual size. When it does work: an image is displayed, I click the 'ZoomImageToActualSize:' button, and it is displayed at actual size. By the way, I know I can reset the view defaults before I load a new image. Also, if I run the 10.5 app on 10.6 it works OK, but as above I get intermittent results with the same build on multiple 10.5 machines. Does anyone know what is going on - is this a known bug? Thanks for any help. MH

    Read the article

  • TFS 2010 Workitem auto transition

    - by Luka
    Hi. How can I make a work item transition automatically from the New state to the Assigned state when I populate the Assigned To field? I.e. I have reported a bug, so its state is New. Now I populate (select from the dropdown) the Assigned To field, and I want it to transition automatically to the Assigned (Active) state. Please help.

    Read the article

  • iPhone UIWebView: loadData does not work with certain types (Excel, MSWord, PPT, RTF)

    - by Thomas Tempelmann
    My task is to display the supported document types on an iPhone with OS 3.x, such as .pdf, .rtf, .doc, .ppt, .png, .tiff etc. Now, I have stored these files only encrypted on disk. For security reasons, I want to avoid storing them unencrypted on disk. Hence, I prefer to use loadData:MIMEType:textEncodingName:baseURL: instead of loadRequest: to display the document, because loadData allows me to pass the content in an NSData object, i.e. I can decrypt the file in memory and have no need to store it on disk, as would be required when using loadRequest. The problem is that loadData does not appear to work with all file types: testing shows that all picture types seem to work fine, as well as PDFs, while the more complex types don't. I get errors such as:
        NSURLErrorDomain Code=100
        NSURLErrorDomain Code=102
    WebView appears to need a truly working URL for accessing the documents as a file, despite me offering all the content via the NSData object already. Here's the code I use to display the content:
        [webView loadData:data MIMEType:type textEncodingName:@"utf-8" baseURL:nil];
    The MIME type is properly set, e.g. to "application/msword" for .doc files. Does anyone know how I could get loadData to work with all types that loadRequest supports? Or, alternatively, is there some way I can tell which types do work for sure (i.e. officially sanctioned by Apple) with loadData? Then I can work twofold, creating a temp unencrypted file only for those cases that loadData won't like.
    Update: Looks like I'm not the first one running into this. See here: http://osdir.com/ml/iPhoneSDKDevelopment/2010-03/msg00216.html So, I guess, that's the status quo, and there is nothing I can do about it. Someone suggested a workaround which might work, though: http://osdir.com/ml/iPhoneSDKDevelopment/2010-03/msg00219.html Basically, the idea is to provide a tiny HTTP server that serves the file (from memory in my case), and then use loadRequest. This is probably a bit more memory-intensive, though, as both the server and the webview will probably hold the entire contents in memory as two copies, as opposed to using loadData, where both would rather share the same data object. (Mind you, I'll have to hold the decrypted data in memory; that's the whole point here.)

    Read the article

  • Perks for new programmers

    - by Autobyte
    I intend to hire 2-3 junior programmers right out of college. Aside from cash, what is the most important perk for a young programmer? Is it games at work? I want to be creative... I want some good ideas.

    Read the article

  • Dev environment - Cubicles or pods?

    - by jon
    We're reorganizing our workspaces at work, and are individually being given the choice of working in a more open space with a few other developers, or a more closed off space by ourselves. Which should I choose?

    Read the article

  • Open plan office annoyance

    - by arturito
    Not a technical question, but related to IT. At the moment I work in an open plan office, and the guy next to me talks to himself while programming. It annoys my colleague and me so much that we put our earphones on with the music volume set to max. Does anyone know a good and polite way to get him to stop?

    Read the article

  • Trouble getting SSL to work with django + nginx + wsgi

    - by Kevin
    I've followed a couple of examples for Django + nginx + wsgi + SSL, but I can't get them to work. I simply get an error in my browser that I can't connect. I'm running two websites off the host. The config files are identical except for the IP addresses, server names, and directories. When neither uses SSL, they work fine. When I try to listen on 443 with one of them, I can't connect to either. My config files are below, and any suggestions would be appreciated.
        server {
            listen xxx.xxx.xxx.xxx:80;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        server {
            listen xxx.xxx.xxx.xxx:443;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            ssl on;
            ssl_certificate sub.domain.com.crt;
            ssl_certificate_key sub.domain.com.key;
            ssl_prefer_server_ciphers on;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Protocol https;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        <VirtualHost *:8080>
            ServerName xxx.xxx.xxx.xxx
            ServerAlias xxx.xxx.xxx.xxx
            LogLevel warn
            ErrorLog /home/django/logs/apache_customerdb_error.log
            CustomLog /home/django/logs/apache_customerdb_access.log combined
            WSGIScriptAlias / /home/django/customerdb/apache/django.wsgi
            WSGIDaemonProcess customerdb_wsgi processes=4 threads=5
            WSGIProcessGroup customerdb_wsgi
            SetEnvIf X-Forwarded-Protocol "^https$" HTTPS=on
        </VirtualHost>
    UPDATE: the existence of two sites (on separate IPs) on the host is the issue. If I delete the other site, the settings above mostly work. Doing so also brings up another issue: Chrome doesn't accept the site as secure, saying that some content is not encrypted.
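
    A few hedged checks that usually narrow this kind of problem down; hostnames and IPs below are placeholders, not values from the question. Each HTTPS server block needs its own listen entry on port 443 for the IP it serves, nginx has to reload cleanly, and the firewall has to pass 443. The Chrome warning at the end is classic mixed content: an https page referencing http:// assets.

        sudo nginx -t                        # certificate/key path and syntax errors show up here
        sudo nginx -s reload                 # apply the configuration

        sudo netstat -lntp | grep ':443'     # is nginx actually listening on the intended IP?
        sudo iptables -L -n | grep 443       # is 443 allowed through the firewall?

        # Talk TLS directly to the IP the SSL server block binds to:
        openssl s_client -connect xxx.xxx.xxx.xxx:443 -servername sub.domain.com < /dev/null

        # Mixed-content warning: look for hard-coded http:// asset URLs in the project.
        grep -Rn "http://" /home/django/customerdb/ | head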

    Read the article

  • Cygwin - Repo with Separate Git/Working Dir Doesn't Work

    - by Kyle Lacy
    Since I switched to OS X and Vim, I've found it easiest to manage all of my 'dotfiles' (all of my configuration files and miscellaneous scripts) with Git. Having already set up my dotfiles in a repo following this tutorial, I figured it would also be easy enough to migrate all of my settings into my Cygwin setup on my Windows partition. Already having the repo on GitHub, I simply cloned it and moved all of the files over to my home directory, making it a mirror of my OS X home directory. Unfortunately, I cannot use my dotfiles repo with git within Cygwin. The setup differs from most normal git repos in that the working directory and the git directory are in different locations. Specifically, the working directory is $HOME (/Users/kyle on OS X, /home/kyle in Cygwin), and the git repo is $HOME/.dotfiles.git. So, if I wanted to get the status of the repo, for example, I would type the following command (which I alias to reduce typing, of course):
        git --work-tree=$HOME --git-dir=$HOME/.dotfiles.git status -uno
    While this works fine on OS X, it refuses to work within Cygwin. Regardless of whether or not I use my alias, or whether or not I substitute $HOME by hand, I get the following git error:
        fatal: Not a git repository: /home/Kyle/dotfiles/.git/modules/.build/git
    I don't understand where this error comes from, but the path /home/Kyle/dotfiles was the original location of the git repo when I initially cloned it. Additionally, it's important to note that the repo relies heavily on submodules. If specifics are necessary, the repo in question can be found on GitHub. The commands I ran to set up the repo in Cygwin can also be found within the Readme file.
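
    A hedged sketch of the usual "bare repo + alias" dotfiles layout, plus the check that most often explains this exact fatal error; the URL and the submodule path below are placeholders, not taken from the repo in question.

        # One-time setup on a new machine:
        git clone --bare git@github.com:user/dotfiles.git "$HOME/.dotfiles.git"
        alias dotfiles='git --git-dir=$HOME/.dotfiles.git --work-tree=$HOME'
        dotfiles checkout                          # materialise the tracked files into $HOME
        dotfiles config status.showUntrackedFiles no

        # "Not a git repository: .../.git/modules/..." usually means a submodule's
        # .git pointer file still references a gitdir path from the old machine:
        cat "$HOME/.vim/bundle/some-plugin/.git"   # hypothetical submodule
        #   gitdir: /home/Kyle/dotfiles/.git/modules/.build/git   <- stale path like this
        # Fix the path (or re-run `git submodule update --init` from $HOME) so it points
        # at a directory that actually exists under the Cygwin home.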

    Read the article

  • PS/2 to USB adapter doesn't work with Model M keyboard

    - by mickburkejnr
    I bought a server about 3 months ago from a friend, and I have only had time to tinker with it in the last week. I noticed that this server doesn't have any PS/2 ports, which made configuring it near impossible. I don't have any USB keyboards in the house; I only have an IBM Model M keyboard (built in 1994) and another IBM keyboard built circa 2001. Both of them have PS/2 connections. I bought an adapter off eBay, and when I used it with the Model M keyboard the three lights on the keyboard flashed for a split second, but then the keyboard was unresponsive. I can bash away at the keys for ages and nothing will happen. The same applies to the later IBM keyboard. What could I do to make the adapter work? I am getting the loan of a USB keyboard in two weeks' time, but I'd like a more permanent solution without having to borrow a keyboard every time I have to perform maintenance on the server. And as I already have two keyboards which work fine and which I like using, I don't really want to have to buy another keyboard just for use on the server.

    Read the article

  • Pinging an external server through OpenVPN tunnel doesn’t work

    - by qdii
    I have an OpenVPN server and a client, and I want to use this tunnel to access not only 10.8.0.0/24 but the whole internet. So far, pinging the server from the client through the tun0 interface works, and vice versa. However, pinging www.google.com from the client through tun0 doesn't work (all packets are lost). I figured that I should configure the server so that any packet coming from tun0 and destined for the internet is forwarded, so I came up with these iptables config lines:
        interface_connecting_to_the_internet='eth0'
        interface_openvpn='tun0'
        internet_ip_address=`ifconfig "$interface_connecting_to_the_internet" | sed -n s'/.*inet \([0-9.]*\).*/\1/p'`
        iptables -t nat -A POSTROUTING -o "${interface_connecting_to_the_internet}" -j SNAT --to-source "${internet_ip_address}"
        echo '1' > /proc/sys/net/ipv4/ip_forward
    Yet this doesn't work: the packets are still lost, and I am wondering what could possibly be wrong with my setup. Some details. ip route gives, on the server:
        default via 176.31.127.254 dev eth0 metric 3
        10.8.0.0/24 via 10.8.0.2 dev tun0
        10.8.0.2 dev tun0 proto kernel scope link src 10.8.0.1
        127.0.0.0/8 via 127.0.0.1 dev lo
        176.31.127.0/24 dev eth0 proto kernel scope link src 176.31.127.109
    ip route gives, on the client:
        default via 192.168.1.1 dev wlan0 proto static
        10.8.0.1 via 10.8.0.5 dev tun0
        10.8.0.5 dev tun0 proto kernel scope link src 10.8.0.6
        127.0.0.0/8 via 127.0.0.1 dev lo scope link
        192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.109
    The client uses wifi adapter wlan0 and TUN adapter tun0. The server uses ethernet adapter eth0 and TUN adapter tun0. The VPN spans 10.8.0.0/24. Both client and server are running Linux 3.6.1.
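
    A hedged sketch of the pieces usually involved in sending all client traffic through the tunnel, using the subnet and interface names from the routes above. The NAT rule on the server only matters once packets for the internet actually arrive over tun0, and with the client routes shown above they apparently never do, because the client still has its default route via wlan0.

        # --- on the server ---
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

        # In the OpenVPN server config (e.g. /etc/openvpn/server.conf), push a default
        # route (and a resolver) to clients; otherwise only the VPN subnet uses tun0:
        #   push "redirect-gateway def1 bypass-dhcp"
        #   push "dhcp-option DNS 8.8.8.8"

        # --- on the client, after reconnecting ---
        ip route                # should now show 0.0.0.0/1 and 128.0.0.0/1 routes via tun0
        ping -c 3 8.8.8.8       # works while www.google.com fails => DNS, not routing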

    Read the article

  • Cannot get mod_rewrite to work on Mac OSX Mountain Lion

    - by Joel Joel Binks
    I have tried everything I can think of and it still doesn't work. I am trying to get the example code from Larry Ullman's Advanced PHP book to work. His instructions were a bit lacking, so I had to do some research. Here is what I have configured:
    username.conf:
        <Directory "/Users/me/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    httpd.conf:
        LoadModule rewrite_module libexec/apache2/mod_rewrite.so
        DocumentRoot "/Users/me/Sites"
        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>
        <Directory "Users/me/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    .htaccess:
        <IfModule mod_rewrite.so>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Redirect certain paths to index.php:
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
            RewriteLog "/var/log/apache/rewrite.log"
            RewriteLogLevel 2
        </IfModule>
    Nothing has worked, and it won't even log to the rewrite.log file. What have I done wrong? FYI, even when I set up an extremely simple rule or use the root as the rewrite base, it still fails. I have also verified that the mod_rewrite module is running. I am really angry.
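
    A few hedged diagnostics for the stock Apache 2.2 that ships with Mountain Lion; the paths below are the OS X defaults, so adjust if the setup differs. Two .htaccess details are worth checking first, because both are consistent with nothing being logged: <IfModule> tests the compiled source name, so <IfModule mod_rewrite.so> never matches and the whole block is silently skipped (it would need to be <IfModule mod_rewrite.c>), and RewriteLog / RewriteLogLevel are server-config-only directives that are not valid inside .htaccess at all.

        apachectl -M 2>/dev/null | grep rewrite     # is rewrite_module actually loaded?
        apachectl configtest                        # syntax problems in httpd.conf / users/*.conf
        grep -n "AllowOverride" /etc/apache2/httpd.conf /etc/apache2/users/*.conf

        tail -f /var/log/apache2/error_log          # watch while requesting a rewritten URL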

    Read the article

  • Sniffing at work- How to detect

    - by coffeeaddict
    Because the place I work has some real issues with people, especially in IT and the owner, I wonder if we are being sniffed. Is there any way to tell, on a Vista 64-bit machine:
    1) Is there some identification in the system logs that would tell me that someone, such as an admin, might be logging into my PC?
    2) Is there something in the logs that would flag that I'm being monitored some other way?
    3) How can I be sure that my Gmail, Hotmail, and chat are not being sniffed? I know there are things like Simp, etc.; I'm talking about specific hidden system signs, either in the registry or in logs.
    Obviously I'm not going to raise any suspicion by asking our network admin; I don't trust anyone at this company. Is there a good way to monitor for this as an end user? Could someone log in and basically watch me work, and if so, would there be any goodies left behind for me to find out that this has happened - other than visual signs, which would not be present... maybe some running processes?

    Read the article

  • apache 2.4, mod_proxy_fcgi not honouring .htaccess, work around needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi for the purpose of passing PHP through to php-fpm (this will be used for a shared hosting environment). The .htaccess works fine for non-PHP files, but once it hits the rewrite rule that proxies the PHP requests through, the .htaccess is ignored. I know why it is happening. The question is: how do I work around it? How do I force Apache to treat the request to a PHP file as a request to a local file, and then proxy it through? I have spent substantial time researching this problem, and the following "answers" were given as solutions:
    1) "Use the Apache configuration instead of .htaccess" - a valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)).
    2) "Don't use .htaccess, as it has performance/security/other issues" - well, how else would shared hosting customers control access / URL rewriting on their site? Besides, if .htaccess were not a requirement I would simply use nginx.
    3) "Put the rewrite rule for the proxy inside of " - this is incorrect, and it does not work. This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
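
    One commonly documented way around this, sketched with placeholder socket/address values: hand PHP to the proxy via a handler instead of an early rewrite or ProxyPassMatch, so normal per-directory (.htaccess) processing still runs first. Note the caveat that SetHandler "proxy:..." support arrived in Apache 2.4.10, so on 2.4.7 this particular form would require an upgrade.

        httpd -v 2>/dev/null || apache2 -v     # confirm the running version first

        # Sketch of the vhost/server config (not .htaccess), assuming php-fpm on 127.0.0.1:9000:
        #   <FilesMatch "\.php$">
        #       SetHandler "proxy:fcgi://127.0.0.1:9000"
        #   </FilesMatch>
        # With a handler, customers' .htaccess rewrite and access rules are evaluated
        # normally, and only requests that really map to a .php file reach php-fpm.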

    Read the article
