Search Results

Search found 27782 results on 1112 pages for 'team foundation service'.


  • New Science and Technology Centers

    NSF supports integrative partnerships that require large-scale, long-term funding to produce research and education of the highest quality.

    Read the article

  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI's Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM's use-case driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case driven approach helps a project team manage the project's scope.
    The use-case model is one of several tools in OUM used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it serves as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact for identifying changes to the project's scope. Here are some of the primary benefits of using the use-case model to establish and manage project scope:
    - The use-case model quickly communicates scope in a straightforward manner. All project stakeholders share a common foundation for decisions about architecture and design and how they relate to the project's objectives.
    - Once agreed upon, the model can be put under change control, and any updates to it can be quickly identified as potentially affecting the project's scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources and schedule.
    - A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use cases are tackled early in the project.
    - The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This helps not only in managing scope but also in iterative and incremental planning, which relies heavily on the ability to identify project priorities.
    The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication, and you have an effective tool for reducing the risk of budget or schedule overruns due to out-of-control scope changes. Now that you've had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.

    Read the article

  • Challenges of Managing Off Shore Web Development Teams

    Have you ever thought about the challenges that can arise in managing a full-fledged team of professionals located thousands of miles away from your office? Skillfully managing an offshore team for your company is quite an uphill task and can give rise to numerous problems.

    Read the article

  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using Tortoise to my SVN server using svn://localhost/, Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this?
    I am trying to set up a Subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup.
    I created a repository using svnadmin create; it resides at C:\svnGrove.
    C:\svnGrove\conf\svnserve.conf (# comments omitted):
        [general]
        anon-access=read
        auth-access=write
        password-db=passwd
        #authz-db=authz
        realm=svnGrove
    C:\svnGrove\conf\passwd:
        [users]
        myname=mypass
    My Subversion Server service is pointed to:
        C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove
    It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results.
    The below is provided by the 'about' option in TortoiseSVN:
        TortoiseSVN 1.6.10, Build 19898 - 32 Bit, 2010/07/16 15:46:08
        Subversion 1.6.12, apr 1.3.8, apr-utils 1.3.9, neon 0.29.3, OpenSSL 0.9.8o 01 Jun 2010, zlib 1.2.3
    The following is from svn --version on the command line (not sure why it says CollabNet; CollabNet was the previous SVN binary that I had set up, and the uninstaller failed to remove everything gracefully):
        svn, version 1.6.12 (SlikSvn/1.6.12) WIN32, compiled Jun 22 2010, 20:45:29
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).
        The following repository access (RA) modules are available:
        * ra_neon: accesses a repository via the WebDAV protocol using Neon (handles 'http' and 'https' schemes)
        * ra_svn: accesses a repository using the svn network protocol, with Cyrus SASL authentication (handles 'svn' scheme)
        * ra_local: accesses a repository on local disk (handles 'file' scheme)
        * ra_serf: accesses a repository via the WebDAV protocol using serf (handles 'http' and 'https' schemes)
    I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue.
    Edit: The old version of svnserve was still set up as a service after the uninstall, pointed to this path: C:\Program Files\Subversion\svn-win32-1.4.6\bin. I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service or using -d, I do not see an entry for that port number in the listing generated by netstat -anp tcp.

    Read the article

  • Cross-platform restart of Apache

    - by l0b0
    I'd like to have a single command that will restart Apache on any *nix OS. Currently I'm working with Ubuntu, which has /usr/sbin/apache2ctl and /usr/sbin/service (no apachectl, no httpd), and Scientific Linux CERN 5, which has /usr/sbin/apachectl and /etc/init.d/httpd (no apache2ctl, no service). I'd like to avoid using a hack like: which service 2>/dev/null || which /etc/init.d/httpd
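    A minimal sketch of the kind of wrapper being discussed, assuming the usual control scripts live in their standard locations on each platform; the function name and the fallback order are illustrative, not something prescribed by the distributions:

        #!/bin/sh
        # restart_apache.sh - try the common Apache control mechanisms in order.
        restart_apache() {
            if command -v apache2ctl >/dev/null 2>&1; then
                apache2ctl graceful            # Debian/Ubuntu
            elif command -v apachectl >/dev/null 2>&1; then
                apachectl graceful             # RHEL-style systems with apachectl in PATH
            elif [ -x /etc/init.d/httpd ]; then
                /etc/init.d/httpd restart      # older RHEL/Scientific Linux init script
            elif [ -x /etc/init.d/apache2 ]; then
                /etc/init.d/apache2 restart    # Debian init script fallback
            else
                echo "no known Apache control command found" >&2
                return 1
            fi
        }
        restart_apache

    Graceful restarts are used where available so in-flight requests are not dropped; swap in plain "restart" if a hard restart is wanted.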

    Read the article

  • ISA 2004 - banned site rule causes slow internet

    - by Holian
    Hi Gods, we have Windows Server 2003 with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules:
        order  name    action  protocols     from/listener                  to             condition
        1      trafic  ALLOW   all outbound  all networks                   all networks   all users
        2      FTP     ALLOW   FTP Server    EXTERNAL/INTERNAL/Local host   10.1.1.1
    We have to ban a few web pages (like facebook, youtube, etc.), so we made a new rule:
        0      banned  DENY    HTTP          internal                       denied pages   all users
    The denied pages set contains the *.facebook.com domain. After we enabled this rule, the entire internet slowed down. The banning rule itself works well and redirects to an internal site, but the other sites... If I open a page, it normally takes 3-10 seconds to load, but after this rule it takes 2-4 minutes. In the monitoring / logging menu we get a few FAILED CONNECTION ATTEMPT entries like:
        Log type: Web Proxy (Forward)
        Status: 304 Not Modified
        Rule: All local traffic
        Source: Internal ( 10.1.1.1:0 )
        Destination: External ( 172.24.28.22:3128 )
        Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg
        Filter information: Req ID: 17270b72
        Protocol: http
        User: anonymous
        Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072...
        Object source: Verified Cache
        Processing time: 9047
        Cache info: 0x18801002
        MIME type: -
    In the event log we get a few entries like:
        The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d
        The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. (same details and error code as above)
    If I type netstat -o -n -a | findstr 0.0:80 I get:
        tcp  0.0.0.0:80    0.0.0.0:0  LISTEN  4
        udp  0.0.0.0:8031  *.*                2780
        udp  0.0.0.0:8082  *.*                2780
    Some months ago we installed XAMPP, but now we only use MySQL and the Apache service is stopped. In the XAMPP port check menu I see:
        Service: Apache (http)  Port: 80  Status: Process: System
    Maybe this is the problem? I don't know what I should do now... Thank you, folks.

    Read the article

  • How do you google bad reviews? [closed]

    - by zlog
    The first thing I like to do before I make technology choices or gadget purchases is find bad reviews of them. That way, taking into consideration the potential biases of the reviewer, I can evaluate the product/service at its worst. How do people google (or search in general) for bad reviews? I usually just try googling "bad review [product/service]" or "[product/service] sucks", but I'm sure there are better ways.

    Read the article

  • WebCenter Content Technical Resources

    - by John Klinke
    Some useful WebCenter Content technical resources that you might not be aware of: WebCenter Content Support's Twitter Account: Follow Oracle WebCenter Content Support on Twitter @OrcleWCCSupport to keep informed about webcasts, patches, new releases, tips and tricks, and more. Oracle WebCenter & ADF Architecture Team Blog: Frequent posts by the Oracle Fusion Middleware A-Team discussing WebCenter implementation and configuration topics.  

    Read the article

  • Custom ecommerce solution (similar to newegg.com) price [closed]

    - by Andrew
    I am wondering how much it would cost me to hire a team of developers who could build an improved clone of newegg.com's ecommerce solution, together with the back end. Please give comments based on your experience and realistic measures. Comments like "hire a team of freelancers from India - they will do it for $1,000" are not welcome. I am talking about using realistic, quality, proven developers. Maybe some good advice on how to approach ecommerce development? Thank you.

    Read the article

  • How to implement smart card authentication with a .NET Fat client?

    - by John Nevermore
    I know very little about smart card authentication in general, so please point out or correct me if anything below doesn't make sense. Let's say I have:
    - a Certificate Authority "X"'s smart card (non-exportable private key)
    - drivers for that smart card written in C
    - a smart card reader
    - the CA's authentication OCSP web service
    - a requirement to implement user authentication in a .NET fat client application via a smart card that was given out by the CA "X"
    I tried searching for info on the web, but to no avail. What would the steps be? My first thought was: set up a web service that would allow saving of (for example) scores of a ping pong game for each user. Each time someone tries to submit a score via the client application, he can only do so by inserting the smart card into the reader. Then the public key is read from the smart card by native C calls through .NET and sent to my custom web service, which in turn uses the CA's authentication OCSP web service to prove the validity of the public key/public certificate (?). If the public key is okay and valid, encrypt a random sequence of bytes with the public key and send it to the client application. If the client application sends back the correctly decrypted random sequence of bytes along with the score of the ping pong game, then the score is saved in the database for the given user. My question is: is this the correct way to do it? What else should I know about smart card authentication?

    Read the article

  • MySQL server error #2002

    - by LOIC
    Since this morning, phpMyAdmin has been giving me error #2002 - the server doesn't answer (or the connection to the local MySQL server is not correctly configured) - and a message about the control user. I'm disappointed; it worked until 2am last night, and now the MySQL engine doesn't want to start (it sometimes complains about sockets). LAMPP is installed on Ubuntu 12.04.
        lm@Famou:~$ sudo service mysqld status
        mysqld: unrecognized service
        lm@Famou:~$ sudo service mysqld start
        mysqld: unrecognized service
    It never works with 'service'! And this is the result of restarting lampp:
        root@Famou:/opt/lampp# /opt/lampp/lampp restart
        Stopping XAMPP for Linux 1.7.7...
        XAMPP: Stopping Apache with SSL...
        XAMPP: XAMPP-MySQL is not running.
        XAMPP: Stopping ProFTPD...
        XAMPP stopped.
        Starting XAMPP for Linux 1.7.7...
        XAMPP: Starting Apache with SSL (and PHP5)...
        XAMPP: Starting MySQL...
        XAMPP: Couldn't start MySQL!
        XAMPP: Starting ProFTPD...
        XAMPP for Linux started.
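    A minimal troubleshooting sketch, assuming a default XAMPP/LAMPP layout under /opt/lampp; the startmysql subcommand and the error-log location are typical of XAMPP 1.7.x but worth verifying on your install:

        # MySQL inside XAMPP is not registered as a system service, so
        # "service mysqld ..." is expected to fail; drive it through the lampp script.
        sudo /opt/lampp/lampp startmysql

        # If it still refuses to start, check the MySQL error log that XAMPP keeps
        # under its own var directory (the file is usually named <hostname>.err):
        sudo tail -n 50 /opt/lampp/var/mysql/*.err

        # A stale socket file or a system-wide mysqld already holding port 3306
        # can also prevent the bundled server from starting:
        sudo netstat -tlnp | grep 3306
        ls -l /opt/lampp/var/mysql/mysql.sock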

    Read the article

  • Changing open-files-limit in mysql 5.5

    - by davidv
    I'm having an issue with MySQL 5.5 running on Ubuntu 12.04 and the open-files-limit parameter. I recently noticed some problems due to the 1024 limit, and the main system limit was indeed set to 1024, so I modified /etc/security/limits.conf with the following:
        * soft nofile 32000
        * hard nofile 32000
        root soft nofile 32000
        root hard nofile 32000
    After that I checked the ulimit value for root and for the mysql user; both returned the new value, 32000, so I assume the change has been applied. I also changed the value in the my.cnf file, setting open-files-limit to 24000, like this:
        open-files-limit = 24000
    Now comes the odd part: when I restart the mysql service and check the open_files_limit variable, it reports that it's still set to 1024, so I'm having the same problems as before (obviously). I tried using open-files-limit instead of open_files_limit in the my.cnf config file, with the same result. BUT if I bypass the service command and start the server with mysqld directly (no additional parameters), it starts, and when I check the parameter it returns 32000. I don't know where it's taking that value from, as it's not set in my.cnf and it's not being given on the command line, at least not by me. Any ideas why the change isn't taking effect and how to solve it the normal way (launching it through service)?
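    A short sketch of how to see which limit the running daemon actually received, and where an override is commonly placed on an upstart-managed Ubuntu 12.04 install; the upstart stanza is an assumption about this setup, not something taken from the question:

        # Limits the running mysqld process really has (independent of my.cnf):
        cat /proc/$(pidof mysqld)/limits | grep -i 'open files'

        # What the server believes it was granted:
        mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"

        # On Ubuntu 12.04 mysql is normally started by upstart, which ignores
        # /etc/security/limits.conf. If that applies here, a per-job stanza such as
        #     limit nofile 24000 32000
        # added to /etc/init/mysql.conf (before the exec line) is the usual fix,
        # followed by:
        sudo service mysql restart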

    Read the article

  • Is there a perf hit using mule as a container vs. standard JEE container like Weblogic?

    - by Victor Grazi
    Our team is considering using Mule in a large-scale, medium-volume, internal-facing transactional banking application. At first Mule would just be used as an application server, although it is possible some of its ESB/orchestration features would be used in the future. I have no experience with Mule, being new on the team, but my gut says Mule would not be as performant as WebLogic or GlassFish as a deployment container. Does anyone have any comparison stories to share that might shed light?

    Read the article

  • Bizarre SSH Problem - It won't even start

    - by thallium85
    I recently got Ubuntu 12.04 Precise, got it up and running with some MediaWiki software and a static IP on the box and router, and was able to access the main page even from a cell phone. Everything seemed great... Then I wanted to finally get rid of the monitor and keyboard and log in remotely via SSH. I installed openssh-server, left everything pointing to port 22 for a test run, and installed PuTTY on my Windows XP machine. I got a connection refused. Went back and started checking the Ubuntu install itself (I'm root from this point on):
        $ sudo -s
        $ service ssh status
        ssh stop/waiting
        $ service ssh start
        ssh start/running, process 2212
        $ service ssh status
        ssh stop/waiting
    Apparently ssh has stopped or is waiting for something...
        $ ssh localhost
        ssh: connect to host localhost port 22: Connection refused
    I can't even connect to myself. I checked ufw (the firewall) to see if port 22 is doing all right:
        $ sudo ufw status
        Status: active
        To          Action   From
        22          ALLOW    Anywhere
        22/tcp      ALLOW    Anywhere
        22          ALLOW    Anywhere (v6)
        22/tcp      ALLOW    Anywhere (v6)
    sshd_config shows only Port 22. Is ssh not using the right IP address at all? I just don't get what I did wrong here. When this is up and running I will definitely change the port number, but for now I don't want to mess with the default install too much until a test run with PuTTY is successful.
    Edit: Here are my sshd_config file and my ssh_config file. The command /usr/sbin/sshd -p 22 -D -d -e returns:
        /etc/ssh/sshd_config line 159: Subsystem 'sftp' already defined.
    Edit: @phoibus's suggestion of moving the sshd_config file aside and reinstalling did the trick! service ssh status now shows that ssh is running, and I am able to log in from my Windows XP computer remotely via PuTTY. Thanks so much! I can now use my monitor for other things!
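    For anyone hitting the same wall, a short sketch of the checks that usually narrow this down on a stock openssh-server install (standard OpenSSH flags and Ubuntu log locations assumed):

        # Validate the config without starting the daemon; a duplicate
        # "Subsystem sftp" line, as in this case, is reported here:
        sudo /usr/sbin/sshd -t

        # Confirm whether anything is actually listening on port 22:
        sudo netstat -tlnp | grep ':22 '

        # Messages from failed start attempts land in the auth log:
        sudo tail -n 50 /var/log/auth.log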

    Read the article

  • SQL Saturday #220 Atlanta May 2013!

    - by Most Valuable Yak (Rob Volk)
    If you love SQL Server training and are near the Atlanta area, or just love us so much you're willing to travel here, please come join us for SQL SATURDAY #220! The main event is Saturday, May 18. The event is free, with a $10.00 lunch fee. The main page has more details here: http://www.sqlsaturday.com/220/eventhome.aspx
    We are also offering pre-conference sessions on Friday, May 17, by 5 world-renowned presenters:
    - Denny Cherry: SQL Server Security
    - Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance
    - Stacia Misner: Languages of BI
    - Bill Pearson: Practical Self-Service BI with PowerPivot for Excel
    - Eddie Wuerch: The DBA Skills Upgrade Toolkit
    We have an early-bird registration price of $119 until noon EST Friday, March 22. After that the price goes to $149, a STEAL when you compare it to the PASS Summit price. :) Please see the event page to register and for more information. You can also follow the hash tag #SQLSatATL on Twitter for more news about this event. Can't wait to see you all there!

    Read the article

  • Nagios test of smtp configuration

    - by Funky Si
    Is there a way of configuring a Nagios check to verify that an SMTP service is correctly configured and emails are actually going out? I have a check that the service is running, but recently we noticed that the configuration had been altered and no emails were going out, even though the service was still running. One idea I had was to schedule a regular email to be sent; is it possible for Nagios to check for that email and raise an alert if it doesn't detect it? Any other ideas for monitoring this gratefully received.
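    One common approach is a round-trip check: a cron job sends a timestamped probe message through the SMTP server, and a small Nagios plugin later verifies the probe arrived in a monitored mailbox. A minimal sketch, assuming a local mbox file and the standard mail(1) client; the mailbox path, probe address and subject tag are illustrative:

        #!/bin/sh
        # check_mail_roundtrip - CRITICAL if no recent probe mail is seen.
        # Companion cron job (every 15 minutes, for example):
        #   echo "nagios probe" | mail -s "smtp-probe $(date +%s)" probe@yourdomain.example
        MBOX=/var/mail/probe
        MAX_AGE_MIN=30

        if [ ! -f "$MBOX" ]; then
            echo "CRITICAL: mailbox $MBOX not found"
            exit 2
        fi

        # The mailbox must have been written to recently and contain a probe message.
        if [ -n "$(find "$MBOX" -mmin -"$MAX_AGE_MIN")" ] && grep -q "smtp-probe" "$MBOX"; then
            echo "OK: probe mail delivered within the last $MAX_AGE_MIN minutes"
            exit 0
        else
            echo "CRITICAL: no probe mail within the last $MAX_AGE_MIN minutes"
            exit 2
        fi

    Exit codes 0 and 2 map to Nagios OK and CRITICAL, so the script can be wired in as an ordinary check_command.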

    Read the article

  • Configuring Nagios BGP plugin on Ubuntu

    - by user141610
    I am trying to configure the Nagios check_bgp_neighbors plug-in on Ubuntu and followed the plug-in's README file. I have made the following changes. From:
        define command{
            command_name check_bgp_all
            command_line $USER1$/check_bgp_neighbors -H $HOSTADDRESS$ -C $USER3$ -n $ARG1$ -n $ARG2$
        }
    to:
        define command{
            command_name check_bgp_all
            command_line /usr/local/nagios/libexec/check_bgp_neighbors.sh -H xx.xx.xx.49 -C xx.xx.xx.50
        }
    And from:
        define service{
            use server-service
            hostgroup_name svc-bgp1
            service_description BGP Check 1
            check_command check_bgp_all!10.0.0.1!172.16.0.2
        }
    to:
        define service{
            use generic-service
            hostgroup_name svc-bgp1
            service_description BGP Check 1
            check_command check_bgp_all!xx.xx.xx.50
        }
    xx.xx.xx.49 is the IP of the host router and xx.xx.xx.50 is the IP of the eBGP neighbour. After that it shows critical status. I know my command is not correct but cannot pinpoint the problem. I learned that this plug-in requires the user name and password of the host router, but I don't know how and where to provide them. The Nagios log does not show any error message. Status information: Failed: status:0 prefixes:0 sent:0 received:0
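    Before wiring it into Nagios, it usually helps to run the plugin by hand with the same arguments the command definition would expand to; that separates a plugin or access problem from a Nagios configuration problem. A sketch with placeholder values - the flags are the ones from the README's command definition above, and the community/credential value passed to -C is an assumption:

        # Run the check manually as the nagios user, substituting real values:
        sudo -u nagios /usr/local/nagios/libexec/check_bgp_neighbors.sh \
            -H xx.xx.xx.49 \
            -C your_community_or_credential \
            -n xx.xx.xx.50
        echo "exit code: $?"

    If "status:0 prefixes:0 sent:0 received:0" comes back with a non-zero exit code even on a manual run, the router is not answering the query, which points at the -C value or access from the Nagios host rather than at the Nagios service and command definitions.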

    Read the article

  • Two Cloudy Observations from Oracle OpenWorld

    - by GeneEun
    Now that the dust has settled from another amazing Oracle OpenWorld, I wanted to reflect back on a couple of key observations I made during the event. First, it was pretty clear that Cloud was again a big deal at this year's conference. Yes, the Oracle Database 12c announcement was also huge, but for most it was hard to not notice that Oracle continues to be "all-in" with respect to cloud computing. Just to give you an idea of the emphasis on Cloud, there were over 300 Cloud-related sessions at this year's OpenWorld. If you caught some of the demo booths in the Oracle Red Lounge, then you saw some of the great platform, application, and social services that are now part of Oracle Cloud, as well as numerous demos of private cloud products that Oracle offers. Second, during Thomas Kurian's keynote presentation on Oracle Cloud, he announced the Preview Availability of a new service called Oracle Developer Cloud Service. This new platform service will provide developers with instant access to environments to better manage the application development lifecycle in the cloud. It provides development project teams access to favorite tools like Hudson, Git, Github, wikis, and tasks to help make innovation faster, more collaborative, and more effective. There's also integration with IDEs like Eclipse, NetBeans, and JDeveloper. If you're a developer, it's an awesome addition to Oracle Cloud's platform services! Want more details about Oracle Developer Cloud Service? Click here.

    Read the article

  • Gradle/NetBeans

    - by Geertjan
    The Gradle team (thanks Szczepan and others) fixed a crucial bug impacting, at least, NetBeans IDE, and so the Gradle/NetBeans plugin is now coming along OK. In the screenshot, note the dependencies and Gradle targets shown on the left-hand side and the Groovy editor on the right-hand side. There is still lots of work to do; in particular I would like to have a Gradle file for NetBeans RCP projects, which Hans Dockter from the Gradle team is helping me with.

    Read the article

  • Naming a class that processes orders

    - by p.campbell
    I'm in the midst of refactoring a project. I've recently read Clean Code and want to heed some of the advice within, with particular interest in the Single Responsibility Principle (SRP). Currently there's a class called OrderProcessor in the context of a manufacturing product order system. Every n minutes this class performs the following routine:
    - check the database for newly submitted, unprocessed orders (via a data layer class already, phew!)
    - gather all the details of the orders
    - mark them as in-process
    - iterate through each to: perform some integrity checking; call a web service on a 3rd-party system to place the order; check the status return value of the web service for success/fail; email somebody if the web service returns fail; constantly log to a text file on each operation or possible fail point
    I've started by breaking this class out into new classes like:
    - OrderService - poor name; this is the one that wakes up every n minutes
    - OrderGatherer - calls the DL to get the orders from the database
    - OrderIterator (? seems too forced or poorly named)
    - OrderPlacer - calls the web service to place the order
    - EmailSender
    - Logger
    I'm struggling to find good names for each class and to implement SRP in a reasonable way. How could this class be separated into new classes with discrete responsibilities?

    Read the article

  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that, and that I need to set my domain's NS records for the hosting to work: "Why would you ever want to do it that way? Our service to you includes DNS, and our servers are probably much better than the one your registrar provides." This was a bit of a surprise, as all of the other webhosts I've worked with happily support this. I've set up (e.g.) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost, and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide a service if they control the DNS service too?
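    For context, a sketch of the two models being compared, using the poster's placeholder names; the zone-file lines are illustrative, not the actual DNS in question:

        # CNAME model: the registrar (or any external DNS host) keeps the zone,
        # and a single record points the site at the webhost's shared server:
        #   gallery.myfriend.example.  IN  CNAME  shared-server.webhost.example.
        dig +short gallery.myfriend.example CNAME

        # NS model: the webhost runs DNS for the whole domain:
        #   myfriend.example.  IN  NS  ns1.webhost.example.
        #   myfriend.example.  IN  NS  ns2.webhost.example.
        dig +short myfriend.example NS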

    Read the article

  • Moving Microsoft Exchange server to the private network.

    - by Alexey Shatygin
    In one of the offices, we have a 50-computer network which had only one server machine, running:
    - Windows Server 2003
    - Microsoft ISA Server
    - Microsoft Exchange 2003
    This server worked as a gateway (proxy server), mail server, file server, firewall and domain controller. It had two network interfaces, one for the WAN (let's say 222.222.222.222) and one for the LAN (192.168.1.1). I set up a Linux box to be the gateway (without a proxy), so the Linux box now has the following interfaces: 222.222.222.222 (our external IP, which we removed from the Windows machine) and 192.168.1.100 (internal IP). We need to keep the old Windows server as a mail server and a proxy for some of our users until we prepare another Linux machine for that, so I need the mail server on that machine to be reachable from the Internet.
    I set up iptables rules to redirect all incoming connections on ports 25 and 110 of our external IP to 192.168.1.1:25 and 192.168.1.1:110. When I test our SMTP service with
        telnet 222.222.222.222 25
    I get the greeting from our Windows server's (192.168.1.1) SMTP service, and that works fine. But when I telnet the POP3 service
        telnet 222.222.222.222 110
    I only get a blank black screen, and the connection seems to drop if I press any key. I've checked the ISA rules; everything seems to be the same for ports 110 and 25. When I telnet port 110 of our Windows server from our new gateway machine like this:
        telnet 192.168.1.1 110
    I get access to its POP3 service:
        +OK Microsoft Exchange Server 2003 POP3 server version 6.5.7638.1 (...) ready.
    What should I do to make the POP3 service available through our new gateway?
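    For comparison, a minimal sketch of the forwarding rules this kind of setup typically needs on the Linux gateway; the addresses are the ones from the description, while the FORWARD-chain policy and the need for masquerading are assumptions about the environment:

        # DNAT incoming POP3 on the public address to the Exchange box:
        iptables -t nat -A PREROUTING -d 222.222.222.222 -p tcp --dport 110 \
            -j DNAT --to-destination 192.168.1.1:110

        # Allow the forwarded traffic through (easy to miss if FORWARD defaults to DROP):
        iptables -A FORWARD -p tcp -d 192.168.1.1 --dport 110 -j ACCEPT

        # If Exchange would otherwise answer clients directly (asymmetric routing),
        # masquerade the forwarded connections so replies return via this gateway:
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.1 --dport 110 -j MASQUERADE

    Since the identical approach works for port 25, comparing the two rule sets (iptables-save | grep -E ':25|:110') and checking whether an ISA application filter is still attached to POP3 on the Windows box are the usual next steps.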

    Read the article
