Search Results

Search found 8411 results on 337 pages for 'django shell'.

Page 134 of 337

  • emacs: Inferior-mode python-shell appears "lagged"

    - by Begbie00
    Hi all - I'm a Python (3.1.2) / emacs (23.2) newbie teaching myself tkinter using the pythonware tutorial found here. Relevant code is pasted below the question. Question: when I click the Hello button (which should call the say_hi function), why does the inferior python shell (i.e. the one I kicked off with C-c C-c) wait to execute the say_hi print until I either a) click the Quit button or b) close the root widget? When I try the same in IDLE, each click of the Hello button produces an immediate print in the IDLE python shell, even before I click Quit or close the root widget. Is there some quirk in the way emacs runs the Python shell (vs. IDLE) that causes this "lagged" behavior? I've noticed similar emacs lags vs. IDLE as I've worked through Project Euler problems, but this is the clearest example I've seen yet. FYI: I use python.el and have a relatively clean init.el; (setq python-python-command "d:/bin/python31/python") is the only line in it. Thanks, Mike

    === Begin Code ===

        from tkinter import *

        class App:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                self.button = Button(frame, text="QUIT", fg="red", command=frame.quit)
                self.button.pack(side=LEFT)
                self.hi_there = Button(frame, text="Hello", command=self.say_hi)
                self.hi_there.pack(side=LEFT)

            def say_hi(self):
                print("hi there, everyone!")

        root = Tk()
        app = App(root)
        root.mainloop()
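    One plausible explanation is output buffering rather than anything tkinter-specific: emacs on Windows talks to subprocesses over pipes, so Python block-buffers stdout and the text only shows up when the buffer is flushed, whereas IDLE's shell window is effectively unbuffered. A minimal sketch of the same app with an explicit flush added, purely to test that theory:

        import sys
        from tkinter import *

        class App:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                Button(frame, text="QUIT", fg="red", command=frame.quit).pack(side=LEFT)
                Button(frame, text="Hello", command=self.say_hi).pack(side=LEFT)

            def say_hi(self):
                print("hi there, everyone!")
                sys.stdout.flush()  # push the text out now instead of waiting for the buffer to fill

        root = Tk()
        app = App(root)
        root.mainloop()

    If the prints appear immediately with the flush in place, the lag is a buffering artifact of the inferior shell rather than a problem with the tkinter event loop.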

    Read the article

  • How to avoid shell execution of a Process?

    - by adeel amin
    When I start a process like process = Runtime.getRuntime().exec("gnome-terminal");, it starts shell execution. I want to stop the shell execution and instead redirect I/O from the process; can anybody tell me how I can do this? My code is:

        public void start_process() {
            try {
                process = Runtime.getRuntime().exec("gnome-terminal");
                pw = new PrintWriter(process.getOutputStream(), true);
                br = new BufferedReader(new InputStreamReader(process.getInputStream()));
                err = new BufferedReader(new InputStreamReader(process.getErrorStream()));
            } catch (Exception ioe) {
                System.out.println("IO Exception-> " + ioe);
            }
        }

        public void execution_command() {
            if (check == 2) {
                try {
                    boolean flag = thread.isAlive();
                    if (flag == true)
                        thread.stop();
                    Thread.sleep(30);
                    thread = new MyReader(br, tbOutput, err, check);
                    thread.start();
                } catch (Exception ex) {
                    JOptionPane.showMessageDialog(null, ex.getMessage() + "1");
                }
            } else {
                try {
                    Thread.sleep(30);
                    thread = new MyReader(br, tbOutput, err, check);
                    thread.start();
                    check = 2;
                } catch (Exception ex) {
                    JOptionPane.showMessageDialog(null, ex.getMessage() + "1");
                }
            }
        }

        private void jButton3ActionPerformed(java.awt.event.ActionEvent evt) {
            // TODO add your handling code here:
            command = tfCmd.getText().toString().trim();
            pw.println(command);
            execution_command();
        }

    When I enter a command in the text field and press the execute button, nothing is displayed in my output text area. How can I stop shell execution and redirect input and output?

    Read the article

  • How does key-based caching work?

    - by Dominic Santos
    I recently read an article on the 37Signals blog and I'm left wondering how they get the cache key. It's all well and good having a cache key that includes the object's timestamp (this means that when you update the object the cache will be invalidated); but how do you then use the cache key in a template without causing a DB hit for the very object that you are trying to fetch from the cache? Specifically, how does this affect one-to-many relations where you are rendering a post's comments, for example. Example in Django:

        {% for comment in post.comments.all %}
          {% cache comment.pk comment.modified %}
            <p>{{ post.body }}</p>
          {% endcache %}
        {% endfor %}

    Is caching in Rails different from just making requests to memcached, for example (I know that they convert your cache key to something different)? Do they also cache the cache key?
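    A minimal sketch of the key-based idea in Django terms (the model fields, template name and timeout are illustrative assumptions, not taken from the article): the cache key embeds the parent object's pk and last-modified timestamp, so touching the object produces a new key and the stale entry simply ages out of memcached.

        from django.core.cache import cache
        from django.template.loader import render_to_string

        def rendered_comments(post):
            # Key changes whenever the post is touched; assumes Post has a
            # 'modified' timestamp that is bumped when its comments change.
            key = "post-comments:%s:%s" % (post.pk, post.modified.isoformat())
            html = cache.get(key)
            if html is None:
                html = render_to_string("comments.html",
                                        {"comments": post.comments.all()})
                cache.set(key, html, 60 * 60)  # fallback expiry of one hour
            return html

    On one common reading of the pattern, the row whose timestamp forms the key does still get read; what the cache avoids is the expensive part behind it (the comments query and the rendering), which is skipped on every hit.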

    Read the article

  • Should I work on CSS or apps first?

    - by Fred
    I am working on my own website for my business, and I need to contract out some assistance. For example, I have the site looking pretty good on Firefox, but it needs help on other browsers. I also need some help with adding some Django apps to the site and setting up a database. I plan to seek the help of two different individuals via Elance or oDesk. My question is which to do first: get the CSS and HTML right and then do the apps, or get the apps done and then work on the CSS and HTML? Thanks in advance for any suggestions.

    Read the article

  • How do I learn Python from zero to web development? [closed]

    - by Terence Ponce
    I am looking into learning Python for web development. Assuming I already have some basic web development experience with Java (JSP/Servlets), I'm already familiar with web design (HTML, CSS, JS) and basic programming concepts, and I am completely new to Python, how do I go about learning Python in a structured manner that will eventually lead me to web development with Python and Django? I'm not in a hurry to make web applications in Python, so I really want to learn it thoroughly so as not to leave any gaps in my knowledge of the technologies involved in web development with Python. Are there any books, resources or techniques to help me in my endeavor? In what order should I do/read them?

    UPDATE: When I say learning in a structured manner, I mean starting out from the basics and then learning the advanced stuff without leaving out some of the important details/features that Python has to offer. I want to know how to apply the things that I already know in programming to Python.

    Read the article

  • Restarting shell script with &disown using Monit

    - by Solas Admin
    I have a shell script that runs a C++ backend mail system (PluginHandler). I need to monitor this process in Monit and restart it if it fails. The script:

        export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/
        cd PluginHandler/
        ./PluginHandler

    This script does not have a PID file, and we run it by executing ./rundaemon.sh &disown. ./PluginHandler starts the process and starts logging into /etc/output/output.log. I stop the process by identifying the process ID with [ps -f | grep PluginHandler] and then killing the process. I can check the process in Monit just fine, but I think that while Monit starts the process when it is not running, it can't do &disown, so the process ends as soon as it starts. This is the code in the monitrc file for checking this process:

        check process Backend matching "PluginHandler"
            if not exist then alert
            start "PATH/TO/SCRIPT/rundaemon.sh &disown"
            alert [email protected] only on {timeout} with mail-format {subject: "[BLAH"}

    I tried to stop the script from terminating by modifying it as follows, but this does not work either:

        export LD_LIBRARY_PATH=/usr/local/lib/:/home/CONFIDENTAL/production/CONFIDENTAL/Common/
        cd PluginHandler/
        (nohup ./PluginHandler &)
        return

    Any help writing proper Monit rules to resolve this issue would be greatly appreciated :)

    Read the article

  • TTS on App Engine

    - by yati sagade
    I have written a small front-end to the Festival TTS system using Python/Django. I wish to deploy it on the Google App Engine cloud. A few questions:

    1. My application uses the Festival app 'text2wave'. Will it work on the cloud?
    2. I have used Python primitives like subprocess.call() to invoke the aforementioned program. Will that work?
    3. If your answer to either or both of (1) and (2) is no, is there a free API on the web that I can use (from App Engine)? I read somewhere about placing calls from Phono to a Voxeo backend, but I'm not sure what that means. I am aware of the Google Translate extension that allows translation using an HTTP GET (REST) request, but there the text is limited to 100 chars. Bad. Plus, they may take it down at any point in time.
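    If the subprocess route turns out to be closed off (App Engine's sandbox has historically not allowed spawning processes), the usual fallback is exactly the kind of HTTP-based service mentioned in (3). A sketch of what such a call looks like from App Engine using urlfetch -- the endpoint here is entirely hypothetical, not a real TTS API:

        import urllib

        from google.appengine.api import urlfetch

        def synthesize(text):
            # "example-tts-service.example.com" is a placeholder endpoint.
            result = urlfetch.fetch(
                url="https://example-tts-service.example.com/speak",
                payload=urllib.urlencode({"text": text}),
                method=urlfetch.POST,
                headers={"Content-Type": "application/x-www-form-urlencoded"})
            if result.status_code == 200:
                return result.content  # audio bytes from the hypothetical service
            raise RuntimeError("TTS request failed with status %d" % result.status_code)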

    Read the article

  • 'module' object has no attribute 'PY2'

    - by ManikandanV
    I am using Ubuntu 14.04 and was trying to install python-memcached. I got an error like:

        Downloading/unpacking python-memcached
          Downloading python-memcached-1.53.tar.gz
        Cleaning up...
        Exception:
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
            status = self.run(options, args)
          File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 278, in run
            requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
          File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1229, in prepare_files
            req_to_install.run_egg_info()
          File "/usr/lib/python2.7/dist-packages/pip/req.py", line 292, in run_egg_info
            logger.notify('Running setup.py (path:%s) egg_info for package %s' % (self.setup_py, self.name))
          File "/usr/lib/python2.7/dist-packages/pip/req.py", line 284, in setup_py
            if six.PY2 and isinstance(setup_py, six.text_type):
        AttributeError: 'module' object has no attribute 'PY2'
        Storing debug log for failure in /home/mani/.pip/pip.log

    I am getting the same error when installing Django-celery, pymongo, etc.
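    A small diagnostic, not a fix: the traceback shows pip importing a copy of 'six' that is too old to have the PY2 attribute, so checking which six the interpreter actually picks up (and how old it is) usually points at the stale copy that needs upgrading or removing.

        from __future__ import print_function

        # Run this with the same interpreter pip uses (python2.7 here).
        import six

        print("six loaded from:", six.__file__)
        print("six version:", getattr(six, "__version__", "unknown"))
        print("has PY2:", hasattr(six, "PY2"))  # False on a copy old enough to trigger the error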

    Read the article

  • How do you handle domain logic that spans multiple model objects in an ORM?

    - by duality_
    So I know that business logic should be placed in the model. But using an ORM it is not as clear where I should place code that handles multiple objects. E.g. let's say we have a Customer model which has a type of either sporty or posh, and we want to call customer.add_bonus() on every posh customer. Where would we do this? Do we create a new class to handle all this? If yes, where do we put it (alongside all the other model classes, but not subclassed from the ORM)? I'm currently using the Django framework in Python, so specific suggestions are even more welcome.
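    One conventional home for logic that operates on a set of model instances is a custom manager or QuerySet rather than a free-standing class. A minimal sketch -- the field names are invented for the example, and QuerySet.as_manager() assumes a reasonably recent Django:

        from django.db import models

        class CustomerQuerySet(models.QuerySet):
            def posh(self):
                return self.filter(type="posh")

            def add_bonus(self, amount):
                # Set-level business logic lives here, next to the model
                # but not on any single instance.
                return self.update(bonus=models.F("bonus") + amount)

        class Customer(models.Model):
            type = models.CharField(max_length=10)
            bonus = models.IntegerField(default=0)

            objects = CustomerQuerySet.as_manager()

    Usage then reads as Customer.objects.posh().add_bonus(10); in older Django versions the same methods would sit on a custom models.Manager subclass instead.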

    Read the article

  • What kind of authorization should I use for my Facebook application

    - by JSmith
    I am building a social reader Facebook application using Django, in which I am using the Google Data API (Blogger API). But I am unable to deal with the authorization step to use the API (currently using ClientLogin during development). I tried to read the OAuth documentation but couldn't figure out how to proceed. I don't want my users to provide any login credentials for Google, which would make the app completely absurd. So, can anyone help me with my project and tell me what kind of authorization I should actually use, and how? (I am using the gdata library.)

    Read the article

  • Run a shell script using cron

    - by Blanca
    Hi! I have this FeedIndexer.sh:

        #!/bin/sh
        java -jar FeedIndexer.jar

    Just to run FeedIndexer.jar, which is in the same directory as the .sh, I would like to run it using crontab, so I did this:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        01 01 * * *  root  run-parts --report /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        #

    But I don't know how to run it. Have I made any mistake? Thank you!

    Read the article

  • Is backing up a MySQL database in GIT a good idea?

    - by wobbily_col
    I am trying to improve the backup situation for my application. I have a Django application and a MySQL database. I read an article suggesting backing up the database in Git. On the one hand I like it, as it will keep a copy of the data and the code in sync. But Git is designed for code, not for data. As such it will be doing a lot of extra work diffing the MySQL dump on every commit, which is not really necessary. If I compress the file before storing it, will Git diff the files? (The dump file is currently 100MB uncompressed, 5.7MB when bzipped.) Edit: the code and database schema definitions are already in Git; it is really the data I am concerned about backing up now.

    Read the article

  • run script as another user from a root script with no tty stdin

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root and has no tty standard in. Below I give my four different attempts, which all fail.

    1:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    This last line is not good enough, since training_command needs the environment for the training user to be set.

    2:

        su - training -c 'training_command'

    This looks like it (http://serverfault.com/questions/44400/run-a-shell-script-as-a-different-user) but gives 'standard in must be tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (a la http://superuser.com/questions/119376/bash-su-script-giving-an-error-standard-in-must-be-a-tty) but I am reluctant and unsure of the consequences.

    3:

        runuser - training -c 'training_command'

    This one gives "runuser: cannot set groups: Connection refused". I found no sense or resolution to this error.

    4:

        ssh -p100 training@localhost 'source $HOME/.bashrc; training_command'

    This one is more of a joke, to show desperation. Even this one fails with "Host key verification failed" (the host key IS in known_hosts, etc.).

    Note: all of 2, 3 and 4 work as they should if I run the wrapper script from a root shell. Problems only occur if the system service monitor (daemontools) launches it (no tty terminal, I guess). I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice. (This has also been posted on superuser: http://superuser.com/questions/434235/script-calling-script-as-other-user)

    Read the article

  • How can I disable the CTRL-ALT-DEL key combination completely on XP/Vista/7?

    - by Travesty3
    I have been googling extensively to figure this out, and nobody seems to be able to give a direct answer. Let me start by saying that I'm NOT talking about requiring CTRL-ALT-DEL to enter logon information. I'm working on a golf simulator program which is used at golf centers. I need the ability to completely disable the CTRL-ALT-DEL key sequence so that the golf center customers can't get out of the program and access the computer at all. I realize there are other key combinations that need to be handled as well; we already have this entire feature working in XP, but we're going to be switching to Windows 7 soon, and CTRL-ALT-DEL is the only one that doesn't seem to work in Win7. I'd really like an all-around solution if at all possible. This same program may also be installed on a client's personal computer for an in-home golf simulator, but the computers that really need this feature (golf center computers) are provided to the golf center by us, so would the best option be to write a new shell? I don't know anything about that at all, other than that others suggest writing a new shell for kiosk mode. I'd really like a simpler option, like modifying the registry in some way. I have heard that you can remove some buttons from the menu screen that pops up, but unless I can remove pretty much all of them (including the shutdown/restart button in the bottom-right corner), this won't be enough of a solution for me. Thanks for taking the time to read this and thanks again for any help you could provide! -Travis

    Read the article

  • py.test import context problems (causes Django unit test failure)

    - by dhill
    I made the following test:

        # main.py
        import imported
        print imported.f.__module__

        # imported.py
        def f():
            pass

        # test_imported.py (py.test test case)
        import imported

        def test_imported():
            result = imported.f.__module__
            assert result == 'imported'

    Running python main.py gives me "imported", but running py.test gives me an error, and the result value is "moduletest.imported" (moduletest is the name of the directory I keep the test in; it doesn't contain __init__.py, and moduletest is the only directory containing *.py files in ~/tmp). How can I fix the result value? The long story: I'm getting strange errors while testing a Django application. A call to reverse() (from django.urlresolvers) with a function object foo as the argument crashes in tests with NoReverseMatch: Reverse for 'site.app.views.foo'. The same call inside the application works. I checked, and it is converted to 'app.views.foo' (without the site prefix). I first suspected my customised test setup for Django, but then I made the test above.
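    A throwaway diagnostic rather than a fix: running the same introspection under plain python and under py.test makes the difference visible, since the usual cause is that the two runs resolve 'imported' through different sys.path entries once py.test has adjusted the path during collection.

        # test_show_context.py -- run with `py.test -s` so the prints are visible.
        from __future__ import print_function

        import sys

        import imported

        def test_show_import_context():
            print("imported.__name__ =", imported.__name__)
            print("imported.__file__ =", imported.__file__)
            print("sys.path[0:3] =", sys.path[0:3])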

    Read the article

  • ksh Auto-Completion PuTTY Configuration

    - by Nitrodist
    I'm having a bit of a problem configuring my PuTTY client to work with the auto-completion feature in the ksh shell. I do a listing on the root with the directories /home and /homeroot and it returns the directories in a list just fine. I can't select it, though, by hitting X = (where X is the number).

        /home/nitrodist> ls /h      # hits esc + =
        1) home/
        2) homeroot/
                                    # hits 2 + = for the 'homeroot' dir
        1) home/
        2) homeroot/
                                    # hits just the '=' key
        1) home/
        2) homeroot/

    Any ideas? I've su -'d to another user who can actually do it with their PuTTY session and I can't do it there, which makes me think it's a PuTTY configuration issue. This is running on a ksh93 shell on HP-UX, if that makes any difference. Here's my ksh config:

        /home/campbelm> set -o
        Current option settings
        allexport        off
        bgnice           on
        emacs            off
        errexit          off
        gmacs            off
        ignoreeof        off
        interactive      on
        keyword          off
        markdirs         off
        monitor          on
        noexec           off
        noclobber        off
        noglob           off
        nolog            off
        notify           off
        nounset          off
        privileged       off
        restricted       off
        trackall         off
        verbose          off
        vi               on
        viraw            on
        xtrace           off
        /home/campbelm>

    Read the article

  • ssh initial prompt hangs for 10 minutes but console login and initial prompt is very responsive - why?

    - by rfreytag
    I have been running an ESXi 4.0 server for months with a couple of WinServer2003 and several Ubuntu Server 10.4 VMs. The performance has been impressive on 6GB i7 Asus P6T hardware. Suddenly, a week ago, ssh logins to the Ubuntu VMs started taking 10 minutes when connecting over the LAN (over a WAN the connection (pipe) is broken long before that). When logging in to these VMs the password prompt arrives immediately, and failed passwords are responded to immediately. But the moment I log in and the shell prompt appears, I hang for many minutes. Sometimes the connection hangs before the shell prompt appears, and sometimes I can type in a command but the moment I hit return the machine hangs. Ten full minutes later control returns and the VM is responsive.

    NOTE: there are several Ubuntu VMs on the same host machine that are identical in all ways that I can tell. However, only one of the VMs displays this behavior. That is why I mention the ESXi host in passing - I don't think it has anything to do with the problem. This behavior is never seen when I connect with the troubled VM's console (through vSphere Client). From the console the Ubuntu VMs all respond beautifully.

    I have seen http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1003496&sliceId=1&docTypeID=DT_KB_1_1&dialogID=229586372&stateId=1%200%20229588522 ...and since that relates to delays in seeing the password prompt, it does not appear to be the solution here. Any other suggestions very welcome - thank you.

    Read the article

  • Sudo won't execute command as another user

    - by TOdorus
    I'm trying to get a unicorn server to start when the server boots. I've created a shell script which works if I log in as the ubuntu user and run /etc/init.d/unicorn start.

    Shell script:

        #!/bin/sh
        case "$1" in
        start)
            cd /home/ubuntu/projects/asbest/current/
            unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production
            ;;
        stop)
            if ps aux | awk '{print $2 }' | grep `cat ~/projects/asbest/current/tmp/pids/unicorn.pid`> /dev/null; then
                kill `cat ~/projects/asbest/current/tmp/pids/uni$
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        esac

    When I rebooted the server I noticed that the unicorn server wasn't listening on a socket. Since I ran the code successfully as the ubuntu user, I modified the script to always use the ubuntu user via sudo.

        #!/bin/sh
        case "$1" in
        start)
            cd /home/ubuntu/projects/asbest/current/
            sudo -u ubuntu unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production
            ;;
        stop)
            if ps aux | awk '{print $2 }' | grep `cat ~/projects/asbest/current/tmp/pids/unicorn.pid`> /dev/null; then
                sudo -u ubuntu kill `cat ~/projects/asbest/current/tmp/pids/uni$
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        esac

    After rebooting, unicorn still wouldn't start, so I tried running the script from the command line. Now I get the following error:

        sudo: unicorn_rails: command not found

    I've searched high and low for what could cause this, but I'm afraid I've tapped my limited understanding of Linux. From what I can understand, although sudo should use the ubuntu user to execute the commands, it still uses the environment of the root user, which isn't configured to run ruby or unicorn. Does anybody have any experience with this?

    Read the article

  • Windows + Django + mod_wsgi = "DLL load failed"

    - by Kyle MacFarlane
    For a long time I was using Python 2.5 to do all this fine, but recently upgraded to 2.7 since building stuff for 2.5 is a real pain. I also updated mod_wsgi to 3.3 for Python 2.7. Everything is working fine with Apache + mod_wsgi on CentOS, and also in the Django runserver on both Windows and CentOS, but not with Apache + mod_wsgi on Windows. Whenever I try to access a page in my Django app I get the following (note that Apache starts fine):

        ImportError at /
        DLL load failed: The specified module could not be found.

    Which is caused by things like:

        from Crypto.Cipher import AES

    Etree and others cause the exact same error, and it is not limited to any specific packages. Anything with pyd files fails. Googling around suggests reinstalling Python "for all users", but the installer doesn't give you that option anymore anyway. For good measure I've tried reinstalling Python 2.7 as an administrator and also told it to register itself as the default version of Python, but neither helped. I think the solution might have something to do with:

    1. The fact that I have 2.5, 2.6 and 2.7 installed on this machine, and mod_wsgi might be loading the DLLs for 2.5 instead of 2.7.
    2. Something to do with WSGIPythonPath, which I usually don't need to set.
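    One way to confirm or rule out the 2.5-vs-2.7 suspicion is to temporarily point WSGIScriptAlias at a bare diagnostic application instead of the Django handler. A minimal sketch that just reports which interpreter and sys.path mod_wsgi is actually using (Python 2 style, matching the setup described):

        # diagnostic.wsgi -- temporary stand-in for the Django WSGI handler.
        import sys

        def application(environ, start_response):
            body = "%s\n\nsys.path:\n%s" % (sys.version, "\n".join(sys.path))
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [body]

    If the version line reports 2.5, the problem sits in the Apache/mod_wsgi pairing (which Python the mod_wsgi build is linked against) rather than in the installed packages.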

    Read the article

  • Batch files - number of command line arguments

    - by pyko
    Just converting some shell scripts into batch files and there is one thing I can't seem to find... and that is a simple count of the number of command line arguments. e.g. if you have:

        myapp foo bar

    In shell:

        $# - 2
        $* - foo bar
        $0 - myapp
        $1 - foo
        $2 - bar

    In batch:

        ?? - 2       <---- what command?!
        %* - foo bar
        %0 - myapp
        %1 - foo
        %2 - bar

    So I've looked around, and either I'm looking in the wrong spot or I'm blind, but I can't seem to find a way to get a count of the number of command line arguments passed in. Is there a command similar to shell's "$#" for batch files? P.S. the closest I've found is to iterate through the %1s and use 'shift', but I need to reference %1, %2 etc. later in the script, so that's no good.

    Read the article

  • How can I extract and save values from an XML file in Perl?

    - by Freddy
    Here is what I am trying to do in a Perl script:

        $data = "";

        sub loadXMLConfig() {
            $filename = "somexml.xml";
            $data = $xml->XMLin($filename);
        }

        sub GetVariable() {
            ($FriendlyName) = @_;
            switch ($FriendlyName) {
                case "My Friendly Name" { print $data->{my_xml_tag_name} }
                ....
                ....
                ....
            }
        }

    The problem is I am using Perl just because I am reading from an XML file, but I need to get these variables from a shell script. So here is what I am using:

        $ perl -e 'require "scrpt.pl"; loadConfigFile(); GetVariable("My Variable")'

    This works exactly as expected, but I need to read the XML file every time I get a variable. Is there a way I could "preserve" $data across shell calls? The idea is that I read the XML file only once. If not, is there a simpler way I could do this? These are the things I can't change: the config file is an XML file, and I need the variables in a shell script.

    Read the article

  • Self-modify the classpath within a Scala script?

    - by Alex R
    I'm trying to replace a bunch of Linux shell scripts with Scala scripts. One of the remaining challenges is how to scan an entire directory of JARs and place them on the classpath. Currently this is done in the shell script prior to invoking the Scala JVM. I'd like to eliminate the shell script completely. Is there an elegant Scala idiom for this? I have found this other question, but in Java it seems hardly worthwhile to mess with it: http://stackoverflow.com/questions/252893/how-do-you-change-the-classpath-within-java

    Read the article

  • Multi MVC processing vs Single MVC process

    - by lordg
    I've worked fairly extensively with the MVC framework CakePHP, however I'm finding that I would rather have my pages driven by multiple MVCs than by just one MVC. My reason is primarily to maintain a more DRY principle. In CakePHP MVC, you call a URL which calls a single MVC, which then calls the layout. What I want is: you call a URL, it processes a layout, which then calls multiple MVCs per component/block of HTML on the page. When you compare JavaScript components, AJAX, and server-side HTML rendering, it seems the most consistent method for building pages is through blocks of components or HTML views. That way, the view block could be situated either on the server or the client. This is technically my ONLY disagreement with the MVC model. Outside of this, IMHO MVC rocks! My question is: what other RAD frameworks follow the same principles as MVC but are driven rather by the View side of MVC? I've looked at Django and Ruby on Rails, yet they seem to be more Controller driven. Lift/Scala appears to be somewhat of a good fit, but I'm interested to see what others exist.

    Read the article

  • Should I include HTML markup in my JSON response?

    - by Mike M. Lin
    In an e-commerce site, when adding an item to a cart, I'd like to show a popup window with the options you can choose. Imagine you're ordering an iPod Shuffle and now you have to choose the color and the text to engrave. I'd like the window to be modal, so I'm using a lightbox populated by an Ajax call. Now I have two options:

    Option 1: Send only the data, and generate the HTML markup using JavaScript. What's nice about this is that it trims the Ajax request down to the bare minimum and doesn't mix the data with the markup. What's not so great about this is that now I need to use JavaScript to do my rendering, instead of having the template engine on the server side do it. I might be able to clean up the approach a bit by using a client-side templating solution.

    Option 2: Send the HTML markup. What's good about this is that I can have the same server-side templating engine I'm using for the rest of my rendering tasks (Django) do the rendering of the lightbox. JavaScript is only used to insert the HTML fragment into the page. So it clearly leaves the rendering to the rendering engine. Makes sense to me. But I don't feel comfortable mixing data and markup in an Ajax call for some reason. I'm not sure what makes me feel uneasy about it. I mean, it's the same way every web page is served up -- data plus markup -- right?
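    A sketch of option 2 expressed as a Django view (the template path and URL wiring are illustrative): the fragment is rendered server-side by the normal template engine and handed to the page inside a small JSON envelope, so the client-side JavaScript only has to inject it into the lightbox.

        import json

        from django.http import HttpResponse
        from django.template.loader import render_to_string

        def cart_options(request, product_id):
            html = render_to_string("cart/options_fragment.html",
                                    {"product_id": product_id})
            payload = json.dumps({"html": html})
            return HttpResponse(payload, content_type="application/json")

    Nothing stops the same view from adding plain data fields to the envelope alongside the markup when the page needs them, which is a common middle ground between the two options.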

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team, but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer lives in a different code repository, and each can follow different release cycles. I'm trying to come up with a process that will prevent/reduce breaking changes in the services layer from the perspective of the UI layer. I've thought about writing integration tests at the UI layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) for the developer of the services layer to create and maintain a client library for the REST service that lives in the UI layer, and that they will update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically-typed API that the UI code builds against. If the client library API changes, then the UI code won't compile (so we'll know sooner that there was a breaking change). I'd also still run the integration tests upon building the UI or services layer to further validate that the integration between the UI and the service(s) still works.
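    A sketch of the kind of thin client the question is weighing, with every name hypothetical: the UI layer codes against a couple of methods instead of raw URLs, so a breaking change in the service surfaces in one place, and the client library's own test suite doubles as the early-warning integration check.

        import requests

        class OrdersClient(object):
            """Minimal wrapper the UI code depends on instead of hand-built HTTP calls."""

            def __init__(self, base_url):
                self.base_url = base_url.rstrip("/")

            def get_order(self, order_id):
                response = requests.get("%s/orders/%s" % (self.base_url, order_id))
                response.raise_for_status()
                return response.json()

            def create_order(self, payload):
                response = requests.post("%s/orders" % self.base_url, json=payload)
                response.raise_for_status()
                return response.json()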

    Read the article
