Search Results

Search found 13452 results on 539 pages for 'django testing'.

Page 141/539

  • Django + dbxml + Apache = problems. Any solutions?

    - by Jason
    I'm trying to set up a Django application using WSGI. That works fine. However, I am having some issues with part of my Django app that uses BDB XML. My Apache config is as follows:

        Listen 8000
        WSGISocketPrefix /tmp/wsgi
        <VirtualHost *:8000>
            ServerName <server name>
            DocumentRoot <path to doc root>
            LogLevel info
            WSGIScriptAlias / <path to wsgi>
            WSGIApplicationGroup %{GLOBAL}
            WSGIDaemonProcess debug threads=1
            WSGIProcessGroup debug
        </VirtualHost>

    However, I'm still getting the following error:

        DB_ENV->repmgr_stat interface requires an environment configured for the replication subsystem
        [error] child died with signal 11

    My environment is opened as:

        environment = DBEnv()
        environment.open( <absolute db env path>, DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL, 0 )

    I am using Python 2.6.2, Apache 2.2, Ubuntu 9.04, and dbxml 2.5.13 compiled from source (so libdb-4.8, bsddb3, all that jazz). I see Apache seems to link to libdb-4.6. Is this a problem?

        ldd /usr/sbin/apache2 | grep libdb
        libdb-4.6.so => /usr/lib/libdb-4.6.so (0xb7c01000)

    Updated: here is the crash under gdb:

        Program received signal SIGSEGV, Segmentation fault.
        [Switching to Thread 0xb5a48b90 (LWP 12700)]
        0x00000000 in ?? ()
        (gdb) thread apply all bt

        Thread 4 (Thread 0xb6a67b90 (LWP 12698)):
        #0 0xb7f11422 in __kernel_vsyscall ()
        #1 0xb7de07b1 in select () from /lib/tls/i686/cmov/libc.so.6
        #2 0xb7ea5bcf in apr_sleep () from /usr/lib/libapr-1.so.0
        #3 0xb6d7afee in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #4 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0
        #5 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
        #6 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6

        Thread 3 (Thread 0xb6249b90 (LWP 12699)):
        #0 0xb7f11422 in __kernel_vsyscall ()
        #1 0xb7de07b1 in select () from /lib/tls/i686/cmov/libc.so.6
        #2 0xb7ea5bcf in apr_sleep () from /usr/lib/libapr-1.so.0
        #3 0xb6d7ab39 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #4 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0
        #5 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
        #6 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6

        Thread 2 (Thread 0xb5a48b90 (LWP 12700)):
        #0 0x00000000 in ?? ()
        #1 0xb4f03b5e in DbXml::XmlManager::XmlManager () from /home/jason/dbxml-2.5.13/install/lib/libdbxml-2.5.so
        #2 0xb501b29b in _wrap_new_XmlManager (self=0x0, args=0xac66fcc) at dbxml_python_wrap.cpp:5183
        #3 0xb6b77aed in PyCFunction_Call () from /usr/lib/libpython2.6.so.1.0
        #4 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #5 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0
        #6 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0
        #7 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0
        #8 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #9 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0
        #10 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #11 0xb6b9ae03 in ?? () from /usr/lib/libpython2.6.so.1.0
        #12 0xb6b90f55 in ?? () from /usr/lib/libpython2.6.so.1.0
        #13 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #14 0xb6bd7618 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0
        #15 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0
        #16 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0
        #17 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #18 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0
        #19 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #20 0xb6bd3a34 in PyEval_CallObjectWithKeywords () from /usr/lib/libpython2.6.so.1.0
        #21 0xb6b44a7d in PyInstance_New () from /usr/lib/libpython2.6.so.1.0
        #22 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #23 0xb6bd7618 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0
        #24 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0
        #25 0xb6b61969 in ?? () from /usr/lib/libpython2.6.so.1.0
        #26 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #27 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0
        #28 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0
        #29 0xb6b61969 in ?? () from /usr/lib/libpython2.6.so.1.0
        #30 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #31 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0
        #32 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #33 0xb6b9b483 in ?? () from /usr/lib/libpython2.6.so.1.0
        #34 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #35 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0
        #36 0xb6bdab4f in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0
        #37 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0
        #38 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0
        #39 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #40 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0
        #41 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #42 0xb6b9b483 in ?? () from /usr/lib/libpython2.6.so.1.0
        #43 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0
        #44 0xb6bd3a34 in PyEval_CallObjectWithKeywords () from /usr/lib/libpython2.6.so.1.0
        #45 0xb6d7172d in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #46 0xb6d7539f in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #47 0xb6d7e1d8 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #48 0xb6d7a42c in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #49 0xb6d7a8bd in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #50 0xb6d7a9c5 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #51 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0
        #52 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
        #53 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6

        Thread 1 (Thread 0xb7460b00 (LWP 12697)):
        #0 0xb7f11422 in __kernel_vsyscall ()
        #1 0xb7e75300 in sigwait () from /lib/tls/i686/cmov/libpthread.so.0
        #2 0xb7ea3f3b in apr_signal_thread () from /usr/lib/libapr-1.so.0
        #3 0xb6d7b48d in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #4 0xb6d7bc98 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #5 0xb6d79632 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
        #6 0xb7e9a2c9 in apr_proc_other_child_alert () from /usr/lib/libapr-1.so.0
        #7 0x08092202 in ap_mpm_run ()
        #8 0x080673c8 in main ()
        #0 0x00000000 in ?? ()
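    One thing worth checking (a hedged aside, assuming the bsddb3 bindings that dbxml was built against are importable from the WSGI interpreter): print the Berkeley DB release the Python bindings were compiled with and compare it with the libdb-4.6 that ldd shows Apache linking, since loading two Berkeley DB versions into one process is a plausible cause of a crash like the one in the backtrace above.

        # Minimal check, e.g. dropped into the WSGI script or a manage.py shell:
        import bsddb3
        print bsddb3.db.version()           # e.g. (4, 8, 24) for a libdb-4.8 build
        print bsddb3.db.DB_VERSION_STRING   # human-readable version string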

    Read the article

  • Django: How do I go about changing my simple app to use Ajax?

    - by swisstony
    I currently have a web page where the user enters some data and then clicks a submit button. I process the data in views.py and then use the same Django template to return and display the original data and the results. What I would like to do is give it a bit more of a modern look and feel. You know the sort of thing: the page doesn't refresh but displays a spinning disk until the results are displayed. I assume this means using Ajax? How difficult is it to modify a simple app like this to use Ajax? What is involved? What are the best tools to use? jQuery?
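    A minimal sketch of the server-side half of such a change (process_data here stands in for whatever the existing view already does): return the results as JSON from a view, and have the page call it with jQuery's $.ajax or $.post, showing the spinner while the request is in flight and filling in the results when the response arrives.

        # views.py -- illustrative only; process_data() is the existing logic.
        import json
        from django.http import HttpResponse

        def calculate(request):
            """Run the existing processing but return JSON instead of a full page."""
            user_input = request.POST.get('data', '')
            results = process_data(user_input)      # hypothetical helper
            return HttpResponse(json.dumps({'results': results}),
                                content_type='application/json')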

    Read the article

  • How to read back and print text with newlines from a Python (Django) string with HTML?

    - by user1801486
    If someone types in a phrase such as:

        I see you driving round town
        with the girl I love, and I’m like:
        haiku.

    (no blank lines between each line, but the text is written on three separate lines) into a text box on a web page, then presses a button so the text is stored in a database via Django, and that string is later read back and printed on a page, how can I get it to print on an HTML page with the newlines still in the text? So instead of it being printed back as:

        I see you driving round town with the girl I love, and I’m like: haiku.

    it would print as:

        I see you driving round town
        with the girl I love, and I’m like:
        haiku.

    I know that if I use <textarea>soAndSo.body</textarea>, this preserves the newlines that were in the text when the user typed it up originally. How can I get this same effect, but without having to use textarea boxes?
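    Assuming the newlines are stored intact in the database, Django's built-in linebreaks filters can restore them at render time: in a template, {{ soAndSo.body|linebreaksbr }} (or |linebreaks for paragraph tags), or the equivalent in Python:

        # A sketch using Django's bundled helpers: escape the user text first,
        # then turn the stored newlines into <br /> tags for normal HTML output.
        from django.utils.html import escape
        from django.template.defaultfilters import linebreaksbr

        def as_html(stored_text):
            """Escape user input, then convert its newlines to <br /> tags."""
            return linebreaksbr(escape(stored_text))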

    Read the article

  • Installing Django/Python on IIS6

    - by Sohrab Hejazi
    We are currently installing the latest version of Django and Python on IIS6. We have followed the instructions on the following site: http://code.djangoproject.com/wiki/DjangoOnWindowsWithIISAndSQLServer. We are receiving a 403 error when trying to access our Django application via the IIS server. We have verified the Python installation on IIS6 and it is working properly. We have also verified the Django installation. Our application runs fine under the built-in Django server, but we are having difficulties getting it to run under IIS. We presume the errors could be coming from the "Linking Django to PyISAPIe" section of the instructions provided in the link above. Thanks.

    Read the article

  • How to create custom CSS "on the fly" based on account settings in a Django site?

    - by sdolan
    So I'm writing a Django-based website that allows users to select a color scheme through an administration interface. I already have middleware/context processors that link the current request (based on domain) to the account. My question is how to dynamically serve the CSS with the account's custom color scheme. I see two options:

    1. Add a CSS block to the base template that overrides the styles with variables passed in through a context processor.
    2. Use a custom URL (e.g. "/static/dynamic/css//styles.css") that gets routed to a view that grabs all the necessary values and creates the CSS file.

    I'm content with either option, but was wondering if anyone else out there has dealt with similar problems and could give some insight as to "best practices".
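    A minimal sketch of the second option, assuming the existing middleware attaches an account object (with some color fields) to the request; the URL, view, and field names below are illustrative, not part of the original question:

        # views.py -- serve a rendered CSS template per account.
        from django.shortcuts import render_to_response

        def account_css(request):
            """Fill a CSS template with this account's color scheme."""
            account = request.account                 # set by the existing middleware
            return render_to_response('dynamic_styles.css',
                                      {'colors': account.color_scheme},  # hypothetical field
                                      mimetype='text/css')

        # urls.py -- e.g. (r'^static/dynamic/css/styles\.css$', account_css)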

    Read the article

  • How to get the app a Django model is from?

    - by e-satis
    I have a model with a generic relation: TrackedItem --- generic relation ---> any model. I would like to be able to generically get, from the initial model instance, the tracked item that points to it, and I should be able to do it on any model without modifying it. To do that I need the content type and the object id. Getting the object id is easy since I have the model instance, but getting the content type is not: ContentType.objects.filter requires the model name (which is just content_object.__class__.__name__) and the app_label, and I have no reliable way of getting the app in which a model is defined. For now I do app = content_object.__module__.split(".")[0], but that doesn't work with Django contrib apps.
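    For what it's worth, two shortcuts avoid parsing __module__ altogether (a sketch, assuming django.contrib.contenttypes is installed and that TrackedItem uses the conventional content_type/object_id field pair): the instance's _meta already knows its app_label, and ContentType.objects.get_for_model() resolves the content type directly from the instance.

        from django.contrib.contenttypes.models import ContentType

        app_label = content_object._meta.app_label        # works for contrib apps too
        ct = ContentType.objects.get_for_model(content_object)

        tracked_item = TrackedItem.objects.get(content_type=ct,
                                               object_id=content_object.pk)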

    Read the article

  • How can I receive percent encoded slashes with Django on App Engine?

    - by J. Frankenstein
    I'm using Django with Google's App Engine. I want to send information to the server with percent-encoded slashes, so that a request like http://localhost/turtle/waxy%2Fsmooth would match against a URL pattern like r'^/turtle/(?P<type>([A-Za-z]|%2F)+)$'. The request gets to the server intact, but sometime before it is compared against the regex, the %2F is converted into a forward slash. What can I do to stop the %2Fs from being converted into forward slashes? Thanks!
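    One workaround sketch (not specific to App Engine, and the helper below is illustrative): double-encode the slash on the client, so that the one round of decoding that happens before URL resolution still leaves a literal %2F for the pattern to match, then unquote it yourself in the view.

        import urllib

        def turtle_url(turtle_type):
            """Build '/turtle/waxy%252Fsmooth' from 'waxy/smooth'."""
            once = urllib.quote(turtle_type, safe='')   # 'waxy%2Fsmooth'
            return '/turtle/' + urllib.quote(once, safe='')

        # and in the view that receives the captured value:
        #     turtle_type = urllib.unquote(captured)    # back to 'waxy/smooth'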

    Read the article

  • How to create an English language dictionary application with Python (Django)?

    - by sintaloo
    Hi all, I would like to create an online dictionary application using Python (or Django). It will be similar to http://dictionary.reference.com/. My questions are:

    1. Are there any existing open-source Python packages, modules, or applications which implement this functionality that I can use or study from?
    2. If the answer to the first question is no, which approach should I follow to create such a web application? Can I simply use the Python built-in dictionary object for this job, so that the dictionary object's key is the English word and the value is the explanation? Is this OK in terms of performance, or do I have to create my own tree structure to speed up the search, or is there an existing package which handles this job properly?

    Thank you very much.
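    On question (2), for a web application the usual approach is to keep the entries in the database behind Django's ORM with an index on the headword, rather than in an in-memory dict, so lookups stay fast and the data survives process restarts; a minimal sketch (model and field names are illustrative):

        # models.py
        from django.db import models

        class Entry(models.Model):
            word = models.CharField(max_length=100, unique=True, db_index=True)
            definition = models.TextField()

            def __unicode__(self):
                return self.word

        # lookups, e.g. in a view:
        #     Entry.objects.get(word__iexact='haiku')
        #     Entry.objects.filter(word__istartswith='hai')[:20]   # suggestions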

    Read the article

  • In Django, using __init__() method of non-abstract parent model to record class name of child model

    - by k-g-f
    In my Django project, I have a non-abstract parent model defined as follows:

        class Parent(models.Model):
            classType = models.CharField(editable=False, max_length=50)

    and, say, two children models defined as follows:

        class ChildA(Parent):
            parent = models.OneToOneField(Parent, parent_link=True)

        class ChildB(Parent):
            parent = models.OneToOneField(Parent, parent_link=True)

    Each time I create an instance of ChildA or of ChildB, I'd like the classType attribute to be set to the strings "ChildA" or "ChildB" respectively. What I have done is added an __init__() method to Parent as follows:

        class Parent(models.Model):
            classType = models.CharField(editable=False, max_length=50)

            def __init__(self, *args, **kwargs):
                super(Parent, self).__init__(*args, **kwargs)
                self.classType = self.__class__.__name__

    Is there a better way to implement and achieve my desired result? One downside of this implementation is that when I have an instance of the Parent, say "parent", and I want to get the type of the child object linked with "parent", calling "parent.classType" gives me "Parent". In order to get the appropriate "ChildA" or "ChildB" value, I need to write a "_getClassType()" method to wrap a custom SQL query.
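    A common alternative (a sketch, not necessarily better for every case): set the value in save() only when the row is first created. Because __init__() is left alone, loading an existing ChildA row through the Parent model no longer overwrites the stored child name in memory.

        from django.db import models

        class Parent(models.Model):
            classType = models.CharField(editable=False, max_length=50)

            def save(self, *args, **kwargs):
                if not self.pk:                              # only on the first save
                    self.classType = self.__class__.__name__
                super(Parent, self).save(*args, **kwargs)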

    Read the article

  • django - where to clean extra whitespace from form field inputs?

    - by Westerley
    I've just discovered that Django doesn't automatically strip extra whitespace from form field inputs, and I think I understand the rationale ('frameworks shouldn't be altering user input'). I think I know how to remove the excess whitespace using Python's re:

        # data = re.sub('\A\s+|\s+\Z', '', data)
        data = data.strip()
        data = re.sub('\s+', ' ', data)

    The question is: where should I do this? Presumably this should happen in one of the form's clean stages, but which one? Ideally, I would like to clean all my fields of extra whitespace. If it should be done in the clean_field() method, that would mean I would have to write a lot of clean_field() methods that basically do the same thing, which seems like a lot of repetition. If not the form's cleaning stages, then perhaps in the model that the form is based on? Thanks for your help! W.
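    One way to avoid writing a clean_<field>() per field (a sketch): do it once in the form's clean() method, or put that clean() in a small base class that all of the project's forms inherit from.

        import re
        from django import forms

        class WhitespaceStrippingForm(forms.Form):
            """Strips leading/trailing whitespace and collapses internal runs
            of whitespace in every string value during cleaning."""

            def clean(self):
                cleaned = super(WhitespaceStrippingForm, self).clean()
                for name, value in cleaned.items():
                    if isinstance(value, basestring):
                        cleaned[name] = re.sub(r'\s+', ' ', value).strip()
                return cleaned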

    Read the article

  • Google App Engine & Django Sandbox: Shell and Web seem to be using different datastores?

    - by tones
    I'm new to both Django and Google App Engine, and am using a sandbox on OS X 10.6 with the GoogleAppEngineLauncher. I've got a basic "bookstore" application running from the tutorial in the O'Reilly "Programming Google App Engine" book. Here's the bug: if I add a new object to the datastore through the web interface, it's readable through the web interface but does not appear to exist if I query the datastore through the shell. Vice versa: if I add an object in the shell, I can read it from the shell, but it doesn't appear in the web interface. Any thoughts or theories would be welcome. Thanks! =T=

    Read the article

  • Django Project Done and Working. Now What?

    - by Rodrogo
    Hi, I just finished what I would call a small Django project, and pretty soon it's going live. It's only 6 models, but a fairly complex view layer and a lot of record saving and retrieving. Setting aside the obviously huge number of bugs that will probably fill my inbox to the top, what would be the next step towards a website with the best performance? What could be tweaked? I've been using JMeter a lot recently and feel confident that I have a good baseline for future performance comparisons, but the thing is: I'm not sure what the best starting point is, since I'm a greedy bastard that wants to work the least possible and gather the best results. For instance, should I try an approach focused on infrastructure, like a distributed database, or should I go with the code itself, and in that case, is there something that specifically results in better performance? In your experience, what pays off more? Personal anecdotes are welcome, but some fact-based opinions are even more so. :) Thanks very much.
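    On the code side, per-view caching is often the cheapest first win before touching infrastructure; a sketch using Django's cache framework, assuming a cache backend is configured in settings (the model and template names below are illustrative):

        from django.views.decorators.cache import cache_page
        from django.shortcuts import render_to_response

        @cache_page(60 * 5)                        # cache the rendered page for 5 minutes
        def record_list(request):
            records = Record.objects.all()[:100]   # Record is a hypothetical model
            return render_to_response('records/list.html', {'records': records})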

    Read the article

  • Lessons from rewriting POP Forums for MVC, open source-like

    - by Jeff
    It has been a ton of work, interrupted over the last two years by unemployment, moving, a baby, failing to sell houses and other life events, but it's really exciting to see POP Forums v9 coming together. I'm not even sure when I decided to really commit to it as an open source project, but working on the same team as the CodePlex folks probably had something to do with it. Moving along the roadmap I set for myself, the app is now running on a quasi-production site... we launched MouseZoom last weekend. (That's a post-beta 1 build of the forum. There's also some nifty Silverlight DeepZoom goodness on that site.)

    I have to make a point to illustrate just how important starting over was for me. I started this forum thing for my sites in old ASP more than ten years ago. What a mess that stuff was, including SQL injection vulnerabilities and all kinds of crap. It went to ASP.NET in 2002, but even then, it felt a little too much like script. More than a year later, in 2003, I did an honest to goodness rewrite. If you've been in this business of writing code for any amount of time, you know how much you hate what you wrote a month ago, so just imagine that with seven years in between. The subsequent versions still carried a fair amount of crap, and that's why I had to start over, to make a clean break. Mind you, much of that crap is still running on some of my production sites in a stable manner, but it's a pain in the ass to maintain.

    So with that clean break, there is much that I have learned. These are a few of those lessons, in no particular order...

    Avoid shiny object syndrome

    Over the years, I've embraced new things without bothering to ask myself why. I remember spending the better part of a year trying to adapt this app to use the membership and profile API's in ASP.NET, just because they were there. They didn't solve any known problem. Early on in this version, I dabbled in exotic ORM's, even though I already had the fundamental SQL that I knew worked. I bloated up the client side code with all kinds of jQuery UI and plugins just because, and it got in the way. All the new shiny can be distracting, and I've come to realize that I've allowed it to be a distraction most of my professional life.

    Just query what you need

    I've spent a lot of time over-thinking how to query data. In the SQL world, this means exotic joins, special caches, the read-update-commit loop of ORM's, etc. There are times when you have to remind yourself that you aren't Facebook, you'll never be Facebook, and that databases are in fact intended to serve data. In a lot of projects, back in the day, I used to have these big, rich data objects and pass them all over the place, through various application tiers, when in reality, all I needed was some ID from the entity. I try to be mindful of how many queries hit the database on a given request, but I don't obsess over it. I just get what I need.

    Don't spend too much time worrying about your unit tests

    If you've looked at any of the tests for POP Forums, you might offer an audible WTF. That's OK. There's a whole lot of mocking going on. In some cases, it points out where you're doing too much, and that's good for improving your design. In other cases it shows where your design sucks. But the biggest trap of unit testing is that you worry it should be prettier. That's a waste of time. When you write a test, in many cases before the production code, the important part is that you're testing the right thing. If you have to mock up a bunch of stuff to test the outcome, so be it, but it's not wasted time. You're still doing up the typical arrange-action-assert deal, and you'll be able to read that later if you need to.

    Get back to your HTTP roots

    ASP.NET Webforms did a reasonably decent job at abstracting us away from the stateless nature of the Web. A lot of people criticize it, but I think it all worked pretty well. These days, with MVC, jQuery, REST services, and what not, we've gone back to thinking about the wire. The nuts and bolts passing between our Web browser and server matters. This doesn't make things harder, in my opinion, it makes them easier. There is something incredibly freeing about how we approach development of Web apps now. HTTP is a really simple protocol, and the stuff we push through it, in particular HTML and JSON, are pretty simple too. The debugging points are really easy to trap and trace.

    Premature optimization is premature

    I'll go back to the data thing for a moment. I've been known to look at a particular action or use case and stress about the number of calls that are made to the database. I'm not suggesting that it's a bad thing to keep these in mind, but if you worry about it outside of the context of the actual impact, you're wasting time. For example, I query the database for last read times in a forum separately of the user and the list of forums. The impact on performance barely exists. If I put it under load, exceeding the kind of load I expect, it still barely has an impact. Then consider it only counts for logged in users. The context of this "inefficient" action is that it doesn't matter. Did I mention I won't be Facebook?

    Solve your own problems first

    This is another trap I've fallen into. I've often thought about what other people might need for some feature or aspect of the app. In other words, I was willing to make design decisions based on non-existent data. How stupid is that? When I decided to truly open source this thing, building for myself first was a stated design goal. This app has to serve the audiences of CoasterBuzz, MouseZoom and other sites first. In this development scenario, you don't have access to mountains of usability studies or user focus groups. You have to start with what you know.

    I'm sure there are other points I could make too. It has been a lot of fun to work on, and I look forward to evolving the UI as time goes on. That's where I hope to see more magic in the future.

    Read the article

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. With FTP access I can upload PHP scripts, but can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also unfortunately a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL and disk speed performance through PHP on a generic server, what is the best way to do this? There are already tools like https://github.com/raphaelm/php-benchmark and https://github.com/InfinitySoft/php-benchmark, but I'm surprised there isn't something that someone has already set up and configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, back when there wasn't a handy place to post scraps of code like this. Originally posted at http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php, but it was recommended that I re-post it here.

    Read the article

  • Debian Wheezy (testing) df reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e9623

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *     2048   480278527  240138240  83  Linux
        /dev/sda2     480280574  488396799    4058113   5  Extended
        /dev/sda5     480280576  488396799    4058112  82  Linux swap / Solaris

    This seems correct. But df -h /dev/sda (and /dev/sda1, /dev/sda2 and /dev/sda5) returns:

        Filesystem      Size  Used Avail Use% Mounted on
        udev             10M     0   10M   0% /dev

    The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of two entries under /dev/disk/by-uuid returns the correct volume size:

        df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796
        Filesystem                                              Size  Used Avail Use% Mounted on
        /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796  229G   22G  196G  11% /

    Contents of /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        # / was on /dev/sda1 during installation
        UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796 /              ext4         errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1 none           swap         sw                0 0
        /dev/sr0                                  /media/cdrom0  udf,iso9660  user,noauto       0 0
        /dev/fd0                                  /media/floppy0 auto         rw,user,noauto    0 0

    It seems all references other than the UUID point to the swap partition. Is this because Wheezy is in testing, and should it be reported as an error?

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
    One of the initiatives I’m involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), which is a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product.  What this generally translates to is a day and a bit a week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio.  There is also a separate component of this effort outside of TTT, which is to help with customer scenarios and design. I enjoy being on TTT because it allows me the opportunity to look at the entire product and gain expertise in a wide range of areas.  This week, I’m looking at Visual Studio 2010 performance problems, and this gem with the keyboard in Visual Studio locking up ended up catching my attention.

    First of all, here’s a link to one of the many Connect bugs describing the problem: Microsoft Connect. I like this problem because it really highlights the challenges of reproducing customer bugs.  There aren’t any clear steps provided here, and I don’t know a lot about your environment: not just the basics like your OS version, but also what third-party plug-ins or antivirus software you might be running that might contribute to the problem.  In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports.  Here’s another thread where users talk about it: Microsoft Connect. The volume and different configurations are staggering.  From a customer perspective, this is a very clear-cut case of basic functionality not working in the product, but from our perspective, it’s hard to find something reproducible: even customers don’t quite agree on what causes the problem (installing ReSharper seems to cause a problem… or does it?).

    So this, then, is the start of a QA investigation. If anybody has isolated repro steps (just comment on this post) that they can provide, this will immensely help us nail down the issue(s), but I’ll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • Keyboard locking up in Visual Studio 2010, Part 2

    - by Jim Wang
    Last week I posted about looking into the keyboard locking up issue in Visual Studio.  So far it looks like not a lot of people have replied to provide concrete repro steps, which confirms my suspicion that this is somewhat of a random issue. So at this point, I have a couple of choices.  I can either wait for somebody in the community to provide a repro of the problem that I can reliably run into, or I can do the work myself. I’m going to do both, so while I’m waiting for more possible bug reports, I’m going to write a tool that models the behavior of a typical Visual Studio user and use that to hopefully isolate the problem. I’ve chosen to go with this path since given the information in the bug reports, it seems people hit the issue with many different configurations in many different scenarios.  This means that me sitting down without any solid repro steps is likely not going to be a good use of time.  Instead, I’m going to go with a model-based testing approach where I will define a series of actions that a user in VS can do, and then proceed to run my model.  I’ll let you guys know how this works out for isolating bugs :) I’m using an internal tool for the model engine and AutoIt for the UI automation (I want something lightweight for a one-off).  One of the challenges will be getting feedback: AutoIt is great at driving, but not so great at understanding what success and failure means.

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Install dependencies
    4. Run tests

    In a Dockerized workflow, it would be something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Build docker image
    4. Run docker image as container
    5. Run tests
    6. Kill docker container

    My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing a different kind of service (for example, a background service that only communicates with a database), a different approach would be required. What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation and do something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Build docker image
    4. Run docker image as container
    5. Open SSH session into container
    6. Run tests
    7. Kill docker container

    SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
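    One compromise that keeps the isolation without SSH (a sketch; the image name and test command are illustrative): make the test run be the container's command, so the container's exit status is the CI result, and anything the tests need (a database, for instance) is just another container the test container can reach.

        # ci_step.py -- drive the "build image, run tests inside the container" steps.
        import subprocess

        def run_tests(image_tag='myapp-ci'):
            subprocess.check_call(['docker', 'build', '-t', image_tag, '.'])
            # --rm removes the container afterwards; the test runner is the command,
            # so its exit code propagates as the build result.
            return subprocess.call(['docker', 'run', '--rm', image_tag,
                                    'python', '-m', 'pytest'])

        if __name__ == '__main__':
            raise SystemExit(run_tests())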

    Read the article

  • How to Be a Software Engineer?

    - by Mistrio
    My problem is kind of weird so please bear with me. I have been working in a start-up concerned basically with mobile development since my graduation 2 years ago. I develop apps for iOS, but it's not really relevant. The start-up structure is simply founders and developers, with no middle-tier technical supervision or project management whatsoever. A typical project cycle of ours is like this:

    1. Meet with a client
    2. Send very vague requirements to an outsourced graphics designer
    3. Dig into development right after we get the design, no questions asked
    4. Then improvise, improvise, improvise!

    It's not that we are unaware that stuff like requirements analysis, UML, design patterns, source code control, testing, development methodologies... etc. exist; we just simply don't use them, and I mean like never. The result is usually a clunk of hardly-maintainable yet working code. Despite everything we are literally flourishing, with many successful apps on all platforms and bigger clients each project. The thing is, we want to get out of the chaos and we're looking for advice. How would you fix our company technically? Given that you can't hire project managers or team leaders just because we are barely 5 developers, it wouldn't be a justified cost for the founders, but one-time things like courses, books, private training... etc. are an option. Lastly, if it's relevant, we are based in Egypt. Thank you a lot in advance.

    Read the article

  • Test a simple multi-player (upto four players) Android game in single developer machine

    - by Kush
    I'm working on a multi-player Android game (it's very simple, with no game engine used). The game is based on Java sockets. Four devices connect to the game server, and a new thread manages their session. The game server will serve many such sessions (having 4 players each). What I'm worried about is testing this game. I know it is possible to run multiple Android emulators, but my development laptop is very limited in capabilities (3 GB RAM, 2 GHz Intel Core2Duo and on-board graphics), and I'm already using Ubuntu to develop the game so that I have more user memory available than I'd have with Windows. Hence, the laptop will burn to death running 4 emulator instances. I don't have access to any Android device, nor do I have another machine with a higher configuration. And I still have to develop and test this game. P.S.: I'm a CS student, I currently don't work anywhere, and this game is a college project, so if there are any paid solutions, I cannot afford them. What can I do to test the app seamlessly? The ability to test with even only 4 clients (i.e. only 1 session) would suffice; it's alright if I can't simulate a real environment with some 10-20 active game sessions (having 4 players each).

    Read the article

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using unit test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world".

    Why: To write a decent program even for a toy/restricted problem in a new language, you'll have to master many heterogeneous concepts (control flow & variables & IO ...), and you are tempted to gloss over details just to get your program 'to work'. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking things through, clearness, and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from the 2nd chapter of your "Learning X" book into your chapter 1 program.

    Why not: 'Real' unit tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' unit tests should output relevant facts about the new language, like:

        perl6 10-string.t
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...

    So (mis?)using unit tests may be counterproductive - practicing actions while learning that you wouldn't use professionally.

    How: Writing 'didactic' unit tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort or may not be suitable for a person just starting with a new language.

    So:

    1. Is using unit tests as a learning tool a bad idea?
    2. Can the unit test tool(s) of your favourite language(s) be used didactically?
    3. Should implementation details (eventually) be discussed here or over at stackoverflow.com?
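    For comparison, here is the same kind of 'didactic' test in Python's bundled unittest, where each assertion records a fact about the language being learned and the method name doubles as the description:

        import unittest

        class StringLessons(unittest.TestCase):
            def test_strip_returns_a_copy(self):
                victim = "  cba  "
                self.assertEqual(victim.strip(), "cba")   # whitespace removed in the copy
                self.assertEqual(victim, "  cba  ")       # the original string is unchanged

        if __name__ == '__main__':
            unittest.main()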

    Read the article

  • How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors

    - by durron597
    I had a bug that was really difficult to track down, because all the unit tests were green but the production application didn't work properly. Here's what happened:

    - I had a filter class that set my application to ignore data that was not in some specified time windows.
    - The unit tests, which seemed thorough to me, turned green.
    - Additionally, my integration tests also produced results as expected.
    - Production, however, did not work.

    As a result of the first two bullets, this problem was very difficult to find. It turned out the problem was that my test dates were using my time zone (America/Chicago) but the production data was providing dates in UTC, which I did not realize, and the logic of the filter wasn't correct for UTC dates. (I was using Joda-Time DateTime objects.)

    Where did my workflow break down?

    - Did I fail to produce a spec that specified that the logic needed to handle dates in any time zone?
    - Did I fail to thoroughly consider all cases at the unit test level?
    - Did I fail to ensure the integration test was sufficiently similar to production?
    - Something else?

    What changes can I make to my workflow to better prevent this sort of mistake in the future? How can I more effectively debug a problem when there is an issue in production but not in testing?

    Read the article

  • When creating an library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV plus the server portion) and a project I think I can have much fun with, both in terms of making it and using it. However, I see a need to do mocking of types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would that be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this, only the unit-test libraries. What are your thoughts on this?

    As an example, I have in my old code-base, which is private, the ability to initiate a WebDAV server with just this:

        var server = new WebDAVServer();

    This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, that object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.

    Read the article
