Search Results

Search found 26190 results on 1048 pages for 'py test'.


  • How to setup and teardown temporary django db for unit testing?

    - by blokeley
    I would like to have a python module containing some unit tests that I can pass to hg bisect --command. The unit tests are testing some functionality of a django app, but I don't think I can use hg bisect --command manage.py test mytestapp because mytestapp would have to be enabled in settings.py, and the edits to settings.py would be clobbered when hg bisect updates the working directory. Therefore, I would like to know if something like the following is the best way to go:

        import functools, os, sys, unittest

        sys.path.append(path_to_myproject)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'

        def with_test_db(func):
            """Decorator to setup and teardown test db."""
            @functools.wraps
            def wrapper(*args, **kwargs):
                try:
                    # Set up temporary django db
                    func(*args, **kwargs)
                finally:
                    # Tear down temporary django db

        class TestCase(unittest.TestCase):
            @with_test_db
            def test(self):
                # Do some tests using the temporary django db
                self.fail('Mark this revision as bad.')

        if '__main__' == __name__:
            unittest.main()

    I should be most grateful if you could advise either: if there is a simpler way, perhaps subclassing django.test.TestCase but not editing settings.py; or, if not, what the lines above that say "Set up temporary django db" and "Tear down temporary django db" should be.
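
    One possible shape for those two comments, offered only as a sketch against Django 1.1/1.2-era APIs (the settings module name, the verbosity values and the single default database are assumptions, not details from the question), is to drive django.test.utils and connection.creation directly:

        # Sketch only, not the asker's code. Assumes one default database and a
        # settings module importable as 'myapp.settings'.
        import functools
        import os

        os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'

        from django.conf import settings
        from django.db import connection
        from django.test.utils import setup_test_environment, teardown_test_environment

        def with_test_db(func):
            """Run func against a freshly created, then destroyed, Django test db."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                setup_test_environment()
                old_name = settings.DATABASE_NAME  # DATABASES['default']['NAME'] on 1.2+
                connection.creation.create_test_db(verbosity=0)
                try:
                    return func(*args, **kwargs)
                finally:
                    connection.creation.destroy_test_db(old_name, verbosity=0)
                    teardown_test_environment()
            return wrapper

    Whichever body is used, note that the decorator also needs functools.wraps(func) called with the wrapped function and a final return wrapper, both of which the snippet in the question omits and the sketch above includes.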

    Read the article

  • Celery / Django Single Tasks being run multiple times

    - by felix001
    I'm facing an issue where I place a task into the queue and it is run multiple times. From the Celery logs I can see that the same worker is running the task:

        [2014-06-06 15:12:20,731: INFO/MainProcess] Received task: input.tasks.add_queue
        [2014-06-06 15:12:20,750: INFO/Worker-2] starting runner..
        [2014-06-06 15:12:20,759: INFO/Worker-2] collection started
        [2014-06-06 15:13:32,828: INFO/Worker-2] collection complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] generation of steps complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] update created
        [2014-06-06 15:13:33,655: INFO/Worker-2] email sent
        [2014-06-06 15:13:33,656: INFO/Worker-2] update created
        [2014-06-06 15:13:34,420: INFO/Worker-2] email sent
        [2014-06-06 15:13:34,421: INFO/Worker-2] FINISH - Success

    However, when I view the actual logs of the application it shows 5-6 log lines for each step (??). I'm using Django 1.6 with RabbitMQ. Tasks are placed on the queue by calling delay() on a function that carries the task decorator; that function then calls a class which does the work. Has anyone any idea of the best way to troubleshoot this?

    Edit: as requested, here's the code.

    views.py - in my view I'm sending my data to the queue via:

        from input.tasks import add_queue_project

        add_queue_project.delay(data)

    tasks.py:

        from celery.decorators import task

        @task()
        def add_queue_project(data):
            """ run project """
            logger = logging_setup(app="project")
            logger.info("starting project runner..")
            f = project_runner(data)
            f.main()

        class project_runner():
            """ main project runner """

            def __init__(self, data):
                self.data = data
                self.logger = logging_setup(app="project")

            def main(self):
                .... Code

    settings.py:

        THIRD_PARTY_APPS = (
            'south',          # Database migration helpers:
            'crispy_forms',   # Form layouts
            'rest_framework',
            'djcelery',
        )

        import djcelery
        djcelery.setup_loader()

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672  # default RabbitMQ listening port
        BROKER_USER = "test"
        BROKER_PASSWORD = "test"
        BROKER_VHOST = "test"
        CELERY_BACKEND = "amqp"  # telling Celery to report the results back to RabbitMQ
        CELERY_RESULT_DBURI = ""
        CELERY_IMPORTS = ("input.tasks", )

    celeryd - the line I'm running to start Celery is:

        python2.7 manage.py celeryd -l info

    Thanks,
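
    Duplicate runs like this are often caused by more than one worker process consuming the same queue (for example a forgotten celeryd left over from an earlier deploy), so checking the process list is a reasonable first step. Independently of that, a cache-based lock can make the task tolerant of duplicate deliveries. The sketch below follows the "only run one instance at a time" pattern from the Celery documentation; the lock key name and the timeout are assumptions, not values from the question:

        # Sketch: skip the work if another worker already holds the lock for this payload.
        # Assumes Django's cache framework is configured; project_runner is the class
        # defined in tasks.py above.
        from celery.decorators import task
        from django.core.cache import cache

        LOCK_EXPIRE = 60 * 10  # assumed upper bound, in seconds, for one full run

        @task()
        def add_queue_project(data):
            lock_id = "add_queue_project-lock-%s" % hash(str(data))
            # cache.add is atomic: it returns False if the key already exists.
            if not cache.add(lock_id, "locked", LOCK_EXPIRE):
                return  # another worker already picked this payload up
            try:
                project_runner(data).main()
            finally:
                cache.delete(lock_id)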

    Read the article

  • How to execute python script on the BaseHTTPSERVER created by python?

    - by user1731699
    I have simply created a Python server with:

        python -m SimpleHTTPServer

    I had a .htaccess (I don't know if it is useful with a Python server) with:

        AddHandler cgi-script .py
        Options +ExecCGI

    Now I am writing a simple Python script:

        #!/usr/bin/python
        import cgitb
        cgitb.enable()

        print 'Content-type: text/html'
        print '''
        <html>
        <head>
        <title>My website</title>
        </head>
        <body>
        <p>Here I am</p>
        </body>
        </html>
        '''

    I made test.py (the name of my script) executable with:

        chmod +x test.py

    I am opening it in Firefox at this address: (http : //) 0.0.0.0:8000/test.py. Problem: the script is not executed... I see the code in the web page... And the server error is:

        localhost - - [25/Oct/2012 10:47:12] "GET / HTTP/1.1" 200 -
        localhost - - [25/Oct/2012 10:47:13] code 404, message File not found
        localhost - - [25/Oct/2012 10:47:13] "GET /favicon.ico HTTP/1.1" 404 -

    How can I manage the execution of Python code simply? Is it possible to write a Python server that executes the Python script, with something like this:

        import BaseHTTPServer
        import CGIHTTPServer

        httpd = BaseHTTPServer.HTTPServer(
            ('localhost', 8123),
            CGIHTTPServer.CGIHTTPRequestHandler)
        ###  here some code to say, hey please execute python script on the webserver... ;-)
        httpd.serve_forever()

    Or something else...
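
    A sketch of the usual standard-library answer (Python 2): CGIHTTPRequestHandler only executes scripts that live under its cgi_directories, so the script has to sit in a directory such as ./cgi-bin and be requested under that path; the port and directory layout below are assumptions. The .htaccess file has no effect here, because AddHandler/ExecCGI are Apache directives, and plain SimpleHTTPServer never executes anything, which is why the source was shown verbatim.

        # serve.py - minimal CGI-capable server, standard library only (Python 2).
        # Assumes the executable script is saved as ./cgi-bin/test.py next to this file.
        import BaseHTTPServer
        import CGIHTTPServer

        handler = CGIHTTPServer.CGIHTTPRequestHandler
        handler.cgi_directories = ['/cgi-bin']  # only scripts under these paths run as CGI

        httpd = BaseHTTPServer.HTTPServer(('localhost', 8123), handler)
        print 'Serving on http://localhost:8123/ - try /cgi-bin/test.py'
        httpd.serve_forever()

    Alternatively, running python -m CGIHTTPServer 8000 from the directory that contains cgi-bin/ gives the same behaviour straight from the command line.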

    Read the article

  • How to debug a Python "del self.callbacks[s][cid]" KeyError when the error message does not indicate where in my code the error is

    - by lkloh
    In a python program I am writing, I get an error saying:

        Traceback (most recent call last):
          File "/Applications/Canopy.app/appdata/canopy-1.4.0.1938.macosx-x86_64/Canopy.app/Contents/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
            return self.func(*args)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 413, in button_release_event
            FigureCanvasBase.button_release_event(self, x, y, num, guiEvent=event)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 1808, in button_release_event
            self.callbacks.process(s, event)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/cbook.py", line 525, in process
            del self.callbacks[s][cid]
        KeyError: 103

    Do you have any idea how I can debug this / what could be wrong? The error message does not point to anywhere in code I have personally written. I get the error message only after I close my GUI window, but I want to fix it even though it does not break the functionality of my code. The error is part of a very big program I am writing, so I cannot post all my code, but below is code I think is relevant:

        def save(self, event):
            self.getSaveAxes()
            self.save_connect()

        def getSaveAxes(self):
            saveFigure = figure(figsize=(8,1))
            saveFigure.clf()

            # size of save buttons
            rect_saveHeaders = [0.04,0.2,0.2,0.6]
            rect_saveHeadersFilterParams = [0.28,0.2,0.2,0.6]
            rect_saveHeadersOverride = [0.52,0.2,0.2,0.6]
            rect_saveQuit = [0.76,0.2,0.2,0.6]

            #initalize axes
            saveAxs = {}
            saveAxs['saveHeaders'] = saveFigure.add_axes(rect_saveHeaders)
            saveAxs['saveHeadersFilterParams'] = saveFigure.add_axes(rect_saveHeadersFilterParams)
            saveAxs['saveHeadersOverride'] = saveFigure.add_axes(rect_saveHeadersOverride)
            saveAxs['saveQuit'] = saveFigure.add_axes(rect_saveQuit)

            self.saveAxs = saveAxs
            self.save_connect()
            self.saveFigure = saveFigure
            show()

        def save_connect(self):
            #set buttons
            self.bn_saveHeaders = Button(self.saveAxs['saveHeaders'], 'Save\nHeaders\nOnly')
            self.bn_saveHeadersFilterParams = Button(self.saveAxs['saveHeadersFilterParams'], 'Save Headers &\n Filter Parameters')
            self.bn_saveHeadersOverride = Button(self.saveAxs['saveHeadersOverride'], 'Save Headers &\nOverride Data')
            self.bn_saveQuit = Button(self.saveAxs['saveQuit'], 'Quit')

            #connect buttons to functions they trigger
            self.cid_saveHeaders = self.bn_saveHeaders.on_clicked(self.save_headers)
            self.cid_savedHeadersFilterParams = self.bn_saveHeadersFilterParams.on_clicked(self.save_headers_filterParams)
            self.cid_saveHeadersOverride = self.bn_saveHeadersOverride.on_clicked(self.save_headers_override)
            self.cid_saveQuit = self.bn_saveQuit.on_clicked(self.save_quit)

        def save_quit(self, event):
            self.save_disconnect()
            close()
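
    This is speculation rather than a diagnosis, but one suspect visible in the snippet itself is that save() calls getSaveAxes(), which already calls save_connect(), and then calls save_connect() again, so every button ends up with two registered callbacks while only one connection id per button is remembered; a save_disconnect() that removes the remembered cid would then leave the duplicate behind when the window is closed. A sketch of a symmetric wiring, using the real Button.on_clicked()/disconnect() API and the question's own attribute names, might look like this:

        # Sketch: register each callback once and disconnect exactly what was registered.
        # These are drop-in method bodies for the class shown in the question.
        def save(self, event):
            self.getSaveAxes()  # getSaveAxes() already calls save_connect()

        def save_disconnect(self):
            self.bn_saveHeaders.disconnect(self.cid_saveHeaders)
            self.bn_saveHeadersFilterParams.disconnect(self.cid_savedHeadersFilterParams)
            self.bn_saveHeadersOverride.disconnect(self.cid_saveHeadersOverride)
            self.bn_saveQuit.disconnect(self.cid_saveQuit)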

    Read the article

  • Need help with Django tutorial

    - by Nai
    I'm doing the Django tutorial here: http://docs.djangoproject.com/en/1.2/intro/tutorial03/

    My TEMPLATE_DIRS in settings.py looks like this:

        TEMPLATE_DIRS = (
            "/webapp2/templates/"
            "/webapp2/templates/polls"
            # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
            # Always use forward slashes, even on Windows.
            # Don't forget to use absolute paths, not relative paths.
        )

    My urls.py looks like this:

        from django.conf.urls.defaults import *
        from django.contrib import admin

        admin.autodiscover()

        urlpatterns = patterns('',
            (r'^polls/$', 'polls.views.index'),
            (r'^polls/(?P<poll_id>\d+)/$', 'polls.views.detail'),
            (r'^polls/(?P<poll_id>\d+)/results/$', 'polls.views.results'),
            (r'^polls/(?P<poll_id>\d+)/vote/$', 'polls.views.vote'),
            (r'^admin/', include(admin.site.urls)),
        )

    My views.py looks like this:

        from django.template import Context, loader
        from polls.models import Poll
        from django.http import HttpResponse

        def index(request):
            latest_poll_list = Poll.objects.all().order_by('-pub_date')[:5]
            t = loader.get_template('c:/webapp2/templates/polls/index.html')
            c = Context({
                'latest_poll_list': latest_poll_list,
            })
            return HttpResponse(t.render(c))

    I think I am getting the path of my template wrong, because when I simplify the views.py code to something like this, I am able to load the page:

        from django.http import HttpResponse

        def index(request):
            return HttpResponse("Hello, world. You're at the poll index.")

    My index template file is located at C:/webapp2/templates/polls/index.html. What am I doing wrong?
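
    For comparison, a sketch of what the tutorial expects (assuming the templates really live under C:/webapp2/templates): each entry in TEMPLATE_DIRS needs its own trailing comma, since without one Python silently concatenates the two adjacent string literals into a single bogus path, and loader.get_template() takes a name relative to a TEMPLATE_DIRS entry rather than an absolute filesystem path:

        # settings.py - note the trailing comma on each entry.
        TEMPLATE_DIRS = (
            "C:/webapp2/templates",   # assumed location; polls/index.html is found beneath it
        )

        # views.py - the template name is relative to a TEMPLATE_DIRS entry.
        from django.template import Context, loader
        from polls.models import Poll
        from django.http import HttpResponse

        def index(request):
            latest_poll_list = Poll.objects.all().order_by('-pub_date')[:5]
            t = loader.get_template('polls/index.html')
            c = Context({'latest_poll_list': latest_poll_list})
            return HttpResponse(t.render(c))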

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) have Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Login as root, and run System tap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4. 
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
The new probes can be seen by logging in as root and running:

    # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i'
    process("sapi/cli/php").provider("php").mark("oci8__connect")
    process("sapi/cli/php").provider("php").mark("oci8__nls_done")
    process("sapi/cli/php").provider("php").mark("oci8__nls_start")

To test them out, create a new trace file, oci.stp:

    global numconnects;
    global start;
    global numcharlookups = 0;
    global tottime = 0;

    probe process.provider("php").mark("oci8-connect") {
        printf("Connected as %s\n", user_string($arg1));
        numconnects += 1;
    }

    probe process.provider("php").mark("oci8-nls_start") {
        start = gettimeofday_us();
        numcharlookups++;
    }

    probe process.provider("php").mark("oci8-nls_done") {
        tottime += gettimeofday_us() - start;
    }

    probe end {
        printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups);
        printf("Total NLS charset initialization time: %ld usecs/connect\n",
               (numcharlookups > 0 ? tottime/numcharlookups : 0));
    }

This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Log in as root and run Systemtap over the PHP script:

    # cd ~cjones/php-src
    # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp
    Connected as cj
    <?xml version="1.0"?>
    <ROWSET>
     <ROW>
      <DUMMY>X</DUMMY>
     </ROW>
    </ROWSET>
    Connects: 1, Charset lookups: 1
    Total NLS charset initialization time: 164 usecs/connect

This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • How can I resolve Hibernate 3's ConstraintViolationException when updating a Persistent Entity's Col

    - by Tim Visher
    I'm trying to discover why two nearly identical class sets are behaving different from Hibernate 3's perspective. I'm fairly new to Hibernate in general and I'm hoping I'm missing something fairly obvious about the mappings or timing issues or something along those lines but I spent the whole day yesterday staring at the two sets and any differences that would lead to one being able to be persisted and the other not completely escaped me. I appologize in advance for the length of this question but it all hinges around some pretty specific implementation details. I have the following class mapped with Annotations and managed by Hibernate 3.? (if the specific specific version turns out to be pertinent, I'll figure out what it is). Java version is 1.6. ... @Embeddable public class JobStateChange implements Comparable<JobStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = JobState.FIELD_LENGTH) private JobState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; public JobStateChange() { } @Override public int compareTo(final JobStateChange o) { return this.date.compareTo(o.date); } @Override public boolean equals(final Object obj) { if (this == obj) { return true; } else if (!(obj instanceof JobStateChange)) { return false; } JobStateChange candidate = (JobStateChange) obj; return this.state == candidate.state && this.actingUser.equals(candidate.getUser()) && this.date.equals(candidate.getDate()); } @Override public int hashCode() { return this.state.hashCode() + this.actingUser.hashCode() + this.date.hashCode(); } } It is mapped as a Hibernate CollectionOfElements in the class Job as follows: ... @Entity @Table( name = "job", uniqueConstraints = { @UniqueConstraint( columnNames = { "agency", //Job Name "payment_type", //Job Name "payment_file", //Job Name "date_of_payment", "payment_control_number", "truck_number" }) }) public class Job implements Serializable { private static final long serialVersionUID = -1131729422634638834L; ... @org.hibernate.annotations.CollectionOfElements @JoinTable(name = "job_state", joinColumns = @JoinColumn(name = "job_id")) @Sort(type = SortType.NATURAL) private final SortedSet<JobStateChange> stateChanges = new TreeSet<JobStateChange>(); ... public void advanceState( final User actor, final Date date) { JobState nextState; LOGGER.debug("Current state of {} is {}.", this, this.getCurrentState()); if (null == this.currentState) { nextState = JobState.BEGINNING; } else { if (!this.isAdvanceable()) { throw new IllegalAdvancementException(this.currentState.illegalAdvancementStateMessage); } if (this.currentState.isDivergent()) { nextState = this.currentState.getNextState(this); } else { nextState = this.currentState.getNextState(); } } JobStateChange stateChange = new JobStateChange(nextState, actor, date); this.setCurrentState(stateChange.getState()); this.stateChanges.add(stateChange); LOGGER.debug("Advanced {} to {}", this, this.getCurrentState()); } private void setCurrentState(final JobState jobState) { this.currentState = jobState; } boolean isAdvanceable() { return this.getCurrentState().isAdvanceable(this); } ... 
@Override public boolean equals(final Object obj) { if (obj == this) { return true; } else if (!(obj instanceof Job)) { return false; } Job otherJob = (Job) obj; return this.getName().equals(otherJob.getName()) && this.getDateOfPayment().equals(otherJob.getDateOfPayment()) && this.getPaymentControlNumber().equals(otherJob.getPaymentControlNumber()) && this.getTruckNumber().equals(otherJob.getTruckNumber()); } @Override public int hashCode() { return this.getName().hashCode() + this.getDateOfPayment().hashCode() + this.getPaymentControlNumber().hashCode() + this.getTruckNumber().hashCode(); } ... } The purpose of JobStateChange is to record when the Job moves through a series of State Changes that are outline in JobState as enums which know about advancement and decrement rules. The interface used to advance Jobs through a series of states is to call Job.advanceState() with a Date and a User. If the Job is advanceable according to rules coded in the enum, then a new StateChange is added to the SortedSet and everyone's happy. If not, an IllegalAdvancementException is thrown. The DDL this generates is as follows: ... drop table job; drop table job_state; ... create table job ( id bigint generated by default as identity, current_state varchar(25), date_of_payment date not null, beginningCheckNumber varchar(8) not null, item_count integer, agency varchar(10) not null, payment_file varchar(25) not null, payment_type varchar(25) not null, endingCheckNumber varchar(8) not null, payment_control_number varchar(4) not null, truck_number varchar(255) not null, wrapping_system_type varchar(15) not null, printer_id bigint, primary key (id), unique (agency, payment_type, payment_file, date_of_payment, payment_control_number, truck_number) ); create table job_state ( job_id bigint not null, acting_user_id bigint not null, date timestamp not null, state varchar(25) not null, primary key (job_id, acting_user_id, date, state) ); ... alter table job add constraint FK19BBD12FB9D70 foreign key (printer_id) references printer; alter table job_state add constraint FK57C2418FED1F0D21 foreign key (acting_user_id) references app_user; alter table job_state add constraint FK57C2418FABE090B3 foreign key (job_id) references job; ... The database is seeded with the following data prior to running tests ... insert into job (id, agency, payment_type, payment_file, payment_control_number, date_of_payment, beginningCheckNumber, endingCheckNumber, item_count, current_state, printer_id, wrapping_system_type, truck_number) values (-3, 'RRB', 'Monthly', 'Monthly','4501','1998-12-01 08:31:16' , '00000001','00040000', 40000, 'UNASSIGNED', null, 'KERN', '02'); insert into job_state (job_id, acting_user_id, date, state) values (-3, -1, '1998-11-30 08:31:17', 'UNASSIGNED'); ... After the database schema is automatically generated and rebuilt by the Hibernate tool. The following test runs fine up until the call to Session.flush() ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class JobDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private JobDao jobDao; @Autowired private SessionFactory sessionFactory; @Autowired private UserService userService; @Autowired private PrinterService printerService; ... 
@Test public void saveJob_JobAdvancedToAssigned_AllExpectedStateChanges() { //Get an unassigned Job Job job = this.jobDao.getJob(-3L); assertEquals(JobState.UNASSIGNED, job.getCurrentState()); Date advancedToUnassigned = new GregorianCalendar(1998, 10, 30, 8, 31, 17).getTime(); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); //Satisfy advancement constraints and advance job.setPrinter(this.printerService.getPrinter(-1L)); Date advancedToAssigned = new Date(); job.advanceState( this.userService.getUserByUsername("admin"), advancedToAssigned); assertEquals(JobState.ASSIGNED, job.getCurrentState()); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); assertEquals(advancedToAssigned, job.getStateChange(JobState.ASSIGNED).getDate()); //Persist to DB this.sessionFactory.getCurrentSession().flush(); ... } ... } The error thrown is SQLCODE=-803, SQLSTATE=23505: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] org.hibernate.exception.ConstraintViolationException: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1416) at org.hibernate.action.CollectionUpdateAction.execute(CollectionUpdateAction.java:86) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:170) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027) at jaci.dao.JobDaoIntegrationTest.saveJob_JobAdvancedToAssigned_AllExpectedStateChanges(JobDaoIntegrationTest.java:98) at org.springframework.test.context.junit4.SpringTestMethod.invoke(SpringTestMethod.java:160) at org.springframework.test.context.junit4.SpringMethodRoadie.runTestMethod(SpringMethodRoadie.java:233) at org.springframework.test.context.junit4.SpringMethodRoadie$RunBeforesThenTestThenAfters.run(SpringMethodRoadie.java:333) at org.springframework.test.context.junit4.SpringMethodRoadie.runWithRepetitions(SpringMethodRoadie.java:217) at org.springframework.test.context.junit4.SpringMethodRoadie.runTest(SpringMethodRoadie.java:197) at org.springframework.test.context.junit4.SpringMethodRoadie.run(SpringMethodRoadie.java:143) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:160) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:97) Caused by: com.ibm.db2.jcc.b.lm: DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, SQLERRMC=1;ACI_APP.JOB_STATE, DRIVER=3.50.152 at com.ibm.db2.jcc.b.wc.a(wc.java:575) at com.ibm.db2.jcc.b.wc.a(wc.java:57) at com.ibm.db2.jcc.b.wc.a(wc.java:126) at com.ibm.db2.jcc.b.tk.b(tk.java:1593) at com.ibm.db2.jcc.b.tk.c(tk.java:1576) at com.ibm.db2.jcc.t4.db.k(db.java:353) at com.ibm.db2.jcc.t4.db.a(db.java:59) at com.ibm.db2.jcc.t4.t.a(t.java:50) at com.ibm.db2.jcc.t4.tb.b(tb.java:200) at com.ibm.db2.jcc.b.uk.Gb(uk.java:2355) at com.ibm.db2.jcc.b.uk.e(uk.java:3129) at com.ibm.db2.jcc.b.uk.zb(uk.java:568) 
at com.ibm.db2.jcc.b.uk.executeUpdate(uk.java:551) at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:46) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1389) Therein lies my problem… A nearly identical Class set (in fact, so identical that I've been chomping at the bit to make it a single class that serves both business entities) runs absolutely fine. It is identical except for name. Instead of Job it's Web. Instead of JobStateChange it's WebStateChange. Instead of JobState it's WebState. Both Job and Web's SortedSet of StateChanges are mapped as a Hibernate CollectionOfElements. Both are @Embeddable. Both are SortType.Natural. Both are backed by an Enumeration with some advancement rules in it. And yet when a nearly identical test is run for Web, no issue is discovered and the data flushes fine. For the sake of brevity I won't include all of the Web classes here, but I will include the test and if anyone wants to see the actual sources, I'll include them (just leave a comment). The data seed: insert into web (id, stock_type, pallet, pallet_id, date_received, first_icn, last_icn, shipment_id, current_state) values (-1, 'PF', '0011', 'A', '2008-12-31 08:30:02', '000000001', '000080000', -1, 'UNSTAGED'); insert into web_state (web_id, date, state, acting_user_id) values (-1, '2008-12-31 08:30:03', 'UNSTAGED', -1); The test: ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class WebDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private WebDao webDao; @Autowired private UserService userService; @Autowired private SessionFactory sessionFactory; ... @Test public void saveWeb_WebAdvancedToNewState_AllExpectedStateChanges() { Web web = this.webDao.getWeb(-1L); Date advancedToUnstaged = new GregorianCalendar(2008, 11, 31, 8, 30, 3).getTime(); assertEquals(WebState.UNSTAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); Date advancedToStaged = new Date(); web.advanceState( this.userService.getUserByUsername("admin"), advancedToStaged); this.sessionFactory.getCurrentSession().flush(); web = this.webDao.getWeb(web.getId()); assertEquals( "Web should have moved to STAGED State.", WebState.STAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); assertEquals(advancedToStaged, web.getState(WebState.STAGED).getDate()); assertNotNull(web.getState(WebState.UNSTAGED)); assertNotNull(web.getState(WebState.STAGED)); } ... } As you can see, I assert that the Web was reconstituted the way I expect, I advance it, flush it to the DB, and then re-get it and verify that the states are as I expect. Everything works perfectly. Not so with Job. A possibly pertinent detail: the reconstitution code works fine if I cease to map JobStateChange.data as a TIMESTAMP and instead as a DATE, and ensure that all of the StateChanges always occur on different Dates. The problem is that this particular business entity can go through many state changes in a single day and so it needs to be sorted by time stamp rather than by date. If I don't do this then I can't sort the StateChanges correctly. That being said, WebStateChange.date is also mapped as a TIMESTAMP and so I again remain absolutely befuddled as to where this error is arising from. 
I tried to do a fairly thorough job of giving all of the technical details of the implementation but as this particular question is very implementation specific, if I missed anything just let me know in the comments and I'll include it. Thanks so much for your help! UPDATE: Since it turns out to be important to the solution of my problem, I have to include the pertinent bits of the WebStateChange class as well. ... @Embeddable public class WebStateChange implements Comparable<WebStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = WebState.FIELD_LENGTH) private WebState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; ... WebStateChange( final WebState state, final User actingUser, final Date date) { ExceptionUtils.illegalNullArgs(state, actingUser, date); this.state = state; this.actingUser = actingUser; this.date = new Date(date.getTime()); } @Override public int compareTo(final WebStateChange otherStateChange) { return this.date.compareTo(otherStateChange.date); } @Override public boolean equals(final Object candidate) { if (this == candidate) { return true; } else if (!(candidate instanceof WebStateChange)) { return false; } WebStateChange candidateWebState = (WebStateChange) candidate; return this.getState() == candidateWebState.getState() && this.getUser().equals(candidateWebState.getUser()) && this.getDate().equals(candidateWebState.getDate()); } @Override public int hashCode() { return this.getState().hashCode() + this.getUser().hashCode() + this.getDate().hashCode(); } ... }

    Read the article

  • Hard Disk DRDY error: is it a crash

    - by pranjal
    I am using IBM Thinkpad, 1.7GHz, 512 RAM with Linux Mint 9 installed. I have two partitions in addition to root. One of the partitions became read-only yesterday, after which I rebooted my system. It is extremely slow along with DRDY Error : Is my Hard disk crashed ? Error Log while booting. Differences between boot sector and its backup. failed command : READ DMA BMDMA : stat 0X25 ata 1.00 : status : { DRDY ERR } ata 1.00 : status :{ UNC } Buffer I/O error on logical device, logical block 65467 smartctl output for the partition: mint mint # smartctl -a /dev/sda1 smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: TOSHIBA MK4026GAX RoHS Serial Number: X5LY1623T Firmware Version: PA107E User Capacity: 40,007,761,920 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 6 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Feb 17 06:48:25 2011 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 153) seconds. Offline data collection capabilities: (0x1b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. No Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 30) minutes. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0 3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 310 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 3968 5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 40 7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 082 082 000 Old_age Always - 7257 10 Spin_Retry_Count 0x0033 179 100 030 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 3484 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 489 193 Load_Cycle_Count 0x0032 064 064 000 Old_age Always - 367150 194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 36 (Lifetime Min/Max 14/57) 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 33 197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 82 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 1 199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0 220 Disk_Shift 0x0002 100 100 000 Old_age Always - 101 222 Loaded_Hours 0x0032 085 085 000 Old_age Always - 6146 223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0 224 Load_Friction 0x0022 100 100 000 Old_age Always - 0 226 Load-in_Time 0x0026 100 100 000 Old_age Always - 227 240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0 SMART Error Log Version: 1 ATA Error Count: 2371 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 2371 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:10.061 READ DMA f8 00 00 00 00 00 e0 00 00:03:10.061 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:10.053 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:10.053 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:10.053 READ NATIVE MAX ADDRESS Error 2370 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:03.328 READ DMA f8 00 00 00 00 00 e0 00 00:03:03.327 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:03.320 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:03.319 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:03.319 READ NATIVE MAX ADDRESS Error 2369 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:56.582 READ DMA f8 00 00 00 00 00 e0 00 00:02:56.582 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:56.574 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:56.574 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:56.574 READ NATIVE MAX ADDRESS Error 2368 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:49.809 READ DMA f8 00 00 00 00 00 e0 00 00:02:49.809 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:49.801 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:49.801 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:49.801 READ NATIVE MAX ADDRESS Error 2367 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:43.056 READ DMA f8 00 00 00 00 00 e0 00 00:02:43.056 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:43.048 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:43.048 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:43.047 READ NATIVE MAX ADDRESS SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] Device does not support Selective Self Tests/Logging Do I need to get a new Hard Disk my PC ?

    Read the article

  • Is this the right use case for the exec method of child_process? Is there a way to copy the environment, along with the require'd modules, too?

    - by L2L2L
    I'm learning node. I am using child_process to move data to another script to be executed. But it seem that it does not copy the hold environment or I could be doing something wrong. To copy the hold environment --require modules too-- or is this when I use spawn, I'm not so clear or understanding spawn exec and execfile --although execfile is like what I'm doing at the bottom, but with exec... right?-- And I would just love to have some clarity on this matter. Please anyone? Thank you. parent.js - "use strict"; var fs, path, _err; fs = require("fs"), path = require("path"), _err = require("./err.js"); var url; url= process.argv[1]; var dirname, locate_r; dirname = path.dirname(url); locate_r = dirname + "/" + "test.json";//path.join(dirname,"/", "test.json"); var flag, str; flag = "r", str = ""; fs.open(locate_r, flag, function opd(error, fd){ if (error){_err(error, function(){ fs.close(fd,function(){ process.stderr.write("\n" + "In Finally Block: File Closed!" + "\n");});})} var readBuff, buffOffset, buffLength, filePos; readBuff = new Buffer(15), buffOffset = 0, buffLength = readBuff.length, filePos = 0; fs.read(fd, readBuff, buffOffset, buffLength, filePos, function rd(error, readBytes){ error&&_err(error, fd); str = readBuff.toString("utf8"); process.env.str = str; process.stdout.write("str: "+ str + "\n" + "readBuff: " + readBuff + "\n"); fs.close(fd, function(){process.stdout.write( "Read and Closed File." + "\n" )}); //write(str); //run test for process.exec** var env, varName, envCopy, exec; env = process.env, varName, envCopy = {}, exec = require("child_process").exec; for(varName in env){ envCopy[varName] = env[varName]; } process.env.fs = fs, process.env.path = path, process.env.dirname = dirname, process.env.flag = flag, process.env.str = str, process.env._err = _err; process.env.fd = fd; exec("node child.js", env, function(error, stdout, stderr){ if(error){throw (new Error(error));} }); }); }); child.js - "use strict"; var fs, path, _err; fs = require("fs"), path = require("path"), _err = require("./err.js"); var fd, fs, flag, path, dirname, str, _err; fd = process.env.fd, //fs = process.env.fs, //path = process.env.path, dirname = process.env.dirname, flag = process.env.flag, str = process.env.str, _err = process.env._err; var url; url= process.argv[1]; var locate_r; dirname = path.dirname(url); locate_r = dirname + "/" + "test.json";//path.join(dirname,"/", "test.json"); //function write(str){ var locate_a; locate_a = dirname + "/" + "test.json"; //path.join(dirname,"/", "test.json"); flag = "a"; fs.open(locate_a, flag, function opd(error, fd){ error&&_err(error, fs, fd); var writeBuff, buffPos, buffLgh, filePs; writeBuff = new Buffer(str), process.stdout.write( "writeBuff: " + writeBuff + "\n" + "str: " + str + "\n"), buffPos = 0, buffLgh = writeBuff.length, filePs = buffLgh;//null; fs.write(fd, writeBuff, buffPos, buffLgh, filePs-3, function(error, written){ error&&_err(error, function(){ fs.close(fd,function(){ process.stderr.write("\n" + "In Finally Block: File Closed!" + "\n"); }); }); fs.close(fd, function(){process.stdout.write( "Written and Closed File." + "\n");}); }); }); //} err.js - "use strict"; var fs; fs = require("fs"); module.exports = function _err(err, scp, cd){ try{ throw (new Error(err)); }catch(e){ process.stderr.write(e + "\n"); }finally{ cd; } }

    Read the article

  • There is no web named - SharePoint Event Handler

    - by Roosh Malai
    I activated following code with feature (web level scope). Now when i add an item to any document library it should create a folder "". No folder is created and no error is given either. can anyone see what's is going on? I got the following from the log file. I found similar code all over google so I am kinda puzzled why is not working in my environment. Thanks using System; using System.Collections.Generic; using System.Text; using Microsoft.SharePoint; namespace AddaFolder { class clAddaFolder : SPItemEventReceiver { public override void ItemAdded(SPItemEventProperties properties) { base.ItemAdded(properties); using (SPSite currentSite = new SPSite(SPContext.Current.Site.Url)) using (SPWeb currentWeb = currentSite.OpenWeb(SPContext.Current.Web.Url)) { try { //SPListTemplateCollection coll = currentWeb.ListTemplates; //Get the current document library link SPList newList = currentWeb.GetList(SPContext.Current.Web.Url); //.Site.Url); //newList = currentWeb.Lists.Add("My TEST Folder",SPFileSystemObjectType.Folder); //newList.Lists.Items.Add("My TEST Folder", SPFileSystemObjectType.Folder); //newList.Update(); SPListItem newListItem; //newListItem = newList.Folders.Add("", SPFileSystemObjectType.Folder, "My Test Folder"); newListItem = newList.Folders.Add(newList.ToString(), SPFileSystemObjectType.Folder, "My Test Folder"); newListItem.Update(); } catch (SPException spEx) { throw spEx; } } } } } 04/03/2010 17:52:44.25 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Shared Documents/Forms/AllItems.aspx". 04/03/2010 17:52:44.26 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/My TEST Doc Library/Forms/AllItems.aspx". 04/03/2010 17:52:44.27 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Calendar/calendar.aspx". 04/03/2010 17:52:44.29 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Tasks/AllItems.aspx". 04/03/2010 17:52:44.30 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Team Discussion/AllItems.aspx". 04/03/2010 17:52:44.31 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Shared Documents/Forms/AllItems.aspx". 04/03/2010 17:52:44.32 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/My TEST Doc Library/Forms/AllItems.aspx". 04/03/2010 17:52:44.34 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Calendar/calendar.aspx". 04/03/2010 17:52:44.35 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Tasks/AllItems.aspx". 04/03/2010 17:52:44.36 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Team Discussion/AllItems.aspx". 04/03/2010 17:52:51.33 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Shared Documents/Forms/AllItems.aspx". 
04/03/2010 17:52:51.34 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/My TEST Doc Library/Forms/AllItems.aspx". 04/03/2010 17:52:51.35 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Calendar/calendar.aspx". 04/03/2010 17:52:51.37 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Tasks/AllItems.aspx". 04/03/2010 17:52:51.38 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Team Discussion/AllItems.aspx". 04/03/2010 17:52:51.39 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Shared Documents/Forms/AllItems.aspx". 04/03/2010 17:52:51.40 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/My TEST Doc Library/Forms/AllItems.aspx". 04/03/2010 17:52:51.41 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Calendar/calendar.aspx". 04/03/2010 17:52:51.43 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Tasks/AllItems.aspx". 04/03/2010 17:52:51.44 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Team Discussion/AllItems.aspx". 04/03/2010 17:53:02.69 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Shared Documents/Forms/AllItems.aspx". 04/03/2010 17:53:02.71 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/My TEST Doc Library/Forms/AllItems.aspx". 04/03/2010 17:53:02.72 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Calendar/calendar.aspx". 04/03/2010 17:53:02.73 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Tasks/AllItems.aspx". 04/03/2010 17:53:02.74 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Team Discussion/AllItems.aspx". 04/03/2010 17:53:02.75 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Shared Documents/Forms/AllItems.aspx". 04/03/2010 17:53:02.76 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/My TEST Doc Library/Forms/AllItems.aspx". 04/03/2010 17:53:02.77 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Calendar/calendar.aspx". 04/03/2010 17:53:02.78 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Tasks/AllItems.aspx". 04/03/2010 17:53:02.79 w3wp.exe (0x00C0) 0x0C88 Windows SharePoint Services General 8kh7 High There is no Web named "/sites/myDevSiteColl/myDevWeb/Lists/Team Discussion/AllItems.aspx".

    Read the article

  • Unit testing methods decorated with custom attributes

    - by Joel Cunningham
    I am trying to retrofit unit tests onto an existing code base. Both the class and the method I want to unit test are decorated with custom attributes. These attributes are fairly sophisticated and I don't want them to run as part of the unit test. The only solution I have come up with is to compile the attribute out when I want to unit test. I don't really like this solution and would prefer to either replace it with a mocked attribute at runtime or prevent the attribute from running in a more elegant way. How do you unit test code that has class and method attributes that you don't want to run as part of a unit test? Thanks in advance.

    Read the article

  • TFS 2010 RC does not run Visual Studio 2008 MSTest unit tests

    - by Bernard Vander Beken
    Steps: Run the build including unit tests. Expected result: the unit tests are executed and succeed. Actual result: the unit tests are built by the build, but this is the result: 1 test run(s) completed - 0% average pass rate (0% total pass rate) 0/4 test(s) passed, 0 failed, 4 inconclusive, View Test Results Other Errors and Warnings 1 error(s), 0 warning(s) TF270015: 'MSTest.exe' returned an unexpected exit code. Expected '0'; actual '1'. All the tests are enumerated (four), but the result for each test is "Not Executed". Context: Team Foundation Server 2010 release candidate A build definition that runs projects using the Visual Studio 2008 project format and .NET 3.5 SP1. The unit tests run on a development machine, within Visual Studio. The unit tests project references C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll Typical test class [TestClass] public class DemoTest { [TestMethod] public void DemoTestName() { } // etc }

    Read the article

  • NoSuchMethodError with Spring MutableValues

    - by jakob
    Hello experts! I have written a test where I specify my application context location with annotations. I then autowire my DAO into the test:

        @ContextConfiguration(locations = {"file:service/src/main/webapp/WEB-INF/applicationContext.xml"})
        public class MyTest extends AbstractTestNGSpringContextTests {

            @Autowired
            protected MyDao myDao;

            private PlatformTransactionManager transactionManager;
            private TransactionTemplate transactionTemplate;

            @Test
            public void shouldSaveEntityToDb() {
                transactionTemplate.execute(new TransactionCallbackWithoutResult() {
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        Entity entity = new Entity();
                        //test
                        myDao.save(entity);
                        //assert
                        assertNotNull(entity.getId());
                    }
                });
            }

    When I run the test I get an exception which states that the application context could not be loaded, and it boils down to:

        Caused by: java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues;

    I have no idea where to start looking. Why do I get this error and how can I resolve it?

    Info: Spring Framework 3.0.2.RELEASE, Hibernate 3.4.0.GA, TestNG 5.9. Thank you!

    Read the article

  • JavaScript - string regex backreferences

    - by quano
    You can backreference like this in JavaScript:

        var str = "123 $test 123";
        str = str.replace(/(\$)([a-z]+)/gi, "$2");

    This would (rather pointlessly) replace "$test" with "test". But imagine I'd like to pass the resulting string of $2 into a function, which returns another value. I tried doing this, but instead of getting the string "test", I get "$2". Is there a way to achieve this?

        // Instead of getting "$2" passed into somefunc, I want "test"
        // (i.e. the result of the regex)
        str = str.replace(/(\$)([a-z]+)/gi, somefunc("$2"));

    Read the article

  • NET Math Libraries

    - by JoshReuben
    .NET Mathematical Libraries

    MATLAB Builder NE (.NET Builder for MATLAB) - The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/
    - Generates MATLAB-based .NET and COM components with royalty-free deployment.
    - Creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them.

    .NET/Link for Mathematica - www.wolfram.com
    - Two-way integration between Mathematica and Microsoft's .NET platform.
    - Call .NET from Mathematica: use arbitrary .NET types directly from the Mathematica language.
    - Use and control the Mathematica kernel from a .NET program: turn Mathematica into a scripting shell, write custom front ends for Mathematica, or use Mathematica as a computational engine for another program.
    - Comes with full source code.
    - Leverages MathLink, Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs; .NET/Link abstracts the low-level details of the MathLink C API.

    Extreme Optimization - http://www.extremeoptimization.com/
    A collection of general-purpose mathematical and statistical classes built for the .NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package; a trial of version 4.0 is available for download. Multi-core ready, with full support for Task Parallel Library features including cancellation, and a broad base of algorithms covering a wide range of numerical techniques, including linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation) and equation solvers.
    Mathematics (leverages parallelism using .NET 4.0's Task Parallel Library):
    - Basic math: complex numbers, special functions such as Gamma and Bessel functions, numerical differentiation.
    - Solving equations: equations in one variable, systems of linear or nonlinear equations.
    - Curve fitting: linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials.
    - Optimization: find the minimum or maximum of a function in one or more variables; linear programming and mixed integer programming.
    - Numerical integration: integrals over finite or infinite intervals and over 2D and higher-dimensional regions; integration of systems of ordinary differential equations (ODEs).
    - Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit).
    - BigInteger, BigRational, and BigFloat: operations with arbitrary precision.
    Vector and Matrix Library:
    - Real and complex vectors and matrices; single and double precision for elements.
    - Structured matrix types, including triangular, symmetrical and band matrices; sparse matrices.
    - Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition.
    - Portability and performance: calculations can be done in 100% managed code, or in hand-optimized, processor-specific native code (32 and 64 bit).
    Statistics:
    - Data manipulation: sort and filter data, process missing values, remove outliers, etc.; supports .NET data binding.
    - Statistical models: simple, multiple, nonlinear, logistic and Poisson regression; Generalized Linear Models; one- and two-way ANOVA.
    - Hypothesis tests: 14 hypothesis tests, including the z-test, t-test, F-test and runs test, plus more advanced tests such as the Anderson-Darling test for normality, one- and two-sample Kolmogorov-Smirnov tests, and Levene's test for homogeneity of variances.
    - Multivariate statistics: K-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions.
    - Statistical distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions.
    - Random numbers: random variates from any distribution, 4 high-quality random number generators, low-discrepancy sequences, shufflers.
    New in version 4.0 (November 2010):
    - Support for .NET Framework 4.0 and Visual Studio 2010; parallelized with the TPL (multicore ready).
    - Sparse linear program solver that can solve problems with more than 1 million variables; mixed integer linear programming using a branch-and-bound algorithm.
    - Additional special functions: hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral.
    - Full set of window functions for FFTs.
    Pricing (price / update subscription):
    - Single Developer License: $999 / $399
    - Team License (3 developers): $1999 / $799
    - Department License (8 developers): $3999 / $1599
    - Site License (unlimited developers in one physical location): $7999 / $3199

    NMath - http://www.centerspace.net
    .NET math and statistics libraries: matrix and vector classes, random number generators, Fast Fourier Transforms (FFTs), numerical integration, linear programming, linear regression, curve and surface fitting, optimization, hypothesis tests, analysis of variance (ANOVA), probability distributions, principal component analysis, cluster analysis. Built on the Intel Math Kernel Library (MKL), which contains highly optimized, extensively threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage).
    Pricing (price / update subscription):
    - Single Developer License: $1295 / $388
    - Team License (5 developers): $5180 / $1554

    DotNumerics - http://www.dotnumerics.com/NumericalLibraries/Default.aspx - free
    DotNumerics is a website dedicated to numerical computing for .NET that includes a C# numerical library for .NET containing algorithms for linear algebra, differential equations and optimization problems. The linear algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively.
    - Linear Algebra (CSLapack, CSBlas and CSEispack): systems of linear equations, eigenvalue problems, least-squares solutions of linear systems, and singular value problems.
    - Differential Equations: initial-value problems for nonstiff and stiff ordinary differential equations (ODEs) - explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton.
    - Optimization: unconstrained and bound-constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).

    Math.NET Numerics - http://numerics.mathdotnet.com/ - free
    An open source numerical library, the merger of dnAnalytics with Math.NET Iridium; in addition to a purely managed implementation it will also support native hardware optimization. Includes:
    - Constants and special functions; complex type support.
    - Real and complex, dense and sparse linear algebra (with LU, QR, eigenvalue and other decompositions).
    - Non-uniform probability distributions, multivariate distributions, sample generation; alternative uniform random number generators.
    - Descriptive statistics, including order statistics.
    - Various interpolation methods, including barycentric approaches and splines.
    - Numerical function integration (quadrature) routines.
    - Integral transforms, such as the Fourier transform (FFT, with arbitrary length support) and the Hartley transform.
    - Spectral-space-aware sequence manipulation (signal processing); combinatorics, polynomials, quaternions, basic number theory.
    - Parallelized where appropriate, to leverage multi-core and multi-processor systems.
    - Fully managed, or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW).
    - Provides a native facade for F# developers.
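
    As a flavour of what these libraries look like in use, here is a minimal, hypothetical Math.NET Numerics snippet that solves a small linear system (API details vary between library versions, so treat it as a sketch rather than a reference):

        using MathNet.Numerics.LinearAlgebra;

        class LinearSystemDemo
        {
            static void Main()
            {
                // Solve A * x = b for x using a dense double-precision matrix.
                var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 3, 2 }, { 1, 4 } });
                var b = Vector<double>.Build.Dense(new double[] { 7, 9 });
                var x = A.Solve(b);              // x is approximately [1, 2]
                System.Console.WriteLine(x);
            }
        }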

    Read the article

  • TDD and WCF behavior

    - by Frederic Hautecoeur
    Some weeks ago I wanted to develop a WCF behavior using TDD. I lost some time trying to use mocks; after a while I decided to just use a host and a client. I don't like this approach, but so far I haven't found a good and fast solution for unit testing a WCF behavior. To implement my solution I had to: create a dummy service definition; create the dummy service implementation; create a host; create a client in my test; create and add the behavior.
    Dummy Service Definition
    This is just a simple service, composed of an interface and a simple implementation. The structure is aimed to be easily customizable for my future needs.
    Using clauses:
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
    The DataContract:
        [DataContract()]
        public class MyMessage
        {
            [DataMember()]
            public string MessageString;
        }
    The request MessageContract:
        [MessageContract()]
        public class RequestMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myRequest;
        }
    The response MessageContract:
        [MessageContract()]
        public class ResponseMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myResponse;
        }
    The ServiceContract:
        [ServiceContract(Name = "DummyService", Namespace = "http://dummyservice", SessionMode = SessionMode.Allowed)]
        interface IDummyService
        {
            [OperationContract(Action = "Perform", IsOneWay = false, ProtectionLevel = System.Net.Security.ProtectionLevel.None)]
            ResponseMessage DoThis(RequestMessage request);
        }
    Dummy Service Implementation
        public class DummyService : IDummyService
        {
            #region IDummyService Members
            public ResponseMessage DoThis(RequestMessage request)
            {
                ResponseMessage response = new ResponseMessage();
                response.myHeader = "Response";
                response.myResponse = new MyMessage();
                response.myResponse.MessageString =
                    string.Format("Header:<{0}> and Request was <{1}>",
                        request.myHeader, request.myRequest.MessageString);
                return response;
            }
            #endregion
        }
    Host Creation
    The simplest host implementation, using a named pipe binding. The GetBinding method creates a binding for the host and can be used to create the same binding for the client.
        public static class TestHost
        {
            internal static string hostUri = "net.pipe://localhost/dummy";

            // Create Host method.
            internal static ServiceHost CreateHost()
            {
                ServiceHost host = new ServiceHost(typeof(DummyService));

                // Creating Endpoint
                Uri namedPipeAddress = new Uri(hostUri);
                host.AddServiceEndpoint(typeof(IDummyService), GetBinding(), namedPipeAddress);

                return host;
            }

            // Binding Creation method.
            internal static Binding GetBinding()
            {
                NamedPipeTransportBindingElement namedPipeTransport = new NamedPipeTransportBindingElement();
                TextMessageEncodingBindingElement textEncoding = new TextMessageEncodingBindingElement();

                return new CustomBinding(textEncoding, namedPipeTransport);
            }

            // Close Method.
            internal static void Close(ServiceHost host)
            {
                if (null != host)
                {
                    host.Close();
                    host = null;
                }
            }
        }
    Checking the service
    A simple test to check the plumbing.
        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Open();

                using (ChannelFactory<IDummyService> channel =
                    new ChannelFactory<IDummyService>(TestHost.GetBinding(),
                        new EndpointAddress(TestHost.hostUri)))
                {
                    IDummyService svc = channel.CreateChannel();
                    try
                    {
                        RequestMessage request = new RequestMessage();
                        request.myHeader = Guid.NewGuid().ToString();
                        request.myRequest = new MyMessage();
                        request.myRequest.MessageString = "I want some beer.";

                        ResponseMessage response = svc.DoThis(request);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail(ex.Message);
                    }
                }
                host.Close();
            }
        }
    Running the test should show that the client and the host are running fine. So far so good.
    Adding the Behavior
    Add a reference to the behavior project and add the using entry in the test class. We just need to add the behavior to the service host:
        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Description.Behaviors.Add(new MyBehavior());
                host.Open();
                …
    If you set a breakpoint in your behavior and run the test in debug mode, you will hit the breakpoint. In this case I used a service behavior. To add an endpoint behavior you have to add it to the endpoints: host.Description.Endpoints[0].Behaviors.Add(new MyEndpointBehavior()); To add a contract or an operation behavior, a custom attribute should work on the service contract definition. I haven't tried that yet. All the code provided in this blog and in the following files is for sample use.
    Improvements
    I don't like having to instantiate a client and a service to test my behaviors, but so far I haven't found an easy way around it. Today I am passing a type of endpoint to the host creator and it creates the right binding type, which allows me to easily switch between bindings at will. I have used the same approach to test MEX endpoints; another post should come later for this. Enjoy!
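
    Following up on the untried idea above, an operation behavior can indeed be written as an attribute; a rough sketch (untested, names are invented) might look like this:

        using System;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Description;
        using System.ServiceModel.Dispatcher;

        [AttributeUsage(AttributeTargets.Method)]
        public class MyOperationBehaviorAttribute : Attribute, IOperationBehavior
        {
            public void AddBindingParameters(OperationDescription operationDescription,
                                             BindingParameterCollection bindingParameters) { }

            public void ApplyClientBehavior(OperationDescription operationDescription,
                                            ClientOperation clientOperation) { }

            public void ApplyDispatchBehavior(OperationDescription operationDescription,
                                              DispatchOperation dispatchOperation)
            {
                // Plug in a custom IParameterInspector, IOperationInvoker, etc. here.
            }

            public void Validate(OperationDescription operationDescription) { }
        }

        // Applied directly on the service contract:
        // [OperationContract(Action = "Perform", IsOneWay = false)]
        // [MyOperationBehavior]
        // ResponseMessage DoThis(RequestMessage request);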

    Read the article

  • C# "Enum" Serialization - Deserialization to Static Instance

    - by Walt W
    Suppose you have the following class: class Test : ISerializable { public static Test Instance1 = new Test { Value1 = "Hello" ,Value2 = 86 }; public static Test Instance2 = new Test { Value1 = "World" ,Value2 = 26 }; public String Value1 { get; private set; } public int Value2 { get; private set; } public void GetObjectData(SerializationInfo info, StreamingContext context) { //Serialize an indicator of which instance we are - Currently //I am using the FieldInfo for the static reference. } } I was wondering if it is possible / elegant to deserialize to the static instances of the class? Since the deserialization routines (I'm using BinaryFormatter, though I'd imagine others would be similar) look for a constructor with the same argument list as GetObjectData(), it seems like this can't be done directly. I would presume that means the most elegant solution would be to actually use an enum, and then provide some sort of translation mechanism for turning an enum value into an instance reference. How might one go about this?
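
    One possibility, sketched here rather than taken from the poster's code, is to serialize only a key and implement IObjectReference, so that the formatter substitutes the canonical static instance during deserialization:

        using System;
        using System.Runtime.Serialization;

        [Serializable]
        class Test : ISerializable, IObjectReference
        {
            public static readonly Test Instance1 = new Test { Value1 = "Hello", Value2 = 86 };
            public static readonly Test Instance2 = new Test { Value1 = "World", Value2 = 26 };

            public string Value1 { get; private set; }
            public int Value2 { get; private set; }

            private string key;

            private Test() { }

            // Serialization constructor: only the key travels over the wire.
            protected Test(SerializationInfo info, StreamingContext context)
            {
                key = info.GetString("key");
            }

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("key", ReferenceEquals(this, Instance1) ? "Instance1" : "Instance2");
            }

            // The formatter replaces the freshly deserialized object with whatever this returns.
            public object GetRealObject(StreamingContext context)
            {
                return key == "Instance1" ? Instance1 : Instance2;
            }
        }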

    Read the article

  • wxWidgets: How to initialize wxApp without using macros and without entering the main application l

    - by m_pGladiator
    We need to write unit tests for a wxWidgets application using the Google Test framework. The problem is that wxWidgets uses the macro IMPLEMENT_APP(MyApp) to initialize and enter the application main loop. This macro creates several functions, including int main(). The Google Test framework also uses macro definitions for each test. One of the problems is that it is not possible to call the wxWidgets macro from within the test macro, because the first one creates functions. So, we found that we could replace the macro with the following code: wxApp* pApp = new MyApp(); wxApp::SetInstance(pApp); wxEntry(argc, argv); That's a good replacement, but the wxEntry() call enters the original application loop. If we don't call wxEntry() there are still some parts of the application not initialized. The question is how to initialize everything required for a wxApp to run, without actually running it, so we are able to unit test portions of it?
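
    For reference, a minimal sketch of the kind of setup/teardown this usually boils down to, assuming a reasonably recent wxWidgets (the exact argv type differs between ANSI and Unicode builds, so treat the signatures as approximate):

        #include <wx/app.h>
        #include <wx/init.h>

        // MyApp is the application class under test.
        bool SetUpWxForTests(int argc, char **argv)
        {
            wxApp::SetInstance(new MyApp());      // tell wxWidgets which wxApp instance to use
            if (!wxEntryStart(argc, argv))        // initialises the library without entering MainLoop()
                return false;
            return wxTheApp->CallOnInit();        // runs MyApp::OnInit() so frames etc. get created
        }

        void TearDownWxForTests()
        {
            wxTheApp->OnExit();
            wxEntryCleanup();                     // undoes wxEntryStart
        }

    Each Google Test fixture could call these from its SetUp/TearDown, exercising the application object without ever running the event loop.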

    Read the article

  • Universal iPhone/iPad application debug compilation error for iPhone testing

    - by andybee
    I have written an iPhone and iPad universal app which runs fine in the iPad simulator in Xcode, but I would now like to test the iPhone functionality. I seem unable to run the iPhone simulator with this code, as it always defaults to the iPad. Instead I tried to run on the device, and as it begins to run I get the following error: dyld: Symbol not found: _OBJC_CLASS_$_UISplitViewController Referenced from: /var/mobile/Applications/9770ACFA-0B88-41D4-AF56-77B66B324640/Test.app/Test Expected in: /System/Library/Frameworks/UIKit.framework/UIKit in /var/mobile/Applications/9770ACFA-0B88-41D4-AF56-77B66B324640/Test.app/TEST As the app is built programmatically rather than using XIBs, I've split the two device code paths using the following lines in the main.m method: if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) { retVal = UIApplicationMain(argc, argv, nil, @"AppDelegate_Pad"); } else { retVal = UIApplicationMain(argc, argv, nil, @"AppDelegate_Phone"); } From that point forth they use different app delegates, and I've checked my headers to ensure UISplitViewController is never used nor imported in the iPhone logic. How do I avoid this error, and is there a better way to split the universal logic paths in this programmatically created app?
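
    A common cause of this symptom (assuming the device runs an OS earlier than 3.2) is that UISplitViewController ends up referenced at load time somewhere; the usual workaround is to weak-link UIKit and look the class up at runtime, roughly like this sketch:

        // Set UIKit to "Weak" in the target's Link Binary With Libraries phase, then:
        Class splitVCClass = NSClassFromString(@"UISplitViewController");
        if (splitVCClass != nil && UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            // Safe to build the iPad UI here; the class was resolved at runtime.
            UIViewController *splitVC = [[splitVCClass alloc] init];
            // ... configure and install the split view controller ...
            [splitVC release];
        } else {
            // Build the iPhone UI instead.
        }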

    Read the article

  • Scheme early "short circuit return"?

    - by Suan
    I'm trying to find out how I can do an "early return" in a Scheme procedure without using a top-level if or cond like construct. (define (win b) (let* ((test (first (first b))) (result (every (lambda (i) (= (list-ref (list-ref b i) i) test)) (enumerate (length b))))) (when (and (not (= test 0)) result) test)) 0) For example, in the code above, I want win to return test if the when condition is met, and otherwise return 0. However, what happens is that the procedure always returns 0, regardless of the result of the when condition. The reason I am structuring my code this way is that in this procedure I need to do numerous complex checks (multiple blocks similar to the let* in the example), and putting everything in a big cond would be very unwieldy.
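
    One idiom that gives a genuine early return is to grab an escape continuation with call/cc; a sketch reusing the question's helpers (first, every, enumerate):

        ;; Invoking `return` jumps straight out of the procedure with that value.
        (define (win b)
          (call-with-current-continuation
            (lambda (return)
              (let* ((test (first (first b)))
                     (result (every (lambda (i)
                                      (= (list-ref (list-ref b i) i) test))
                                    (enumerate (length b)))))
                (when (and (not (= test 0)) result)
                  (return test)))
              ;; ...further checks can go here, each calling (return ...) when satisfied...
              0)))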

    Read the article

  • Problem with constants in application.ini after PHP upgrade

    - by Marek
    Hi, I've upgraded PHP on my local dev system to version 5.3.0, and there is a problem when I use constants in application.ini. Following the manual at http://framework.zend.com/manual/en/learning.quickstart.create-project.html I have: bootstrap.path = APPLICATION_PATH "/Bootstrap.php" which leads to: Warning: require_once(APPLICATION_PATH/Bootstrap.php) [function.require-once]: failed to open stream: No such file or directory in Zend\Application.php on line 320 Any ideas? SOLVED: Actually, the name of my constant was _DIR_APPLICATION (the code above was copied from the ZF manual). The problem lies in the underscore at the beginning; it seems that parse_ini_file() in PHP 5.3.0 doesn't replace constants named like this. Short test - you need two files: test.ini bootstrap.path = _DIR_APPLICATION "/Bootstrap.php" bootstrap.class = "Bootstrap" and test.php <?php define('_DIR_APPLICATION', 'test'); $data = parse_ini_file('test.ini'); print_r($data); Try to run it, then change the constant name to 'DIR_APPLICATION' (in both files) and compare the results ;)

    Read the article

  • OCUnit testing an embedded framework

    - by d11wtq
    I've added a Unit Test target to my Xcode project but it fails to find my framework when it builds, saying: Test.octest could not be loaded because a link error occurred. It is likely that dyld cannot locate a framework or library that the test bundle was linked against, possibly because the framework or library had an incorrect install path at link time. My framework (the main project target) is designed to be embedded and so has an install path of @executable_path/../Frameworks. I've marked the framework as a direct dependency of the test target and I've added it to the "Link Binary with Libraries" build phase. Additionally I've added a first step (after it has built the dependency) of "Copy Files" which simply copies the framework to the unit test bundle's Frameworks directory. Has anyone got any experience with this? I'm not sure what I've missed.

    Read the article

  • unittest tests reuse for family of classes

    - by zaharpopov
    I have a problem organizing my unittest-based tests for a family of classes. For example, assume I implement a "dictionary" interface and have 5 different implementations I want to test. I write one test class that tests the dictionary interface, but how can I nicely reuse it to test all my classes? So far I do the ugly thing of putting DictType = hashtable.HashDict at the top of the file and then using DictType in the test class. To test another class I manually change DictType to something else. How can I do this otherwise? I can't pass arguments to unittest classes, so is there a nicer way?
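
    The usual pattern, sketched below, is to put the shared tests in a mixin that is not itself a TestCase, then declare one tiny subclass per implementation (hashtable.HashDict comes from the question; treedict is a made-up second implementation):

        import unittest
        import hashtable, treedict  # treedict is hypothetical, for illustration only

        class DictContractTests(object):
            DictType = None  # each concrete subclass sets this

            def test_set_then_get(self):
                d = self.DictType()
                d["key"] = "value"
                self.assertEqual(d["key"], "value")

            def test_missing_key_raises(self):
                d = self.DictType()
                self.assertRaises(KeyError, lambda: d["absent"])

        # Only these subclasses derive from TestCase, so only they are collected and run.
        class HashDictTest(DictContractTests, unittest.TestCase):
            DictType = hashtable.HashDict

        class TreeDictTest(DictContractTests, unittest.TestCase):
            DictType = treedict.TreeDict

        if __name__ == "__main__":
            unittest.main()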

    Read the article

  • Guessing UTF-8 encoding

    - by Dervin Thunk
    I have a question that may be quite naive, but I feel the need to ask because I don't really know what is going on. I'm on Ubuntu. Suppose I do echo "t" > test.txt; if I then run file test.txt I get test.txt: ASCII text. If I then do echo "å" > test.txt, I get test.txt: UTF-8 Unicode text. How does that happen? How does file "know" the encoding, or, alternatively, how does it guess it? Thanks.
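
    A rough sketch of the kind of heuristic involved: bytes that are all below 0x80 look like plain ASCII, while bytes above 0x7F that still form valid UTF-8 multi-byte sequences (as "å" does, encoding to 0xC3 0xA5) can be reported as UTF-8. This toy version is illustrative only, not how file(1) is actually implemented:

        def guess_encoding(data: bytes) -> str:
            # Pure 7-bit content is indistinguishable from ASCII.
            if all(b < 0x80 for b in data):
                return "ASCII text"
            # High bytes that still decode cleanly as UTF-8 suggest UTF-8 text.
            try:
                data.decode("utf-8", errors="strict")
                return "UTF-8 Unicode text"
            except UnicodeDecodeError:
                return "data (unknown encoding)"

        print(guess_encoding("t\n".encode()))   # ASCII text
        print(guess_encoding("å\n".encode()))   # UTF-8 Unicode text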

    Read the article

  • fsutil hardlink doesn't work?

    - by Alix Axel
    I was looking for a way to create hard links under Windows and I found this page: http://technet.microsoft.com/en-us/library/cc788097.aspx To try it out, I created a file (1.txt) on the root of my C: drive with 100 lines of the following content: C:\1.txt (2.598 bytes): test test test test test Then I opened the command prompt and typed: fsutil hardlink create C:\2.txt C:\1.txt Success: 2.txt was created, but when I go to see its size it has exactly 2.598 bytes, and I also noticed some strange behaviours (as far as my understanding of hard links goes): If I delete 2.txt (the hard-linked file), 1.txt is not deleted, and vice versa. If I open 2.txt after I delete 1.txt (the original file), the content is still the same. How does the fsutil hardlink create command differ from the copy command? And how can I create a true hard link under Windows? I'm using Windows XP SP3, and my file system is NTFS.
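
    One quick way to convince yourself the two names refer to the same file rather than a copy is to append through one name and read through the other; a sketch (note that fsutil hardlink list requires a newer Windows than XP):

        rem Append through the first name...
        echo another line>> C:\1.txt

        rem ...and the change is visible through the second name, because both
        rem directory entries point at the same data on disk.
        type C:\2.txt

        rem On Windows Vista/7 and later, this lists every name the file has:
        fsutil hardlink list C:\1.txt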

    Read the article
