Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

Page 64/79 | < Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >

  • Running multiple applications in STM32 flash

    - by Richard
    Hey! I would like to have two applications in my STM32 flash: one is basically a bootloader and the other is the 'main' application. I have figured out how to load each of them into different areas of flash, and after taking a memory dump everything looks like it is in the right place. So when I do a reset it loads the boot, and all the boot does at the moment is jump to the application. Debugging the boot, this all appears to work correctly. However, the problem arrives after I've made the jump to the application: it executes just one (assembly) instruction and then jumps back to the boot. It should stay in the application indefinitely. My question, then, is where should I 'jump' to in the app? There seem to be a few potential spots, such as the interrupt vectors, the reset handler, or the main function of the app. I've tried all of those with no success. Hopefully that makes sense; I'll update the question if not. Thanks for your help! Richard

    Update: I had a play around in the debugger and manually changed the program counter to the application's main, and that worked a charm, so it makes me think there is something wrong with my jump. Why doesn't the program counter keep going after the jump? Actually it seems to be the PSR: the 'T' bit gets reset on the jump, and if I set it again after the jump the app continues on as I desire.

    Solution found: the LSB of the PC must be set to 1 when you do the branch, or the core falls into 'ARM' mode (32-bit instructions instead of the 16-bit instructions of 'Thumb' mode). Quite an obscure little problem, thanks for letting me share it with you!
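
    As a minimal illustration of the fix, a bootloader-to-application jump on a Cortex-M part might look like the sketch below. APP_BASE, the CMSIS __set_MSP() intrinsic and the flash layout are assumptions, not taken from the question:

        #include <stdint.h>

        #define APP_BASE 0x08004000UL                /* assumed application flash base */

        typedef void (*app_entry_t)(void);

        static void jump_to_application(void)
        {
            uint32_t app_stack = *(volatile uint32_t *)APP_BASE;        /* initial MSP   */
            uint32_t app_reset = *(volatile uint32_t *)(APP_BASE + 4);  /* reset handler */

            /* Cortex-M cores execute Thumb only: bit 0 of the branch target must be 1,
               otherwise the T bit is cleared and execution faults straight back out. */
            app_entry_t entry = (app_entry_t)(app_reset | 1UL);

            __set_MSP(app_stack);   /* CMSIS intrinsic, assuming CMSIS headers are in use */
            entry();
        }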

    Read the article

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server is running ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
          crasher:
            pid: <0.36.0>
            registered_name: []
            exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                                         {'EXIT',"Error reading Mnesia database"}}}
              in function application_master:init/4

    Here I was thinking that my system was running on PostgreSQL, while it seems I was still using Mnesia. I have several questions: how can I make sure Mnesia is not being used? How can I divert all the ejabberd activities to PostgreSQL? This is the modules part in my ejabberd.cfg file:

        {modules,
         [
          {mod_adhoc, []},
          {mod_announce, [{access, announce}]}, % requires mod_adhoc
          {mod_caps, []},
          {mod_configure, []}, % requires mod_adhoc
          {mod_ctlextra, []},
          {mod_disco, []},
          {mod_irc, []},
          {mod_last_odbc, []},
          {mod_muc, [
                     {access, muc},
                     {access_create, muc},
                     {access_persistent, muc},
                     {access_admin, muc_admin},
                     {max_users, 500}
                    ]},
          {mod_offline_odbc, []},
          {mod_privacy_odbc, []},
          {mod_private_odbc, []},
          {mod_pubsub, [ % requires mod_caps
                        {access_createnode, pubsub_createnode},
                        {plugins, ["default", "pep"]}
                       ]},
          {mod_register, [
                          {welcome_message, none},
                          {access, register}
                         ]},
          {mod_roster_odbc, []},
          {mod_stats, []},
          {mod_time, []},
          {mod_vcard_odbc, []},
          {mod_version, []}
         ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PostgreSQL DB it cannot operate correctly - but maybe I'm totally off track here, and would love some direction. EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode ERLANG_NODE so it won't be defined by the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of ejabberd is still using it and how I can find it.

    Read the article

  • PHP OOP: Providing Domain Entities with "Identity"

    - by sunwukung
    Bit of an abstract problem here. I'm experimenting with the Domain Model pattern, and barring my other tussles with dependencies - I need some advice on generating Identity for use in an Identity Map. In most examples for the Data Mapper pattern I've seen (including the one outlined in this book: http://apress.com/book/view/9781590599099) - the user appears to manually set the identity for a given Domain Object using a setter:

        $UserMapper = new UserMapper;
        //returns a fully formed user object from record sets
        $User = $UserMapper->find(1);
        //returns an empty object with appropriate properties for completion
        $UserBlank = $UserMapper->get();
        $UserBlank->setId();
        $UserBlank->setOtherProperties();

    Now, I don't know if I'm reading the examples wrong - but in the first $User object, the $id property is retrieved from the data store (I'm assuming $id represents a row id). In the latter case, however, how can you set the $id for an object if it has not yet acquired one from the data store? The problem is generating a valid "identity" for the object so that it can be maintained via an Identity Map - so generating an arbitrary integer doesn't solve it. My current thinking is to nominate different fields for identity (i.e. email) and demanding their presence in generating blank Domain Objects. Alternatively, demanding all objects be fully formed, and using all properties as their identity... hardly efficient. (Or alternatively, dump the Domain Model concept and return to DBAL/DAO/Transaction Scripts... which is seeming increasingly elegant compared to the ORM implementations I've seen...)
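
    For what it's worth, one common answer is to register an object in the Identity Map only once it has a database identity, assigning the id inside the mapper's insert. A hedged PHP sketch (class, column and helper names are invented, not from the book):

        // Sketch only: new objects stay out of the map until the insert gives them an id.
        class IdentityMap {
            private $map = array();
            public function add($obj) {
                $this->map[get_class($obj) . '#' . $obj->getId()] = $obj;
            }
            public function find($cls, $id) {
                $key = $cls . '#' . $id;
                return isset($this->map[$key]) ? $this->map[$key] : null;
            }
        }

        // Inside a mapper, assuming a PDO connection:
        public function insert(User $user) {
            $stmt = $this->pdo->prepare('INSERT INTO users (email) VALUES (?)');
            $stmt->execute(array($user->getEmail()));
            $user->setId((int)$this->pdo->lastInsertId());  // identity now exists
            $this->identityMap->add($user);                 // safe to register
        }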

    Read the article

  • Script throwing unexpected operator when using mysqldump

    - by Astron
    A portion of a script I use to back up MySQL databases has stopped working correctly after upgrading a Debian box to 6.0 Squeeze. I have tested the backup code via the CLI and it works fine. I believe the problem is in the selection of the databases before the backup occurs, possibly something to do with the $skipdb variable. If there is a better way to perform the function then I'm willing to try something new. Any insight would be greatly appreciated.

        $ sudo ./script.sh
        [: 138: information_schema: unexpected operator
        [: 138: -1: unexpected operator
        [: 138: mysql: unexpected operator
        [: 138: -1: unexpected operator

    Using bash -x script, here is one of the iterations:

        + for db in '$DBS'
        + skipdb=-1
        + '[' test '!=' '' ']'
        + for i in '$IGGY'
        + '[' mysql == test ']'
        + :
        + '[' -1 == -1 ']'
        ++ /bin/date +%F
        + FILE=/backups/hostname.2011-03-20.mysql.mysql.tar.gz
        + '[' no = yes ']'
        + /usr/bin/mysqldump --single-transaction -u root -h localhost '-ppassword' mysql
        + /bin/tar -czvf /backups/hostname.2011-03-20.mysql.mysql.tar.gz mysql.sql mysql.sql
        + rm -f mysql.sql

    Here is the code:

        if [ $MYSQL_UP = "yes" ]; then
            echo "MySQL DUMP" >> /tmp/update.log
            echo "--------------------------------" >> /tmp/update.log
            DBS="$($MYSQL -u $MyUSER -h $MyHOST -p"$MyPASS" -Bse 'show databases')"
            for db in $DBS
            do
                skipdb=-1
                if [ "$IGGY" != "" ] ; then
                    for i in $IGGY
                    do
                        [ "$db" == "$i" ] && skipdb=1 || :
                    done
                fi
                if [ "$skipdb" == "-1" ] ; then
                    FILE="$DEST$HOST.`$DATE +"%F"`.$db.mysql.tar.gz"
                    if [ $ENCRYPT = "yes" ]; then
                        $MYSQLDUMP -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf - $db.sql | $OPENSSL enc -aes-256-cbc -salt -out $FILE.enc -k $ENC_PASS && rm -f $db.sql
                    else
                        $MYSQLDUMP --single-transaction -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf $FILE $db.sql && rm -f $db.sql
                    fi
                fi
            done
        fi
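
    A likely culprit (an assumption, since the shebang isn't shown): on Squeeze, /bin/sh is dash, and dash's [ builtin does not accept '=='. A sketch of the loop using portable comparisons, keeping the original variable names:

        #!/bin/bash                           # or keep /bin/sh and use POSIX syntax throughout
        for db in $DBS; do
            skipdb=-1
            for i in $IGGY; do
                [ "$db" = "$i" ] && skipdb=1  # single '=' works in both dash and bash
            done
            if [ "$skipdb" -eq -1 ]; then
                : # dump $db as before
            fi
        done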

    Read the article

  • Mysql - Help me alter this search query to get desired results

    - by sandeepan-nath
    Following is a dump of the tables and data needed to understand the system. The system consists of tutors and classes. The table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes.

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

        CREATE TABLE IF NOT EXISTS `All_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) default NULL,
          `id_wc` int(10) unsigned default NULL,
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `All_Tag_Relations` (`id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 1, NULL), (2, 1, NULL), (3, 1, 1), (4, 1, 1), (6, 2, NULL), (7, 2, NULL), (5, 2, 2), (4, 2, 2);

    Following is my query. It searches for "first class" (the tag id for first is 3 and for class is 4 in the Tags table) and returns all those classes such that both the terms first and class are present in the class name:

        SELECT wtagrels.id_wc,
               SUM(DISTINCT(wtagrels.id_tag = 3)) AS key_1_total_matches,
               SUM(DISTINCT(wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 3 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1
           AND key_2_total_matches = 1
        LIMIT 0, 20

    And it returns the class with id_wc = 1. But I want the search to show all those classes such that all the search terms are present in the class name or its tutor name, so that searching "Sandeepan class" (wtagrels.id_tag = 1,4) or "Sandeepan Nath" also returns the class with id_wc = 1, while searching "Bob first" should not return any classes. Please modify the above query or suggest a new query - if possible using MyISAM fulltext search - but somehow help me get the result.
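
    One possible approach, sketched only against the sample dump above (not tested on more data): join each class's tag rows back onto all tag rows of the same tutor, so a class matches when a term appears among either its own tags or its tutor's tags:

        SELECT ctr.id_wc,
               SUM(atr.id_tag = 1) > 0 AS key_1_matches,   -- 'Sandeepan'
               SUM(atr.id_tag = 4) > 0 AS key_2_matches    -- 'class'
        FROM All_Tag_Relations AS ctr
        JOIN All_Tag_Relations AS atr
          ON atr.id_tutor = ctr.id_tutor
         AND (atr.id_wc = ctr.id_wc OR atr.id_wc IS NULL)
        WHERE ctr.id_wc IS NOT NULL
        GROUP BY ctr.id_wc
        HAVING key_1_matches AND key_2_matches

    With the data above this returns id_wc = 1 for tags (1,4) and for (1,2), and nothing for (6,3), which matches the three examples in the question.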

    Read the article

  • call lynx from jsp script

    - by Piero
    Hi, I have an execute(String cmd) method in a JSP script that calls the exec method from the Runtime class. It works when I call a local command, like a PHP script stored on the server, for example:

        /usr/bin/php /path/to/php/script arg1 arg2

    So I guess my execute command is OK, since it is working with that. Now when I try to call lynx, the text-based web browser, it does not work. If I call it in a terminal, it works fine:

        /usr/bin/lynx -dump -accept_all_cookies 'http://www.someurl.net/?arg1=1&arg2=2'

    But when I call this from my execute command, nothing happens... Any idea why? This is my execute method:

        public String execute(String cmd){
            Runtime r = Runtime.getRuntime();
            Process p = null;
            String res = "";
            try {
                p = r.exec(cmd);
                InputStreamReader isr = new InputStreamReader(p.getInputStream());
                BufferedReader br = new BufferedReader(isr);
                String line = null;
                //out.println(res);
                while ((line = br.readLine()) != null) {
                    res += line;
                }
                p.waitFor();
            } catch (Exception e) {
                res += e;
            }
            System.out.println(p.exitValue());
            return res;
        }
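
    One likely cause (an assumption - the question doesn't show how the lynx command string is built): Runtime.exec(String) splits the command on whitespace and does no shell-style quote handling, so the single quotes around the URL are passed to lynx literally. A sketch using the String[] overload avoids that, and also drains stderr so the process can't block:

        String[] cmd = {
            "/usr/bin/lynx", "-dump", "-accept_all_cookies",
            "http://www.someurl.net/?arg1=1&arg2=2"   // no shell quotes needed here
        };
        Process p = Runtime.getRuntime().exec(cmd);
        // read p.getInputStream() as before, and also consume p.getErrorStream(),
        // otherwise a chatty lynx can fill the pipe and stall the process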

    Read the article

  • malloc works, cudaHostAlloc segfaults?

    - by Mikhail
    I am new to CUDA and I want to use cudaHostAlloc. I was able to isolate my problem to this following code. Using malloc for host allocation works, using cudaHostAlloc results in a segfault, possibly because the area allocated is invalid? When I dump the pointer in both cases it is not null, so cudaHostAlloc returns something...

    Works:

        in_h = (int*) malloc(length*sizeof(int)); //works
        for (int i = 0;i<length;i++) in_h[i]=2;

    Doesn't work:

        cudaHostAlloc((void**)&in_h,length*sizeof(int),cudaHostAllocDefault);
        for (int i = 0;i<length;i++) in_h[i]=2; //segfaults

    Standalone code:

        #include <stdio.h>

        void checkDevice() {
            cudaDeviceProp info;
            int deviceName;
            cudaGetDevice(&deviceName);
            cudaGetDeviceProperties(&info,deviceName);
            if (!info.deviceOverlap) {
                printf("Compute device can't use streams and should be discared.");
                exit(EXIT_FAILURE);
            }
        }

        int main() {
            checkDevice();
            int *in_h;
            const int length = 10000;
            cudaHostAlloc((void**)&in_h,length*sizeof(int),cudaHostAllocDefault);
            printf("segfault comming %d\n",in_h);
            for (int i = 0;i<length;i++) {
                in_h[i]=2;
            }
            free(in_h);
            return EXIT_SUCCESS;
        }

    Invocation:

        [id129]$ nvcc fun.cu
        [id129]$ ./a.out
        segfault comming 327641824
        Segmentation fault (core dumped)

    Details: the program is run in interactive mode on a cluster. I was told that an invocation of the program from the compute node pushes it to the cluster. Have not had any trouble with other home made toy cuda codes.
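
    A hedged guess at the cause, with a sketch: if cudaHostAlloc itself fails (for example because there is no usable CUDA device/context on the node where the binary actually runs), in_h is never set and writing through it segfaults; pinned memory also has to be released with cudaFreeHost rather than free. Checking the return codes makes the failure visible:

        cudaError_t err = cudaHostAlloc((void**)&in_h, length * sizeof(int),
                                        cudaHostAllocDefault);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaHostAlloc: %s\n", cudaGetErrorString(err));
            return EXIT_FAILURE;
        }
        for (int i = 0; i < length; i++) in_h[i] = 2;
        cudaFreeHost(in_h);   /* not free(): this is pinned, driver-tracked memory */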

    Read the article

  • How do I find the module dependencies of my Perl script?

    - by zoul
    I want another developer to run a Perl script I have written. The script uses many CPAN modules that have to be installed before the script can be run. Is it possible to make the script (or the perl binary) to dump a list of all the missing modules? Perl prints out the missing modules’ names when I attempt to run the script, but this is verbose and does not list all the missing modules at once. I’d like to do something like: $ cpan -i `said-script --list-deps` Or even: $ list-deps said-script > required-modules # on my machine $ cpan -i `cat required-modules` # on his machine Is there a simple way to do it? This is not a show stopper, but I would like to make the other developer’s life easier. (The required modules are sprinkled across several files, so that it’s not easy for me to make the list by hand without missing anything. I know about PAR, but it seems a bit too complicated for what I want.) Update: Thanks, Manni, that will do. I did not know about %INC, I only knew about @INC. I settled with something like this: print join("\n", map { s|/|::|g; s|\.pm$||; $_ } keys %INC); Which prints out: Moose::Meta::TypeConstraint::Registry Moose::Meta::Role::Application::ToClass Class::C3 List::Util Imager::Color … Looks like this will work.

    Read the article

  • export and import utf8 data in mysql: best practices

    - by ChrisRamakers
    We're often faced with the need to send a data file to one of our clients with data from the database he/she needs to translate. Most of the time this export is CSV or XLS. Most of the time we create a csv dump with phpmyadmin and get an xls file in return with the translated data. The problem is that most of the time the data is UTF8 and when the file is returned as xls each and every time we load the data into mysql again we end up with utf8 problems, characters not being displayed properly, etc ... We've already doublechecked everything in mysql from my.conf to column charactersets and everything is set correctly to UTF8. My question is not how to fix the encoding issue since that's been solved but how we would best proceed in the future handling this situation? What export format should we hand over? How should we import (just mysql load data infile or our own processing scripts). What is the general consensus on how to handle this situation? We would like to continue using excel if possible since that's the format almost everybody expects including our clients' translation agencies. Our clients' ease of use is the most important factor here, without overloading us with major issues each time. The best of both worlds :)
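
    One workable convention, sketched here with invented table and file names: be explicit about the character set at both ends instead of relying on defaults, i.e. export with mysqldump's --default-character-set and re-import the translated file with an explicit CHARACTER SET clause (available in reasonably recent MySQL versions):

        # export: force UTF-8 on the way out
        mysqldump --default-character-set=utf8 mydb translations > translations.sql

        -- import (run in the mysql client): declare the file's encoding explicitly,
        -- assuming the agency returns UTF-8 CSV rather than Excel's default ANSI
        LOAD DATA INFILE '/tmp/translated.csv'
        INTO TABLE translations
        CHARACTER SET utf8
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n';

    The recurring failure mode this guards against is Excel silently saving the round-tripped file in a legacy code page; asking the translators to save as "Unicode text" or UTF-8 CSV is usually the other half of the fix.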

    Read the article

  • Pass Result of ASIHTTPRequest "requestFinished" Back to Originating Method

    - by Intelekshual
    I have a method (getAllTeams:) that initiates an HTTP request using the ASIHTTPRequest library. NSURL *httpURL = [[[NSURL alloc] initWithString:@"/api/teams" relativeToURL:webServiceURL] autorelease]; ASIHTTPRequest *request = [[[ASIHTTPRequest alloc] initWithURL:httpURL] autorelease]; [request setDelegate:self]; [request startAsynchronous]; What I'd like to be able to do is call [WebService getAllTeams] and have it return the results in an NSArray. At the moment, getAllTeams doesn't return anything because the HTTP response is evaluated in the requestFinished: method. Ideally I'd want to be able to call [WebService getAllTeams], wait for the response, and dump it into an NSArray. I don't want to create properties because this is disposable class (meaning it doesn't store any values, just retrieves values), and multiple methods are going to be using the same requestFinished (all of them returning an array). I've read up a bit on delegates, and NSNotifications, but I'm not sure if either of them are the best approach. I found this snippet about implementing callbacks by passing a selector as a parameter, but it didn't pan out (since requestFinished fires independently). Any suggestions? I'd appreciate even just to be pointed in the right direction. NSArray *teams = [[WebService alloc] getAllTeams]; (currently doesn't work, because getAllTeams doesn't return anything, but requestFinished does. I want to get the result of requestFinished and pass it back to getAllTeams:)
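
    If blocking the caller is acceptable, ASIHTTPRequest also has a synchronous mode, which lets getAllTeams return its result directly. A minimal sketch (parseTeams: is a hypothetical helper, and this should not run on the main thread):

        - (NSArray *)getAllTeams {
            NSURL *httpURL = [[[NSURL alloc] initWithString:@"/api/teams"
                                              relativeToURL:webServiceURL] autorelease];
            ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:httpURL];
            [request startSynchronous];
            if ([request error] != nil) {
                return nil;
            }
            // turn the response into an NSArray with whatever parsing you use today
            return [self parseTeams:[request responseString]];
        }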

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the correct date of the backup, upload to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup) it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: Is there any way to not load the whole file into memory, or are there any other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be quit:

        filedata = open(filename, 'rb').read()
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME,
                                  'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read', 'Content-Type': content_type})
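
    If switching libraries isn't an option, one low-tech sketch is to split the archive into fixed-size parts and upload each part as its own key with the same connection.put call the script already uses, so only one part is ever held in memory (the part size and key naming are assumptions):

        CHUNK = 50 * 1024 * 1024              # 50 MB per part, under the 60 MB budget

        f = open(filename, 'rb')
        part = 0
        while True:
            filedata = f.read(CHUNK)          # only this much is in memory at once
            if not filedata:
                break
            part += 1
            connection.put(BUCKET_NAME,
                           'daily/%s.part%03d' % (filename, part),
                           S3.S3Object(filedata),
                           {'x-amz-acl': 'public-read',
                            'Content-Type': 'application/octet-stream'})
        f.close()

    Restoring is then a matter of downloading the parts in order and concatenating them (e.g. cat backup.tar.gz.part* > backup.tar.gz).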

    Read the article

  • Hardware acceleration issue with WPF application - Analyzing crash dumps

    - by Appu
    I have a WPF application crashing on the client machine. My initial analysis shows that this is because of H/W acceleration, and disabling H/W acceleration at the registry level solves the issue. Now I have to make sure that the crash really is caused by H/W acceleration. I have the crash dumps available, which have a stack trace:

        9c52b020 80636ac7 9c52b0bc e1174818 9c52b0c0 nt!_SEH_prolog+0x1a
        9c52b038 806379d6 e10378a4 9c52b0bc e1174818 nt!CmpQuerySecurityDescriptorInfo+0x23
        9c52b084 805bfe5b e714b160 00000001 9c52b0bc nt!CmpSecurityMethod+0xce
        9c52b0c4 805c01c8 e714b160 9c52b0f0 e714b15c nt!ObpGetObjectSecurity+0x99
        9c52b0f4 8062f28f e714b160 8617f008 00000001 nt!ObCheckObjectAccess+0x2c
        9c52b140 8062ff30 e1038008 0066a710 cde2b714 nt!CmpDoOpen+0x2d5
        9c52b340 805bf488 0066a710 0066a710 8617f008 nt!CmpParseKey+0x5a6
        9c52b3b8 805bba14 00000000 9c52b3f8 00000240 nt!ObpLookupObjectName+0x53c
        9c52b40c 80625696 00000000 8acad448 00000000 nt!ObOpenObjectByName+0xea
        9c52b508 8054167c 9c52b828 82000000 9c52b5ac nt!NtOpenKey+0x1c8
        9c52b508 80500699 9c52b828 82000000 9c52b5ac nt!KiFastCallEntry+0xfc
        9c52b58c 805e701e 9c52b828 82000000 9c52b5ac nt!ZwOpenKey+0x11
        9c52b7fc 805e712a 00000002 805e70a0 00000000 nt!RtlpGetRegistryHandleAndPath+0x27a
        9c52b844 805e73e3 9c52b864 00000014 9c52bbb8 nt!RtlpQueryRegistryGetBlockPolicy+0x2e
        9c52b86c 805e79eb 00000003 e8af79dc 00000014 nt!RtlpQueryRegistryDirect+0x4b
        9c52b8bc 805e7f10 e8af79dc 00000003 9c52b948 nt!RtlpCallQueryRegistryRoutine+0x369
        9c52bb58 b8f5bca4 00000005 e6024b30 9c52bbb8 nt!RtlQueryRegistryValues+0x482
        WARNING: Stack unwind information not available. Following frames may be wrong.
        9c52bc00 b8f20a5b 00000005 85f4204c 85f4214c igxpmp32+0x44ca4
        9c52c280 b8f1cc7b 890bd358 9c52c2b0 00000000 igxpmp32+0x9a5b
        9c52c294 b8f11729 890bd358 9c52c2b0 00000a0c igxpmp32+0x5c7b
        9c52c358 804ef19f 890bd040 86d2dad0 0000080c VIDEOPRT!pVideoPortDispatch+0xabf
        9c52c368 bf85e8c2 9c52c610 bef6ce84 00000014 nt!IopfCallDriver+0x31
        9c52c398 bf85e93c 890bd040 00232150 9c52c3f8 win32k!GreDeviceIoControl+0x93
        9c52c3bc bebafc7b 890bd040 00232150 9c52c3f8 win32k!EngDeviceIoControl+0x1f
        9c52d624 bebf3fa9 890bd040 bef2a28c bef2a284 igxpdx32+0x8c7b
        9c52d6a0 8054167c 9c52da28 b915d000 9c52d744 igxpdx32+0x4cfa9
        9c52d6a0 00000000 9c52da28 b915d000 9c52d744 nt!KiFastCallEntry+0xfc

    How do I confirm that the crash is caused by an H/W acceleration issue by looking at the above data? I am guessing VIDEOPRT!pVideoPortDispatch+0xabf indicates some error with the rendering. Is that correct? I am using WinDbg to view the crash dump.

    Read the article

  • jmap -histo is missing a lot of memory

    - by ripper234
    I have a JVM with 12 gigs of total RAM, out of which 7 GB is allocated to the old generation. There seems to be some memory leak, because almost the entire old gen is full, and will not release when I schedule a GC (the process is not doing anything else at that time). A jmap -histo dump only reveals less than 1 gigabyte worth of objects. Where are the missing 6 gigs? What better tool do you propose for diagnosing this? Here is the top of the jmap output:

         num     #instances         #bytes  class name
        ----------------------------------------------
           1:        429853       68725736  <constMethodKlass>
           2:        429853       51594040  <methodKlass>
           3:         37503       49611368  <constantPoolKlass>
           4:         37503       31109576  <instanceKlassKlass>
           5:        191716       28019968  [C
           6:         32573       26933152  <constantPoolCacheKlass>
           7:         86158       13789560  [I
           8:         53532       11244232  [B
           9:           284       10507216  [J
          10:        137608        7210664  <symbolKlass>
          11:        203072        6498304  java.lang.String
          12:         10132        5219512  <methodDataKlass>
          13:         39694        4128176  java.lang.Class
          14:         55713        3792816  [S
          15:         61816        3141936  [[I
          16:         90109        2883488  java.util.HashMap$Entry
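
    Two follow-up steps that usually narrow this down, sketched with standard HotSpot tooling: take a full binary heap dump for a real analyzer, and compare the Java heap against what -histo accounts for, since direct buffers, thread stacks and JNI allocations never appear in the histogram:

        # full heap dump (:live also forces a GC first), then open it in Eclipse MAT or jhat
        jmap -dump:live,format=b,file=heap.hprof <pid>

        # per-generation usage and capacity, to compare against the histogram total
        jmap -heap <pid>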

    Read the article

  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise), and I work on PHP/MySQL projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep the schemata in step with my code revisions.

    [edit] Momentary solution: for the moment I decided I will just make a schema dump, plus one with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]

    [edit2] Plus I'm now also using a third file called increments.sql where I put all the changes with dates, etc. to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql. [/edit2]
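
    A minimal sketch of that momentary solution as two repeatable commands (database, table and path names are placeholders):

        # schema only, plus stored routines - this is the file that gets tagged
        mysqldump --no-data --routines --skip-comments mydb > db/schema.sql

        # seed/reference rows needed for a fresh install, kept in a second file
        mysqldump --no-create-info mydb countries currencies > db/initial-data.sql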

    Read the article

  • how to use git rebase to clean up a convoluted history

    - by lsiden
    After working for several weeks with a half dozen different branches and merges, on both my laptop at work and my desktop at home, my history has gotten a bit convoluted. For example, I just did a fetch, then merged master with origin/master. Now, when I do git show-branch, the output looks like this:

        ! [login] Changed domain name.
        ! [master] Merge remote branch 'origin/master'
        ! [migrate-1.9] Migrating to 1.9.1 on Heroku
        ! [rebase-master] Merge remote branch 'origin/master'
        ----
        - - [master] Merge remote branch 'origin/master'
        + + [master^2] A bit of re-arranging and cleanup.
        - - [master^2^] Merge branch 'rpx-login'
        + + [master^2^^2] Commented out some debug logging.
        + + [master^2^^2^] Monkey-patched Rack::Request#ip
        + + [master^2^^2~2] dump each request to log
        ....

    I would like to clean this up with a git rebase. I created a new branch, rebase-master, for this purpose, and on this branch tried git rebase <common-ancestor>. However, I have to resolve many conflicts, and the end result on branch rebase-master no longer matches the corresponding version on master, which has already been tested and works! I thought I saw a solution to this somewhere but can't find it anymore. Does anyone know how to do this? Or will these convoluted ref names go away when I start deleting un-needed branches that I have already merged? I am the sole developer on this project, so there is no one else who will be affected.
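
    Two checks that help here, sketched with plain git commands: verify that the rebased branch really has the same content as master despite the different history, and redo the cleanup with the interactive rebase so noisy commits can be squashed rather than replayed one by one:

        # if the rebase reproduced the same tree, this diff is empty -
        # conflicts hit during the rebase don't matter as long as this holds
        git diff master rebase-master

        # interactive rebase onto the common ancestor: mark commits as squash/fixup
        git rebase -i <common-ancestor> rebase-master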

    Read the article

  • Is it possible to unstick a remote IIS ASP server after an exception hangs the session?

    - by user89691
    I have been coding an app in classic ASP that accesses two Access databases. I had a page I was working on throw an exception, which is normal during development and causes no lasting problems. This time, however, after the exception any attempt to open either of the databases would freeze the session with an infinite script timeout. If I delete the session cookie I am able to access ASP pages again, until I try to open the database again. The database that was open when the exception was thrown is left open: there is an LDB lock file and I can't rename or delete either the LDB or MDB file, though I can download the MDB file with FTP. The second Access database is not open, but any attempt to read it also hangs the session. Accessing HTML pages is fine. The site is hosted with Hostway and they are not interested ("Coding problem = Your problem", even though it leaves my site dead in the water, I suspect until the next reboot, whenever that might be). Here is the dump from the relevant ASP page that threw the exception:

        Active Server Pages error 'ASP 0115'
        Unexpected error
        /translatestats.asp
        A trappable error (C0000005) occurred in an external object. The script cannot continue running.

        Active Server Pages error 'ASP 0240'
        Script Engine Exception
        /translatestats.asp
        A ScriptEngine threw exception 'C0000005' in 'IActiveScript::Close()' from 'CActiveScriptEngine::FinalRelease()'.

    Is there any way I can unstick the site / force-close the database remotely?

    Read the article

  • Selenium tests not building due to NUnit error (Mono+OS X)

    - by Jem
    I'm running Selenium RC on my Mac and driving my tests using NUnit in C#. My problem is that when I try and build a simple test in Mono I get the following error. Error CS0433: The imported type `NUnit.Framework.Assert' is defined multiple times (CS0433) (TestProject) When I comment out the Assert's it runs fine. The code I'm using currently is just a dump from the openqa site using System; using System.Text; using System.Text.RegularExpressions; using System.Threading; using NUnit.Framework; using Selenium; namespace SeleniumTests { [TestFixture] public class AllTests { private ISelenium selenium; private StringBuilder verificationErrors; [SetUp] public void SetupTest () { selenium = new DefaultSelenium ("localhost", 4444, "*safari", "http://www.google.co.uk"); selenium.Start (); verificationErrors = new StringBuilder (); } [TearDown] public void TeardownTest () { try { selenium.Stop (); } catch (Exception) { // Ignore errors if unable to close the browser } Assert.AreEqual ("", verificationErrors.ToString ()); } [Test] public void GoogleHomepageTests () { // Open Google search engine. selenium.Open ("http://www.google.com/"); // Assert Title of page. Assert.AreEqual ("Google", selenium.GetTitle ()); // Provide search term as "Selenium OpenQA" selenium.Type ("q", "Selenium OpenQA"); // Read the keyed search term and assert it. Assert.AreEqual ("Selenium OpenQA", selenium.GetValue ("q")); // Click on Search button. selenium.Click ("btnG"); // Wait for page to load. selenium.WaitForPageToLoad ("5000"); // Assert that "www.openqa.org" is available in search results. Assert.IsTrue (selenium.IsTextPresent ("www.openqa.org")); // Assert that page title is - "Selenium OpenQA - Google Search" Assert.AreEqual ("Selenium OpenQA - Google Search", selenium.GetTitle ()); } } } Any ideas? Is it a OSX/Mono thing?

    Read the article

  • Trouble with piping through sed

    - by Joel
    I am having trouble piping through sed. Once I have piped output to sed, I cannot pipe the output of sed elsewhere.

        wget -r -nv http://127.0.0.1:3000/test.html

    Outputs:

        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/test.html [99/99] -> "127.0.0.1:3000/test.html" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/robots.txt [83/83] -> "127.0.0.1:3000/robots.txt" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/shop [22818/22818] -> "127.0.0.1:3000/shop.29" [1]

    I pipe the output through sed to get a clean list of URLs:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g'

    Outputs:

        http://127.0.0.1:3000/test.html
        http://127.0.0.1:3000/robots.txt
        http://127.0.0.1:3000/shop

    I would like to then dump the output to file, so I do this:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE

    I interrupt the process after a few seconds and check the file, yet it is empty. Interestingly, the following yields no output (same as above, but piping sed output through cat):

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' | cat

    Why can I not pipe the output of sed to another program like cat?
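
    The usual explanation (assuming GNU sed): sed switches to block buffering when its output is not a terminal, so nothing reaches the file or cat until the buffer fills or wget exits. Running it unbuffered makes each line flow through immediately:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 \
          | grep --line-buffered -v ERROR \
          | sed -u 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE    # -u / --unbuffered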

    Read the article

  • phppgadmin : How does it kick users out of postgres, so it can db_drop?

    - by egarcia
    I've got one PostgreSQL database (I'm the owner) and I'd like to drop it and re-create it from a dump. The problem is, there are a couple of applications (two websites, Rails and Perl) that access the DB regularly, so I get a "database is being accessed by other users" error. I've read that one possibility is getting the pids of the processes involved and killing them individually. I'd like to do something cleaner, if possible. phpPgAdmin seems to do what I want: I am able to drop schemas using its web interface, even when the websites are on, without getting errors. So I'm investigating how its code works. However, I'm no PHP expert. I'm trying to understand the phpPgAdmin code in order to see how it does it. I found a line (257 in Schemas.php) that says:

        $data->dropSchema(...)

    $data is a global variable and I could not find where it is defined. Any pointers would be greatly appreciated.
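
    Incidentally, if the underlying goal is just "kick everyone out so the database can be dropped", that can be done in SQL without reading phpPgAdmin's internals. A sketch for PostgreSQL 8.4/9.0-era servers (the column is procpid there; it is plain pid from 9.2 on), run while connected to a different database such as postgres:

        SELECT pg_terminate_backend(procpid)
        FROM pg_stat_activity
        WHERE datname = 'mydb'
          AND procpid <> pg_backend_pid();

        DROP DATABASE mydb;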

    Read the article

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
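
    For the record, the old VC6-era trick can be reproduced portably by swapping std::cout's stream buffer for a small "tee" buffer, so none of the 1600 existing '<<' call sites change. A minimal, unbuffered sketch (class and file names are made up):

        #include <fstream>
        #include <iostream>
        #include <streambuf>

        // Forwards every character to two underlying buffers (console + file).
        class teebuf : public std::streambuf {
        public:
            teebuf(std::streambuf* a, std::streambuf* b) : a_(a), b_(b) {}
        protected:
            virtual int overflow(int c) {
                if (c == traits_type::eof()) return traits_type::not_eof(c);
                const int r1 = a_->sputc(traits_type::to_char_type(c));
                const int r2 = b_->sputc(traits_type::to_char_type(c));
                return (r1 == traits_type::eof() || r2 == traits_type::eof())
                           ? traits_type::eof() : c;
            }
            virtual int sync() {
                return (a_->pubsync() == 0 && b_->pubsync() == 0) ? 0 : -1;
            }
        private:
            std::streambuf* a_;
            std::streambuf* b_;
        };

        // At startup:
        //   std::ofstream log("run.log");
        //   teebuf tee(std::cout.rdbuf(), log.rdbuf());
        //   std::streambuf* old = std::cout.rdbuf(&tee);
        // Restore 'old' (and flush) before the ofstream is destroyed at shutdown.

    Because it writes straight through instead of accumulating, this also avoids the single-buffer-dumped-at-exit problem described above.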

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major stump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I observed that I have the following attitude in programming: I tend to be more of a purist, scorning unelegant approaches to solving problems using code I tend to look at anything in a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts I have a really strong impulse on refactoring my code, even if I miss deadlines or prolong development times I am obsessed with good directory structures, file naming conventions, class, method, and variable naming conventions I tend to always want to study something new, even, as I said, at the cost of missing deadlines I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling) i.e the OOP thinking I tend to combine OOP and procedural coding whenever I see fit I want my code to execute fast (thus the elegant approaches and refactoring) This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming since our first year in college). By the other way around I mean, they fire up coding, gets the job done much faster because they don't have to really look at how clean their codes are or how elegant their algorithms are, they don't bother with OOP however big their projects are, they mostly use web APIs, piece them together and voila! Working code! CLients are happy, they get paid fast, at the expense of a really unmaintainable or hard-to-read code that lacks structure and conventions, or slow executions of certain actions (which the common reasoning against would be that internet connections are much faster these days, hardware is more powerful). The excuse I often receive is clients don't care about how you write the code, but they do care about how long you deliver it. If it works then all is good. Now, did my "purist" approach to programming may have been the wrong way to start programming? Should I just dump these purist concepts and just code the hell up because I have seen it: clients don't really care how beautifully coded it is?

    Read the article

  • How to Capture a live stream from Windows Media Server 2008

    - by Hummad Hassan
    I want to capture the live stream from windows media server to filesystem on my pc I have tried with my own media server with the following code. but when i have checked the out put file i have found this in it. FileStream fs = null; try { HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test"); CookieContainer ci = new CookieContainer(1000); req.Timeout = 60000; req.Method = "Get"; req.KeepAlive = true; req.MaximumAutomaticRedirections = 99; req.UseDefaultCredentials = true; req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"; req.ReadWriteTimeout = 90000000; req.CookieContainer = ci; //req.MediaType = "video/x-ms-asf"; req.AllowWriteStreamBuffering = true; HttpWebResponse resp = (HttpWebResponse)req.GetResponse(); Stream resps = resp.GetResponseStream(); fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite); byte[] buffer = new byte[1024]; int bytesRead = 0; while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0) { fs.Write(buffer, 0, bytesRead); } } catch (Exception ex) { } finally { if (fs != null) fs.Close(); }

    Read the article

  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control so I have been playing around with git-svn for the advantages of my own branches, commit as often as I want without touching the main repo, etc. Since my git svn checkout is local, I have cloned it to a network share as well to act as a backup. My thinking is that if my desktop takes a dump I will at least have the repo on the network share to get changes that I have not had a chance to dcommit yet. My workflow is to work from the desktop, make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I had setup the repo on the network share using git clone repo_on_my_desktop and then updating the repo on the network share with git pull origin master. The problem that I am running into is when I used do a git rebase to squish multiple commits prior to dcommitting to the main svn repository. When I do this, I get merge conflicts on the repo on the network share when I try to backup at night. Is there a way to simply sync entirely with the repository on my desktop without doing a new git clone each night?
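
    One way to make the nightly backup immune to rebases, sketched with placeholder paths: treat the network share as a mirror the desktop pushes to, rather than a clone that pulls, so rewritten history simply overwrites whatever is there:

        # once: create the backup as a bare mirror of the desktop repo
        git clone --mirror /path/to/desktop/repo /mnt/share/repo.git

        # nightly, from the desktop: force every ref on the share to match exactly
        git push --mirror /mnt/share/repo.git

    Since the mirror push replaces refs wholesale, squashed or rebased commits never produce merge conflicts on the backup copy.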

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. Im not a stupid PC user asking to fix my pc problem. I am intrigued and am having a deep technical look at whats going on. I have come across a Windows XP machine that is sending unwanted p2p traffic. I have done a 'netstat -b' command and explorer.exe is sending out the traffic. When I kill this process the traffic stops and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x) is the machines IP. GNUTELLA CONNECT/0.6 Listen-IP: x.x.x.x:8059 Remote-IP: 76.164.224.103 User-Agent: LimeWire/5.3.6 X-Requeries: false X-Ultrapeer: True X-Degree: 32 X-Query-Routing: 0.1 X-Ultrapeer-Query-Routing: 0.1 X-Max-TTL: 3 X-Dynamic-Querying: 0.1 X-Locale-Pref: en GGEP: 0.5 Bye-Packet: 0.1 GNUTELLA/0.6 200 OK Pong-Caching: 0.1 X-Ultrapeer-Needed: false Accept-Encoding: deflate X-Requeries: false X-Locale-Pref: en X-Guess: 0.1 X-Max-TTL: 3 Vendor-Message: 0.2 X-Ultrapeer-Query-Routing: 0.1 X-Query-Routing: 0.1 Listen-IP: 76.164.224.103:15649 X-Ext-Probes: 0.1 Remote-IP: x.x.x.x GGEP: 0.5 X-Dynamic-Querying: 0.1 X-Degree: 32 User-Agent: LimeWire/4.18.7 X-Ultrapeer: True X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230 GNUTELLA/0.6 200 OK So it seems that the malware has hooked into explorer.exe and hidden its self quite well as a Norton Scan doesn't pick anything up. I have looked in Windows firewall and it shouldn't be letting this traffic through. I have had a look into the messages explorer.exe is sending in Spy++ and the only related ones I can see are socket connections etc... My question is what can I do to look into this deeper? What does malware achieve by sending p2p traffic? I know to fix the problem the easiest way is to reinstall Windows but I want to get to the bottom of it first, just out of interest.

    Read the article

  • curl halts script execution

    - by Funky Dude
    My script uses curl to upload images to the SmugMug site via the SmugMug API. I loop through a folder and upload every image in there, but after 3-4 uploads curl_exec would fail, stop everything and prevent the other images from uploading.

        $upload_array = array(
            "method"    => "smugmug.images.upload",
            "SessionID" => $session_id,
            "AlbumID"   => $alb_id,
            "FileName"  => zerofill($n, 3) . ".jpg",
            "Data"      => base64_encode($data),
            "ByteCount" => strlen($data),
            "MD5Sum"    => $data_md5);

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $upload_array);
        curl_setopt($ch, CURLOPT_URL, "https://upload.smugmug.com/services/api/rest/1.2.2/");
        $upload_result = curl_exec($ch); //fails here
        curl_close($ch);

    Updated: I added logging to my script. When it does fail, the logging stops after fwrite($fh, "begin curl\n");

        fwrite($fh, "begin curl\n");
        $upload_result = curl_exec($ch);
        fwrite($fh, "curl executed\n");
        fwrite($fh, "curl info: ".print_r(curl_getinfo($ch,true))."\n");
        fwrite($fh, "xml dump: $upload_result \n");
        fwrite($fh, "curl error: ".curl_error($ch)."\n");

    I also curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60*60);
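
    A sketch of how to keep the loop going instead of hanging on one transfer: cap the total transfer time (CURLOPT_CONNECTTIMEOUT only bounds the connect phase, not a stalled upload) and check the return value of curl_exec, skipping the failed image rather than stopping:

        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);   // connect phase only
        curl_setopt($ch, CURLOPT_TIMEOUT, 300);         // hard cap for the whole upload

        $upload_result = curl_exec($ch);
        if ($upload_result === false) {
            error_log("smugmug upload failed for image $n: " . curl_error($ch));
            // continue with the next image instead of halting the loop
        }
        curl_close($ch);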

    Read the article
