Search Results

Search found 14176 results on 568 pages for 'functional programming'.


  • How to parse a CSV file containing serialized PHP? [migrated]

    - by garbetjie
    I've just started dabbling in Perl, to try and gain some exposure to different programming languages - so forgive me if some of the following code is horrendous. I needed a quick and dirty CSV parser that could receive a CSV file, and split it into file batches containing "X" number of CSV lines (taking into account that entries could contain embedded newlines). I came up with a working solution, and it was going along just fine. However, as one of the CSV files that I'm trying to split, I came across one that contains serialized PHP code. This seems to break the CSV parsing. As soon as I remove the serialization - the CSV file is parsed correctly. Are there any tricks I need to know when it comes to parsing serialized data in CSV files? Here is a shortened sample of the code: use strict; use warnings; my $csv = Text::CSV_XS->new({ eol => $/, always_quote => 1, binary => 1 }); my $out; my $in; open $in, "<:encoding(utf8)", "infile.csv" or die("cannot open input file $inputfile"); open $out, ">outfile.000"; binmode($out, ":utf8"); while (my $line = $csv->getline($in)) { $lines++; $csv->print($out, $line); } I'm never able to get into the while loop shown above. As soon as I remove the serialized data, I suddenly am able to get into the loop. Edit: An example of a line that is causing me trouble (taken straight from Vim - hence the ^M): "26","other","1","20,000 Subscriber Plan","Some text here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","37","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","6","1" "27","other","1","35,000 Subscriber Plan","Some test here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","38","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","7","1" "28","other","1","50,000 Subscriber Plan","Some text here.^M\ Some more 
text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","39","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","8","1""73","other","8","10,000,000","","","","0","","0","","0","0","recurring","0","","payment","","0","","","75","0","10000000","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:17:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","14","0"


  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!)

    All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId ("DerivedComponent"). Phew.

    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper to an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky).

    The ComponentManager has template functions such as:

      template <typename ComponentType> void addProperty (ComponentType& component, int componentTypeId, int entityId)

    and:

      template <typename ComponentType> TypedComponentList<ComponentType>* getComponentList (int componentTypeId)

    which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call:

      TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent"));

    Which I'll admit looks pretty ugly.

    Bad points of the design:
    - If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail.
    - Each time a new Component is instantiated, we must check a hashed string to see if that component type has been instantiated before.
    - Will probably generate a lot of assembly because of the extensive use of templates. I don't know how well the compiler will be able to minimise this.
    - You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.

    Good points of the design:
    - Components are stored in typed vectors but they can also be found by using their entity owner id as a hash. This means we can iterate them fast, and minimise cache misses, but also skip straight to the component we need if necessary.
    - We can freely add Components of different types to the system without having to add and manage new Component vectors by hand.

    What do you think? Do the good points outweigh the bad?
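
    For comparison, a sketch in Python (names invented for illustration) of the storage layout being described - dense per-type arrays plus an owner-id index - with the string-keyed registry replaced by keying on the type itself, which side-steps the "wrong string in the base constructor" failure mode. In C++ the same effect is often achieved with a static counter behind a function template instead of a hashed string:

        from collections import defaultdict
        from itertools import count

        class ComponentManager:
            """One dense list per component type, plus an entity-id -> slot map
            so iteration stays cache-friendly and owner lookups stay O(1)."""
            def __init__(self):
                self._type_ids = {}              # component class -> small int id
                self._next_id = count()
                self._lists = defaultdict(list)  # type id -> dense component list
                self._index = defaultdict(dict)  # type id -> {entity id: slot}

            def type_id(self, cls):
                # Assigned on first use; the class object itself is the key, so
                # there is no hand-typed string that can drift from the class name.
                if cls not in self._type_ids:
                    self._type_ids[cls] = next(self._next_id)
                return self._type_ids[cls]

            def add(self, entity, component):
                tid = self.type_id(type(component))
                self._index[tid][entity] = len(self._lists[tid])
                self._lists[tid].append(component)

            def all_of(self, cls):               # fast iteration path
                return self._lists[self.type_id(cls)]

            def get(self, cls, entity):          # direct lookup by owner
                tid = self.type_id(cls)
                return self._lists[tid][self._index[tid][entity]]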


  • Making user input/math on data fast, unlike excel type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like excel are terribly slow for me so I'm trying to come up with another solution.

    Originally I used excel. A1 was the cell that contained the data and all other cells in use calculated something on A1, or on other cells, that all could be in the end traced to A1. A1 was like an element of an array; I then incremented it to go through all my data. This was way too slow.

    So the only other option I found originally was to hand code in c# the calculations inside a loop. Then I simply recompiled each time I changed my math. This was terribly slow to do and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand coding events for each cell-like calculation would also be very slow.

    Next I created an application to read Excel and to perfectly imitate it. Which is what I now use. Basically I write formulas onto a fraction of my data to get live results inside excel. Then my program reads excel, writes another c# program, compiles it, and runs that program which runs my excel-created formulas through a lot more data a whole lot faster. The advantage being my application dependency-sorts everything (or I could use events) so I don't have to (like excel does). And of course the speed.

    But now it's not a single application anymore. Instead it's 2 applications, one which only reads my formulas and writes another program. The other one being the result which only lives for a short while before I do other runs through my data with different formulas / settings. So I can't see multiple results at one time without introducing even more programs like a database or at least having the 2 applications talking to each other.

    My idea was to have a dll that would be written, compiled, loaded, and unloaded again and again. So a self-updating program, sort of. But apparently that's not possible without another appdomain, which means data has to be marshaled to be moved between the appdomains. Which would slow things down, not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention a huge problem with restarting an application again and again, which is having to reload ALL my data into memory again and again. But it's still a whole lot faster than excel.

    I'm really super puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and having it fast. My understanding is that it would have to do things like excel, which is to evaluate strings again and again. So my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn much to do some simple math on data. My understanding is it's either compiling my user defined stuff to a program or evaluating them from a string or something stupid again and again. And my only option is to probably switch operating systems or something to be able to have a program compile and run itself without stopping (writing/compiling dll, loading dll to program, unloading, and repeating).

    Can someone give me some idea on how computers work? Is anything better possible? Like a running program, that can accept user input and compile it and then unload it later? I mean heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the cave man days?
    Sorry, it's just so super frustrating not knowing what one can do, and can't do. If only I could understand and learn this stuff fast enough.
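
    The underlying question - can a running program accept a formula as input and then run it at reasonable speed over in-memory data, without restarting - has a yes answer on most runtimes. As a small illustration of the "compile the user's formula once, run it many times in-process" pattern, a Python sketch (the formula and column name are made up):

        # No restart and no re-reading of data: the text is parsed/compiled once,
        # then the compiled object is evaluated against every value already in memory.
        data = {"A1": [float(x) for x in range(1_000_000)]}

        formula = "A1 * 1.07 + 42"                         # user input
        code = compile(formula, "<user formula>", "eval")  # compiled once

        results = [eval(code, {"__builtins__": {}}, {"A1": v}) for v in data["A1"]]
        print(results[:3])

    On .NET the analogous facilities are expression trees and runtime code generation; generated code loaded into the same process avoids reloading the data between runs (unloading it again is the part that, on the .NET of that era, required a separate AppDomain).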


  • I need help with 2D collision response (of stacking rotating polygons, with friction and gravity, for a game)

    - by Register Sole
    Hi I am looking for suggestions on how to write a collision response for game programming purpose (so not a scientific simulation). I am dealing with 2D polygons that are rotating, and I want them to be able to stack. I also want friction and gravity. I have a detection mechanism that returns the separating axis, how long the polygons are overlapping, and up to 2 points of contact. For the response, I am currently using an impulse-based response, which main idea is: find the separating axis, length of overlap, and the point of contact (if there are two, pick a random point between to simulate averaged force. i believe there are better ways than this) separate the object (modifying their positions, taking into account of their masses. i do not separate them completely though, to keep track that they are colliding to reduce jitter) calculate normal force based on the coefficient of restitution as if there is no friction. calculate friction, as if there is no normal force. I also assume that the direction of the friction is the same throughout the collision. apply the two forces (which result in a rather inaccurate result, since each force is calculated as if the other is not present. for non-rotating bodies though, this method is exact) I am aware that this method requires the coefficient of friction to be sufficiently small due to the assumption that the direction of friction stays the same in a collision. Also, the result is visually satisfying if gravity is not present. However, when there is gravity, objects on ground jitter and drift (even with zero coefficient of restitution)! It also happens for stacking objects. Larger coefficient of restitution and gravity increase the jittering. I hope you can help me with this. Some things i would like to know more about is how to handle collision with two point of contacts (how to end up having an object sitting still on the ground?), how to reduce, and prevent if possible, jitter and drift (do people use the most accurate method possible, or is there a trick to overcome this?), and how to handle multiple objects collision (for example, in the case of stacking objects, how do I check collisions between all of them and keep them all stable at every frame so they don't jitter?). A total reformulation of my algorithm is also welcomed, as long as it works. Another thing to note is that I am not making a Physics game, so I only need a visually satisfying response (though a realistic response is preferable, if it is not performance-heavy). But surely jittering and drifting objects on flat ground are not at all acceptable. In addition, I am a Physics student, so feel free to talk about impulse and whatever needed. Finally, I'm sorry for the long post. I tried to be as concise as I can. Thank you for reading it! EDIT It seems what I didn't manage to come up all this time is to separate resting contact as a class of its own and how to solve them. Currently reading the paper suggested by Jedediah. More suggestions on the topic are welcome :) CASE CLOSED After reading various papers referenced in the paper, successfully implemented simultaneous impulse method (referring to the original paper by Erin Catto, [Catt05]). Thanks maaaan!! The paper is wonderful. The current system is visibly much better than the previous. Still haven't separated resting contact as a class of its own though, which brings me to my next question. Love you all! Haha (sorry, I'm just so happy thanks to you).
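
    For reference, the scalar normal impulse that sequential-impulse solvers (including the [Catt05] formulation mentioned above) compute and clamp per contact point, sketched in Python with invented field names; friction uses the same expression with the tangent substituted for the normal, and resting contact is roughly what falls out when this is iterated every frame with restitution near zero plus a small positional bias:

        def normal_impulse(a, b, contact, normal, restitution):
            """Scalar impulse j along the contact normal for two 2D rigid bodies.
            a/b need: position, velocity (x, y), angular_velocity, inv_mass, inv_inertia."""
            def cross(p, q):                       # 2D scalar cross product
                return p[0] * q[1] - p[1] * q[0]

            ra = (contact[0] - a.position[0], contact[1] - a.position[1])
            rb = (contact[0] - b.position[0], contact[1] - b.position[1])

            # velocity of the contact point on each body: v + w x r
            va = (a.velocity[0] - a.angular_velocity * ra[1],
                  a.velocity[1] + a.angular_velocity * ra[0])
            vb = (b.velocity[0] - b.angular_velocity * rb[1],
                  b.velocity[1] + b.angular_velocity * rb[0])
            vn = (vb[0] - va[0]) * normal[0] + (vb[1] - va[1]) * normal[1]
            if vn > 0:
                return 0.0                         # already separating

            ran, rbn = cross(ra, normal), cross(rb, normal)
            k = (a.inv_mass + b.inv_mass
                 + ran * ran * a.inv_inertia + rbn * rbn * b.inv_inertia)
            return -(1.0 + restitution) * vn / k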


  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline

    Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data.

    MySQL version: 5.1.x
    PostgreSQL version: 8.4.x

    I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away...

    Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

      sudo mkdir -p /home/postgres/main
      sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
      sudo chown -R postgres.postgres /home/postgres
      sudo chmod -R 700 /home/postgres
      sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

      sudo apt-get install postgresql pgadmin3
      sudo /etc/init.d/postgresql-8.4 stop
      sudo vi /etc/postgresql/8.4/main/postgresql.conf
      Change data_directory to /home/postgres/main
      sudo /etc/init.d/postgresql-8.4 start
      sudo -u postgres psql postgres
      \password postgres
      sudo -u postgres createdb climate
      pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope

    The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

      perl Makefile.PL
      sudo make install
      sudo apt-get install perl-doc (strangely, it is not called perldoc)
      perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

      sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
      mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back

    Recreate the structure in PostgreSQL as follows:

      pgadmin3 (switch to it)
      Click the Execute arbitrary SQL queries icon
      Open climate-pg-ddl.sql
      Search for TABLE " replace with TABLE climate." (insert the schema name climate)
      Search for on " replace with on climate." (insert the schema name climate)
      Press F5 to execute

    This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi

    At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL? How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

    Resources

    A fair bit of information was needed to get this far:
    - https://help.ubuntu.com/community/PostgreSQL
    - http://articles.sitepoint.com/article/site-mysql-postgresql-1
    - http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
    - http://pgfoundry.org/frs/shownotes.php?release_id=810
    - http://sqlfairy.sourceforge.net/

    Thank you!
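
    A possible path for the data half of the problem (a sketch, not a recipe: driver, credential and table names below are assumptions): stream each table out of MySQL and into PostgreSQL's COPY, then reset the sequences so new inserts continue after the highest migrated key - which also addresses the last question above. Tools such as pgloader automate most of this, and mysqldump's --compatible=postgresql flag gets an INSERT-based dump part of the way there, but a small script gives full control over type quirks:

        import io
        import MySQLdb                     # source driver (MySQL-python)
        import psycopg2                    # destination driver

        my = MySQLdb.connect(db="climate", user="user", passwd="password")
        pg = psycopg2.connect("dbname=climate user=postgres")

        def copy_table(table, id_col="id"):
            src, dst = my.cursor(), pg.cursor()
            src.execute("SELECT * FROM " + table)
            buf = io.StringIO()
            for row in src:
                # naive escaping: fine for numeric/plain text, not for embedded tabs
                buf.write("\t".join(r"\N" if v is None else str(v) for v in row) + "\n")
            buf.seek(0)
            dst.copy_expert("COPY climate.%s FROM STDIN" % table, buf)
            # keep new rows from colliding with migrated primary keys
            dst.execute(
                "SELECT setval(pg_get_serial_sequence('climate.%s', '%s'),"
                " (SELECT COALESCE(MAX(%s), 1) FROM climate.%s))"
                % (table, id_col, id_col, table))
            pg.commit()

        for t in ("station", "measurement"):   # hypothetical table names
            copy_table(t)

    The setval call assumes the converted DDL declared the keys as serial columns; if not, name the sequence directly. Indexes and constraints are best added after the data load, both for speed and because they come from the DDL conversion rather than from the data.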


  • All downloads being interrupted

    - by Jake
    System: Windows 7 Professional 64bit. 8GB RAM, Intel i5-2400 CPU, +300GB free on the hard drive. AVG Internet Security 2012 (enabled & disabled, with firewall enabled and disabled - no effect for either). This computer is less than a year old. Network: This problem is occurring on a single computer on a network with multiple computers. The router is a Motorola Netopia 3347-02 (DSL Modem/Wireless Router combined). The computer is plugged in directly to the modem, other computers are using the wireless successfully. The router has been reset. The only thing odd about the connection between the router and computer is that it is configured to allow RDP through, so it is assigned a static IP by the router and port forwarding is enabled for port 3389. Also, though I doubt it matters, a second wireless router is active behind this router providing a second network that some computers in the area use without issues. Details: All downloads initiated on this specific computer eventually fail, this includes streaming from youtube, specialized downloads (itunes), downloads from websites, FTP downloads, etc. Failure occurs with all browsers, but in chrome this is the process it takes: 1) Download begins normally, 2) At some point between (observed) 7MBs and 229MBs the download stops progressing (at this point, if watching chrome's task manager, you can see the network activity for the downloading tab drop to 0kps), 3) for some time the download sits there still attempting to complete, but will eventually display "123,049,871/0 B, Interrupted" (where the number is whatever it actually got to). The file I am using to test this is a very large .zip file located on a server I control, but the problem seems to occur on any site. The amount downloaded is completely random, and seems to be more time-based than anything (if I start a download immediately after the last one fails, it tends to get further than the last one). Small files can get through for this reason, though they can fail as well. In a test where I simultaneously downloaded the same file via HTTP (chrome) and FTP (windows explorer), both downloads failed at the same instant, though explorer displayed "Connection timed out" several minutes before chrome finally showed the download as interrupted. 
Other things I have tried based on advice given to people with similar/identical problems: Setting my MTU to 1492 (as described here: http://blog.thecompwiz.com/2011/08/networking-issues.html) Disabling write caching to the hard drive storing the download on an external device successfully transmitted +1GB file from one computer on the same network to this computer disabling indexing in the folder the download was being stored in disabling all security software checked to make sure all drivers were up to date read about 50 accounts with nearly exact descriptions of what I'm experiencing, none of which had a solution given Running Processes: Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 104,836 K smss.exe 332 Services 0 1,276 K csrss.exe 764 Services 0 5,060 K wininit.exe 820 Services 0 4,748 K csrss.exe 844 Console 1 23,764 K services.exe 876 Services 0 11,856 K lsass.exe 892 Services 0 14,420 K lsm.exe 900 Services 0 7,820 K winlogon.exe 944 Console 1 7,716 K svchost.exe 428 Services 0 12,744 K svchost.exe 796 Services 0 12,240 K svchost.exe 1036 Services 0 22,372 K svchost.exe 1084 Services 0 174,132 K svchost.exe 1112 Services 0 56,144 K svchost.exe 1288 Services 0 18,640 K svchost.exe 1404 Services 0 29,616 K spoolsv.exe 1576 Services 0 25,924 K svchost.exe 1616 Services 0 12,788 K AppleMobileDeviceService. 1728 Services 0 9,796 K avgwdsvc.exe 1820 Services 0 8,268 K mDNSResponder.exe 1844 Services 0 5,832 K w3dbsmgr.exe 1108 Services 0 43,760 K QBCFMonitorService.exe 1336 Services 0 16,408 K svchost.exe 2404 Services 0 28,240 K taskhost.exe 3020 Console 1 12,372 K dwm.exe 2280 Console 1 5,968 K explorer.exe 2964 Console 1 152,476 K WUDFHost.exe 3316 Services 0 6,740 K svchost.exe 3408 Services 0 5,556 K RAVCpl64.exe 3684 Console 1 13,864 K igfxtray.exe 3700 Console 1 7,804 K hkcmd.exe 3772 Console 1 7,868 K igfxpers.exe 3788 Console 1 10,940 K sidebar.exe 3836 Console 1 84,400 K chrome.exe 3964 Console 1 19,640 K pptd40nt.exe 4068 Console 1 5,156 K acrotray.exe 3908 Console 1 14,676 K avgtray.exe 3872 Console 1 9,508 K jusched.exe 4076 Console 1 4,412 K iTunesHelper.exe 1532 Console 1 87,308 K SearchIndexer.exe 3492 Services 0 36,948 K iPodService.exe 4136 Services 0 7,944 K BrccMCtl.exe 4276 Console 1 18,132 K splwow64.exe 4380 Console 1 32,600 K qbupdate.exe 4836 Console 1 24,236 K svchost.exe 4288 Services 0 20,700 K wmpnetwk.exe 3112 Services 0 9,516 K FNPLicensingService.exe 5248 Services 0 5,852 K QBW32.EXE 5508 Console 1 127,068 K QBDBMgrN.exe 5600 Services 0 42,252 K EXCEL.EXE 2512 Console 1 99,100 K LMS.exe 3188 Services 0 5,616 K UNS.exe 1600 Services 0 7,308 K axlbridge.exe 5260 Console 1 5,132 K chrome.exe 5888 Console 1 200,336 K chrome.exe 3536 Console 1 26,076 K chrome.exe 1952 Console 1 20,168 K chrome.exe 4596 Console 1 24,696 K chrome.exe 4292 Console 1 48,096 K chrome.exe 2796 Console 1 23,520 K Acrobat.exe 1240 Console 1 87,252 K 123w.exe 4892 Console 1 22,728 K calc.exe 1700 Console 1 12,636 K chrome.exe 1328 Console 1 28,888 K chrome.exe 3696 Console 1 47,012 K rundll32.exe 6320 Console 1 7,104 K chrome.exe 4928 Console 1 44,248 K AVGIDSAgent.exe 260 Services 0 12,940 K avgfws.exe 6052 Services 0 26,912 K avgnsa.exe 5064 Services 0 2,496 K avgrsa.exe 3088 Services 0 2,200 K avgcsrva.exe 2596 Services 0 380 K avgcsrva.exe 6948 Services 0 408 K StikyNot.exe 452 Console 1 14,772 K chrome.exe 4580 Console 1 28,200 K chrome.exe 4016 
Console 1 57,756 K svchost.exe 7140 Services 0 4,500 K chrome.exe 6264 Console 1 56,824 K chrome.exe 7008 Console 1 56,896 K chrome.exe 2224 Console 1 38,032 K taskhost.exe 612 Console 1 7,228 K chrome.exe 6000 Console 1 10,928 K chrome.exe 2568 Console 1 43,052 K chrome.exe 272 Console 1 75,988 K chrome.exe 7328 Console 1 53,240 K PaprPort.exe 7976 Console 1 137,152 K pplinks.exe 7500 Console 1 14,052 K ppscanmg.exe 5744 Console 1 18,996 K taskeng.exe 7388 Console 1 6,308 K SearchProtocolHost.exe 8024 Services 0 8,804 K SearchFilterHost.exe 7232 Services 0 7,848 K chrome.exe 8016 Console 1 37,440 K cmd.exe 7692 Console 1 3,096 K conhost.exe 7516 Console 1 5,872 K tasklist.exe 8160 Console 1 5,772 K WmiPrvSE.exe 7684 Services 0 6,400 K Any help with this would be greatly appreciated, I've been beating my head against a wall over this all day. This computer serves dual purpose as the main company document server and the Owner's work computer, it's fairly important it be fully functional and I cannot figure this out.
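
    Since the failures look time-based rather than size-based, one cheap experiment is to time a download from a server you control and log exactly when throughput stops, repeating a few times: if it always dies after roughly the same number of seconds rather than at a similar byte count, that points at something stateful in the path (modem NAT/session table, AV stream scanning, NIC power management) rather than at any one browser. A rough Python sketch (the URL is a placeholder for your own test file):

        import time
        import urllib.request

        URL = "http://example.com/big.zip"      # hypothetical test file on your own server
        CHUNK = 64 * 1024

        start = last = time.time()
        total = 0
        try:
            with urllib.request.urlopen(URL, timeout=30) as resp:
                while True:
                    block = resp.read(CHUNK)
                    if not block:
                        break
                    total += len(block)
                    now = time.time()
                    if now - last >= 5:          # progress line every ~5 seconds
                        print("%6.0fs  %12d bytes" % (now - start, total))
                        last = now
        except Exception as exc:
            print("died after %.0f s at %d bytes: %s" % (time.time() - start, total, exc))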


  • Downloading Python 2.5.4 (from official website) in order to install it

    - by brilliant
    I was quite hesitant about whether I should post this question here on "StackOverflow" or on "SuperUser", but finally decided to post it here as Python is more a programming language rather than a piece of software. I've been recently using Python 2.5.4 that is installed on my computer, but at the moment I am not at home (and won't be for about two weeks from now), so I need to install the same version of Python on another computer. This computer has Windows XP installed – just like the one that I have at home. The reason why I need Python 2.5.4 is because I am using “Google App Engine”, and I was told that it only supports Python 2.5 However, when I went to the official Python page for the download, I discovered that certain things have changed, and I don’t quite remember where exactly from that site I had downloaded Python 2.5.4 on my computer at home. I found this page: http://www.python.org/download/releases/2.5.4/ Here is how it looks: (If you can’t see it here, please check it out at this address: http://brad.cwahi.net/some_pictures/python_page.jpg ) A few things here are not clear to me. It says: For x86 processors: python-2.5.4.msi For Win64-Itanium users: python-2.5.4.ia64.msi For Win64-AMD64 users: python-2.5.4.amd64.msi First of all, I don’t know what processor I am using – whether mine is “x86” or not; and also, I don’t know whether I am an “Win64-Itanium” or an “Win64-AMD64” user. Are Itanium and AMD64 also processors? Later it says: Windows XP and later already have MSI; many older machines will already have MSI installed. I guess, it is my case, but then I am totally puzzled as to which link I should click as it seems now that I don’t need those three previous links (as MSI is already installed on Windows XP), but there is no fourth link provided for those who use “Windows XP” or older machines. Of course, there are these words after that: Windows users may also be interested in Mark Hammond's win32all package, available from Sourceforge. but it seems to me that it is something additional rather than the main file. So, my question is simple: Where in the official Python website I can download Python 2.5.4, precisely, which link I should click?
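
    The three downloads differ by CPU architecture, not by Windows edition: python-2.5.4.msi is the normal 32-bit x86 build (what virtually every Windows XP desktop needs), the ia64 file is for Itanium servers, and the amd64 file is for 64-bit Windows. The sentence about MSI only means the installer technology is already present on XP; there is no fourth download to look for. If any Python is already installed on the machine, it can report the architecture itself; otherwise "echo %PROCESSOR_ARCHITECTURE%" in a command prompt gives the same answer (x86 vs AMD64). A tiny check script, for what it's worth:

        # Run with any existing Python to see what this machine/interpreter is.
        import platform, struct

        print(platform.machine())            # e.g. 'x86' or 'AMD64'
        print(platform.architecture()[0])    # '32bit' or '64bit' build of this Python
        print(struct.calcsize("P") * 8)      # pointer size in bits: 32 or 64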


  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. the difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this: SMART error logs shows no errors. Normally when a disk starts acting up, SMART my pass, but it still records a read-error in the error log. SMART self-test (smartctl -t long /dev/sda) completes without errors. re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed. dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors. What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again, this is a disk forensic question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are logs and such. This is the smartctl -a output: smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family Device Model: ST3120026A Serial Number: 5JT1CLQM Firmware Version: 3.06 User Capacity: 120,034,123,776 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 6 ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2 Local Time is: Mon Jul 1 21:18:33 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 24) The self-test routine was aborted by the host. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 85) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 050 046 006 Pre-fail Always - 47766662 3 Spin_Up_Time 0x0003 097 096 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 10 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 31 7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 820305 9 Power_On_Hours 0x0032 048 048 000 Old_age Always - 46373 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 605 194 Temperature_Celsius 0x0022 036 065 000 Old_age Always - 36 195 Hardware_ECC_Recovered 0x001a 050 046 000 Old_age Always - 47766662 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 196 000 Old_age Always - 6 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Aborted by host 80% 46361 - # 2 Extended offline Completed without error 00% 46358 - # 3 Short offline Completed without error 00% 12046 - # 4 Extended offline Completed without error 00% 10472 - # 5 Short offline Completed without error 00% 10471 - # 6 Short offline Completed without error 00% 10471 - # 7 Short offline Completed without error 00% 6770 - # 8 Extended offline Aborted by host 90% 5958 - # 9 Extended offline Aborted by host 90% 5951 - #10 Short offline Completed without error 00% 5024 - #11 Extended offline Aborted by host 80% 5024 - #12 Short offline Completed without error 00% 3697 - #13 Short offline Completed without error 00% 237 - #14 Short offline Completed without error 00% 145 - #15 Short offline Completed without error 00% 69 - #16 Extended offline Completed without error 00% 68 - #17 Short offline Completed without error 00% 66 - #18 Short offline Completed without error 00% 49 - #19 Short offline Completed without error 00% 29 - #20 Short offline Completed without error 00% 29 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors): [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00 [1755091.211166] end_request: I/O error, dev sda, sector 150908216
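
    Two further data points worth collecting before binning the drive. First, the raw UDMA_CRC_Error_Count of 6 together with the DID_BAD_TARGET host errors is at least consistent with a cable/controller problem rather than the platters, which would also explain why the disk keeps passing its own tests. Second, a repeated timed read scan that logs which regions are slow or error out costs nothing and, unlike a single dd pass, shows whether failures reproduce at the same offsets; smartctl's selective self-tests (-t select) and a read-only badblocks pass can serve the same purpose. A rough read-only Python sketch (device path is an assumption; needs root):

        import time

        DEV = "/dev/sda"                 # disk under test, read-only
        CHUNK = 4 * 1024 * 1024          # 4 MiB per read

        with open(DEV, "rb", buffering=0) as disk:
            offset = 0
            while True:
                t0 = time.time()
                try:
                    block = disk.read(CHUNK)
                    if not block:
                        break                           # end of device
                except OSError as exc:
                    print("read error at byte %d: %s" % (offset, exc))
                    offset += CHUNK
                    disk.seek(offset)                   # skip past the bad spot
                    continue
                dt = time.time() - t0
                if dt > 1.0:
                    print("slow region at byte %d: %.1f s" % (offset, dt))
                offset += len(block)
        print("scan finished at byte %d" % offset)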


  • Getting HAPROXY to redirect http to https in users browser session

    - by Jon
    We are currently using a Internet cloud provider to host our SaaS platform. The platform consists of a Firewall - Cloud Provider SLB - - Apache Web Server - HAPROXY SLB - Liferay Platform We have had to use HAPROXY because of an issue with the cloud providers SLB that meant we were unable to use it for load balancing the Liferay platform applications. I have implemented HAPROXY in our secure tier and that seems to do the trick of load balancing the requests quite adequately. However during testing we encountered a functional issue whereby selecting a sub-menu from the web portal resulted in the application hanging, using an http analyser we saw that the request being passed back to the users browser was in http, from discussing this with the software vendor it transpires that the Liferay application has some hard-coded http links, and that other customers have worked around this by using physical NLB's such as F5 and redirecting the http traffic to https. The entry in the HAPROXY logs reads: haproxy[2717]: haproxy[2717]: <Apache Web Agent>:37957 [11/Apr/2013:08:07:00.128] http-uapi uapi/<ServerName> 0/0/0/9/10 200 4912 - - ---- 4/2/1/2/0 0/0 "GET /servicedesk/controller?docommand=renderradform&!key=esd_sfb001_frm_feedback_forms_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365667773097&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&sso_token=ALiYv2UqzLsAhSw1ZchRDlCHlq44Bhj9&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fworkflow-tools&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-05-18%2015:05:31.38 HTTP/1.1" :37957 [11/Apr/2013:08:07:00.128] http-uapi uapi/<ServerName> 0/0/0/9/10 200 4912 - - ---- 4/2/1/2/0 0/0 "GET /servicedesk/controller?docommand=renderradform&!key=esd_sfb001_frm_feedback_forms_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365667773097&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&sso_token=ALiYv2UqzLsAhSw1ZchRDlCHlq44Bhj9&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fworkflow-tools&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-05-18%2015:05:31.38 HTTP/1.1" The corresponding HTTP browser entry shows: http://<FQDN of 
ServiceDesk>/servicedesk/controller?docommand=renderradform&!key=esd_org019_frm_contact_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365665987887&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&sso_token=3NxsXYORMPp32SwL8ftVUCMH2QdWLH82&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fapplication-setup&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-10-26%2019:00:25.08 From reading through the forums and other sites it looks like we should be use to use HAPROXY to redirect the traffic to https, but try as I might I cant get it to work. This is our HAPROXY configuration: global log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 4000 user haproxy group haproxy daemon stats socket /var/lib/haproxy/stats defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend http-openfire bind *:7070 default_backend openfire backend openfire balance roundrobin server <serverName> <IPv4 Address>:7070 check server <serverName> <IPv4 Address>:7070 check frontend http-uapi bind *:7080 default_backend uapi backend uapi balance roundrobin server <serverName> <IPv4 Address>:7080 check server <serverName> <IPv4 Address>:7080 check frontend http-sec bind *:8080 default_backend sec backend sec balance roundrobin server <serverName> <IPv4 Address>:8080 check server <serverName> <IPv4 Address>:8080 check frontend http-wall bind *:9080 default_backend wall backend wall balance roundrobin server <serverName> <IPv4 Address>:9080 check server <serverName> <IPv4 Address>:9080 check frontend http-xmpp bind *:9090 default_backend xmpp backend xmpp balance roundrobin server <serverName> <IPv4 Address>:9090 check server <serverName> <IPv4 Address>:9090 check frontend http-aim bind *:10080 default_backend aim backend aim balance roundrobin server <serverName> <IPv4 Address>:10080 check server <serverName> <IPv4 Address>:10080 check frontend http-servicedesk bind *:8081 default_backend servicedesk backend servicedesk balance roundrobin server <serverName> <IPv4 Address>:8081 check server <serverName> <IPv4 Address>:8081 check listen stats :1936 mode http stats enable stats hide-version stats realm Haproxy\ Statistics stats uri / stats auth haproxy:<Password> I have tried following the articles listed posted on http://stackoverflow.com/questions/13227544/haproxy-redirecting-http-to-https-ssl and http://parsnips.net/haproxy-http-to-https-redirect/ but that hasn't made any difference. Am I on the right track with this or are we trying to achieve the impossible?, I'm hoping I'm just being an idiot and one of you good people can point me in the right direction.
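
    For what it's worth, the usual shape of an http-to-https redirect in HAProxy is sketched below, but two caveats apply to this particular setup. The directives depend on the HAProxy version ("redirect scheme" needs 1.5 or later; 1.4 only has "redirect prefix"/"redirect location"), and because TLS is terminated upstream at the cloud SLB/Apache rather than on HAProxy, the condition cannot be ssl_fc and has to key off whatever the tier in front marks the request with - an X-Forwarded-Proto header is assumed here. Also note a redirect only fixes requests that arrive over http; if Liferay keeps emitting absolute http:// links, the usual companion step is to tell the portal its public scheme/host or rewrite the links/Location headers at Apache. Illustrative snippet, following the config style above:

        frontend http-uapi
            bind *:7080
            acl from_https hdr(X-Forwarded-Proto) -i https
            redirect scheme https code 301 if !from_https    # HAProxy 1.5+
            # on 1.4:  redirect prefix https://portal.example.com code 301 if !from_https
            default_backend uapi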


  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (sibelius, audio, midi, live, logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list: - make complete filesystem clone with antonio diaz's ddrescue - run Disk Warrior on copy, repair whatever errors occurred - wipe out all ACLs on entire drive - set all permissions to the same value - wide open 777 - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive - transfer data to newly formatted drive folder by folder, resetting the spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own - no trouble. It appears almost as thought it may not be the content but the quantity or specific combination of data that results in problems) - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive. Restart Splotlight indexing by adding drive to Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? ---update 2-6-11 Since I have not received any responses except the one below which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID where PID is the mdworker process ID to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: After indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving. 
I then proceeded to execute the same command on mds to see what it was doing and here's what I get, repeatedly: 501 57 mds 21 / 501 57 mds 21 /Volumes/Sno Leppard 501 57 mds 21 /Volumes/Tiger 501 57 mds 21 /Volumes/Leppard 501 57 mds 21 /Volumes/Disk Warrior 501 57 mds 21 /Volumes/ONM Data These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from SPotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions - what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M __final edit__ I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID" where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing and eventually the spotlight icon would say 6 hours remaining and all mdworkers were gone, mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem - run opensnoop on all instances of mds and mdworker and wait patiently for wdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio related files - performer, live, SD2 - things like that), I will edit again to let you all know. Thanks for ay attempts to help, M


  • Installing Cygwin C and C++ compilers for NetBeans IDE 7.2

    - by user1294663
    I am very new to Cygwin, C, C++ and NetBeans IDE 7.2. My PC is running MICROSOFT WINDOWS 7 OS. I have read the documentation on how to install the Cygwin C and C++ compilers: http://netbeans.org/community/releases/72/cpp-setup-instructions.html#compilers

    I have tried to run the Cygwin setup.exe that has the most recent version of the Cygwin DLL, 1.7.16-1. I am not very sure which exact packages to install when the Cygwin setup.exe installer prompted for the selection of packages to download and install. I want to install the Cygwin C and C++ compilers so that I can create C and C++ projects using NetBeans 7.2. I selected those packages that contain the following names: gcc, g++, gdb and make. Then I proceeded to install the selected packages.

    The installation took up a long time so I stopped after about 45 minutes or so. I browsed the installation folder and I saw some packages I selected were installed. I noticed that some packages came in some sort of "zip" file with a tar.gz extension. I added the folder path into the PATH variable in the Windows 7 environment variables window.

    I think this command works:

      C: cygcheck -c cygwin

    but the rest don't work, I think:

      C: gcc --version
      C: g++ --version
      C: make --version
      C: gdb --version

    I tried to create the C/C++ project using the NetBeans IDE 7.2 and the IDE pops out a dialog message saying that no C/C++ compilers were found. Have I made some mistake here? Like installing the wrong packages or something else? Are there packages shown in the Cygwin setup.exe installer that contain the exact names and exact versions compatible with NetBeans IDE 7.2? This I am not too sure about, because I think I didn't really see some required packages with exact names and versions.

    My question is: Which exact packages do I install using the Cygwin setup.exe installer so that I can create C & C++ projects using NetBeans IDE 7.2? And what other steps do I have to take to ensure a complete, successful installation? Do I have to wait for all the selected required packages to be installed? I WOULD LIKE TO KNOW THE EXACT NAMES AND THE VERSIONS FOR THE REQUIRED PACKAGES (NAMES AND VERSIONS DISPLAYED IN THE CYGWIN SETUP.EXE INSTALLER WHEN PROMPTED) NEEDED FOR C & C++ PROGRAMMING USING NETBEANS IDE 7.2??


  • Workaround with paths in vista

    - by Argiropoulos Stavros
    Not really a programming question but i think quite relative. I have a .exe running in my windows XP PC. This exe needs a file in the same directory to run and has no problem finding it in XP BUT in Vista(I tried this in several machines and works in some of them) fails to run. I'm guessing there is a problem finding the path.The program is written in basic(Yes i know..) I attach the code below. Can you think of any workarounds? Thank you The exe is located in c:\tools Also the program runs in windows console(It starts but then during the execution cannot find a custom file type .TOP made by the creator of the program) ' PROGRAMM TOP11.BAS DEFDBL A-Z CLS LOCATE 1, 1 COLOR 14, 1 FOR i = 1 TO 80 PRINT "±"; NEXT i LOCATE 1, 35: PRINT "?? TOP11 ??" PRINT " €€‚—‚„‘ ’— ‹„’†‘„— ‘’† „”€„€ ’†‘ ‡€€‘‘†‘ ‰€ † „.‚.‘.€. " COLOR 7, 0 PRINT "-------------------------------------------------------------------------------" PRINT INPUT "ƒ?©« «¦¤ ©¬¤«?©«? ¤???... : ", Factor# INPUT "¤¦£ ¨®e¦¬ [.TOP] : ", topfile$ VIEW PRINT 7 TO 25 file1$ = topfile$ + ".TOP" file2$ = topfile$ + ".T_P" file3$ = "Syntel" OPEN file3$ FOR OUTPUT AS #3 PRINT #3, " ‘¬¤«?©«?? ¤??? = " + STR$(Factor#) + " †‹„‹†€: " + DATE$ CLOSE #3 command1$ = "copy" + " " + file1$ + " " + file2$ SHELL command1$ '’¦ ¨®e¦ .TOP ¤« ¨a­«  £ «¤ ?«a?¥ .T_P OPEN file2$ FOR INPUT AS #1 OPEN file1$ FOR OUTPUT AS #2 bb$ = " \\\ \ , ###.#### ###.#### ####.### ##.### " DO LINE INPUT #1, Line$ Line$ = RTRIM$(LTRIM$(Line$)) icode$ = LEFT$(Line$, 1) IF icode$ = "1" THEN Line$ = " " + Line$ PRINT #2, Line$ PRINT Line$ ELSEIF icode$ = "2" THEN Line$ = " " + Line$ PRINT #2, Line$ PRINT Line$ ELSEIF icode$ = "3" THEN Number$ = MID$(Line$, 3, 6) Hangle = VAL(MID$(Line$, 14, 9)) Zangle = VAL(MID$(Line$, 25, 9)) Distance = VAL(MID$(Line$, 36, 9)) Distance = Distance * Factor# Height = VAL(MID$(Line$, 48, 6)) PRINT #2, USING bb$; icode$; Number$; Hangle; Zangle; Distance; Height PRINT USING bb$; icode$; Number$; Hangle; Zangle; Distance; Height ELSE END IF LOOP UNTIL EOF(1) VIEW PRINT CLS LOCATE 1, 1 PRINT " *** ’„‘ ’“ ‚€‹‹€’‘ *** " END
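
    A likely explanation for the Vista-only failures: the program looks for its .TOP file in the current working directory, not in the directory the exe lives in, and on the failing machines it is being launched with some other working directory (or, if it sits under Program Files, UAC file virtualization redirects its reads and writes). Two low-effort workarounds: launch it from a shortcut whose "Start in" field is c:\tools (or a one-line batch file: cd /d C:\tools && top11.exe), or use a tiny launcher that forces the working directory first. A hedged Python sketch of such a launcher - the exe name is an assumption:

        import os
        import subprocess
        import sys

        EXE = r"C:\tools\top11.exe"      # hypothetical path to the compiled program

        # Run the program with its own folder as the working directory, so relative
        # file names like "SOMETHING.TOP" resolve next to the exe instead of wherever
        # the program happened to be started from.
        os.chdir(os.path.dirname(EXE))
        sys.exit(subprocess.call([EXE]))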


  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties: What we need HA a scalable storage backend offline/disconnected operation on the client to account for network outages cross-platform access client-side access from certainly Windows (probably XP upwards), possibly Linux backend integrates with AD/LDAP (permission management (user/group management, ...)) should work reasonably well over slow WAN-links Another problem is that we don't really know all possible use cases here, if people need to be able to have concurrent access to shared files or if they will only be accessing their own files, so a possible solution needs to account for concurrent access and how conflict management would look in this case from a user's point of view. This two years old blog posts sums up the impression that I have been getting during the last couple of days of research, that there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions but that there is none that supports disconnected operation nicely and natively, but I am hoping that we have missed an obvious solution. What we have tried OpenAFS We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week but there are several problems with it: it's a real pain to set up there are no official RHEL/CentOS packages the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, this is an absolute no-go Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8, the current client for the 1.7 does but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working and the whole setup has been so unstable and complicated to setup that it's just not an option for production. Samba + Unison Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation we went for a simpler idea that would sync files against a Samba server using Unison. This has the following advantages: Samba integrates with ADs; it's a pain but can be done. Samba solves the problem of remotely accessing the storage from Windows but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need a HA Samba setup on top of that to maintain HA which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before and I could not silently failover between servers. Even when online, you are working with local files which will result in more conflicts than would be necessary if a local cache were only touched when disconnected It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well or even at all. Samba + "Offline Files" After that we became a little desparate and gave Windows "offline files" a chance. 
We figured that having something that is inbuilt into the OS would reduce administrative efforts, helps blaming someone else when it's not working properly and should just work since people have been using this for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with (silent! there is only a tiny notification in Windows explorer in the statusbar, which doesn't even open Sync Center if you click on it!) undeletable files on the server (!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file, everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all which would mean those files become unavailable if the connection drop In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions. Summary Unless there is another fault-tolerant DFS that supports Windows natively I assume that stacking a HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?


  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com, asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils but it's far from a perfect solution. I've evaluated a handful of options, all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual license, but I'm more interested in finding the right tool than anything else. Not interested that anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET. Requirements: Manageable footprint (1-100 files, under 30 MB installed) Run on Windows XP and Server (2003+) No installer (exe, msi) Works with external pipes, processes, and files Support for MS SQL Server or ODBC connections Bonus Points: Open Source FFI for calling functions in native DLLs GUI support (native or gtk, wx, fltk, etc) Linux, AIX, and/or OS X support Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development Able to package or compile scripts into executables So far I've tried: Ruby: 148 MB on disk, 23000 files Portable Python: 54 MB on disk, 2800 files Strawberry Perl: 123 MB on disk, 3600 files REBOL: Great, except closed source and no MSSQL or ODBC in free version Squeak Smalltalk: Great, except poor support for scripting ---- cut: points of clarification ---- Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to near a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and don't have my own environment setup yet. I may have temporary admin rights on the box, but I'm not the admin for the machine- I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, suddenly it becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and get the problem solved. That's why I want something that can be copied and run in the manner of a portable app. What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. 
Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine on which I've never logged in. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1 (2 17GB SCSI disks, Seagate ST318417W) and RAID 5 (3 4GB SCSI disks, 2 Seagate ST34573W and 1 ST34572W). We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure of this server through the November target as is, without restoring the RAID redundancy — it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact match "spares" lying around for both drives, but the spares are in unknown condition. I tried just swapping them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into Netware, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking for a new 4GB SCSI drive, or even which exact SCSI system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (hyper-v), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of netware and dos than I ever developed, or, if I did, have since forgotten (I'm not exactly a dos neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a hyper-v vm from an old netware server, a line on a floppy with better software for the RAID controller, a recommendation on a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.
Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things. Update 2: Just a progress report that I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and netware volumes on my existing server, and figuring out from that information what modules need added to netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then. Final Update (Jan 5, 2011): I was able to get spares working in both raid arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our erp software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • Media Archive System with branches?

    - by Ian McEwen
    In short, how can I get VCS features (revisioning, branching, and deduplication) for a media collection that's far too large for most/all VCS systems? Background: I have a 300GB music folder; unfortunately, I only have the hard drive space for this on my desktop system. However, a good portion of my collection is FLAC; therefore, I could theoretically have a space-optimized version in which I transcode all the FLAC to mp3 or some other lossy format, and use only that version on the laptop. However, a portion of my collection isn't FLAC. And that which isn't FLAC shouldn't be transcoded to an equivalent format; it won't have any space savings, which is the point. Moreover, it shouldn't be duplicated: the mp3/ogg portions of the collection should probably be exactly the same files. Thoughts: One solution is to have format-specific organization of my music folders, and use some script to transcode the FLAC directory to mp3 or such into another directory (see the sketch below). Another is some sort of hack using entirely separate copies and symbolic links for deduplication, or something similar. But these also have the disadvantage of lacking versioning; I'd like to be able to reorganize my music collection, retag things, etc. and save history. This isn't key, but would be awfully nice. I can't see it as entirely unreasonable to set up VCS hooks or something equivalent to keep directory structure synced between two copies, update tags, and transcode FLAC automatically into the space-optimized copy. Basically, the system I really want is a version control system. Two branches: one archival/desktop branch including the FLAC, one space-optimized/laptop branch without it; most VCSes would deal well with whole chunks being the same files by compressing in a reasonable way (i.e. not keeping two copies of the same data). I could also do a lot of what I talk about above with hooks. But I don't know of any VCS that would deal with a 300GB repository with almost 20k files. Many of them would just not even initialize the whole affair; others would just do it inexpressibly slowly or otherwise badly. checkpoint looks like it's designed for something close (it's at least for media), but wouldn't do deduplication well (and I'm not convinced I'd be able to script it to do things like automatic transcoding and directory-structure syncing). So. Is there anything out there that can do all this, or should I consider it a programming project?
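    For the scripted half of this (keeping the space-optimized copy derivable from the archival one), a small script is usually enough even before any VCS is involved. The following is a minimal sketch in Python, not a finished tool: the paths are placeholders, the ffmpeg invocation is one plausible choice of encoder settings, and it assumes both trees live on the same filesystem so the non-FLAC files can be hardlinked rather than copied.

        # Minimal sketch: mirror SRC into DST, transcoding FLAC to mp3 and
        # hardlinking everything else so non-FLAC files are never duplicated.
        import os
        import subprocess

        SRC = "/data/music"            # archival/desktop tree (with FLAC)
        DST = "/data/music-small"      # space-optimized tree (synced to the laptop)

        for root, dirs, files in os.walk(SRC):
            rel = os.path.relpath(root, SRC)
            out_dir = os.path.join(DST, rel)
            os.makedirs(out_dir, exist_ok=True)
            for name in files:
                src_file = os.path.join(root, name)
                base, ext = os.path.splitext(name)
                if ext.lower() == ".flac":
                    dst_file = os.path.join(out_dir, base + ".mp3")
                    if not os.path.exists(dst_file):
                        # -qscale:a 2 is roughly "high-quality VBR" for libmp3lame.
                        subprocess.check_call(["ffmpeg", "-loglevel", "error", "-i",
                                               src_file, "-qscale:a", "2", dst_file])
                else:
                    dst_file = os.path.join(out_dir, name)
                    if not os.path.exists(dst_file):
                        os.link(src_file, dst_file)   # hardlink: no extra space used

    Run periodically, or later from a commit hook once some VCS is in the picture, this keeps the laptop branch reproducible from the desktop one, so only the lossless originals and the transcode policy ever need real versioning.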

    Read the article

  • Sporadic disk clicking sound

    - by Abdó
    Hi, I'm having some unusual and sporadic hard disk clicking issues. Here is a chronological description of the facts. I'm using an ASUS P6T-SE with an Intel Core i7, 6GB RAM, a 600W power supply and ATI 4670 graphics, running Ubuntu 10.10. About one month ago my hard disk (SATA II Seagate Barracuda 1Tb 7200 rpm) started making a clicking sound: a sort of loud tic-tac, every second or so, when involved in disk activity. The system was clearly slower than before at disk access, but it was functional and I could not find any sign of trouble in the Linux logs. I disconnected the disk and tried an older SATA drive I had around: no problem with it. Then I reconnected the Seagate disk, and the problem was mysteriously gone. Ubuntu booted normally, usual speed, no clicking. A couple of weeks later, the problem reappeared. I tried disconnecting and reconnecting (as it somehow solved the problem before) without luck. So, even though it was a rather new drive, I assumed it was a hardware issue, made backups and bought a new drive. The new drive is a SATA II Seagate Barracuda 1.5 Tb 7200 rpm. I installed both drives at the same time, with the intention of transferring my files from one to the other. To my surprise, when I booted the computer with both drives, both started making the clicking sound! Even worse, I removed the old drive, leaving the unformatted new drive connected, and booted from a LiveCD. It kept clicking! Puzzled by this, I tried both drives on my laptop with a SATA to USB cable. The moment I connected either of them, it made one or two unusual clicks, then immediately stopped and worked normally. The old drive, which I thought was almost dead, was working like a charm as if nothing had happened. Then I thought: "ok, it must be the motherboard. Let's try again". So, I reconnected the old drive to the ASUS P6T motherboard (the same cables and SATA port as before), and it worked as if nothing had happened! The problem was gone again. The new 1.5 Tb drive was also working ok: no clicking nor slowdown. So I left the old 1Tb disk connected and kept using the computer daily for 3 weeks, until today it happened again. Now I don't really know what to do or check. I'm not even sure if it is a hardware issue any more! This is rather annoying, as it seems to happen with a period of 2 or 3 weeks and I have no means of forcing it to happen. Does anyone have a clue as to what can cause this behaviour, or any suggestions of things I should check when it happens again? What I did today was check some SMART parameters. Error log (smartctl -l error /dev/sda): no errors. Short self-test (smartctl -t short /dev/sda): no errors. Disk health check (smartctl -H /dev/sda): passed. And here are the vendor-specific parameters (smartctl -A /dev/sda), which I'm not quite sure how to interpret.
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   120   099   006    Pre-fail Always  -           235962588
  3 Spin_Up_Time            0x0003   095   095   000    Pre-fail Always  -           0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age  Always  -           187
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail Always  -           0
  7 Seek_Error_Rate         0x000f   072   060   030    Pre-fail Always  -           16348045
  9 Power_On_Hours          0x0032   096   096   000    Old_age  Always  -           3590
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail Always  -           0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age  Always  -           94
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age  Always  -           0
184 End-to-End_Error        0x0032   100   100   099    Old_age  Always  -           0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age  Always  -           0
188 Command_Timeout         0x0032   100   097   000    Old_age  Always  -           4295164029
189 High_Fly_Writes         0x003a   100   100   000    Old_age  Always  -           0
190 Airflow_Temperature_Cel 0x0022   070   057   045    Old_age  Always  -           30 (Lifetime Min/Max 19/31)
194 Temperature_Celsius     0x0022   030   043   000    Old_age  Always  -           30 (0 18 0 0)
195 Hardware_ECC_Recovered  0x001a   037   026   000    Old_age  Always  -           235962588
197 Current_Pending_Sector  0x0012   100   100   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age  Always  -           0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age  Offline -           73950746906346
241 Total_LBAs_Written      0x0000   100   253   000    Old_age  Offline -           1832967731
242 Total_LBAs_Read         0x0000   100   253   000    Old_age  Offline -           3294986902
Any clue to this mystery will be really welcome. Thank you very much!!
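    A note on reading that table: the drive only treats an attribute as failing when its normalized VALUE drops to or below THRESH, and WORST records the lowest normalized value ever seen. A small, hypothetical helper can flag the interesting rows automatically; this is a sketch in Python that assumes the usual plain-text layout of smartctl -A output from smartmontools.

        # Minimal sketch: flag SMART attributes whose normalized VALUE or WORST
        # is at or below the drive's THRESH. Assumes the usual smartctl -A layout.
        import subprocess

        def failing_attributes(device="/dev/sda"):
            out = subprocess.check_output(["smartctl", "-A", device]).decode("utf-8", "replace")
            flagged = []
            for line in out.splitlines():
                fields = line.split()
                # Attribute rows start with a numeric ID and have at least 10 columns.
                if len(fields) >= 10 and fields[0].isdigit():
                    name = fields[1]
                    value, worst, thresh = int(fields[3]), int(fields[4]), int(fields[5])
                    if thresh > 0 and (value <= thresh or worst <= thresh):
                        flagged.append((fields[0], name, value, worst, thresh))
            return flagged

        if __name__ == "__main__":
            for attr_id, name, value, worst, thresh in failing_attributes():
                print("attribute %s (%s): value=%d worst=%d thresh=%d"
                      % (attr_id, name, value, worst, thresh))

    On the numbers pasted above nothing is at its threshold, which matches the clean -H result; for what it's worth, very large raw values for Raw_Read_Error_Rate and Hardware_ECC_Recovered are normal on Seagate drives, which report them as cumulative counters rather than error counts.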

    Read the article

  • Problem with several USB devices on Windows XP. How to diagnose USB device?

    - by Lukasz Baran
    Hello all, I've recently bought some electronic devices (e.g. a Sony PRS-600 ebook reader and an HP iPAQ PDA) which connect to the PC using USB ports. These devices have their own batteries and they are recharged over USB. However, I am experiencing a problem with the ebook reader, and a similar thing happens with my PDA (though more rarely). Sometimes I am unable to make it start recharging when it's connected to a USB port. The LED on my reader lights up and the icon in the system tray appears; both indicate that the device has been connected. However, the device should start recharging and it should appear in Windows XP as an 'explorable' device (just like a pendrive, it offers some drives). Neither of these happens :( This problem appears on two PCs - a desktop and a laptop. Both have Windows XP SP3 installed. And what's interesting and surprising to me is the fact that these devices sometimes work without any problems. I mean, I just plug them in and they work properly. Besides that, there are some computers (I have tested a variety of possible conditions) on which the problem does not appear! I am confused and, on the other hand, determined to find out the real cause of such strange behavior. Maybe I'm wrong, but I always assumed I was using deterministic machines; the described situation makes me feel like I was dreaming :) And that's why I'm asking you here. Are there any tools that could help me diagnose USB ports? Or maybe someone knows the solution? I have a lot of experience with low-level programming (mostly from the times of MS-DOS applications), but I'm not an expert when it comes to WinXP technology. If this were a problem with a network connection, I would know what to do - I know how to track network traffic - but with USB ports I feel powerless. I have a feeling that the problem may lie in the way WinXP handles USB devices. Personally, I hate the auto-discovery and P'n'P features available in Windows. I know that some of you may suggest moving to Linux or another *nix system. Unfortunately, I have to use WinXP in my professional life, so that's rather impossible. Besides that, I don't consider XP a total disaster. Any help from you will be appreciated!

    Read the article

  • What are the most likely bottlenecks determining the performance of CamStudio screen recording?

    - by Steve314
    When doing screen recording, I can get a frame rate of maybe 15 frames per second for the full screen on my 1080p monitor using the XVID codec. I can increase the speed a bit by recording a region, changing screen modes, and tweaking other settings, but I'm curious what hardware upgrades might give me the biggest bang for my buck. My PC is budget, but modern... Athlon II X4 645 (3.1GHz, quad core, limited cache) processor. 4GB single channel DDR3 1066 RAM. ASRock motherboard with NVidia GeForce 7025/nForce 630a chipset. ATI Radeon HD 5450 graphics card - 512MB on board, not configured to steal system RAM. I dual-boot Windows XP and Windows 7. For the moment, XP is my bigger performance concern as it's still my getting-things-done O/S as opposed to my browser-host O/S. My goal is to make a few programming-related tutorials. For a lot of that I don't need screen recording - I can make up some slides, record audio with the PC switched off, yada yada. When I do need screen recording, I'll mostly be recording Notepad++, Visual Studio or a command prompt. Occasionally, I may be recording some kind of graphics or diagram program and using my pre-Bamboo cheap Wacom tablet - I have the CS2 versions of Photoshop and Illustrator, but I'd much more likely be using Microsoft Paint. Basically, what I'll be recording won't be making huge demands on the machine - but recording a fair number of pixels (720p preferred) will be useful. What's particularly weird - not so long ago I still had a five-year-old Pentium 4 based PC, and (with the same 1080p monitor) it could record at not far from the same frame rate. So clearly the performance issues are more subtle than just throw-money-at-it. My first guess would be that the main bottleneck is the bandwidth for transferring data to/from the graphics card. Is that likely to be correct? In support of that, see this Radeon HD 5450 review - the memory bandwidth is only 12.8 GB/s. If you can't get data out of graphics memory quickly, you can't transfer it back to system memory quickly. Apparently, that's slower than some top-end cards in 2002.
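    A quick back-of-the-envelope calculation helps put that guess in perspective. The figures below are a rough sketch (assuming uncompressed 32-bit frames and ignoring codec overhead), not a measurement:

        # Rough sanity check: raw pixel traffic for full-screen capture,
        # assuming 32-bit color and no compression before the frame is read back.
        width, height, bytes_per_pixel = 1920, 1080, 4

        frame_bytes = width * height * bytes_per_pixel        # ~8.3 MB per frame
        rate_15fps = frame_bytes * 15 / 1e6                   # ~124 MB/s
        rate_30fps = frame_bytes * 30 / 1e6                   # ~249 MB/s

        print("per frame: %.1f MB" % (frame_bytes / 1e6))
        print("15 fps: %.0f MB/s, 30 fps: %.0f MB/s" % (rate_15fps, rate_30fps))

    Even at 30 fps the raw traffic is a few hundred MB/s, far below the card's quoted 12.8 GB/s, which suggests on-card memory bandwidth alone probably isn't the ceiling; readback across the bus, the XVID encode on the CPU, and how the capture API grabs the screen under XP are at least as likely to dominate.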

    Read the article

  • cakephp & nginx config/rewrite rules

    - by seanl
    Hi, somebody please help me out. I've asked this at stackoverflow as well but didn't get much of a response, and was debating whether it was programming or server related. I'm trying to set up a cakephp environment on a Centos server running Nginx with FastCGI. I already have a wordpress site running on the server and a phpmyadmin site, so I have PHP configured correctly. My problem is that I cannot get the rewrite rules set up correctly in my vhost so that cake renders pages correctly, i.e. with styling and so on. I've googled as much as possible and the main consensus from sites like the one listed below is that I need to have the following rewrite rule in place: location / { root /var/www/sites/somedomain.com/current; index index.php index.html; # If the file exists as a static file serve it # directly without running all # the other rewrite tests on it if (-f $request_filename) { break; } if (!-f $request_filename) { rewrite ^/(.+)$ /index.php?url=$1 last; break; } } http://blog.getintheloop.eu/2008/4/17/nginx-engine-x-rewrite-rules-for-cakephp The problem is these rewrites assume you run cake directly out of the webroot, which is not what I want to do. I have a standard setup for each site, i.e. one folder per site containing the following folders: log, backup, private and public. Public is where nginx looks for the files it serves, but I have cake installed in private with a symlink in public linking back to /private/cake/. This is my vhost: server { listen 80; server_name app.domain.com; access_log /home/public_html/app.domain.com/log/access.log; error_log /home/public_html/app.domain.com/log/error.log; #configure Cake app to run in a sub-directory #Cake install is not in root, but elsewhere and configured #in APP/webroot/index.php** location /home/public_html/app.domain.com/private/cake { index index.php; if (!-e $request_filename) { rewrite ^/(.+)$ /home/public_html/app.domain.com/private/cake/$1 last; break; } } location /home/public_html/app.domain.com/private/cake/ { index index.php; if (!-e $request_filename) { rewrite ^/(.+)$ /home/public_html/app.domain.com/public/index.php?url=$1 last; break; } } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/public_html/app.domain.com/private/cake$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } Now like I said, I can see the main index.php of cake and have connected it to my DB, but this page is without styling, so before I proceed any further I would like to configure it correctly. What am I doing wrong? Thanks, seanl

    Read the article

  • Overriding the save() method of a model that uses django-mptt

    - by saturdayplace
    I've been using django-mptt in my project for a while now, it's fabulous. Recently, I've found a need to override a model's save() method that uses mptt, and I'm getting an error when I try to save a new instance of that model: Exception Type: ValueError at /admin/scrivener/page/add/ Exception Value: Cannot use None as a query value I'm assuming that this is a result of the fact that the instance hasn't been stuck into a tree yet, but I'm not sure how to go about fixing this. I added a comment about it onto a similar issue on the project's tracker, but I was hoping that someone here might be able to put me on the right track faster. Here's the traceback. Environment: Request Method: POST Request URL: http://localhost:8000/admin/scrivener/page/add/ Django Version: 1.2 rc 1 SVN-13117 Python Version: 2.6.4 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.admin', 'django.contrib.sitemaps', 'mptt', 'filebrowser', 'south', 'haystack', 'django_static', 'etc', 'scrivener', 'gregor', 'annunciator'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware') Traceback: File "B:\django-apps\3rd Party Source\django\core\handlers\base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "B:\django-apps\3rd Party Source\django\contrib\admin\options.py" in wrapper 239. return self.admin_site.admin_view(view)(*args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in _wrapped_view 74. response = view_func(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\views\decorators\cache.py" in _wrapped_view_func 69. response = view_func(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\contrib\admin\sites.py" in inner 190. return view(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in _wrapper 21. return decorator(bound_func)(*args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in _wrapped_view 74. response = view_func(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in bound_func 17. return func(self, *args2, **kwargs2) File "B:\django-apps\3rd Party Source\django\db\transaction.py" in _commit_on_success 299. res = func(*args, **kw) File "B:\django-apps\3rd Party Source\django\contrib\admin\options.py" in add_view 795. self.save_model(request, new_object, form, change=False) File "B:\django-apps\3rd Party Source\django\contrib\admin\options.py" in save_model 597. obj.save() File "B:\django-apps\scrivener\models.py" in save 205. self.url = self.get_absolute_url() File "B:\django-apps\3rd Party Source\django\utils\functional.py" in _curried 55. return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs)) File "B:\django-apps\3rd Party Source\django\db\models\base.py" in get_absolute_url 940. return settings.ABSOLUTE_URL_OVERRIDES.get('%s.%s' % (opts.app_label, opts.module_name), func)(self, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\db\models\__init__.py" in inner 31. bits = func(*args, **kwargs) File "B:\django-apps\scrivener\models.py" in get_absolute_url 194. for ancestor in self.get_ancestors(): File "B:\django-apps\3rd Party Source\mptt\models.py" in get_ancestors 23. 
opts.tree_id_attr: getattr(self, opts.tree_id_attr), File "B:\django-apps\3rd Party Source\django\db\models\manager.py" in filter 141. return self.get_query_set().filter(*args, **kwargs) File "B:\django-apps\3rd Party Source\django\db\models\query.py" in filter 550. return self._filter_or_exclude(False, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\db\models\query.py" in _filter_or_exclude 568. clone.query.add_q(Q(*args, **kwargs)) File "B:\django-apps\3rd Party Source\django\db\models\sql\query.py" in add_q 1131. can_reuse=used_aliases) File "B:\django-apps\3rd Party Source\django\db\models\sql\query.py" in add_filter 1000. raise ValueError("Cannot use None as a query value") Exception Type: ValueError at /admin/scrivener/page/add/ Exception Value: Cannot use None as a query value
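    The traceback shows the save() override calling get_absolute_url(), which walks get_ancestors() before mptt has assigned the instance any tree fields. A minimal sketch of the usual workaround, assuming the override exists only to cache the URL on a url field (names here are illustrative, not taken from the project):

        # Sketch: defer URL computation until after the first save, when
        # django-mptt has filled in tree_id/lft/rght for the new instance.
        def save(self, *args, **kwargs):
            super(Page, self).save(*args, **kwargs)        # mptt assigns the tree fields here
            url = self.get_absolute_url()                  # get_ancestors() now has a tree to query
            if url != self.url:
                self.url = url
                super(Page, self).save(force_update=True)  # second save just persists the URL

    The key point is only that get_ancestors() can be queried once the instance is actually in a tree; whether the second write happens via a second save(), a queryset update(), or a post_save signal is a matter of taste.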

    Read the article

  • Perl Scripts Seem to be Caching?

    - by Ben Liyanage
    I'm running a perl script via mod_perl with Apache. I am trying to parse parameters in a restful fashion (aka GET www.domain.com/rest.pl/Object/ID). If I specify the ID like this: GET www.domain.com/rest.pl/Object/1234 I will get object 1234 back as a result as expected. However, if I specify an incorrect hacked url like this GET www.domain.com/rest.pl/Object/123 I will also get back object 1234. I am pretty sure that the issue in this question is happening so I packaged up my method and invoked it from my core script out of the package. Even after that I am still seeing the threads returning seemingly cached data. One of the recommendations in the aforementioned article is to replace my with our. My impression from reading up on our vs my was that our is global and my is local. My assumption is the the local variable gets reinitialized each time. That said, like in my code example below I am resetting the variables each time with new values. Apache Perl Configuration is set up like this: PerlModule ModPerl::Registry <Directory /var/www/html/perl/> SetHandler perl-script PerlHandler ModPerl::Registry Options +ExecCGI </Directory> Here is the perl script getting invoked directly: #!/usr/bin/perl -w use strict; use warnings; use Rest; Rest::init(); Here is my package Rest. This file contains various functions for handling the rest requests. #!/usr/bin/perl -w package Rest; use strict; # Other useful packages declared here. our @EXPORT = qw( init ); our $q = CGI->new; sub GET($$) { my ($path, $code) = @_; return unless $q->request_method eq 'GET' or $q->request_method eq 'HEAD'; return unless $q->path_info =~ $path; $code->(); exit; } sub init() { eval { GET qr{^/ZonesByCustomer/(.+)$} => sub { Rest::Zone->ZonesByCustomer($1); } } } # packages must return 1 1; As a side note, I don't completely understand how this eval statement works, or what string is getting parsed by the qr command. This script was largely taken from this example for setting up a rest based web service with perl. I suspect it is getting pulled from that magical $_ variable, but I'm not sure if it is, or what it is getting populated with (clearly the query string or url, but it only seems to be part of it?). Here is my package Rest::Zone, which will contain the meat of the functional actions. I'm leaving out the specifics of how I'm manipulating the data because it is working and leaving this as an abstract stub. #!/usr/bin/perl -w package Rest::Zone; sub ZonesByCustomer($) { my ($className, $ObjectID) = @_; my $q = CGI->new; print $q->header(-status=>200, -type=>'text/xml',expires => 'now', -Cache_control => 'no-cache', -Pragma => 'no-cache'); my $objectXML = "<xml><object>" . $ObjectID . "</object></xml>"; print $objectXML; } # packages must return 1 1; What is going on here? Why do I keep getting stale or cached values?

    Read the article

  • backtracking in haskell

    - by dmindreader
    I have to traverse a matrix and say how many "characteristic areas" of each type it has. A characteristic area is defined as a zone where elements of value n or n are adjacent. For example, given the matrix: 0 1 2 2 0 1 1 2 0 3 0 0 There's a single characteristic area of type 1 which is equal to the original matrix: 0 1 2 2 0 1 1 2 0 3 0 0 There are two characteristic areas of type 2: 0 0 2 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 3 0 0 And one characteristic area of type 3: 0 0 0 0 0 0 0 0 0 3 0 0 So, for the function call: countAreas [[0,1,2,2],[0,1,1,2],[0,3,0,0]] The result should be [1,2,1] I haven't defined countAreas yet, I'm stuck with my visit function when it has no more possible squares in which to move it gets stuck and doesn't make the proper recursive call. I'm new to functional programming and I'm still scratching my head about how to implement a backtracking algorithm here. Take a look at my code, what can I do to change it? move_right :: (Int,Int) -> [[Int]] -> Int -> Bool move_right (i,j) mat cond | (j + 1) < number_of_columns mat && consult (i,j+1) mat /= cond = True | otherwise = False move_left :: (Int,Int) -> [[Int]] -> Int -> Bool move_left (i,j) mat cond | (j - 1) >= 0 && consult (i,j-1) mat /= cond = True | otherwise = False move_up :: (Int,Int) -> [[Int]] -> Int -> Bool move_up (i,j) mat cond | (i - 1) >= 0 && consult (i-1,j) mat /= cond = True | otherwise = False move_down :: (Int,Int) -> [[Int]] -> Int -> Bool move_down (i,j) mat cond | (i + 1) < number_of_rows mat && consult (i+1,j) mat /= cond = True | otherwise = False imp :: (Int,Int) -> Int imp (i,j) = i number_of_rows :: [[Int]] -> Int number_of_rows i = length i number_of_columns :: [[Int]] -> Int number_of_columns (x:xs) = length x consult :: (Int,Int) -> [[Int]] -> Int consult (i,j) l = (l !! i) !! j visited :: (Int,Int) -> [(Int,Int)] -> Bool visited x y = elem x y add :: (Int,Int) -> [(Int,Int)] -> [(Int,Int)] add x y = x:y visit :: (Int,Int) -> [(Int,Int)] -> [[Int]] -> Int -> [(Int,Int)] visit (i,j) vis mat cond | move_right (i,j) mat cond && not (visited (i,j+1) vis) = visit (i,j+1) (add (i,j+1) vis) mat cond | move_down (i,j) mat cond && not (visited (i+1,j) vis) = visit (i+1,j) (add (i+1,j) vis) mat cond | move_left (i,j) mat cond && not (visited (i,j-1) vis) = visit (i,j-1) (add (i,j-1) vis) mat cond | move_up (i,j) mat cond && not (visited (i-1,j) vis) = visit (i-1,j) (add (i-1,j) vis) mat cond | otherwise = vis
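    Judging from the worked examples (the phrase "value n or n" looks garbled), a characteristic area of type n appears to be a connected zone of cells whose value is n or greater; that reading reproduces the expected [1,2,1]. The algorithm is then a flood fill per type: start at an unvisited qualifying cell, mark every reachable qualifying neighbour as visited, count one area, and repeat. As a language-neutral reference, here is a minimal sketch of that counting logic in Python (names are illustrative); the Haskell fix is the same idea, with visit returning its updated visited list so the caller can keep exploring the remaining directions instead of committing to a single guard:

        # Minimal sketch (illustrative): count connected components of cells whose
        # value is >= n, for each nonzero n, using 4-neighbour adjacency.
        def count_areas(mat):
            rows, cols = len(mat), len(mat[0])
            values = sorted({v for row in mat for v in row if v != 0})

            def flood(i, j, n, visited):
                # Iterative flood fill; 'visited' plays the role of the list the
                # Haskell visit function must thread through every recursive call.
                stack = [(i, j)]
                while stack:
                    x, y = stack.pop()
                    if (x, y) in visited:
                        continue
                    visited.add((x, y))
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < rows and 0 <= ny < cols and mat[nx][ny] >= n:
                            stack.append((nx, ny))

            counts = []
            for n in values:
                visited = set()
                areas = 0
                for i in range(rows):
                    for j in range(cols):
                        if mat[i][j] >= n and (i, j) not in visited:
                            flood(i, j, n, visited)
                            areas += 1
                counts.append(areas)
            return counts

        print(count_areas([[0, 1, 2, 2], [0, 1, 1, 2], [0, 3, 0, 0]]))   # [1, 2, 1]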

    Read the article

  • Django - no module named app

    - by Koran
    Hi, I have been trying to get an application written in django working - but it is not working at all. I have been working on for some time too - and it is working on dev-server perfectly. But I am unable to put in the production env (apahce). My project name is apstat and the app name is basic. I try to access it as following Blockquote http://hostname/apstat But it shows the following error: MOD_PYTHON ERROR ProcessId: 6002 Interpreter: 'domU-12-31-39-06-DD-F4.compute-1.internal' ServerName: 'domU-12-31-39-06-DD-F4.compute-1.internal' DocumentRoot: '/home/ubuntu/server/' URI: '/apstat/' Location: '/apstat' Directory: None Filename: '/home/ubuntu/server/apstat/' PathInfo: '' Phase: 'PythonHandler' Handler: 'django.core.handlers.modpython' Traceback (most recent call last): File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch default=default_handler, arg=req, silent=hlist.silent) File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1229, in _process_target result = _execute_target(config, req, object, arg) File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1128, in _execute_target result = object(arg) File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 228, in handler return ModPythonHandler()(req) File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 201, in __call__ response = self.get_response(request) File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 134, in get_response return self.handle_uncaught_exception(request, resolver, exc_info) File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 154, in handle_uncaught_exception return debug.technical_500_response(request, *exc_info) File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 40, in technical_500_response html = reporter.get_traceback_html() File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 114, in get_traceback_html return t.render(c) File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 178, in render return self.nodelist.render(context) File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 779, in render bits.append(self.render_node(node, context)) File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 81, in render_node raise wrapped TemplateSyntaxError: Caught an exception while rendering: No module named basic Original Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 71, in render_node result = node.render(context) File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 87, in render output = force_unicode(self.filter_expression.resolve(context)) File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 572, in resolve new_obj = func(obj, *arg_vals) File "/usr/lib/pymodules/python2.6/django/template/defaultfilters.py", line 687, in date return format(value, arg) File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 269, in format return df.format(format_string) File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format pieces.append(force_unicode(getattr(self, piece)())) File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 175, in r return self.format('D, j M Y H:i:s O') File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format pieces.append(force_unicode(getattr(self, piece)())) File "/usr/lib/pymodules/python2.6/django/utils/encoding.py", line 
71, in force_unicode s = unicode(s) File "/usr/lib/pymodules/python2.6/django/utils/functional.py", line 201, in __unicode_cast return self.__func(*self.__args, **self.__kw) File "/usr/lib/pymodules/python2.6/django/utils/translation/__init__.py", line 62, in ugettext return real_ugettext(message) File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 286, in ugettext return do_translate(message, 'ugettext') File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 276, in do_translate _default = translation(settings.LANGUAGE_CODE) File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 194, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 180, in _fetch app = import_module(appname) File "/usr/lib/pymodules/python2.6/django/utils/importlib.py", line 35, in import_module __import__(name) ImportError: No module named basic My settings.py is as follows: INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'apstat.basic', 'django.contrib.admin', ) If I remove the apstat.basic, it goes through, but that is not a solution. Is it something I am doing in apache? My apache - settings are - <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /home/ubuntu/server/ <Directory /> Options None AllowOverride None </Directory> <Directory /home/ubuntu/server/apstat> AllowOverride None Order allow,deny allow from all </Directory> <Location "/apstat"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE apstat.settings PythonOption django.root /home/ubuntu/server/ PythonDebug On PythonPath "['/home/ubuntu/server/'] + sys.path" </Location> </VirtualHost> I have now sat for more than a day on this. If someone can help me out, it would be very nice.
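    Since the traceback ultimately dies inside import_module('apstat.basic'), one way to narrow this down is to reproduce the import outside Apache with the same path the PythonPath directive sets up. This is a hypothetical diagnostic, not a fix; if it fails the same way, the usual suspects are a missing __init__.py in the basic directory or a PythonPath that doesn't contain everything the app expects:

        # Hypothetical diagnostic: mimic the sys.path that mod_python is given,
        # then try the imports Django performs for INSTALLED_APPS.
        import sys
        sys.path.insert(0, '/home/ubuntu/server/')   # same dir as in PythonPath

        import apstat                # needs /home/ubuntu/server/apstat/__init__.py
        import apstat.basic          # "No module named basic" here usually means
                                     # apstat/basic/__init__.py is missing

    Running it as the same user Apache runs under can also surface permission problems that never appear when testing as your own user.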

    Read the article

  • Comments show up in database, but only show up on my index page after a refresh.

    - by Truong
    Hi, I have AJAX, PHP, jquery, and mySQL in this very simple website I'm trying to make. All there is is a text area that sends data to the database and uses ajax\jquery to display that data onto the index page. For some reason though, I press submit and the data goes to the database, but I have to refresh the page myself to see that data on the page. I'm assuming that the problem has to do with my AJAX JQuery or even some mistake in the index. Also, when I type the text into the text area and press submit, the text remains in the textarea until I refresh the page. Haha, sorry if this is such a noob question.. I'm trying to learn. Thanks so much Here is the AJAX: <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script> <script type="text/javascript"> $(function() { $(".submit").click(function() { var comment = $("#comment").val(); var post_id = $("#post").val(); var dataString = '&comment=' + comment if(comment=='') { alert('Fill something in please!'); } else { $("#flash").show(); $("#flash").fadeIn(400).html('<img src="noworries.jpg" /> '); $.ajax({ type: "POST", url: "commentajax.php", data: dataString, cache: false, success: function(html){ $("ol#update").append(html); $("ol#update li:last").fadeIn("slow"); $("#flash").hide(); } }); }return false; }); }); </script> Here is the index\form area: <body> <div id="container"><img src="banner.jpg" width="890" height="150" alt="title" /></div> <id="update" class="timeline"> <div id="flash"></div> <div id="container"> <form action="#" method="post"> <textarea name="comment" id="comment" cols="35" rows="4"></textarea><br /> <input name="submit" type="submit" class="submit" id="submit" value=" Submit Comment " /><br /> </form> </div> <id="update" class="timeline"> <?php include('config.php'); //$post_id value comes from the POSTS table $prefix="I'm happy when"; $sql=mysql_query("select * from comments order by com_id desc"); while($row=mysql_fetch_array($sql)) { $comment=$row['com_dis']; ?> <!--Displaying comments--> <div id="container"> <class="box"> <?php echo "$prefix $comment"; ?> </div> <?php } ?> Here is my commentajax.php <?php include('config.php'); if($_POST) { $comment=$_POST['comment']; $comment=mysql_real_escape_string($comment); mysql_query("INSERT INTO comments(com_id,com_dis) VALUES ('NULL', '$comment')"); } ?> <li class="box"><br /> <?php echo $comment; ?> </li> I'm sorry for so much code but I just started learning this four days ago and this is probably one of the last bugs until the website is functional.

    Read the article
