Search Results

Search found 10023 results on 401 pages for 'manage processes'.

  • How do I make an external hard drive keep the same drive letter permanently?

    - by andygrunt
    I have a desktop PC (2002 vintage) running Windows XP that I turn on about 2 or 3 times per week. I have a mains-powered 250Gb Western Digital hard disk connected to it via USB. I always turn the hard disk on before the PC so it's up and running as the PC boots. When I first connected the external hard disk, the PC assigned it a letter ('i' if it matters) and I've installed software to it and created shortcuts to various files and folders on the disk using that letter.

    For years everything was fine; then one day I booted the PC and the hard disk had been assigned a different letter. I'd then have to go into My Computer > Manage > Disk Management and manually change the letter back to 'i'. If I then rebooted the PC, the hard disk would usually still be 'i', but after the next reboot it would be some other random letter and I'd have to manually change it back to 'i'. This would go on for some time, then there'd be periods when it would always be 'i'; then, for no apparent reason (no new devices added, for example), the drive letter would start changing again. At the moment it's in random-drive-letter mode, so I thought I'd ask the following question... How do I assign the external hard disk to be 'i' permanently?

    Answer: Thanks Molly, that seems to have done the trick (after a little fiddling) - slightly disappointed there wasn't a way to do it within Windows without installing something else, though. For anyone else trying this, it wasn't completely straightforward, so here's what happened for me. I installed USBDLM as per the instructions on its website. I guessed that I had to assign the first USB letter to 'i', so I edited the 'Letter1=' line in the ini file accordingly. To test it, I rebooted the PC only to find it came back up with the display set to 640x480 in 16 colours. After some investigation, I re-installed the display drivers, rebooted and set the display back to its usual setting. The external hard disk now gets set to 'i', but I found I had to re-apply sharing status to it so it could be seen from my laptop, which is on the same network.

    The end result of all this is that it now does what I wanted, although it acts as though the hard drive has just been plugged in a few seconds after the Windows desktop appears, i.e. the little box appears with a progress bar as it searches through the contents of the 'new' hard drive, and I eventually get a dialogue box saying 'This disk or device contains more than one type of content. What do you want Windows to do?' listing options such as play media files, print the pictures or open folder to view the files. This is a tiny pain I wish didn't happen, but not exactly a huge price to pay. Other than that, it seems to work fine :)

    Looks like I spoke too soon... Every time I reboot, I have to re-share the 'i' drive (which I didn't have to do before) so it can be seen by my laptop on the same network. Any ideas how to make that permanent?
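
    For readers who want to try the built-in route first: Windows records a volume's assigned letter in the registry (HKLM\SYSTEM\MountedDevices), so a letter set through Disk Management or diskpart normally persists across reboots. A minimal sketch of the diskpart steps, assuming the external disk shows up as volume 5 (check the list volume output for the real number):

        C:\> diskpart
        DISKPART> list volume
        DISKPART> select volume 5
        DISKPART> assign letter=I

    If the mapping still gets lost between boots, as in the question, a helper such as USBDLM that re-applies the letter at plug-in time is the more robust fix.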

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe that MySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, and so the results here may not be 100% accurate):

        >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492
        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...
        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        join_buffer_size = 128M
        # for some nightly processes client sessions set the join buffer to 8 GB
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
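
    Reading the tuner output against this config, the headline risk is the 220G worst-case footprint: 184.5M of per-thread buffers times 1024 allowed connections, on top of 35.5G of global buffers. A hedged sketch of the directions the report points at - illustrative numbers only, sized against the figures above, not drop-in values:

        # illustrative directions only -- size against your actual RAM
        max_connections         = 256    # peak observed was 61 of 1024
        join_buffer_size        = 8M     # fix the 1328 unindexed joins instead of buffering around them
        table_open_cache        = 16384  # table cache hit rate was only 19%
        innodb_buffer_pool_size = 48G    # grow toward the 213G of InnoDB data only as freed RAM allows
        slow_query_log          = 1     # capture the 20s SELECTs New Relic is seeing

    Enabling the slow query log (with the existing long_query_time = 0.2) is probably the fastest route to identifying which category queries are actually responsible.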

  • SQL Server Log File Won't Shrink because "log records are pending replication" on a non-replicated DB?

    - by user796466
    I have a non-mission-critical (9am-5pm) SQL Server database that I have set up to do nightly full backups and log backups every 30 minutes during business hours. The database is in full recovery, and normally I have no reason to truncate/shrink logs unless I do some heavy maintenance; log backups manage the size with no issue. However, I had not been at this client for several weeks, and upon inspection I noticed that the log had grown to about 10 times the size of the .mdf file. I poked around: backups had been running, and I had not gotten any severity error alerts (SQL mail). I attempted to put the DB in simple recovery and shrink the log; this was no good. I proceeded to try a log backup and I got:

    The log was not truncated because records at the beginning of the log are pending replication or Change Data Capture. Ensure the Log Reader Agent or capture job is running or use sp_repldone to mark transactions as distributed or captured.

    Restart SQL Server, rinse, repeat, same thing... I said ??? Replication is not, nor ever has been, set up on this DB or server ??? So the log backups have not been flushing the .ldf. So I did a couple hours of research and I found:

    http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/5445/Log-file-is-not-truncated-inspite-of-regular-log-backup
    http://www.eggheadcafe.com/software/aspnet/30708322/the-log-was-not-truncated-because-records-at-the-beginning-of-the-log-are-pending-replication.aspx

    It seems to be some kind of poorly documented bug?? The solution seems to have been to run exec sp_repldone, more precisely:

    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

    "This procedure can be used in emergency situations to allow truncation of the transaction log when transactions pending replication are present. Using this procedure prevents Microsoft SQL Server 2000 from replicating the database until the database is unpublished and republished." ~ MSDN

    When I do that I get the following:

    Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1
    Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.

    Which makes sense, because the DB has never been published for replication. I have several questions:

    A) First and foremost, WTF is going on? What is causing this? I am interested in knowing the why here. Is this genuinely a bug, or is there some aspect of the backup that is not functioning properly that causes the DB to mimic a replicated state? Someone please edify me on this.

    B) Second... do I really have to publish/replicate this DB to exec this SP to fix this??? Sounds crazy - or is there some T-SQL I can use to put it in a published state, exec the proc, and be on my way...

    C) Third, if I do indeed have to publish this database to exec the SP to release this unneeded, misintended log and get my .ldf file and backups back on track, how do I publish the database without the online host that it is asking for???

    I don't generally do this kind of database administration and need some guidance. Sorry if this is too verbose, but just voicing the question helps me clarify it... Thank you in advance for your help.
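
    A hedged sketch of the usual way out of this state, with the database name as a placeholder. Both procedures exist in SQL Server, but sp_replicationdboption requires a configured Distributor, so try the first option on its own before resorting to the second:

        -- Option 1: strip any stray replication metadata; often sufficient on its own
        EXEC sp_removedbreplication 'MyDatabase';

        -- Option 2: temporarily mark the DB as published, clear the
        -- pending-replication flag, then unpublish again
        EXEC sp_replicationdboption @dbname = 'MyDatabase', @optname = 'publish', @value = 'true';
        EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;
        EXEC sp_replicationdboption @dbname = 'MyDatabase', @optname = 'publish', @value = 'false';

        -- then take a log backup and shrink as usual
        -- (MyDatabase_log is a placeholder for the log file's logical name)
        BACKUP LOG MyDatabase TO DISK = 'C:\Backups\MyDatabase_log.trn';
        DBCC SHRINKFILE (MyDatabase_log);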

  • Table Sorting in Excel 2010 Cannot Parse the Table Headers Correctly

    - by Truth
    I have a rather weird issue I've never faced before. After defining my table with borders and such, and filling out data in my table, I try to sort my table according to the "ratio" (first) column, from biggest to smallest. When I right-click the header and select the corresponding option, the table gets sorted, but the first row is omitted by the sorting function. What I mean is that the first line (with 3.50 ratio) will forever stay at the top line, even when I sort otherwise (by a different column, in a different order). This is my table below; it's tab-separated so it's not very readable, but I hope you'll manage.

        ???? ??????? ????? ??? ???? ??? ??? ???
        14 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ?????? 23 8 0 0 0 0 0 0 0 0 0 0 0 0 0 2.88 2.88
        ???? 16 4 0 0 0 0 0 0 0 0 0 0 0 0 0 4.00 4.00
        ??? 7 2 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ???? 13 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.25 3.25
        ????? 12 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ???? 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 2.50 2.50
        ??? 38 12 0 0 0 0 0 0 0 0 0 0 0 0 0 3.17 3.17
        ???? 14 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ????? 31 10 0 0 0 0 0 0 0 0 0 0 0 0 0 3.10 3.10
        ???? 24 8 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ????? 23 8 0 0 0 0 0 0 0 0 0 0 0 0 0 2.88 2.88
        ???? 14 4 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ???? 16 4 0 0 0 0 0 0 0 0 0 0 0 0 0 4.00 4.00
        ???? 24 8 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ???? 30 10 0 0 0 0 0 0 0 0 0 0 0 0 0 3.00 3.00
        ????? 21 6 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ???? 42 12 0 0 0 0 0 0 0 0 0 0 0 0 0 3.50 3.50
        ??? 11 4 0 0 0 0 0 0 0 0 0 0 0 0 0 2.75 2.75
        ???? 5 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2.50 2.50
        ???? 4 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2.00 2.00
        ??? 4 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2.00 2.00

  • Java Process "The pipe has been ended" problem

    - by Amit Kumar
    I am using the Java Process API to write a class that receives binary input from the network (say via TCP port A), processes it and writes binary output to the network (say via TCP port B). I am using Windows XP. The code looks like this. There are two functions called run() and receive(): run is called once at the start, while receive is called whenever there is new input received via the network. run and receive are called from different threads. The run method starts an exe and receives the input and output streams of the exe. run also starts a new thread to write output from the exe on to port B.

    public void run() {
        try {
            Process prc = // some exe is `start`ed using ProcessBuilder
            OutputStream procStdIn = new BufferedOutputStream(prc.getOutputStream());
            InputStream procStdOut = new BufferedInputStream(prc.getInputStream());
            Thread t = new Thread(new ProcStdOutputToPort(procStdOut));
            t.start();
            prc.waitFor();
            t.join();
            procStdIn.close();
            procStdOut.close();
        } catch (Exception e) {
            e.printStackTrace();
            printError("Error : " + e.getMessage());
        }
    }

    The receive method forwards the input received from port A to the exe.

    public void receive(byte[] b) throws Exception {
        procStdIn.write(b);
    }

    class ProcStdOutputToPort implements Runnable {
        private BufferedInputStream bis;

        public ProcStdOutputToPort(BufferedInputStream bis) {
            this.bis = bis;
        }

        public void run() {
            try {
                int bytesRead;
                int bufLen = 1024;
                byte[] buffer = new byte[bufLen];
                while ((bytesRead = bis.read(buffer)) != -1) {
                    // write output to the network
                }
            } catch (IOException ex) {
                Logger.getLogger().log(Level.SEVERE, null, ex);
            }
        }
    }

    The problem is that I am getting the following stack trace inside receive(), and prc.waitFor() returns immediately afterwards. The line number shows that the stack is from writing to the exe.

    The pipe has been ended
    java.io.IOException: The pipe has been ended
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
        at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
        at xxx.receive(xxx.java:86)

    Any advice about this will be appreciated.
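
    "The pipe has been ended" on the child's stdin almost always means the exe has already exited (or closed its stdin), so waitFor() returning right after is the effect, not the cause. Two checks worth wiring in, sketched here under the assumption that the process comes from ProcessBuilder as in the question (child.exe is a hypothetical command):

        // fold stderr into stdout so the child can never block -- and then die --
        // on a full, unread stderr pipe
        ProcessBuilder pb = new ProcessBuilder("child.exe");
        pb.redirectErrorStream(true);
        Process prc = pb.start();

        // in receive(): flush per message, and surface the child's exit code
        // when the pipe breaks
        try {
            procStdIn.write(b);
            procStdIn.flush();
        } catch (IOException e) {
            // exitValue() throws IllegalThreadStateException while the child is
            // alive, so only call it once the pipe has already broken
            int code = prc.exitValue();
            throw new IOException("child exited with code " + code, e);
        }

    Reading (or merging) stderr matters because a child whose stderr buffer fills up will stall and often terminate, which then shows up on your side as a broken stdin pipe.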

  • undefined reference to `main' collect2: ld returned 1 exit status

    - by sobingt
    I am working on a Qt project and I am making test cases for it. Here is a small test case:

    #include <QApplication>
    #include <QPalette>
    #include <QPixmap>
    #include <QSplashScreen>
    #include <qthread.h>
    #define BOOST_TEST_MAIN
    #include <boost/test/unit_test.hpp>
    #include <boost/make_shared.hpp>
    #include <boost/thread.hpp>
    #include "MainWindow.h"

    namespace {
        const std::string dbname = "Project.db";

        struct SongFixture {
            SongFixture(const std::string &fixturePath) {
                // Create the Master file
                Master::creator();
                // Create/open file
                std::pair<int, SQLiteDbPtr> result = open(
                    dbname,
                    SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                    0);
                if (result.first != SQLITE_OK) {
                    throw SQLiteError(result.first, sqlite3_errmsg(result.second.get()));
                }
                SQLiteDbPtr &spDb = result.second;
                // Execute all the SQL from the fixture file
                execSQLFromFile(spDb, fixturePath);
            }
        };

        std::auto_ptr<SongFixture> pf;
    }

    class I : public QThread {
    public:
        static void sleep(unsigned long secs) {
            QThread::sleep(secs);
        }
    };

    void free_test_function() {
        BOOST_CHECK(true);
    }

    test_suite* init_unit_test_suite(int argc, char *argv[]) {
        // Create a fixture for the peer:
        // Manage fixture creation manually instead of using
        // BOOST_FIXTURE_TEST_CASE because the fixture depends on runtime args.
        std::ostringstream fixturePathSS;
        fixturePathSS << PROJECT_DIR << "/test/songs_fixture.sql";
        std::string fixturePath = fixturePathSS.str();
        pf.reset(new SongFixture(fixturePath));

        QApplication app(argc, argv);
        MainWindow window("artists");
        window.show();

        framework::master_test_suite().add(BOOST_TEST_CASE(&free_test_function));

        return app.exec();
    }

    Well, I am getting an error:

    /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/crt1.o: In function `_start':
    (.text+0x20): undefined reference to `main'
    collect2: ld returned 1 exit status

    Please help me if you have a lead. Thanks! I tried adding #define BOOST_TEST_MAIN, but then I get:

    ../test/UI/main.cpp: In function ‘boost::unit_test::test_suite* init_unit_test_suite(int, char**)’:
    ../test/UI/main.cpp:75:31: error: redefinition of ‘boost::unit_test::test_suite* init_unit_test_suite(int, char**)’
    /usr/local/include/boost/test/unit_test_suite.hpp:223:1: error: ‘boost::unit_test::test_suite* init_unit_test_suite(int, char**)’ previously defined here

    The program works on Windows, but on Linux the above-mentioned problem is observed.
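
    Reading the two errors together points at the usual Boost.Test linking mistake: #define BOOST_TEST_MAIN makes the header generate its own init_unit_test_suite (hence the redefinition error when you also write one), while without the define nothing supplies main() unless you link the Boost.Test library, whose main() calls your custom init_unit_test_suite. So keep the custom init_unit_test_suite, drop the #define, and link the library - in qmake terms something like:

        # .pro file -- assuming Boost.Test is installed as a system library;
        # if your distro ships only the shared variant you may also need
        # DEFINES += BOOST_TEST_DYN_LINK, which changes the expected init signature
        LIBS += -lboost_unit_test_framework

    It "works on Windows" most likely because MSVC auto-links the matching Boost library via pragmas in the headers; GCC/ld has no auto-link, so the Linux build fails until the .pro file names the library explicitly.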

  • TFS and shared projects in multiple solutions

    - by David Stratton
    Our .NET team works on projects for our company that fall into distinct categories. Some are internal web apps, some are external (publicly facing) web apps, we also have internal Windows applications for our corporate office users, and Windows Forms apps for our retail locations (stores). Of course, because we hate code reuse, we have a ton of code that is shared among the different applications. Currently we're using SVN as our source control, and we've got our repository laid out like this: - = folder, | = Visual Studio Solution -SVN - Internet | Ourcompany.com | Oursecondcompany.com - Intranet | UniformOrdering website | MessageCenter website - Shared | ErrorLoggingModule | RegularExpressionGenerator | Anti-Xss | OrgChartModule etc... So.. The OurCompany.com solution in the Internet folder would have a website project, and it would also include the ErrorLoggingModule, RegularExpressionGenerator, and Anti-Xss projects from the shared directory. Similarly, our UniformOrdering website solution would have each of these projects included in the solution as well. We prefer to have a project reference to a .dll reference because, first of all, if we need to add or fix a function in the ErrorLoggingModule while working on the OurCompany.com website, it's right there. Also, this allows us to build each solution and see if changes to shared code break any other applications. This should work well on a build server as well if I'm correct. In SVN, there is no problem with this. SVN and Visual Studio aren't tied together in the way TFS's source control is. We never figured out how to work this type of structure in TFS when we were using it, because in TFS, the TFS project was always tied to a Visual Studio Solution. The Source Code repository was a child of the TFS Project, so if we wanted to do this, we had to duplicate the Shared code in each TFS project's source code repository. As my co-worker put it, this "breaks every known best practice about code reuse and simplicity". It was enough of a deal breaker for us that we switched to SVN. Now, however, we're faced with truly fixing our development processes, and the Application Lifecycle Management of TFS is pretty close to exactly what we want, and how we want to work. Our one sticking point is the shared code issue. We're evaluating other commercial and open source solutions, but since we're already paying for TFS with our MSDN Subscriptions, and TFS is pretty much exactly what we want, we'd REALLY like to find a way around this issue. Has anybody else faced this and come up with a solution? If you've seen an article or posting on this that you can share with me, that would help as well. As always, I'm open to answers like "You're looking at it all wrong, bonehead, HERE'S the way it SHOULD be done.

  • Ruby Nokogiri CSS HTML parsing

    - by user296507
    I'm having some problems trying to get the code below to output the data in the format that I want. What I'm after is the following: CCC1-$5.00 CCC1-$10.00 CCC1-$15.00 CCC2-$7.00 where $7 belongs to CCC2 and the others to CCC1, but I can only manage to get the data in this format: CCC1-$5.00 CCC1-$10.00 CCC1-$15.00 CCC1-$7.00 CCC2-$5.00 CCC2-$10.00 CCC2-$15.00 CCC2-$7.00 Any help would be appreciated. require 'rubygems' require 'nokogiri' require 'open-uri' doc = Nokogiri::HTML.parse(<<-eohtml) <div class="AAA"> <table cellspacing="0" cellpadding="0" border="0" summary="sum"> <tbody> <tr> <td class="BBB"> <span class="CCC">CCC1</span> </td> <td class="DDD"> <table cellspacing="0" cellpadding="0" border="0"> <tbody> <tr><td class="FFF">$5.00</td></tr> <tr><td class="FFF">$10.00</td></tr> <tr><td class="FFF">$15.00</td></tr> </tbody> </table> </td> </tr> </tbody> </table> <table cellspacing="0" cellpadding="0" border="0" summary="sum"> <tbody> <tr> <td class="BBB"> <span class="CCC">CCC2</span> </td> <td class="DDD"> <table cellspacing="0" cellpadding="0" border="0"> <tbody> <tr><td class="FFF">$7.00</td></tr> </tbody> </table> </td> </tr> </tbody> </table> </div> eohtml doc.css('td.BBB > span.CCC').each do |something| doc.css('tr > td.EEE, tr > td.FFF').each do |something_more| puts something.content + '-'+ something_more.content end end
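
    The posted loops form a Cartesian product: for every CCC label the inner doc.css(...) searches the whole document again, so every label gets paired with every price. Scoping the price lookup to the table that contains the label fixes it - a minimal sketch against the HTML above:

        doc.css('table[summary="sum"]').each do |tbl|
          label = tbl.at_css('td.BBB > span.CCC').content
          tbl.css('td.FFF').each do |price|
            puts "#{label}-#{price.content}"
          end
        end

    This prints CCC1-$5.00, CCC1-$10.00, CCC1-$15.00, then CCC2-$7.00, because each td.FFF is only ever searched for inside its own outer table.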

  • IIS7 MVC deploy - 404 not found on actions that accept "id" parameter.

    - by majkinetor
    Hello. Once deployed parts of my web-application stop working. Index-es on each controller do work, and one form posting via Ajax, Login works too. Other then that yields 404. I understand that nothing particular should be done in integrated mode. I don't know how to proceed with troubleshooting. Some info: App is using default app pool set to integrated mode. WebApp is done in net framework 3.5. I use default routing model. Along web.config in root there is web.config in /View folder referencing HttpNotFoundHandler. OS is Windows Server 2008. Admins issued aspnet_regiis.exe -i IIS 7 Any help is appreciated. Thx. EDIT: I determined that only actions that accept ID parameter don't work. On the contrary, when I add dummy id method in Home controller of default MVC app it works. My Views/Web.config <?xml version="1.0"?> <configuration> <system.web> <httpHandlers> <add path="*" verb="*" type="System.Web.HttpNotFoundHandler"/> </httpHandlers> <!-- Enabling request validation in view pages would cause validation to occur after the input has already been processed by the controller. By default MVC performs request validation before a controller processes the input. To change this behavior apply the ValidateInputAttribute to a controller or action. --> <pages validateRequest="false" pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <controls> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" /> </controls> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler"/> </handlers> </system.webServer> </configuration>
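
    One thing worth trying before deeper debugging, since the symptom (Index actions work, routed URLs with extra segments 404) matches the classic IIS7 extensionless-URL problem: make the managed modules, including UrlRoutingModule, run for every request. A hedged sketch for the root web.config - note that runAllManagedModulesForAllRequests adds overhead on static files, so treat it as a diagnostic first step:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    If the 404s disappear with this in place, the routing itself is fine and the fix can later be narrowed to the specific handler mappings.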

  • Django: What's an awesome plugin to maintain images in the admin?

    - by meder
    I have an articles entry model and I have an excerpt and description field. If a user wants to post an image then I have a separate ImageField which has the default standard file browser. I've tried using django-filebrowser but I don't like the fact that it requires django-grappelli nor do I necessarily want a flash upload utility - can anyone recommend a tool where I can manage image uploads, and basically replace the file browse provided by django with an imagepicking browser? In the future I'd probably want it to handle image resizing and specify default image sizes for certain article types. Edit: I'm trying out adminfiles now but I'm having issues installing it. I grabbed it and added it to my python path, added it to INSTALLED_APPS, created the databases for it, uploaded an image. I followed the instructions to modify my Model to specify adminfiles_fields and registered but it's not applying in my admin, here's my admin.py for articles: from django.contrib import admin from django import forms from articles.models import Category, Entry from tinymce.widgets import TinyMCE from adminfiles.admin import FilePickerAdmin class EntryForm( forms.ModelForm ): class Media: js = ['/media/tinymce/tiny_mce.js', '/media/tinymce/load.js']#, '/media/admin/filebrowser/js/TinyMCEAdmin.js'] class Meta: model = Entry class CategoryAdmin(admin.ModelAdmin): prepopulated_fields = { 'slug': ['title'] } class EntryAdmin( FilePickerAdmin ): adminfiles_fields = ('excerpt',) prepopulated_fields = { 'slug': ['title'] } form = EntryForm admin.site.register( Category, CategoryAdmin ) admin.site.register( Entry, EntryAdmin ) Here's my Entry model: class Entry( models.Model ): LIVE_STATUS = 1 DRAFT_STATUS = 2 HIDDEN_STATUS = 3 STATUS_CHOICES = ( ( LIVE_STATUS, 'Live' ), ( DRAFT_STATUS, 'Draft' ), ( HIDDEN_STATUS, 'Hidden' ), ) status = models.IntegerField( choices=STATUS_CHOICES, default=LIVE_STATUS ) tags = TagField() categories = models.ManyToManyField( Category ) title = models.CharField( max_length=250 ) excerpt = models.TextField( blank=True ) excerpt_html = models.TextField(editable=False, blank=True) body_html = models.TextField( editable=False, blank=True ) article_image = models.ImageField(blank=True, upload_to='upload') body = models.TextField() enable_comments = models.BooleanField(default=True) pub_date = models.DateTimeField(default=datetime.datetime.now) slug = models.SlugField(unique_for_date='pub_date') author = models.ForeignKey(User) featured = models.BooleanField(default=False) def save( self, force_insert=False, force_update= False): self.body_html = markdown(self.body) if self.excerpt: self.excerpt_html = markdown( self.excerpt ) super( Entry, self ).save( force_insert, force_update ) class Meta: ordering = ['-pub_date'] verbose_name_plural = "Entries" def __unicode__(self): return self.title Edit #2: To clarify I did move the media files to my media path and they are indeed rendering the image area, I can upload fine, the <<<image>>> tag is inserted into my editable MarkItUp w/ Markdown area but it isn't rendering in the MarkItUp preview - perhaps I just need to apply the |upload_tags into that preview. I'll try adding it to my template which posts the article as well.
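
    On Edit #2: applying the same rendering filter in the preview template is usually all it takes. Assuming django-adminfiles' template filter (named render_uploads in the versions I have seen - verify against your copy, since the post refers to it as upload_tags) and a hypothetical entry variable, the MarkItUp preview template would gain something like:

        {% load adminfiles_tags %}
        {{ entry.excerpt|render_uploads }}

    That way the <<<image>>> placeholders inserted by the picker get expanded in the preview the same way they will be on the live page.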

  • Eventlet or gevent or Stackless + Twisted, Pylons, Django and SQL Alchemy

    - by Khorkrak
    We're using Twisted extensively for apps requiring a great deal of asynchronous io. There are some cases where stuff is cpu bound instead and for that we spawn a pool of processes to do the work and have a system for managing these across multiple servers as well - all done in Twisted. Works great. The problem is that it's hard to bring new team members up to speed. Writing asynchronous code in Twisted requires a near vertical learning curve. It's as if humans just don't think that way naturally. We're considering a mixed approach perhaps. Maybe do the xmlrpc server part and process management in Twisted still but the other stuff in code that at least looks synchronous while not being as such. Then again I like explicit over implicit so hmmm. Anyway onto greenlets - how well does that stuff work? So there's Stackless and as you can see from my Gallentean avatar I'm well aware of the tremendous success in it's use for CCP's flagship EVE Online game first hand. What about Eventlet or gevent? Well for now only Eventlet works with Twisted. However gevent claims to be faster since it's not a pure python implementation it instead uses libevent. It also has fewer idiosyncrasies and defects supposedly. The documentation there is minimal in comparison to Eventlet and it's maintained by 1 guy as far as I can tell. This makes me leery but all great projects start this way so... Then there's PyPy - I haven't even finished reading about that one yet - just saw it in this thread: Drawbacks of Stackless. So confusing - I'm wondering what the heck to do - sounds like Eventlet is probably the best bet but is it really stable enough? Anyone out there have any experience with it? Should we go with Stackless instead as it's been around and is proven technology - just like Twisted is as well - and they do work together nicely. But still I hate having to have a separate version of Python to do this. what to do.... This somewhat obnoxious blog entry hit the nail on the head for me though: Asynchronous IO for Grownups We're stuck using MySQL as well - I never knew how great PostgreSQL was until having had to work on a production OLTP system in MySQL instead - but that's another story. But if that monkey patch thing really works then wow. Just wow.
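
    For anyone weighing the same choice: "that monkey patch thing" really is two lines, which is much of gevent's (and Eventlet's) appeal - the rest of the code keeps its synchronous shape. A minimal gevent sketch (Python 2 era, matching the post):

        # must run before anything else imports socket/thread modules
        from gevent import monkey
        monkey.patch_all()

        import urllib2  # now cooperative: blocking calls yield to other greenlets

    The trade-off versus Twisted is exactly the one the post circles around: implicit context switches at any blocking call, instead of explicit callbacks and Deferreds.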

  • Existing web-site CSS replacement (re-skinning) best-practices without changing the HTML

    - by Enigmativity
    I can see a number of other good answers to questions relating to CSS best-practices on stack overflow: How to Manage CSS Explosion CSS Conventions / Code Layout Models Are there any CSS standards that I should follow while writing my first stylesheet? What is the best method for tidying CSS? Best Practices - CSS Stylesheet Formatting But I think I have a different problem. I'm trying to "re-skin" an existing site that has been nicely built using div's and ul's, etc, and it has a good existing CSS file, but when I start making changes to the CSS I quickly find that I break the layout. My feeling is that it is very hard to get a feel for how all the CSS will work together and indeed what CSS is affecting parent and sibling elements in the HTML. So, my question is "what are the best-practices around re-skinning an existing web-site by replacing the CSS only and not modifying the existing HTML?" I can't change the classes, ids, node hierarchy, etc. An example of the particular site that I am trying to re-skin is http://demo.nopcommerce.com/. The existing CSS can be as complicated/detailed as this extract from the main CSS file: .header-selectors-wrapper { text-align: right; float: right; width: 500px; } .header-currencyselector { float: right; } .header-languageselector { float: left; } .header-taxDisplayTypeSelector { float: right; } .header-links-wrapper { float: right; text-align: right; width: 570px; } .header-links { border: solid 1px #9a9a9a; padding: 5px 5px 5px 5px; margin-bottom: 5px; display: inline-table; } .order-summary-content .cart .cart-item-row td, .wishlist-content .cart .cart-item-row td { border-bottom: 1px solid #c5c5c5; vertical-align: middle; line-height: 30px; } .order-summary-content .cart .cart-item-row td.product, .wishlist-content .cart .cart-item-row td.product { text-align: left; padding: 0px 10px 0px 10px; } .order-summary-content .cart .cart-item-row td.product a, .wishlist-content .cart .cart-item-row td.product a { font-weight: bold; } Any help would be appreciated.
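
    A pattern that helps when the HTML is frozen: keep the original stylesheet untouched and load a second reskin.css after it, overriding only presentation properties and repeating each selector exactly, so that equal specificity lets source order win. A small sketch against the selectors quoted above (colours are placeholders):

        /* reskin.css -- loaded after the original stylesheet */
        .header-links {
            border: solid 1px #336699;   /* colours, fonts, backgrounds only */
            background: #f4f8fb;
        }
        /* leave floats and widths (e.g. .header-links-wrapper) alone until last:
           they are what breaks the layout when changed blind */

    Working top-down in that order - palette and typography first, spacing next, structural floats last, testing after each step - is what keeps the layout from collapsing the way the question describes.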

  • Unable to retrieve information from HP-UX pst_status object

    - by bogertron
    I am attempting to get process information by using the HP-UX C/C++ library. After scouring the internet, I have discovered that HP-UX has the pstat.h header file which allows programmers to retrieve the process information. After attempting to understand code from the internet hp website, I attempted to create a little test sample to comprehend what the code does. I attempted to use example 3, however, I ran into several issues. The first issue came when I attempted to execute the following line of code: (void)printf("pid is %d, command is %s\n", pst[i].pst_pid, pst[i].pst_ucomm); When I attempted to print the string, I hit a memory fault. So I decided to attempt to see what the string is and came up with the following: #include <sys/param.h> #include <sys/pstat.h> #include <sys/unistd.h> #include <string.h> int main(int argc, char** argv) { #define BURST ((size_t)10) struct pst_status pst[BURST]; int i, count; int idx = 0; /* index within the context */ int index = 0; /* loop until count == 0, will occur all have been returned */ while ((count=pstat_getproc(pst, sizeof(pst[0]),BURST,idx))>0) { index = 0; printf("index: %d", index); /* got count (max of BURST) this time. process them */ while (pst[i].pst_ucomm[index] != '\0') { printf("%c", pst[i].pst_ucomm[index]); index++; } printf("\n"); for (i = 0; i < count; i++) { printf("pid is %d, command is \n", pst[i].pst_pid); } /* * now go back and do it again, using the next index after * the current 'burst' */ idx = pst[count-1].pst_idx + 1; } if (count == -1) perror("pstat_getproc()"); #undef BURST } Unfortunately, what happens is that I get the first process printed, then pid is 2, command is pid is 2, command is pid is 2, command is... I know that I must be doing something foolish since my C/C++ skills are not that great, but I cannot figure out what the issue is since the code is largely copied from the hp website. So here's the question(s) for clarity: 1. Why can't printf("%s", pst[i].pst_ucomm); handle strings? 2. Why can't I iterate over the processes in the system? Any help is greatly appreciated.
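
    Two bugs are visible in the posted loop, independent of pstat itself: i is read before it is ever assigned (so pst[i] indexes garbage, which explains both the memory fault and the repeated "pid is 2" output), and the character-by-character loop runs once per burst instead of once per process. A corrected sketch of the inner loop, assuming the same declarations as above:

        while ((count = pstat_getproc(pst, sizeof(pst[0]), BURST, idx)) > 0) {
            for (i = 0; i < count; i++) {
                /* pst_ucomm is a NUL-terminated array inside pst_status,
                   so %s is safe once i is a valid index */
                printf("pid is %d, command is %s\n",
                       (int)pst[i].pst_pid, pst[i].pst_ucomm);
            }
            /* resume the scan after the last process of this burst */
            idx = pst[count - 1].pst_idx + 1;
        }

    With i initialized by the for loop, both questions collapse into one: %s handles the string fine; it was the uninitialized index that made the reads undefined.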

  • Properly clean up Excel interop objects revisited: Wrapper objects

    - by chiccodoro
    Hi all, Excel 2007 Hangs When Closing via .NET How to properly clean up Excel interop objects in C# How to properly clean up interop objects in C# All of these struggle with the problem that C# does not release the Excel COM objects properly after using them. There are mainly two directions of working around this issue: Kill the Excel process when Excel is not used anymore. Take care to assign each COM object used explicitly to a variable and to Marshal.ReleaseComObject all of these. Some have stated that 2 is too tedious and there is always some uncertainty whether you forget to stick to this rule at some places in the code. Still 1 seems dirty and dangerous to me, also I could imagine that in an environment with restricted access killing processes is not allowed. So I've been thinking about solving 2 by creating another proxy object model which mimics the Excel object model (for me, it would suffice to implement the objects I actually need). The principle would look as follows: Each Excel Interop class has its proxy which wraps an object of that class. The proxy releases the COM object in its destructor. The proxy mimics the interface of the Interop class (maybe by inheriting it). Any methods that usually return another COM object return a proxy instead. The other methods simply delegate the implementation to the inner COM object. This is a rough sketch of the code: public class Application : Microsoft.Office.Interop.Excel.Application { private Microsoft.Office.Interop.Excel.Application innerApplication = new Microsoft.Office.Interop.Excel.Application innerApplication(); ~Application() { Marshal.ReleaseCOMObject(innerApplication); } public Workbooks Workbooks { get { return new Workbooks(innerApplication.Workbooks); } } } public class Workbooks { private Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks; Workbooks(Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks) { this.innerWorkbooks = innerWorkbooks; } ~Workbooks() { Marshal.ReleaseCOMObject(innerWorkbooks); } } My questions to you are in particular: Who finds this a bad idea and why? Who finds this a gread idea? If so, why hasn't anybody implemented/published such a model yet? Just due to the effort, or am I missing a killing problem with that idea? Is it impossible/bad/dangerous to do the ReleaseCOMObject in the destructor? (I've only seen proposals to put it in a Dispose() rather than in a destructor - why?) If the approach makes sense, any suggestions to improve it?
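
    One refinement to the sketch worth considering: releasing COM objects in a finalizer happens nondeterministically on the GC's finalizer thread, whereas implementing IDisposable as well gives callers a deterministic using(...) scope, with the finalizer kept only as a backstop. (Note the BCL method is Marshal.ReleaseComObject.) A hedged sketch of one wrapper in that style:

        public sealed class Workbooks : IDisposable
        {
            private Microsoft.Office.Interop.Excel.Workbooks inner;

            internal Workbooks(Microsoft.Office.Interop.Excel.Workbooks inner)
            {
                this.inner = inner;
            }

            public void Dispose()
            {
                if (inner != null)
                {
                    System.Runtime.InteropServices.Marshal.ReleaseComObject(inner);
                    inner = null;
                    GC.SuppressFinalize(this);
                }
            }

            ~Workbooks() { Dispose(); }   // backstop if the caller forgets using(...)
        }

    Usage then becomes using (var books = app.Workbooks) { ... }, which answers the "destructor vs Dispose()" question directly: Dispose() is preferred because you control when it runs; the finalizer merely catches the forgotten cases.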

  • MSSQL: How to copy a file (pdf, doc, txt...) stored in a varbinary(max) field to a file in a CLR sto

    - by user193655
    I ask this question as a followup of this question. A solution that uses bcp and xp_cmdshell, that is not my desired solution, has been posted here: stackoverflow.com/questions/828749/ms-sql-server-2005-write-varbinary-to-file-system (sorry i cannot post a second hyperlink since my reputation is les than 10). I am new to c# (since I am a Delphi developer) anyway I was able to create a simple CLR stored procedures by following a tutorial. My task is to move a file from the client file system to the server file system (the server can be accessed using remote IP, so I cannot use a shared folder as destination, this is why I need a CLR stored procedure). So I plan to: 1) store from Delphi the file in a varbinary(max) column of a temporary table 2) call the CLR stored procedure to create a file at the desired path using the data contained in the varbinary(max) field Imagine I need to move C:\MyFile.pdf to Z:\MyFile.pdf, where C: is a harddrive on local system and Z: is an harddrive on the server. I provide the code below (not working) that someone can modify to make it work? Here I suppose to have a table called MyTable with two fields: ID (int) and DATA (varbinary(max)). Please note it doesn't make a difference if the table is a real temporary table or just a table where I temporarly store the data. I would appreciate if some exception handling code is there (so that I can manage an "impossible to save file" exception). I would like to be able to write a new file or overwrite the file if already existing. [Microsoft.SqlServer.Server.SqlProcedure] public static void VarbinaryToFile(int TableId) { using (SqlConnection connection = new SqlConnection("context connection=true")) { connection.Open(); SqlCommand command = new SqlCommand("select data from mytable where ID = @TableId", connection); command.Parameters.AddWithValue("@TableId", TableId); // This was the sample code I found to run a query //SqlContext.Pipe.ExecuteAndSend(command); // instead I need something like this (THIS IS META_SYNTAX!!!): SqlContext.Pipe.ResultAsStream.SaveToFile('z:\MyFile.pdf'); } } (one subquestion is: is this approach coorect or there is a way to directly pass the data to the CLR stored procedure so I don't need to use a temp table?) If the subquestion's answer is No, could you describe the approach of avoiding a temp table? So is there a better way then the one I describe above (=temp table + Stored procedure)? A way to directly pass the dataastream from the client application to the CLR stored procedure? (my files can be any size but also very big)
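
    A sketch of the missing middle of the procedure - streaming the varbinary out with SqlDataReader.GetBytes and writing it via FileStream. It assumes the mytable/data/ID names from the question, and note the assembly must be cataloged with EXTERNAL_ACCESS (or UNSAFE) permission for file-system access:

        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void VarbinaryToFile(int tableId, string destPath)
        {
            using (SqlConnection connection = new SqlConnection("context connection=true"))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "select data from mytable where ID = @TableId", connection);
                command.Parameters.AddWithValue("@TableId", tableId);

                using (SqlDataReader reader =
                           command.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    if (!reader.Read())
                        throw new ArgumentException("No row with that ID.");

                    // FileMode.Create writes a new file or overwrites an existing one
                    using (FileStream fs = new FileStream(destPath, FileMode.Create))
                    {
                        byte[] buffer = new byte[8040];
                        long offset = 0, read;
                        while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                        {
                            fs.Write(buffer, 0, (int)read);
                            offset += read;
                        }
                    }
                }
            }
        }

    On the subquestion: you could skip the temp table by declaring a varbinary(max) parameter on the procedure instead, but for files of arbitrary size the staging-table-plus-GetBytes route shown here keeps memory flat on both client and server, since neither side ever holds the whole file.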

  • Sharing a UIView between UIViewControllers in a UITabBarController

    - by Wireless Designs
    Hi all - I have a UIScrollView that houses a gallery of images the user can scroll through. This view needs to be visible on each of three separate UIViewControllers that are housed within a UITabBarController. Right now, I have three separate UIScrollView instances in the UITabBarController subclass, and the controller manages keeping the three synchronized (when a user scrolls the one they can see, programmatically scrolling the other two to match, etc.), which is not ideal. I would like to know if there is a way to work with only ONE instance of the UIScrollView, but have it show up only in the UIViewController that the user is currently interacting with. This would completely eliminate all the synchronization code. Here is basically what I have now in the UITabBarController (which is where all this is currently managed): @interface ScrollerTabBarController : UITabBarController { FirstViewController *firstView; SecondViewController *secondView; ThirdViewController *thirdView; UIScrollView *scrollerOne; UIScrollView *scrollerTwo; UIScrollView *scrollerThree; } @property (nonatomic,retain) IBOutlet FirstViewController *firstView; @property (nonatomic,retain) IBOutlet SecondViewController *secondView; @property (nonatomic,retain) IBOutlet ThirdViewController *thirdView; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerOne; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerTwo; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerThree; @end @implementation ScrollerTabBarController - (void)layoutScroller:(UIScrollView *)scroller {} - (void)scrollToMatch:(UIScrollView *)scroller {} - (void)viewDidLoad { [self layoutScroller:scrollerOne]; [self layoutScroller:scrollerTwo]; [self layoutScroller:scrollerThree]; [scrollerOne setDelegate:self]; [scrollerTwo setDelegate:self]; [scrollerThree setDelegate:self]; [firstView setGallery:scrollerOne]; [secondView setGallery:scrollerTwo]; [thirdView setGallery:scrollerThree]; } - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView { [self scrollToMatch:scrollView]; } @end The UITabBarController gets notified (as the scroll view's delegate) when the user scrolls one of the instances, and then calls methods like scrollToMatch: to sync up the other two with the user's choice. Is there something that can be done, using a many-to-one relationship on IBOutlet or something like that, to narrow this down to one instance so I'm not having to manage three scroll views? I tried keeping a single instance and moving the pointer from one view to the next using the UITabBarControllerDelegate methods (calling setGallery:nil on the current and setGallery:scrollerOne on the next each time it changed), but the scroller never moved to the other tabs. Thanks in advance!
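
    Since a UIView can only ever have one superview, the single-instance approach usually comes down to re-parenting the one scroll view whenever the tab changes; addSubview: removes it from its old superview automatically. A minimal sketch in the tab bar controller's delegate, assuming each content controller reserves the same on-screen slot for the gallery (the frame values are hypothetical):

        - (void)tabBarController:(UITabBarController *)tabBarController
         didSelectViewController:(UIViewController *)viewController
        {
            // move the one shared scroller into the newly selected tab
            [viewController.view addSubview:sharedGallery];
            sharedGallery.frame = CGRectMake(0, 0, 320, 160);
        }

    If this is roughly what the setGallery: attempt did, the usual gotcha is that the scroll view was never actually added as a subview of the new controller's (loaded) view, so it silently stayed parented to the first tab.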

  • Log4r : logger inheritance, yaml configuration, alternatives ?

    - by devlearn
    Hello, I'm pretty new to ruby environments and I was looking for a nice logging framework to use it my ruby and rails applications. In my previous experiences I have successfully used log4j and log4p (the perl port) and was expecting the same level of usability (and maturity) with log4r. However I must say that there are a number of things that are not clear at all in the log4r framework. 1 Logger Inheritance The logger inheritance does not seem to be managed at all ! If I declare a logger named 'myapp' and then try to get a logger name 'myapp::engine', the lookup will end with a NameError. I would expect that the framework returns the root logger according to the naming scheme and to use the 'myapp' logger. Q1 : Of course I can work around this and manage the names by myself with a lookup method, however is there a cleaner way to do this without any extra coding ? 2 YAML configuration Second thing that confuses me is the yaml configuration. On the log4r site there are literally no information about this system, the doc links forward to missing pages, so all the info I can find about is contained in the examples directory of the gem. I was pretty confused with the fact that the yaml configuration must contain the pre_config section, and that I need to define my own levels. If I remove the pre_config secion, or replace all the “custom” levels by the standard ones ( debug, info, warn, fatal ) , the example will throw the following error : log4r/yamlconfigurator.rb:68:in `decode_yaml': Log level must be in 0..7 (ArgumentError) So there seems to be no way of using a simple file where we only declare the loggers and appenders for the framework. Q2 : I realy think that I missed something and that must be a way of providing a simple yaml conf file. Do you have any examples of such an usage ? 3 Variables substitution in XML file Q3 : The Yaml configuration system seems to provide such a feature however I was unable to find a similar feature with XML files. Any ideas ? 4 Alternatives ? I must say that I'm very disappointed by the feature level and the maturity of log4r compared to the log4j and other log4j ports. I run into this framework with a solid background of logging APIs in other languages and find myself working around in all kinds just to make 'basic things' running in a “real world application”. By that I mean a complex application composed of several gems, console/scripting apps, and a rails web front end where the configuration must be mutualized and where we make intensive usage of namespaces and inheritance. I've run several searches in order to find something more suitable or mature, but did not find anything similar. Q4 : Do you guys know any (serious) alternatives to log4r framework that could be used in a enterprise class app ? Thanks reading all of this ! I'd really appreciate any pointers, Kind Regards,
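
    On Q1, a small wrapper recovers the log4j-style fallback: walk the name up its :: segments until a configured logger is found, ending at the root logger. A sketch, assuming Log4r::Logger[] returns nil for undefined names (older releases raise instead, in which case wrap the lookup in a rescue):

        def logger_for(name)
          parts = name.split('::')
          until parts.empty?
            logger = Log4r::Logger[parts.join('::')]
            return logger if logger
            parts.pop   # fall back: 'myapp::engine' -> 'myapp'
          end
          Log4r::Logger.root
        end

    It is extra coding, which is exactly the complaint, but it centralizes the lookup in one place so the rest of the codebase can ask for fully qualified names as in log4j.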

  • MongoMapper and migrations

    - by Clint Miller
    I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model: class SomeModel include MongoMapper::Document key :some_key, String end Later in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this: class SomeModel include MongoMapper::Document key :some_key, String key :some_new_key, String, :required => true end How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this class SomeModel include MongoMapper::Document key :some_new_key, String, :required => true end But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space? With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version1 - version2 migration) and to delete the values for some_key (in the version2 - version3 migration). What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist? EDITED: I think people are missing the point of my question. There are times where you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above, one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine what scripts have already been run and what scripts have not been run. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.
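
    Since MongoMapper exposes the underlying driver collection, both data changes reduce to one-off multi-document updates; what is genuinely missing is the bookkeeping, which people typically recreate as ordered rake tasks plus a tiny collection recording which scripts have run. A sketch of the two updates themselves ('default value' is a placeholder):

        # v1 -> v2: backfill the new required key on existing documents
        SomeModel.collection.update(
          { 'some_new_key' => { '$exists' => false } },
          { '$set' => { 'some_new_key' => 'default value' } },
          :multi => true
        )

        # v2 -> v3: drop the dead key and reclaim its space
        SomeModel.collection.update(
          {},
          { '$unset' => { 'some_key' => 1 } },
          :multi => true
        )

    Wrapping each script in a rake task that first checks (then inserts) its own name in a "data_migrations" collection gives the run-once tracking ActiveRecord migrations provide, without any schema machinery Mongo itself doesn't need.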

  • General advice and guidelines on how to properly override object.GetHashCode()

    - by Svish
    According to MSDN, a hash function must have the following properties: If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two object do not have to return different values. The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again. For the best performance, a hash function must generate a random distribution for all input. I keep finding myself in the following scenario: I have created a class, implemented IEquatable<T> and overridden object.Equals(object). MSDN states that: Types that override Equals must also override GetHashCode ; otherwise, Hashtable might not work correctly. And then it usually stops up a bit for me. Because, how do you properly override object.GetHashCode()? Never really know where to start, and it seems to be a lot of pitfalls. Here at StackOverflow, there are quite a few questions related to GetHashCode overriding, but most of them seems to be on quite particular cases and specific issues. So, therefore I would like to get a good compilation here. An overview with general advice and guidelines. What to do, what not to do, common pitfalls, where to start, etc. I would like it to be especially directed at C#, but I would think it will work kind of the same way for other .NET languages as well(?). I think maybe the best way is to create one answer per topic with a quick and short answer first (close to one-liner if at all possible), then maybe some more information and end with related questions, discussions, blog posts, etc., if there are any. I can then create one post as the accepted answer (to get it on top) with just a "table of contents". Try to keep it short and concise. And don't just link to other questions and blog posts. Try to take the essence of them and then rather link to source (especially since the source could disappear. Also, please try to edit and improve answers instead of created lots of very similar ones. I am not a very good technical writer, but I will at least try to format answers so they look alike, create the table of contents, etc. I will also try to search up some of the related questions here at SO that answers parts of these and maybe pull out the essence of the ones I can manage. But since I am not very stable on this topic, I will try to stay away for the most part :p
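
    To seed the compilation with the most common pattern: combine the hash codes of exactly the fields that participate in Equals, using multiply-and-add in an unchecked block. A sketch with two hypothetical fields:

        public override int GetHashCode()
        {
            unchecked // integer overflow is expected and fine here
            {
                int hash = 17;
                hash = hash * 31 + id.GetHashCode();                         // value-type field
                hash = hash * 31 + (name == null ? 0 : name.GetHashCode()); // reference field
                return hash;
            }
        }

    The guideline that falls straight out of MSDN's second rule: only hash fields that are immutable for the object's lifetime (or at least while it sits in a hash-based collection), since a changing hash code strands the entry in the wrong bucket.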

  • Hooking DirectX EndScene from an injected DLL

    - by Etan
    I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, you could take the frame counter overlay of FRAPS, which is shown in games when activated. I know the following methods to do this: Creating a new d3d9.dll, which is then copied to the games path. Since the current folder is searched first, before going to system32 etc., my modified DLL gets loaded, executing my additional code. Downside: You have to put it there before you start the game. Same as the first method, but replacing the DLL in system32 directly. Downside: You cannot add game specific code. You cannot exclude applications where you don't want your DLL to be loaded. Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as is, you can just add this offset to the DLL starting address, when it is mapped to the game, to get the actual offset, and then hook it. Downside: The offset is not the same on every system. Hooking Direct3DCreate9 to get the D3D9, then hooking D3D9-CreateDevice to get the device pointer, and then hooking Device-EndScene through the virtual table. Downside: The DLL cannot be injected, when the process is already running. You have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9. Creating a new Device in a new window, as soon as the DLL gets injected. Then, getting the EndScene offset from this device and hooking it, resulting in a hook for the device which is used by the game. Downside: as of some information I have read, creating a second device may interfere with the existing device, and it may bug with windowed vs. fullscreen mode etc. Same as the third method. However, you'll do a pattern scan to get EndScene. Downside: doesn't look that reliable. How can I hook EndScene from an injected DLL, which may be loaded when the game is already running, without having to deal with different d3d9.dll's on other systems, and with a method which is reliable? How does FRAPS for example perform it's DirectX hooks? The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread.
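
    For completeness, here is what method 5 from the list typically looks like. The commonly cited vtable slot for EndScene in the d3d9 vtable is 42, but that is convention rather than contract, so verify it against the SDK headers you target. A hedged C++ sketch (hiddenWnd is an assumed pre-created hidden window):

        // create a throwaway windowed device purely to read the vtable
        IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed = TRUE;
        pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
        pp.hDeviceWindow = hiddenWnd;
        IDirect3DDevice9 *dev = NULL;
        d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hiddenWnd,
                          D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &dev);

        void **vtbl = *reinterpret_cast<void***>(dev);
        void *endScene = vtbl[42];   // EndScene by convention; verify per SDK!
        // detour endScene with your hooking library, then Release() dev and d3d:
        // COM objects of the same class share one vtable, so the hook also
        // covers the game's own device

    This also addresses the "second device" worry raised under method 5: the temporary device exists only long enough to read the shared vtable and is released before the overlay ever draws.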

  • getting rid of filesort on WordPress MySQL query

    - by Hans
    An instance of WordPress that I manage goes down about once a day due to this monster MySQL query taking far too long:

        SELECT SQL_CALC_FOUND_ROWS DISTINCT wp_posts.*
        FROM wp_posts
        LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id)
        LEFT JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
        LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id = id
        WHERE 1=1
          AND wp_posts.ID NOT IN (
            SELECT tr.object_id
            FROM wp_term_relationships AS tr
            INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
            WHERE tt.taxonomy = 'category' AND tt.term_id IN ('1050'))
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish')
          AND NOT EXISTS (
            SELECT * FROM wp_term_relationships
            JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
            WHERE wp_term_relationships.object_id = wp_posts.ID
              AND wp_term_taxonomy.taxonomy = 'category'
              AND wp_term_taxonomy.term_id IN (533,3567))
          AND ec3_sch.post_id IS NULL
        GROUP BY wp_posts.ID
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 10;

    What do I have to do to get rid of the very slow filesort? I would have thought the multicolumn type_status_date index would be fast enough. The EXPLAIN EXTENDED output is below.

        | id | select_type        | table                 | type   | possible_keys                     | key              | key_len | ref                                                                              | rows | Extra                                        |
        |  1 | PRIMARY            | wp_posts              | ref    | type_status_date                  | type_status_date | 124     | const,const                                                                      | 7034 | Using where; Using temporary; Using filesort |
        |  1 | PRIMARY            | wp_term_relationships | ref    | PRIMARY                           | PRIMARY          | 8       | bwog_wordpress_w.wp_posts.ID                                                     |  373 | Using index                                  |
        |  1 | PRIMARY            | wp_term_taxonomy      | eq_ref | PRIMARY                           | PRIMARY          | 8       | bwog_wordpress_w.wp_term_relationships.term_taxonomy_id                          |    1 | Using index                                  |
        |  1 | PRIMARY            | ec3_sch               | ref    | post_id_index                     | post_id_index    | 9       | bwog_wordpress_w.wp_posts.ID                                                     |    1 | Using where; Using index                     |
        |  3 | DEPENDENT SUBQUERY | wp_term_taxonomy      | range  | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106     | NULL                                                                             |    2 | Using where                                  |
        |  3 | DEPENDENT SUBQUERY | wp_term_relationships | eq_ref | PRIMARY,term_taxonomy_id          | PRIMARY          | 16      | bwog_wordpress_w.wp_posts.ID,bwog_wordpress_w.wp_term_taxonomy.term_taxonomy_id  |    1 | Using index                                  |
        |  2 | DEPENDENT SUBQUERY | tt                    | const  | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106     | const,const                                                                      |    1 |                                              |
        |  2 | DEPENDENT SUBQUERY | tr                    | eq_ref | PRIMARY,term_taxonomy_id          | PRIMARY          | 16      | func,const                                                                       |    1 | Using index                                  |

        8 rows in set, 2 warnings (0.05 sec)

    And the CREATE TABLE:

        CREATE TABLE `wp_posts` (
          `ID` bigint(20) unsigned NOT NULL auto_increment,
          `post_author` bigint(20) unsigned NOT NULL default '0',
          `post_date` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_date_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_content` longtext NOT NULL,
          `post_title` text NOT NULL,
          `post_excerpt` text NOT NULL,
          `post_status` varchar(20) NOT NULL default 'publish',
          `comment_status` varchar(20) NOT NULL default 'open',
          `ping_status` varchar(20) NOT NULL default 'open',
          `post_password` varchar(20) NOT NULL default '',
          `post_name` varchar(200) NOT NULL default '',
          `to_ping` text NOT NULL,
          `pinged` text NOT NULL,
          `post_modified` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_modified_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
          `post_content_filtered` text NOT NULL,
          `post_parent` bigint(20) unsigned NOT NULL default '0',
          `guid` varchar(255) NOT NULL default '',
          `menu_order` int(11) NOT NULL default '0',
          `post_type` varchar(20) NOT NULL default 'post',
          `post_mime_type` varchar(100) NOT NULL default '',
          `comment_count` bigint(20) NOT NULL default '0',
          `robotsmeta` varchar(64) default NULL,
          PRIMARY KEY (`ID`),
          KEY `post_name` (`post_name`),
          KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
          KEY `post_parent` (`post_parent`),
          KEY `post_date` (`post_date`),
          FULLTEXT KEY `post_related` (`post_title`,`post_content`)
        )
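    One common line of attack, sketched below with caveats: the "Using temporary; Using filesort" on wp_posts comes from grouping on wp_posts.ID while ordering on wp_posts.post_date, which MySQL cannot satisfy in a single index pass. If post IDs increase with post_date — true for a stock WordPress install, but an assumption worth verifying — grouping and ordering on the same column lets the optimizer walk the primary key instead of sorting a temporary table. A minimal sketch, not a verified fix for this schema:

        -- Sketch only: ORDER BY the grouping column so GROUP BY and
        -- ORDER BY can be satisfied together. The ordering is equivalent
        -- only if IDs are assigned in post_date order (assumption).
        SELECT SQL_CALC_FOUND_ROWS DISTINCT wp_posts.*
        FROM wp_posts
        LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id = wp_posts.ID
        WHERE wp_posts.post_type = 'post'
          AND wp_posts.post_status = 'publish'
          AND ec3_sch.post_id IS NULL
          -- the two category anti-join conditions stay exactly as before
        GROUP BY wp_posts.ID
        ORDER BY wp_posts.ID DESC
        LIMIT 0, 10;

    SQL_CALC_FOUND_ROWS itself forces the full result set to be counted, so dropping it (and issuing a separate COUNT query only when the pagination total is really needed) is a second lever.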

  • How to log in to WordPress programmatically?

    - by T-Rex
    I need to perform some actions in the WordPress admin panel programmatically, but I can't work out how to log in to WordPress using C# and HttpWebRequest. Here is what I do:

        private void button1_Click(object sender, EventArgs e)
        {
            string url = "http://localhost/wordpress/wp-login.php";
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            CookieContainer cookies = new CookieContainer();
            SetupRequest(url, request, cookies);
            //request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
            //request.Headers["Accept-Language"] = "uk,ru;q=0.8,en-us;q=0.5,en;q=0.3";
            //request.Headers["Accept-Encoding"] = "gzip,deflate";
            //request.Headers["Accept-Charset"] = "windows-1251,utf-8;q=0.7,*;q=0.7";
            string user = "test";
            string pwd = "test";
            request.Credentials = new NetworkCredential(user, pwd);
            string data = string.Format(
                "log={0}&pwd={1}&wp-submit={2}&testcookie=1&redirect_to={3}",
                user, pwd,
                System.Web.HttpUtility.UrlEncode("Log In"),
                System.Web.HttpUtility.UrlEncode("http://localhost/wordpress/wp-admin/"));
            SetRequestData(request, data);
            ShowResponse(request);
        }

        private static void SetupRequest(string url, HttpWebRequest request, CookieContainer cookies)
        {
            request.CookieContainer = cookies;
            request.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.0; uk; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729)";
            request.KeepAlive = true;
            request.Timeout = 120000;
            request.Method = "POST";
            request.Referer = url;
            request.ContentType = "application/x-www-form-urlencoded";
        }

        private void ShowResponse(HttpWebRequest request)
        {
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            responseTextBox.Text = response.StatusDescription;
            responseTextBox.Text += "\r\n";
            StreamReader reader = new StreamReader(response.GetResponseStream());
            responseTextBox.Text += reader.ReadToEnd();
        }

        private static void SetRequestData(HttpWebRequest request, string data)
        {
            byte[] streamData = Encoding.ASCII.GetBytes(data);
            request.ContentLength = streamData.Length;
            Stream dataStream = request.GetRequestStream();
            dataStream.Write(streamData, 0, streamData.Length);
            dataStream.Close();
        }

    But unfortunately the response contains only the HTML source of the login page, and the cookies don't seem to contain a session ID. Every request I make after this code also returns the HTML of the login page, so I assume the login is not succeeding. Can anybody help me solve this or share a working example? The end goal is to scan for new images in the NextGEN Gallery plugin for WordPress — is there an XML-RPC way of doing that? Thanks in advance.
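    A frequent cause of exactly this symptom, offered as a hedged sketch rather than a guaranteed fix: wp-login.php rejects the POST unless the wordpress_test_cookie it sets on the login page comes back with the request (that is what the testcookie=1 field is checked against), and NetworkCredential only drives HTTP-level authentication, not form fields, so it does nothing here. Seeding the test cookie before POSTing, with the question's local URL and test credentials assumed:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class WpLoginSketch
        {
            static void Main()
            {
                string loginUrl = "http://localhost/wordpress/wp-login.php";
                var cookies = new CookieContainer();

                // WordPress checks for this exact cookie (value "WP Cookie check");
                // alternatively, GET wp-login.php first with the same container.
                cookies.Add(new Uri("http://localhost/wordpress/"),
                            new Cookie("wordpress_test_cookie", "WP Cookie check"));

                var request = (HttpWebRequest)WebRequest.Create(loginUrl);
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                request.CookieContainer = cookies;
                request.AllowAutoRedirect = false; // a 302 to wp-admin/ signals success

                string body = "log=test&pwd=test&wp-submit=Log+In&testcookie=1"
                            + "&redirect_to=" + Uri.EscapeDataString("http://localhost/wordpress/wp-admin/");
                byte[] data = Encoding.ASCII.GetBytes(body);
                request.ContentLength = data.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(data, 0, data.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    // On success the auth cookies now live in `cookies`; reuse the
                    // same container for every subsequent wp-admin request.
                    Console.WriteLine((int)response.StatusCode); // expect 302
                }
            }
        }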

  • How to override PHP configuration when running in CGI mode

    - by Fitrah M
    There are tutorials out there on how to override the PHP configuration when PHP runs in CGI mode, but I'm still confused because most of them assume the server is running Linux, and I also need to do this on Windows. My hosting does use Linux, but my local development machine runs Windows XP with Xampp 1.7.3, so I want to get it working locally first and then change the configuration on the hosting server. PHP on the hosting server already runs as CGI, while on my local machine it still runs as an Apache module. So far, the steps as I understand them are:

    1. Change PHP to work in CGI mode. I did this by commenting out these two lines in "httpd-xampp.conf":

        # LoadFile "C:/xampp/php/php5ts.dll"
        # LoadModule php5_module modules/php5apache2_2.dll

    2. Create a "cgi-bin" directory in the DocumentRoot. My DocumentRoot is "D:\www\" (I'm using Apache with a virtual host), so it is now "D:\www\cgi-bin".

    3. Change the default "cgi-bin" directory settings from "C:/xampp/cgi-bin/" to "D:\www\cgi-bin":

        ScriptAlias /cgi-bin/ "D:/www/cgi-bin/"
        <Directory "D:\www\cgi-bin">
            Options MultiViews Indexes SymLinksIfOwnerMatch Includes ExecCGI
            AllowOverride All
            Allow from All
        </Directory>

    4. At this point my PHP runs as CGI. I checked this with phpinfo(); it reports that the Server API is now CGI/FastCGI.

    5. Now I want to override the PHP configuration. I copied the 'php.ini' file to "D:\www\cgi-bin" and changed the upload_max_filesize setting from 128M to 10M.

    6. Create a 'php.cgi' file in "D:\www\cgi-bin" and put this code inside:

        #!/bin/sh
        /usr/local/cpanel/cgi-sys/php5 -c /home/user/public_html/cgi-bin/

    That's where I'm stuck. All the tutorials tell me to create a 'php.cgi' file and put shell code inside it. How do I do step 6 on Windows? I know the next step is to create a handler in the .htaccess file to load that 'php.cgi'. Also, because I will eventually need to change the PHP configuration on my hosting server (Linux), is step 6 above right? Some tutorials say to use these lines instead:

        #!/bin/sh
        export PHPRC=/site/ini/1
        exec /cgi-bin/php5.cgi

    I'm sorry if my question is not clear; I'm a new member and this is my first question on this site. Thank you.
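    On the Windows step specifically (step 6), a #!/bin/sh script won't run, but Apache on Windows can execute a batch file placed in cgi-bin, and php-cgi.exe accepts the same -c switch for pointing at a php.ini directory. An untested sketch of that idea — the php-cgi.exe path is Xampp's usual location and the handler name is made up, so adjust both:

        @echo off
        rem D:\www\cgi-bin\php.bat -- Windows stand-in for the Unix php.cgi
        rem wrapper; -c tells PHP which directory holds the overriding php.ini
        C:\xampp\php\php-cgi.exe -c D:\www\cgi-bin\ %*

    and, in D:\www\.htaccess (requires mod_actions):

        Action application/x-httpd-php-wrapped /cgi-bin/php.bat
        AddHandler application/x-httpd-php-wrapped .php

    For the Linux hosting server, either of the two shell wrappers quoted above is the usual pattern: the -c switch and the PHPRC environment variable do the same job of pointing PHP at the directory containing your php.ini, so pick whichever matches your host's PHP binary path.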

  • Facebook Developer ToolKit: How should I construct this app?

    - by j0nscalet
    I have created a simple desktop application that I want to use to post status updates for the users of my app. Here's the kicker that I'm having trouble figuring out: the desktop application runs as part of a batch process every night, in which I update the status of certain users. I use the following code to accomplish this (it comes directly from the FDK samples):

        public FriendViewer()
        {
            InitializeComponent();
            facebookService1.ApplicationKey = "Key";
            facebookService1.Secret = "Secret";
            facebookService1.SessionKey = "Session key";
            facebookService1.IsDesktopApplication = true;
        }

        private void TestService_Load(object sender, EventArgs e)
        {
            try
            {
                if (!facebookService1.API.users.hasAppPermission(facebook.Types.Enums.Extended_Permissions.status_update))
                    facebookService1.GetExtendedPermission(facebook.Types.Enums.Extended_Permissions.status_update);
                if (!facebookService1.API.users.hasAppPermission(facebook.Types.Enums.Extended_Permissions.offline_access))
                    facebookService1.GetExtendedPermission(facebook.Types.Enums.Extended_Permissions.offline_access);

                long uid = facebookService1.users.getLoggedInUser();
                facebook.Schema.user user = facebookService1.users.getInfo(uid);
                facebookService1.users.setStatus("Facebook Syndicator rules!");
                MessageBox.Show(String.Format("Status set for {0} {1}", user.first_name, user.last_name));
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
                Close();
            }
        }

    My users' day-to-day activity happens through a website front end. Since there is no user interaction in a nightly batch process, I cannot use the ConnectToFaceBook method on the FaceBookService to obtain a sessionKey for the user. Ideally I would like to prompt for authorization and extended permissions for my desktop app when a user logs in to the web front end, then save the sessionKey and uid in the database. At night when my process runs, I would look up the sessionKey and uid and update the user's status. I find myself fumbling over whether my app should be a web app or a desktop app. Having both a web and a desktop app would be confusing to my users, because they would have to grant and manage permissions for both apps. Am I looking at this the wrong way? Any help would be greatly appreciated! Thanks.
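    For what it's worth, the plan described above is the usual pattern, and a hedged sketch of the nightly side follows. The only toolkit members used are ones the snippet above already exercises; the namespace and the LoadSessionKeyFromDatabase helper are assumptions to adapt:

        // Sketch, untested: rehydrate a session the web front end saved
        // after the user granted offline_access (such session keys do not
        // expire, so no login UI is needed at night).
        string sessionKeyFromDb = LoadSessionKeyFromDatabase(); // hypothetical helper
        var fb = new facebook.Components.FacebookService();     // namespace assumed
        fb.ApplicationKey = "Key";
        fb.Secret = "Secret";
        fb.IsDesktopApplication = true;
        fb.SessionKey = sessionKeyFromDb;
        long uid = fb.users.getLoggedInUser(); // resolves the user behind the session
        fb.users.setStatus("Posted by the nightly batch");

    On the web-versus-desktop worry: a single web app can request status_update and offline_access when the user logs in to the front end, so users only ever grant one permission set; the nightly process is then just code reusing the stored keys, not a second Facebook app for them to manage.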

  • kill -9 doesn't work

    - by Daniel
    I have a server with 3 Oracle instances on it, and the file system is NFS on a NetApp filer. After shutting down the databases, one process for each database refuses to quit for a long time, and kill -9 on each of them does nothing. I tried to truss and pfiles them, but both commands threw errors. Meanwhile iostat showed lots of I/O to the NetApp server. Someone said the processes were busy writing data to the remote NetApp server and would not quit until the writes completed, so all that could be done was to wait until the I/O finished. After waiting a while longer (about 1.5 hours), the processes did exit. So my question is: how can a process ignore the kill signal? As far as I know, kill -9 should stop a process immediately. Have you encountered situations where kill -9 doesn't kill a process right away?

        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1469 25053  0 22:36:53 pts/1    0:00 grep dbw0
          oracle 26795     1  0 21:55:23 ?        0:00 ora_dbw0_TEST7
          oracle  1051     1  0   Apr 08 ?     3958:51 ora_dbw0_TEST2
          oracle   471     1  0   Apr 08 ?     6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 1051
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1493 25053  0 22:37:07 pts/1    0:00 grep dbw0
          oracle 26795     1  0 21:55:23 ?        0:00 ora_dbw0_TEST7
          oracle  1051     1  0   Apr 08 ?     3958:51 ora_dbw0_TEST2
          oracle   471     1  0   Apr 08 ?     6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 471
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle 26795     1  0 21:55:23 ?        0:00 ora_dbw0_TEST7
          oracle  1051     1  0   Apr 08 ?     3958:51 ora_dbw0_TEST2
          oracle   471     1  0   Apr 08 ?     6391:43 ora_dbw0_TEST1
          oracle  1495 25053  0 22:37:22 pts/1    0:00 grep dbw0
        TEST7-stdby-phxdbnfs11$ ps -ef|grep smon
          oracle  1524 25053  0 22:38:02 pts/1    0:00 grep smon
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1526 25053  0 22:38:06 pts/1    0:00 grep dbw0
          oracle 26795     1  0 21:55:23 ?        0:00 ora_dbw0_TEST7
          oracle  1051     1  0   Apr 08 ?     3958:51 ora_dbw0_TEST2
          oracle   471     1  0   Apr 08 ?     6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 1051 471 26795
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1528 25053  0 22:38:19 pts/1    0:00 grep dbw0
          oracle 26795     1  0 21:55:23 ?        0:00 ora_dbw0_TEST7
          oracle  1051     1  0   Apr 08 ?     3958:51 ora_dbw0_TEST2
          oracle   471     1  0   Apr 08 ?     6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ truss -p 26795
        truss: unanticipated system error: 26795
        TEST7-stdby-phxdbnfs11$ pfiles 26795
        pfiles: unanticipated system error: 26795
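    For the record, a likely mechanism, offered as a hedge rather than a diagnosis: SIGKILL cannot be caught or ignored, but it is only acted on when the process is next in a killable state. A process blocked in uninterruptible sleep inside an NFS write sits in the kernel until the server replies, so the signal stays pending — which would also explain why truss and pfiles errored out and why the processes exited on their own once the I/O drained. Two checks for next time, assuming Solaris-style tools since the transcript uses truss/pfiles:

        # Sketch: 's' prints the process state; a process parked in a
        # kernel wait holds SIGKILL pending until the blocking call returns.
        ps -o pid,s,time,comm -p 26795
        # Mount options matter on NFS: 'hard' mounts retry forever, and
        # without 'intr' (where supported) signals cannot break the wait.
        nfsstat -m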
