Search Results

Search found 4220 results on 169 pages for 'generating passwords'.


  • Getting an odd error, MSSQL Query using `WITH` clause

    - by Aren B
    The following query: WITH CteProductLookup(ProductId, oid) AS ( SELECT p.ProductID, p.oid FROM [dbo].[ME_CatalogProducts] p ) SELECT rel.Name as RelationshipName, pl.ProductId as FromProductId, pl2.ProductId as ToProductId FROM ( [dbo].[ME_CatalogRelationships] rel INNER JOIN CteProductLookup pl ON pl.oid = rel.from_oid ) INNER JOIN CteProductLookup pl2 ON pl2.oid = rel.to_oid WHERE rel.Name = 'BundleItem' AND pl.ProductId = 'MX12345'; is generating this error: Msg 319, Level 15, State 1, Line 5 Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon. The error appears on execution only; there are no errors or warnings on the SQL statement in Management Studio. Any ideas?
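
    For reference, the error message itself names the usual fix (a sketch, nothing schema-specific): when the CTE runs after another statement in the same batch, that statement must end with a semicolon, which is why many people defensively prefix the WITH:

        -- Terminates whatever preceded the CTE in the batch; harmless when
        -- the CTE is the first statement.
        ;WITH CteProductLookup (ProductId, oid) AS (
            SELECT p.ProductID, p.oid
            FROM [dbo].[ME_CatalogProducts] p
        )
        SELECT rel.Name      AS RelationshipName,
               pl.ProductId  AS FromProductId,
               pl2.ProductId AS ToProductId
        FROM [dbo].[ME_CatalogRelationships] rel
             INNER JOIN CteProductLookup pl  ON pl.oid  = rel.from_oid
             INNER JOIN CteProductLookup pl2 ON pl2.oid = rel.to_oid
        WHERE rel.Name = 'BundleItem'
          AND pl.ProductId = 'MX12345';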

    Read the article

  • Caching generated QR Code

    - by Michal K
    I use zxing to encode a QR code, store it as a bitmap, and then show it in an ImageView. Since the image generation time is significant, I'm planning to move it to a separate thread (AsyncTaskLoader will be fine, I think). The problem is that it's an image, and I know that to avoid memory leaks one should never store a strong reference to it in an Activity. So how would you do it? How do I cache an image so it survives config changes (phone rotation) and generally avoid regenerating it in onCreate()? Just point me in the right direction, please.
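
    A minimal sketch of one common approach, assuming a process-wide LruCache is acceptable (the class name and byte budget below are illustrative; LruCache lives in android.util from API 12, or in the support library): keep the bitmap outside the Activity, so rotation only re-reads the cache and the loader regenerates only on a miss.

        import android.graphics.Bitmap;
        import android.util.LruCache;

        // Holds generated QR bitmaps for the life of the process; Activities
        // only hold the reference while they are on screen.
        public final class QrBitmapCache {
            private static final LruCache<String, Bitmap> CACHE =
                    new LruCache<String, Bitmap>(4 * 1024 * 1024) { // ~4 MB budget
                        @Override
                        protected int sizeOf(String key, Bitmap value) {
                            return value.getRowBytes() * value.getHeight();
                        }
                    };

            private QrBitmapCache() {}

            public static Bitmap get(String contents) { return CACHE.get(contents); }

            public static void put(String contents, Bitmap bitmap) { CACHE.put(contents, bitmap); }
        }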

    Read the article

  • Building a subquery with ARel in Rails3

    - by Christopher
    I am trying to build this query in ARel: SELECT FLOOR(AVG(num)) FROM ( SELECT COUNT(attendees.id) AS num, meetings.club_id FROM `meetings` INNER JOIN `attendees` ON `attendees`.`meeting_id` = `meetings`.`id` WHERE (`meetings`.club_id = 1) GROUP BY meetings.id) tmp GROUP BY tmp.club_id It returns the average number of attendees per meeting, per club. (A club has many meetings and a meeting has many attendees.) So far I have (declared in class Club < ActiveRecord::Base): num_attendees = meetings.select("COUNT(attendees.id) AS num").joins(:attendees).group('meetings.id') Arel::Table.new('tmp', self.class.arel_engine).from(num_attendees).project('FLOOR(AVG(num))').group('tmp.club_id').to_sql but I am getting the error: undefined method `visit_ActiveRecord_Relation' for #<Arel::Visitors::MySQL:0x9b42180> The documentation for generating non-trivial ARel queries is a bit hard to come by. I have been using http://rdoc.info/github/rails/arel/master/frames Am I approaching this incorrectly? Or am I a few methods away from a solution?
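
    One workaround, sketched under the assumption that dropping to a SQL string for the derived table is acceptable: let the relation render itself with to_sql instead of handing the relation object to Arel's visitor (which is what raises visit_ActiveRecord_Relation).

        # Inside Club; the inner relation is the one already built above.
        def average_attendance
          inner = meetings.select("COUNT(attendees.id) AS num, meetings.club_id")
                          .joins(:attendees)
                          .group("meetings.id")
                          .to_sql
          self.class.connection.select_value(
            "SELECT FLOOR(AVG(num)) FROM (#{inner}) tmp GROUP BY tmp.club_id")
        end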

    Read the article

  • Linking IronPython to WPF

    - by DonnyD
    I just installed VS2010 and the great new IronPython Tools extension. Currently this extension doesn't yet generate event handlers in code upon double-clicking WPF visual controls. Can anyone provide or point me to an example of how to code WPF event handlers manually in Python? I've had no luck finding any, and I am new to Visual Studio. Upon generating a new IronPython WPF project, the auto-generated code is: import clr clr.AddReference('PresentationFramework') from System.Windows.Markup import XamlReader from System.Windows import Application from System.IO import FileStream, FileMode app = Application() app.Run(XamlReader.Load(FileStream('WpfApplication7.xaml', FileMode.Open))) and the XAML is: <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="WpfApplication7" Height="300" Width="300"> <Button>Click Me</Button> </Window> Any help would be appreciated.
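
    A sketch of the manual wiring, assuming the Button is given x:Name="clickMe" in the XAML (the stock XAML above has no name to look up): handlers are plain Python callables attached with +=.

        import clr
        clr.AddReference('PresentationFramework')
        from System.Windows.Markup import XamlReader
        from System.Windows import Application
        from System.IO import FileStream, FileMode

        def on_click(sender, event_args):
            # Runs on the UI thread when the button is clicked.
            sender.Content = 'Clicked!'

        window = XamlReader.Load(FileStream('WpfApplication7.xaml', FileMode.Open))
        button = window.FindName('clickMe')   # requires x:Name="clickMe" in the XAML
        button.Click += on_click

        app = Application()
        app.Run(window)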

    Read the article

  • The ctags command doesn't recurse, saying "it is not a regular file"

    - by indiv
    When I run ctags -R *, I get errors saying that all directories are not regular files and it skips them instead of recursively generating tags for them. ctags: skipping arpa: it is not a regular file. ctags: skipping asm: it is not a regular file. ctags: skipping asm-generic: it is not a regular file. ctags: skipping bits: it is not a regular file. ctags: skipping blkid: it is not a regular file. ctags: skipping boost: it is not a regular file. What is the problem?
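
    One thing worth ruling out first (an assumption, since the question doesn't say which ctags is installed): only Exuberant Ctags treats -R as recurse; the BSD/Emacs variants parse options differently and will choke on directories.

        $ ctags --version    # Exuberant Ctags identifies itself here
        $ type -a ctags      # shows which ctags binary is actually being run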

    Read the article

  • Using .htaccess to replace backslash in URL with forward-slash

    - by DamienL
    I realise that a backslash should never appear in a URL in any form other than a URL escape code; however, in this case the URLs are being generated by a .NET application for generating flashbooks. I have contacted the developer of this application with a bug report. In the interim I would like to use .htaccess to rewrite the offending backslashes. This is how the URLs appear in the Fiddler debugging proxy: www.example.com/folder/folder/thumbs%5C1.jpg I am using Firefox, and it looks as though Firefox is translating the backslash into its URL-encoded equivalent ( \ == %5C ). Interestingly, IE translates the backslash into a forward slash automatically (not adhering to standards, but convenient in this case). Is there a way to use .htaccess to rewrite all \ to /?
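
    A sketch of the usual mod_rewrite character-replacement idiom (test before relying on it; mod_rewrite matches the already-decoded path, so %5C arrives as a literal backslash):

        RewriteEngine On
        # Replace one backslash per pass and restart the rule set until none
        # remain; NE stops Apache from re-escaping the substituted slash.
        RewriteRule ^(.*)\\(.*)$ $1/$2 [N,NE]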

    Read the article

  • CPU and Data alignment

    - by MS
    Dear All, Pardon me if you feel this has been answered numerous times, but I need answers to the following queries! Why does data have to be aligned (on 4-byte/8-byte/2-byte boundaries)? My doubt here is that when the CPU has address lines Ax Ax-1 Ax-2 ... A2 A1 A0, it is quite possible to address memory locations sequentially, so why is there a need to align data at specific boundaries? How do I find the alignment requirements when I am compiling my code and generating the executable? If, for example, the data alignment is a 4-byte boundary, does that mean each consecutive byte is located at a modulo-4 offset? My doubt is: if data is 4-byte aligned, does that mean that if a byte is at 1004, then the next byte is at 1008 (or at 1005)? Your thoughts are much welcome. Thanks in advance! /MS
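
    On the last doubt: 4-byte alignment constrains where a 4-byte object may start (an address divisible by 4); single bytes still occupy consecutive addresses, so the byte after 1004 is 1005. A small C11 sketch for inspecting alignment requirements at compile time:

        #include <stdio.h>
        #include <stdalign.h>   /* C11: alignof */
        #include <stddef.h>     /* offsetof */

        struct Sample {
            char c;             /* offset 0 */
            int  i;             /* pushed to the next multiple of alignof(int) */
        };

        int main(void) {
            printf("alignof(int)          = %zu\n", alignof(int));
            printf("offsetof(Sample, i)   = %zu\n", offsetof(struct Sample, i));
            printf("sizeof(struct Sample) = %zu\n", sizeof(struct Sample));
            return 0;
        }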

    Read the article

  • java.io.IOException: Invalid argument

    - by Luixv
    Hi I have a web application running in cluster mode with a load balancer. It consists of two Tomcats (T1 and T2) addressing only one DB. T2 is NFS-mounted to T1; this is the only difference between the nodes. I have a Java method generating some files. If the request runs on T1 there is no problem, but if the request runs on node 2 I get an exception as follows: java.io.IOException: Invalid argument at java.io.FileOutputStream.close0(Native Method) at java.io.FileOutputStream.close(FileOutputStream.java:279) The corresponding code is as follows: for (int i = 0; i < dataFileList.size(); i++) { outputFileName = outputFolder + fileNameList.get(i); FileOutputStream fileOut = new FileOutputStream(outputFileName); fileOut.write(dataFileList.get(i), 0, dataFileList.get(i).length); fileOut.flush(); fileOut.close(); } The exception appears at fileOut.close(). Any hint? Luis
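
    Not a diagnosis, but a defensive sketch: on an NFS mount, buffered writes may only be committed to the server when close() runs, so that is where a write error can first surface. try/finally keeps the descriptor from leaking and makes the failing file obvious:

        for (int i = 0; i < dataFileList.size(); i++) {
            String outputFileName = outputFolder + fileNameList.get(i);
            FileOutputStream fileOut = new FileOutputStream(outputFileName);
            try {
                fileOut.write(dataFileList.get(i));
                fileOut.flush();
            } finally {
                fileOut.close();   // the IOException above is thrown from here
            }
        }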

    Read the article

  • javascript increment index in function

    - by mcgrailm
    I'm just trying to figure out how to create an index inside a function so I can keep track of the items it's generating, and I'm not sure how to do this; it seems like I should know this. addLocation(options,location) function addLocation(options,location){ $('#list').append('<li class="location"><div>'+location.label+'</div>'+ '<input type="hidden" name="location['+x+'][lat]" value="'+location.lat+'" />'+ '<input type="hidden" name="location['+x+'][lon]" value="'+location.lon+'" />'+ '<input type="hidden" name="location['+x+'][loc]" value="'+location.loc+'" />'+ '<input type="hidden" name="location['+x+'][label]" value="'+location.label+'" />'+ '</li>'); } Normally you have a for loop to help you keep track of things, but in this instance I'm not appending items in a loop, and I keep getting an error saying x is undefined. I appreciate any help you can give. Thanks, Mike
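
    One way to get that index without a loop (a sketch): derive it from how many location rows are already in the list, so each call picks up the next slot.

        function addLocation(options, location) {
            var x = $('#list li.location').length;   // next free index
            $('#list').append('<li class="location"><div>' + location.label + '</div>' +
                '<input type="hidden" name="location[' + x + '][lat]" value="' + location.lat + '" />' +
                '<input type="hidden" name="location[' + x + '][lon]" value="' + location.lon + '" />' +
                '<input type="hidden" name="location[' + x + '][loc]" value="' + location.loc + '" />' +
                '<input type="hidden" name="location[' + x + '][label]" value="' + location.label + '" />' +
                '</li>');
        }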

    Read the article

  • Stack trace in website project, when debug = false

    - by chandmk
    We have a website project. We are logging unhandled exceptions via an AppDomain-level exception handler. When we set debug=true in web.config, the exception log shows the offending line numbers in the stack trace. But when we set debug=false in web.config, the log does not display the line numbers. We are not in a position to convert the website project into a web application project at this time. It's a legacy application and almost all the code is in aspx pages. We also need to leave the project in 'updatable' mode, i.e. we can't use the pre-compile option. We are generating PDB files. Is there any way to tell this kind of website project to generate the PDB files and show the line numbers in the stack trace?

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    Hi, We have an Ubuntu 10.04 server that has a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't, as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them? Edit: The share is defined in smb.conf as: [global] log file = /var/log/samba/log.%m passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . obey pam restrictions = yes map to guest = bad user encrypt passwords = true passwd program = /usr/bin/passwd %u passdb backend = tdbsam dns proxy = no netbios name = ubsrv server string = ubsrv unix password sync = yes os level = 20 syslog = 0 usershare allow guests = yes panic action = /usr/share/samba/panic-action %d max log size = 1000 pam password change = yes workgroup = workgroup [Projects] valid users = @Staff writeable = yes user = @Staff create mode = 0777 path = /srv/samba/Projects directory mode = 0777 store dos attributes = Yes The folder itself looks like this: ls -l /srv/samba/ drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects Thanks in advance, Matt
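
    One parameter worth testing on the [Projects] share (a suggestion, not a confirmed fix): dos filemode, which lets any user with write permission on a file change its permission bits, rather than only the owner.

        [Projects]
            ...
            store dos attributes = Yes
            dos filemode = Yes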

    Read the article

  • Lack of IsNumeric function in C#

    - by Michael Kniskern
    One thing that has bothered me about C# since its release is the lack of a generic IsNumeric function. I know it is difficult to generate a one-stop solution to determine if a value is numeric. I have used the following solution in the past, but it is not best practice because I am generating an exception to determine if the value is numeric: public bool IsNumeric(string input) { try { int.Parse(input); return true; } catch { return false; } } Is this still the best way to approach this problem, or is there a more efficient way to determine if a value is numeric in C#?
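
    The exception-free idiom is int.TryParse, which reports success through its return value; substitute double.TryParse if "numeric" should include decimals.

        public bool IsNumeric(string input)
        {
            int result;
            return int.TryParse(input, out result);
        }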

    Read the article

  • Generate EF4 POCO classes first time only

    - by Jaxidian
    The problem I'm having is, using the POCO templates, generating my POCO classes the first time only and not overwriting them when the templates are re-run. I know this sounds hokey; the reason is that I'm actually changing these templates and trying to generate metadata classes rather than the actual POCO classes, but these metadata classes will be hand-edited and I want to keep those edits in the future while still regenerating a certain amount of it. I have it all working exactly as I want except for the regeneration of the files. I have looked into T4 and it seems that there is a flag to do just this (see the Output.PreserveExistingFile property) but I don't understand where/how to set this flag. If you can tell me where/how to set this in the default POCO templates, then I think that's all I really need. Thanks!! :-)
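
    If the templates are built on T4 Toolbox (an assumption; the Output.PreserveExistingFile property named above comes from its Template API), the flag is set on the template's Output before Render is called. MetadataClassTemplate below is a placeholder for your template class:

        <#
            var template = new MetadataClassTemplate(entity);
            template.Output.File = entity.Name + ".Metadata.cs";
            template.Output.PreserveExistingFile = true;  // generate once, never overwrite
            template.Render();
        #>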

    Read the article

  • How do I generate different yyparse functions from lex/yacc for use in the same program?

    - by th
    Hi, I want to generate two separate parsing functions from lex/yacc. Normally yacc gives you a function yyparse() that you can call when you need to do some parsing, but I need to have several different yyparses, each associated with different lexers and grammars. The man page seems to suggest the -p (prefix) flag, but this didn't work for me. I got errors from gcc indicating that yylval was not properly being relabeled (i.e. it claims that several different tokens are not defined). Does anyone know the general procedure for generating these separate functions? thanks
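
    A sketch of the usual wiring with flex and bison (assumed here; plain lex/yacc spell the options differently): give both halves the same prefix, and map the lexer's yylval references onto the renamed symbol.

        /* alpha.l -- the lexer half */
        %option prefix="alpha"
        %{
        #include "alpha.tab.h"
        #define yylval alphalval   /* bison -p alpha renames yylval to alphalval */
        %}

        /* build:  bison -d -p alpha alpha.y   ->  alphaparse(), alpha.tab.h
                   flex alpha.l                ->  alphalex()                  */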

    Read the article

  • Generate MVC Html.ActionLink programmatically

    - by swetha
    I have a scenario where I need to redirect my legacy URLs to my current SEO URLs, and I have 300+ different legacy URLs, so I am handling all of them in my 404 handler. For each legacy URL I have the actual action, controller, and parameters it is supposed to use. But the issue is that there is already an SEO-friendly route for these actions and controllers, so I want to redirect each legacy URL to that friendly route. I am planning to generate a link like Html.ActionLink does and move permanently. Can anyone help me in generating this link?
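
    A sketch of building the friendly URL through the routing table and issuing a 301; the action, controller, and route values are placeholders, and requestContext/response stand in for whatever the 404 handler has access to.

        var url = new UrlHelper(requestContext);
        string seoUrl = url.Action("Details", "Products", new { id = legacyProductId });
        response.RedirectPermanent(seoUrl);   // 301 Moved Permanently (.NET 4+)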

    Read the article

  • Can I suppress newlines with Django's template engine?

    - by ento
    In Rails ERB, you can suppress newlines by adding a trailing hyphen to tags: <ul> <% for @item in @items -%> <li><%= @item %></li> <% end -%> </ul> becomes: <ul> <li>apple</li> <li>banana</li> <li>cacao</li> </ul> Is there a way to do this in Django? (Disclosure: I'm generating a csv file with Django)
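
    Django has no ERB-style trim marker, but for markup like the example the built-in spaceless tag strips the whitespace between tags (only between tags, so for CSV output Python's csv module writing the response directly is usually safer):

        {% spaceless %}
        <ul>
          {% for item in items %}
            <li>{{ item }}</li>
          {% endfor %}
        </ul>
        {% endspaceless %}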

    Read the article

  • What is the fastest way to compare two list of items?

    - by edude05
    I have two folders with approximately 10,000 files each. I'd like to write a script or program that can tell me if these folders are in sync and then tell me what files are missing from each to make them in sync. Therefore, after generating a list of files, what is the fastest algorithm to find the files unique to each list? What I'm thinking right now is comparing the first file on each list; if they are different, remove one until they are the same, then remove both from the lists (because they are not unique). Is there a faster algorithm than this?
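
    If "in sync" means "same file names", set difference does this in roughly linear time (a sketch; comparing contents or timestamps would need hashing, or a tool like rsync):

        import os

        a = set(os.listdir("folder_a"))
        b = set(os.listdir("folder_b"))

        missing_from_a = sorted(b - a)   # present only in folder_b
        missing_from_b = sorted(a - b)   # present only in folder_a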

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL setup related than coding. The queries below all execute instantly on our development server, whereas on production they can take up to 2 minutes 20 seconds. The query execution time seems to be affected by how ambiguous the LIKE strings are: if they closely match a country that has few matches it will take less time, and if you use something like 'ge' for Germany it will take longer to execute. But this doesn't always work out like that; at times it's quite erratic. Sending data appears to be the culprit, but why, and what does that mean? Also, memory on production looks to be quite low (free memory)?
    Production: Intel Quad Xeon E3-1220 3.1GHz, 4GB DDR3, 2x 1TB SATA in RAID1, network speed 100Mb, Ubuntu.
    Development: Intel Core i3-2100, 2C/4T, 3.10GHz, 500 GB SATA (no RAID), 4GB DDR3.
    UPDATE 2: mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!]
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
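
    For what it's worth, the mysqltuner output above already flags the two starkest numbers on production: an 8.0M InnoDB buffer pool against ~492M of InnoDB data, and a 0% table cache hit rate. A hedged my.cnf starting point drawn from those recommendations (values to tune, not prescriptions):

        [mysqld]
        innodb_buffer_pool_size = 512M   # pool (8.0M) is dwarfed by InnoDB data (491.9M)
        table_cache             = 512    # hit rate is 0% at the default 64
        query_cache_size        = 32M    # tuner reports ~4K prunes per day
        join_buffer_size        = 256K   # or index the joins flagged as unindexed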

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    We have a Ubuntu 10.04 Server that has a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both a group called Staff - UserA creates a file that is readable/writeable by the group (ie. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" from a file not created by them? Edit: The Samba smb.conf is as follows: The share is defined in the smb.conf as: [global] log file = /var/log/samba/log.%m passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . obey pam restrictions = yes map to guest = bad user encrypt passwords = true passwd program = /usr/bin/passwd %u passdb backend = tdbsam dns proxy = no netbios name = ubsrv server string = ubsrv unix password sync = yes os level = 20 syslog = 0 usershare allow guests = yes panic action = /usr/share/samba/panic-action %d max log size = 1000 pam password change = yes workgroup = workgroup [Projects] valid users = @Staff writeable = yes user = @Staff create mode = 0777 path = /srv/samba/Projects directory mode = 0777 store dos attributes = Yes The folder itself looks like this: ls -l /srv/samba/ drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects Thanks in advance, Matt

    Read the article

  • R: Creating Custom Shapes with ggplot

    - by Brandon Bertelsen
    Full Disclosure: This was also posted to the ggplot2 mailing list. (I'll update if I receive a response.) I'm a bit lost on this one; I've tried messing around with geom_polygon, but successive attempts seem worse than the previous. The image that I'm trying to recreate is a tiered pyramid; the colours are unimportant, but the positions are. In addition to creating this, I also need to be able to label each element with text. At this point, I'm not expecting a solution (although that would be ideal), but pointers or similar examples would be immensely helpful. One option that I played with was hacking scale_shape and using 1,1 as coords, but I was stuck on being able to add labels. The reason I'm doing this with ggplot is that I'm generating scorecards on a company-by-company basis. This is only one plot in a 4 x 10 grid of other plots (using pushViewport). Note: The top tier of the pyramid could also be a rectangle of similar size.
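
    A sketch of the geom_polygon route (all coordinates here are invented): one data-frame row per vertex, one group per band, and annotate() for the labels.

        library(ggplot2)

        # Three stacked bands: a wide base, a middle band, and a rectangular top tier.
        tiers <- data.frame(
          x     = c(0, 10, 9, 1,   1, 9, 8, 2,   2, 8, 8, 2),
          y     = c(0,  0, 2, 2,   2, 2, 4, 4,   4, 4, 6, 6),
          group = rep(1:3, each = 4)
        )

        ggplot(tiers, aes(x, y, group = group, fill = factor(group))) +
          geom_polygon(colour = "black") +
          annotate("text", x = 5, y = 1, label = "base") +
          annotate("text", x = 5, y = 3, label = "middle") +
          annotate("text", x = 5, y = 5, label = "top") +
          theme(legend.position = "none")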

    Read the article

  • Migrate users from one Active Directory domain to another?

    - by Matt
    I work for a company that hosts desktops for a number of different companies. At the moment, all the clients access a single domain controller called HOSTING. Under that are groups for each company. Each of the hosting servers exists on the same network and is therefore potentially browseable by other terminal servers. This has raised some security issues, and I've found it a little tricky to manage the security. As well, it's possible to see who the other hosted companies are, even though other users cannot see their data. What I'd like to do is isolate each client's terminal server(s) into their own VLAN. In addition, I'm thinking that each TS would have its own DC, which could just run on the TS for that company. Overhead for a DC is fairly minimal. This would isolate users on that TS from seeing the other companies completely. Firstly, does this sound like a sensible plan? Secondly, if it is sensible, how would I go about pulling the accounts from the HOSTING domain to a new domain? Ideally, without the need for users to change their passwords.

    Read the article

  • how to organize javascripts using rails and jquery

    - by VP
    Hi, I'm working on a big, rich Rails web application using tons of JavaScript. I would like to know if anybody has a tip for organizing the JavaScript. Today I'm generating a new file named controller.js and adding it to my views using content_for. The problem is, some files are becoming big, and sometimes I need a function from one controller in another, so in the end I add products.js to a details controller just to keep DRY. Is that solution good? Any other tips? I think the same pattern could be applied to CSS files as well?
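
    A sketch of one common layout pattern (stock Rails helpers): one shared file always loads, and individual views opt into extras with content_for, which avoids wiring products.js into unrelated controllers.

        <%# app/views/layouts/application.html.erb %>
        <%= javascript_include_tag 'application' %>
        <%= yield :scripts %>

        <%# any view that needs the product helpers: %>
        <% content_for :scripts do %>
          <%= javascript_include_tag 'products' %>
        <% end %>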

    Read the article

  • How to debug a corrupt pdf file?

    - by Joelio
    Hi, I'm generating PDF files using a Ruby library called "prawn". I have one particular file that seems to be considered "corrupt" by Adobe Reader, although it shows up fine in both Preview and Adobe Reader. It gives errors like: Sometimes I get: "Could not find the XObject named '%s'." Other times I get: "Could not find the XObject named "Im4"." Then always I get: "An error exists on this page. Acrobat may not display the page correctly. Please contact the person who created the PDF document to correct the problem." Is there a way to open a PDF with some tool and have it tell you what is technically wrong with it? I'm sure I could figure it out quickly with something like this... thanks Joel
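
    One tool that does roughly this (availability varies by platform) is qpdf, whose check mode walks the file's structure and reports what is broken and where:

        $ qpdf --check broken.pdf
        # prints the PDF version and a list of syntax/structure errors, if any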

    Read the article

  • pyplot: really slow creating heatmaps

    - by cvondrick
    I have a loop that executes the body about 200 times. In each loop iteration it does a sophisticated calculation, and then, for debugging, I wish to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm. My code is along the lines of: import numpy import matplotlib.pyplot as plt for i in range(200): matrix = complex_calculation() plt.set_cmap("gray") plt.imshow(matrix) plt.savefig("frame{0}.png".format(i)) The matrix, from numpy, is not huge --- 300 x 600 of doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower. Surely I must be abusing pyplot. (Matlab can do this, no problem.) How do I speed this up?
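
    A sketch of the standard fix: create the figure and the image artist once, then update the pixel data each iteration instead of rebuilding the whole plot (complex_calculation is the placeholder from the question).

        import numpy as np
        import matplotlib
        matplotlib.use("Agg")   # no GUI work when only writing files
        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()
        image = ax.imshow(np.zeros((300, 600)), cmap="gray")

        for i in range(200):
            matrix = complex_calculation()
            image.set_data(matrix)
            image.autoscale()   # rescale colour limits to the new data
            fig.savefig("frame{0}.png".format(i))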

    Read the article

  • Dynamic Spacer in ReportLab

    - by ptikobj
    I'm automatically generating a PDF file with Platypus that has dynamic content. This means the length of the text content (which sits directly at the bottom of the PDF file) may vary, so a page break may occur when the content is too long. This is because I use a "static" spacer: s = Spacer(width=0, height=23.5*cm) As I always want to have only one page, I somehow need to dynamically set the height of the Spacer so that it takes the "rest" of the available space on the page as its height. Now, how do I get the "rest" of the height that is left on my page?
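
    One approach (a sketch, not a documented ReportLab feature): a custom Flowable whose wrap() claims exactly the space still available in the frame, so whatever follows it is pushed to the bottom of the page without overflowing.

        from reportlab.platypus import Flowable

        class ElasticSpacer(Flowable):
            """Blank flowable that absorbs whatever height remains in the frame."""
            def wrap(self, availWidth, availHeight):
                self.width, self.height = availWidth, availHeight
                return availWidth, availHeight   # fits exactly, so no page break

            def draw(self):
                pass   # draws nothing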

    Read the article
