Search Results

Search found 12019 results on 481 pages for 'stop execution'.

Page 49/481 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Stop a user directly accessing a page

    - by James Jeffery
    I have a page called create.php. It receives POST variables and sets up accounts. I don't want that page to be accessible to a user directly. What's the conventional way of achieving this? I think I remember reading something about including a page that defines a CONSTANT; if the CONSTANT is not present, the page has been accessed directly. I think WordPress does something similar.

    Read the article

  • Z600 Workstation ACPI Fan Noise

    - by dpb
    Hi -- I have an HP z600 workstation that has the FAN running full when idle. In fact, after the boot, the fan never slows down or varies. I looked in dmesg, and noticed this: [ 1.516778] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.516781] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.516786] ACPI: Marking method _OSC as Serialized because of AE_ALREADY_EXISTS error [ 1.519868] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.519872] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.624638] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.624642] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.624726] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.624729] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.624802] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.624805] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.624895] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.624898] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.624977] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.624981] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.625070] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.625074] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS [ 1.625153] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS [ 1.625157] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS Anyone know what could be done to fix this?

    Read the article

  • Child sProc cannot reference a Local temp table created in parent sProc

    - by John Galt
    On our production SQL2000 instance, we have a database with hundreds of stored procedures, many of which use a technique of creating a #TEMP table "early" on in the code; various inner stored procedures then get EXECUTEd by this parent sProc. In SQL2000, the inner or "child" sProcs have no problem INSERTing into #TEMP or SELECTing data from #TEMP. In short, I assume they can all refer to this #TEMP because they use the same connection. In testing with SQL2008, I find two manifestations of different behavior. First, at design time, the new "intellisense" feature complains, when editing the child sProc in Management Studio, that #TEMP is an "invalid object name". But worse is that at execution time, the invoked parent sProc fails inside the nested child sProc. Someone suggested that the solution is to change to ##TEMP, which is apparently a global temporary table that can be referenced from different connections. That seems too drastic a proposal, both because of the amount of work to chase down all the problem spots and because of the possible/probable nasty effects when these sProcs are invoked from web applications (i.e. multiuser issues). Is this indeed a change in behavior in SQL2005 or SQL2008 regarding #TEMP (local temp tables)? We skipped 2005, but I'd like to learn more precisely why this is occurring before I go off and try to hack out the needed fixes. Thanks.
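    (A minimal repro sketch, using hypothetical procedure names, can help separate the design-time Intellisense complaint from a genuine runtime change; in general a #TEMP table created by a parent procedure should still be visible to child procedures it EXECs on the same connection in SQL Server 2008, so if the batch below fails on your server the problem likely lies elsewhere, e.g. in how the child is invoked.)

    -- Hypothetical repro: parent creates #TEMP, child INSERTs into and SELECTs from it.
    CREATE PROCEDURE dbo.ChildProc
    AS
    BEGIN
        -- Intellisense may flag #TEMP here as an "invalid object name";
        -- the reference is resolved at execution time.
        INSERT INTO #TEMP (ID) VALUES (2);
        SELECT ID FROM #TEMP;
    END
    GO

    CREATE PROCEDURE dbo.ParentProc
    AS
    BEGIN
        CREATE TABLE #TEMP (ID int NOT NULL);
        INSERT INTO #TEMP (ID) VALUES (1);
        EXEC dbo.ChildProc;  -- child sees the parent's #TEMP
    END
    GO

    EXEC dbo.ParentProc;     -- expected result set: rows 1 and 2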

    Read the article

  • Presenting Designing an SSIS Execution Framework to Steel City SQL 18 Jan 2011!

    - by andyleonard
    I'm honored to present Designing an SSIS Execution Framework (Level 300) to Steel City SQL - the Birmingham Alabama chapter of PASS - on 18 Jan 2011! The meeting starts at 6:00 PM 18 Jan 2011 and will be held at: New Horizons Computer Learning Center 601 Beacon Pkwy. West Suite 106 Birmingham, Alabama, 35209 ( Map for directions ) Abstract In this “demo-tastic” presentation, SSIS trainer, author, and consultant Andy Leonard explains the what, why, and how of an SSIS framework that delivers metadata-driven...(read more)

    Read the article

  • What good books are out there on program execution models? [on hold]

    - by murungu
    Can anyone out there name a few books that address the topic of program execution models? I want a book that can answer questions such as... What is the difference between interpreted and compiled languages, and what are the performance consequences at runtime? What is the difference between lazy evaluation, eager evaluation and short-circuit evaluation? Why would one choose to use one evaluation strategy over another? How do you simulate lazy evaluation in a language that favours eager evaluation?

    Read the article

  • A new critical flaw in GnuTLS allows malicious code execution, patches must be applied urgently

    A new critical flaw in GnuTLS allows malicious code execution, patches must be applied urgently. After OpenSSL's Heartbleed flaw, the Internet security ecosystem has been hit once again by another major flaw in an open source encryption tool. Researchers at Codenomicon, the firm behind the identification of the OpenSSL flaw, have discovered a critical vulnerability in GnuTLS, a popular library for handling...

    Read the article

  • Firefox 30 released as stable and disables plugin execution by default, the Android version also available

    Firefox 30 released as stable and disables plugin execution by default, the Android version also available. Mozilla is making a new version of its Firefox browser available to users. Unlike version 29, which shipped with a batch of new features, notably its new Australis user interface, Firefox 30 is a minor update. Just like Google with Chrome, Mozilla is also distancing itself from plugins, which represent (those...

    Read the article

  • Chrome 31: Web payments, execution of native applications and security improvements, the browser is released as stable

    Chrome 31: Web payments, execution of native applications and security improvements, the browser is released as stable. Chrome 31 is available as a stable release for Windows, Mac and Linux. This new build of Google's browser mainly touts its new feature that lets users fill in online forms with as little effort as possible. Developers can now programmatically access remote data stored locally on the client side, thanks to...

    Read the article

  • SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo

    - by pinaldave
    Earlier I wrote a blog post about how to install DQS in SQL Server 2012. Today I decided to write the second part of this series, where I explain how to use DQS; however, as soon as I started the DQS client, I encountered an error that would not let me get past it and connect with the DQS client. It was a bit strange to me, as everything was working very well when I left it last time. The error was very long, but here are the first few words of it. Cannot connect to server. A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions”: System.Data.SqlClient.SqlException (0x80131904): A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessionPhaseTwo”: The error continues – here is a quick screenshot of it. As my initial attempts could not fix the error, I decided to search online, and I finally found a wonderful solution on the Microsoft site. The error happened because of the latest update I had installed for .NET Framework 4. There was a mismatch between the Module Version IDs (MVIDs) of the SQL Common Language Runtime (SQLCLR) assemblies in the SQL Server 2012 database and the Global Assembly Cache (GAC). This mismatch has to be resolved for DQS to work properly. The workaround is specified there in detail; scroll to subtopic 4.23, Some .NET Framework 4 Updates Might Cause DQS to Fail. The script is very straightforward. Here are a few things not to miss while applying the workaround. Make sure the DQS client is properly closed. The NETAssemblies path is based on your OS. NETAssemblies for a 64-bit machine – which is what my machine is – is “c:\windows\Microsoft.NET\Framework64\v4.0.30319″. If you have Windows installed on any drive other than c:\windows, do not forget to change that in the above path. Additionally, if you have the 32-bit version installed on c:\windows, you should use the path ”c:\windows\Microsoft.NET\Framework\v4.0.30319″. Make sure that you execute the script specified in section 4.23 of that article in the database DQS_MAIN. Do not run this in the master database, as that will not fix your error. Do not forget to restart your SQL services once the above script has been executed. Once you open the client, it will work this time. Here is the script, which I have modified a bit from the original. I strongly suggest that you use the original script mentioned in section 4.23; however, this one is customized for my own machine.
    /* Original source: http://bit.ly/PXX4NE (Technet)
       Modifications:
       -- Added Database context
       -- Added environment variable @NETAssemblies
       -- Main script modified to use @NETAssemblies */
    USE DQS_MAIN
    GO
    BEGIN
        -- Set your environment variable
        -- assumption - Windows is installed in c:\windows folder
        DECLARE @NETAssemblies NVARCHAR(200)
        -- For 64 bit uncomment following line
        SET @NETAssemblies = 'c:\windows\Microsoft.NET\Framework64\v4.0.30319\'
        -- For 32 bit uncomment following line
        -- SET @NETAssemblies = 'c:\windows\Microsoft.NET\Framework\v4.0.30319\'
        DECLARE @AssemblyName NVARCHAR(200),
                @RefreshCmd NVARCHAR(200),
                @ErrMsg NVARCHAR(200)
        DECLARE ASSEMBLY_CURSOR CURSOR FOR
            SELECT name AS NAME
            FROM sys.assemblies
            WHERE name NOT LIKE '%ssdqs%'
              AND name NOT LIKE '%microsoft.sqlserver.types%'
              AND name NOT LIKE '%practices%'
              AND name NOT LIKE '%office%'
              AND name NOT LIKE '%stdole%'
              AND name NOT LIKE '%Microsoft.Vbe.Interop%'
        OPEN ASSEMBLY_CURSOR
        FETCH NEXT FROM ASSEMBLY_CURSOR INTO @AssemblyName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            BEGIN TRY
                SET @RefreshCmd = 'ALTER ASSEMBLY [' + @AssemblyName + '] FROM ''' + @NETAssemblies + @AssemblyName + '.dll' + ''' WITH PERMISSION_SET = UNSAFE'
                EXEC sp_executesql @RefreshCmd
                PRINT 'Successfully upgraded assembly ''' + @AssemblyName + ''''
            END TRY
            BEGIN CATCH
                IF ERROR_NUMBER() != 6285
                BEGIN
                    SET @ErrMsg = ERROR_MESSAGE()
                    PRINT 'Failed refreshing assembly ' + @AssemblyName + '. Error message: ' + @ErrMsg
                END
            END CATCH
            FETCH NEXT FROM ASSEMBLY_CURSOR INTO @AssemblyName
        END
        CLOSE ASSEMBLY_CURSOR
        DEALLOCATE ASSEMBLY_CURSOR
    END
    GO
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to stop RAID5 array while it is shown to be busy?

    - by RCola
    I have a raid5 array and need to stop it, but while trying to stop it getting error. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> # mdadm --stop mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: No devices given. # mdadm --stop /dev/md0 mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: fail to stop array /dev/md0: Device or resource busy and # lsof | grep md0 md0_raid5 965 root cwd DIR 8,1 4096 2 / md0_raid5 965 root rtd DIR 8,1 4096 2 / md0_raid5 965 root txt unknown /proc/965/exe # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] # grep md0 /proc/mdstat md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] # grep md0 /proc/partitions 9 0 2120320 md0 While booting, md1 is mounted ok but md0 failed for some unknown reason # dmesg | grep md[0-9] [ 4.399658] raid5: allocated 3179kB for md1 [ 4.400432] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2 [ 4.400678] md1: detected capacity change from 0 to 2121793536 [ 4.403135] md1: unknown partition table [ 38.937932] Filesystem "md1": Disabling barriers, trial barrier write failed [ 38.941969] XFS mounting filesystem md1 [ 41.058808] Ending clean XFS mount for filesystem: md1 [ 46.325684] raid5: allocated 3179kB for md0 [ 46.327103] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2 [ 46.330620] md0: detected capacity change from 0 to 2171207680 [ 46.335598] md0: unknown partition table [ 46.410195] md: recovery of RAID array md0 [ 117.970104] md: md0: recovery done. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[0] sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU] md1 : active raid5 sdc2[0] sdf2[2] sde2[3](S) sdd2[1] 2072064 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging to one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble? Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this? Here is the /etc/hdparm.conf configuration /dev/sda { apm = 127 spindown_time = 120 } and the most relevant parts of smartctl --attributes /dev/sda: smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 001 001 000 Old_age Always - 185875 9 Power_On_Hours 0x0032 090 090 000 Old_age Always - 7820 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 109 193 Load_Cycle_Count 0x0032 118 118 000 Old_age Always - 246833 194 Temperature_Celsius 0x0022 107 098 000 Old_age Always - 36 As I generally prefer my drives to last more than a year, any advice is appreciated.

    Read the article

  • How do I stop or turn off the X server?

    - by Alex
    I'm trying to do this tutorial: http://wiki.accelereyes.com/wiki/index.php/Installing_CUDA_Under_Ubuntu_10.04 I need the command that would completely stop/turn off the X server. When I try to install the NVIDIA developer driver, I get a blue screen telling me Error: can't install with X server running, please turn it off (something like that). "sudo service gdm stop" worked at the time, I guess (it didn't give any errors), but the X server is still running. Is this the command I should be using? Thanks for any help!

    Read the article

  • Does having a Google "stop word" in a domain name have less SEO benefit than not having it?

    - by Dan
    Let me explain. Let's say the keyword I want to optimize for is "green giraffes", but the domain greengiraffes.com (singular, plural, no hyphen, hyphen, etc.) is not available. I know that the search results for "green giraffes" and "about green giraffes" are essentially the same because "about" is a "stop word". Does that therefore also mean that the domain name "aboutgreengiraffes.com" is as good as "greengiraffes.com" in terms of SEO value? Are all stop words equal in that regard, or is a shorter one (such as "e" or "z") better?

    Read the article

  • Is there a way to delay compilation of a stored procedure's execution plan?

    - by Ian Henry
    (At first glance this may look like a duplicate of http://stackoverflow.com/questions/421275 or http://stackoverflow.com/questions/414336, but my actual question is a bit different) Alright, this one's had me stumped for a few hours. My example here is ridiculously abstracted, so I doubt it will be possible to recreate locally, but it provides context for my question (Also, I'm running SQL Server 2005). I have a stored procedure with basically two steps, constructing a temp table, populating it with very few rows, and then querying a very large table joining against that temp table. It has multiple parameters, but the most relevant is a datetime "@MinDate." Essentially: create table #smallTable (ID int) insert into #smallTable select (a very small number of rows from some other table) select * from aGiantTable inner join #smallTable on #smallTable.ID = aGiantTable.ID inner join anotherTable on anotherTable.GiantID = aGiantTable.ID where aGiantTable.SomeDateField > @MinDate If I just execute this as a normal query, by declaring @MinDate as a local variable and running that, it produces an optimal execution plan that executes very quickly (first joins on #smallTable and then only considers a very small subset of rows from aGiantTable while doing other operations). It seems to realize that #smallTable is tiny, so it would be efficient to start with it. This is good. However, if I make that a stored procedure with @MinDate as a parameter, it produces a completely inefficient execution plan. (I am recompiling it each time, so it's not a bad cached plan...at least, I sure hope it's not) But here's where it gets weird. If I change the proc to the following: declare @LocalMinDate datetime set @LocalMinDate = @MinDate --where @MinDate is still a parameter create table #smallTable (ID int) insert into #smallTable select (a very small number of rows from some other table) select * from aGiantTable inner join #smallTable on #smallTable.ID = aGiantTable.ID inner join anotherTable on anotherTable.GiantID = aGiantTable.ID where aGiantTable.SomeDateField > @LocalMinDate Then it gives me the efficient plan! So my theory is this: when executing as a plain query (not as a stored procedure), it waits to construct the execution plan for the expensive query until the last minute, so the query optimizer knows that #smallTable is small and uses that information to give the efficient plan. But when executing as a stored procedure, it creates the entire execution plan at once, thus it can't use this bit of information to optimize the plan. But why does using the locally declared variables change this? Why does that delay the creation of the execution plan? Is that actually what's happening? If so, is there a way to force delayed compilation (if that indeed is what's going on here) even when not using local variables in this way? More generally, does anyone have sources on when the execution plan is created for each step of a stored procedure? Googling hasn't provided any helpful information, but I don't think I'm looking for the right thing. Or is my theory just completely unfounded? Edit: Since posting, I've learned of parameter sniffing, and I assume this is what's causing the execution plan to compile prematurely (unless stored procedures indeed compile all at once), so my question remains -- can you force the delay? Or disable the sniffing entirely? 
The question is academic, since I can force a more efficient plan by replacing the select * from aGiantTable with select * from (select * from aGiantTable where ID in (select ID from #smallTable)) as aGiantTable Or just sucking it up and masking the parameters, but still, this inconsistency has me pretty curious.
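    (For what it's worth, a common workaround for this kind of parameter sniffing, sketched below with the question's hypothetical object names, is to keep the parameter but request a statement-level recompile; on SQL Server 2005, OPTION (RECOMPILE) makes the optimizer build the plan for that statement at execution time, after #smallTable has been populated, much like the local-variable trick but more explicit.)

    -- Sketch (hypothetical names): force the expensive statement to compile
    -- at execution time, when #smallTable's cardinality is already known.
    CREATE PROCEDURE dbo.CountGiantRows
        @MinDate datetime
    AS
    BEGIN
        CREATE TABLE #smallTable (ID int);

        INSERT INTO #smallTable (ID)
        SELECT ID FROM dbo.SomeOtherTable;  -- a very small number of rows

        SELECT *
        FROM aGiantTable
        INNER JOIN #smallTable
            ON #smallTable.ID = aGiantTable.ID
        INNER JOIN anotherTable
            ON anotherTable.GiantID = aGiantTable.ID
        WHERE aGiantTable.SomeDateField > @MinDate
        OPTION (RECOMPILE);  -- alternatively: CREATE PROCEDURE ... WITH RECOMPILE
    END
    GO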

    Read the article

  • Is there a way to configure timeout for speculative execution in Hadoop?

    - by S.O.
    I have a Hadoop job with tasks that are expected to run for a significant length of time (a few minutes). However, Hadoop starts speculative execution too soon. I do not want to turn speculative execution off completely, but I want to increase the duration of time Hadoop waits before considering a job for speculative execution. Is there a config option to control this timeout? Thanks

    Read the article

  • Using IF in T-SQL weakens or breaks execution plan caching?

    - by AnthonyWJones
    It has been suggested to me that the use of IF statements in T-SQL batches is detrimental to performance. I'm trying to find some confirmation of this assertion. I'm using SQL Server 2005 and 2008. The assertion is that with the following batch:- IF @parameter = 0 BEGIN SELECT ... something END ELSE BEGIN SELECT ... something else END SQL Server cannot re-use the execution plan generated because the next execution may need a different branch. This implies that SQL Server will eliminate one branch entirely from the execution plan on the basis that, for the current execution, it can already determine which branch is needed. Is this really true? In addition, what happens in this case:- IF EXISTS (SELECT ....) BEGIN SELECT ... something END ELSE BEGIN SELECT ... something else END where it's not possible to determine in advance which branch will be executed?
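    (One mitigation that is often suggested for this pattern, sketched below with hypothetical object names, is to move each branch into its own stored procedure so that each branch compiles and caches a plan for the query it actually runs; statement-level OPTION (RECOMPILE) is the other common alternative.)

    -- Sketch: the wrapper keeps the IF, but each branch gets its own procedure
    -- and therefore its own cached execution plan.
    CREATE PROCEDURE dbo.GetOrders_Active
    AS
        SELECT OrderID, OrderDate FROM dbo.Orders WHERE IsActive = 1;  -- "something"
    GO

    CREATE PROCEDURE dbo.GetOrders_All
    AS
        SELECT OrderID, OrderDate FROM dbo.Orders;  -- "something else"
    GO

    CREATE PROCEDURE dbo.GetOrders
        @parameter int
    AS
        IF @parameter = 0
            EXEC dbo.GetOrders_Active;
        ELSE
            EXEC dbo.GetOrders_All;
    GO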

    Read the article

  • Int PK inner join Vs Guid PK inner Join on SQL Server. Execution plan.

    - by bigb
    I just did some testing of an int PK join vs a GUID PK join. The table structures and number of records look like this: Performance of CRUD operations using EF4 is pretty similar in both cases. As we know, an int PK performs better than a string one. So the SQL Server execution plans with INNER JOINs are pretty different. Here is an execution plan. As I understand it, according to the execution plan in the attached image, the int join has better performance because it takes fewer resources for the clustered index scan and it goes two ways, am I right? Maybe someone can explain this execution plan in more detail?
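    (If it helps to dig further, the two joins can be compared head-to-head by capturing I/O, timing and the actual plan in one batch; the table names below are hypothetical stand-ins for the int-keyed and GUID-keyed pairs in the screenshots. The wider 16-byte uniqueidentifier key generally means fewer rows per page and therefore more pages read during the clustered index scans.)

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SET STATISTICS XML ON;   -- returns the actual execution plan as XML

    -- Hypothetical int-keyed pair
    SELECT COUNT(*)
    FROM dbo.ParentInt AS p
    INNER JOIN dbo.ChildInt AS c ON c.ParentID = p.ID;

    -- Hypothetical GUID-keyed pair
    SELECT COUNT(*)
    FROM dbo.ParentGuid AS p
    INNER JOIN dbo.ChildGuid AS c ON c.ParentID = p.ID;

    SET STATISTICS XML OFF;
    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;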

    Read the article

  • How can I most accurately calculate the execution time of an ASP.NET page while also displaying it on the page?

    - by henningst
    I want to calculate the execution time of my ASP.NET pages and display it on the page. Currently I'm calculating the execution time using a System.Diagnostics.Stopwatch and then store the value in a log database. The stopwatch is started in OnInit and stopped in OnPreRenderComplete. This seems to be working quite fine, and it's giving a similar execution time as the one shown in the page trace. The problem now is that I'm not able to display the execution time on the page because the stopwatch is stopped too late in the life cycle. What is the best way to do this?

    Read the article

  • Where can I go to learn how to read a sql server execution plan?

    - by Chris Lively
    I'm looking for resources that can teach me how to properly read a sql server execution plan. I'm a long time developer, with tons of sql server experience, but I've never really learned how to really understand what an execution plan is saying to me. I guess I'm looking for links, books, anything that can describe things like whether a clustered index scan is good or bad along with examples on how to fix issues.
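    (Alongside any book or article, it helps to have a quick way to produce plans to practice on; the session options below are standard SQL Server commands that return the estimated or actual plan, plus I/O and timing figures, for whatever query you run. The sample query is just a placeholder.)

    -- Estimated plan only: the query is compiled but not executed.
    SET SHOWPLAN_XML ON;
    GO
    SELECT TOP (10) name, object_id FROM sys.objects;
    GO
    SET SHOWPLAN_XML OFF;
    GO

    -- Actual plan plus runtime statistics: the query is executed.
    SET STATISTICS XML ON;
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT TOP (10) name, object_id FROM sys.objects;
    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    SET STATISTICS XML OFF;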

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • How do I convert some ugly inline javascript into a function?

    - by Taylor
    I've got a form with various inputs that by default have no value. When a user changes one or more of the inputs, all values, including the blank ones, are used in the URL GET string when submitted. So to clean it up I've got some javascript that removes the inputs before submission. It works well enough, but I was wondering how to put this in a JS function or tidy it up. It seems a bit messy to have it all clumped into an onclick, plus I'm going to be adding more, so there will be quite a few. Here's the relevant code. There are 3 separate lines for 3 separate inputs. The first part of the line has a value that refers to the input's ID ("mf","cf","bf","pf") and the second part of the line refers to the parent div ("dmf","dcf", etc). The first part is an example of the input structure... echo "<div id='dmf'><select id='mf' name='mFilter'>"; This part is the submit and JS... echo "<input type='submit' value='Apply' onclick='javascript: if (document.getElementById(\"mf\").value==\"\") { document.getElementById(\"dmf\").innerHTML=\"\"; } if (document.getElementById(\"cf\").value==\"\") { document.getElementById(\"dcf\").innerHTML=\"\"; } if (document.getElementById(\"bf\").value==\"\") { document.getElementById(\"dbf\").innerHTML=\"\"; } if (document.getElementById(\"pf\").value==\"\") { document.getElementById(\"dpf\").innerHTML=\"\"; } ' />"; I have pretty much zero javascript knowledge, so help turning this into a neater function or similar would be much appreciated.

    Read the article

  • How do I copy a python function to a remote machine and then execute it?

    - by Hugh
    I'm trying to create a construct in Python 3 that will allow me to easily execute a function on a remote machine. Assuming I've already got a python tcp server that will run the functions it receives, running on the remote server, I'm currently looking at using a decorator like @execute_on(address, port) This would create the necessary context required to execute the function it is decorating and then send the function and context to the tcp server on the remote machine, which then executes it. Firstly, is this somewhat sane? And if not could you recommend a better approach? I've done some googling but haven't found anything that meets these needs. I've got a quick and dirty implementation for the tcp server and client so fairly sure that'll work. I can get a string representation the function (e.g. func) being passed to the decorator by import inspect string = inspect.getsource(func) which can then be sent to the server where it can be executed. The problem is, how do I get all of the context information that the function requires to execute? For example, if func is defined as follows, import MyModule def func(): result = MyModule.my_func() MyModule will need to be available to func either in the global context or funcs local context on the remote server. In this case that's relatively trivial but it can get so much more complicated depending on when and how import statements are used. Is there an easy and elegant way to do this in Python? The best I've come up with at the moment is using the ast library to pull out all import statements, using the inspect module to get string representations of those modules and then reconstructing the entire context on the remote server. Not particularly elegant and I can see lots of room for error. Thanks for your time

    Read the article

  • Why does the interpreted order seem different from what I expect?

    - by inspectorG4dget
    I have a problem that I have not faced before: It seems that the order of interpretation in my program is somehow different from what I expect. I have written a small Twitter client. It takes a few seconds for my program to actually post a tweet after I click the "GO" button (which can also be activated by hitting ENTER on the keyboard). I don't want to click multiple times within this time period thinking that I hadn't clicked it the first time. Therefore, when the button is clicked, I would like the label text to display something that tells me that the button has been clicked. I have implemented this message by altering the label text before I send the tweet across. However, for some reason, the message does not display until the tweet has been attempted. But since I have a confirmation message after the tweet, I never get to see this message and my original problem goes unsolved. I would really appreciate any help. Here is the relevant code: class SimpleTextBoxForm(Form): def init(self): # set window properties self.Text = "Tweeter" self.Width = 235 self.Height = 250 #tweet away self.label = Label() self.label.Text = "Tweet Away..." self.label.Location = Point(10, 10) self.label.Height = 25 self.label.Width = 200 #get the tweet self.tweetBox = TextBox() self.tweetBox.Location = Point(10, 45) self.tweetBox.Width = 200 self.tweetBox.Height = 60 self.tweetBox.Multiline = True self.tweetBox.WordWrap = True self.tweetBox.MaxLength = 140; #ask for the login ID self.askLogin = Label() self.askLogin.Text = "Login:" self.askLogin.Location = Point(10, 120) self.askLogin.Height = 20 self.askLogin.Width = 60 self.login = TextBox() self.login.Text= "" self.login.Location = Point(80, 120) self.login.Height = 40 self.login.Width = 100 #ask for the password self.askPass = Label() self.askPass.Text = "Password:" self.askPass.Location = Point(10, 150) self.askPass.Height = 20 self.askPass.Width = 60 # display password box with character hiding self.password = TextBox() self.password.Location = Point(80, 150) self.password.PasswordChar = "x" self.password.Height = 40 self.password.Width = 100 #submit button self.button1 = Button() self.button1.Text = 'Tweet' self.button1.Location = Point(10, 180) self.button1.Click += self.update self.AcceptButton = self.button1 #pack all the elements of the form self.Controls.Add(self.label) self.Controls.Add(self.tweetBox) self.Controls.Add(self.askLogin) self.Controls.Add(self.login) self.Controls.Add(self.askPass) self.Controls.Add(self.password) self.Controls.Add(self.button1) def update(self, sender, event): if not self.password.Text: self.label.Text = "You forgot to enter your password..." else: self.tweet(self.tweetBox.Text, self.login.Text, self.password.Text) def tweet(self, msg, login, password): self.label.Text = "Attempting Tweet..." # this should be executed before sending the tweet is attempted. But this seems to be executed only after the try block try: success = 'Tweet successfully completed... yay!\n' + 'At: ' + time.asctime().split()[3] ServicePointManager.Expect100Continue = False Twitter().UpdateAsXML(login, password, msg) except: error = 'Unhandled Exception. Tweet unsuccessful' self.label.Text = error else: self.label.Text = success self.tweetBox.Text = ""

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >