Search Results

Search found 13969 results on 559 pages for 'word count'.


  • Algorithm for detecting windows in a room

    - by user2733436
    I am dealing with the following problem and I was looking to write pseudo-code for an algorithm that can be generic for such a problem. Here is what I have come up with so far. STEP 1: In this step I try to get the robot, wherever it may be placed, to the top-left corner. Turn left - if no window or wall is detected, keep going forward 1 unit; if a window or wall is detected, turn right - if no window or wall is detected, keep going forward; if a window or wall is detected, the top-left corner has been reached. STEP 2 (we only start counting windows after this stage, to avoid miscounting): I declare a variable called turns to help keep track of whether the robot has gone around the entire room. turns = 4; we are now facing north and placed in the top-left corner. while(turns > 0){ if window or wall detected (if window, count++) turn right; turns--; while(detection != wall || detection != window){ move 1 unit forward; turn left; (if window, count++) turn right; } } I believe that in doing so the robot will go around the entire room and count the windows, and it will stop once it has gone around the whole room as turns is decremented. I don't feel this is the best solution and would appreciate suggestions on how I can improve my pseudo-code. I am not looking for any code, just an algorithm for solving such a problem, which is why I have not posted this on Stack Overflow. I apologize if my pseudo-code is poorly written; please make suggestions on how I can improve it, as I am new to this. Thanks.
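
    As a rough illustration of the perimeter walk described above, here is a minimal sketch in Python. The robot API (detect(), forward(), turn_left(), turn_right()) is entirely hypothetical and not part of the question; detect() is assumed to report what lies directly ahead as "wall", "window", or "open".

        # Hypothetical robot API, assumed for illustration only:
        #   robot.detect()  -> "wall" | "window" | "open" (what lies directly ahead)
        #   robot.forward() -> move one unit; robot.turn_left() / robot.turn_right()

        def find_top_left(robot):
            """Step 1: drive the robot into the top-left corner."""
            robot.turn_left()
            while robot.detect() == "open":
                robot.forward()
            robot.turn_right()
            while robot.detect() == "open":
                robot.forward()
            # Blocked on two perpendicular sides: the corner has been reached.

        def count_windows(robot):
            """Step 2: walk the perimeter once (four corners), counting windows."""
            find_top_left(robot)
            robot.turn_right()          # face along the wall, keeping it on the left
            windows = 0
            corners_remaining = 4       # a full lap means turning right four times
            while corners_remaining > 0:
                robot.turn_left()       # peek at the wall segment on the left
                if robot.detect() == "window":
                    windows += 1
                robot.turn_right()      # face forward again
                if robot.detect() in ("wall", "window"):
                    robot.turn_right()  # blocked ahead: turn the corner
                    corners_remaining -= 1
                else:
                    robot.forward()
            return windows

    Like the pseudo-code above, this counts a window once per unit of wall it spans, so windows wider than one unit would need extra bookkeeping.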

    Read the article

  • Prevent Click Fraud in Advertisement system with PHP and Javascript

    - by CodeDevelopr
    I would like to build an Advertising project with PHP, MySQL, and Javascript. I am talking about something like Google AdSense, BuySellAds.com, or any other advertising platform. My question is mainly: what do I need to look out for to prevent people cheating the system, and what other issues may I encounter? My design concept: an Advertisement is a record in the Database. When a page is loaded, it uses Javascript to call my server, which in turn uses a PHP script to query the Database and get a random Advertisement (it may do more, like get an ad based on demographics or other criteria as well). The PHP script will then return the Advertisement to the server/website that is calling it and show it on the page as an Image that has a special tracking link. I will need to count all impressions (when the Advertisement is shown on the page), count all clicks on the Advertisement link, and count all Unique clicks on the Advertisement link. My question is purely about the querying and displaying of the Advertisement and nothing to do with the administration side. If there is ever money involved with my Advertisement buying/selling of ad space, then the stats need to be accurate and people must not be able to easily cheat the system. Is tracking the IP address really the only way to try to prevent click fraud? I am hoping someone with some experience can clarify whether I am on the right track, as well as give me any advice, tips, or anything else I should know about doing something like this?
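
    To make the impression/click/unique-click bookkeeping concrete, here is a rough Python sketch (by no means a complete click-fraud defence). The in-memory dictionaries, the 24-hour uniqueness window, and the IP + user-agent fingerprint are illustrative assumptions only; a real system would persist the counters in MySQL and combine more signals than the IP address alone.

        import hashlib
        import time

        # Hypothetical in-memory counters; a real system would use MySQL/Redis.
        impressions = {}    # ad_id -> impression count
        clicks = {}         # ad_id -> raw click count
        unique_clicks = {}  # ad_id -> de-duplicated click count
        last_seen = {}      # (ad_id, fingerprint) -> timestamp of last counted click

        UNIQUE_WINDOW = 24 * 3600  # repeat clicks within 24h are not counted as unique

        def fingerprint(ip, user_agent):
            """Coarse visitor fingerprint; IPs rotate easily, so combine signals."""
            return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

        def record_impression(ad_id):
            impressions[ad_id] = impressions.get(ad_id, 0) + 1

        def record_click(ad_id, ip, user_agent):
            clicks[ad_id] = clicks.get(ad_id, 0) + 1
            key = (ad_id, fingerprint(ip, user_agent))
            now = time.time()
            previous = last_seen.get(key)
            if previous is None or now - previous > UNIQUE_WINDOW:
                unique_clicks[ad_id] = unique_clicks.get(ad_id, 0) + 1
            last_seen[key] = now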

    Read the article

  • Help with converting an XML into a 2D level (Actionscript 3.0)

    - by inzombiak
    I'm making a little platformer and wanted to use Ogmo to create my level. I've gotten everything to work except the level that my code generates is not the same as what I see in Ogmo. I've checked the array and it fits with the level in Ogmo, but when I loop through it with my code I get the wrong thing. I've included my code for creating the level as well as an image of what I get and what I'm supposed to get. EDIT: I tried to add it, but I couldn't get it to display properly Also, if any of you know of better level editors please let me know. xmlLoader.addEventListener(Event.COMPLETE, LoadXML); xmlLoader.load(new URLRequest("Level1.oel")); function LoadXML(e:Event):void { levelXML = new XML(e.target.data); xmlFilter = levelXML.* for each (var levelTest:XML in levelXML.*) { crack = levelTest; } levelArray = crack.split(''); trace(levelArray); count = 0; for(i = 0; i <= 23; i++) { for(j = 0; j <= 35; j++) { if(levelArray[i*36+j] == 1) { block = new Platform; s.addChild(block); block.x = j*20; block.y = i*20; count++; trace(i); trace(block.x); trace(j); trace(block.y); } } } trace(count);
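
    For reference, the row/column arithmetic the nested loops rely on can be sketched in Python as below. The 36x24 grid and 20 px tile size are taken from the loops in the question; the sample level string is made up. Two common gotchas with a text node read from an .oel file are leftover newlines/whitespace (strip them before indexing) and the fact that splitting a string yields characters, so in this sketch the comparison is against the character '1'.

        # 36 columns by 24 rows, 20 px per tile (sizes from the question's loops).
        ROWS, COLS, TILE = 24, 36, 20

        # Made-up level data: a row of platforms at the top and at the bottom.
        level_text = "1" * COLS + "\n" + ("0" * COLS + "\n") * (ROWS - 2) + "1" * COLS

        cells = [c for c in level_text if c in "01"]    # drop newlines and spaces
        blocks = []
        for i in range(ROWS):
            for j in range(COLS):
                if cells[i * COLS + j] == "1":          # compare against the character '1'
                    blocks.append((j * TILE, i * TILE)) # (x, y) of one platform tile

        print(len(blocks), "platform tiles")            # 72 for this sample level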

    Read the article

  • How do I draw a dotted or dashed line?

    - by Gagege
    I'm trying to draw a dashed or dotted line by placing individual segments (dashes) along a path and then separating them. The only algorithm I could come up with for this gave me a dash length that was variable based on the angle of the line. Like this: private function createDashedLine(fromX:Float, fromY:Float, toX:Float, toY:Float):Sprite { var line = new Sprite(); var currentX = fromX; var currentY = fromY; var addX = (toX - fromX) * 0.0075; var addY = (toY - fromY) * 0.0075; line.graphics.lineStyle(1, 0xFFFFFF); var count = 0; // while line is not complete while (!lineAtDestination(fromX, fromY, toX, toY, currentX, currentY)) { /// move line draw cursor to beginning of next dash line.graphics.moveTo(currentX, currentY); // if dash is even if (count % 2 == 0) { // draw the dash line.graphics.lineTo(currentX + addX, currentY + addY); } // add next dash's length to current cursor position currentX += addX; currentY += addY; count++; } return line; } This just happens to be written in Haxe, but the solution should be language neutral. What I would like is for the dash length to be the same no matter what angle the line is at. As is, it's just adding 0.75% of the line's x and y extents to the current position, so if the line is at a 45 degree angle you get pretty much a solid line. If the line is at something shallow like 85 degrees then you get a nice looking dashed line. So, the dash length is variable, and I don't want that. How would I make a function that I can pass a "dash length" into and get that length of dash, no matter what the angle is? If you need to completely disregard my code, be my guest. I'm sure there's a better solution.
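
    One language-neutral way to get a constant dash length is to normalise the direction vector once and then step along it by a fixed distance. Here is a minimal Python sketch of that idea; it returns segment endpoints instead of drawing, since the Sprite/graphics API from the question is not assumed here.

        import math

        def dashed_line(from_x, from_y, to_x, to_y, dash_len=4.0, gap_len=4.0):
            """Return (x1, y1, x2, y2) dash segments with a constant dash length."""
            dx, dy = to_x - from_x, to_y - from_y
            length = math.hypot(dx, dy)
            if length == 0:
                return []
            ux, uy = dx / length, dy / length      # unit vector along the line
            segments = []
            pos = 0.0
            while pos < length:
                end = min(pos + dash_len, length)  # clamp the final dash to the endpoint
                segments.append((from_x + ux * pos, from_y + uy * pos,
                                 from_x + ux * end, from_y + uy * end))
                pos = end + gap_len                # skip the gap before the next dash
            return segments

        # A 45-degree line and a shallow line now get the same dash length.
        print(len(dashed_line(0, 0, 100, 100)), len(dashed_line(0, 0, 100, 10)))

    The same idea ports to the Haxe version: divide (toX - fromX, toY - fromY) by the line's length once, then advance currentX/currentY by dashLength times that unit vector instead of by a fixed fraction of the whole line.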

    Read the article

  • T-SQL select where and group by date

    - by bconlon
    T-SQL has never been my favorite language, but I need to use it on a fairly regular basis and every time I seem to Google the same things. So if I add it here, it might help others with the same issues, but it will also save me time later as I will know where to look for the answers!! 1. How do I SELECT FROM WHERE to filter on a DateTime column? As it happens this is easy but I always forget. You just put the DATE value in single quotes and in standard format: SELECT StartDate FROM Customer WHERE StartDate >= '2011-01-01' ORDER BY StartDate 2. How do I then GROUP BY and get a count by StartDate? Bit trickier, but you can use the built in DATEADD and DATEDIFF to set the TIME part to midnight, allowing the GROUP BY to have a consistent value to work on: SELECT DATEADD (d, DATEDIFF(d, 0, StartDate),0) [Customer Creation Date], COUNT(*) [Number Of New Customers] FROM Customer WHERE StartDate >= '2011-01-01' GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate),0) ORDER BY [Customer Creation Date] Note: [Customer Creation Date] and [Number Of New Customers] column alias just provide more readable column headers. 3. Finally, how can you format the DATETIME to only show the DATE part (after all the TIME part is now always midnight)? The built in CONVERT function allows you to convert the DATETIME to a CHAR array using a specific format. The format is a bit arbitrary and needs looking up, but 101 is the U.S. standard mm/dd/yyyy, and 103 is the U.K. standard dd/mm/yyyy. SELECT CONVERT(CHAR(10), DATEADD(d, DATEDIFF(d, 0, StartDate),0), 103) [Customer Creation Date], COUNT(*) [Number Of New Customers] FROM Customer WHERE StartDate >= '2011-01-01' GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate),0) ORDER BY [Customer Creation Date]  #

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an indexBuffer, but now that I'm attempting to test my code, I'm getting the error: The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing. Which makes absolutely no sense to me, as my VertexDeclaration is: public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0), new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0) ); Which clearly contains the information. I am attempting to draw with the following lines: VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration, c.VertexList.Count, BufferUsage.WriteOnly); IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly); vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray()); ib.SetData<int>(c.IndexList.ToArray()); GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count/3); Where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (Contains the VertexDeclaration) and Game1.cs (Draw() is in Lines 230-250). Not much else of relevance to this problem anywhere else. Note that large commented sections are from old version of drawing.

    Read the article

  • Project Euler Problem 14

    - by MarkPearl
    The Problem The following iterative sequence is defined for the set of positive integers: n → n/2 (n is even); n → 3n + 1 (n is odd). Using the rule above and starting with 13, we generate the following sequence: 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1. It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1. Which starting number, under one million, produces the longest chain? NOTE: Once the chain starts the terms are allowed to go above one million. The Solution   public static long NextResultOdd(long n) { return (3 * n) + 1; } public static long NextResultEven(long n) { return n / 2; } public static long TraverseSequence(long n) { long x = n; long count = 1; while (x > 1) { if (x % 2 == 0) x = NextResultEven(x); else x = NextResultOdd(x); count++; } return count; } static void Main(string[] args) { long largest = 0; long pos = 0; for (long i = 1000000; i > 1; i--) { long temp = TraverseSequence(i); if (temp > largest) { largest = temp; pos = i; } } Console.WriteLine("{0} - {1}", pos, largest); Console.ReadLine(); }
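
    The brute-force loop above recomputes every chain from scratch. A common refinement is to cache chain lengths that have already been computed so later starting values can reuse them; here is a small Python sketch of that idea (this is not the author's C# code).

        # Cache of Collatz chain lengths: value -> number of terms down to 1.
        cache = {1: 1}

        def chain_length(n):
            """Chain length for n, reusing lengths of values already seen."""
            path = []
            while n not in cache:
                path.append(n)
                n = n // 2 if n % 2 == 0 else 3 * n + 1
            length = cache[n]
            for value in reversed(path):   # fill in the cache on the way back up
                length += 1
                cache[value] = length
            return length

        best = max(range(1, 1000000), key=chain_length)
        print(best, cache[best])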

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being transferred to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells which are 32x32 world units. Each cell has metadata that dictates the four 256x256 textures that are used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell and determine the four textures that are being used to splat the cell. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order matches other like cells and blending still happens in the right order. But even with this batching approach, it isn't uncommon when looking out across an area of open terrain to have a batch count between 1200 and 1700, depending upon how frequently the textures or texture blends differ between cells. We are only doing frustum culling presently. So, using texture splatting, are there other alternatives that can reduce the batch count and allow rendering to be extremely performance-friendly even under DirectX 9c? We considered using texture atlases, since we're targeting DirectX 9c & older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping results in poor quality textures from a distance. How have others batched together terrain geometry such that one could splat terrain using various textures, minimizing batch count and texture state switches so that rendering performance isn't negatively impacted?
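
    The batching rule described above (cells that share the same four splat textures are merged, regardless of splat order) amounts to bucketing cells by an order-independent texture key. A tiny Python sketch of that grouping, with entirely illustrative cell data:

        from collections import defaultdict

        # Each cell references four splat texture ids (illustrative data only).
        cells = [
            {"id": 0, "textures": (3, 7, 12, 20)},
            {"id": 1, "textures": (7, 3, 20, 12)},   # same set, different splat order
            {"id": 2, "textures": (1, 7, 12, 20)},
        ]

        batches = defaultdict(list)
        for cell in cells:
            # Order-independent key: cells whose splat order differs still share a
            # batch, provided their blend maps are rewritten to a canonical order.
            key = tuple(sorted(cell["textures"]))
            batches[key].append(cell["id"])

        print(len(batches), "draw batches for", len(cells), "cells")

    Each bucket then corresponds to one potential draw call; further reductions generally have to come from widening the key (e.g. the atlases the question already considers) or from coarser geometry chunks, which is essentially the trade-off being asked about.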

    Read the article

  • Google Analytics: Do unique events report as unique visits when triggered on pages other than your own domain?

    - by Jesse Gardner
    We just recently attached a SWF to our Brightcove video player to report various events back to Google Analytics. We're also tracking page views with a standard GA snippet on the page where the player is embedded. As I understand it, because a unique has already been recorded for the page, any event being triggered by the player gets associated with that unique. However, we allow people to embed the video player on other websites. All of the event data started pouring into the Events section as expected, but we noticed a dramatic uptick in unique visitors on the site (nearly double) while the pageview count stayed relatively unchanged. Disabling event tracking brought the traffic back down to average levels. I should also add that in the Pages section of Event tracking we're seeing URLs for other sites where the player has been embedded, but this data isn't showing up in the Content section. It seems counterintuitive, but does GA count an event fired as a unique visit even if it's triggered from somewhere other than your website? If so, is there any way to trigger an event in the Events section without it reporting to the unique visitor count?

    Read the article

  • AR242x / AR542x wireless card not working

    - by Pipan87
    My wifi worked perfect until I updated to the latest version of Ubuntu. Now I don't find any wireless connections at all. I have tried lots of guides on the internet but I can't get it to work. I did however start to work once after writing something I don't remember in Terminal, but after rebooting it stopped working again. Some info (don't know if you need more to help): 01:00.0 Ethernet controller: Atheros Communications AR8121/AR8113/AR8114 Gigabit or Fast Ethernet (rev b0) Subsystem: Acer Incorporated [ALI] Device 022c Flags: bus master, fast devsel, latency 0, IRQ 44 Memory at 55200000 (64-bit, non-prefetchable) [size=256K] I/O ports at 3000 [size=128] Capabilities: [40] Power Management version 2 Capabilities: [48] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [58] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [180] Device Serial Number ff-93-2e-de-00-23-8b-ff Kernel driver in use: ATL1E Kernel modules: atl1e 02:00.0 Ethernet controller: Atheros Communications Inc. AR242x / AR542x Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: Foxconn International, Inc. Device e00d Flags: bus master, fast devsel, latency 0, IRQ 18 Memory at 54100000 (64-bit, non-prefetchable) [size=64K] Capabilities: [40] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [60] Express Legacy Endpoint, MSI 00 Capabilities: [90] MSI-X: Enable- Count=1 Masked- Capabilities: [100] Advanced Error Reporting Capabilities: [140] Virtual Channel Kernel driver in use: ath5k Kernel modules: ath5k

    Read the article

  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows — just perform a quick search. You don’t even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you’d use on Google or Bing — but for your local files. Controlling the Indexer By default, the Windows search indexer watches everything under your user folder — that’s C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word “beluga,” you can perform a search for “beluga” and you’ll get a very quick response as Windows looks up the word in its search index. If Windows didn’t use an index, you’d have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word “beluga,” and moved on. Most people shouldn’t have to modify this indexing behavior. However, if you store your important files in other folders — maybe you store your important data a separate partition or drive, such as at D:\Data — you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won’t use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type “index”, and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window. Searching for Files You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for “Windows.” Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar. You can also initiate searches directly from Windows Explorer — that’s File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you’ve browsed to. For example, if you’re looking for a file related to Windows and know it’s somewhere in your Documents library, open the Documents library and search for Windows. Using Advanced Search Operators On Windows 7, you’ll notice that you can add “search filters” form the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results. 
If you’re a geek, you can use Windows’ Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for “windows,” but only bring up documents that don’t mention Microsoft? Search for “windows -microsoft”. Want to search for all pictures of penguins on your computer, whether they’re PNGs, JPEGs, or any other type of picture file? Search for “penguin kind:picture”. We’ve looked at Windows’ advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren’t available in the graphical interface. Creating Saved Searches Windows allows you to take searches you’ve made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let’s say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for “datecreated:this week”, then click the Save search button on the toolbar or ribbon. You’d have a new virtual folder you could quickly check to see your recent files. One of the best things about Windows search is that it’s available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.     

    Read the article

  • Desktop Fun: Merry Christmas Fonts

    - by Asian Angel
    Christmas will soon be here and there are lots of cards, invitations, gift tags, photos, and more to prepare beforehand. To help you get ready we have gathered together a great collection of fun holiday fonts to help turn those ordinary looking holiday items into extraordinary looking ones. Note: To manage the fonts on your Windows 7, Vista, & XP systems see our article here. The collection includes: Oldchristmas, Holly, Christmas Flakes (includes two font types), Frosty, Kingthings Christmas, Candy Time, BodieMF Holly, Snowfall, Snowflake Letters, Hultog Snowdrift, AlphaShapes Xmas Trees, Christmas Tree, PF Wreath, Snowy Caps, PF Snowman (includes three font types; shown in all capital letters here), BJF Holly Bells, Christbaumkugeln, Xmas Lights, XmasDings (includes 62 individual characters covering A – Z in capital letters, A – Z in lower case letters, and the numbers 0 – 9), and WWFlakes (includes 62 individual characters covering A – Z in capital letters, A – Z in lower case letters, and the numbers 0 – 9). Each font has its own download link in the original article. For Christmas Card creating fun and a great way to use your new fonts see our MS Word Christmas Card project series: Design and Print Your Own Christmas Cards in MS Word, Part 1, and Part 2: How to Print. Want more great ways to customize your computer? Then be certain to look through our Desktop Fun section.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • SQL SERVER – DBA or DBD? – Database Administrator or Database Developer

    - by pinaldave
    Earlier this month, I ran a poll on this blog where I asked a question – Are you a Database Administrator or a Database Developer? The term DBA (Database Administrator) is very common, but DBD (Database Developer) is not common at all. This made me wonder what the ratio between the two is. Here is the result of the poll: Database Administrator 36.6% (254 votes), Database Developer 63.4% (440 votes), Total Votes: 694. This is an open poll, so if you want you can still participate here: Vote your Voice – DBD or DBA? I think it is time the word DBD, for Database Developer, got a place in our dictionary. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, DBA, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • What would you do if you just had this code dumped in your lap?

    - by chickeninabiscuit
    Man, I just had this project given to me - expand on this they say. This is an example of ONE function: <?php //500+ lines of pure wonder. function page_content_vc($content) { global $_DBH, $_TPL, $_SET; $_SET['ignoreTimezone'] = true; lu_CheckUpdateLogin(); if($_SESSION['dash']['VC']['switch'] == 'unmanned' || $_SESSION['dash']['VC']['switch'] == 'touchscreen') { if($content['page_name'] != 'vc') { header('Location: /vc/'); die(); } } if($_GET['l']) { unset($_SESSION['dash']['VC']); if($loc_id = lu_GetFieldValue('ID', 'Location', $_GET['l'])) { if(lu_CheckPermissions('vc', $loc_id)) { $timezone = lu_GetFieldValue('Time Zone', 'Location', $loc_id, 'ID'); if(strlen($timezone) > 0) { $_SESSION['time_zone'] = $timezone; } $_SESSION['dash']['VC']['loc_ID'] = $loc_id; header('Location: /vc/'); die(); } } } if($_SESSION['dash']['VC']['loc_ID']) { $timezone = lu_GetFieldValue('Time Zone', 'Location', $_SESSION['dash']['VC']['loc_ID'], 'ID'); if(strlen($timezone) > 0) { $_SESSION['time_zone'] = $timezone; } $loc_id = $_SESSION['dash']['VC']['loc_ID']; $org_id = lu_GetFieldValue('record_ID', 'Location', $loc_id); $_TPL->assign('loc_id', $loc_id); $location_name = lu_GetFieldValue('Location Name', 'Location', $loc_id); $_TPL->assign('LocationName', $location_name); $customer_name = lu_GetFieldValue('Customer Name', 'Organisation', $org_id); $_TPL->assign('CustomerName', $customer_name); $enable_visitor_snap = lu_GetFieldValue('VisitorSnap', 'Location', $loc_id); $_TPL->assign('EnableVisitorSnap', $enable_visitor_snap); $lacps = explode("\n", lu_GetFieldValue('Location Access Control Point', 'Location', $loc_id)); array_walk($lacps, 'trim_value'); if(count($lacps) > 0) { if(count($lacps) == 1) { $_SESSION['dash']['VC']['lacp'] = $lacps[0]; } else { if($_GET['changeLACP'] && in_array($_GET['changeLACP'], $lacps)) { $_SESSION['dash']['VC']['lacp'] = $_GET['changeLACP']; header('Location: /vc/'); die(); } else if(!in_array($_SESSION['dash']['VC']['lacp'], $lacps)) { $_SESSION['dash']['VC']['lacp'] = $lacps[0]; } $_TPL->assign('LACP_array', $lacps); } $_TPL->assign('current_LACP', $_SESSION['dash']['VC']['lacp']); $_TPL->assign('showContractorSearch', true); /* if($contractorStaff = lu_GetTableRow('ContractorStaff', $org_id, 'record_ID', 'record_Inactive != "checked"')) { foreach($contractorStaff['rows'] as $contractor) { $lacp_rights = lu_OrganiseCustomDataFunctionMultiselect($contractor[lu_GetFieldName('Location Access Rights', 'ContractorStaff')]); if(in_array($_SESSION['dash']['VC']['lacp'], $lacp_rights)) { $_TPL->assign('showContractorSearch', true); } } } */ } $selectedOptions = explode(',', lu_GetFieldValue('Included Fields', 'Location', $_SESSION['dash']['VC']['loc_ID'])); $newOptions = array(); foreach($selectedOptions as $selOption) { $so_array = explode('|', $selOption, 2); if(count($so_array) > 1) { $newOptions[$so_array[0]] = $so_array[1]; } else { $newOptions[$so_array[0]] = "Both"; } } if($newOptions[lu_GetFieldName('Expected Length of Visit', 'Visitor')]) { $alert = false; if($visitors = lu_OrganiseVisitors( lu_GetTableRow('Visitor', 'checked', lu_GetFieldName('Checked In', 'Visitor'), lu_GetFieldName('Location for Visit', 'Visitor').'="'.$_SESSION['dash']['VC']['loc_ID'].'" AND '.lu_GetFieldName('Checked Out', 'Visitor').' 
!= "checked"'), false, true, true)) { foreach($visitors['rows'] as $key => $visitor) { if($visitor['expected'] && $visitor['expected'] + (60*30) < time()) { $alert = true; } } } if($alert == true) { $_TPL->assign('showAlert', 'red'); } else { //$_TPL->assign('showAlert', 'green'); } } $_TPL->assign('switch', $_SESSION['dash']['VC']['switch']); if($_SESSION['dash']['VC']['switch'] == 'touchscreen') { $_TPL->assign('VC_unmanned', true); } if($_GET['check'] == 'in') { if($_SESSION['dash']['VC']['switch'] == 'touchscreen') { lu_CheckInTouchScreen(); } else { lu_CheckIn(); } } else if($_GET['check'] == 'out') { if($_SESSION['dash']['VC']['switch'] == 'touchscreen') { lu_CheckOutTouchScreen(); } else { lu_CheckOut(); } } else if($_GET['switch'] == 'unmanned') { $_SESSION['dash']['VC']['switch'] = 'unmanned'; if($_GET['printing'] == true && (lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "No" && lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "")) { $_SESSION['dash']['VC']['printing'] = true; } else { $_SESSION['dash']['VC']['printing'] = false; } header('Location: /vc/'); die(); } else if($_GET['switch'] == 'touchscreen') { $_SESSION['dash']['VC']['switch'] = 'touchscreen'; if($_GET['printing'] == true && (lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "No" && lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "")) { $_SESSION['dash']['VC']['printing'] = true; } else { $_SESSION['dash']['VC']['printing'] = false; } header('Location: /vc/'); die(); } else if($_GET['switch'] == 'manned') { if($_POST['password']) { if(md5($_POST['password']) == $_SESSION['dash']['password']) { unset($_SESSION['dash']['VC']['switch']); //setcookie('email', "", time() - 3600); //setcookie('location', "", time() - 3600); header('Location: /vc/'); die(); } else { $_TPL->assign('switchLoginError', 'Incorrect Password'); } } $_TPL->assign('switchLogin', 'true'); } else if($_GET['m'] == 'visitor') { lu_ModifyVisitorVC(); } else if($_GET['m'] == 'enote') { lu_ModifyEnoteVC(); } else if($_GET['m'] == 'medical') { lu_ModifyMedicalVC(); } else if($_GET['print'] == 'label' && $_GET['v']) { lu_PrintLabelVC(); } else { unset($_SESSION['dash']['VC']['checkin']); unset($_SESSION['dash']['VC']['checkout']); $_TPL->assign('icon', 'GroupCheckin'); if($_SESSION['dash']['VC']['switch'] != 'unmanned' && $_SESSION['dash']['VC']['switch'] != 'touchscreen') { $staff_ids = array(); if($staffs = lu_GetTableRow('Staff', $_SESSION['dash']['VC']['loc_ID'], 'record_ID')) { foreach($staffs['rows'] as $staff) { $staff_ids[] = $staff['ID']; } } if($_GET['view'] == "tomorrow") { $dateStart = date('Y-m-d', mktime(0, 0, 0, date("m") , date("d")+1, date("Y"))); $dateEnd = date('Y-m-d', mktime(0, 0, 0, date("m") , date("d")+1, date("Y"))); } else if($_GET['view'] == "month") { $dateStart = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d"), date("Y"))); $dateEnd = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d")+30, date("Y"))); } else if($_GET['view'] == "week") { $dateStart = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d"), date("Y"))); $dateEnd = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d")+7, date("Y"))); } else { $dateStart = date('Y-m-d'); $dateEnd = date('Y-m-d'); } if(lu_GetFieldValue('Enable Survey', 'Location', $_SESSION['dash']['VC']['loc_ID']) == 'checked' && lu_GetFieldValue('Add Survey', 'Location', $_SESSION['dash']['VC']['loc_ID']) == 'checked') { $_TPL->assign('enableSurvey', true); } 
//lu_GetFieldName('Checked In', 'Visitor') //!= "checked" //date('d/m/Y'), lu_GetFieldName('Date of Visit', 'Visitor') if($visitors = lu_OrganiseVisitors(lu_GetTableRow('Visitor', $_SESSION['dash']['VC']['loc_ID'], lu_GetFieldName('Location for Visit', 'Visitor'), lu_GetFieldName('Checked In', 'Visitor').' != "checked" AND '.lu_GetFieldName('Checked Out', 'Visitor').' != "checked" AND '.lu_GetFieldName('Date of Visit', 'Visitor').' >= "'.$dateStart.'" AND '.lu_GetFieldName('Date of Visit', 'Visitor').' <= "'.$dateEnd.'"'))) { foreach($visitors['days'] as $day => $visitors_day) { foreach($visitors_day['rows'] as $key => $visitor) { $visitors['days'][$day]['rows'][$key]['visiting'] = lu_GetTableRow('Staff', $visitor['record_ID'], 'ID'); $visitors['days'][$day]['rows'][$key]['visiting']['notify'] = $_DBH->getRow('SELECT * FROM lu_notification WHERE ent_ID = "'.$visitor['record_ID'].'"'); } } //array_dump($visitors); $_TPL->assign('visitors', $visitors); } if($_GET['conGroup']) { if($_GET['action'] == 'add') { $_SESSION['dash']['VC']['conGroup'][$_GET['conGroup']] = $_GET['conGroup']; } else { unset($_SESSION['dash']['VC']['conGroup'][$_GET['conGroup']]); } } if(count($_SESSION['dash']['VC']['conGroup']) > 0) { if($conGroupResult = lu_GetTableRow('ContractorStaff', '1', '1', ' ID IN ('.implode(',', $_SESSION['dash']['VC']['conGroup']).')')) { if($_POST['_submit'] == 'Check-In Group >>') { $form = lu_GetForm('VisitorStandard'); $standarddata = array(); foreach($form['items'] as $key=>$item) { $standarddata[$key] = $_POST[lu_GetFieldName($item['name'], 'Visitor')]; } foreach($conGroupResult['rows'] as $conStaff) { $data = $standarddata; foreach($form['items'] as $key=>$item) { if($key != 'ID' && $key != 'record_ID' && $conStaff[lu_GetFieldName(lu_GetNameField($key, 'Visitor'), 'ContractorStaff')]) { $data[$key] = $conStaff[lu_GetFieldName(lu_GetNameField($key, 'Visitor'), 'ContractorStaff')]; } } $data['record_ID'] = $data[lu_GetFieldName('Visiting', 'Visitor')]; $data[lu_GetFieldName('Date of Visit', 'Visitor')] = date('Y-m-d'); $data[lu_GetFieldName('Time of Visit', 'Visitor')] = date('H:i'); $data[lu_GetFieldName('Checked In', 'Visitor')] = 'checked'; $data[lu_GetFieldName('Location for Visit', 'Visitor')] = $_SESSION['dash']['VC']['loc_ID']; $data[lu_GetFieldName('ConStaff ID', 'Visitor')] = $conStaff['ID']; $data[lu_GetFieldName('From', 'Visitor')] = lu_GetFieldValue('Legal Name', 'Contractor', $conStaff[lu_GetFieldName('Contractor', 'ContractorStaff')]); $id = lu_UpdateData($form, $data); lu_VisitorCheckIn($id); //array_dump($data); //array_dump($id); } unset($_SESSION['dash']['VC']['conGroup']); header('Location: /vc/'); die(); } if(count($conGroupResult['rows'])) { foreach($conGroupResult['rows'] as $key => $cstaff) { $conGroupResult['rows'][$key]['contractor'] = lu_GetTableRow('Contractor', $cstaff[lu_GetFieldName('Contractor', 'ContractorStaff')], 'ID'); } $_TPL->assign('conGroupResult', $conGroupResult); } $conGroupForm = lu_GetForm('VisitorConGroup'); $conGroupForm = lu_OrganiseVisitorForm($conGroupForm, $_SESSION['dash']['VC']['loc_ID'], 'Contractor'); $secure_options_array = lu_GetSecureOptions($org_id); if($secure_options_array[$_SESSION['dash']['VC']['loc_ID']]) { $conGroupForm['items'][lu_GetFieldName('Secure Area', 'Visitor')]['options']['values'] = $secure_options_array[$_SESSION['dash']['VC']['loc_ID']]; $conGroupForm['items'][lu_GetFieldName('Secure Area', 'Visitor')]['name'] = 'Secure Area'; } else { unset($conGroupForm['items'][lu_GetFieldName('Secure Area', 'Visitor')]); 
} if($secure_options_array) { $form['items'][lu_GetFieldName('Secure Area', 'Visitor')]['options']['values'] = $secure_options_array; $form['items'][lu_GetFieldName('Secure Area', 'Visitor')]['name'] = 'Secure Area'; } else { unset($form['items'][lu_GetFieldName('Secure Area', 'Visitor')]); } $_TPL->assign('conGroupForm', $conGroupForm); $_TPL->assign('hideFormCancel', true); } } if($_GET['searchVisitors']) { $_TPL->assign('searchVisitorsQuery', $_GET['searchVisitors']); $where = ''; if($_GET['searchVisitorsIn'] == 'Yes') { $where .= ' AND '.lu_GetFieldName('Checked In', 'Visitor').' = "checked"'; $_TPL->assign('searchVisitorsIn', 'Yes'); } else { $where .= ' AND '.lu_GetFieldName('Checked In', 'Visitor').' != "checked"'; $_TPL->assign('searchVisitorsIn', 'No'); } if($_GET['searchVisitorsOut'] == 'Yes') { $where = ''; $where .= ' AND '.lu_GetFieldName('Checked Out', 'Visitor').' = "checked"'; $_TPL->assign('searchVisitorsOut', 'Yes'); } else { $where .= ' AND '.lu_GetFieldName('Checked Out', 'Visitor').' != "checked"'; $_TPL->assign('searchVisitorsOut', 'No'); } if($searchVisitors = lu_OrganiseVisitors(lu_GetTableRow('Visitor', $_GET['searchVisitors'], '#search#', lu_GetFieldName('Location for Visit', 'Visitor').'="'.$_SESSION['dash']['VC']['loc_ID'].'"'.$where))) { foreach($searchVisitors['rows'] as $key => $visitor) { $searchVisitors['rows'][$key]['visiting'] = lu_GetTableRow('Staff', $visitor['record_ID'], 'ID'); } $_TPL->assign('searchVisitors', $searchVisitors); } else { $_TPL->assign('searchVisitorsNotFound', true); } } else if($_GET['searchStaff']) { if($_POST['staff_id']) { if(lu_CheckPermissions('staff', $_POST['staff_id'])) { $_DBH->query('UPDATE '.lu_GetTableName('Staff').' SET '.lu_GetFieldName('Current Location', 'Staff').' = "'.$_POST['current_location'].'" WHERE ID="'.$_POST['staff_id'].'"'); } } $locations = lu_GetTableRow('Location', $org_id, 'record_ID'); if(count($locations['rows']) > 1) { $_TPL->assign('staffLocations', $locations); } $loc_ids = array(); foreach($locations['rows'] as $location) { $loc_ids[] = $location['ID']; } // array_dump($locations); // array_dump($_POST); $_TPL->assign('searchStaffQuery', $_GET['searchStaff']); $where = ' AND record_Inactive != "checked"'; if($_GET['searchStaffIn'] == 'Yes' && $_GET['searchStaffOut'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Staff Status', 'Staff').' = "" OR '.lu_GetFieldName('Staff Status', 'Staff').' = "On-Site")'. $_TPL->assign('searchStaffIn', 'Yes'); $_TPL->assign('searchStaffOut', 'No'); } else if($_GET['searchStaffOut'] == 'Yes' && $_GET['searchStaffIn'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Staff Status', 'Staff').' != "" AND '.lu_GetFieldName('Staff Status', 'Staff').' != "On-Site")'. 
$_TPL->assign('searchStaffOut', 'Yes'); $_TPL->assign('searchStaffIn', 'No'); } else { $_TPL->assign('searchStaffOut', 'Yes'); $_TPL->assign('searchStaffIn', 'Yes'); } if($searchStaffs = lu_GetTableRow('Staff', $_GET['searchStaff'], '#search#', 'record_ID IN ('.implode(',', $loc_ids).')'.$where, lu_GetFieldName('First Name', 'Staff').','.lu_GetFieldName('Surname', 'Staff'))) { $_TPL->assign('searchStaffs', $searchStaffs); } else { $_TPL->assign('searchStaffNotFound', true); } } else if($_GET['searchContractor']) { $_TPL->assign('searchContractorQuery', $_GET['searchContractor']); //$where = ' AND '.lu_GetTableName('ContractorStaff').'.record_Inactive != "checked"'; $where = ' '; if($_GET['searchContractorIn'] == 'Yes' && $_GET['searchContractorOut'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Onsite Status', 'ContractorStaff').' = "Onsite")'; $_TPL->assign('searchContractorIn', 'Yes'); $_TPL->assign('searchContractorOut', 'No'); } else if($_GET['searchContractorOut'] == 'Yes' && $_GET['searchContractorIn'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Onsite Status', 'ContractorStaff').' != "Onsite")'. $_TPL->assign('searchContractorOut', 'Yes'); $_TPL->assign('searchContractorIn', 'No'); } else { $_TPL->assign('searchContractorOut', 'Yes'); $_TPL->assign('searchContractorIn', 'Yes'); } $join = 'LEFT JOIN '.lu_GetTableName('Contractor').' ON '.lu_GetTableName('Contractor').'.ID = '.lu_GetTableName('ContractorStaff').'.'.lu_GetFieldName('Contractor', 'ContractorStaff'); $extrasearch = array ( lu_GetTableName('Contractor').'.'.lu_GetFieldName('Legal Name', 'Contractor') ); if($searchContractorResult = lu_GetTableRow('ContractorStaff', $_GET['searchContractor'], '#search#', lu_GetTableName('ContractorStaff').'.record_ID = "'.$org_id.'" '.$where, lu_GetFieldName('First Name', 'ContractorStaff').','.lu_GetFieldName('Surname', 'ContractorStaff'), $join, $extrasearch)) { /* foreach($searchContractorResult['rows'] as $key=>$contractor) { $lacp_rights = lu_OrganiseCustomDataFunctionMultiselect($contractor[lu_GetFieldName('Location Access Rights', 'ContractorStaff')]); if(!in_array($_SESSION['dash']['VC']['lacp'], $lacp_rights)) { unset($searchContractorResult['rows'][$key]); } } */ if(count($searchContractorResult['rows'])) { foreach($searchContractorResult['rows'] as $key => $cstaff) { /* if($cstaff[lu_GetFieldName('Onsite_Status', 'Contractor')] == 'Onsite')) { if($visitor['rows'][0][lu_GetFieldName('ConStaff ID', 'Visitor')]) { $_DBH->query('UPDATE '.lu_GetTableName('ContractorStaff').' SET '.lu_GetFieldName('Onsite Status', 'ContractorStaff').' 
= "" WHERE ID="'.$visitor['rows'][0][lu_GetFieldName('ConStaff ID', 'Visitor')].'"'); } } */ if($cstaff[lu_GetFieldName('SACN Expiry Date', 'ContractorStaff')] != '0000-00-00') { if(strtotime($cstaff[lu_GetFieldName('SACN Expiry Date', 'ContractorStaff')]) < time()) { $searchContractorResult['rows'][$key]['sacn_expiry'] = true; } else { $searchContractorResult['rows'][$key]['sacn_expiry'] = false; } } else { $searchContractorResult['rows'][$key]['sacn_expiry'] = false; } if($cstaff[lu_GetFieldName('Induction Valid Until', 'ContractorStaff')] != '0000-00-00') { if(strtotime($cstaff[lu_GetFieldName('Induction Valid Until', 'ContractorStaff')]) < time()) { $searchContractorResult['rows'][$key]['induction_expiry'] = true; } else { $searchContractorResult['rows'][$key]['induction_expiry'] = false; } } else { $searchContractorResult['rows'][$key]['induction_expiry'] = false; } $searchContractorResult['rows'][$key]['contractor'] = lu_GetTableRow('Contractor', $cstaff[lu_GetFieldName('Contractor', 'ContractorStaff')], 'ID'); } $_TPL->assign('searchContractorResult', $searchContractorResult); } else { $_TPL->assign('searchContractorNotFound', true); } } else { $_TPL->assign('searchContractorNotFound', true); } } $occupancy = array(); $occupancy['staffNumber'] = $_DBH->getOne('SELECT count(*) FROM '.lu_GetTableName('Staff').' WHERE record_ID = "'.$_SESSION['dash']['VC']['loc_ID'].'" AND record_Inactive != "checked" AND '.lu_GetFieldName('Ignore Counts', 'Staff').' != "checked"'); $occupancy['staffNumberOnsite']= $_DBH->getOne( 'SELECT count(*) FROM '.lu_GetTableName('Staff').' WHERE ( (record_ID = "'.$_SESSION['dash']['VC']['loc_ID'].'" AND ('.lu_GetFieldName('Staff Status', 'Staff').' = "" OR '.lu_GetFieldName('Staff Status', 'Staff').' = "On-Site")) OR '.lu_GetFieldName('Current Location', 'Staff').' = "'.$_SESSION['dash']['VC']['loc_ID'].'") AND record_Inactive != "checked" AND '.lu_GetFieldName('Ignore Counts', 'Staff').' != "checked"'); $occupancy['visitorsOnsite'] = $_DBH->getOne('SELECT count(*) FROM '.lu_GetTableName('Visitor').' WHERE '.lu_GetFieldName('Location for Visit', 'Visitor').' = "'.$_SESSION['dash']['VC']['loc_ID'].'" AND '.lu_GetFieldName('Checked In', 'Visitor').' = "checked" AND '.lu_GetFieldName('Checked Out', 'Visitor').' != "checked"'); $_TPL->assign('occupancy', $occupancy); if($enotes = lu_GetTableRow('Enote', $org_id, 'record_ID', lu_GetFieldName('Note Emailed', 'Enote').' = "0000-00-00" AND '.lu_GetFieldName('Note Passed On', 'Enote').' 
!= "Yes"')) { $_TPL->assign('EnoteNotice', true); } if($medical = lu_GetTableRow('MedicalRoom', $_SESSION['dash']['VC']['loc_ID'], 'record_ID', 'record_Inactive != "Yes"')) { $_TPL->assign('MedicalNotice', true); } if(lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "No" && lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "") { $_TPL->assign('UnmannedPrinting', true); } } else { if($_SESSION['dash']['VC']['printing'] == true) { $_TPL->assign('UnmannedPrinting', true); } } // enable if contractor check-in buttons should be enabled if(lu_GetFieldValue('Enable Contractor Check In', 'Location', $_SESSION['dash']['VC']['loc_ID']) == "checked") { $_TPL->assign('ContractorCheckin', true); } } if($_SESSION['dash']['entity_id'] && $_GET['fixupCon'] == 'true') { $conStaffs = lu_GetTableRow('ContractorStaff', $_SESSION['dash']['ModifyConStaffs']['org_ID'], 'record_ID', '', lu_GetFieldName('First Name', 'ContractorStaff').','.lu_GetFieldName('Surname', 'ContractorStaff')); foreach($conStaffs['rows'] as $key => $cstaff) { if($cstaff[lu_GetFieldName('Site Access Card Number', 'ContractorStaff')] && $cstaff[lu_GetFieldName('Site Access Card Type', 'ContractorStaff')]) { echo $cstaff['ID'].' '; $_DBH->query('UPDATE '.lu_GetTableName('Visitor').' SET '.lu_GetFieldName('Site Access Card Number', 'Visitor').' = "'.$cstaff[lu_GetFieldName('Site Access Card Number', 'ContractorStaff')].'", '.lu_GetFieldName('Site Access Card Type', 'Visitor').' = "'.$cstaff[lu_GetFieldName('Site Access Card Type', 'ContractorStaff')].'" WHERE '.lu_GetFieldName('ConStaff ID', 'Visitor').'="'.$cstaff['ID'].'"'); } } } } else { if($_SESSION['dash']['staffs']) { foreach($_SESSION['dash']['staffs']['rows'] as $staff) { if($staff[lu_GetFieldName('Reception Manager', 'Staff')] == 'checked') { $loc_id = $staff['record_ID']; unset($_SESSION['dash']['VC']); if($loc_id = lu_GetFieldValue('ID', 'Location', $loc_id)) { $_SESSION['dash']['VC']['loc_ID'] = $loc_id; header('Location: /vc/'); die(); } } } } $_TPL->assign('mode', 'public'); } $content['page_content'] = $_TPL->fetch('modules/vc.htm'); return $content; } ?> die();die();die();die();die(); This question will probably be closed - i just need some support from my coding brothers and sisters. *SOB*
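A note on the query building in the dump above: every WHERE clause is assembled by concatenating lu_GetFieldName()/lu_GetTableName() output and raw session or request values straight into the SQL string. Purely as a hedged sketch (the $pdo connection, $locationId, $dateStart and $dateEnd variables are assumptions; only the lu_* helpers and field labels come from the code above), the same kind of date-range visitor count could be written with bound values:

<?php
// Hypothetical PDO rewrite of one of the count queries above.
// Table and field names still have to be interpolated (PDO cannot bind
// identifiers), but every value is passed as a bound parameter.
$sql = sprintf(
    'SELECT COUNT(*) FROM %s WHERE %s = :loc AND %s >= :start AND %s <= :end',
    lu_GetTableName('Visitor'),
    lu_GetFieldName('Location for Visit', 'Visitor'),
    lu_GetFieldName('Date of Visit', 'Visitor'),
    lu_GetFieldName('Date of Visit', 'Visitor')
);

$stmt = $pdo->prepare($sql);          // $pdo: assumed PDO connection
$stmt->execute([
    ':loc'   => $locationId,          // e.g. $_SESSION['dash']['VC']['loc_ID']
    ':start' => $dateStart,
    ':end'   => $dateEnd,
]);
$visitorCount = (int) $stmt->fetchColumn();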

    Read the article

  • keyword stuffing in SEO

    - by Andrej
    I have a web shop, and on some of the pages a particular keyword is used more than on the others. For example, "hp toner" appears in the description of the product, in the alt tag, in the brand, and so on, and if I have, let's say, 100 of these products on the "HP page", that means "hp toner" is going to show up at least 200 times more often than some other random word... but the keyword stuffing is not intentional here; it's just that the quantity of the product is bigger, and so is the count of the word that describes it. Is that considered keyword stuffing in SEO terms?

    Read the article

  • CodePlex Daily Summary for Friday, March 26, 2010

    CodePlex Daily Summary for Friday, March 26, 2010New Projects.NET settings class generator T4 templates: A couple of T4 templates to generate a Settings class for your .NET project. Allows you to define your application settings in an XML file and have...AlphaPagedList: AlphaPagedList makes it easier for .Net developers to write paging code. Based on PagedList it allows you to take any List<T> and split it based on...C# Projects: C# ProjectsChitme: Aenean feugiat pharetra enim rhoncus viverra. In at nunc nec sem varius bibendum. Aliquam erat volutpat. Nullam fringilla facilisis massa et eleife...CloudCache - Distributed Cache Tier with Azure: Cloudcache makes it easier for you to manage and deploy a distributed caching tier to Windows Azure. Included is a web-dashboard in MVC 2.0, Memcac...Composer: Composer is an extensible Compositional Architecture framework, providing a set of functionality such as Inversion of Control container (IoC), Depe...Data Connection Suite: Data Connection Suite is a set of easy to use data connection string builder dialogs & controls ready to be integrated in any .NET application.DatabaseHandler: Database HandlerEPiServer Blog Page Provider: A example page provider implementation for EPiServer that supports external blog sources for pages, Blogger and WordPress supported out of the box ...Extended MessageBox: ExtendedMessageBox makes it easier to display messages from your Windows applications. Based on the built-in .NET MessageBox class functionality, i...FluentPath: FluentPath implements a modern wrapper around System.IO, using modern patterns such as fluent APIs and Lambdas. By using FluentPath instead of Syst...Halcyone : Silverlight without pain: Halcyone is application framework for Silverlight that should make live of developers easier =)IlluminaRT: Real-time renderingme2: Mista Engine 2MessegeBox RightToLeft Lib: This is really simple lib project for use RTL in MessegeBox class. This just for short code and default option for RTL.MS Word Automation Service: A MS Word Automation service that comsumes a Word template and combines with XML to produce a word document. Currently in production. Must add some...SharePoint - Site Request InfoPath Form Template: This template allow portal user to enter initial information for requesting of creating a new SharePoint site. TextFlow - Text Editor: TextFlow is a fast and light text editor that simplifies day-to-day tasks. You can create letters and documents through TextFlow. It also includes ...TiledLib: A library for using Tiled (http://mapeditor.org) levels in XNA Game Studio projects. Includes a content pipeline extension and runtime library.wcf learning 2010: myWCFprojectsNew Releases.NET settings class generator T4 templates: Example 1: An example project containing the T4 templates and associated files. SingleSite - generate settings for a single site MultiSite - generate setting...AccessibilityChecker: Accessibility Checker V0.1: SharePoint Accessibility Checker V0.1AlphaPagedList: AlphaPagedList v0.9: Initial release of AlphaPagedListASP.Net RIA Controls: Version 1.1 Beta: New XHTML compliant version with alternative content support if no plugin installed.Business & System Analysis Templates and Best Practices: R 00: You may find out here the structured on my own materials from from Luxoft ReqLabs 2009 + short presentation about System Analysis and Modelling. Th...CloudCache - Distributed Cache Tier with Azure: v1.0.0.0: First release! 
More information at http://blog.shutupandcode.net/?p=935CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.39: implemented client side functions on remainder of account pagesDevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3d: Alpha 3d is a general bug fix -tweaking pagination, navigation, packaging, file system storage, page validation, security, locals, and linked views.Digital Media Processing Project 1: Image Processor: Image Processor 1.01: Supports opening files through Windows Explorer or by drag and drop.Extended MessageBox: ExtendedMessageBox Runtime Version 1.2: Initial releaseExtended MessageBox: SourceCode for Version 1.2: Initial SourceCodeFluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.0: Fluent Ribbon Control Suite 1.0 Includes: Fluent.dll (with .pdb and .xml, debug and release version) Showcase Application Samples Foundation (T...FluentPath: FluentPath Beta: The Beta release of FluentPath.HaterAide ORM: HaterAide ORM 1.5: This version is a, more or less, rewrite of the code base. Also many new features have been added in this release: 1) Foreign keys are now added to...iTuner - The iTunes Companion: iTuner 1.2.3735 Beta: V1.2 allows you to synchronize one or more iTunes playlists to a USB MP3 player. This continues the evolution yet maintains the minimalistic appro...LogWin-Logging Your Computer Activities: LogWin-Logging your computer activities: This program is logging your computer activities and display them as table and pie chart. It is made by native C , HTML Dialog and Google Chart API.MessegeBox RightToLeft Lib: MessegeBoxRTL-1.0.0.0_BIN: My First upload.. This is binary release only. Have fun.MessegeBox RightToLeft Lib: MessegeBoxRTL-1.0.0.0_SRC: My first upload.. This is source code with binary. Have fun.MS Word Automation Service: Alpha: In production already, but who cares. It works.MultiMenu ASP.NET Cascading Menu WebControl: MultiMenu 2.6 ASP.NET Menu: Fixed problems that prevented the menu from working with the XHTML DocTypes Added support for IE 7-8 Added XmlLoading and XmlLoaded events Ad...netgod: LanyoWebBrowser: Lanyo ERP ClientnopCommerce. Open Source online shop e-commerce solution.: nopCommerce 1.50: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/ReleaseNotes.aspx).Open NFe: Open NFe v1.9.7: Fontes do DANFe 1.9.7 Trim na conversão TXT para XMLpatterns & practices - Smart Client Guidance: Smart Client Software Factory 2010 Beta Source: The Smart Client Software Factory 2010 provides an integrated set of guidance that assists architects and developers in creating composite smart cl...Physics Helper for Silverlight, WPF, Blend, and Farseer: PhysicsHelper 3.0.0.5 Alpha: This release supports Windows Phone 7 Series Development, along with the Silverlight 3 and WPF support. It requires Visual Studio 2010, plus the Wi...Protein Insight: ProteinInsight V2.0.1: Protein Insight is protein structure visualization system. Visualization rendering engine is based on native C and Direct3D, plug-in is based on CL...PSFGeneric: ERP / CRM business management and administration: PSFGeneric 1.4.0.9000 Manual and power-ups ASNIA: PSFGeneric 1.4.0.9000 Tareas 2.1.0 MySQL Persistente 1.0.3 TM-U220 40 col. Driver 1.0.0 Gestor Contable Básico 1.1.2.1 Cafetería 1.1.6 Catalogo 1....QuestTracker: QuestTracker 0.2: Primary new feature: Import/Export Quest Log. 
Deleting anything will cause an automatic export prior to deletion, automatically backing up your log...Reusable Library: V1.0.5: A collection of reusable abstractions for enterprise application developer.Reusable Library Demo: Reusable Library Demo v1.0.3: A demonstration of reusable abstractions for enterprise application developerSharePoint - Site Request InfoPath Form Template: SharePoint - Site Request InfoPath Form Template: This template allow portal user to enter initial information for requesting of creating a new SharePoint site To install: 1. Run the SiteRequest.m...Silverlight Gantt Chart: Silverlight Gantt Chart 1.2: Updates include ability to add GanttNodeSections that allow for multiple GanttItems in a single row.Spiral Architecture Driven Development (SADD): SADD v.1.0: This is the First complete Release with the NEW materials now all in English ! The abstract from the main article named "SADD-MSAJ-The Spiral Arc...Spiral Architecture Driven Development (SADD) for Russian: SADD v.1.0: Это Первая Версия полного релиза SADD на русском языке. Отрывок из этой статьи опубликован в Microsoft Architecture Journal #23, вы можете найти в ...Sprite Sheet Packer: 2.3 Release: SpriteSheetPacker now supports saved user settings so the app will now remember your previous values for padding, image size, image options, whethe...Standalone XQuery Implementation in .NET: 1.4: This is version 1.4 of the QueryMachine.XQuery. It's includes bug fixes and performance optimization. Document load time is dramatically increased...TextFlow - Text Editor: Kernel: TextFlow core KernelTextFlow - Text Editor: TextFlow Beta 3 Technical Preview: This is a technical preview of TextFlow and is made to run for 40 days after which it will expire. Changes : 140 Bug fixes Supports Windows(R) 7...TiledLib: TiledLib 1.0: First release of TiledLib. This download is for prebuilt DLLs and a demo project. For the full source code, use the Source Code tab to download the...UnGrouper: Current build: This is a preview build. Hide and show the main window with winkey+a. IMPORTANT NOTE: You must close all applications before launching this build ...VCC: Latest build, v2.1.30325.0: Automatic drop of latest buildWCF Metal: WCFMetal 0.3.0.0: WCFMetal 0.3.0.0Copyright © 2010 John Leitch Distributed under the terms of the GNU General Public License Summary By utilizing LINQ to SQL gene...Web Log Analyzer: Release Indihiang 1.0: For installation and how to use, please read Indihiang portal: http://wiki.indihiang.com What's New in Indihiang 1.0 ? check http://geeks.netindone...異世界の新着動画: Ver. 10-03-25: ニコ生仕様に対応Most Popular ProjectsMetaSharpRawrWBFS ManagerASP.NET Ajax LibrarySilverlight ToolkitMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsRawrjQuery Library for SharePoint Web ServicesBlogEngine.NETFarseer Physics EngineFacebook Developer ToolkitLINQ to TwitterFluent Ribbon Control SuiteTable2ClassNB_Store - Free DotNetNuke Ecommerce Catalog ModulePHPExcel

    Read the article

  • How to stop UITableView moveRowAtIndexPath from leaving blank rows upon reordering

    - by coneybeare
    I am having an issue where in reordering my UITableViewCells, the tableView is not scrolling with the cell. Only a blank row appears and any subsequent scrolling gets an Array out of bounds error without any of my code in the Stack Trace. Here is a quick video of the problem. Here is the relevant code: - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { return indexPath.section == 1; } - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { BOOL ret = indexPath.section == 1 && indexPath.row < self.count; DebugLog(@"canMoveRowAtIndexPath: %d:%d %@", indexPath.section, indexPath.row, (ret ? @"YES" : @"NO")); return ret; } - (void)delayedUpdateCellBackgroundPositionsForTableView:(UITableView *)tableView { [self performSelectorOnMainThread:@selector(updateCellBackgroundPositionsForTableView:) withObject:tableView waitUntilDone:NO]; } - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { if (fromIndexPath.row == toIndexPath.row) return; DebugLog(@"Moved audio from %d:%d to %d:%d", fromIndexPath.section, fromIndexPath.row, toIndexPath.section, toIndexPath.row); NSMutableArray *audio = [self.items objectAtIndex:fromIndexPath.section]; [audio exchangeObjectAtIndex:fromIndexPath.row withObjectAtIndex:toIndexPath.row]; [self performSelector:@selector(delayedUpdateCellBackgroundPositionsForTableView:) withObject:tableView afterDelay:kDefaultAnimationDuration/3]; } And here is the generated Stack Trace of the crash: Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Application Specific Information: iPhone Simulator 3.2 (193.3), iPhone OS 3.0 (7A341) *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray removeObjectsInRange:]: index (6) beyond bounds (6)' Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 CoreFoundation 0x302ac924 ___TERMINATING_DUE_TO_UNCAUGHT_EXCEPTION___ + 4 1 libobjc.A.dylib 0x93cb2509 objc_exception_throw + 56 2 CoreFoundation 0x3028e5fb +[NSException raise:format:arguments:] + 155 3 CoreFoundation 0x3028e55a +[NSException raise:format:] + 58 4 Foundation 0x305684e9 _NSArrayRaiseBoundException + 121 5 Foundation 0x30553a6e -[NSCFArray removeObjectsInRange:] + 142 6 UIKit 0x30950105 -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] + 862 7 UIKit 0x30947715 -[UITableView layoutSubviews] + 250 8 QuartzCore 0x0090bd94 -[CALayer layoutSublayers] + 78 9 QuartzCore 0x0090bb55 CALayerLayoutIfNeeded + 229 10 QuartzCore 0x0090b3ae CA::Context::commit_transaction(CA::Transaction*) + 302 11 QuartzCore 0x0090b022 CA::Transaction::commit() + 292 12 QuartzCore 0x009132e0 CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) + 84 13 CoreFoundation 0x30245c32 __CFRunLoopDoObservers + 594 14 CoreFoundation 0x3024503f CFRunLoopRunSpecific + 2575 15 CoreFoundation 0x30244628 CFRunLoopRunInMode + 88 16 GraphicsServices 0x32044c31 GSEventRunModal + 217 17 GraphicsServices 0x32044cf6 GSEventRun + 115 18 UIKit 0x309021ee UIApplicationMain + 1157 19 XXXXXXXX 0x0000278a main + 104 (main.m:12) 20 XXXXXXXX 0x000026f6 start + 54 NOte that the array out of bounds length is not the length of my elements (I have 9), but always something smaller. I have been trying to solve this for many hours days without avail… any ideas? 
UPDATE: More code as requested In my delegate: - (UITableViewCellEditingStyle)tableView:(UITableView *)tableView editingStyleForRowAtIndexPath:(NSIndexPath *)indexPath { return UITableViewCellEditingStyleNone; } - (NSIndexPath *)tableView:(UITableView *)tableView targetIndexPathForMoveFromRowAtIndexPath:(NSIndexPath *)sourceIndexPath toProposedIndexPath:(NSIndexPath *)proposedDestinationIndexPath { int count = [(UAPlaylistEditDataSource *)self.dataSource count]; if (proposedDestinationIndexPath.section == 0) { return [NSIndexPath indexPathForRow:0 inSection:sourceIndexPath.section]; }else if (proposedDestinationIndexPath.row >= count) { return [NSIndexPath indexPathForRow:count-1 inSection:sourceIndexPath.section]; } return proposedDestinationIndexPath; } …thats about it. I am using the three20 framework and I have not had any issues with reordering till now. The problem is also not in the updateCellBackgroundPositionsForTableView: method as it still crashes when this is commented out.

    Read the article

  • Set App Windows to Always be on Top

    - by Asian Angel
    Sometimes you have a small app or other software that you want to keep topmost but how do you keep it on top without a lot of hassle? If this sounds like your situation then you might want to have a look at OnTop. Before For our example we had four individual apps open…any of the four could easily be on top at the moment. OnTop in Action The exe file for the app comes in a zip file. Simply unzip the file, place it in the Program Files Folder, create a shortcut and you are ready to go. Once you start OnTop you will see a new System Tray Icon…right click to access the Context Menu with a list of currently open apps. We decided to set Winamp to be always topmost first. Note: OnTop detected all three individual sections of our Winamp Player along with the individual monitors running in our Taskbar. Clicking on Paint.NET brought it forward over Firefox and Microsoft Word but Winamp was still sitting on top. Clicking on Microsoft Word next still did not affect Winamp’s topmost status. Nice. As soon as we switched the topmost status to Microsoft Word you can see that it immediately came to the front. One thing that we did note in our tests…the best method for switching topmost status is either to choose a different app or close the app that was topmost. Conclusion OnTop might be considered niche software but if you have an app window that you need to keep on top of other windows then you might want to give this small app a try. Links Download OnTop at Softpedia

    Read the article

  • New regular expression features in PCRE 8.34 and 8.35

    - by Jan Goyvaerts
    PCRE 8.34 adds some new regex features and changes the behavior of a few to make it more compatible with the latest versions of Perl. There are no changes to the regex syntax in PCRE 8.35. \o{377} is now an octal escape just like \377. This syntax was first introduced in Perl 5.12. It avoids any confusion between octal escapes and backreferences. It also allows octal numbers beyond 377 to be used. E.g. \o{400} is the same as \x{100}. If you have any reason to use octal escapes instead of hexadecimal escapes then you should definitely use the new syntax. Because of this change, \o is now an error when it doesn’t form a valid octal escape. Previously \o was a literal o and \o{377} was a sequence of 377 o’s. In free-spacing mode, whitespace between a quantifier and the ? that makes it lazy or the + that makes it possessive is now ignored. In Perl this has always been the case. In PCRE 8.33 and prior, whitespace ended a quantifier and any following ? or + was seen as a second quantifier and thus an error. The shorthand \s now matches the vertical tab character in addition to the other whitespace characters it previously matched. Perl 5.18 made the same change. Many other regex flavors have always included the vertical tab in \s, just like POSIX has always included it in [[:space:]]. Names of capturing groups are no longer allowed to start with a digit. This has always been the case in Perl since named groups were added to Perl 5.10. PCRE 8.33 and prior even allowed group names to consist entirely of digits. [[:<:]] and [[:>:]] are now treated as POSIX-style word boundaries. They match at the start and the end of a word. Though they use similar syntax, these have nothing to do with POSIX character classes and cannot be used inside character classes. Perl does not support POSIX word boundaries. The same changes affect PHP 5.5.10 (and later) and R 3.0.3 (and later) as they have been updated to use PCRE 8.34. RegexBuddy and RegexMagic have been updated to support the latest versions of PCRE, PHP, and R. Older versions that were previously supported are still supported, so you can compare or convert your regular expressions between the latest versions of PCRE, PHP, and R and whichever version you were using previously.
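    Since PHP 5.5.10 and later link against PCRE 8.34, the new syntax can be exercised directly from preg_match. A minimal sketch, assuming a PHP build with the bundled PCRE 8.34 or newer (the sample strings are invented for illustration):

<?php
// \o{...}: octal 100 is decimal 64, i.e. "@" (same code point as \x{40}).
var_dump(preg_match('/\o{100}/', 'user@example.com'));        // int(1)

// \s now also matches the vertical tab, \x0B.
var_dump(preg_match('/a\sb/', "a\x0Bb"));                     // int(1)

// [[:<:]] and [[:>:]] match at the start and the end of a word.
var_dump(preg_match('/[[:<:]]cat[[:>:]]/', 'the cat sat'));   // int(1)
var_dump(preg_match('/[[:<:]]cat[[:>:]]/', 'concatenate'));   // int(0)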

    Read the article

  • CodePlex Daily Summary for Tuesday, October 01, 2013

    CodePlex Daily Summary for Tuesday, October 01, 2013Popular ReleasesDotNetNuke® Form and List: 06.00.06: DotNetNuke Form and List 06.00.06 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext Changes to 6.0.1•Scripts now compatible with SQL Azure. Changes to 6.0.0•Icons are shown in module action b...BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release Version 7.0930September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - Instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can b...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0Random searcher i pochodne: Generatorek playlisty: Generuje playlisty w formacie .m3u. Na razie beta z bety - ale juz dziala i mozna uzywac.sb0t v.5: sb0t 5.15: Fixed bug in join filter. Fixed bug in pm blocking. Added new Crypto and Entities static classes to scripting. Updated the default node list.Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.9.29): Initial releaseAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download ends. word lis...HD-Trailers.NET Downloader: HD-Trailer.Net Downloader v 2.1.5: This started out as an effort to improve the search for the corr3ct IMDB page for the movie. I think I have done that here. I have run about 200 movies and the correct movie was identified in all cases including some entries that were problematic in the past. I also swatted several bugs that popped up under special circumstances and resulted in exceptions. This version should be quite a bit better than previous versions. 
Let me know if there are any issues.Wsus Package Publisher: Release v1.3.1309.28: Fix a bug, where WPP crash when running on a computer where Windows was installed in another language than Fr, En or De, and launching the Update Creation Wizard. Fix a bug, where WPP crash if some Multi-Thread job are launch with more than 64 items. Add a button to abort "Install This Update" wizard. Allow WPP to remember which columns are shown last time. Make URL clickable on the Update Information Tab. Add a new feature, when Double-Clicking on an update, the default action exec...Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...C# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. - MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. 
- Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...New Projects.netProject: .Net Project 3TINafs-m: Secure file storageASP.NET dhtmlxChart Class: Create different types of charts and render them to webforms.ASP.NET dhtmxGantt Class: Add tasks Add dependencies Use lighboxBestCodeTrainer: Project BestCode is for Training. Black Dragon Online Shop: An online shop with basic functionalities implemented with ASP.NET MVC as a team project in Telerik Academy 2012/2013.Central fovea -X: ????: 1.???????????Cell,??RGB,HSL,X,Y 2.???????? 3.???????????? 4.????????,???????? 5.????????CorvusSKK: SKK-like Japanese Input Method for Windowsdaneshjoo: ?? ??? ???????DBA Toolbox: A tool for the production DBA. A place to store all those scripts, tools, references and queries that make your job as a DBA easier.Devpad IDE: Basic integrated development environment.DXUT for Direct3D 11: Latest version of the Microsoft DXUT framework for Direct3D 11 Win32 desktop applications formerly in the now legacy DirectX SDK.Dynamics CRM Messaging Integration: This project shows how to implement a simple custom messaging solution using Microsoft Dynamics CRM and based on its standard development components.Kockafölde: Kliens program.Medusa's Ebay: Teamwork assignment @ TelerikAcademy 2012/2013. Project is for creating simple (or more complicated) interneTeamwork assignment @ TelerikAcademy 2012/2013.Nauplius.PAS: Nauplius.PAS allows end users to leverage the PowerPoint Conversion Services within SharePoint 2013 to convert PowerPoint documents from one format to another.PayBox payment gateway provider for NB_Store: PayBox payment provider for NB_StorePet passport: This is a teamwork projectProject Euler solutions: Solutions to the ProjectEuler problemsRoboSubSync: Fixes an unsynchronized subtitle in any language according to an already synced subtitle in english, with the help of some advanced algorithm magic... ShopBook: Shopbooksliders: jQuery slidersSQLite to JSON: A simple program to convert SQLite data to JSON.Super64: This is a Commodore 64 emulator that is determined to have extremely unnecessary accuracy.TeamProject1: This is just Demo ProjectXnaHelper: A collection of tools for making 2D XNA Games

    Read the article

  • Page expired issue with back button and wicket SortableDataProvider and DataTable

    - by David
    Hi, I've got an issue with SortableDataProvider and DataTable in Wicket. I've defined my DataTable as such: IColumn<Column>[] columns = new IColumn[9]; //column values are mapped to the private attributes listed in ColumnImpl.java columns[0] = new PropertyColumn(new Model("#"), "columnPosition", "columnPosition"); columns[1] = new PropertyColumn(new Model("Description"), "description"); columns[2] = new PropertyColumn(new Model("Type"), "dataType", "dataType"); Adding it to the table: DataTable<Column> dataTable = new DataTable<Column>("columnsTable", columns, provider, maxRowsPerPage) { @Override protected Item<Column> newRowItem(String id, int index, IModel<Column> model) { return new OddEvenItem<Column>(id, index, model); } }; My data provider: public class ColumnSortableDataProvider extends SortableDataProvider<Column> { private static final long serialVersionUID = 1L; private List<Column> list = null; public ColumnSortableDataProvider(Table table, String sortProperty) { this.list = Arrays.asList(table.getColumns().toArray(new Column[0])); setSort(sortProperty, true); } public ColumnSortableDataProvider(List<Column> list, String sortProperty) { this.list = list; setSort(sortProperty, true); } @Override public Iterator<Column> iterator(int first, int count) { /* first - first row of data, count - minimum number of elements to retrieve; so this method returns an iterator capable of iterating over {first, first+count} items */ Iterator<Column> iterator = null; try { if(getSort() != null) { Collections.sort(list, new Comparator<Column>() { private static final long serialVersionUID = 1L; @Override public int compare(Column c1, Column c2) { int result = 1; PropertyModel<Comparable> model1 = new PropertyModel<Comparable>(c1, getSort().getProperty()); PropertyModel<Comparable> model2 = new PropertyModel<Comparable>(c2, getSort().getProperty()); if(model1.getObject() == null && model2.getObject() == null) result = 0; else if(model1.getObject() == null) result = 1; else if(model2.getObject() == null) result = -1; else result = ((Comparable)model1.getObject()).compareTo(model2.getObject()); result = getSort().isAscending() ? result : -result; return result; } }); } if (list.size() > (first + count)) iterator = list.subList(first, first + count).iterator(); else iterator = list.iterator(); } catch (Exception e) { e.printStackTrace(); } return iterator; } The problem is the following: - I click a column header to sort by that column. - I navigate to a different page. - I click Back (or Forward if I do the opposite scenario). - Page has expired. It'd be nice to generate the page using PageParameters, but I somehow need to intercept the sort event to do so. Any pointers would be greatly appreciated. Thanks a ton!! David

    Read the article

  • Check Your Spelling, Grammar, and Style in Firefox and Chrome

    - by Matthew Guay
    Are you tired of making simple writing mistakes that get past your browser’s spell-check?  Here’s how you can get advanced grammar check and more in Firefox and Chrome with After the Deadline. Microsoft Word has spoiled us with grammar, syntax, and spell checking, but the default spell check in Firefox and Chrome still only does basic checks.  Even webapps like Google Docs don’t check more than basic spelling errors.  However, WordPress.com is an exception; it offers advanced spelling, grammar, and syntax checking with its After the Deadline proofing system.  This helps you keep from making embarrassing mistakes on your blog posts, and now, thanks to a couple free browser plugins, it can help you keep from making these mistakes in any website or webapp. After the Deadline in Google Chrome Add the After the Deadline extension (link below) to Chrome as usual. As soon as it’s installed, you’re ready to start improving your online writing.  To check spelling, grammar, and more, click the ABC button that you’ll now see at the bottom of most text boxes online. After a quick scan, grammar mistakes are highlighted in green, complex expressions and other syntax problems are highlighted in blue, and spelling mistakes are highlighted in red as would be expected.  Click on an underlined word to choose one of its recommended changes or ignore the suggestion. Or, if you want more explanation about what was wrong with that word or phrase, click Explain for more info. And, if you forget to run an After the Deadline scan before submitting a text entry, it will automatically check to make sure you still want to submit it.  Click Cancel to go back and check your writing first.   To change the After the Deadline settings, click its icon in the toolbar and select View Options.  Additionally, if you want to disable it on the site you’re on, you can click Disable on this site directly from the popup. From the settings page, you can choose extra things to check for such as double negatives and redundant phrases, as well as add sites and words to ignore. After the Deadline in Firefox Add the After the Deadline add-on to Firefox (link below) as normal. After the Deadline basically the same in Firefox as it does in Chrome.  Select the ABC icon in the lower right corner of textboxes to check them for problems, and After the Deadline will underline the problems as it did in Chrome.  To view a suggested change in Firefox, right-click on the underlined word and select the recommended change or ignore the suggestion. And, if you forget to check, you’ll see a friendly reminder asking if you’re sure you want to submit your text like it is. You can access the After the Deadline settings in Firefox from the menu bar.  Click Tools, then select AtD Preferences.  In Firefox, the settings are in a options dialog with three tabs, but it includes the same options as the Chrome settings page.  Here you can make After the Deadline as correction-happy as you like.   Conclusion The web has increasingly become an interactive place, and seldom does a day go by that we aren’t entering text in forms and comments that may stay online forever.  Even our insignificant tweets are being archived in the Library of Congress.  After the Deadline can help you make sure that your permanent internet record is as grammatically correct as possible.  Even though it doesn’t catch every problem, and even misses some spelling mistakes, it’s still a great help. 
Links Download the After the Deadline extension for Google Chrome Download the After the Deadline add-on for Firefox

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >