Search Results

Search found 11153 results on 447 pages for 'count zero'.

Page 415/447

  • How do I get git whatchanged to show a combined list of files that have changed?

    - by Chirag Patel
    I ran the following command, git whatchanged 7c8358e.. --oneline, and got the output below. Is there a way to generate a single combined list of files that changed across all commits? In other words, I don't want files to show up more than once in the below list. Thanks!

        4545ed7 refs #2911. error on 'caregivers_sorted_by_position' resolved in this update. it came up randomly in cucumber
        :100644 100644 d750be7... 11a0bd0... M app/controllers/reporting_controller.rb
        :100644 100644 7334d4d... e43d9e6... M app/models/user.rb
        e9b2748 refs #2911. group dropdown filters the list to only the users that belong to the selected group
        :100644 100644 fc81b9a... d750be7... M app/controllers/reporting_controller.rb
        :100644 100644 aaf2398... f19038e... M app/models/group.rb
        :100644 100644 3cc3635... 7a6b2b1... M app/views/reporting/users.html.erb
        48149c9 refs #2888 cherry pick 2888 from master into prod-temp
        :100644 100644 3663ecc... f672b62... M app/controllers/user_admin_controller.rb
        :100644 100644 aaf2398... 056ea36... M app/models/group.rb
        :100644 100644 32363ef... bc9a1f2... M app/models/role.rb
        :100644 100644 91283fa... 7334d4d... M app/models/user.rb
        :100644 100644 d6393a0... bae1bd6... M app/views/user_admin/roles.html.erb
        994550d refs #2890. all requirements included. cucumber has 1 exception in bundle_job for count of data rows. everything else green
        :100644 100644 145122d... 869a005... M app/controllers/profiles_controller.rb
        :100644 100644 f1bfa77... 2ed0850... M app/views/alerts/message.html.erb
        :100644 100644 e9f8a34... f358a74... M app/views/call_list/_item.html.erb
        :100644 000000 fda1297... 0000000... D app/views/call_list/_load_caregivers.erb
        :000000 100644 0000000... fda1297... A app/views/call_list/_load_caregivers.html.erb
        :100644 100644 168de9e... 43594f4... M app/views/call_list/show.html.erb
        :100644 100644 e178d7f... 0fe77e1... M app/views/profiles/edit_caregiver_profile.html.erb
        7396ff6 refs #2890. fixed --we're sorry-- error
        :100644 100644 d55d46d... fc81b9a... M app/controllers/reporting_controller.rb
        7c8358e refs #2897 link on online store back to http://www.halomonitoring.com
        :100644 100644 d6f94f4... 8bc9c52... M app/views/orders/new.html.erb
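
    A hedged aside, not from the original post: git can produce a combined, de-duplicated file list for a commit range directly, assuming the same 7c8358e starting point:

        # every file that differs between 7c8358e and the current HEAD, listed once
        git diff --name-only 7c8358e..HEAD

        # or keep per-commit detail and collapse duplicates afterwards
        git log 7c8358e.. --name-only --pretty=format: | sort -u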

    Read the article

  • PHP, SimpleXML, while loop

    - by Michael
    I'm trying to get some information from ebay api and store it in database . I used simple xml to extract the information but I have a small issue as the information is not displayed for some items . if I make a print to the simple_xml I can see very well that the information is provided by ebay api . I have $items = "220617293997,250645537939,230485306218,110537213815,180519294810"; $number_of_items = count(explode(",", $items)); $xml = $baseClass->getContent("http://open.api.ebay.com/shopping?callname=GetMultipleItems&responseencoding=XML&appid=Morcovar-c74b-47c0-954f-463afb69a4b3&siteid=0&version=525&IncludeSelector=ItemSpecifics&ItemID=$items"); writeDoc($xml, "api.xml"); //echo $xml; $getvalues = simplexml_load_file('api.xml'); // print_r($getvalue); $number = "0"; while($number < 6) { $item_number = $getvalues->Item[$number]->ItemID; $location = $getvalues->Item[$number]->Location; $title = $getvalues->Item[$number]->Title; $price = $getvalues->Item[$number]->ConvertedCurrentPrice; $manufacturer = $getvalues->Item[$number]->ItemSpecifics->NameValueList[3]->Value; $model = $getvalues->Item[$number]->ItemSpecifics->NameValueList[4]->Value; $mileage = $getvalues->Item[$number]->ItemSpecifics->NameValueList[5]->Value; echo "item number = $item_number <br>localtion = $location<br>". "title = $title<br>price = $price<br>manufacturer = $manufacturer". "<br>model = $model<br>mileage = $mileage<br>"; $number++; } the above code returns item number = localtion = title = price = manufacturer = model = mileage = item number = 230485306218 localtion = Coventry, Warwickshire title = 2001 LAND ROVER RANGE ROVER VOGUE AUTO GREEN price = 3635.07 manufacturer = Land Rover model = Range Rover mileage = 76000 item number = 220617293997 localtion = Crawley, West Sussex title = 2004 CITROEN C5 HDI LX RED price = 3115.77 manufacturer = Citroen model = C5 mileage = 76000 item number = 180519294810 localtion = London, London title = 2000 VOLKSWAGEN POLO 1.4 SILVER 16V NEED GEAR BOX price = 905.06 manufacturer = Right-hand drive model = mileage = Standard Car item number = localtion = title = price = manufacturer = model = mileage = As you can see the information is not retrieved for a few items ... If I replace the $number manually like " $item_number = $getvalues-Item[4]-ItemID;" works well for any number .
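
    A hedged sketch, not from the original post, assuming the same $getvalues structure: iterating the repeated Item elements with foreach avoids hard-coding a count and numeric indexes, which sidesteps the blank rows that appear when the API returns fewer items than expected or returns them in a different order:

        foreach ($getvalues->Item as $item) {
            $item_number = (string) $item->ItemID;
            $location    = (string) $item->Location;
            $title       = (string) $item->Title;
            $price       = (string) $item->ConvertedCurrentPrice;
            echo "item number = $item_number <br>location = $location<br>" .
                 "title = $title<br>price = $price<br>";
        }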

    Read the article

  • Losing reference to $_post variable?

    - by Scott B
    In the code below, the echo at the top returns true, but the echo at the bottom returns nothing. Apparently the code in between is causing me to lose a reference to the $_post variable? <?php echo "in category: ".in_category('is-sidebar', $_post); //RETURNS TRUE if (!get_option('my_hide_recent')) { $cat=get_cat_ID('top-menu'); $catHidden=get_cat_ID('hidden'); $myquery = new WP_Query(); $myquery->query(array( 'cat' => "-$cat,-$catHidden", 'post_not_in' => get_option('sticky_posts') )); $myrecentpostscount = $myquery->found_posts; if ($myrecentpostscount > 0) { ?> <div class="menu"><h4><?php if ($my_sidebar_heading_recent !=="") { echo $my_sidebar_heading_recent; } else { echo "Recent Posts";} ?></h4><ul> <?php global $post; $current_page_recent = get_post( $current_page ); $myrecentposts = get_posts(array('post_not_in' => get_option('sticky_posts'), 'cat' => "-$cat,-$catHidden",'showposts' => $my_recent_count)); foreach($myrecentposts as $idxrecent=>$post) { if($post->ID == $current_page_recent->ID) { $home_menu_recent = ' class="current_page_item'; } else { $home_menu_recent = ' class="page_item'; } $myclassrecent = ($idxrecent == count($myrecentposts) - 1 ? $home_menu_recent.' last"' : $home_menu_recent.'"'); ?> <li<?php echo $myclassrecent ?>><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></li> <?php } ; if (($myrecentpostscount > $my_recent_count) && $my_recent_count > -1){ ?><li><a href="<?php bloginfo('url'); ?>/site-map">View all</a></li><?php } ?></ul></div> <?php } } global $sitemap; echo "in category: ".in_category('is-sidebar', $_post); //RETURNS NOTHING
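
    A hedged guess, not from the original post: the foreach assigns each recent post to $post, which is also WordPress's global post object. If $_post is actually unset, in_category() falls back to that global, so after the loop it is checking the last recent post instead of the page's own post. One sketch of a fix is to save and restore the global around the custom loop:

        global $post;
        $saved_post = $post;                    // remember the post the page started with
        foreach ($myrecentposts as $idxrecent => $post) {
            // ... existing loop body unchanged ...
        }
        $post = $saved_post;                    // restore before the later in_category() check
        // after a WP_Query loop, wp_reset_postdata() achieves the same thing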

    Read the article

  • How do I add a column that displays the number of distinct rows to this query?

    - by Fake Code Monkey Rashid
    Hello good people! I don't know how to ask my question clearly so I'll just show you the money. To start with, here's a sample table: CREATE TABLE sandbox ( id integer NOT NULL, callsign text NOT NULL, this text NOT NULL, that text NOT NULL, "timestamp" timestamp with time zone DEFAULT now() NOT NULL ); CREATE SEQUENCE sandbox_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1; ALTER SEQUENCE sandbox_id_seq OWNED BY sandbox.id; SELECT pg_catalog.setval('sandbox_id_seq', 14, true); ALTER TABLE sandbox ALTER COLUMN id SET DEFAULT nextval('sandbox_id_seq'::regclass); INSERT INTO sandbox VALUES (1, 'alpha', 'foo', 'qux', '2010-12-29 16:51:09.897579+00'); INSERT INTO sandbox VALUES (2, 'alpha', 'foo', 'qux', '2010-12-29 16:51:36.108867+00'); INSERT INTO sandbox VALUES (3, 'bravo', 'bar', 'quxx', '2010-12-29 16:52:36.370507+00'); INSERT INTO sandbox VALUES (4, 'bravo', 'foo', 'quxx', '2010-12-29 16:52:47.584663+00'); INSERT INTO sandbox VALUES (5, 'charlie', 'foo', 'corge', '2010-12-29 16:53:00.742356+00'); INSERT INTO sandbox VALUES (6, 'delta', 'foo', 'qux', '2010-12-29 16:53:10.884721+00'); INSERT INTO sandbox VALUES (7, 'alpha', 'foo', 'corge', '2010-12-29 16:53:21.242904+00'); INSERT INTO sandbox VALUES (8, 'alpha', 'bar', 'corge', '2010-12-29 16:54:33.318907+00'); INSERT INTO sandbox VALUES (9, 'alpha', 'baz', 'quxx', '2010-12-29 16:54:38.727095+00'); INSERT INTO sandbox VALUES (10, 'alpha', 'bar', 'qux', '2010-12-29 16:54:46.237294+00'); INSERT INTO sandbox VALUES (11, 'alpha', 'baz', 'qux', '2010-12-29 16:54:53.891606+00'); INSERT INTO sandbox VALUES (12, 'alpha', 'baz', 'corge', '2010-12-29 16:55:39.596076+00'); INSERT INTO sandbox VALUES (13, 'alpha', 'baz', 'corge', '2010-12-29 16:55:44.834019+00'); INSERT INTO sandbox VALUES (14, 'alpha', 'foo', 'qux', '2010-12-29 16:55:52.848792+00'); ALTER TABLE ONLY sandbox ADD CONSTRAINT sandbox_pkey PRIMARY KEY (id); Here's the current SQL query I have: SELECT * FROM ( SELECT DISTINCT ON (this, that) id, this, that, timestamp FROM sandbox WHERE callsign = 'alpha' AND CAST(timestamp AS date) = '2010-12-29' ) playground ORDER BY timestamp DESC This is the result it gives me: id this that timestamp ----------------------------------------------------- 14 foo qux 2010-12-29 16:55:52.848792+00 13 baz corge 2010-12-29 16:55:44.834019+00 11 baz qux 2010-12-29 16:54:53.891606+00 10 bar qux 2010-12-29 16:54:46.237294+00 9 baz quxx 2010-12-29 16:54:38.727095+00 8 bar corge 2010-12-29 16:54:33.318907+00 7 foo corge 2010-12-29 16:53:21.242904+00 This is what I want to see: id this that timestamp count ------------------------------------------------------------- 14 foo qux 2010-12-29 16:55:52.848792+00 3 13 baz corge 2010-12-29 16:55:44.834019+00 2 11 baz qux 2010-12-29 16:54:53.891606+00 1 10 bar qux 2010-12-29 16:54:46.237294+00 1 9 baz quxx 2010-12-29 16:54:38.727095+00 1 8 bar corge 2010-12-29 16:54:33.318907+00 1 7 foo corge 2010-12-29 16:53:21.242904+00 1 EDIT: I'm using PostgreSQL 9.0.* (if that helps any).
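
    Not from the original post; a hedged sketch of one way to add the count, using a window function (available in the PostgreSQL 9.0 mentioned above). COUNT(*) OVER is evaluated per (this, that) group before DISTINCT ON keeps the newest row of each group:

        SELECT *
        FROM (
            SELECT DISTINCT ON (this, that)
                   id, this, that, "timestamp",
                   COUNT(*) OVER (PARTITION BY this, that) AS count
            FROM sandbox
            WHERE callsign = 'alpha'
              AND CAST("timestamp" AS date) = '2010-12-29'
            ORDER BY this, that, "timestamp" DESC
        ) playground
        ORDER BY "timestamp" DESC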

    Read the article

  • Understanding VS2010 C# parallel profiling results

    - by Haggai
    I have a program with many independent computations so I decided to parallelize it. I use Parallel.For/Each. The results were okay for a dual-core machine - CPU utilization of about 80%-90% most of the time. However, with a dual Xeon machine (i.e. 8 cores) I get only about 30%-40% CPU utilization, although the program spends quite a lot of time (sometimes more than 10 seconds) on the parallel sections, and I see it employs about 20-30 more threads in those sections compared to serial sections. Each thread takes more than 1 second to complete, so I see no reason for them to work in parallel - unless there is a synchronization problem. I used the built-in profiler of VS2010, and the results are strange. Even though I use locks only in one place, the profiler reports that about 85% of the program's time is spent on synchronization (also 5-7% sleep, 5-7% execution, under 1% IO). The locked code is only a cache (a dictionary) get/add: bool esn_found; lock (lock_load_esn) esn_found = cache.TryGetValue(st, out esn); if(!esn_found) { esn = pData.esa_inv_idx.esa[term_idx]; esn.populate(pData.esa_inv_idx.datafile); lock (lock_load_esn) { if (!cache.ContainsKey(st)) cache.Add(st, esn); } } lock_load_esn is a static member of the class of type Object. esn.populate reads from a file using a separate StreamReader for each thread. However, when I press the Synchronization button to see what causes the most delay, I see that the profiler reports lines which are function entrance lines, and doesn't report the locked sections themselves. It doesn't even report the function that contains the above code (reminder - the only lock in the program) as part of the blocking profile with noise level 2%. With noise level at 0% it reports all the functions of the program, which I don't understand why they count as blocking synchronizations. So my question is - what is going on here? How can it be that 85% of the time is spent on synchronization? How do I find out what really is the problem with the parallel sections of my program? Thanks.
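
    An aside rather than an answer to the profiler question: if the cache does turn out to be the contention point, .NET 4 (which VS2010 targets) ships ConcurrentDictionary. A hedged sketch, where the EsnEntry type name is an assumption standing in for the poster's actual value type:

        // requires: using System.Collections.Concurrent;
        static readonly ConcurrentDictionary<string, EsnEntry> cache =
            new ConcurrentDictionary<string, EsnEntry>();

        // GetOrAdd removes the hand-rolled locking; note the value factory can run
        // more than once under contention, so populate() must be safe to repeat
        EsnEntry esn = cache.GetOrAdd(st, key =>
        {
            var e = pData.esa_inv_idx.esa[term_idx];
            e.populate(pData.esa_inv_idx.datafile);
            return e;
        });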

    Read the article

  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios with unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution.

    What we did to demonstrate scenario 1:
    1. Set up a PC and Mac side-by-side.
    2. Log in to our server and open up to a central location on both Mac and PC.
    3. On the PC, in that central location, create a folder.
    4. On the Mac, in that central location, drag the created folder to the Mac desktop; this should work fine, no problem.
    5. On the PC, rename that folder.
    6. On the Mac, drag a file to that renamed folder; this should error with the following message:
       a. You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?
       b. Select skip; the response is that the filename is copied to the location with zero or small byte size. Try opening it and you get a "file is corrupted" error message.

    What we did to demonstrate scenario 2:
    1. Set up a PC and Mac side-by-side.
    2. Log in to our server and open up to a central location on both Mac and PC.
    3. On the PC, in that central location, create a folder, then create a subfolder.
    4. Copy some content into the subfolder.
    5. On the Mac, in that central location, drag the created top-level folder to the Mac desktop; this should work fine, no problem.
    6. On the PC, rename that subfolder.
    7. On the Mac, drag that top-level folder to the Mac desktop; this should error on the Mac with the following:
       a. The operation cannot be completed because you do not have sufficient privileges for
       b. The operation cannot be completed because you do not have sufficient privileges for
    8. On the Mac, if you open that subfolder you can see the file copied in step 4 above, but you cannot open that file; you get the following message if you try:
       a. There was an error opening this document. You do not have permission to open this file.
    9. On the PC, drag some content into the top-level folder.
    10. On the Mac, you can open that file directly from the server or copy it locally, no problem; however, the subfolder is still corrupted or locked, whichever.
    11. On the PC, rename the top-level folder.
    12. On the Mac, that same file just opened in step 10 above is now not accessible; you get the following message:
        a. The document could not be opened.

    I have observed some variances in the above. For instance, a change on the PC side may take a moment before you can observe or act on the Mac side - kind of like the server is slow to respond. Also, the error message may vary. However, the key is that once a folder, or subfolder, gets renamed by a PC, Mac problems commence. The solution is to create a new folder from a PC and copy the contents of the corrupted folder to the new folder, and not rename the folder. This has to be done on a PC because the corrupted folder is not accessible by a Mac user.

    Another problem that dovetails with the above is that we know certain characters are not allowed for PC folder or filenames. If a Mac user creates a folder with a slash in the file name, from the PC the user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked out of that folder and will get the following error message:
    - Sorry, the operation could not be completed because an unexpected error occurred. (Error code - 50)

    In addition to the above-mentioned character issue with folders, the problem is more evil with filenames. If, for example, you create a file with a slash in the filename on a Mac and copy it to the server, you will get the following error message:
    - You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?
    Select either the Stop or Skip button. It does not matter which button is selected. The file name gets copied to the destination location at a reduced size. Depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you will get the following message:
    - Couldn't open the file. It may be corrupt or a file format that doesn't recognize.
    From the user's perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file has copied as intended. Only later do they discover the file is corrupt when they open that file.

    I want to make a note on this problem: it is the PC causing the issue. You can change folder and file names all day on a Mac and you don't have a problem as long as a character is not the issue. Once you change the file name or folder name from a PC, the entire folder structure from that level down is corrupted, and it has to be resolved from a PC by creating a new folder and copying the contents to the new folder, as stated above. Is something not configured correctly?

    SUSE Linux Enterprise Server 10 (x86_64) VERSION = 10 PATCHLEVEL = 2 LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64"
    Novell Open Enterprise Server 2.0.1 (x86_64) VERSION = 2.0.1 PATCHLEVEL = 1 BUILD
    Note: We use Novell clients on all Windows systems to connect to the servers for file access and network storage. We use AFP to allow OS X systems to connect to the servers.

    Read the article

  • How to dispose of a .NET COM interop object on Release()

    - by mhenry1384
    I have a COM object written in managed code (C++/CLI). I am using that object in standard C++. How do I force my COM object's destructor to be called immediately when the COM object is released? If that's not possible, call I have Release() call a MyDispose() method on my COM object? My code to declare the object (C++/CLI): [Guid("57ED5388-blahblah")] [InterfaceType(ComInterfaceType::InterfaceIsIDispatch)] [ComVisible(true)] public interface class IFoo { void Doit(); }; [Guid("417E5293-blahblah")] [ClassInterface(ClassInterfaceType::None)] [ComVisible(true)] public ref class Foo : IFoo { public: void MyDispose(); ~Foo() {MyDispose();} // This is never called !Foo() {MyDispose();} // This is called by the garbage collector. virtual ULONG Release() {MyDispose();} // This is never called virtual void Doit(); }; My code to use the object (native C++): #import "..\\Debug\\Foo.tlb" ... Bar::IFoo setup(__uuidof(Bar::Foo)); // This object comes from the .tlb. setup.Doit(); setup-Release(); // explicit release, not really necessary since Bar::IFoo's destructor will call Release(). If I put a destructor method on my COM object, it is never called. If I put a finalizer method, it is called when the garbage collector gets around to it. If I explicitly call my Release() override it is never called. I would really like it so that when my native Bar::IFoo object goes out of scope it automatically calls my .NET object's dispose code. I would think I could do it by overriding the Release(), and if the object count = 0 then call MyDispose(). But apparently I'm not overriding Release() correctly because my Release() method is never called. Obviously, I can make this happen by putting my MyDispose() method in the interface and requiring the people using my object to call MyDispose() before Release(), but it would be slicker if Release() just cleaned up the object. Is it possible to force the .NET COM object's destructor, or some other method, to be called immediately when a COM object is released? Googling on this issue gets me a lot of hits telling me to call System.Runtime.InteropServices.Marshal.ReleaseComObject(), but of course, that's how you tell .NET to release a COM object. I want COM Release() to Dispose of a .NET object.

    Read the article

  • DataTable Filter mystery

    - by user283897
    Hi, could you please help me find the reason of the mystery I've found? In the below code, I create a DataTable and filter it. When I use filter1, everything works as expected. When I use filter2, everything works as expected only if the SubsectionAmount variable is less than 10. As soon as I set SubsectionAmount=10, the dr2 array returns Nothing. I can't find what is wrong. Here is the code: Imports System.Data Partial Class FilterTest Inherits System.Web.UI.Page Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Call FilterTable() End Sub Sub FilterTable() Dim dtSubsections As New DataTable Dim SectionID As Integer, SubsectionID As Integer Dim SubsectionAmount As Integer Dim filter1 As String, filter2 As String Dim rowID As Integer Dim dr1() As DataRow, dr2() As DataRow With dtSubsections .Columns.Add("Section") .Columns.Add("Subsection") .Columns.Add("FieldString") SectionID = 1 SubsectionAmount = 10 '9 For SubsectionID = 1 To SubsectionAmount .Rows.Add(SectionID, SubsectionID, "abcd" & CStr(SubsectionID)) Next SubsectionID For rowID = 0 To .Rows.Count - 1 Response.Write(.Rows(rowID).Item(0).ToString & " " _ & .Rows(rowID).Item(1).ToString & " " _ & .Rows(rowID).Item(2).ToString & "<BR>") Next SubsectionID = 1 filter1 = "Section=" & SectionID & " AND " & "Subsection=" & SubsectionID filter2 = "Section=" & SectionID & " AND " & "Subsection=" & SubsectionID + 1 dr1 = .Select(filter1) dr2 = .Select(filter2) Response.Write(dr1.Length & "<BR>") Response.Write(dr2.Length & "<BR>") If dr1.Length > 0 Then Response.Write(dr1(0).Item("FieldString").ToString & "<BR>") End If If dr2.Length > 0 Then Response.Write(dr2(0).Item("FieldString").ToString & "<BR>") End If End With End Sub End Class
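
    A hedged aside, not from the original code: Columns.Add with only a name creates String columns, so Select() compares "Section" and "Subsection" as text rather than as numbers. Declaring the numeric columns with explicit types is a cheap way to rule that out as the source of the surprise:

        ' typed columns, so filter expressions compare numbers instead of strings
        .Columns.Add("Section", GetType(Integer))
        .Columns.Add("Subsection", GetType(Integer))
        .Columns.Add("FieldString", GetType(String))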

    Read the article

  • random, Graphics point ,searching- algorithm, via dual for loop set

    - by LoneXcoder
    hello and thanks for joining me in my journey to the custom made algorithm for "guess where the pixel is" this for Loop set (over Point.X, Point.Y), is formed in consecutive/linear form: //Original\initial Location Point initPoint = new Point(150, 100); // No' of pixels to search left off X , and above Y int preXsrchDepth, preYsrchDepth; // No' of pixels to search to the right of X, And Above Y int postXsrchDepth, postYsrchDepth; preXsrchDepth = 10; // will start search at 10 pixels to the left from original X preYsrchDepth = 10; // will start search at 10 pixels above the original Y postXsrchDepth = 10; // will stop search at 10 pixels to the right from X postYsrchDepth = 10; // will stop search at 10 pixels below Y int StopXsearch = initPoint.X + postXsrchDepth; //stops X Loop itarations at initial pointX + depth requested to serch right of it int StopYsearch = initPoint.Y + postYsrchDepth; //stops Y Loop itarations at initial pointY + depth requested below original location int CountDownX, CountDownY; // Optional not requierd for loop but will reports the count down how many iterations left (unless break; triggerd ..uppon success) Point SearchFromPoint = Point.Empty; //the point will be used for (int StartX = initPoint.X - preXsrchDepth; StartX < StopXsearch; StartX++) { SearchFromPoint.X = StartX; for (int StartY = initPoint.Y - preYsrchDepth; StartY < StpY; StartY++) { CountDownX = (initPoint.X - StartX); CountDownY=(initPoint.Y - StartY); SearchFromPoint.Y = StartY; if (SearchSuccess) { same = true; AAdToAppLog("Search Report For: " + imgName + "Search Completed Successfully On Try " + CountDownX + ":" + CountDownY); break; } } } <-10 ---- -5--- -1 X +1--- +5---- +10 what i would like to do is try a way of instead is have a little more clever approach <+8---+5-- -8 -5 -- +2 +10 X -2 - -10 -8-- -6 ---1- -3 | +8 | -10 Y +1 -6 | | +9 .... I do know there's a wheel already invented in this field (even a full-trailer truck amount of wheels (: ) but as a new programmer, I really wanted to start of with a simple way and also related to my field of interest in my project. can anybody show an idea of his, he learnt along the way to Professionalism in algorithm /programming having tests to do on few approaches (kind'a random cleverness...) will absolutely make the day and perhaps help some others viewing this page in the future to come it will be much easier for me to understand if you could use as much as possible similar naming to variables i used or implenet your code example ...it will be Greatly appreciated if used with my code sample, unless my metod is a realy flavorless. p.s i think that(atleast as human being) the tricky part is when throwing inconsecutive numbers you loose track of what you didn't yet use, how do u take care of this too . thanks allot in advance looking forward to your participation !

    Read the article

  • SQL Server 2005 - query with case statement

    - by user329266
    Trying to put a single query together to be used eventually in a SQL Server 2005 report. I need to: Pull in all distinct records for values in the "eventid" column for a time frame - this seems to work. For each eventid referenced above, I need to search for all instances of the same eventid to see if there is another record with TaskName like 'review1%'. Again, this seems to work. This is where things get complicated: For each record where TaskName is like review1, I need to see if another record exists with the same eventid and where TaskName='End'. Utimately, I need a count of how many records have TaskName like 'review1%', and then how many have TaskName like 'review1%' AND TaskName='End'. I would think this could be accomplished by setting a new value for each record, and for the eventid, if a record exists with TaskName='End', set to 1, and if not, set to 0. The query below seems to accomplish item #1 above: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000'))) AS T WHERE seq = 1 order by eventid And the query below seems to accomplish #2: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 order by eventid This will bring back the eventid's that also have a TaskName='End': SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 and eventid in (Select eventid from eventrecords where TaskName = 'End') order by eventid So I've tried the following to TRY to accomplish #3: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 and case when (eventid in (Select eventid from eventrecords where TaskName = 'End') then 1 else 0) as bit end order by eventid When I try to run this, I get: "Incorrect syntax near the keyword 'then'." Not sure what I'm doing wrong. Haven't seen any examples anywhere quite like this. I should mention that eventrecords has a primary key, but it doesn't seem to help anything when I include it, and I am not permitted to change the table. (ugh) I've received one suggestion to use a cursor and temporary table, but am not sure how badley that would bog down performance when the report is running. Thanks in advance.
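
    Not from the original post: the syntax error in the last query comes from the stray parenthesis and the "as bit" inside the CASE. A hedged sketch of that WHERE clause in valid form, although if the goal is only to keep rows that also have an 'End' record the CASE can be dropped entirely:

        WHERE seq = 1
          AND CASE WHEN eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')
                   THEN 1 ELSE 0 END = 1
        -- equivalent, without the CASE:
        -- WHERE seq = 1
        --   AND eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')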

    Read the article

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array populated with various labels. Each element (cell) in the double array contains a set of labels and some elements in the double array may be empty. I need an algorithm to cluster elements in the double array into discrete segments. A segment is defined as a set of pixels that are adjacent within the double array and one label that all those pixels in the segment have in common. (Diagonal adjacency doesn't count and I'm not clustering empty cells). |-------|-------|------| | Jane | Joe | | | Jack | Jane | | |-------|-------|------| | Jane | Jane | | | | Joe | | |-------|-------|------| | | Jack | Jane | | | Joe | | |-------|-------|------| In the above arrangement of labels distributed over nine elements, the largest cluster is the “Jane” cluster occupying the four upper left cells. What I've Considered: I've considered iterating through every label of every cell in the double array and testing to see if the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system. I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the double array. This hash method would avoid having to iterate through every pixel in every preexisting segment to find an adjacency. Why I Don't Like it: As is, the above algorithm doesn't take into consideration the case where an element in the double array can be associated with two unique segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then implement a method that will both associate the element under inspection with a segment and then concatenate the two adjacent identical segments. On the whole, this method and the intricate hashing system that it would require feels very inelegant. Additionally, I really only care about finding the large segments in the double array and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
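
    Not from the original post: what is described is connected-component labelling, with one component per (cell, label) pair. A hedged Python sketch of a flood-fill version, assuming the double array is a list of lists of label sets; it runs in time proportional to cells times labels rather than requiring any segment-merging bookkeeping:

        from collections import deque

        def segments(grid):
            """grid[r][c] is a set of labels; returns (label, cells) 4-connected segments."""
            rows, cols = len(grid), len(grid[0])
            seen = set()                       # (row, col, label) triples already assigned
            out = []
            for r in range(rows):
                for c in range(cols):
                    for label in grid[r][c]:
                        if (r, c, label) in seen:
                            continue
                        cells, queue = [], deque([(r, c)])
                        seen.add((r, c, label))
                        while queue:
                            cr, cc = queue.popleft()
                            cells.append((cr, cc))
                            for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                                if (0 <= nr < rows and 0 <= nc < cols
                                        and label in grid[nr][nc]
                                        and (nr, nc, label) not in seen):
                                    seen.add((nr, nc, label))
                                    queue.append((nr, nc))
                        out.append((label, cells))
            return out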

    Read the article

  • How to limit results in a SharePoint XSL query

    - by David
    Hello all, I am creating a SharePoint site that we will use to report issues with trucks used in our business. Linked to the list I have created will be a page that will display an overview of the trucks and a little truck icon will show the trucks current status. Green and the truck is okay (no open issues), Red and the truck have an open issue with status "Undrivable", Orange and there is two issues open that requires the user to look further into the truck before using it and finally a Gray truck for when there is a new issue created that has not been looked into (not sure if it is drivable or not). I have managed to create the "Dashboard" and with my limit XSL/XPATH knowledge been able to add a truck and replicate the description above but... in my test I have created 4 issues, for example if three of them are changed to status Closed and one left to Undrivable I will get four icons on the page, three with Green trucks and the last one Red. So in theory it works but I obviously only want to see the last truck, one truck. I am not interested in seeing the others. <xsl:template name="dvt_1.rowview"> <xsl:variable name="CountReport" select="count(/dsQueryResponse/Rows/Row[@Highloader='GGEU12' and @Status!='Closed'])" /> <xsl:variable name="MoreThan" select="$CountReport &gt; 1" /> <xsl:variable name="NoReports" select="$CountReport = 0" /> <xsl:variable name="Closed" select=" @Highloader='GGEU12' and @Status='Closed'" /> <xsl:choose> <xsl:when test="$MoreThan"> <div class="ms-vb"><img title='More than one report exist!' border='0' alt='In Progress' src='highloader/Library/hl-orange.png' /></div> </xsl:when> <xsl:otherwise> <div class="ms-vb"><xsl:value-of disable-output-escaping="yes" select="@Icon" /></div> </xsl:otherwise> </xsl:choose> </xsl:template> My hope is that someone with slightly more knowledge can find the last piece of the puzzle for me! Thanks for reading and asking questions to fill any gap I left above. David
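
    A hedged guess, not from the original code: dvt_1.rowview is invoked once per row, so one truck icon is emitted per matching issue. Since xsl:call-template does not change the current node list, testing position() inside the template and rendering only for the first row is one way to collapse the output to a single icon per query:

        <xsl:template name="dvt_1.rowview">
          <xsl:if test="position() = 1">
            <!-- existing xsl:variable declarations and xsl:choose block go here unchanged -->
          </xsl:if>
        </xsl:template>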

    Read the article

  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores as set of the database backups. It takes two parameters - a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue - if I uncomment the try-catch statements, the procedure terminates with the following error: error_number = 3013 error_severity = 16 error_state = 1 error_message = DATABASE is terminating abnormally. The weird part is sometimes (it is not consistent) the restore is done even if the error occurs. The procedure: create proc usp_restore_databases ( @source_directory varchar(1000), @restore_directory varchar(1000) ) as begin declare @number_of_backup_files int -- begin transaction -- begin try -- step 0: Initial validation if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\' if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\' -- step 1: Put all the backup files in the specified directory in a table -- declare @backup_files table ( file_path varchar(1000)) declare @dos_command varchar(1000) set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b' /* DEBUG */ print @dos_command insert into @backup_files(file_path) exec xp_cmdshell @dos_command delete from @backup_files where file_path IS NULL select @number_of_backup_files = count(1) from @backup_files /* DEBUG */ select * from @backup_files /* DEBUG */ print @number_of_backup_files -- step 2: restore each backup file -- declare backup_file_cursor cursor for select file_path from @backup_files open backup_file_cursor declare @index int; set @index = 0 while(@index < @number_of_backup_files) begin declare @backup_file_path varchar(1000) fetch next from backup_file_cursor into @backup_file_path /* DEBUG */ print @backup_file_path -- step 2a: parse the full backup file name to get the DB file name. 
declare @db_name varchar(100) set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) -1) -- still has the .bak extension /* DEBUG */ print @db_name set @db_name = left(@db_name, charindex('.', @db_name) -1) /* DEBUG */ print @db_name set @db_name = lower(@db_name) /* DEBUG */ print @db_name -- step 2b: find out the logical names of the mdf and ldf files declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100) declare @backup_file_contents table ( LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1), FileGroupName nvarchar(128), [Size] numeric(20,0), [MaxSize] numeric(20,0), FileID bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0) NULL, UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0) NULL, ReadWriteLSN numeric(25,0) NULL, BackupSizeInBytes bigint, SourceBlockSize int, FileGroupID int, LogGroupGUID uniqueidentifier NULL, DifferentialBaseLSN numeric(25,0) NULL, DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit ) insert into @backup_file_contents exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''') select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D' select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L' /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name -- step 2c: restore declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000) set @mdf_file_name = @restore_directory + @db_name + '.mdf' set @ldf_file_name = @restore_directory + @db_name + '.ldf' /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' + 'ldf_logical_name = ' + @ldf_logical_name + '|' + 'db_name = ' + @db_name + '|' + 'backup_file_path = ' + @backup_file_path + '|' + 'restore_directory = ' + @restore_directory + '|' + 'mdf_file_name = ' + @mdf_file_name + '|' + 'ldf_file_name = ' + @ldf_file_name restore database @db_name from disk = @backup_file_path with move @mdf_logical_name to @mdf_file_name, move @ldf_logical_name to @ldf_file_name -- step 2d: iterate set @index = @index + 1 end close backup_file_cursor deallocate backup_file_cursor -- end try -- begin catch -- print error_message() -- rollback transaction -- return -- end catch -- -- commit transaction end Does anybody have any ideas why this might be happening? Another question: is the transaction code useful ? i.e., if there are 2 databases to be restored, will SQL Server undo the restore of one database if the second restore fails?

    Read the article

  • Fibre channel long distance woes

    - by Marki
    I need a fresh pair of eyes. We're using a 15km fibre optic line across which fibrechannel and 10GbE is multiplexed (passive optical CWDM). For FC we have long distance lasers suitable up to 40km (Skylane SFCxx0404F0D). The multiplexer is limited by the SFPs which can do max. 4Gb fibrechannel. The FC switch is a Brocade 5000 series. The respective wavelengths are 1550,1570,1590 and 1610nm for FC and 1530nm for 10GbE. The problem is the 4GbFC fabrics are almost never clean. Sometimes they are for a while even with a lot of traffic on them. Then they may suddenly start producing errors (RX CRC, RX encoding, RX disparity, ...) even with only marginal traffic on them. I am attaching some error and traffic graphs. Errors are currently in the order of 50-100 errors per 5 minutes when with 1Gb/s traffic. Optics Here is the power output of one port summarized (collected using sfpshow on different switches) SITE-A units=uW (microwatt) SITE-B ********************************************** FAB1 SW1 TX 1234.3 RX 49.1 SW3 1550nm (ko) RX 95.2 TX 1175.6 FAB2 SW2 TX 1422.0 RX 104.6 SW4 1610nm (ok) RX 54.3 TX 1468.4 What I find curious at this point is the asymmetry in the power levels. While SW2 transmits with 1422uW which SW4 receives with 104uW, SW2 only receives the SW4 signal with similar original power only with 54uW. Vice versa for SW1-3. Anyway the SFPs have RX sensitivity down to -18dBm (ca. 20uW) so in any case it should be fine... But nothing is. Some SFPs have been diagnosed as malfunctioning by the manufacturer (the 1550nm ones shown above with "ko"). The 1610nm ones apparently are ok, they have been tested using a traffic generator. The leased line has also been tested more than once. All is within tolerances. I'm awaiting the replacements but for some reason I don't believe it will make things better as the apparently good ones don't produce ZERO errors either. Earlier there was active equipment involved (some kind of 4GFC retimer) before putting the signal on the line. No idea why. That equipment was eliminated because of the problems so we now only have: the long distance laser in the switch, (new) 10m LC-SC monomode cable to the mux (for each fabric), the leased line, the same thing but reversed on the other side of the link. FC switches Here is a port config from the Brocade portcfgshow (it's like that on both sides, obviously) Area Number: 0 Speed Level: 4G Fill Word(On Active) 0(Idle-Idle) Fill Word(Current) 0(Idle-Idle) AL_PA Offset 13: OFF Trunk Port ON Long Distance LS VC Link Init OFF Desired Distance 32 Km Reserved Buffers 70 Locked L_Port OFF Locked G_Port OFF Disabled E_Port OFF Locked E_Port OFF ISL R_RDY Mode OFF RSCN Suppressed OFF Persistent Disable OFF LOS TOV enable OFF NPIV capability ON QOS E_Port OFF Port Auto Disable: OFF Rate Limit OFF EX Port OFF Mirror Port OFF Credit Recovery ON F_Port Buffers OFF Fault Delay: 0(R_A_TOV) NPIV PP Limit: 126 CSCTL mode: OFF Forcing the links to 2GbFC produces no errors, but we bought 4GbFC and we want 4GbFC. I don't know where to look anymore. Any ideas what to try next or how to proceed? If we can't make 4GbFC work reliably I wonder what the people working with 8 or 16 do... I don't assume that "a few errors here and there" are acceptable. Oh and BTW we are in contact with everyone of the manufacturers (FC switch, MUX, SFPs, ...) Except for the SFPs to be changed (some have been changed before) nobody has a clue. Brocade SAN Health says the fabric is ok. MUX, well, it's passive, it's only a prism, nature at it's best. 
Any shots in the dark? APPENDIX: Answers to your questions @Chopper3: This is the second generation of Brocades exhibiting the problem. Before we had 5000s, now we have 5100s. In the beginning when we still had the active MUX we rented a longdistance laser once to put it into the switch directly in order to make tests for a day, during that day of course it was clean. But as I said, sometimes it's clean just like that. And sometimes it's not. Alternative switches would mean to rebuild the entire SAN with those only to test. Alternative SFPs, well they're hard to come by just like that. @longneck: The line is rented. It's a dark fibre (9um monomode) so there's noone else on it. Sure there are splices. I can't go and look but I have to trust they have been done correctly. As I said the line has been checked and rechecked (using an optical time-domain reflectometer). Obviously you don't have all this equipment yourself because it's way too expensive. @mdpc: What would be the "wrong" type of cable according to you? Up to the switch everything is monomode, yes. The connectors are the correct ones too. Yeah I know there are the green ones where the fibre is cut off at a certain angle etc. But we have the correct ones for all that I know. Progress Report #1 We have had two fabrics (=2x2 switches) with Brocade 5100s with FabricOS 6.4.1 and two fabrics (another 2x4 switches) on FabricOS 7.0.2. On the longdistance ISLs (one in each fabric) it turned out that with FOS 6.4.1 setting it to long distance issues warnings about the VC Init setting and consequently the fill word. But those are only warnings. FOS 7.0.2 requires you to do modifications to VCI and the fillword for long distance links. Setting FOS 6.4.1 to the LS (long-distance static distance) setting with wrong VCI and fillword setting made the whole fabric inoperational (stuck in an SCN loop, use fabriclog -s to see, you don't see it anywhere else, no port error counters or anything increasing). Currently I'm giving the one fabric with the IMHO more correct settings a beating and it seems to do fine, whereas the other one without much traffic still has errors here and there. In short: We have eliminated the active part of the MUX (the FC retimer). We are putting the long distance SFPs into the end equipment themselves. Just to be sure we bought new monomode cables to connect the end equipment to the remaining passive part of the MUX. We are now trying out several long distance configs. It's almost black magic. Everything that happens is mostly empirical, noone seems to have a clue what are the exact reasons to do something. ("We have tried this, and it didn't work, then we tried that and it worked, so we stuck with that." But noone really seems to know why.) I'll keep you updated. Progress Report #2 We got the new lasers for one of the fabrics on warranty. It's ultra clean even on 4GbFC. They're transmitting with roughly 2mW (3dBm) whereas the others are only at 1.5mW (1.5dBm) although that should really be enough. The other fabric (where the lasers are apparently ok) still produces one or two CRCs infrequently. Using sfpshow the SFP producing the actual RX errors shows Status/Ctrl: 0x82 Alarm flags[0,1] = 0x5, 0x40 Warn Flags[0,1] = 0x5, 0x40 Now I'll have to find out what that means. Not sure if it was there before. Well I'll first clear my head with a week of vacation. 8-)

    Read the article

  • In a DDD approach, is this example modelled correctly?

    - by Tag
    Just created an acc on SO to ask this :) Assuming this simplified example: building a web application to manage projects... The application has the following requirements/rules. 1) Users should be able to create projects inserting the project name. 2) Project names cannot be empty. 3) Two projects can't have the same name. I'm using a 4-layered architecture (User Interface, Application, Domain, Infrastructure). On my Application Layer i have the following ProjectService.cs class: public class ProjectService { private IProjectRepository ProjectRepo { get; set; } public ProjectService(IProjectRepository projectRepo) { ProjectRepo = projectRepo; } public void CreateNewProject(string name) { IList<Project> projects = ProjectRepo.GetProjectsByName(name); if (projects.Count > 0) throw new Exception("Project name already exists."); Project project = new Project(name); ProjectRepo.InsertProject(project); } } On my Domain Layer, i have the Project.cs class and the IProjectRepository.cs interface: public class Project { public int ProjectID { get; private set; } public string Name { get; private set; } public Project(string name) { ValidateName(name); Name = name; } private void ValidateName(string name) { if (name == null || name.Equals(string.Empty)) { throw new Exception("Project name cannot be empty or null."); } } } public interface IProjectRepository { void InsertProject(Project project); IList<Project> GetProjectsByName(string projectName); } On my Infrastructure layer, i have the implementation of IProjectRepository which does the actual querying (the code is irrelevant). I don't like two things about this design: 1) I've read that the repository interfaces should be a part of the domain but the implementations should not. That makes no sense to me since i think the domain shouldn't call the repository methods (persistence ignorance), that should be a responsability of the services in the application layer. (Something tells me i'm terribly wrong.) 2) The process of creating a new project involves two validations (not null and not duplicate). In my design above, those two validations are scattered in two different places making it harder (imho) to see whats going on. So, my question is, from a DDD perspective, is this modelled correctly or would you do it in a different way?

    Read the article

  • Python - Converting CSV to Objects - Code Design

    - by victorhooi
    Hi, I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data. We read in the data (import_gd_dump), and create an Employees object, containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on Employees, which calls clean_phone_number() on each Employee, as well as lookup_all_supervisors(), on Employees. import csv import re import sys #class CSVLoader: # """Virtual class to assist with loading in CSV files.""" # def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'): # gd_extract = csv.DictReader(open(input_file), dialect='excel') # employees = [] # for row in gd_extract: # curr_employee = Employee(row) # employees.append(curr_employee) # return employees # #self.employees = {row['dbdirid']:row for row in gd_extract} # Previously, this was inside a (virtual) class called "CSVLoader". # However, according to here (http://tomayko.com/writings/the-static-method-thing) - the idiomatic way of doing this in Python is not with a class-fucntion but with a module-level function def import_gd_dump(input_file='Gp Directory 20100331 original.csv'): """Return a list ('employee') of dict objects, taken from a Group Directory CSV file.""" gd_extract = csv.DictReader(open(input_file), dialect='excel') employees = [] for row in gd_extract: employees.append(row) return employees def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"): """Read in an Employees() object, and write out each Employee() inside this to a CSV file""" gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid', 'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail', 'SupervisorFirstName', 'SupervisorSurname') try: gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''), fieldnames=gd_output_fieldnames, extrasaction='ignore', dialect='excel') except IOError: print('Unable to open file, IO error (Is it locked?)') sys.exit(1) headers = {n:n for n in gd_output_fieldnames} gd_formatted.writerow(headers) for employee in employees_dict.employee_list: # We're using the employee object's inbuilt __dict__ attribute - hmm, is this good practice? gd_formatted.writerow(employee.__dict__) class Employee: """An Employee in the system, with employee attributes (name, email, cost-centre etc.)""" def __init__(self, employee_attributes): """We use the Employee constructor to convert a dictionary into instance attributes.""" for k, v in employee_attributes.items(): setattr(self, k, v) def clean_phone_number(self): """Perform some rudimentary checks and corrections, to make sure numbers are in the right format. Numbers should be in the form 0XYYYYYYYY, where X is the area code, and Y is the local number.""" if self.telephoneNumber is None or self.telephoneNumber == '': return '', 'Missing phone number.' 
else: standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})') extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})') missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})') if standard_format.search(self.telephoneNumber): result = standard_format.search(self.telephoneNumber) return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), '' elif extra_zero.search(self.telephoneNumber): result = extra_zero.search(self.telephoneNumber) return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate. ' elif missing_hyphen.search(self.telephoneNumber): result = missing_hyphen.search(self.telephoneNumber) return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate. ' else: return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber class Employees: def __init__(self, import_list): self.employee_list = [] for employee in import_list: self.employee_list.append(Employee(employee)) def clean_all_phone_numbers(self): for employee in self.employee_list: #Should we just set this directly in Employee.clean_phone_number() instead? employee.PHFull, employee.PHFull_message = employee.clean_phone_number() # Hmm, the search is O(n^2) - there's probably a better way of doing this search? def lookup_all_supervisors(self): for employee in self.employee_list: if employee.hrreportsto is not None and employee.hrreportsto != '': for supervisor in self.employee_list: if supervisor.hrid == employee.hrreportsto: (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn break else: (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.') else: (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.') #Is thre a more pythonic way of doing this? def print_employees(self): for employee in self.employee_list: print(employee.__dict__) if __name__ == '__main__': db_employees = Employees(import_gd_dump()) db_employees.clean_all_phone_numbers() db_employees.lookup_all_supervisors() #db_employees.print_employees() write_gd_formatted(db_employees) Firstly, my preamble question is, can you see anything inherently wrong with the above, from either a class design or Python point-of-view? Is the logic/design sound? Anyhow, to the specifics: The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad? Originally, I wrapped the clean_phone_number() and lookup_supervisor() method in a single function, with a single for-loop inside it. clean_phone_number is O(n), I believe, lookup_supervisor is O(n^2) - is it ok splitting it into two loops like this? 
In clean_all_phone_numbers(), I'm looping on the Employee objects, and settings their values using return/assignment - should I be setting this inside clean_phone_number() itself? There's also a few things that I'm sorted of hacked out, not sure if they're bad practice - e.g. print_employee() and gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes. I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost as several split up (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less fairly tightly related). Cheers, Victor
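
    One hedged sketch, not from the original code, for the O(n^2) concern flagged inside lookup_all_supervisors: build a dict keyed by hrid once, then every supervisor lookup is a constant-time get. Attribute names are taken from the code above; the not-set/not-found distinction is collapsed here for brevity:

        def lookup_all_supervisors(self):
            by_hrid = {e.hrid: e for e in self.employee_list}
            for employee in self.employee_list:
                supervisor = by_hrid.get(employee.hrreportsto) if employee.hrreportsto else None
                if supervisor is not None:
                    employee.SupervisorEmail = supervisor.mail
                    employee.SupervisorFirstName = supervisor.givenName
                    employee.SupervisorSurname = supervisor.sn
                else:
                    employee.SupervisorEmail = 'Supervisor not found.'
                    employee.SupervisorFirstName = 'Supervisor not found.'
                    employee.SupervisorSurname = 'Supervisor not found.'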

    Read the article

  • Objects not loading on second request in Restkit

    - by Holger Edward Wardlow Sindbæk
    I'm sending off 2 requests simultaneously with Restkit and I receive a response back from both of them, but only one of the requests receives any objects. If I send them off one by one, then my objectloader receives all objects. First request: self.firstObjectManager = [RKObjectManager sharedManager]; [self.firstObjectManager loadObjectsAtResourcePath:[NSString stringWithFormat: @"/%@.json", subUrl] usingBlock:^(RKObjectLoader *loader){ [loader.mappingProvider setObjectMapping:designArrayMapping forKeyPath:@""]; loader.userData = @"design"; loader.delegate = self; [loader sendAsynchronously]; }]; Second request: self.secondObjectManager = [RKObjectManager sharedManager]; [self.secondObjectManager loadObjectsAtResourcePath:[NSString stringWithFormat: @"/%@.json", subUrl] usingBlock:^(RKObjectLoader *loader){ [loader.mappingProvider setObjectMapping:designerMapping forKeyPath:@""]; loader.userData = @"designer"; loader.delegate = self; [loader sendAsynchronously]; }]; My objecloader: -(void)objectLoader:(RKObjectLoader *)objectLoader didLoadObjects:(NSArray *)objects { //NSLog(@"This happened: %@", objectLoader.userData); if (objectLoader.userData == @"design") { NSLog(@"Design happened: %i", objects.count); }else if(objectLoader.userData == @"designer"){ NSLog(@"designer: %@", [objects objectAtIndex:0]); } } My response: 2012-11-18 14:36:19.607 RestKitTest5[14220:c07] Started loading of request: designer 2012-11-18 14:36:22.575 RestKitTest5[14220:c07] I restkit.network:RKRequest.m:680:-[RKRequest updateInternalCacheDate] Updating cache date for request <RKObjectLoader: 0x95c3160> to 2012-11-18 19:36:22 +0000 2012-11-18 14:36:22.576 RestKitTest5[14220:c07] response code: 200 2012-11-18 14:36:22.584 RestKitTest5[14220:c07] Design happened: 0 2012-11-18 14:36:22.603 RestKitTest5[14220:c07] I restkit.network:RKRequest.m:680:-[RKRequest updateInternalCacheDate] Updating cache date for request <RKObjectLoader: 0xa589b50> to 2012-11-18 19:36:22 +0000 2012-11-18 14:36:22.605 RestKitTest5[14220:c07] response code: 200 2012-11-18 14:36:22.606 RestKitTest5[14220:c07] designer: <DesignerData: 0xa269fc0> Update: Setting my base url RKURL *baseURL = [RKURL URLWithBaseURLString:@"http://iphone.meer.li"]; [RKObjectManager setSharedManager:[RKObjectManager managerWithBaseURL:baseURL]]; Solution Problem was that I used the shared manager for both object managers, so I ended up doing: RKURL *baseURL = [RKURL URLWithBaseURLString:@"http://iphone.meer.li"]; RKObjectManager *myObjectManager = [[RKObjectManager alloc] initWithBaseURL:baseURL]; self.firstObjectManager = myObjectManager; and: RKURL *baseURL = [RKURL URLWithBaseURLString:@"http://iphone.meer.li"]; RKObjectManager *myObjectManager = [[RKObjectManager alloc] initWithBaseURL:baseURL]; self.secondObjectManager = myObjectManager;

    Read the article

  • php page navigation by serial number

    - by ilnur777
    Can anyone help in this php page navigation script switch on counting normal serial number? In this script there is a var called "page_id" - I want this var to store the real page link by order like 0, 1, 2, 3, 4, 5 ... $records = 34; // total records $pagerecord = 10; // count records to display per page if($records<=$pagerecord) return; $imax = (int)($records/$pagerecord); if ($records%$pagerecord>0)$imax=$imax+1; if($activepage == ''){ $for_start=$imax; $activepage = $imax-1; } $next = $activepage - 1; if ($next<0){$next=0;} $prev = $activepage + 1; if ($prev>=$imax){$prev=$imax-1;} $end = 0; $start = $imax; if($activepage >= 0){ $for_start = $activepage + $rad + 1; if($for_start<$rad*2+1)$for_start = $rad*2+1; if($for_start>=$imax){ $for_start=$imax; } } if($activepage < $imax-1){ $str .= ' <a href="?domain='.$domain_name.'&page='.($start-1).'&page_id=xxx"><<< End</a> <a href="?domain='.$domain_name.'&page='.$prev.'&page_id=xxx">< Forward</a> '; } $meter = $rad*2+1; for($i=$for_start-1; $i>-1; $i--){ $meter--; $line = ''; if ($i>0)$line = ""; if($i<>$activepage){ $str .= "<a href='?domain=".$domain_name."&page=".$i."&page_id=xxx'>".($i)."</a> ".$line." "; } else { $str .= " <b class='current_page'>".($i)."</b> ".$line." "; } if($meter=='0'){ break; } } if($activepage > 0){ $str .= " <a href='?domain=".$domain_name."&page=".$next."&page_id=xxx'>Back ></a> <a href='?domain=".$domain_name."&page=".($end)."&page_id=xxx'>Start >>></a> "; } return $str; Really need help with this stuff! Thanks in advance!

    Read the article

  • Function Point Analysis -- a seriously over-estimating technique?

    - by kizzx2
    I know questions about FPA have been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data. 1. First, some data This question is based on a tutorial whose author included a "Sample Count" section demonstrating the count step by step. You can see some screenshots of his sample application here. In the end, he calculated the unadjusted FP to be 99. There is another article on InformIT with industry data on typical hours/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p). 2. Reality check!? Now just check out the screenshots again. Do a little math here: 99 * 2 = 198 hours 198 hours / 40 hours per week = 5 weeks Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week to have it completed? Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25: 198 * 7.25 = $1435.50 From what I could see from the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems like they typically use 4.2 hours/FP, which gives us even more shocking stats: 99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months! 415 hours * $7.25 = $3000 zomg, and that's even assuming that all our poor coders get the minimum wage!) 3. Am I missing something here? Right now, I could come up with several possible explanations: FPA is really only suited for bigger projects (1000+ FPs), so it becomes extremely inaccurate at smaller scale. The hours/FP metric fluctuates abruptly from team to team, project to project. For a small project like this, we could have used something like 0.5 hours/FP. (Now this kind of makes the whole estimation thing pointless, unless my firm does the same type of projects for several years with the same team, which is not really common.) From my experience with several software metrics, Function Point is really not a lightweight metric. If the hours/FP figure fluctuates so much, then what's the point? Maybe I could have gone with User Story Points, which are a lot faster to get and arguably almost as uncertain. What would be the FP experts' answers to this?
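
    For what it's worth, a quick PHP sanity check of the figures quoted above (99 unadjusted FP, 2 and 4.2 hours/FP, $7.25/hour; every number is taken from the question itself):

        $fp = 99;
        foreach (array(2.0, 4.2) as $hoursPerFp) {
            $hours = $fp * $hoursPerFp;
            printf("%.1f h/FP: %.0f hours, %.1f weeks, $%.2f at minimum wage\n",
                   $hoursPerFp, $hours, $hours / 40, $hours * 7.25);
        }

    It prints roughly 198 hours / 5 weeks / $1435.50 and 416 hours / 10.4 weeks / $3014.55, which matches the estimates above.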

    Read the article

  • Set the Dropdown box as selected in Javascript

    - by Aruna
    I am using JavaScript in my app. In my table, I have a column named industry which contains a value like: id 69, name: aruna, username: aruna, email: [email protected], password: bd09935e199eab3d48b306d181b4e3d1:i0ODTgVXbWyP1iFOV, type: 3, industry: | Insurance | Analytics | EIS and Process Engineering. This industry value is inserted from a multi-select dropdown box. Now, on load, I am trying to make my form contain these values, where industry is the dropdown box <select id="ind1" moslabel="Industry" onClick="industryfn();" mosreq="0" multiple="multiple" size="3" class="inputbox" name="industry1[]"> <option value="Banking and Financial Services">Banking and Financial Services</option> <option value="Insurance">Insurance</option> <option value="Telecom">Telecom</option> <option value="Government ">Government </option> <option value="Healthcare &amp; Life sciences">Healthcare & Life sciences</option> <option value="Energy">Energy</option> <option value="Retail &amp;Consumer products">Retail &Consumer products</option> <option value="Energy, resources &amp; utilities">Energy, resources & utilities</option> <option value="Travel and Hospitality">Travel and Hospitality</option> <option value="Manufacturing">Manufacturing</option> <option value="High Tech">High Tech</option> <option value="Media and Information Services">Media and Information Services</option> </select> How do I keep the industry values (| Insurance | Analytics | EIS and Process Engineering) selected? EDIT: Window.onDomReady(function(){ user-get('industry'); $s=explode('|', $str) ? var selectedFields = new Array(); <?php for($i=1;$i<count($s);$i++){?> selectedFields.push("<?php echo $s[$i];?>"); <?php }?> for(i=1;i<selectedFields.length;i++) { var select=selectedFields[i]; for (var ii = 0; ii < document.getElementById('ind1').length; ii++) { var value=document.getElementById('ind1').options[ii].value; alert(value); alert(select); if(value==select) { document.getElementById('ind1').options[ii].selected=selected; }//If } //inner For }//outer For </script> I have tried the above; the alert functions work correctly, but the if loop doesn't work correctly. Why? Please help me.
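
    Two hedged observations: in the posted loop the assignment should most likely be document.getElementById('ind1').options[ii].selected = true; (a boolean, rather than the undefined variable 'selected'), and the outer loop starting at i=1 skips the first pushed value. An alternative that avoids the JavaScript entirely is to mark the options while PHP renders them; a rough sketch of that server-side approach, with made-up variable names and a shortened option list:

        $stored   = '| Insurance | Analytics | EIS and Process Engineering';
        $selected = array_filter(array_map('trim', explode('|', $stored)));

        // Shortened list of options for brevity.
        $industries = array('Banking and Financial Services', 'Insurance', 'Telecom',
                            'Government', 'Healthcare & Life sciences', 'Energy');

        echo '<select id="ind1" name="industry1[]" multiple="multiple" size="3" class="inputbox">';
        foreach ($industries as $industry) {
            $sel = in_array($industry, $selected) ? ' selected="selected"' : '';
            echo '<option value="' . htmlspecialchars($industry) . '"' . $sel . '>'
               . htmlspecialchars($industry) . '</option>';
        }
        echo '</select>';

    The trim() call matters because explode() leaves the spaces around each pipe-separated value.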

    Read the article

  • bmp image header doubts

    - by vikramtheone
    Hi Guys, I'm doing a project where I have to make use of the pixel information of a bmp image. So, I'm gathering the image information by reading the header information of the input .bmp image. I'm quite successful with everything, but one thing bothers me; can anyone here clarify it? The header information of my .bmp image is as follows (my test image is very tiny and grayscale) - BMP File header File size 1210 Offset information 1078 BMP Information header Image Header Size 40 Image Size 132 Image width 9 Image height 11 Image bits_p_p 8 So, from the .bmp header I see that the image size is 132 bytes, but when I multiply the width and height it is only 99. How is such a thing possible? I'm confident in the 132 bytes because when I subtract the Offset value from the File Size value, I get 132 (1210 - 1078 = 132), and also when I manually count the number of bytes (in a hex editor) from the point 1078 or 436h (the end of the offset field), there are exactly 132 bytes of pixel information. So, why is there a disparity between the size field and (width x height)? My future implementations depend on the image width and height information and not on the Image size information, so I have to understand thoroughly what's going on here. My understanding of the header must clearly be wrong... I guess!!! Help!!! Regards Vikram My bmp structures are as follows - typedef struct bmpfile_magic { short magic; }BMP_MAGIC_NUMBER; typedef struct bmpfile_header { uint32_t filesz; uint16_t creator1; uint16_t creator2; uint32_t bmp_offset; }BMP_FILE_HEADER; typedef struct { uint32_t header_sz; uint32_t width; uint32_t height; uint16_t nplanes; uint16_t bitspp; uint32_t compress_type; uint32_t bmp_bytesz; uint32_t hres; uint32_t vres; uint32_t ncolors; uint32_t nimpcolors; } BMP_INFO_HEADER;
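
    The mismatch is almost certainly row padding: BMP pixel rows are stored padded so that every row occupies a multiple of 4 bytes, so a 9-pixel-wide, 8-bits-per-pixel row takes 12 bytes instead of 9, and 12 bytes * 11 rows = 132 bytes. A small sketch of the stride calculation (the function name is just for illustration):

        // Round each pixel row up to a 4-byte boundary, as the BMP format requires.
        function bmp_row_stride($width_px, $bits_per_pixel)
        {
            $row_bytes = (int) ceil($width_px * $bits_per_pixel / 8);   // 9 bytes here
            return (int) (ceil($row_bytes / 4) * 4);                    // padded to 12
        }

        echo bmp_row_stride(9, 8) * 11;   // prints 132, the size reported in the header

    When walking the pixel data, skip the (stride - width) padding bytes at the end of every row, and keep in mind that BMP rows are normally stored bottom-up.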

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!

    Read the article

  • Chaining animations and memory management

    - by bryan1967
    Hey Everyone, Got a question. I have a subclassed UIView that is acting as my background where I am scrolling the ground. The code is working really nicely and, according to Instruments, I am not leaking, nor is my created-and-still-living object allocation growing. I have discovered elsewhere in my application that adding an animation to a UIImageView that is owned by my subclassed UIView seems to bump up my retain count, and removing all animations when I am done drops it back down. My question is this: when you add an animation to a layer with a key, I am assuming that if there is already an animation in use in that entry position in the backing dictionary, it is released and goes into the autorelease pool? For example: - (void)animationDidStop:(CAAnimation *)theAnimation finished:(BOOL)flag { NSString *keyValue = [theAnimation valueForKey:@"name"]; if ( [keyValue isEqual:@"step1"] && flag ) { groundImageView2.layer.position = endPos; CABasicAnimation *position = [CABasicAnimation animationWithKeyPath:@"position"]; position.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; position.toValue = [NSValue valueWithCGPoint:midEndPos]; position.duration = (kGroundSpeed/3.8); position.fillMode = kCAFillModeForwards; [position setDelegate:self]; [position setRemovedOnCompletion:NO]; [position setValue:@"step2-1" forKey:@"name"]; [groundImageView2.layer addAnimation:position forKey:@"positionAnimation"]; groundImageView1.layer.position = startPos; position = [CABasicAnimation animationWithKeyPath:@"position"]; position.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; position.toValue = [NSValue valueWithCGPoint:midStartPos]; position.duration = (kGroundSpeed/3.8); position.fillMode = kCAFillModeForwards; [position setDelegate:self]; [position setRemovedOnCompletion:NO]; [position setValue:@"step2-2" forKey:@"name"]; [groundImageView1.layer addAnimation:position forKey:@"positionAnimation"]; } else if ( [keyValue isEqual:@"step2-2"] && flag ) { groundImageView1.layer.position = midStartPos; CABasicAnimation *position = [CABasicAnimation animationWithKeyPath:@"position"]; position.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; position.toValue = [NSValue valueWithCGPoint:endPos]; position.duration = 12; position.fillMode = kCAFillModeForwards; [position setDelegate:self]; [position setRemovedOnCompletion:NO]; [position setValue:@"step1" forKey:@"name"]; [groundImageView1.layer addAnimation:position forKey:@"positionAnimation"]; } } This chains animations infinitely, and as I said, once it is running the created-and-living object allocation doesn't change. I am assuming that every time I add an animation, the one that exists in that key position is released. Just wondering if I am correct. Also, I am relatively new to Core Animation. I tried to play around with re-using the animations but got a little impatient. Is it possible to reuse animations? Thanks! Bryan

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?

    Read the article

  • php split array into smaller even arrays

    - by SoulieBaby
    I have a function that is supposed to split my array into smaller, evenly distributed arrays, however it seems to be duplicating my data along the way. If anyone can help me out that'd be great. Here's the original array: Array ( [0] => stdClass Object ( [bid] => 42 [name] => Ray White Mordialloc [imageurl] => sp_raywhite.gif [clickurl] => http://www.raywhite.com/ ) [1] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [2] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [3] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [4] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [5] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) [7] => stdClass Object ( [bid] => 34 [name] => Adrianos Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => ) [8] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [9] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [10] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) [11] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [12] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [13] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) [14] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [15] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [16] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) [17] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [18] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [19] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) [20] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [21] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [22] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [23] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [24] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [25] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] 
=> msc logo.jpg [clickurl] => ) ) Here's the php function which is meant to split the array: function split_array($array, $slices) { $perGroup = floor(count($array) / $slices); $Remainder = count($array) % $slices ; $slicesArray = array(); $i = 0; while( $i < $slices ) { $slicesArray[$i] = array_slice($array, $i * $perGroup, $perGroup); $i++; } if ( $i == $slices ) { if ($Remainder > 0 && $Remainder < $slices) { $z = $i * $perGroup +1; $x = 0; while ($x < $Remainder) { $slicesRemainderArray = array_slice($array, $z, $Remainder+$x); $remainderItems = array_merge($slicesArray[$x],$slicesRemainderArray); $slicesArray[$x] = $remainderItems; $x++; $z++; } } }; return $slicesArray; } Here's the result of the split (it somehow duplicates items from the original array into the smaller arrays): Array ( [0] => Array ( [0] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [1] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [2] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [3] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [4] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [5] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [1] => Array ( [0] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [1] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [3] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [4] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [5] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [2] => Array ( [0] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [3] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [4] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [3] => Array ( [0] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [1] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] => msc logo.jpg [clickurl] => ) [2] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => 
http://www.2brothers.com.au/ ) [3] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [4] => Array ( [0] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [1] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [2] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [5] => Array ( [0] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 34 [name] => Adriano's Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => # ) ) [6] => Array ( [0] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) ) [7] => Array ( [0] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) ) [8] => Array ( [0] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [1] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) ) [9] => Array ( [0] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [1] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) ) ) ^^ As you can see there are duplicates from the original array in the newly created smaller arrays. I thought I could remove the duplicates using a multi-dimensional remove duplicate function but that didn't work. I'm guessing my problem is in the array_split function. Any suggestions? :)
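
    The duplicates most likely come from the remainder-handling block, which re-slices elements that were already handed out and merges the overlapping slices back into the first few groups. Here is a minimal sketch of an even split without that step (the function name is made up; the first count($array) % $slices groups simply get one extra element):

        function split_array_even(array $items, $slices)
        {
            $total = count($items);
            $base  = (int) floor($total / $slices);   // minimum items per group
            $extra = $total % $slices;                // first $extra groups get one more
            $groups = array();
            $offset = 0;
            for ($i = 0; $i < $slices; $i++) {
                $size     = $base + ($i < $extra ? 1 : 0);
                $groups[] = array_slice($items, $offset, $size);
                $offset  += $size;
            }
            return $groups;
        }

    For the 26 sponsor objects above and 10 slices this yields six groups of three and four groups of two, with every element appearing exactly once.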

    Read the article
