Search Results

Search found 21235 results on 850 pages for 'no www'.


  • jQuery Grouping Similar Items and Counting When Repeated

    - by NessDan
    So I have this structure setup: <ul> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> (Vid1) <li>http://www.youtube.com/watch?v=bOF3o8B292U</li> (Vid2) <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> (Vid3) <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> <li>http://www.youtube.com/watch?v=bOF3o8B292U</li> <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> </ul> Vid1 is repeated 3 times, Vid2 is repeated 3 times, and Vid3 is repeated 2 times. I want to put them into a structure where I can reference them like this: Vid1 - 3 (Repeated), http://www.youtube.com/get_video?video_id=dw1Vh9Yzryo&fmt=36 (Download) Vid2 - 3 (Repeated), http://www.youtube.com/get_video?video_id=bOF3o8B292U&fmt=36 (Download) Vid3 - 2 (Repeated), http://www.youtube.com/get_video?video_id=yAY4vNJd7A8&fmt=36 (Download) "This video was repeated " + [Vid1][Repeated] + " times and you can download it here: " + [Vid1][Download]; How can I set this structure up? I think I should be using an array to achieve the above but I'm not sure how I would set it up or how to reference certain things in the array. The other question is how can I get how many times something was repeated? The URL I have no problem with. Can anyone help me out?
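    A minimal sketch of one way to do the counting, assuming jQuery is loaded; the counts object and variable names are illustrative (not from the original post), and the get_video?...&fmt=36 download pattern is taken from the question:

        var counts = {}; // maps each video URL -> number of times it appears in the list
        $("ul li").each(function () {
            var url = $(this).text();
            counts[url] = (counts[url] || 0) + 1;
        });

        // Look up one entry: the repeat count plus a download link built from the watch URL
        var url = "http://www.youtube.com/watch?v=dw1Vh9Yzryo";
        var id = url.split("v=")[1]; // "dw1Vh9Yzryo"
        var download = "http://www.youtube.com/get_video?video_id=" + id + "&fmt=36";
        alert("This video was repeated " + counts[url] + " times and you can download it here: " + download);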

    Read the article

  • jQuery Grouping Similar Items w/ Object Literal

    - by NessDan
    So I have this structure setup: <ul> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> (Vid1) <li>http://www.youtube.com/watch?v=bOF3o8B292U</li> (Vid2) <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> (Vid3) <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> <li>http://www.youtube.com/watch?v=bOF3o8B292U</li> <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> </ul> Vid1 is repeated 3 times, Vid2 is repeated 3 times, and Vid3 is repeated 2 times. I want to put them into a structure where I can reference them like this: youtube[0][repeated] = 3; youtube[0][download] = "http://www.youtube.com/get_video?video_id=dw1Vh9Yzryo&fmt=36" youtube[1][repeated] = 3; youtube[1][download] = "http://www.youtube.com/get_video?video_id=bOF3o8B292U&fmt=36" youtube[2][repeated] = 3; youtube[2][download] = "http://www.youtube.com/get_video?video_id=yAY4vNJd7A8&fmt=36" "This video was repeated " + youtube[0][repeated] + " times and you can download it here: " + youtube[0][download]; How can I set this multidimensional array up? Been Googling for hours and I don't know how to set it up. Can anyone help me out?
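    A sketch of one way to build that structure, assuming jQuery; it tallies the duplicates into a map first, then converts the map into a numeric array of objects. Note the quoted keys in the last line: youtube[0]["repeated"], since repeated and download are property names, not variables:

        var tally = {}; // video id -> occurrence count
        $("ul li").each(function () {
            var id = $(this).text().split("v=")[1];
            tally[id] = (tally[id] || 0) + 1;
        });

        var youtube = [];
        for (var id in tally) {
            youtube.push({
                repeated: tally[id],
                download: "http://www.youtube.com/get_video?video_id=" + id + "&fmt=36"
            });
        }

        alert("This video was repeated " + youtube[0]["repeated"] +
              " times and you can download it here: " + youtube[0]["download"]);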

    Read the article

  • IIS7 URL Rewriting: How not to drop HTTPS protocol from rewritten URL?

    - by Scott Mitchell
    I'm working on a website that's using IIS 7's URL rewriting feature to do a permanent redirect from example.com to www.example.com, as well as rewrites from similar domain names to the "main" one, such as from www.examples.com to www.example.com. This rewrite rule - shown below - has worked well for some time now. However, we recently added HTTPS support and noticed that if users visit one of the URLs to be rewritten to www.example.com then HTTPS is dropped. For instance, if a user visits https://example.com they get redirected to http://www.example.com, whereas we would like them to be sent to https://www.example.com. Here is the rewrite rule of interest (in Web.config):

        <rule name="Canonical Host Name" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTP_HOST}" pattern="^example\.com$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?example\.net$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?example\.info$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
        </rule>

    As you can see, the action element's url attribute points directly to http://, so I get why https://example.com is redirected to http://www.example.com. My question is, how do I fix this? I tried (naively) to just drop the http:// part from the url attribute, but that didn't work. Thanks!
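    One common fix (a sketch of one approach, not necessarily what this site ended up using) is a rewrite map that translates the {HTTPS} server variable, which IIS sets to "on" or "off", into the scheme, so the redirect preserves whichever protocol the request arrived on:

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="MapProtocol">
              <add key="on" value="https" />
              <add key="off" value="http" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Canonical Host Name" stopProcessing="true">
              <match url="(.*)" />
              <conditions logicalGrouping="MatchAny">
                <add input="{HTTP_HOST}" pattern="^example\.com$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?example\.net$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?example\.info$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
              </conditions>
              <action type="Redirect" url="{MapProtocol:{HTTPS}}://www.example.com/{R:1}"
                      redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>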

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement for a query at compile time. When a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills over to tempdb and hence poor performance. Common memory-allocating queries are those that perform a Sort or a Hash Match operation such as Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article; it covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over to tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation / overestimation of memory for Hash Match operations. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote, click here.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which will select 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, and hence the memory estimation is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which will select 99% of the data.
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is way off from the actual number of rows, 792000, because the estimate is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill over to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile (a lookup sketch appears at the end of this article). Refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate, which avoids the spill over to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because the estimate of 792000 rows was based on the first set of parameter values supplied to the stored procedure, 'NY' in this case, instead of the 8000 rows that parameter value 'WA' selects. This overestimation makes the Hash Match operation itself perform poorly, and it can also affect other concurrently executing queries requiring memory, so overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate like before: the estimated number of rows, 8000, is similar to the actual number of rows, 8000.
    The execution with parameter value 'NY' also has a good estimate and memory grant like before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously; this is because of the overestimation of memory. The execution with parameter value 'NY' has an optimal execution time like before. The execution with parameter value 'WA' overestimates the rows because of the optimize for hint value of 'NY', and unlike before, more memory than necessary was granted to it on that basis. The execution with parameter value 'NY' has a good estimate because of the optimize for hint value of 'NY': the estimated number of rows, 792000, is similar to the actual number of rows, 792000, and an optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    Summary: a cached plan can lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that leads to compilation overhead; however, in most cases it is better to pay for compilation than to spill a sort over to tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than the hint implies, this will still lead to a spill; on the other side there is the possibility of overestimation causing unnecessary memory pressure for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might itself lead to poor performance.
    When the values used in the optimize for hint have been archived from the database, the estimate will be wrong and lead to the worst performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
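    A footnote on the DBCC FREEPROCCACHE (plan_handle) recommendation above: the article does not show how to find the plan_handle, so here is a hypothetical T-SQL lookup sketch (the LIKE filter and the sample handle are illustrative, and single-plan eviction requires SQL Server 2008 or later):

        --Find the plan_handle of the cached CustomersByState plan
        select cp.plan_handle, st.text
        from sys.dm_exec_cached_plans cp
        cross apply sys.dm_exec_sql_text(cp.plan_handle) st
        where st.text like '%CustomersByState%'
        go
        --Evict just that plan, substituting the handle returned above
        --dbcc freeproccache (0x050007...)
        --go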

    Read the article

  • php split array into smaller even arrays

    - by SoulieBaby
    I have a function that is supposed to split my array into smaller, evenly distributed arrays, however it seems to be duplicating my data along the way. If anyone can help me out that'd be great. Here's the original array: Array ( [0] => stdClass Object ( [bid] => 42 [name] => Ray White Mordialloc [imageurl] => sp_raywhite.gif [clickurl] => http://www.raywhite.com/ ) [1] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [2] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [3] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [4] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [5] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) [7] => stdClass Object ( [bid] => 34 [name] => Adrianos Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => ) [8] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [9] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [10] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) [11] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [12] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [13] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) [14] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [15] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [16] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) [17] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [18] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [19] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) [20] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [21] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [22] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [23] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [24] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [25] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] 
=> msc logo.jpg [clickurl] => ) ) Here's the php function which is meant to split the array: function split_array($array, $slices) { $perGroup = floor(count($array) / $slices); $Remainder = count($array) % $slices ; $slicesArray = array(); $i = 0; while( $i < $slices ) { $slicesArray[$i] = array_slice($array, $i * $perGroup, $perGroup); $i++; } if ( $i == $slices ) { if ($Remainder > 0 && $Remainder < $slices) { $z = $i * $perGroup +1; $x = 0; while ($x < $Remainder) { $slicesRemainderArray = array_slice($array, $z, $Remainder+$x); $remainderItems = array_merge($slicesArray[$x],$slicesRemainderArray); $slicesArray[$x] = $remainderItems; $x++; $z++; } } }; return $slicesArray; } Here's the result of the split (it somehow duplicates items from the original array into the smaller arrays): Array ( [0] => Array ( [0] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [1] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [2] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [3] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [4] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [5] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [1] => Array ( [0] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [1] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [3] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [4] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [5] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [2] => Array ( [0] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [3] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [4] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [3] => Array ( [0] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [1] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] => msc logo.jpg [clickurl] => ) [2] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => 
http://www.2brothers.com.au/ ) [3] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [4] => Array ( [0] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [1] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [2] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [5] => Array ( [0] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 34 [name] => Adriano's Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => # ) ) [6] => Array ( [0] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) ) [7] => Array ( [0] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) ) [8] => Array ( [0] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [1] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) ) [9] => Array ( [0] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [1] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) ) ) ^^ As you can see there are duplicates from the original array in the newly created smaller arrays. I thought I could remove the duplicates using a multi-dimensional remove duplicate function but that didn't work. I'm guessing my problem is in the array_split function. Any suggestions? :)
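    For reference, a corrected sketch of the splitting function (one possible rewrite, not the poster's code; intdiv() assumes PHP 7+, on older versions use floor() and a cast). It hands the first count($array) % $slices groups one extra item each, so every item lands in exactly one group with no duplication:

        <?php
        function split_array(array $array, int $slices): array
        {
            $array = array_values($array);            // normalize keys to 0..n-1
            $base = intdiv(count($array), $slices);   // minimum items per group
            $remainder = count($array) % $slices;     // groups that get one extra item
            $result = [];
            $offset = 0;
            for ($i = 0; $i < $slices; $i++) {
                $length = $base + ($i < $remainder ? 1 : 0);
                $result[] = array_slice($array, $offset, $length);
                $offset += $length;
            }
            return $result;
        }

    With 26 items and 10 slices this yields six groups of 3 and four groups of 2, which matches the group sizes the original function was aiming for.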

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It's pretty easy to get lost in the details – so I've tried to organize these methods in a way that I can quickly look this up. I've included links to detailed explanations and to C++ code examples. I've tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations; Solving Non-Linear Equations Iteratively; Interpolation; Curve Fitting; Optimization; Numerical Differentiation & Integration; Solving ODEs; Boundary Problems; Solving EigenValue problems. Enjoy – I did! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution · Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix → reduce set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Solve each equation in turn for its own unknown, immediately reusing the freshly updated values of the other unknowns within the same sweep; repeat sweeps until the solution converges to tolerance.
    An optimization of Gauss-Jacobi: roughly 1.5 times faster, requiring about a quarter of the iterations to achieve the same tolerance Solving Non-Linear Equations Iteratively Find roots of polynomials – there may be 0, 1 or n solutions for an n order polynomial – using iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method if a nonlinear function has opposite signs at 2 ends of a small interval x1 & x2, then there is likely to be a solution in their interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange the equation to isolate a variable, then apply the incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it at the midpoint into sub-intervals and then apply the incremental method on smaller & smaller intervals – zoom in (a C++ sketch of this method appears at the end of this post) Adv: unaffected by function gradient → reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval, & use the relative value of function at interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots Can be easily applied to a set of n coupled non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate the first derivative from an initial interval (does not require the root to be bracketed) instead of inputting it Since the derivative is approximated, it may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step.
    Similar implementation to the False Position method Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points that fit as closely as possible within the range of a discrete set of known points (that were obtained via sampling, experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points → concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for small number of nodes Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in the NR form - divided differences are used to approximately calculate the various differences.
    For a given set of 3 data points, fit a quadratic interpolant through the data Bracketed functions allow Newton divided differences to be calculated recursively Difference table Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function to be smooth & its 1st & 2nd derivatives are continuous through the edge points in the interval Curve Fitting A generalization of interpolating whereby given data points may contain noise → the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance → confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via sign of first derivative - the derivative at extrema (and saddle points) will be 0 Local minima Bisection method Similar method for finding a root for a non-linear equation Start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to golden ratio 0.618..
    Achieves reduction by evaluating a single function instead of 2 Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear, in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerate simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at the n+1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima by occasionally accepting uphill moves Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc Convergence is slow – many more function evaluations than simulated annealing Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sum of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved.
    Provide an approximation of the derivative within O(h^2) accuracy There are also central difference & extended central difference, which have O(h^4) accuracy Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply the Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage the Trapezoidal Rule for 2 point formulas, the Simpson 1/3 Rule for 3 point formulas and the Simpson 3/8 Rule for 4 point formulas; for 5 point formulas use Bode's Rule. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) → a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials.
    Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn → asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions → accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solutions of differential equations are located at 2 different values of the independent variable x → more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems For an eigenvector, multiplying by the matrix is equivalent to multiplying by a scalar (its eigenvalue) → the matrix problem converts into finding the roots of the characteristic polynomial EigenValue For a given set of equations in matrix form, determine the solution eigenvalues & eigenvectors Similar Matrices - have same eigenvalues.
    Use orthogonal similarity transforms to reduce a matrix to diagonal form from which eigenvalue(s) & eigenvectors can be computed iteratively Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms What's Next Outside of Numerical Methods there are lots of different types of algorithms that I've learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (e.g. MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott's books – interesting) Sooner or later, I'll cover the above topics as well.
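    To make one entry above concrete, here is a self-contained C++ sketch of the bracketed bisection root finder described in the non-linear equations section (the tolerance and the sample function are illustrative):

        #include <cmath>
        #include <cstdio>

        // Bisection: f must change sign over [lo, hi]; the bracket guarantees convergence.
        double bisect(double (*f)(double), double lo, double hi, double tol) {
            double mid = lo;
            while ((hi - lo) > tol) {
                mid = 0.5 * (lo + hi);
                if (f(lo) * f(mid) <= 0.0)
                    hi = mid;  // sign change in the lower half: the root lies there
                else
                    lo = mid;  // otherwise the root lies in the upper half
            }
            return mid;
        }

        double f(double x) { return x * x - 2.0; }  // root at sqrt(2) ~ 1.414214

        int main() {
            std::printf("root ~ %.6f\n", bisect(f, 0.0, 2.0, 1e-9));
            return 0;
        }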

    Read the article

  • Mapping an amazon server to a domain name registered with name.com

    - by S4M
    I have an Amazon S3 web server and a domain name registered at name.com (the name is sam-experiments.com). I am trying to have a static page hosted on the Amazon web server displayed at http://www.sam-experiments.com. On the web server side, my bucket name is 'www.sam-experiments.com', and it links to here: http://www.sam-experiments.com.s3-website-eu-west-1.amazonaws.com/ On name.com, I added a new record with the following characteristics: Record Type: CNAME Record Host: www.sam-experiments.com Record Answer: www.sam-experiments.com.s3.amazonaws.com. (as specified in the documentation here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/VirtualHosting.html#VirtualHostingCustomURLs) TTL: 300 However, nothing gets displayed on www.sam-experiments.com, and I am not able to find what I am doing wrong. I really would appreciate a tip. Thanks! Note: I already posted this question on Stack Overflow but didn't get any answer, so I thought posting here may be more appropriate.
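    For reference, here is the record from the question in zone-file notation, next to a variant that points at the S3 website endpoint from the working URL above. That the website endpoint is what this setup needs is an assumption on my part, based on the bucket being configured for static website hosting (only one of the two CNAMEs would exist at a time):

        ; record as described in the question (REST endpoint)
        www.sam-experiments.com.  300  IN  CNAME  www.sam-experiments.com.s3.amazonaws.com.

        ; variant using the S3 *website* endpoint from the question's working URL
        www.sam-experiments.com.  300  IN  CNAME  www.sam-experiments.com.s3-website-eu-west-1.amazonaws.com.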

    Read the article

  • LinqToXML removing empty xmlns attributes & adding attributes like xmlns:xsi, xsi:schemaLocation

    - by Rohit Gupta
    Suppose you need to generate the following XML:

        <GenevaLoader xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd"
            xmlns="http://www.advent.com/SchemaRevLevel401/Geneva">
          <PriceRecords>
            <PriceRecord>
            </PriceRecord>
          </PriceRecords>
        </GenevaLoader>

    Normally you would write the following C# code to accomplish this:

        const string ns = "http://www.advent.com/SchemaRevLevel401/Geneva";
        XNamespace xnsp = ns;
        XNamespace xsi = XNamespace.Get("http://www.w3.org/2001/XMLSchema-instance");

        XElement root = new XElement(xnsp + "GenevaLoader",
            new XAttribute(XNamespace.Xmlns + "xsi", xsi.NamespaceName),
            new XAttribute(xsi + "schemaLocation", "http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd"));

        XElement priceRecords = new XElement("PriceRecords");
        root.Add(priceRecords);

        for (int i = 0; i < 3; i++)
        {
            XElement price = new XElement("PriceRecord");
            priceRecords.Add(price);
        }

        XDocument doc = new XDocument(root); // implied by the doc.Save call below
        doc.Save("geneva.xml");

    The problem with this approach is that it adds an additional empty xmlns attribute on the "PriceRecords" element, like so:

        <GenevaLoader xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd" xmlns="http://www.advent.com/SchemaRevLevel401/Geneva">
          <PriceRecords xmlns="">
            <PriceRecord>
            </PriceRecord>
          </PriceRecords>
        </GenevaLoader>

    The solution is to add the xmlns namespace in code to each child and grandchild element of the root element, like so:

        XElement priceRecords = new XElement(xnsp + "PriceRecords");
        root.Add(priceRecords);

        for (int i = 0; i < 3; i++)
        {
            XElement price = new XElement(xnsp + "PriceRecord");
            priceRecords.Add(price);
        }

    Read the article

  • siteground hosting forwarding

    - by Oleg Videnov
    I would like to ask a (maybe simple) question. I have a website, let's say www.website1.com, and on a different hosting provider (SiteGround) I have www.website2.com. Now, www.website1.com is the old website, and the boss wants this: if someone clicks on www.website1.com/user/content/1, he/she should be redirected to www.website2.com/user/content/1, but the URL should REMAIN www.website1.com/user/content/1, and the same thing for all the pages. If anyone knows how to do this, it would be much appreciated. Thanks in advance! Oleg Videnov
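    Keeping the www.website1.com URL while serving www.website2.com content is reverse proxying rather than redirecting; a minimal sketch for Apache on the website1 host, assuming mod_proxy and mod_proxy_http are enabled (the domain names are the question's placeholders):

        # In website1's virtual host: fetch content from website2, keep website1 URLs
        ProxyPreserveHost Off
        ProxyPass        / http://www.website2.com/
        ProxyPassReverse / http://www.website2.com/

    The same effect is possible with nginx's proxy_pass; either way the browser never sees the second domain.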

    Read the article

  • Managing Multiple dedicated servers centrally using Web GUI tools?

    - by Sampath
    Application Architecture: I have a single Ruby on Rails application running as multiple instances (i.e. each client having an identical subdomain) on multiple dedicated servers, using Phusion Passenger + nginx. The subdomain setup is done using the vhost option in the nginx Passenger module. For example: server 1 serves clients 1-100 with the identical subdomains www.client1.product.com up to www.client100.product.com; server 2 serves clients 101-200 with www.client101.product.com up to www.client200.product.com; server 3 serves clients 201-300 with www.client201.product.com up to www.client300.product.com. My question is that I need to centrally manage all my N dedicated servers using a GUI tool. I am looking for a Web GUI tool to manage tasks like: 1) back up all MySQL databases automatically from all dedicated servers and send them to some FTP backup drive 2) back up files and folders from all dedicated servers and send them to some FTP backup drive 3) manage the firewall (CSF http://configserver.com/cp/csf.html) centrally for all dedicated servers 4) view server load and bandwidth used in a graphical manner for all N dedicated servers Note: I would prefer an open source solution

    Read the article

  • SEO consequences for merging country sites in a .com

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setting of each country site running under its own domain name: www.companyname.de www.companyname.ch www.companyname.at I would love to merge them all in this way: www.companyname.com/de www.companyname.com/ch www.companyname.com/at with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this due to likely problems with SEO - the business is very SEO dependent, and being a rental chain, needs to be strong in local results. So the question is: Is there an unavoidable hit in Search Engine Optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out geographical location for sub-directories?
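    On the lang attribute idea: annotating the language and region alternates is typically done with hreflang link elements as well; a sketch for the German directory, using the question's placeholder URLs (the exact region codes are an assumption about how the country sites map to audiences):

        <html lang="de">
        <head>
          <link rel="alternate" hreflang="de"    href="http://www.companyname.com/de/" />
          <link rel="alternate" hreflang="de-AT" href="http://www.companyname.com/at/" />
          <link rel="alternate" hreflang="de-CH" href="http://www.companyname.com/ch/" />
        </head>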

    Read the article

  • apache-memory-hacker-linux

    - by bibhudatta
    On boot, our Linux server uses only about 435 MB of its 4 GB of RAM. As soon as we start the httpd service, usage jumps to about 1000 MB and then climbs until all memory is consumed and the server crashes; even stopping Apache releases only about 200 MB. What could the problem be? Can anyone tell me what these attackers are doing? I can see them sending hits to my Apache, and I think they are doing it from this system. Below is the log. Please help me out with this.

        [root@host ~]# tail -20 /var/log/httpd/dostizone.com-combined.log
        180.76.5.143 - - [14/Nov/2011:02:30:16 +0530] "GET /blogs/10248/209403/nfl-panties-since-the-quality-of HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        180.76.5.88 - - [14/Nov/2011:02:30:31 +0530] "GET /blogs/815/158725/new-jersey-attorney-search HTTP/1.1" 403 2290 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        220.181.108.186 - - [14/Nov/2011:02:30:32 +0530] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:30:20 +0530] "GET /blogs/805/11279/supra-suprano-high-shoes HTTP/1.1" 200 30642 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:37 +0530] "GET /blogs/10514/215084/oakland-raiders-sweatpants-tags HTTP/1.1" 403 2297 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:12 +0530] "GET /profile/8509 HTTP/1.1" 200 236894 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        220.181.94.237 - - [14/Nov/2011:02:30:43 +0530] "GET /mode-switch?return_url=%2Fblogs%2F8529%2F160217%2Fclimate-jordan-6 HTTP/1.1" 302 1 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:44 +0530] "GET /blogs/390/61573/blackhawk-jerseys-from-the-you HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/core.js HTTP/1.1" 200 26869 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Activity/externals/scripts/core.js HTTP/1.1" 200 26873 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/imagezoom/core.js HTTP/1.1" 200 26899 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        180.76.5.153 - - [14/Nov/2011:02:30:50 +0530] "GET /blogs/10252/212268/cleveland-browns-authentic-jerse HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:51 +0530] "GET /blogs/741/46260/chocolate-ugg-women-boots-1873 HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.1.7 - - [14/Nov/2011:02:30:40 +0530] "GET /blogs/682/97454/swarovski-jewellry-sale-articles HTTP/1.1" 200 25770 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:56 +0530] "GET /blogs/779/60941/players-a-to-z-michael-cuddyer HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:01 +0530] "GET /blogs/469/58551/chicago-bears-news-there-exist HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:54 +0530] "GET /blogs/8529/160217/climate-jordan-6 HTTP/1.1" 200 30750 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        180.76.5.59 - - [14/Nov/2011:02:31:05 +0530] "GET /blogs/815/158197/cheap-calgary-flames-jerseys HTTP/1.1" 403 2292 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:06 +0530] "GET /mode-switch?return_url=%2Fblogs%2F387%2F45679%2Fhandbag-louis-vuitton-judy-mm-m4 HTTP/1.1" 403 2258 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:31:10 +0530] "GET /public/temporary/c83b731ecc556d7fd1a7732d9ac16ed6.png HTTP/1.1" 404 2305 "-" "Googlebot-Image/1
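    For reference, a common first step for this symptom is capping how many child processes Apache may spawn, so its total memory use is bounded. A minimal httpd.conf sketch, assuming the prefork MPM; the numbers are illustrative and would need tuning for a 4 GB server:

        # Prefork MPM sketch: bound the number of children so RAM use is capped
        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            # Hard cap on concurrent children; total RAM use is roughly
            # MaxClients multiplied by the per-child footprint
            MaxClients          100
            # Recycle children periodically to limit per-process growth
            MaxRequestsPerChild 4000
        </IfModule>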

    Read the article

  • Google De-Index many pages at once?

    - by Jakobud
    On one of our websites, Google has been indexing something it wasn't supposed to. We fixed the problem so it shouldn't happen anymore, but we would like to ask Google to de-index those pages. The problem is that there are about 10,000 of them. They all look similar to this:

        http://www.mysite.com/file.php?o=34995
        http://www.mysite.com/file.php?o=4566
        http://www.mysite.com/file.php?o=223af
        http://www.mysite.com/file.php?o=6ga3h
        http://www.mysite.com/file.php?o=sfh45a
        etc...

    All the pages are file.php with GET parameters like the above. Is it possible to put in a de-index request like http://www.mysite.com/file.php* so that Google removes all 10,000 pages?
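    For reference, robots.txt Disallow rules match by URL prefix, so one common approach is to block the prefix and then request removal of the /file.php path through Google's removal tool. A minimal robots.txt sketch; whether a single prefix request covers all 10,000 URLs should be verified against Google's current documentation:

        # robots.txt sketch: Disallow matches by prefix,
        # so this covers every /file.php?o=... variant
        User-agent: *
        Disallow: /file.php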

    Read the article

  • SEO consequences for merging country sites in a .com

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setup, in which each country site runs under its own domain name:

        www.companyname.de
        www.companyname.ch
        www.companyname.at

    I would love to merge them all in this way:

        www.companyname.com/de
        www.companyname.com/ch
        www.companyname.com/at

    with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this because of likely SEO problems: the business is very SEO-dependent and, being a rental chain, needs to rank strongly in local results. So the question is: Is there an unavoidable hit to search engine optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out the geographical location of the sub-directories?
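    For reference, besides the html lang attribute, search engines also read per-URL alternate annotations, and sub-directories of a generic .com can be geotargeted individually in Google Webmaster Tools. A sketch of the head markup, assuming the directory layout above; the hreflang values are illustrative:

        <!-- hreflang sketch: one line per country version, emitted on every page -->
        <link rel="alternate" hreflang="de-DE" href="http://www.companyname.com/de/" />
        <link rel="alternate" hreflang="de-CH" href="http://www.companyname.com/ch/" />
        <link rel="alternate" hreflang="de-AT" href="http://www.companyname.com/at/" />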

    Read the article

  • How to find source of 301/302 redirect loop? Heroku GoDaddy Zerigo

    - by user179288
    This should be a relatively simple problem, but I'm having trouble. I hope this is the right forum to post on, as I've seen people get booted off Stack Overflow for this sort of thing. I've set up a web app on Heroku (Cedar stack) at my-web-app.herokuapp.com, and I'm trying to direct my-domain.com and www.my-domain.com to it. As per the instructions in the Heroku documentation, I've set my-domain.com to redirect (forward) to www.my-domain.com and then set a CNAME from www.my-domain.com to my-web-app.herokuapp.com. But the CNAME doesn't seem to be working right: something is sending requests back to my-domain.com, causing a loop, and I can't work out why. I first configured these settings at GoDaddy.com, where I registered the domain, but then tried to avoid the problem by using Heroku's Zerigo DNS add-on, setting the nameservers at GoDaddy to the ones given for Zerigo. However, the problem remains. Here is the output from dig for my-domain.com ("drop-circles.com"):

        ; <<>> DiG 9.3.2 <<>> any drop-circles.com
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 671
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 5

        ;; QUESTION SECTION:
        ;drop-circles.com. IN ANY

        ;; ANSWER SECTION:
        drop-circles.com. 433 IN NS b.ns.zerigo.net.
        drop-circles.com. 433 IN NS d.ns.zerigo.net.
        drop-circles.com. 433 IN NS e.ns.zerigo.net.
        drop-circles.com. 433 IN NS a.ns.zerigo.net.
        drop-circles.com. 433 IN NS c.ns.zerigo.net.
        drop-circles.com. 433 IN SOA a.ns.zerigo.net. hostmaster.zerigo.com. 1372250760 10800 3600 604800 900
        drop-circles.com. 433 IN A 64.27.57.29
        drop-circles.com. 433 IN A 64.27.57.24

        ;; ADDITIONAL SECTION:
        d.ns.zerigo.net. 68935 IN A 174.36.24.250
        e.ns.zerigo.net. 69015 IN A 72.26.219.150
        a.ns.zerigo.net. 72602 IN A 64.27.57.11
        c.ns.zerigo.net. 69204 IN A 109.74.192.232
        b.ns.zerigo.net. 70549 IN A 174.37.229.229

        ;; Query time: 15 msec
        ;; SERVER: 194.168.4.100#53(194.168.4.100)
        ;; WHEN: Wed Jun 26 14:29:07 2013
        ;; MSG SIZE rcvd: 293

    Here is the output from dig for www.my-domain.com ("www.drop-circles.com"):

        ; <<>> DiG 9.3.2 <<>> any www.drop-circles.com
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1608
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;www.drop-circles.com. IN ANY

        ;; ANSWER SECTION:
        www.drop-circles.com. 407 IN CNAME drop-circles-website.herokuapp.com.

        ;; Query time: 19 msec
        ;; SERVER: 194.168.4.100#53(194.168.4.100)
        ;; WHEN: Wed Jun 26 14:29:15 2013
        ;; MSG SIZE rcvd: 83

    And from Fiddler, if I use the inspector when I try either address, I get a series of requests. For my-domain.com ("drop-circles.com"):

        Request:
        GET http://drop-circles.com/ HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-gb
        User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: drop-circles.com

        Response:
        HTTP/1.1 302 Found
        Server: nginx/0.8.54
        Date: Wed, 26 Jun 2013 13:26:55 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Status: 302 Found
        Location: http://www.drop-circles.com/
        Content-Length: 113

        <html><body>Redirecting to <a href="http://www.drop-circles.com/">http://www.drop-circles.com/</a></body></html>

    And for www.my-domain.com ("www.drop-circles.com"):

        Request:
        GET http://www.drop-circles.com/ HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-gb
        User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.drop-circles.com

        Response:
        HTTP/1.1 301 Moved Permanently
        Content-Type: text/html
        Date: Wed, 26 Jun 2013 13:26:56 GMT
        Location: http://drop-circles.com/
        Vary: Accept
        X-Powered-By: Express
        Content-Length: 104
        Connection: keep-alive

        <p>Moved Permanently. Redirecting to <a href="http://drop-circles.com/">http://drop-circles.com/</a></p>

    Any and all help would be greatly appreciated. If it is not at all obvious from these readouts what the problem might be, could someone at least tell me which company (GoDaddy, Zerigo, or Heroku) I should go to for support? I don't really know enough to be able to say where the problem lies. Thank you.
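    One reading of the captures above, offered only as a hypothesis: the 301 from www carries X-Powered-By: Express, which suggests the Heroku app itself redirects www to the bare domain, while the registrar forwarding sends the bare domain back to www, closing the loop. A hypothetical sketch of the kind of Express middleware to look for in the app; this code is an assumption for illustration, not taken from the actual app:

        // Hypothetical Express middleware that would produce the observed 301.
        // On Heroku, with the CNAME on www, the redirect should go apex -> www,
        // not www -> apex as here.
        app.use(function (req, res, next) {
          if (req.headers.host === 'www.drop-circles.com') {
            return res.redirect(301, 'http://drop-circles.com' + req.url);
          }
          next();
        });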

    Read the article

  • Kinect hacked for augmented reality

    - by Kit Ong
    It seems Kinect has more potential than any other console-based motion-detection device, given the number of hacks out there in the wild. http://uk.videogames.games.yahoo.com/blog/article/19744/kinect-as-youve-never-seen-it-before.html Direct links to YouTube videos of Kinect hacks:

        http://www.youtube.com/watch?v=M-wLOfjVfVc?fs=1&hl=en_GB
        http://www.youtube.com/watch?v=eWmVrfjDCyw?fs=1&hl=en_GB
        http://www.youtube.com/watch?v=P3gfMXwQOGI?fs=1&hl=en_GB
        http://www.youtube.com/watch?v=4qhXQ_1CQjg?fs=1&hl=en_GB
        http://www.youtube.com/watch?v=VgLp-KyK5g8?fs=1&hl=en_GB
        http://www.youtube.com/watch?v=CeQwhujiWVk?fs=1&hl=en_GB

    Read the article

  • slow DNS resolution

    - by Ehsan
    I have a DNS server that resolves all queries for an internal group of servers. It is BIND on CentOS 5.5 (same as RHEL 5), and I have set it up to allow recursion and resolve directly, without any forwarders. The problem I am facing is that it takes a freakishly long time to resolve a name for the first time (on the order of 20 seconds), which causes clients to time out. When I set it to forward everything to Google's public DNS, i.e. 8.8.8.8 and 8.8.4.4, it works very nicely (within a second). I tried monitoring the traffic on the wire to see why this happens:

        [root@ns1 ~]# tcpdump -nnvvvA -s0 udp
        tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        23:06:36.137797 IP (tos 0x0, ttl 64, id 35903, offset 0, flags [none], proto: UDP (17), length: 60) 172.17.1.10.36942 > 172.17.1.4.53: [udp sum ok] 19773+ A? www.paypal.com. (32)
        E..<[email protected]... .....N.5.(6.M=...........www.paypal.com.....
        23:06:36.140594 IP (tos 0x0, ttl 64, id 56477, offset 0, flags [none], proto: UDP (17), length: 71) 172.17.1.4.6128 > 192.35.51.30.53: [udp sum ok] 10105 [1au] A? www.paypal.com. ar: . OPT UDPsize=4096 (43)
        E..G....@........#3....5.3fR'y...........www.paypal.com.......)........
        23:06:38.149756 IP (tos 0x0, ttl 64, id 13078, offset 0, flags [none], proto: UDP (17), length: 71) 172.17.1.4.52425 > 192.54.112.30.53: [udp sum ok] 54892 [1au] A? www.paypal.com. ar: . OPT UDPsize=4096 (43)
        [email protected]&.....6p....5.3.q.l...........www.paypal.com.......)........
        23:06:40.159725 IP (tos 0x0, ttl 64, id 43016, offset 0, flags [none], proto: UDP (17), length: 71) 172.17.1.4.24059 > 192.42.93.30.53: [udp sum ok] 11205 [1au] A? www.paypal.com. ar: . OPT UDPsize=4096 (43)
        E..G....@..@.....*].]..5.3..+............www.paypal.com.......)........
        23:06:41.141403 IP (tos 0x0, ttl 64, id 35904, offset 0, flags [none], proto: UDP (17), length: 60) 172.17.1.10.36942 > 172.17.1.4.53: [udp sum ok] 19773+ A? www.paypal.com. (32)
        E..<.@..@..@... .....N.5.(6.M=...........www.paypal.com.....
        23:06:42.169652 IP (tos 0x0, ttl 64, id 44001, offset 0, flags [none], proto: UDP (17), length: 60) 172.17.1.4.9141 > 192.55.83.30.53: [udp sum ok] 1184 A? www.paypal.com. (32)
        E..<[email protected].#..5.(...............www.paypal.com.....
        23:06:42.207295 IP (tos 0x0, ttl 54, id 38004, offset 0, flags [none], proto: UDP (17), length: 205) 192.55.83.30.53 > 172.17.1.4.9141: [udp sum ok] 1184- q: A? www.paypal.com. 0/3/3 ns: paypal.com. NS ns1.isc-sns.net., paypal.com. NS ns2.isc-sns.com., paypal.com. NS ns3.isc-sns.info. ar: ns1.isc-sns.net. AAAA 2001:470:1a::1, ns1.isc-sns.net. A 72.52.71.1, ns2.isc-sns.com. A 38.103.2.1 (177)
        E....t..6./A.7S......5#..................www.paypal.com..................ns1.isc-sns.net..............ns2.isc-sns...............ns3.isc-sns.info..,.......... ..p.............,..........H4G..I..........&g..

    (this goes on for a few more seconds)

    If you look carefully, you will see that the first three TLD servers queried did not respond at all. This wastes 7-8 seconds until one of them responds. Do you think I have set up something wrong here? Interestingly, when I dig directly against the servers that did not respond, they always answer very fast (showing the firewall/NAT is not the issue here). E.g. dig www.paypal.com @192.35.51.30 works perfectly, consistently, and very fast. What do you think about this mystery?
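    For reference, a minimal named.conf sketch of the forwarder setup the poster says works; the surrounding options are assumptions about the rest of the file:

        // named.conf sketch: hand all recursion to Google Public DNS
        options {
            recursion yes;
            forwarders { 8.8.8.8; 8.8.4.4; };
            forward only;  // never fall back to iterative resolution
        };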

    Read the article

  • Two different websites in one remote hosting

    - by Kor
    My client asked that a website hosted on one server (and pointed to by a domain) also be accessible, at a specific directory, from another domain that does not point there. For example:

        http://www.foo.com, hosted at GoDaddy, with the full website
        http://www.bar.com, hosted at Bluehost, needs to access http://www.foo.com/bar as if it were http://www.bar.com's root

    So if anybody enters through http://www.bar.com, it should internally load http://www.foo.com/bar without visually changing the URL. I am not sure if this is possible using .htaccess or anything like it. Could anybody show me some light? Thanks in advance
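    One possible approach, sketched under the assumption that Bluehost permits mod_rewrite and mod_proxy in .htaccess (many shared hosts do not): a proxying rewrite rule on www.bar.com that fetches the content from www.foo.com/bar transparently. If proxying is disabled, mirroring the files or an iframe are the usual fallbacks:

        # .htaccess sketch at the document root of www.bar.com
        RewriteEngine On
        # Fetch everything from www.foo.com/bar via mod_proxy ([P] flag),
        # so the visitor's address bar keeps showing www.bar.com
        RewriteRule ^(.*)$ http://www.foo.com/bar/$1 [P,L]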

    Read the article

  • One to many problem with implementing 301 redirect after changed urls

    - by user16136
    I have a problem. I had an old dynamic URL scheme which I have now split into multiple static URLs, e.g.

        www.mydomain.com/product.php?type=1&id=2
        www.mydomain.com/product.php?type=2&id=3
        www.mydomain.com/product.php?type=2&id=4
        etc.

    which I have changed to something like

        www.mydomain.com/electronics/radio
        www.mydomain.com/electronics/television
        www.mydomain.com/mobile/smartphone
        etc.

    Google has previously indexed the dynamic URLs, and search results still show the old URLs. I want searches to point to the new URLs. I have kept the old URLs active, so both versions work. How can I set up a 301 redirect in this case? I run IIS, and it only allows a page to be redirected to one URL. Should I deactivate the old dynamic URLs? In that case I would lose all the previous SEO rankings.
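    For reference, IIS's built-in HTTP Redirect feature is indeed one-to-one, but the URL Rewrite module can branch on the query string, giving each old type/id pair its own 301. A web.config sketch, assuming the URL Rewrite module is installed; the mapping shown is illustrative, with one rule per old URL pattern (for many mappings, a rewriteMap is the usual tool):

        <!-- Sketch: 301 one specific dynamic URL to its static successor -->
        <rule name="Redirect radio" stopProcessing="true">
          <match url="^product\.php$" />
          <conditions>
            <add input="{QUERY_STRING}" pattern="^type=1&amp;id=2$" />
          </conditions>
          <action type="Redirect" url="http://www.mydomain.com/electronics/radio"
                  appendQueryString="false" redirectType="Permanent" />
        </rule>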

    Read the article

  • Webmin - Setting up multiple virtual hosts - Subdomains

    - by Aaron
    Can someone please help me use Webmin to set up virtual hosts? My current domain www.MYDOMAINLOLFAKE.com works. The settings are as follows:

        Apache - handles the name-based server www.MYDOMAINLOLFAKE.com on all addresses
        Address: Any
        Port: 80
        Server Name: www.MYDOMAINLOLFAKE.com
        Document Root: /var/www/html

        BIND DNS Server - master zone MYDOMAINLOLFAKE.com
        ns1.mydomainlolfake.com IPHERE - works
        ns2.mydomainlolfake.com IPHERE - works
        mydomainlolfake.com IPHERE - works
        www.mydomainlolfake.com IPHERE - works
        mail.mydomainlolfake.com IPHERE - works
        ftp.mydomainlolfake.com IPHERE - works

    What I need: something.mydomainlolfake.com -- CAN'T GET THIS TO WORK

    What I tried: creating a new virtual host that handles the name-based server something.mydomainlolfake.com
        Address: Any
        Port: 81
        Document Root: /var/www/vhosts/something

    What happens: I create the new vhost, and then ALL addresses go to that new document root. I need different addresses to go to their respective folders. Can someone please give me better instructions on how to set this up using Webmin?

    TL;DR: How do I make a something.mydomainlolfake.com subdomain work in Webmin on my CentOS 6 web server?
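    For reference, a sketch of the Apache config a working setup would boil down to; this is an assumption about what Webmin should generate, not taken from the poster's server. Name-based matching only works among vhosts that share the same address and port, so the new host very likely needs port 80 like the first one (not 81), its own ServerName, and an A record for the subdomain in the BIND zone:

        # Both name-based vhosts must share the same port
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.MYDOMAINLOLFAKE.com
            DocumentRoot /var/www/html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName something.mydomainlolfake.com
            DocumentRoot /var/www/vhosts/something
        </VirtualHost>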

    Read the article

  • Generating AdSense ad impression?

    - by Danny
    I own two websites, say www.first.com and www.second.com. I have two apps in the Google Chrome Web Store whose app URLs are:

        app1 URL = www.first.com/currency_converter.php
        app2 URL = www.first.com/currency_converter.php

    respectively. Currently, if a user launches/clicks on the first app's URL, I forward them to the homepage of my first website (www.first.com), where I have AdSense code along with the currency converter app. In this way I generate Google ad impressions and traffic for my first website (i.e. www.first.com). So is this a valid approach? Am I violating any Google AdSense rule? Please provide your replies or suggestions. P.S.: Please comment if my question is not clear; I will try to write it another way.

    Read the article
