Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Create a sparse BufferedImage in Java

    - by elgcom
    I have to create an image with a very large resolution, but the image is relatively "sparse": only some areas of the image need to be drawn. For example, with the following code:

        /* this takes 5GB of memory */
        final BufferedImage img = new BufferedImage(
            36000, 36000, BufferedImage.TYPE_INT_ARGB);

        /* draw something */
        Graphics g = img.getGraphics();
        g.drawImage(....);

        /* output as PNG */
        final File out = new File("out.png");
        ImageIO.write(img, "png", out);

    The PNG image I create at the end is only about 200~300 MB. The question is: how can I avoid creating the 5GB BufferedImage in the first place? I do need an image with large dimensions, but with very sparse color information. Is there any stream-like equivalent of BufferedImage, so that it will not take so much memory?

  • How can I send messages to/from a WebSphere Message Broker from an embedded C client (no JVM)?

    - by queBurro
    What are my options for pub/sub messaging (or point-to-point, but pub/sub is better) to and from an IBM message broker, from an embedded headless C/C++ Linux client that doesn't have a JVM? Ideally we want:

    - large file transfer (2GB, once per day, off of the client)
    - encryption (SSL)
    - reliable ('assured' delivery / QoS2; maybe QoS1 would do)

    The client in question currently only has exes and some bash scripts. I've been playing with MQTTv3 and RSMB, but for that I'd have to chop the large files up (and reassemble them back home), and I don't want to get into that if there's a transport that will do it for me. I've looked at MQTTv5 (but our client's got no JVM), JMS (no JVM), and XMS, which again looks like it gives me a C API but then needs the JVM to be installed on the client (or am I wrong?). Any clues or hints would be appreciated, cheers.

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one-by-one, and which expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple, since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{rec};
        foreach my $record (@$records) { $obj->process_record($record) }

    As everyone knows, XML::Simple is, well, simple. More importantly, it is very slow and a memory hog, since it is a DOM parser and needs to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records, record-by-record. However, re-writing the entire code base (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like too big a task to be worth the resources, even at the cost of continuing to live with XML::Simple. I'm looking for an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one, based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is, e.g. whether I need to call next_record() or give it a callback coderef accepting a record.
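
    One possibility worth testing, offered as a hedged sketch rather than a drop-in answer: XML::Twig can fire a handler per <rec> element and discard each record after processing, keeping memory flat. For flat records like the sample above, the hashref matches what XML::Simple would produce; for repeated or attribute-bearing fields, XML::Simple's folding rules differ, so verify equivalence against your real data:

        use XML::Twig;

        my $twig = XML::Twig->new(
            twig_handlers => {
                'root/rec' => sub {
                    my ($t, $rec) = @_;
                    # build { f1 => 'v1', f2 => 'v2' }, as XML::Simple would
                    # for this flat record shape
                    my %record = map { ( $_->tag => $_->text ) } $rec->children;
                    $obj->process_record(\%record);   # $obj as in your code
                    $t->purge;    # free the parsed record to keep memory flat
                },
            },
        );
        $twig->parsefile('records.xml');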

  • Fastest way to zero out a 2d array in C?

    - by Eddy
    I want to repeatedly zero a large 2D array in C. This is what I do at the moment:

        for (j = 0; j < n; j++) {
            for (i = 0; i < n; i++) {
                array[i][j] = 0;
            }
        }

    I've tried using memset:

        memset(array, 0, sizeof(array));

    but this only works for 1D arrays. When I printf the contents of the 2D array, the first row is zeroes, but then I get a load of random large numbers and it crashes.
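
    A hedged note on what is likely going on: memset itself works fine on 2D arrays, but sizeof(array) only gives the full size when array is a true array in scope. If array is a function parameter or a pointer to heap-allocated rows, sizeof yields just the size of a pointer, so only a handful of bytes get cleared, which fits the "partly zeroed, then garbage" symptom. A minimal sketch of both cases:

        #include <string.h>

        #define N 1000

        /* Case 1: a real 2D array. The storage is one contiguous block,
           so a single memset over sizeof(grid) = N*N*sizeof(int) clears it. */
        static int grid[N][N];

        void zero_grid(void) {
            memset(grid, 0, sizeof(grid));
        }

        /* Case 2: an array of row pointers (int **). The rows are separate
           allocations, so each row must be cleared individually; sizeof on
           the pointer would only cover 4 or 8 bytes. */
        void zero_rows(int **rows, size_t n) {
            for (size_t i = 0; i < n; i++) {
                memset(rows[i], 0, n * sizeof(int));
            }
        }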

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly over SFTP to a remote location. This is useful when dumping large data sets and you don't have enough local disk space to create the backup file first and then copy it to the remote location. I've tried using Python + paramiko, which provides this functionality, but the performance is much worse than using the native OpenSSH/sftp binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on Linux, or with some library like paramiko (but one that performs close to the native sftp client)?
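
    For reference, a minimal paramiko sketch of the streaming approach (the host, credentials, and dump command below are placeholders); enabling pipelined writes and using large chunks is one way to narrow, though probably not close, the gap with the native client:

        import subprocess
        import paramiko

        # stream the dump straight into the remote file, no local temp file
        dump = subprocess.Popen(['pg_dump', 'mydb'], stdout=subprocess.PIPE)

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect('backup.example.com', username='backup')
        sftp = client.open_sftp()

        remote = sftp.open('/backups/mydb.dump', 'wb')
        remote.set_pipelined(True)             # don't wait for a server ack per write
        while True:
            chunk = dump.stdout.read(1 << 20)  # 1 MB at a time
            if not chunk:
                break
            remote.write(chunk)
        remote.close()

        sftp.close()
        client.close()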

  • Passing objects by reference or not in C#

    - by Piku
    Suppose I have a class like this:

        public class ThingManager {
            List<SomeClass> ItemList;

            public void AddToList(SomeClass Item) {
                ItemList.Add(Item);
            }

            public void ProcessListItems() {
                // go through the list one item at a time, get each item
                // from the list, modify it according to the class' purpose
            }
        }

    Assume "SomeClass" is a fairly large class containing methods and members that are quite complex (List<T>s and arrays, for example), and that there may be a large quantity of them, so not copying vast amounts of data around the program is important. Should the "AddToList" method have "ref" in it or not? And why? It's like trying to learn pointers in C all over again ;-) (which is probably why I am getting confused; I'm trying to relate these to pointers. In C it'd be "SomeClass *Item" and a list of "SomeClass *" variables).
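
    For what it's worth, a short sketch of the C analogy: a C# class is a reference type, so the parameter is already the equivalent of "SomeClass *Item"; passing it copies only the reference, never the object. Adding "ref" would make it the equivalent of "SomeClass **Item", which is only needed if the method must reassign the caller's variable:

        using System.Collections.Generic;

        public class SomeClass
        {
            // stand-in for the "fairly large" class: this array is never
            // copied when the object is passed around, only referenced
            public int[] Data = new int[1000000];
        }

        public class ThingManager
        {
            private readonly List<SomeClass> itemList = new List<SomeClass>();

            // no "ref" needed: the reference (a few bytes) is copied, and
            // the list ends up holding the very same object the caller holds
            public void AddToList(SomeClass item)
            {
                itemList.Add(item);
            }
        }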

  • Porting a PowerBuilder Application to .NET

    - by Justin Ethier
    Does anyone have any advice for migrating a PowerBuilder 10 business application to .NET? My company is considering migrating a legacy PB application to .NET (C#), and I am wondering if anyone has any experience - good or bad - that you would like to share. The application is rather large, with 10 PBL libraries, some PFC, as well as custom frameworks. There is also a large number of DLL calls being made. Finally, it uses a Microsoft SQL Server database. We have discussed porting the "core" application code to .NET and then porting more advanced functionality across as needed.

  • Who wrote 250k tests for WebKit?

    - by amwinter
    Assuming a yield of 3 tests per hour, that's roughly 83,000 hours. Eight hours a day makes about 10,400 days; divide by thirty to get around 350 mythical man-months. I call them mythical because writing 125 tests per person per week is unreal. Can any wise soul out there on SO shed some light on what sort of mythical men write unreal quantities of tests for large software projects? Thank you.

    Update: chrisw thinks there are only 20k tests (check out his explanation below).

    PS: I'd really like to hear from folks who have worked on projects with large test bases.

  • Google Translation API not working for documents even one page long

    - by Saubhagya
    I'm using the Google Translation API to translate text from Simplified Chinese to English in my C# program. The problem is that if the text is small (around one line), the API is able to translate it, but if the text is larger (more than 3 lines), it throws an exception saying "The remote server returned an unexpected response: (414) Request-URI Too Large." However, if I use translate.google.com in my browser, it works fine. How can I process large documents with the Google Translate API in my desktop C# application?
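
    A hedged sketch of the usual fix for a 414: the text is being sent in the query string, so send it in a POST body instead. The endpoint and the X-HTTP-Method-Override header below follow the old Google AJAX Language API convention, and chineseText is a placeholder; treat all of these as assumptions to verify against the current documentation:

        using System.Collections.Specialized;
        using System.Net;
        using System.Text;

        static class TranslateClient
        {
            // hypothetical helper: returns the raw JSON response
            public static string TranslateLongText(string chineseText)
            {
                using (var client = new WebClient())
                {
                    // Google's AJAX APIs accept POST when marked as a logical GET
                    client.Headers["X-HTTP-Method-Override"] = "GET";
                    client.Encoding = Encoding.UTF8;

                    var form = new NameValueCollection
                    {
                        { "v", "1.0" },
                        { "q", chineseText },        // long text goes in the body, not the URI
                        { "langpair", "zh-CN|en" },
                    };

                    byte[] raw = client.UploadValues(
                        "https://ajax.googleapis.com/ajax/services/language/translate", form);
                    return Encoding.UTF8.GetString(raw);  // JSON; translatedText is inside
                }
            }
        }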

  • How do you convert a parent-child (adjacency) table to a nested set using PHP and MySQL?

    - by mrbinky3000
    I've spent the last few hours trying to find the solution to this question online. I've found plenty of examples of how to convert from nested set to adjacency... but few that go the other way around. The examples I have found either don't work or use MySQL procedures. Unfortunately, I can't use procedures for this project; I need a pure PHP solution.

    I have a table that uses the adjacency model below:

        id   parent_id   category
        1    0           ROOT_NODE
        2    1           Books
        3    1           CD's
        4    1           Magazines
        5    2           Books/Hardcover
        6    2           Books/Large Format
        7    4           Magazines/Vintage

    And I would like to convert it to the nested set table below:

        id   left   right   category
        1    1      14      Root Node
        2    2      7       Books
        3    3      4       Books/Hardcover
        4    5      6       Books/Large Format
        5    8      9       CD's
        6    10     13      Magazines
        7    11     12      Magazines/Vintage

    I have a function based on the pseudo-code from this forum post (http://www.sitepoint.com/forums/showthread.php?t=320444), but it doesn't work: I get multiple rows that have the same value for left, which should never happen.

        <?php
        /**
        -- Table structure for table `adjacent_table`

        CREATE TABLE IF NOT EXISTS `adjacent_table` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `father_id` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;

        -- Dumping data for table `adjacent_table`

        INSERT INTO `adjacent_table` (`id`, `father_id`, `category`) VALUES
        (1, 0, 'ROOT'),
        (2, 1, 'Books'),
        (3, 1, 'CD''s'),
        (4, 1, 'Magazines'),
        (5, 2, 'Hard Cover'),
        (6, 2, 'Large Format'),
        (7, 4, 'Vintage');

        -- Table structure for table `nested_table`

        CREATE TABLE IF NOT EXISTS `nested_table` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `lft` int(11) DEFAULT NULL,
          `rgt` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
        */

        mysql_connect('localhost', 'USER', 'PASSWORD') or die(mysql_error());
        mysql_select_db('DATABASE') or die(mysql_error());

        adjacent_to_nested(0);

        /**
         * adjacent_to_nested
         *
         * Reads an "adjacency model" table and converts it to a "nested set" table.
         * @param integer $i_id   Should be the id of the "root node" in the adjacency table
         * @param integer $i_left Should only be used on recursive calls. Holds the current value for lft.
         */
        function adjacent_to_nested($i_id, $i_left = 0)
        {
            // the right value of this node is the left value + 1
            $i_right = $i_left + 1;

            // get all children of this node
            $a_children = get_source_children($i_id);
            foreach ($a_children as $a) {
                // recursive execution of this function for each child of this node;
                // $i_right is the current right value, incremented by each recursive call
                $i_right = adjacent_to_nested($a['id'], $i_right);

                // insert the row into our new "nested set" table
                $s_query = "
                    INSERT INTO `nested_table` (`id`, `lft`, `rgt`, `category`)
                    VALUES (NULL, '" . $i_left . "', '" . $i_right . "', '" . mysql_real_escape_string($a['category']) . "')
                ";
                if (!mysql_query($s_query)) {
                    echo "<pre>$s_query</pre>\n";
                    throw new Exception(mysql_error());
                }
                echo "<p>$s_query</p>\n";

                // get the newly created row id
                $i_new_nested_id = mysql_insert_id();
            }
            return $i_right + 1;
        }

        /**
         * get_source_children
         *
         * Examines the "adjacency" table and finds all the immediate children of a node.
         * @param integer $i_id The unique id for a node in the adjacent_table table
         * @return array Returns an array of results, or an empty array if no results.
         */
        function get_source_children($i_id)
        {
            $a_return = array();
            $s_query = "SELECT * FROM `adjacent_table` WHERE `father_id` = '" . $i_id . "'";
            if (!$i_result = mysql_query($s_query)) {
                echo "<pre>$s_query</pre>\n";
                throw new Exception(mysql_error());
            }
            if (mysql_num_rows($i_result) > 0) {
                while ($a = mysql_fetch_assoc($i_result)) {
                    $a_return[] = $a;
                }
            }
            return $a_return;
        }
        ?>

    This is the output of the above script:

        INSERT INTO nested_table (id, lft, rgt, category) VALUES (NULL, '2', '5', 'Hard Cover')
        INSERT INTO nested_table (id, lft, rgt, category) VALUES (NULL, '2', '7', 'Large Format')
        INSERT INTO nested_table (id, lft, rgt, category) VALUES (NULL, '1', '8', 'Books')
        INSERT INTO nested_table (id, lft, rgt, category) VALUES (NULL, '1', '10', 'CD\'s')
        INSERT INTO nested_table (id, lft, rgt, category) VALUES (NULL, '10', '13', 'Vintage')
        INSERT INTO nested_table (id, lft, rgt, category) VALUES (NULL, '1', '14', 'Magazines')
        INSERT INTO nested_table (id, lft, rgt, category) VALUES (NULL, '0', '15', 'ROOT')

    As you can see, multiple rows share the lft value of "1", and the same goes for "2". In a nested set, the values for left and right must be unique.

    UPDATE - PROBLEM SOLVED

    First off, I had mistakenly believed that the source table (the one in adjacency-list format) needed to be altered to include a root node. This is not the case. Secondly, I found a cached page on Bing (of all places) with a class that does the trick. I've altered it for PHP 5 and converted the original author's database-related bits to basic PHP (he was using some DB class); you can convert them to your own database abstraction class later if you want. Obviously, if your source table has other columns that you want to move to the nested set table, you will have to adjust the write method in the class below. Hopefully this will save someone else from the same problems in the future.

        <?php
        /**
        -- Table structure for table `adjacent_table`

        DROP TABLE IF EXISTS `adjacent_table`;
        CREATE TABLE IF NOT EXISTS `adjacent_table` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `father_id` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;

        -- Dumping data for table `adjacent_table`

        INSERT INTO `adjacent_table` (`id`, `father_id`, `category`) VALUES
        (1, 0, 'Books'),
        (2, 0, 'CD''s'),
        (3, 0, 'Magazines'),
        (4, 1, 'Hard Cover'),
        (5, 1, 'Large Format'),
        (6, 3, 'Vintage');

        -- Table structure for table `nested_table`

        DROP TABLE IF EXISTS `nested_table`;
        CREATE TABLE IF NOT EXISTS `nested_table` (
          `lft` int(11) NOT NULL DEFAULT '0',
          `rgt` int(11) DEFAULT NULL,
          `id` int(11) DEFAULT NULL,
          `category` varchar(128) DEFAULT NULL,
          PRIMARY KEY (`lft`),
          UNIQUE KEY `id` (`id`),
          UNIQUE KEY `rgt` (`rgt`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
        */

        /**
         * @class   tree_transformer
         * @author  Paul Houle, Matthew Toledo
         * @created 2008-11-04
         * @url     http://gen5.info/q/2008/11/04/nested-sets-php-verb-objects-and-noun-objects/
         */
        class tree_transformer
        {
            private $i_count;
            private $a_link;

            public function __construct($a_link)
            {
                if (!is_array($a_link)) {
                    throw new Exception("First parameter should be an array. Instead, it was type '" . gettype($a_link) . "'");
                }
                $this->i_count = 1;
                $this->a_link = $a_link;
            }

            public function traverse($i_id)
            {
                $i_lft = $this->i_count;
                $this->i_count++;

                $a_kid = $this->get_children($i_id);
                if ($a_kid) {
                    foreach ($a_kid as $a_child) {
                        $this->traverse($a_child);
                    }
                }

                $i_rgt = $this->i_count;
                $this->i_count++;

                $this->write($i_lft, $i_rgt, $i_id);
            }

            private function get_children($i_id)
            {
                // leaves have no entry in the link table
                return isset($this->a_link[$i_id]) ? $this->a_link[$i_id] : array();
            }

            private function write($i_lft, $i_rgt, $i_id)
            {
                // fetch the source row
                $s_query = "SELECT * FROM `adjacent_table` WHERE `id` = '" . $i_id . "'";
                if (!$i_result = mysql_query($s_query)) {
                    echo "<pre>$s_query</pre>\n";
                    throw new Exception(mysql_error());
                }
                $a_source = array();
                if (mysql_num_rows($i_result)) {
                    $a_source = mysql_fetch_assoc($i_result);
                }

                // root node? label it, unless already labeled in the source table
                if (1 == $i_lft && empty($a_source['category'])) {
                    $a_source['category'] = 'ROOT';
                }

                // insert into the new nested set table; use mysql_real_escape_string
                // because one value ("CD's") contains a single quote
                $s_query = "
                    INSERT INTO `nested_table` (`id`, `lft`, `rgt`, `category`)
                    VALUES ('" . $i_id . "', '" . $i_lft . "', '" . $i_rgt . "', '" . mysql_real_escape_string($a_source['category']) . "')
                ";
                if (!$i_result = mysql_query($s_query)) {
                    echo "<pre>$s_query</pre>\n";
                    throw new Exception(mysql_error());
                } else {
                    // success: provide feedback
                    echo "<p>$s_query</p>\n";
                }
            }
        }

        mysql_connect('localhost', 'USER', 'PASSWORD') or die(mysql_error());
        mysql_select_db('DATABASE') or die(mysql_error());

        // build a complete copy of the adjacency table in RAM
        $s_query = "SELECT `id`, `father_id` FROM `adjacent_table`";
        $i_result = mysql_query($s_query);
        $a_rows = array();
        while ($a_row = mysql_fetch_assoc($i_result)) {
            $a_rows[] = $a_row;
        }

        $a_link = array();
        foreach ($a_rows as $a_row) {
            $i_father_id = $a_row['father_id'];
            $i_child_id = $a_row['id'];
            if (!array_key_exists($i_father_id, $a_link)) {
                $a_link[$i_father_id] = array();
            }
            $a_link[$i_father_id][] = $i_child_id;
        }

        $o_tree_transformer = new tree_transformer($a_link);
        $o_tree_transformer->traverse(0);
        ?>

  • Guide to MS/.NET/C# certifications?

    - by thr
    We've started looking at getting MS certifications at my job, trying to get a more even level of skill and knowledge across our teams. So I'm left wondering: is there any "roadmap" or guide from Microsoft that you can look at/follow to know which certifications are applicable to your organization, what they contain, what they cost, etc.?

  • Is there a language designed for code golf?

    - by J S
    I am not really a fan of code golf, but I have to wonder: is there an esoteric language designed for it? I mean a language with the following properties:

    - Common programs can be expressed in a very small number of characters.
    - It uses the ASCII character set effectively (for example, common operators are not identifiers, so they don't have to be separated by whitespace; character usage is distributed more or less evenly, because we cannot use Huffman coding; and so on).
    - Apart from the terse syntax, it should have very expressive and clean semantics (like, say, Python or Scheme); it shouldn't be difficult to program in.
    - It doesn't need features for large-scale programs, such as OOP, but it definitely should allow custom functions and data structures.
    - It should have a large standard library, and identifiers in this library should be as short as possible.

    Maybe it should be called CG? Languages that could be a source of inspiration are Forth, APL and Joy.

  • What's the fastest way to strip and replace high Unicode characters in a document using Python?

    - by Rhubarb
    I am looking to replace, throughout a large document, all high Unicode characters, such as accented E's and left and right quotes, with "normal" counterparts in the low range, such as a regular 'E' and straight quotes. I need to perform this on a very large document rather often. I see an example of this in what I think might be Perl here: http://www.designmeme.com/mtplugins/lowdown.txt. Is there a fast way of doing this in Python without using s.replace(...).replace(...).replace(...)...? I've tried that on just a few characters, and the document stripping became really slow.
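
    A minimal sketch of one common approach: a single str.translate pass for the punctuation, plus a Unicode decomposition to strip accents, so the cost stays at one scan per document instead of one per replaced character. The punctuation map below is illustrative, not exhaustive:

        import unicodedata

        # map typographic punctuation that NFKD decomposition won't touch
        PUNCT = {
            0x2018: u"'", 0x2019: u"'",   # curly single quotes
            0x201c: u'"', 0x201d: u'"',   # curly double quotes
            0x2013: u'-', 0x2014: u'-',   # dashes
        }

        def asciify(text):
            text = text.translate(PUNCT)                      # one pass for punctuation
            decomposed = unicodedata.normalize('NFKD', text)  # e-acute -> e + combining accent
            return decomposed.encode('ascii', 'ignore').decode('ascii')

        print(asciify(u'\u00c9l\u00e8ve \u201cquoted\u201d'))  # -> Eleve "quoted"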

  • Can Crystal Reports Scale to Fit Page?

    - by Jacob Reyes
    Hello, can a Crystal Report be scaled to fit the page? I'm hoping to achieve something similar to Microsoft Excel's Scale to Fit feature, wherein a large spreadsheet can be scaled to fit an 8.5" x 11" page (in MS Excel 2007, go to Page Layout > Scale to Fit). I'm searching for a way to make a large report fit onto a smaller page during printing; for example, a report designed for a Legal (8.5" x 14") page must be able to shrink when print-previewed for a Letter (8.5" x 11") page. My Crystal Report should be scaled to fit the page by default. I was thinking maybe there's a Crystal Reports setting or a C# code technique that I missed. Any hint or link in the right direction is appreciated. Thanks!

  • Overlapping images w/ image maps obstructing each other

    - by Hamster
    Information: the images have large transparent sections, so each must be overlapped to create the needed effect. Specifically, the clickable portions of each image are odd trapezoid shapes meant to be pressed up against each other. The images have image maps, with large portions being overlapped by the transparent portions of other nearby (trapezoid) images, so I don't expect any change in z-indexes will solve this... Combining the image files into a single larger one overlaid with one image map for all the sections seems less than ideal, especially since I may need to re-order or rename them later, never mind hover animations and other possibilities down the road. What would be the best workaround?

  • Adding a new column to a table which contains live data

    - by Ardman
    I have a large table consisting of over 60 million records, and I would like to add 2 new columns for data migration purposes. There are indexes on the table, and some of them are large. So, by adding the 2 new columns, will I run the risk of slowing down the database whilst it attempts to add them, and maybe time out? Or will it just work? I know that if I try to rearrange the columns, SQL Server will ask me to drop and re-create the table, so I definitely don't want that. Is this something everyone is challenged with?
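
    A hedged note that may apply here: in SQL Server, adding a nullable column with no default is a metadata-only change, so it completes almost instantly regardless of row count; it's columns added as NOT NULL, or with defaults on older versions, that force every row to be rewritten. A sketch, with hypothetical table and column names:

        -- near-instant: nullable, no default, no row data touched
        ALTER TABLE dbo.BigTable
            ADD MigrationFlag bit NULL,
                MigratedAt   datetime NULL;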

  • TeamCity build-number-independent artifacts

    - by Stanislav Shevchenko
    Hello, my TeamCity nightly build produces more than 130MB of Javadoc as a build configuration artifact. I have to share this artifact with other teams (developers), but the Javadoc gets a different URL (I know that it's possible to get it via .lastFinished) and a new place on the build machine with every build. Is it possible to replace the old artifact with the new one on each build, since I don't need the previous versions, and to have a build-number-independent URL for accessing it?

  • How to limit the ForeignKey dropdown with constraints?

    - by FurtiveFelon
    Hi all, I have a database which keeps track of interactions between two different teams (represented in the admin interface by two different groups). For some fields I have a ForeignKey to the User model, and I would like to limit the dropdown to people from specific groups only. Any suggestions would be much appreciated! Jason
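
    If this is Django (the group/admin vocabulary suggests so), one hedged sketch is the built-in limit_choices_to argument, which filters the admin dropdown at the field level; the model and group names here are made up for illustration:

        from django.contrib.auth.models import User
        from django.db import models

        class Interaction(models.Model):
            # only users belonging to the named auth Group appear in the
            # admin dropdown for each field
            requester = models.ForeignKey(
                User,
                related_name='requested_interactions',
                limit_choices_to={'groups__name': 'Team A'},
            )
            responder = models.ForeignKey(
                User,
                related_name='responded_interactions',
                limit_choices_to={'groups__name': 'Team B'},
            )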

  • How do I delete a document that errors with the 32k limit?

    - by Ramkumar
    I have some documents that I cannot open: doing so shows an error like "field is too large (32k) or view's column & selection formulas are too large". Whenever I try to delete such a document, I get the same error, so I am not able to delete it. We could try to get at the document via the backend, but there I cannot get a handle on the document: whatever I search for, the document collection count is 0. Important: I am using Notes 6.5.2. Thanks in advance.

  • Log with timestamps that have millisecond accuracy & resolution in Windows C++

    - by Psychic
    I'm aware that for timing accuracy, functions like timeGetTime, timeBeginPeriod, and QueryPerformanceCounter are great, giving both good resolution and accuracy, but they are based only on time-since-boot, with no direct link to clock time. However, I don't want to time events as such. I want to produce an exact timestamp (local time) that I can display in a log file, e.g. 31-12-2010 12:38:35.345, for each entry made (I need the millisecond accuracy). The standard Windows time functions, like GetLocalTime, return millisecond values but don't have millisecond resolution, depending on the OS. I'm using XP, so I can't expect much better than about 15ms resolution. What I need is a way to get the best of both worlds, without creating a large overhead to get the required output: an overly expensive method or calculation would mean the logger starts to eat up too much time during operation. What would be the best/simplest way to do this?
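
    One common recipe, sketched below under the assumption that some long-term drift against the wall clock is tolerable for a log: pair the system time and the high-resolution counter once at startup, then derive every timestamp from the counter delta, which costs only a QueryPerformanceCounter call plus arithmetic per log entry:

        #include <windows.h>
        #include <stdio.h>

        static LARGE_INTEGER g_freq, g_baseCounter;
        static ULARGE_INTEGER g_baseTime;   // FILETIME value: 100ns units, UTC

        void InitLogClock()
        {
            FILETIME ft;
            QueryPerformanceFrequency(&g_freq);
            GetSystemTimeAsFileTime(&ft);        // coarse, but only sampled once
            QueryPerformanceCounter(&g_baseCounter);
            g_baseTime.LowPart  = ft.dwLowDateTime;
            g_baseTime.HighPart = ft.dwHighDateTime;
        }

        void FormatTimestamp(char *buf, size_t len)
        {
            LARGE_INTEGER now;
            QueryPerformanceCounter(&now);

            // elapsed ticks -> 100ns units, added to the baseline wall time
            ULONGLONG elapsed =
                (now.QuadPart - g_baseCounter.QuadPart) * 10000000ULL
                / (ULONGLONG)g_freq.QuadPart;
            ULARGE_INTEGER t = g_baseTime;
            t.QuadPart += elapsed;

            FILETIME utc, local;
            SYSTEMTIME st;
            utc.dwLowDateTime  = t.LowPart;
            utc.dwHighDateTime = t.HighPart;
            FileTimeToLocalFileTime(&utc, &local);
            FileTimeToSystemTime(&local, &st);

            _snprintf(buf, len, "%02u-%02u-%04u %02u:%02u:%02u.%03u",
                      st.wDay, st.wMonth, st.wYear,
                      st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
        }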

  • Rails - Active Record :conditions overrides :select

    - by Nick
    I have a fairly large model, and I want to retrieve only a select set of fields for each record in order to keep the JSON string I am building small. Using :select with find works great, but my key goal is to use conditional logic with an associated model. Is the only way to do this really with a lambda in a named scope? I'm dreading that, perhaps unnecessarily, but I'd like to understand if there is a way to make :select work with a condition. This works:

        @sites = Site.find :all, :select => 'id,foo,bar'

    When I try this:

        @sites = Site.find :all,
          :select => 'id,foo,bar',
          :include => [:relatedmodel],
          :conditions => ["relatedmodel.type in (?)", params[:filters]]

    the condition works, but each record includes all of the Site attributes, which makes my JSON string way too large. Thanks for any pointers!
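
    For what it's worth, a hedged sketch of why this happens and one workaround: in Rails 2.x, :include (eager loading) silently ignores :select, because the eager-load SQL has to fetch full rows for every table involved. If you only need the condition, not the association's data, :joins respects :select; the table name relatedmodels below is an assumption about your schema:

        # :joins adds the INNER JOIN for the condition but leaves :select alone
        @sites = Site.find :all,
          :select     => 'sites.id, sites.foo, sites.bar',
          :joins      => :relatedmodel,
          :conditions => ['relatedmodels.type IN (?)', params[:filters]]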

  • Love coding but offered a server/network job -- any advice?

    - by Pete
    I really enjoy software development; I've done it full-time for going on 3 years now at a small company, and I still find it interesting and exciting. I haven't had much server/network experience, but I have an opportunity to work for a large IT company dealing with server setup, configuration, and maintenance, plus some networking work as well. The thing is, I'm not sure whether to accept. If I were to take the job, it would involve relatively little coding, if any, and I'm guessing it would start me down a career path away from coding. Then again, the company is large enough to have a coding division, so I guess in a few years I could transition back to the software side of things if I wanted; I'm just not sure whether I would enjoy the server/network side. Any advice is greatly appreciated, especially if you have been in a similar situation. Thanks!

  • CUDA small-kernel 2D convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days, trying to perform a fast 2D convolution between a 500x500 image (though I could also vary the dimensions) and a very small 2D kernel (a 2D Laplacian kernel, so it's 3x3 - too small to take a huge advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as easy as you would think), and then I started creating CUDA kernels. After a few disappointing attempts to perform a faster convolution, I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically lets a 16x16-thread block load all the convolution data it needs into shared memory and then performs the convolution. Nothing: the CPU is still a lot faster. I didn't try the FFT approach, because the CUDA SDK states that it is efficient with large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?
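
    A hedged observation and a baseline to measure against: for a 500x500 single-channel image the arithmetic is tiny, so host-device transfers and launch overhead can easily dwarf the compute; timing the kernel alone (with the image already resident on the GPU) against the end-to-end path usually shows where the time goes. A minimal one-thread-per-pixel kernel for that comparison:

        // each thread computes one interior output pixel of a 3x3 convolution
        __global__ void conv3x3(const float *in, float *out,
                                int w, int h, const float *k /* 3x3, row-major */)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;

            float acc = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += in[(y + dy) * w + (x + dx)] * k[(dy + 1) * 3 + (dx + 1)];
            out[y * w + x] = acc;
        }

        // launch sketch: dim3 block(16, 16);
        //                dim3 grid((w + 15) / 16, (h + 15) / 16);
        //                conv3x3<<<grid, block>>>(d_in, d_out, w, h, d_k);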

  • Working effectively with unit tests / Has anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing to my team, as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj where all the test code is dumped and organised into folders that mirror the namespaces. This approach is not very scalable and makes it difficult to find tests. I've been thinking about adding a Tests folder to each project; this would put the unit tests "in the developer's face" and make them easy to find. The downside is that the production release code would contain references to nunit and nmocks, as well as the test code and test data... Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per "real" project/assembly would introduce too many new projects. Thanks in advance.
