Search Results

Search found 3828 results on 154 pages for 'mathematical optimization'.

Page 42 of 154

  • Big O, how do you calculate/approximate it?

    - by Sven
    Most people with a degree in CS will certainly know what Big O stands for. It helps us measure how (in)efficient an algorithm really is, and if you know what category the problem you are trying to solve lies in, you can figure out whether it is still possible to squeeze out that little extra performance.* But I'm curious: how do you calculate or approximate the complexity of your algorithms?

    *: But as they say, don't overdo it; premature optimization is the root of all evil, and optimization without a justified cause deserves that name as well.
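    A practical complement to pencil-and-paper analysis is to measure growth empirically: time the code at doubling input sizes and compare the ratios. A minimal Python sketch (bubble sort is just a stand-in target; a ratio near 2 suggests O(n), near 4 suggests O(n^2), very roughly, since constants and caching muddy small inputs):

        import random, time

        def bubble_sort(xs):                       # deliberately O(n^2) stand-in
            xs = list(xs)
            for i in range(len(xs)):
                for j in range(len(xs) - 1 - i):
                    if xs[j] > xs[j + 1]:
                        xs[j], xs[j + 1] = xs[j + 1], xs[j]
            return xs

        # time at doubling sizes and eyeball how the runtime scales
        for n in (1000, 2000, 4000, 8000):
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            bubble_sort(data)
            print(n, round(time.perf_counter() - start, 3))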

    Read the article

  • How to see the optimized code in C

    - by sganesh
    I can examine the effect of optimization with a profiler, by the size of the executable file, and by the execution time, so I can see the result of the optimization. But I have these questions: How can I see the optimized C code itself? Which algorithms or methods does the compiler use to optimize the code? Thanks in advance.
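    Strictly speaking there is no "optimized C" to read back (the compiler optimizes an internal representation), but you can inspect what it produced. A hedged sketch driving GCC from Python; it assumes gcc is on PATH, -S stops after compilation and emits assembly, and GCC's -fdump-tree-optimized additionally writes a C-like dump of the optimized intermediate representation next to the source:

        import subprocess

        def dump_optimized(c_file, opt_level="-O2"):
            asm_file = c_file.rsplit(".", 1)[0] + ".s"
            # emit optimized assembly plus a *.optimized GIMPLE dump
            subprocess.run(
                ["gcc", opt_level, "-S", "-fdump-tree-optimized",
                 c_file, "-o", asm_file],
                check=True,
            )
            return asm_file

        dump_optimized("example.c")   # example.c is a placeholder filename

    Comparing the dumps produced at -O0 and -O2 is about the closest thing there is to "seeing the optimized C".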

    Read the article

  • Does Your Business Really Need a Search Engine Optimization Specialist?

    If you're an individual or an organization planning to employ a search engine optimization professional, you might first want to consider the possibility of doing this work yourself. Certainly, a search engine optimization consultant can boost your site and perhaps help save you time and effort, yet the fact is that SEO is hardly an advanced science, and there are just a handful of authentic professionals within the field. So, if you depend on somebody else to be in charge of your search engine optimization, there's a high probability that the person will not possess a great deal more knowledge than you. And if you lack any knowledge, all the relevant information is easy to obtain.

    Read the article

  • What's a good library for parsing mathematical expressions in Java?

    - by CSharperWithJava
    I'm an Android developer, and as part of my next app I will need to evaluate a large variety of user-created mathematical expressions and equations. I am looking for a good Java library that is lightweight and can evaluate mathematical expressions using user-defined variables and constants, trig and exponential functions, etc. I've looked around and Jep seems to be popular, but I would like to hear more suggestions, especially from people who have used these libraries before.
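    For a sense of what such a library does (parse once into a tree, then evaluate against user-supplied variables), here is a minimal sketch, written in Python rather than Java purely for illustration; the operator and function tables are an arbitrary subset:

        import ast, math, operator

        _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
                ast.Mult: operator.mul, ast.Div: operator.truediv,
                ast.Pow: operator.pow, ast.USub: operator.neg}
        _FUNCS = {"sin": math.sin, "cos": math.cos, "exp": math.exp}

        def evaluate(expr, variables):
            def walk(node):
                if isinstance(node, ast.Expression):
                    return walk(node.body)
                if isinstance(node, ast.Constant):      # numeric literal
                    return node.value
                if isinstance(node, ast.Name):          # user-defined variable
                    return variables[node.id]
                if isinstance(node, ast.BinOp):
                    return _OPS[type(node.op)](walk(node.left), walk(node.right))
                if isinstance(node, ast.UnaryOp):
                    return _OPS[type(node.op)](walk(node.operand))
                if isinstance(node, ast.Call):          # whitelisted function
                    return _FUNCS[node.func.id](*map(walk, node.args))
                raise ValueError("unsupported syntax")
            return walk(ast.parse(expr, mode="eval"))

        evaluate("2 * sin(x) + y ** 2", {"x": 0.5, "y": 3.0})

    Libraries like Jep expose essentially this interface, with the parse step cached so that repeated evaluation against new variable values is cheap.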

    Read the article

  • Performance problems when running Java desktop applications on Citrix Metaframe

    - by demetriusnunes
    We have a desktop Java application running within a Citrix Metaframe server farm, and the performance, especially while starting up the app, is very unreliable. Sometimes it takes 15 seconds and sometimes over a minute; it's really unpredictable. Is there any way to tune Java desktop applications running in Citrix Metaframe terminal server sessions to a more reliable performance level? Are there any optimizations directed specifically at Java, such as pre-loaded JVMs or something like that? Any help would be greatly appreciated.

    Read the article

  • Options to optimize Lotus Notes 8.5.x

    - by Jakub
    Has anyone found genuinely useful optimization methods for the bulky, fat, Eclipse-based giant of a nuisance that is Lotus Notes 8.5? I want it to be fast and not eat up system resources like crazy while I run it ALL day (it is my company's corporate mail / calendar / scheduling solution). I've tried various hacks for the JVM heap size (if I recall correctly); none really brought a performance improvement. I have a dual-core CPU, if that helps (I tried going the route of optimizing Java for two cores in hopes it would work, but saw no speed improvement). Notes is just so bloated. Does anyone have suggestions to optimize or mod this thing so it is more responsive and less of a resource hog? Note: I don't want to switch to the web version or the stripped-down standard versions; I am aware of those, I just cannot use them since we don't run them internally at the company.
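    For what it's worth, the tweak most often circulated for the 8.5 standard client is capping the embedded Eclipse JVM's heap in jvm.properties under the Notes program directory (framework\rcp\deploy\jvm.properties). Treat the following as an unverified example rather than a recommendation; the exact keys and safe values vary by release and installed memory:

        vmarg.Xmx=-Xmx512m
        vmarg.Xms=-Xms128m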

    Read the article

  • Fusion 3.1 and Parallels 6 for Win7 x64

    - by Ronnie
    I read in a recent article that Parallels Desktop 6 is faster almost everywhere than VMware Fusion. I was originally using Parallels 4 before switching to VMware because of frequent Parallels crashes. Since I now make heavy use of Fusion on my MacBook for a big Win7 x64 virtualized development machine that I find too slow, I am wondering whether the claimed speed-up of Parallels 6 justifies switching back. As a test I converted my Fusion 3.1 VM to a trial of Parallels Desktop 6, and my Windows Experience Index dropped from 4.7 under Fusion to 4.5 under Parallels 6, so apparently the virtualized machine is not seeing that speed benefit. Is there any optimization to apply in Parallels to increase the WEI, or should I stay with Fusion (in which case articles like that are just marketing)?

    Read the article

  • Does anyone still use Iometer?

    - by Brian T Hannan
    "Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It is used as a benchmark and troubleshooting tool and is easily configured to replicate the behaviour of many popular applications." link text Does anyone still use this tool? It seems helpful, but I'm not sure if it's for the thing I am trying to work on. I am trying create a benchmark computer performance test that can be run before and after a Windows Optimization program does its stuff (ex: PC Optimizer Pro or CCleaner). I want to be able to make a quick statement like CCleaner makes the computer run 50% faster or something along those lines. Are there any newer tools like this one?

    Read the article

  • What are the practical differences between an IP address and a server?

    - by JMC Creative
    My understanding of IPs and other DNS-type, server-related issues really falls short (read: extreme noob). I know a dedicated server would increase speed. What difference in speed, if any, would a dedicated IP make? Am I correct in understanding the Best Practices from Yahoo that I could use the second IP to serve up some content, which would increase the number of parallel downloads for the user? Or will both IPs (purchased on the same hosting account) point to the same server? How does it work? Are there other optimization issues I should be aware of when considering a dedicated IP? Clarification: I am talking about the speed of serving the web pages, i.e. the speed of my website. Yes, I know that an IP and a server are completely different things, not even opposites, just different. But this, indeed, is my question! The question, reformulated: will having a second (dedicated) IP on my website speed up the time it takes pages to load and display for the user? Or does that have nothing to do with the IP and is only a server issue? I'm sorry if this is still unclear; it is a real question, I may just not be wording it well.

    Read the article

  • .NET Runtime Optimization Service

    - by Velika
    I see that the service ".NET Runtime Optimization Service v2.0.50727_X86" (C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorsvw.exe) is disabled. I guess I probably did that; I'm not sure. Do I need it? Should it be running?

    Read the article

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied elsewhere within the surface:

        add_blt(rect src, point dst);

    Any number of operations can be posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course: a blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation. It's tricky but not impossible to put this together; I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    - Implement customized region-handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it automatically associates the metadata of that rectangle with the resulting sub-rectangles.
    - When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    - Iterate through the blt queue, and for every entry:
      - Let srcrect be the source rectangle for the blt being examined.
      - Get the intersection of srcrect and the master region into a temp region.
      - Remove the temp region from the master region, so the master region no longer covers it.
      - Promote srcrect to a region (srcrgn) and subtract the temp region from it.
      - Offset the temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
      - Add to the master region all rects in the temp region, retaining the original source metadata (step one of adding the current blt to the master region).
      - Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    - Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If so, combine them.
    - Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination; the associated metadata is the blt operation's source.
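    One way to sanity-check the region bookkeeping above is a brute-force reference model: track, for every destination pixel, which original pixel its value ultimately comes from, then group moved pixels by displacement vector. A Python sketch (pixel-level on purpose, so it is only useful as a test oracle for the rectangle version; blits are ((sx, sy, w, h), (dx, dy)) tuples and are assumed to stay in bounds):

        def optimize_blits(blits, width, height):
            # origin[p] = the original pixel whose value currently lives at p
            origin = {(x, y): (x, y) for x in range(width) for y in range(height)}
            for (sx, sy, w, h), (dx, dy) in blits:
                # snapshot first, so a blit whose src and dst overlap reads
                # pre-blit values, exactly like a real queued operation would
                snapshot = {(x, y): origin[(x, y)]
                            for x in range(sx, sx + w)
                            for y in range(sy, sy + h)}
                for (x, y), o in snapshot.items():
                    origin[(x - sx + dx, y - sy + dy)] = o
            # group every pixel that actually moved by its displacement vector;
            # each group is one "copy these source pixels by (vx, vy)" operation
            groups = {}
            for dest, src in origin.items():
                if dest != src:
                    vec = (dest[0] - src[0], dest[1] - src[1])
                    groups.setdefault(vec, []).append(src)
            return groups

    Each group corresponds to one maximal copy; an execution plan would still have to order the groups (or spill through a temporary) so that no group overwrites another group's source pixels before they are read.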

    Read the article

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indexes). There are several schemata: one for the (read-only, irregularly changed/updated) 'core model' and one for every user (about 20 people). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration, etc., and is running quite well. In the next few days an upgrade of the DB storage hard disks arrives, all with the same performance as the old ones. I'm looking for the best way to distribute the data across these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure whether this is really worth the effort. It seems a much better idea to move the WAL files (the pg_xlog directory) to their own partition (see http://www.postgresql.org/docs/8.4/static/wal-internals.html). What are your opinions? Are there any tablespace- or partitioning-related performance documents / benchmarks?
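    Once the new disks are mounted, trying a separate tablespace is mechanically cheap. A hypothetical sketch with psycopg2 (the tablespace name, path, and index are invented; CREATE TABLESPACE cannot run inside a transaction block, hence autocommit; moving pg_xlog itself is done at the filesystem level, not via SQL):

        import psycopg2

        conn = psycopg2.connect("dbname=core user=postgres")
        conn.autocommit = True
        with conn.cursor() as cur:
            # put the new disk behind a named tablespace...
            cur.execute("CREATE TABLESPACE fastdisk LOCATION '/mnt/newdisk/pgdata'")
            # ...and move one hot object onto it to benchmark the difference
            cur.execute("ALTER INDEX big_core_idx SET TABLESPACE fastdisk")

    Measuring one moved index against your real workload will tell you more than a generic benchmark about whether separating the core schema is worth the effort.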

    Read the article

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this:

        if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT"; }
        if (shExpMatch(host, "youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT"; }

    At the moment about 125 sites are rerouted this way. However, I plan on adding quite a few more domains, and I'm guessing it will eventually be a list of 500-1000 domains. It's important not to reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise? Thanks
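    One commonly suggested shape for large lists is a single object-literal lookup instead of hundreds of shExpMatch calls, since property lookup is effectively constant-time. A hedged sketch that generates such a PAC file from a domain list (the domains and proxy address are the examples from above; whether this actually beats shExpMatch depends on the browser's PAC engine, so measure):

        # generate_pac.py - writes a proxy.pac with an O(1) host lookup table
        domains = ["youtube.com", "www.youtube.com"]   # extend to the full list

        def make_pac(domains, proxy="PROXY proxy.domain.tld:8080; DIRECT"):
            entries = ",\n".join('  "%s": 1' % d for d in sorted(set(domains)))
            lines = [
                "var proxied = {",
                entries,
                "};",
                "function FindProxyForURL(url, host) {",
                '  if (proxied[host] === 1) return "%s";' % proxy,
                '  return "DIRECT";',
                "}",
            ]
            return "\n".join(lines) + "\n"

        with open("proxy.pac", "w") as f:
            f.write(make_pac(domains))

    Note that www.youtube.com and youtube.com each need their own entry with an exact-match table, just as they each needed their own shExpMatch line.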

    Read the article

  • Using optimization to assign by preference

    - by Aarthi
    I have 100 objects ("candies") that I need to distribute among five people so that each has an equal number of candies (in this case, 20 candies per person). However, each person has also expressed their candy preferences to me in a chart, similar to below: top-favored candies receive 10 points, least-favored candies receive -10 points, and neutrally favored candies receive 0.5 points. I need to sort the items out so that:

    - each person receives the same number of candies;
    - each person's total "satisfaction" (points) is maximized;
    - my output is a list of each person's assigned items.

    I'm familiar with Excel's in-house Monte Carlo simulation tools (Solver, F9 dice roll, etc.) and would like to stick to those tools. While I know how to set up the chart, and how to feed the column summation into Solver, I don't know how to get it to give me the desired output. Furthermore, how do I adjust Solver so it takes into account individual preferences rather than empirical ones? To wit: how do I begin setting up this model?
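    Outside Excel, "equal shares, maximum total satisfaction" is a classic assignment problem with an exact solution, no Monte Carlo needed: expand each person into 20 "slots" and match 100 slots to 100 candies. A sketch with SciPy (the preference matrix is randomly generated here as a stand-in for the real chart):

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(0)
        prefs = rng.choice([10.0, 0.5, -10.0], size=(5, 100))  # person x candy scores

        # rows i*20 .. i*20+19 are person i's 20 slots; negate because the solver minimizes
        cost = -np.repeat(prefs, 20, axis=0)
        slot_idx, candy_idx = linear_sum_assignment(cost)

        assignment = {p: [] for p in range(5)}
        for slot, candy in zip(slot_idx, candy_idx):
            assignment[slot // 20].append(candy)   # integer-divide slot back to person

    In Excel terms the same model is 500 binary variables (one per person-candy pair) with equality constraints on the row and column sums; that brushes up against Solver's limits, which is why the exact assignment formulation above is attractive.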

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512 MB) from Linode running nginx + php5-fpm (which comes with php 5.3.3) on Debian Lenny (i686); total memory usage was about 90-100 MB. Now I have another VPS (different hosting company) where I run the same nginx + php5-fpm stack on Debian Lenny (x86_64). The system is 64-bit, so memory usage is higher now, about 210-230 MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    This is what top tells me:

        top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 532288k total, 469628k used, 62660k free, 28760k buffers
        Swap: 1048568k total, 408k used, 1048160k free, 210060k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
        22806 www-data 20  0  178m  67m  31m S    1 13.1 0:05.02 php5-fpm
         8980 mysql    20  0  241m  55m 7384 S    0 10.6 2:42.42 mysqld
        22807 www-data 20  0  162m  43m  22m S    0  8.3 0:04.84 php5-fpm
        22808 www-data 20  0  160m  41m  23m S    0  8.0 0:04.68 php5-fpm
        25102 www-data 20  0  151m  30m  21m S    0  5.9 0:00.80 php5-fpm
        10849 root     20  0 44100 8352 1808 S    0  1.6 0:03.16 munin-node
        22805 root     20  0  145m 4712 1472 S    0  0.9 0:00.16 php5-fpm
        21859 root     20  0 66168 3248 2540 S    1  0.6 0:00.02 sshd
        21863 root     20  0 66028 3188 2548 S    0  0.6 0:00.06 sshd
         3956 www-data 20  0 31756 3052  928 S    0  0.6 0:06.42 nginx
         3954 www-data 20  0 31712 3036  928 S    0  0.6 0:06.74 nginx
         3951 www-data 20  0 31712 3008  928 S    0  0.6 0:06.42 nginx
         3957 www-data 20  0 31688 2992  928 S    0  0.6 0:06.56 nginx
         3950 www-data 20  0 31676 2980  928 S    0  0.6 0:06.72 nginx
         3955 www-data 20  0 31552 2896  928 S    0  0.5 0:06.56 nginx
         3953 www-data 20  0 31552 2888  928 S    0  0.5 0:06.42 nginx
         3952 www-data 20  0 31544 2880  928 S    0  0.5 0:06.60 nginx

    So, the question: is there any way to use less memory? By the way, I have 16 cores and it would be nice to make use of them...

    Read the article

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe MySQL needs further configuration is that New Relic has shown our SELECT queries taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results (disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492

        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        join_buffer_size = 128M  # for some nightly processes client sessions set the join buffer to 8 GB
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
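    The tuner's scariest line, "Maximum possible memory usage: 220.0G", is easy to reproduce from the my.cnf above, and doing the arithmetic shows which knob matters (a sketch; MySQL's real accounting has more terms, but these dominate):

        GIB, MIB = 1024**3, 1024**2

        # global buffers: innodb pool + key buffer + query cache
        #                 + bulk insert buffer + innodb log buffer
        global_buffers = 32*GIB + 512*MIB + 512*MIB + 512*MIB + 2*GIB     # 35.5 GiB

        # worst case per connection: sort + read + read_rnd + join buffers + thread stack
        per_thread = (24 + 8 + 24 + 128) * MIB + 512 * 1024               # 184.5 MiB

        worst_case = global_buffers + 1024 * per_thread                   # max_connections
        print(worst_case / GIB)                                           # ~220 GiB

    Since only ~61 connections are ever in use, lowering max_connections (or join_buffer_size, which at 128M is the dominant per-thread term) brings the worst case back under physical RAM.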

    Read the article

  • Should we regularly schedule mysqlcheck (or database optimization)?

    - by scatteredbomb
    We run a forum with some 2 million posts, and I've noticed that if left untouched the overhead in MySQL (as listed in phpMyAdmin) can get quite large (hundreds of megabytes). I'm wondering whether scheduling a regular mysqlcheck to optimize the tables is good practice. Any reason not to do it, say, once a week at an off-peak hour? There was a time over the summer when our site was constantly crashing because MySQL was using up all resources. That's when I noticed the huge amount of overhead; I optimized the database and haven't had any stability problems since. I figured that if that helped alleviate the issues, I should just set up a cron job to do it automatically.
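    Scheduling it off-peak is common practice; the main caveats are that optimizing locks each table while it runs, and that for InnoDB tables OPTIMIZE is a full rebuild rather than an in-place defrag. A hedged example crontab entry (4 a.m. every Sunday; assumes credentials come from ~/.my.cnf):

        0 4 * * 0  /usr/bin/mysqlcheck --optimize --all-databases --silent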

    Read the article

  • mysql inserts & updates optimized

    - by user271619
    This is mostly an optimization question. I have many forms on my sites that do simple INSERTs and UPDATEs (nothing complicated), but several of the forms' input fields are not required and may be left empty. Nevertheless, my SQL statement lists all columns. My question: is it best to tailor the INSERT/UPDATE queries and include only the columns that actually changed? We all hear that we shouldn't use "SELECT *" unless all columns are genuinely needed for display. But what about INSERTs and UPDATEs? Hope this makes sense. I'm sure any amount of optimization is acceptable, but I never really hear about this, specifically, from anyone.
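    Building the statement from only the submitted fields is straightforward with parameterized SQL. A minimal sketch (the table, columns, and driver usage are illustrative; column names must come from a known whitelist, never raw user input, since only values are parameterized):

        def build_update(table, changed, key_col, key_val):
            # changed: dict of column -> new value for the submitted fields only
            assignments = ", ".join("%s = %%s" % col for col in changed)
            sql = "UPDATE %s SET %s WHERE %s = %%s" % (table, assignments, key_col)
            return sql, list(changed.values()) + [key_val]

        sql, params = build_update("users", {"email": "new@example.com"}, "id", 42)
        # cursor.execute(sql, params)   # e.g. with MySQLdb or pymysql

    Whether it measurably helps depends on the storage engine, but it reliably keeps unsubmitted fields from being clobbered with empty values, which is the practical reason most people do it.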

    Read the article

  • Windows 7: Microsoft Desktop Optimization Pack updated, the virtualization and deployment pack

    Windows 7: Microsoft Desktop Optimization Pack updated, the virtualization and deployment pack. Microsoft has just updated its deployment and virtualization solution pack, the Microsoft Desktop Optimization Pack (MDOP). The MDOP update mainly covers MED-V (Microsoft Enterprise Desktop Virtualization), now available in version 2.0, and App-V 4, whose Service Pack 1 is now available. SP1 for Microsoft App-V 4 makes the application virtualization process easier and faster thanks to the integration of the "package ...

    Read the article

  • Tips For Choosing Best Search Engine Optimization Services For Your Company!

    With the profusion of companies offering search engine optimization services out there, it is important to know what to look for when choosing one. To make the right choice, you have to know the essential qualities of a good SEO services company; armed with this information, you will be better equipped to make the right choice for your organization.

    Read the article

  • How Does a Landing Page Affect Search Engine Optimization?

    Search engine optimization, or SEO for short, is one of the many things those new to online commerce have to deal with when setting up their web presence. One SEO-related question that seems to come up frequently is: "Does a landing page affect search engine optimization, and if so, to what degree?" The short answer is: yes, it does.

    Read the article
