Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 383/672 | < Previous Page | 379 380 381 382 383 384 385 386 387 388 389 390  | Next Page >

  • Why is the software world full of status codes?

    - by David V McKay
    Why did programmers ever start using status codes? I mean, I guess I could imagine this might be useful back in the days when a text string was an expensive resource. WAYYY back then. But even after we had megabytes of memory to work with, we continued to use them. What possible advantage could there be in obfuscating the meaning of an error message or status message behind a status code?

    Read the article

  • Dropping PendingIntents

    - by Jeremy Edwards
    Is it OK to drop PendingIntents in Android if they are never used? For example, in an AppWidgetProvider, can a PendingIntent that was never used simply be overwritten by a new PendingIntent, or should we call cancel() on all unused PendingIntents to clean up memory appropriately?
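
    For concreteness, a minimal Android sketch of the two options being weighed; the action string and class names are made up:

        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;

        class PendingIntentCleanup {
            static void refresh(Context context) {
                Intent intent = new Intent("com.example.ACTION_REFRESH"); // hypothetical action

                // Option 1: explicitly cancel whatever registration the system holds
                PendingIntent old = PendingIntent.getBroadcast(
                        context, 0, intent, PendingIntent.FLAG_NO_CREATE); // null if none exists
                if (old != null) {
                    old.cancel();
                }

                // Option 2: let FLAG_UPDATE_CURRENT overwrite the existing one in place
                PendingIntent fresh = PendingIntent.getBroadcast(
                        context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
            }
        }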

    Read the article

  • Summary of the last decade of garbage collection?

    - by Ben Karel
    I've been reading through the Jones & Lins book on garbage collection, which was published in 1996. Obviously, the computing world has changed dramatically since then: multicore, out-of-order chips with large caches, and even larger main memory in desktops. The world has also more-or-less settled on the x86 and ARM instruction sets for most consumer-facing systems. How has the field of garbage collection changed since that seminal book was published?

    Read the article

  • Lightweight publish/subscribe framework in Java

    - by mdma
    Is there a good lightweight framework for Java that provides the publish/subscribe pattern? Some ideal features:
    - support for generics
    - registration of multiple subscribers to a publisher
    - an API that is primarily interfaces, plus some useful implementations
    - purely in-memory; persistence and transaction guarantees are not required
    I know about JMS, but that is overkill for my need. The published data are the results of scans of a file system: scan results are fed to one component, processed, fed to the next component, and so on.
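
    Something like this tiny sketch is the shape of API I mean; all names are made up:

        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        interface Subscriber<T> {
            void onPublish(T item);
        }

        class Publisher<T> {
            private final List<Subscriber<T>> subscribers =
                    new CopyOnWriteArrayList<Subscriber<T>>();

            void subscribe(Subscriber<T> s)   { subscribers.add(s); }
            void unsubscribe(Subscriber<T> s) { subscribers.remove(s); }

            void publish(T item) {
                for (Subscriber<T> s : subscribers) {
                    s.onPublish(item); // deliver to every registered subscriber
                }
            }
        }

    Chaining stages is then just a subscriber that owns its own publisher: it processes each scan result and republishes the processed item to the next stage.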

    Read the article

  • How do I prove I should put a table of values in source code instead of a database table?

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where another solution would be better (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary search). Normally, a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other source code changes), and it fits in memory and can be keyed into via binary search (a tad faster than a db, but speed is not an issue).

    The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department. These all hit their own platform's version of their databases (containing their own copy of the static data). With every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flat file, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected, because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication, implement more safeguards and standards, etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter.

    However, I have it on very good authority, from somebody with extensive formal study of sociology and psychology, that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.
    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks that are causing problems. OK, that's the background, for the curious.

    I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and do it not using logic, but authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE (instead of a flat file or other storage)? If you need...
    - random reads / transparent search optimization
    - advanced / varied / customizable searching and sorting capabilities
    - transactions / rollback
    - locks, semaphores
    - concurrency control / shared users
    - security
    - 1-many / m-m relationships (easier)
    - easy modification
    - scalability
    - load balancing
    - random updates / inserts / deletes
    - advanced queries
    - administrative control of design, etc.
    - SQL / its learning curve
    - debugging / logging
    - centralized / live backup capabilities
    - cached queries / development and caching of execution plans
    - interleaved updates and reads
    - referential integrity; avoiding redundant/missing/corrupt/out-of-sync data
    - reporting (from an OLAP or OLTP db) / turnkey generation tools
    Disadvantages:
    - important to get right the first time (professional design), but only because it's meant to last
    - software & hardware cost
    - usually over a network, a speed issue (even the best design, run locally, requires a separate process with marshalling/network layers/inter-process communication)
    - indices and query processing can stand in the way of simple processing (vs. a flat file)

    WHY USE A FLAT FILE? If you only need...
    - sequential row processing
    - limited usage: append only (no reading, no master key/update)
    - to update only the record you're reading (fixed-length records only)
    - data too big to fit into memory
    - a local disk / read-ahead network connection
    - portability / a small system
    - email / cut & paste / storage as a document by a novice (simple format)
    - a low design learning curve (but a high cost later)

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need...
    - processing of a single db/flat-file record that was imported
    - a known size of data
    - static data, if hardcoding the table
    - a narrow, unchanging use (e.g., one program or proc); this includes a class that will be shared but encapsulates its data manipulation
    - extreme speed / high transaction frequency
    - random access (though search depends on the implementation)

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
    This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
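
    For concreteness, a minimal Java sketch of the kind of auto-generated in-source lookup table I mean; the keys and values are made-up placeholders:

        import java.util.Arrays;

        // Auto-generated file: a static, sorted lookup table searched by binary search.
        // KEYS must stay sorted, since lookup() relies on Arrays.binarySearch.
        final class RateTable {
            private static final int[]    KEYS   = { 1001, 1005, 1010, 1042 };
            private static final double[] VALUES = { 0.50, 0.75, 1.25, 2.00 };

            static Double lookup(int key) {
                int i = Arrays.binarySearch(KEYS, key);
                return (i >= 0) ? VALUES[i] : null; // null when the key is absent
            }

            private RateTable() {} // no instances; this is pure data
        }

    Regenerating such a file every 3-5 years, alongside the other source changes those updates already require, keeps the data versioned and distributed exactly the way the rest of the source already is.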

    Read the article

  • Is Dell's server software bundle necessary? (PowerEdge 2950 in my case)

    - by bwerks
    Hi all, Dell includes a fair amount of software with its servers, but I'm having a hard time determining from the documentation what each piece does and whether or not I should install it. Dell's support site (unless I'm doing it wrong) seems fairly opaque to me, and its offerings fairly unstandardized in terms of their usage, so if possible I'd like to steer clear of them. Specifically, I'm curious whether any of the features offered are duplicated in something like Microsoft System Center. For background: I'm working with a PowerEdge 2950 that was just rebuilt with an expanded RAID-6, but initially I just installed Server 2008 R2 directly instead of using the Build and Update utility. There's nothing of use on it at the moment, so I'm totally open to wiping it again.

    Read the article

  • How do I access a secure website within a SharePoint web part?

    - by Bill
    How do I access a secure website within a SharePoint web part? The following code works fine as a console application, but if you run it in a web part, you get an access violation:

        WebRequest request = WebRequest.Create("https://somesecuresite.com");
        WebResponse firstResponse = null;
        try
        {
            firstResponse = request.GetResponse();
        }
        catch (WebException ex)
        {
            writer.WriteLine("Error: " + ex.ToString());
            return;
        }

    If you access a non-secure site, it also works. Any ideas?

        Error: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
           at System.Net.UnsafeNclNativeMethods.NativePKI.CertVerifyCertificateChainPolicy(IntPtr policy, SafeFreeCertChain chainContext, ChainPolicyParameter& cpp, ChainPolicyStatus& ps)
           at System.Net.PolicyWrapper.VerifyChainPolicy(SafeFreeCertChain chainContext, ChainPolicyParameter& cpp)
           at System.Net.Security.SecureChannel.VerifyRemoteCertificate(RemoteCertValidationCallback remoteCertValidationCallback)
           at System.Net.Security.SslState.CompleteHandshake()
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
           at System.Net.TlsStream.CallProcessAuthentication(Object state)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
           at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.ConnectStream.WriteHeaders(Boolean async)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()

    Read the article

  • Why do my download speeds drastically vary during a download?

    - by J. Anthony Carter
    I watch the download speed rise and fall like waves in a storm. At night, during low bandwidth usage, I have achieved speeds as high as 3.23 M/sec, but then I watch them decline to 250 K/sec and climb back up, over and over. During the day my best is around 1.67 M/sec, with lows down to 65 K/sec. On top of this, why does a download need to slow down when approaching the end? It's not like a multi-hundred-ton train needing to decrease speed as it approaches the station.

    Read the article

  • AWS Elastic load balancer doesn't decrease instances from Alarm Trigger

    - by jchysk
    I have a load balancer for which I created an auto-scaling group and launch config. I created the auto-scaling group with a min size of 1 and a max size of 20. I have a scale-down policy:

        as-put-scaling-policy SBMScaleDownPolicy --auto-scaling-group SBMAutoScaleGroup --adjustment=-1 --type ChangeInCapacity --cooldown 300

    Then I set up an alarm:

        mon-put-metric-alarm SBMLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 35 --alarm-actions arn:aws:autoscaling:us-east-1:policystuffhere:autoScalingGroupName/SBMAutoScaleGroup:policyName/SBMScaleDownPolicy --dimensions "AutoScalingGroupName=SBMAutoScaleGroup"

    When average CPU usage over 10 minutes is under 35, the alarm shows up in CloudWatch as "In Alarm State", but the number of instances doesn't decrease. Also, if there's only one instance running, it'll spin up a second one even if no scale-up alarm has fired; it's as if a desired capacity of 2 is set somewhere by default. How can I change this?
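
    For reference, assuming the same era's Auto Scaling command-line tools, the group's current settings (including the desired capacity that seems stuck at 2) can be inspected and forced down like this; the exact flags are worth double-checking against the tools' help:

        as-describe-auto-scaling-groups SBMAutoScaleGroup
        as-update-auto-scaling-group SBMAutoScaleGroup --min-size 1 --max-size 20 --desired-capacity 1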

    Read the article

  • Cutting and pasting in MS Word: hourglass pops up and it takes longer than expected

    - by Rax Olgud
    I work with MS Word 2007. Today I created a new document, and for some reason cutting and pasting text (using Ctrl-X and Ctrl-V) takes longer than expected. To clarify, here's the process:
    1. I select a single word in the document
    2. I press Ctrl-X
    3. The hourglass shows up for 1-2 seconds
    4. The word is cut
    The same happens for pasting (i.e. 1-2 seconds of hourglass). The document is ~5 pages long, with nothing fancy. I have plenty of available RAM, and my CPU usage sits around 1-2%, with no peak during the cut/paste. Any thoughts on what could cause this and what I can do about it?

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number constant for the needs of our prototype project. We are using NHibernate. Tests are done against SQLite (a disk file db, not in-memory), but production will probably be MS SQL. Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates, but it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected) 1 timestamp, 1 measurement, and 8 vector values, which means we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server would improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
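
    For illustration, the denormalized entity contemplated above would look something like this; the property names are invented for the sketch:

        // The flattened row: the timestamp and all 8 vector values become
        // dedicated columns, so one measurement is one single-table insert.
        public class FlatMeasurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual DateTime Timestamp { get; private set; } // replaces the separate Timestamp row
            public virtual double Vector1 { get; private set; }
            public virtual double Vector2 { get; private set; }
            public virtual double Vector3 { get; private set; }
            public virtual double Vector4 { get; private set; }
            public virtual double Vector5 { get; private set; }
            public virtual double Vector6 { get; private set; }
            public virtual double Vector7 { get; private set; }
            public virtual double Vector8 { get; private set; }
        }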

    Read the article

  • Is It Possible to get a (rough) Mobile Phone Location from an HTTP Request

    - by Tim Lytle
    If memory serves me correctly, Google does this for the Maps site. I know Google's mobile Maps app can determine a rough location (I assume using some kind of cell tower lookup), yet I seem to remember the site getting somewhat close to the current location when viewed in a mobile browser. Does anyone know how/if that's possible? Does the IP address change based on the tower or area (it seems like they'd be using some kind of gateway common to the carrier)?

    Read the article

  • How to create a partition on a Windows CE device

    - by mack369
    Is there any tool to create a new partition on a Windows CE device? The device has NAND flash memory, and initially there were two partitions. Using the Storage Manager in Control Panel I was able to delete one partition, but when I try to create it again, I get an error message: "Unable to create partition".

    Read the article

  • php-fpm start error

    - by Sujay
    I am using php-fpm. I recently recompiled PHP to include the IMAP functions, but now php-fpm fails to start with the following error:

        Starting php_fpm Error in argument 1, char 1: no argument for option -
        Usage: php-cgi [-q] [-h] [-s] [-v] [-i] [-f <file>]
               php-cgi <file> [args...]
          -a               Run interactively
          -C               Do not chdir to the script's directory
          -c <path>|<file> Look for php.ini file in this directory
          -n               No php.ini file will be used
          -d foo[=bar]     Define INI entry foo with value 'bar'
          -e               Generate extended information for debugger/profiler
          -f <file>        Parse <file>. Implies `-q'
          -h               This help
          -i               PHP information
          -l               Syntax check only (lint)
          -m               Show compiled in modules
          -q               Quiet-mode. Suppress HTTP Header output.
          -s               Display colour syntax highlighted source.
          -v               Version number
          -w               Display source with stripped comments and whitespace.
          -z <file>        Load Zend extension <file>
        ................................... failed

    What could be the problem? Is it in php-fpm.conf or php.ini?

    Read the article

  • Streaming media from a Linux server - low footprint is crucial

    - by Mike Haye
    I recently pre-ordered the Raspberry Pi (http://www.raspberrypi.org/faqs). For those of you who don't know it, it's a machine with 256 MB of RAM and a 700 MHz processor for $35. I plan to run Linux off an SD card on this machine and have it act as an HTPC, a VPN, and a media server. In regard to the media server part, I need to find some Linux software that has a small footprint but lets me stream media to other devices connected to the internet (preferably without having to install any additional software on the client machines). Also, I would love it if the video could be compressed, so the data usage wouldn't be so big for the client machine (e.g. when I'm using the data plan on my smartphone ;) ). Thanks in advance for any answers :) Mike.

    Read the article

  • Microsoft Outlook tips and tricks for improving user experience?

    - by Roee Adler
    I'm one of those heavy Microsoft Outlook users, currently working on the 2007 version. God knows this tool is heavy and may impose problems. I wondered what the Super User crowd has to suggest in order to improve the usage experience. Several suggestions of my own:
    - Always work in cached mode (Tools > Account Settings > Change > Use Cached Exchange Mode)
    - Use Outlook's local archiving capabilities
    - Use Outlook's RSS reader; it's simple and allows offline access to your feeds
    - If you have e-mail subscriptions to magazines, blogs, etc., create a subdirectory to keep them, and a rule to automatically move them there when they arrive (one rule per subscription, based on the sender's e-mail address)
    You can also share suggestions that require configuration of Exchange Server, for those of us who can bring them to our IT managers. What are your suggestions? PS: "Use Gmail" is not an accepted answer; some of us don't control what email system we use...

    Read the article

  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing, and I often arrive at a chicken-and-egg problem. I would be grateful for some hints on how to reason around this. If I execute all I/O-bound processing asynchronously (i.e. writing to the event log), then how do I handle, or sometimes even detect, failures? I'm using Akka actors, so processing is sequential for each event/message. I do not have any database at this time; instead I persist all the events in an event log and keep an aggregated state of all the events in a model stored in memory. Queries all go against this model; you can consider it to be a cache.

    Example: creating a new user:
    1. Validate that the user does not exist in the model
    2. Persist the event to the journal
    3. Update the model (in memory)
    If step 3 breaks, I have still persisted my event, so I can replay it at a later date. If step 2 breaks, I can handle that gracefully as well. This is fine, but since step 2 is I/O-bound, I figured that I should do the I/O in a separate actor to free up the first actor for queries.

    Updating a user while allowing queries (A0 = front-end/GUI actor, A1 = processor actor, A2 = I/O actor, E = event bus):
    1. (A0-E-A1) Event is published to update user 'U1'. Validate that user 'U1' exists in the model
    2. (A1-A2) Persist the event to the journal (separate actor)
    3. (A0-E-A1-A0) Query for user 'U1' profile
    4. (A2-A1) Event is now persisted; continue to update the model
    5. (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data)
    This is appealing, since queries can be processed while the I/O is churning along at its own pace. But now I can cause myself all kinds of problems: two incompatible commands (a delete and then an update) could be persisted to the event log and crash on me when replayed at a later date, since I do the validation before persisting the event and only then update the model. My aim is to have simple reasoning around my model (since an actor processes messages sequentially, single-threaded) but not be waiting for I/O-bound updates when querying. I get the feeling I'm modeling a database, which in itself might be the problem. If things are unclear, please write a comment.
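
    To make the ordering concrete, here is a framework-free Java sketch of the create-user flow, with the journal append as the commit point; all types are illustrative, not Akka APIs:

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;

        interface Journal {
            void append(String event) throws IOException; // the I/O-bound step (2)
        }

        final class UserModel {
            private final Set<String> users = new HashSet<String>();
            boolean exists(String userId) { return users.contains(userId); }
            void apply(String userId)     { users.add(userId); }
        }

        final class UserService {
            private final Journal journal;
            private final UserModel model;

            UserService(Journal journal, UserModel model) {
                this.journal = journal;
                this.model = model;
            }

            void createUser(String userId) throws IOException {
                if (model.exists(userId))                    // step 1: validate
                    throw new IllegalStateException("exists: " + userId);
                journal.append("UserCreated:" + userId);     // step 2: commit point
                model.apply(userId);                         // step 3: replayable if it fails
            }
        }

    The moment step 2 moves to a separate actor, step 1's validation and step 3's apply no longer happen atomically around it, which is exactly where the incompatible delete-then-update pairs can slip into the log.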

    Read the article

  • T-SQL query with date range

    - by Moo
    Hi, I have a fairly weird 'bug' with a simple query, and I vaguely remember reading the reason for it somewhere a long time ago, but would love someone to refresh my memory. The table is a basic (ID, Datetime) table. The query is:

        select ID, Datetime from Table where Datetime <= '2010-03-31 23:59:59'

    The problem is that the query results include rows where the Datetime is '2010-04-01 00:00:00'. The next day. Which it shouldn't. Anyone? Cheers, Moo
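
    For what it's worth, one classic explanation, on the assumption that the column (or the type the literal is converted to) is smalldatetime: smalldatetime rounds seconds to the nearest minute, so '2010-03-31 23:59:59' becomes '2010-04-01 00:00:00' before the comparison runs. An exclusive upper bound sidesteps any rounding:

        -- rounding-proof form: strictly less than the next day's midnight
        select ID, [Datetime]
        from [Table]
        where [Datetime] < '2010-04-01'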

    Read the article

  • Repair snapped USB flash drive

    - by Richard Slater
    I have a USB flash drive that has had the USB connector snapped away from the circuit board. In the past I have had great success soldering the connector back to the circuit board with 4 solid-core wires. Unfortunately, this particular device shows up as "Unknown Device" in Device Manager and displays 0mA power usage. Taking a closer look at the circuit board, it appears that the Data+ connector has come away from the PCB, which would explain why it is not recognised. Is there any practicable way of lifting the data from the device?

    Read the article

  • Solaris disk I/O spike... which monitoring tools do I need?

    - by bonez05
    Hi all, my Solaris 10 webserver / database server's disk I/O keeps spiking at intermittent times. Using iostat -xtc 5, the reads/sec will jump from 3.0 to 1450.0 and %busy will jump to 98%. The Apache access log doesn't indicate anything out of the ordinary; in other words, requests aren't any higher than usual. top doesn't turn up anything useful either: CPU usage is fine, with MySQL using about 20% and nothing else to speak of, really. What monitoring tool should I be using to see which process is generating the excessive disk I/O? Any other suggestions and I'm all ears. Thanks
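
    On Solaris 10, assuming DTrace is available to you, the io provider can attribute block I/O to processes directly, and prstat's microstate view is a useful cross-check; the details are worth confirming against the man pages:

        # count block I/O requests per process name; Ctrl-C prints the tally
        dtrace -n 'io:::start { @[execname] = count(); }'

        # per-process microstates at 5s intervals; high SLP/LAT suggests I/O waits
        prstat -m 5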

    Read the article

  • SQL Server cluster performance baseline

    - by Dwight T
    Currently I'm tasked with getting a good performance baseline for a SQL Server 2005 cluster. The main db on the server is for SharePoint, but I would like to add other dbs to the cluster. I do have access to Quest's Performance Analysis tool to help. What are the key factors to look at to see whether the cluster can handle additional dbs? Do you look at different performance indicators for a cluster vs. a standalone SQL Server? The dbs to add are a low-usage transactional db and a read-only db that is used for sales data. Thanks, Dwight

    Read the article

  • Is marshalling/serialization in PHP as simple as serialize($var)?

    - by Ygam
    Here's a definition of marshalling from Wikipedia: "In computer science, marshalling (similar to serialization) is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another." I have always done data serialization in PHP via its serialize() function, usually on objects or arrays. But how does Wikipedia's definition of marshalling/serialization map onto what this serialize() function does?
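
    As a concrete round trip (this covers storage and transmission within PHP; for cross-language marshalling you'd reach for json_encode() or similar instead):

        <?php
        $user = array('id' => 42, 'roles' => array('admin', 'editor'));

        // transform the in-memory value into a storable/transmittable string
        $blob = serialize($user);
        file_put_contents('/tmp/user.ser', $blob);

        // rebuild the in-memory representation on the other side
        $copy = unserialize(file_get_contents('/tmp/user.ser'));
        var_dump($copy == $user); // bool(true)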

    Read the article

  • How can the user change the JRE parameter values after the exe is generated in Launch4j?

    - by Wing C. Chen
    Is it possible to change the JRE parameter values after the exe file is generated through Launch4j? The ideal scenario is like this: the default parameter values are applied when the program is started; however, when the user wants to change some JRE parameter values, he goes to an .ini file (MyProgram.ini, for example), changes the values there, and the new values are applied the next time the program is started. I think Eclipse uses the same approach for its memory and some other parameter settings.
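
    If I remember right, Launch4j itself supports this pattern via an .l4j.ini file placed next to the generated exe (MyProgram.l4j.ini for MyProgram.exe), containing JVM options one per line; worth verifying against the Launch4j docs for your version. The options below are just examples:

        -Xms64m
        -Xmx512m
        -Dmy.config=custom.properties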

    Read the article

  • What's holding up my PHP script?

    - by gAMBOOKa
    We've got a PHP crawler running on our web server. When the crawler is running, there are no CPU, memory, or network bandwidth spikes; everything is normal. But our website (also PHP), hosted on the same server, stops responding. Basically, the crawler blocks any other PHP script from running. What could be the problem? EDIT: fsockopen is being used to download files in the crawler!

    Read the article
