Search Results

Search found 29949 results on 1198 pages for 'large scale website'.

  • Ubuntu 12.04 Samba File Server timeout on large file

    - by phileaton
    I am a beginner with servers. I checked the error logs for Samba, and it appears that Samba is timing out when I transfer large files. I can successfully add PDFs, for instance, to my file server. However, when I tried to add a large 1.2 GB video file, it did not succeed. This is the error in the log: smbd/process.c:244(read_packet_remainder) read_fd_with_timeout failed for client 0.0.0.0 read error = NT_STATUS_CONNECT$ Is there a way I can stop it from timing out? Any pointers would be great! Thanks!
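
    If it helps to have something concrete to experiment with, the smb.conf settings below are commonly suggested for stalls on large transfers. This is a sketch only: the values are assumptions, and whether any of them help depends on the underlying cause.

        [global]
            # larger socket buffers for sustained transfers (values are guesses)
            socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
            # let the kernel stream file data instead of copying through smbd
            use sendfile = yes
            # don't drop idle connections too aggressively (minutes)
            deadtime = 30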

  • Moving MODx Files to Another MODx Website

    - by Austin
    I have one website with MODx installed at www.website.com/modx/ (keep in mind there are other in-progress websites on this storage server). My issue is that I'm moving all of this (templates, template variables, chunks, snippets, etc.) to another server that already has MODx installed. My first instinct was to go to phpMyAdmin, export the SQL file, and import it on the new website's server. However, an error occurred when I attempted this: it found many duplicates in fields associated with a PK (due to it being the same website, just a redesign). I don't have to drop the old site's tables and then upload the new SQL file, do I? Please advise.
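
    One way around the duplicate-key errors, sketched below, is to export only the element tables with mysqldump's --replace flag, so the generated statements overwrite rows whose primary keys already exist on import. The table names assume MODx Evolution's default modx_ prefix, so check them against your install.

        # export just the element tables, emitting REPLACE instead of INSERT
        mysqldump -u user -p --replace olddb \
            modx_site_templates modx_site_tmplvars \
            modx_site_htmlsnippets modx_site_snippets > modx_elements.sql

        # import on the destination server; duplicate PKs get overwritten
        mysql -u user -p newdb < modx_elements.sql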

  • How do I make a Google chart larger by scale

    - by Hultner
    I've just started using Google Charts and want to use it in a small project I'm working on, but I've bumped into a problem. The thing is, I want the charts rather big, at a static size, and looking good, but I'm generating the charts dynamically with PHP. Now the problem is that I can't get the chart to scale properly in width, though the height is perfectly fine. Here's an example chart I've generated, with parameters:

        cht=bvo&chs=400x400
        chd=t:1,4,1
        chxr=2,0,4,1
        chds=0,4
        chco=4d89f9
        chxt=x,x,y,y
        chxl=0:|3|7|26|1:|Correct+answers|3:|People

    You see how the chart fills the 400px of height but not the width. I've searched and looked through the API but I can't get it right.
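
    For what it's worth, in the Google Image Charts API bar charts do not stretch their bars to fill the chs width on their own; bar and gap sizing is controlled by the separate chbh parameter, and chbh=a asks the API to size the bars automatically to fill the chart area. A sketch against the parameters above:

        cht=bvo&chs=400x400&chbh=a
        &chd=t:1,4,1&chds=0,4&chxr=2,0,4,1
        &chco=4d89f9&chxt=x,x,y,y
        &chxl=0:|3|7|26|1:|Correct+answers|3:|People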

  • Windows - Website inaccessible only on Windows PCs in LAN

    - by DorentuZ
    For several days now, a website has been inaccessible from a single PC on the LAN. On the other PCs it works just fine, and as far as I know it's just this one website that's unreachable. The website generates a timeout in every web browser I've tried (IE8, Firefox and Chrome). However, traceroute, nmap and telnet all work just fine. I've even tried multiple user accounts and safe mode, but that didn't work either. As a side note: using a Linux live CD did work, and I could access the website without any problems. The hosts file is the Windows default, and the IP and DNS settings on the network adapter are normal as well. No strange processes are running and no viruses found. According to TCPView and netstat there are connections to the domain, but every request in the browser results in a timeout. Any idea what's happening? Update: All of the computers on the network running Windows (any version) are showing this problem now. The website is still working under Linux and Mac OS X. So it has to be related to some kind of Windows update (although I haven't installed any on one computer in the past week, which I've set to do manual updates only).
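
    Not a diagnosis, but when a problem is isolated to the Windows TCP stack, the standard stack resets are cheap to try (run from an elevated command prompt and reboot afterwards):

        netsh winsock reset
        netsh int ip reset
        ipconfig /flushdns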

  • Large public datasets?

    - by Jason
    I am looking for some large public datasets, in particular:

    - Large sample web server logs that have been anonymized.
    - Datasets used for database performance benchmarking.

    Any other links to large public datasets would be appreciated. I already know about Amazon's public datasets at: http://aws.amazon.com/publicdatasets/

  • Quickly create a large file on a Windows system?

    - by Leigh Riffel
    In the same vein as http://stackoverflow.com/questions/257844/quickly-create-a-large-file-on-a-linux-system, I'd like to quickly create a large file on a Windows system. By large, I'm thinking 5 GB. The content doesn't matter. A built-in command or short batch file would be preferable, but I'll accept an application if there is no other easy way.
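
    For reference, Windows ships a built-in command for exactly this: fsutil allocates a zero-filled file of the requested size in bytes (it needs an elevated prompt). A 5 GB example, where 5368709120 = 5 * 1024^3:

        fsutil file createnew C:\temp\bigfile.bin 5368709120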

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream()
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter()
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back.

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we deserialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small, "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage is while this serialization is being done. I think we get large object heap fragmentation when we deserialize the object, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states, hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:

    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?
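
    Not the asker's code, but for SQL Server 2005 and later one commonly cited pattern is to push the serialized data into a varbinary(max) column chunk by chunk with UPDATE ... .WRITE, so the whole graph never has to sit in one contiguous buffer (SQL Server 2000 has no .WRITE and would need UPDATETEXT instead, so this does cost a second code path). A minimal sketch; the ModelState table and its Id/StateData columns are hypothetical, and the column must be initialized to 0x first because .WRITE cannot be applied to NULL:

        Imports System.Data
        Imports System.Data.SqlClient

        Module BlobChunks
            ' Append one chunk to the end of the BLOB; a NULL offset makes
            ' .WRITE append. Call this once per serialized block.
            Sub AppendChunk(conn As SqlConnection, id As Integer, chunk As Byte())
                Using cmd As New SqlCommand(
                    "UPDATE ModelState SET StateData.WRITE(@chunk, NULL, NULL) " &
                    "WHERE Id = @id", conn)
                    cmd.Parameters.Add("@chunk", SqlDbType.VarBinary, chunk.Length).Value = chunk
                    cmd.Parameters.AddWithValue("@id", id)
                    cmd.ExecuteNonQuery()
                End Using
            End Sub
        End Module

    A small write-only Stream subclass whose Write method calls AppendChunk can then be handed straight to bin.Serialize(), keeping only one chunk in memory at a time.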

  • Design a large-scale network for an organization

    - by Essam
    Hello, I am new to networking. I want to design a large-scale network for an organization with an HQ and two branches, and I want to use a class A address for that. My questions are: If I am using the network address 30.0.0.0 for the whole organization, how can it be different from another organization or company which is using the same address in another country? Now I have the three locations for this organization, so I need 5 subnets: one for the HQ, two for branch A and branch B, one for connecting branch A to HQ, and one for connecting branch B to HQ, since I will use a central DHCP server at the HQ. Is that number of subnets right? Is it advisable to use class A or class B for this organization in terms of addresses that will be wasted (let's say it is a university with two branches in two different states)? That is all; your help is highly appreciated.
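
    As a concrete point of reference, an addressing plan along these lines is typical. This is a sketch only, and it assumes the private 10.0.0.0/8 range from RFC 1918 in place of 30.0.0.0; private ranges are what you would normally use internally, and they are free to collide with other organizations because they are never routed on the public internet:

        10.0.0.0/16      HQ LAN
        10.1.0.0/16      Branch A LAN
        10.2.0.0/16      Branch B LAN
        10.255.0.0/30    WAN link, HQ to Branch A
        10.255.0.4/30    WAN link, HQ to Branch B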

  • Internal Website links only working for some users

    - by Ryan
    We are having a very hard time trying to figure out this problem or its root cause. Our website www.countrymusicislove.com is only correctly displaying the homepage for some users. Any time they click on a post, About Me, etc., a 404 error page is displayed. Everything worked fine before we moved over to a new hosting company two weeks ago. I am looking for any ideas, and am even willing to pay someone to troubleshoot and fix this issue, as no one seems to have an answer. The entire website is done in the latest version of WordPress. The old address for the website was http://siteground243.com/~countr10/ and the domain name was registered through Google via enom.com. Everything is now going through Arvixe.com. On my work computer, I am able to get the 404 error to appear on other pages by turning on friendly error messages; when I turn off friendly error messages, everything seems to work. I have tried this several times and it doesn't seem like a coincidence.
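
    For what it's worth, post pages returning 404 while the homepage works is the classic symptom of permalink rewrites not surviving a host move. If the new host runs Apache with mod_rewrite, the stock WordPress .htaccess block (which Settings > Permalinks > Save Changes will regenerate) looks like this:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress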

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array to store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that this would mean only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.
    5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use a different area of the array independently. Will obviously require some code to manage the offsets and allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?
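
    As an illustration of option 3, a per-thread scratch buffer is only a few lines (a sketch; the fixed size and class name are assumptions):

        // One scratch array per thread: repeated calls on the same thread reuse
        // the same array, so steady-state allocation and garbage drop to zero.
        public final class Scratch {
            private static final int SIZE = 100_000;
            private static final ThreadLocal<double[]> BUFFER =
                ThreadLocal.withInitial(() -> new double[SIZE]);

            public static double[] get() {
                return BUFFER.get();
            }
        }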

  • How to organize enterprise-scale Composite Applications (CAG)

    - by David
    All QuickStarts and RI examples in the CAG documentation are good, but I miss larger, enterprise-scale examples. Let's say we have 40+ modules, each containing a Proxy, Facade, PresentationModel, Model and Views. Each module also makes calls to a module-specific WCF service which is to be hosted in IIS or in a stand-alone console host. Our approach has been to include the UI module, service module and related tests in one solution so they can be developed and tested separately from other modules. My problem is how the hosting of the services should be done when the services are in separate modules, and how to actually run the separate module together with the rest of the application modules when I press F5. Is there a best practice for this? I guess it has been done before?

  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem. Note: first I'd like to understand WHY this is happening. Of course, a solution would be nice too. :) When downloading a large file over HTTP at high speed, my wireless traffic basically stops: I can't open webpages and the download itself pauses. It pauses pretty much immediately after starting; sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps reoccurring during the same download. The problem does not occur when using a wired connection through the same router (Linksys WRT120N). Also note that the connection is not dropped when this happens; it's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, etc.)

    Inspection with Wireshark shows that the following happens:

    1. The server sends data packets which are acknowledged by the client.
    2. The server sends a packet, but the SEQ indicates some packets were lost (6 packets in one occurrence).
    3. The server sends a few more packets and the client acknowledges these using selective acknowledgement.
    4. The server stops sending data for a while (since the lost packets were not acknowledged, or because the router stops forwarding them?).
    5. Eventually, the server does a retransmission and traffic resumes as normal.

    This all seems normal behavior to me when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this?

    My own idea is the following: my internet is pretty fast (100 Mbps), so when starting a large-file download, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (and because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be something like 64 KB waiting to be acknowledged?

    Note: I've disabled TCP window scaling and dynamic window size through netsh options in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights.

    Things that are (probably) NOT the cause / things I have experimented with:

    - the browser
    - various TCP options in Windows 7 (netsh etc.)
    - router settings such as MTU, beacon interval, UPnP, ...
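
    For reference, the netsh knobs referred to above are presumably along these lines (Windows 7, elevated prompt; setting autotuninglevel back to normal restores the default):

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp show global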

  • What would you recommend for a large-scale Java data grid technology: Terracotta, GigaSpaces, Coherence?

    - by cliff.meyers
    I've been reading up on so-called "data grid" solutions for the Java platform, including Terracotta, GigaSpaces and Coherence. I was wondering if anyone has real-world experience working with any of these tools and could share it. I'm also really curious to know what scale of deployment people have worked with: are we talking 2-4 node clusters, or have you worked with anything significantly larger than that? I'm attracted to Terracotta because of its "drop in" support for Hibernate and Spring, both of which we use heavily. I also like the idea of how it decorates bytecode based on configuration and doesn't require you to program against a "grid API." I'm not aware of any advantages to tools which use the approach of an explicit API, but would love to hear about them if they do in fact exist. :) I've also spent time reading about memcached, but am more interested in hearing feedback on these three specific solutions. I would be curious to hear how they measure up against memcached in the event someone has used both.

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what if I am running out of disk? I am currently using EC2 with EBS. As you know, an EBS volume has to be allocated at a fixed size. What if the MongoDB data grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over? Or should we start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.
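
    MongoDB's own answer to outgrowing a single disk is horizontal scaling through sharding, which spreads one database across many instances and disks automatically instead of requiring manual copies. A sketch from the mongo shell, run against a mongos router; the database, collection and shard key are hypothetical:

        // once the shard servers are registered with the cluster
        sh.enableSharding("mydb")
        sh.shardCollection("mydb.events", { userId: 1 })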

  • Decimal data type displays scale part as zeros

    - by Wael Dalloul
    I have a decimal field in a SQL Server 2005 table: Price decimal(18, 4). If I write 12 it will be converted to 12.0000; if I write 12.33 it will be converted to 12.3300. It always puts zeros to the right of the decimal point, up to the scale part (4). When I was using SQL Server 2000 it did not behave like this: in SQL Server 2000, if I put 12.5 it was stored as 12.5, not as 12.5000 as SQL Server 2005 does. My question is: how do I stop SQL Server 2005 from putting zeros to the right of the decimal point?
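
    To illustrate the point: the four-digit scale is part of the decimal(18, 4) type itself, so the trailing zeros are a display property of the type, not extra stored data, and they disappear as soon as the value is converted or formatted client-side:

        DECLARE @Price decimal(18, 4)
        SET @Price = 12.5
        SELECT @Price                    -- 12.5000: the declared scale is always shown
        SELECT CAST(@Price AS float)     -- 12.5: trailing zeros go away with the type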

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount of managed memory (~50-100 MB). There are two approaches on the table (read: two senior devs can't agree), and not having the experience, the rest of the team is unsure which approach is more desirable: performance or maintainability. The data being collected is many small items, ~30000, which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll call this large group of objects and references a single entity, a "blob".

    Approach #1: Make sure all references to objects in the blob are severed, and let the GC handle the blob and all the connections.

    Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing and remove handlers.

    The theory behind the second approach is that large, longer-lived objects take longer to clean up in the GC, so by cutting the large objects into smaller bite-size morsels the garbage collector will process them faster, giving a performance gain. So I think the basic question is this: does breaking apart large groups of interconnected objects optimize data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.

  • What kind of website needs SSL?

    - by jpartogi
    Dear all, what kind of website needs SSL? Is it limited to e-commerce websites, or websites that need credit card payments only? Is there another good reason for a non-e-commerce website to use SSL? Thank you for sharing your experience.

  • Nginx Proxying to Multiple IP Addresses for CMS' Website Preview

    - by Matthew Borgman
    First-time poster, so bear with me. I'm relatively new to Nginx, but have managed to figure out what I've needed... until now. Nginx v1.0.15 is proxying to PHP-FPM v5.3.10, which is listening at http://127.0.0.1:9000. [Knock on wood] everything has been running smoothly in terms of hosting our CMS and many websites.

    Now, we've developed our CMS and configured Nginx such that each supported website has a preview URL (e.g. http://[WebsiteID].ourcms.com/) where the site can be, you guessed it, previewed in those situations where DNS doesn't yet resolve to our server, etc. Specifically, we use Nginx's Map module (http://wiki.nginx.org/HttpMapModule) and a regular expression in the server_name of the CMS's server{ } block to 1) look up a website's primary domain name from its preview URL and then 2) forward the request to the matched primary domain. The corresponding Nginx configuration:

        map $host $h {
            123.ourcms.com www.example1.com;
            456.ourcms.com www.example2.com;
            789.ourcms.com www.example3.com;
        }

    and

        server {
            listen [OurCMSIPAddress]:80;
            listen [OurCMSIPAddress]:443 ssl;
            root /var/www/ourcms.com;
            server_name ~^(.*)\.ourcms\.com$;
            ssl_certificate /etc/nginx/conf.d/ourcms.com.chained.crt;
            ssl_certificate_key /etc/nginx/conf.d/ourcms.com.key;
            location / {
                proxy_pass http://127.0.0.1/;
                proxy_set_header Host $h;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    (Note: I do realize that the regex in the server_name should be tighter for security reasons and match only the format of the website ID, i.e. a UUID in our case.)

    This configuration works for 99% of our sites... except those that have a dedicated IP address for an installed SSL certificate. A "502 Bad Gateway" is returned for these, and I'm unsure as to why. This is how I think the current configuration works for any request that matches the regex (e.g. http://123.ourcms.com/): Nginx looks up the website's primary domain from the mapping, and as a result of the proxy_pass http://127.0.0.1 directive, passes the request back to Nginx itself; since the proxied request carries the website's primary domain name via the proxy_set_header Host $h directive, Nginx then handles it as if it were a direct request for that hostname. Please correct me if I'm wrong in this understanding.

    Should I be proxying to those websites' dedicated IP addresses? I tried this, but it didn't seem to work. Is there a setting in the Proxy module that I'm missing? Thanks for the help. MB
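
    One direction worth sketching (an assumption about the cause, not a confirmed fix): proxy to the mapped hostname instead of the loopback, so that sites bound to a dedicated IP are reached on their own listener. Note that a variable in proxy_pass makes Nginx resolve the name at request time, which requires a resolver directive:

        location / {
            resolver 8.8.8.8;            # assumption: any resolver reachable from this box
            proxy_pass http://$h;        # $h comes from the map block above
            proxy_set_header Host $h;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }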

  • Dealing with development and large JavaScript files?

    - by maxp
    When dealing with websites with a large amount of JavaScript, I see that it is still usually served to the client as one large JavaScript file. In the development phase, are the JavaScript files usually split up (say there are 300 lines of JS) to make things a bit more manageable, and then merged when the website is put live? Or do the developers just put up with working in one long, large file?
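
    The split-in-development, concatenate-for-release workflow can be as small as a one-line build step; a sketch with hypothetical file names, leaving minification to whatever tool you prefer:

        # development: one file per concern; release: concatenate in dependency
        # order into the single file the site actually serves
        cat src/util.js src/nav.js src/app.js > dist/app.js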

  • Magento set Store Id - customer login - but still logged out

    - by user3564050
    I've got an overridden AccountController in which I set the current store to another than the one currently running. Example: the customer is in website "default" and store "default", goes to the login page, and clicks login; my loginPostAction sets the store to ID 2 (on website 2) and then executes the parent loginPostAction. The store is set, of course, but after the login and the redirect to home, the customer is not logged in anymore...

    The flow: customer sends login data -> my AccountController sets the store -> the original AccountController logs in without errors (because the session customer is set) -> redirect to home -> the customer is not logged in anymore.

    I set the store with Mage::app()->setCurrentStore($id). In index.php I've got extra code where the store is set to the right ID (2) too, and this works... but the customer is not logged in anymore. Is that an issue with the session because of the different websites? I don't want to globally share customers; each website has its own customers, but every customer has to be able to log in on the default store.

    AccountController.php, overridden:

        public $Website_Ids = array(
            array("code" => "gerstore", "id" => "3", "website" => "ger"),
            array("code" => "ukstore",  "id" => "2", "website" => "uk"),
            array("code" => "esstore",  "id" => "4", "website" => "es"),
            array("code" => "frstore",  "id" => "5", "website" => "fr")
        );

        public function loginPostAction()
        {
            $login = $this->getRequest()->get('login');
            if (isset($login['username'])) {
                $found = null;
                foreach ($this->Website_Ids as $WebsiteId) {
                    $customer = Mage::getModel('customer/customer');
                    $customer->setWebsiteId($WebsiteId['id']);
                    $customer->loadByEmail($login['username']);
                    if (count($customer->getData()) > 0) {
                        $found = $WebsiteId;
                    }
                }
                if ($found != null && Mage::app()->getStore()->getId() != $found['id']) {
                    /* found, so set current store to id */
                    Mage::app()->setCurrentStore($found['id']);
                    $_SESSION['current_store_b2b'] = $found;
                }
                /* not found: doesn't matter, because of Mage's login exception handling */
            }
            parent::loginPostAction();
        }

    Index.php:

        session_start();
        $session = $_SESSION['current_store_b2b'];
        if ($session != null || $session != "") {
            Mage::app()->setCurrentStore($session['id']);
            Mage::run($session['code'], 'store');
        } else {
            /* Store or website code */
            $mageRunCode = isset($_SERVER['MAGE_RUN_CODE']) ? $_SERVER['MAGE_RUN_CODE'] : '';
            /* Run store or run website */
            $mageRunType = isset($_SERVER['MAGE_RUN_TYPE']) ? $_SERVER['MAGE_RUN_TYPE'] : 'store';
            Mage::run($mageRunCode, $mageRunType);
        }

    What's the matter? Thanks.

  • Does Seaside scale?

    - by Richard Durr
    Seaside is known as "the heretical web framework". One of the points that makes it heretical is that it has much shared state. That, however, is something which, in my current understanding, hinders easy scaling. Ruby on Rails, on the other hand, shares as little state as possible; it has been known to scale pretty well, even if it is dog slow compared to modern Smalltalk VMs. Flickr uses PHP and has scaled to an extremely big infrastructure... So does anybody have some experience with the scaling of Seaside?

  • Morfik - suitability for medium-scale web enterprise applications

    - by MaikB
    I'm investigating technologies with which to develop a medium-scale (up to 100 or 200 simultaneous users) database-driven web application, and someone suggested Morfik. However, outside of the Morfik company I can find practically zero community support - no active blogs, no tutorials, no videos, no books - and this is of some concern (especially when compared to C# / ASP.NET / NHibernate etc. support). Deciding between Morfik (untried and not used widely, AFAIK) and the other technologies I mentioned (tried, tested, used widely) is becoming a critical issue for my company. Has anyone had success using Morfik in these kinds of circumstances? What kind of performance did you achieve?

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in the Chrome browser (using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (a Base64-encoded string). Is there a JavaScript library that can be used to scale an image down to a certain size? Currently we are scaling it via CSS, but we pay a performance penalty, as the pictures are mostly 100 times bigger than required. An additional concern is the load on the localStorage we use to save our snapshots. Does anyone know of a way to process these data-URI-formatted pictures and reduce their size by scaling them down? References: the data URI scheme on http://en.wikipedia.org/wiki/Data_URI_scheme, the Chrome Extensions API on http://code.google.com/chrome/extensions/tabs.html, and the "Recently Closed Tabs" Chrome extension on http://code.google.com/p/recently-closed-tabs
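
    No library is strictly needed: an offscreen canvas can do real downscaling and re-encode the result, which also shrinks what goes into localStorage. A sketch (targetW and targetH are the desired output size):

        // Scale a data-URI image down via <canvas> and return a new data URI.
        function scaleDataUri(dataUri, targetW, targetH, callback) {
          var img = new Image();
          img.onload = function () {
            var canvas = document.createElement('canvas');
            canvas.width = targetW;
            canvas.height = targetH;
            canvas.getContext('2d').drawImage(img, 0, 0, targetW, targetH);
            callback(canvas.toDataURL('image/png'));
          };
          img.src = dataUri;
        }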
