Search Results

Search found 4813 results on 193 pages for 'ram shankar yadav'.


  • Why does my server hang when I call a page over file_get_contents?

    - by Marc
    I am trying to get content from a WordPress installation on a subdomain of my server. I tried it with file_get_contents and also with Zend_Http_Client:

        $client = new Zend_Http_Client(Zend_Registry::get('CONFIG')->static->$name->$lang);
        $content = $client->request()->getBody();

    As long as I run it on my localhost, it works fine. As soon as it runs on the same server as the subdomain, it hangs forever (timeout). Specs: a Zend Framework application trying to get HTML from a WordPress page, server running on lighttpd, several cores, plenty of RAM. Do you guys have an idea on how this problem can be resolved? Cheerio
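
    One thing worth trying (a sketch, assuming the hang is a request that never times out; the URL below is only a stand-in for the value kept in Zend_Registry): give both clients an explicit timeout so the call fails fast instead of blocking forever.

        <?php
        // Hypothetical sketch: fail fast instead of hanging, assuming Zend Framework 1.
        // $url stands in for Zend_Registry::get('CONFIG')->static->$name->$lang.
        $url = 'http://blog.example.com/some-page';

        // Plain file_get_contents() with a timeout via a stream context.
        $context = stream_context_create(array('http' => array('timeout' => 10)));
        $html = file_get_contents($url, false, $context);

        // Zend_Http_Client with the same timeout on its socket adapter.
        $client  = new Zend_Http_Client($url, array('timeout' => 10));
        $content = $client->request()->getBody();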

    Read the article

  • Understanding memory and CPU speed

    - by tipu
    Firstly, I am working on a Windows XP 64-bit machine with 4 GB of RAM and a 2.29 GHz quad-core CPU. I am indexing 220,000 lines of text that are more or less the same length. These are divided into 15 equally sized files. File 1/15 takes 1 minute to index, but as the script indexes more files it seems to take much longer, with file 15/15 taking 40 minutes. My understanding is that the more I put in memory, the faster the script is. The dictionary is indexed in a hash, so fetch operations should be O(1). I am not sure where the script is tying up the CPU. I have the script here.

    Read the article

  • ACT! Professional for Windows - memory leak?

    - by Dan
    I have ACT! Professional for Windows v11.1, with the latest SQL service pack (SP3), and have an apparent memory leak on the server. After a restart the ACT! SQL instance (SQLSERVR) consumes almost all the available memory on the server; we have added more memory to the server (it is running under Hyper-V) but it continues to consume it all. I have not been able to connect to the SQL Server instance using Management Studio in order to limit the amount of RAM it is allocated. Are there any potential solutions for this, or should I continue to restart the services?
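
    A possible workaround (a sketch, assuming the ACT! database runs in the default ACT7 named instance and that sqlcmd can still reach it even when Management Studio cannot): cap the instance's memory from the command line.

        -- Hypothetical sketch: cap SQL Server's memory without Management Studio.
        -- Run with:  sqlcmd -S .\ACT7 -E   (ACT7 is the usual ACT! instance name)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 2048;   -- example cap; size it to the VM
        RECONFIGURE;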

    Read the article

  • Eclipse refresh taking too long

    - by Nash0
    I am doing TDD on a large Java project in Eclipse and am finding it frustrating because every time I run a test I have to wait 30+ seconds for Eclipse to compile and refresh. I estimate that 80%+ of that time is spent refreshing. Is there a way I can drastically reduce the amount of refreshing it is doing? I have looked at several other similar questions but I could not see anything that helps. One way I reduced the compile/refresh time was to split the unit tests and code into separate projects. There are 4,700 classes in the src project and 300 in the tests. I am running Eclipse 3.5.1 on Java 1.6.0_17-b04 (eclipse.vm). My computer is running Windows XP with 3.1 GB of usable RAM. The only plugin I have installed is Subclipse.

    Read the article

  • Java NIO SocketChannel writing problem

    - by Nilesh
    I am using Java NIO's SocketChannel to write: int n = socketChannel.write(byteBuffer); Most of the time the data is sent in one or two parts; i.e. if the data could not be sent in one attempt, the remaining data is retried. The issue is that sometimes the data is not sent completely in one attempt, and when the rest is retried it can happen that, even after several attempts, not a single byte is written to the channel; only after some time is the remaining data sent. What could be the cause of such behaviour? Could external factors such as RAM, etc. cause the hindrance? Please help me solve this issue. If any other information is required please let me know. Thanks
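
    A write() that returns 0 on a non-blocking channel usually means the kernel send buffer is full (often because the peer reads slowly), so retrying in a tight loop gets nowhere. A minimal sketch of the usual pattern, under the assumption that the channel is registered with a Selector (the class and method names here are illustrative, not from the original code):

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.SocketChannel;

        final class PartialWriteHelper {

            // Try to write; if bytes are left over, keep the buffer on the key and
            // ask the selector to wake us up when the channel is writable again.
            static void writeOrQueue(SelectionKey key, ByteBuffer buf) throws IOException {
                SocketChannel channel = (SocketChannel) key.channel();
                channel.write(buf);                        // may write 0..remaining() bytes
                if (buf.hasRemaining()) {
                    key.attach(buf);
                    key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
                }
            }

            // Call this from the select loop when key.isWritable().
            static void finishWrite(SelectionKey key) throws IOException {
                ByteBuffer buf = (ByteBuffer) key.attachment();
                SocketChannel channel = (SocketChannel) key.channel();
                channel.write(buf);
                if (!buf.hasRemaining()) {                 // done: stop asking for OP_WRITE
                    key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
                    key.attach(null);
                }
            }
        }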

    Read the article

  • SharePoint Foundation 2010 installation problems

    - by Robert Koritnik
    I'm having problems setting up a development machine for SharePoint (Foundation) 2010. This is what I have done so far on the machine:

    - Installed a clean Windows 7 x64 with 4 GB of RAM, not joined to any domain - just a simple standalone machine.
    - Enabled the IIS-related features as described here, except the two IIS6-related ones.
    - Installed SQL Server 2008 R2 Development Edition (DB Engine and Writer enabled, but not SQL Agent).
    - Installed Visual Studio 2010 Premium.
    - Started installing SharePoint Foundation 2010 by first extracting the files, changing the config to allow installation on Windows 7, and then installing it as Server Farm (then Complete) to avoid installing SQL Express.
    - Created a separate SPF_CONFIG local user with the "Log on as a service" right.
    - Opened the SPF Management Shell and ran New-SPConfigurationDatabase so I can use a non-domain username (the SPF_CONFIG user created in the previous step).

    But all I get is this: The outcome after this error is: the database Sharepoint2010Config is created, the user SPF_CONFIG is added to SQL Server and attached to this newly created database as db_owner, and checking the SQL Server security logins, this user has the following rights: dbcreator, securityadmin, public.
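
    For reference, a sketch of the shape that New-SPConfigurationDatabase call usually takes on a standalone box (the server name, passphrase and credential prompt are illustrative, not taken from the question):

        # Hypothetical sketch of the farm-creation command for a non-domain install.
        New-SPConfigurationDatabase `
            -DatabaseName   "Sharepoint2010Config" `
            -DatabaseServer "localhost" `
            -FarmCredentials (Get-Credential "$env:COMPUTERNAME\SPF_CONFIG") `
            -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force)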

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with the disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1 GB tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) beat the larger machine when creating the files. The larger machine took 10x as long to create the tablespaces as the other machines did. Now, I am left wondering how that could be. What programs or scripts would you guys recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10, but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.

    Read the article

  • Very slow remote page load performance in QtWebKit (Windows)

    - by Gil
    I get very slow load times with QtWebKit on Windows 7, over ADSL. I am using the Qt demo browser on a Core 2 Quad with 64-bit Windows 7, 4 GB RAM and a 2 GHz processor, through a VPN. Simplest example: the Google search page takes ~18 seconds to load, compared to 2.5 seconds in Chrome (cache cleared). On larger pages, with scripts etc., it is worse. I tried Qt 4.6 and also the Qt 4.7 beta, but don't see any difference. I see the same results with the Arora browser. Are there any settings or patches that can be applied to fix this? Thanks

    Read the article

  • Out of Memory on Update or Delete of Service Reference

    - by Kelly
    I have a Service Reference for a WCF project that has just over a hundred endpoints in my ServiceHost web.config. Every time I attempt to update or delete the Service Reference, it fails with an out of memory exception. I am running Vista Ultimate SP2 64-bit with 8GB RAM. I can work around it by going outside the project and deleting the Service References folder, then coming back in and re-adding the Reference. Is this the only workaround that you know of? Thanks!

    Read the article

  • MySQL query performance help

    - by Stefano
    Hi, I have a quite large table storing the words contained in email messages:

        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+

    The table contains about 100M rows. mwr_message_id is a FK to the messages table, mwr_word_id is a FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query:

        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;

    It runs almost forever (more than half an hour on the test server), and SHOW PROCESSLIST reports the query still sitting in the "Copying to tmp table" state after 1955 seconds. Is there anything I can do to speed up the query (apart from adding more RAM, more CPU, faster disks)? Thank you in advance, stefano
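
    One option worth measuring (a sketch, not tested against this table): add a covering index so MySQL can aggregate straight off the index instead of copying 100M rows into a temporary table.

        -- Hypothetical sketch: a composite index covering both the GROUP BY column
        -- and the summed column, so the query can be answered from the index alone.
        ALTER TABLE t_message_words
            ADD INDEX idx_word_count (mwr_word_id, mwr_count);

        -- Same query as before; EXPLAIN should now show "Using index".
        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;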

    Read the article

  • How can I efficiently group a large list of URLs by their host name in Perl?

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign URLs to groups, based on host address:

        {
          'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
          'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
        }

    My current basic solution takes about 600 MB of RAM to do this (the size of the file is about 300 MB). Could you provide some more efficient ways? My current solution simply reads line by line, extracts the host address by regex and puts the URL into a hash. EDIT Here is my implementation (I've cut off irrelevant things):

        while ($line = <STDIN>) {
            chomp($line);
            $line =~ /(http:\/\/.+?)(\/|$)/i;
            $host = "$1";
            push @{ $urls{$host} }, $line;
        }
        store \%urls, 'out.hash';
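
    One way to cut the memory footprint (a sketch under the assumption that the groups may live on disk and that the number of distinct hosts stays within the open-file limit; the out/ directory and file names are made up for the example): stream each URL straight into a per-host file instead of keeping every URL in the hash.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Hypothetical sketch: memory stays bounded by the number of distinct hosts,
        # because only file handles are kept in RAM, not the URLs themselves.
        my %fh;
        mkdir 'out' unless -d 'out';

        while (my $line = <STDIN>) {
            chomp $line;
            next unless $line =~ m{^(https?://[^/]+)}i;
            my $host = lc $1;
            unless ($fh{$host}) {
                (my $name = $host) =~ s{[^\w.-]}{_}g;        # safe file name
                open $fh{$host}, '>>', "out/$name.txt" or die $!;
            }
            print { $fh{$host} } "$line\n";
        }
        close $_ for values %fh;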

    Read the article

  • Hadoop on Windows Server

    - by Luca Martinetti
    Hello, I'm thinking about using Hadoop to process large text files on my existing Windows 2003 servers (about 10 quad-core machines with 16 GB of RAM). The questions are:

    - Is there any good tutorial on how to configure a Hadoop cluster on Windows?
    - What are the requirements? Java + Cygwin + sshd? Anything else?
    - HDFS - does it play nice on Windows?
    - I'd like to use Hadoop in streaming mode. Any advice, tool or trick to develop my own mappers/reducers in C#?
    - What do you use for submitting and monitoring the jobs?

    Thanks
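
    On the streaming question: a streaming mapper is just a program that reads lines from stdin and writes tab-separated key/value pairs to stdout, so any C# console executable will do. A minimal sketch (a word-count mapper, purely illustrative):

        using System;

        class WordCountMapper
        {
            static void Main()
            {
                string line;
                while ((line = Console.ReadLine()) != null)
                {
                    foreach (var word in line.Split(new[] { ' ', '\t' },
                                                    StringSplitOptions.RemoveEmptyEntries))
                    {
                        Console.WriteLine("{0}\t1", word);   // key <TAB> value
                    }
                }
            }
        }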

    Read the article

  • Using ManagementObject to retrieve a single WMI property

    - by Jesse
    This probably isn't the best way, but I am currently retrieving the amount of RAM on a machine using:

        manageObjSearch.Query = new ObjectQuery("SELECT TotalVisibleMemorySize FROM Win32_OperatingSystem");
        manageObjCol = manageObjSearch.Get();
        foreach (ManagementObject mo in manageObjCol)
            sizeInKilobytes = Convert.ToInt64(mo["TotalVisibleMemorySize"]);

    It works well and good, but I feel I could be doing this more directly and without a foreach over a single element, but I can't figure out how to index a ManagementObjectCollection. I want to do something like this:

        ManagementObject mo = new ManagementObject("Win32_OperatingSystem.TotalVisibleMemorySize");
        mo.Get();
        Console.WriteLine(mo["TotalVisibleMemorySize"].ToString());

    or maybe even something like:

        ManagementClass mc = new ManagementClass("Win32_OperatingSystem");
        Console.WriteLine(mc.GetPropertyValue("TotalVisibleMemorySize").ToString());

    I just can't seem to figure it out. Any ideas?
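
    One way to avoid the explicit loop (a sketch, assuming LINQ is available; Win32_OperatingSystem always has exactly one instance, so First() is safe here):

        using System;
        using System.Linq;
        using System.Management;   // reference System.Management.dll

        class Program
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    "SELECT TotalVisibleMemorySize FROM Win32_OperatingSystem");

                // ManagementObjectCollection is only IEnumerable, so Cast<> bridges it to LINQ.
                long sizeInKilobytes = searcher.Get()
                                               .Cast<ManagementObject>()
                                               .Select(mo => Convert.ToInt64(mo["TotalVisibleMemorySize"]))
                                               .First();

                Console.WriteLine(sizeInKilobytes);
            }
        }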

    Read the article

  • Writing file from HttpWebRequest periodically vs. after download finishes?

    - by WB3000
    Right now I am using this code to download files (with a Range header). Most of the files are large, and it is running at 99% CPU as the file downloads. Is there any way that the file can be written periodically so that it does not remain in RAM constantly?

        private byte[] GetWebPageContent(string url, long start, long finish)
        {
            byte[] result = new byte[finish];
            HttpWebRequest request;
            request = WebRequest.Create(url) as HttpWebRequest;
            //request.Headers.Add("Range", "bytes=" + start + "-" + finish);
            request.AddRange((int)start, (int)finish);
            using (WebResponse response = request.GetResponse())
            {
                return ReadFully(response.GetResponseStream());
            }
        }

        public static byte[] ReadFully(Stream stream)
        {
            byte[] buffer = new byte[32768];
            using (MemoryStream ms = new MemoryStream())
            {
                while (true)
                {
                    int read = stream.Read(buffer, 0, buffer.Length);
                    if (read <= 0)
                        return ms.ToArray();
                    ms.Write(buffer, 0, read);
                }
            }
        }
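
    A sketch of the streaming alternative (illustrative only: the method name and file path are made up, and the long overload of AddRange needs .NET 4, so keep the int casts as in the original on older frameworks): write each chunk to disk as it arrives, so only the 32 KB buffer stays in memory.

        using System;
        using System.IO;
        using System.Net;

        static class RangeDownloader
        {
            // Hypothetical sketch: stream the ranged response straight to disk in 32 KB
            // chunks instead of buffering the whole body in a MemoryStream.
            public static void DownloadRangeToFile(string url, long start, long finish, string path)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.AddRange(start, finish);          // long overload is .NET 4+

                using (WebResponse response = request.GetResponse())
                using (Stream body = response.GetResponseStream())
                using (var file = new FileStream(path, FileMode.Append, FileAccess.Write))
                {
                    var buffer = new byte[32768];
                    int read;
                    while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        file.Write(buffer, 0, read);      // written out as it arrives
                    }
                }
            }
        }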

    Read the article

  • Out of memory error while using clusterdata in MATLAB

    - by Hossein
    Hi, I am trying to cluster a matrix (size: 20057x2):

        T = clusterdata(X,cutoff);

    but I get this error:

        ??? Error using ==> pdistmex
        Out of memory. Type HELP MEMORY for your options.

        Error in ==> pdist at 211
        Y = pdistmex(X',dist,additionalArg);

        Error in ==> linkage at 139
        Z = linkagemex(Y,method,pdistArg);

        Error in ==> clusterdata at 88
        Z = linkage(X,linkageargs{1},pdistargs);

        Error in ==> kmeansTest at 2
        T = clusterdata(X,1);

    Can someone help me? I have 4 GB of RAM, but I think the problem is from somewhere else.
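
    The 4 GB of RAM is probably not the real constraint: clusterdata calls pdist, which materialises the full condensed distance vector before linkage ever runs. A quick back-of-the-envelope check (a sketch, not from the original post):

        % Hypothetical sanity check: how much memory pdist needs for 20057 points.
        n = 20057;
        bytes = n * (n - 1) / 2 * 8;     % n*(n-1)/2 pairwise distances, 8 bytes per double
        fprintf('pdist needs about %.1f GB in one contiguous block\n', bytes / 2^30);
        % ~1.5 GB, which 32-bit MATLAB usually cannot allocate as a single array,
        % regardless of how much physical RAM the machine has.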

    Read the article

  • MySQL - How To Avoid Repair With Keycache?

    - by dvancouver
    I have had some experience with optimizing the my.cnf file, but my database has around 4 million records (MyISAM). I am trying to restore from a mysqldump, but every time I do I eventually get the dreaded "Repair with keycache", which may take days. Is there any way to get past this and let it run as "Repair by sorting"? I have 2 GB RAM, dual cores, and lots of extra hard-drive space. Snip out of my.cnf:

        set-variable = max_connections=650
        set-variable = key_buffer=256M
        set-variable = myisam_sort_buffer_size=64M
        set-variable = join_buffer=1M
        set-variable = record_buffer=1M
        set-variable = sort_buffer_size=2M
        set-variable = read_buffer_size=2M
        set-variable = query_cache_size=32M
        set-variable = table_cache=1024
        set-variable = thread_cache_size=256
        set-variable = wait_timeout=7200
        set-variable = connect_timeout=10
        set-variable = max_allowed_packet=16M
        set-variable = max_connect_errors=10
        set-variable = thread_concurrency=8
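
    MyISAM falls back to "Repair with keycache" when the sort-based repair would overflow myisam_sort_buffer_size, myisam_max_sort_file_size or the temp directory. A sketch of the usual knobs to raise (example values for a 2 GB box, not tuned for this server):

        [mysqld]
        # Hypothetical sketch: make "Repair by sorting" feasible while reloading the dump.
        myisam_sort_buffer_size   = 256M
        myisam_max_sort_file_size = 10G
        tmpdir                    = /var/tmp   # needs enough free space for the sort files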

    Read the article

  • Linux display drivers

    - by salman
    I've run into a major display problem on newly installed Fedora 11, on my 6-year-old PC, which has a Pentium 4 2.4 GHz processor, 1 GB DDR RAM, and an Intel 845 motherboard with an integrated graphics card. When I open an image or play a video, my whole screen turns garbled. I simply cannot make out what's on my screen. With difficulty I have to close the image/video window and move the folder window around to clean up the screen image. Is it because of my display drivers? How can I fix it? I also ran into MP3 plugin and Flash issues, which I was able to resolve. I'm new to Linux; the sole purpose of installing it on my old PC was to learn Linux, but this display problem is frustrating me. Thanks, Salman

    Read the article

  • Web App Server hardware question. Which configuration?

    - by JBeckton
    I am pricing some new servers and I am not sure which configuration to get. The server will be running some web applications for our company; some of them are ASP.NET sites and some are ColdFusion. The OS will be Windows Server 2008 Web or Standard Edition. Do I need 2 processors, or will a single quad core handle it? Xeon multi-core with Hyper-Threading or without? I am going 64-bit so I can go higher than 4 GB of RAM. I am shopping at Dell and there are so many options. I do not want to get too much hardware and not use half of it, because that would be a waste of money, and I do not want to get too little and have to ask for more money to upgrade it later.

    Read the article

  • Memory in Eclipse

    - by user247866
    I'm getting the java.lang.OutOfMemoryError exception in Eclipse. I know that Eclipse by default uses a heap size of 256M. I'm trying to increase it but nothing happens. For example:

        eclipse -vmargs -Xmx16g -XX:PermSize=2g -XX:MaxPermSize=2g

    I also tried different settings, using only the -Xmx option, different cases of g, G, m, M, and different memory sizes, but nothing helps. No matter which params I specify, the heap exception is thrown at the same point, so I assume there's something I'm doing wrong that makes Eclipse ignore the -Xmx parameter. I'm using a machine with 32 GB of RAM and trying to execute something very simple such as:

        double[][] a = new double[15000][15000];

    It only works when I reduce the array size to something around 10000 by 10000. I'm working on Linux, and using the top command I can see how much memory the Java process is consuming; it's less than 2%. Thanks!
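
    One detail worth checking (an assumption about the setup, since the launch configuration isn't shown): -vmargs on the eclipse command line only sizes the IDE's own JVM; a class launched from Eclipse runs in a separate JVM whose heap is set under Run > Run Configurations... > Arguments > VM arguments (e.g. -Xmx4g). The array alone needs roughly 1.8 GB:

        public class HeapCheck {
            public static void main(String[] args) {
                // 15000 x 15000 doubles at 8 bytes each, ignoring per-row object overhead
                long bytes = 15000L * 15000L * 8L;
                System.out.printf("double[15000][15000] needs about %.1f GB%n", bytes / 1e9);
                double[][] a = new double[15000][15000];   // fails with the default launch heap
                System.out.println("allocated " + a.length + " rows");
            }
        }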

    Read the article

  • What is the typical setup for a laptop used for multi platform development?

    - by iama
    I am planning to build a new laptop setup for development on both Windows and Linux platforms. On Windows, my development would be primarily on .NET/C#/IIS/MS SQL Server. On Linux, preferably Ubuntu, my development would be on Ruby and Python. I am thinking of buying a laptop with Windows 7 pre-installed, with 4 GB RAM, an Intel Core 2 Duo and a 320 GB HD, and then running 2 VMs for Windows and Linux development with the host OS as my workstation. Of course, I would be running DBs and web servers on the respective platforms. Is this a typical setup? My only concern is running two VMs side by side; I am not sure if this configuration would be optimal. An alternative would be to do my Windows development on the host Windows 7 OS. Any thoughts?

    Read the article

  • rundll32.exe constantly running taking up resources slowing down my Win 7 computer

    - by Joe Fletcher
    Over the past week, my Windows 7 Home Premium computer (8 GB RAM, 64-bit) has been running slowly. When I look at my processes, there are always two rundll32.exe's running, taking 3% and 25% CPU, with memory slowly creeping upwards from around 115 MB to 160 MB each in the time it has taken me to write this message, sometimes popping up to 300 MB and back down. Svchost.exe is at 260 MB. When I end those processes, everything returns to snappiness. I recently did some Windows Updates, and I think it was around the time my computer started acting slowly, but I can't remember if it was before or after the updates that things started running slowly. Last night I ran CCleaner and defragmented. How can I diagnose what's causing the slowness?
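
    A quick way to see what those two processes are actually hosting (a sketch using stock Windows tooling; rundll32.exe itself is only a loader, so the DLL on its command line is the thing to investigate):

        :: Hypothetical sketch: list each rundll32.exe with the DLL it was asked to run.
        wmic process where "name='rundll32.exe'" get ProcessId,CommandLine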

    Read the article

  • Using a "local" S3 emulation layer as a replacement for HDFS?

    - by user183394
    I have been testing out the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook, which has 4 cores, 8 GB RAM, an Intel X25-M G2 SSD, and runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04 LTS 64-bit. So far so good. Looking at "Setting up hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook. Nevertheless, I can't find where I can change the jets3t.properties to set the endpoint to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer, instead of AWS S3, for our experiments. I would appreciate a hint as to how I can change the endpoint to a different hostname. Regards, --Zack
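
    For what it's worth, the s3/s3n filesystems go through the JetS3t library, which picks up a jets3t.properties found on the classpath (dropping one into the Hadoop conf directory is a common approach). A sketch of the relevant keys, assuming the emulator listens on plain HTTP on port 8080 (the port and flags here are illustrative):

        # Hypothetical sketch of jets3t.properties pointed at a local S3 emulator.
        s3service.s3-endpoint=localhost
        s3service.s3-endpoint-http-port=8080
        s3service.https-only=false
        s3service.disable-dns-buckets=true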

    Read the article

  • How to improve IntelliJ code editor speed?

    - by Hoàng Long
    I have been using IntelliJ (Community Edition) for several months, and at first I was pleased with its speed and simplicity. But now, after upgrading to version 10, it's extremely slow. Sometimes I click a file and it takes 5-15 seconds to open (it freezes for that time). I don't know if I have done anything to cause that: I have installed 2 plugins (regex, SQL), and had 2 versions of IntelliJ on my machine (now version 9 is removed, only version 10 remains). Are there any tips to improve the speed of the code editor, in general or specifically for IntelliJ? From my experience with IntelliJ so far: open IntelliJ a while before working, because it needs time for indexing; don't open too many code tabs; open as few other programs as possible. I'm using a 2 GB RAM Windows XP machine, and that seems just about enough for Java, IntelliJ and Chrome at the same time.
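
    One knob that often matters on a 2 GB machine (a sketch with example values, not a recommendation tuned for this setup): the IDE's own heap, set in idea.exe.vmoptions in IntelliJ's bin directory; too small a heap keeps the editor busy collecting garbage, too large starves the OS and everything else.

        # Hypothetical sketch of idea.exe.vmoptions for a 2 GB Windows XP box.
        -Xms128m
        -Xmx512m
        -XX:ReservedCodeCacheSize=64m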

    Read the article

  • How do I mock memory allocation failures ?

    - by Andrei Ciobanu
    I want to extensively test some pieces of C code for memory leaks. On my machine I have 4 GB of RAM, so it's very unlikely for a dynamic memory allocation to fail. Still, I want to see the behaviour of the code if memory allocation fails, and see if the recovery mechanism is "strong" enough. What do you suggest? How do I emulate an environment with lower memory specs? How do I mock my tests? EDIT: I want my tests to be code-independent. I only have "access" to the return values of the different functions in the library I am testing. I am not supposed to write "test logic" inside the code I am testing.
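
    One common trick that keeps the code under test untouched (a sketch assuming GCC and GNU ld on Linux; the wrapper's helper name is illustrative): link the test binary with -Wl,--wrap=malloc so every malloc() call is routed through a wrapper that can fail on demand.

        /* malloc_mock.c - compile into the test binary and link with -Wl,--wrap=malloc */
        #include <stdlib.h>

        void *__real_malloc(size_t size);      /* the real allocator, resolved by the linker */

        static long fail_after = -1;           /* -1 = never fail */

        void set_malloc_fail_after(long n)     /* the test calls this to schedule a failure */
        {
            fail_after = n;
        }

        void *__wrap_malloc(size_t size)
        {
            if (fail_after == 0)
                return NULL;                   /* simulated out-of-memory */
            if (fail_after > 0)
                fail_after--;
            return __real_malloc(size);
        }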

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help in setting up, installing and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2 GB DDR3 ECC RAM (4 GB burst), 1 core at 1.5 GHz (3 GHz burst), 100 GB RAID 10 storage, and unmetered bandwidth at 100 Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP (I will also need MySQL and PHP) stack guide on Linode, http://library.linode.com/lemp-guides/centos-5, but basically what I want is to be able to host multiple websites on this web server after everything is set up. I am used to the DirectAdmin control panel on another server and want things set up so I can host multiple websites, mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take full advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.
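
    For the multiple-site part, nginx handles this with one server block per site (a sketch of a single vhost; the domain, paths and PHP-FPM port are placeholders, and the WordPress/Drupal rewrite rules are reduced to the common try_files form):

        server {
            listen       80;
            server_name  site1.example.com;
            root         /var/www/site1;
            index        index.php index.html;

            location / {
                # clean URLs for WordPress/Drupal
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                include        fastcgi_params;
                fastcgi_pass   127.0.0.1:9000;   # php-fpm or spawned php-cgi
                fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }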

    Read the article
