Search Results

Search found 7118 results on 285 pages for 'sam ram san'.

Page 232 of 285

  • Eclipse refresh taking too long

    - by Nash0
    I am doing TDD on a large Java project in Eclipse and am finding it frustrating because every time I run a test I have to wait 30+ seconds for Eclipse to compile and refresh. I estimate that 80%+ of that time is spent refreshing. Is there a way I can drastically reduce the amount of refreshing it is doing? I have looked at several other similar questions but I could not see anything that helps. One way I reduced the compile/refresh time was to split the unit tests and code into separate projects. There are 4,700 classes in the src project and 300 in the tests. I am running Eclipse 3.5.1 on Java 1.6.0_17-b04 (eclipse.vm). My computer is running Windows XP with 3.1 GB of usable RAM. The only plugin I have installed is Subclipse.

    Read the article

  • C# Help with a basic pedagogic example of a BackgroundWorker process populating a DataGridView

    - by Roger
    Scenario: I have a Windows Form that holds a DataGridView with 3 pre-defined columns. I have 3 variables declared outside the function and assigned to inside the function. I have a function that enumerates stuff and puts it in the 3 columns, line by line:

        string VARIABLE1;
        string VARIABLE2;
        string VARIABLE3;

        private void FunctionEnumerateStuff()
        {
            foreach (StuffObject STUFF in StuffCollection)
            {
                VARIABLE1 = STUFF.SubStuff1.ToString();
                VARIABLE2 = STUFF.SubStuff2.ToString();
                VARIABLE3 = STUFF.SubStuff3.ToString();
                DataGridView1.Rows.Add(VARIABLE1, VARIABLE2, VARIABLE3);
            }
        }

    What I want to do is execute this function from a BackgroundWorker, so that the GUI of the application stays smooth and responsive. I have read up on BackgroundWorker, but I am having trouble relating, because all the examples seem to cover entirely different scenarios and most of them are overwhelmingly complex. Can some helpful pedagogic soul help me and others with a very basic example of how to get this to work in the simplest way possible? Thanks.
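
    Not from the original post, but a minimal sketch of one common pattern, reusing the question's hypothetical StuffObject/StuffCollection/DataGridView1 names: enumerate on the worker thread in DoWork, and hand each row to the UI thread via ReportProgress, since WinForms controls may only be touched from the thread that created them.

        // assumes: using System.ComponentModel; using System.Windows.Forms;
        private void StartButton_Click(object sender, EventArgs e)
        {
            var worker = new BackgroundWorker();
            worker.WorkerReportsProgress = true;

            // Runs on a thread-pool thread: do NOT touch UI controls here.
            worker.DoWork += (s, args) =>
            {
                foreach (StuffObject stuff in StuffCollection)
                {
                    var row = new object[]
                    {
                        stuff.SubStuff1.ToString(),
                        stuff.SubStuff2.ToString(),
                        stuff.SubStuff3.ToString()
                    };
                    ((BackgroundWorker)s).ReportProgress(0, row);
                }
            };

            // Runs back on the UI thread: safe to update the grid here.
            worker.ProgressChanged += (s, args) =>
                DataGridView1.Rows.Add((object[])args.UserState);

            worker.RunWorkerAsync();
        }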

    Read the article

  • Java NIO SocketChannel writing problem

    - by Nilesh
    I am using Java NIO's SocketChannel to write: int n = socketChannel.write(byteBuffer); Most of the time the data is sent in one or two parts, i.e. if the data could not be sent in one attempt, the remaining data is retried. The issue is that sometimes the data is not sent completely in one attempt, and when the rest of the data is retried, it happens that even after several attempts not a single byte is written to the channel; only after some time is the remaining data finally sent. What could be the cause of such behaviour? Could external factors such as RAM etc. cause the hindrance? Please help me solve this issue. If any other information is required please let me know. Thanks
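
    For what it's worth, write() returning 0 on a non-blocking channel usually just means the kernel's socket send buffer is full (for example because the peer is reading slowly), not that RAM is the bottleneck. A minimal sketch of the two usual handling strategies (the selector variable is a hypothetical open java.nio.channels.Selector, not from the question):

        // Blocking channel: simply loop until the buffer is drained.
        while (byteBuffer.hasRemaining()) {
            socketChannel.write(byteBuffer);
        }

        // Non-blocking channel: a tight retry loop burns CPU while the send
        // buffer is full. Instead, register interest in OP_WRITE and resume
        // writing when the selector reports the channel writable again.
        if (byteBuffer.hasRemaining()) {
            socketChannel.register(selector, SelectionKey.OP_WRITE, byteBuffer);
        }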

    Read the article

  • Sharepoint Foundation 2010 installation problems

    - by Robert Koritnik
    I'm having problems setting up a development machine for SharePoint (Foundation) 2010. This is what I have done so far on the machine:

    1. Installed a clean Windows 7 x64 with 4GB of RAM, not part of any domain; just a simple standalone machine.
    2. Enabled the IIS-related features as described here, except the two IIS6-related ones.
    3. Installed SQL Server 2008 R2 Developer Edition (DB Engine and Writer enabled, but not SQL Agent).
    4. Installed Visual Studio 2010 Premium.
    5. Started installing SharePoint Foundation 2010: first extracted the files, changed the config to enable installation on Windows 7, then installed it as Server Farm (then Complete) to avoid installing SQL Express.
    6. Created a separate SPF_CONFIG local user with the "Log on as a service" right.
    7. Opened the SPF Management Shell and ran New-SPConfigurationDatabase so I am able to use a non-domain username (the SPF_CONFIG user created in the previous step).

    But all I get is an error. The outcome after this error is: the database Sharepoint2010Config is created; the SPF_CONFIG user is added to SQL Server and attached to this newly created database as dbowner; and checking the SQL Server security logins, this user has the following roles: dbcreator, securityadmin, public.

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1GB tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) killed the larger machine when creating the files. The larger machine took 10 times as long to create the tablespaces as the other machines did. Now, I am left wondering how that could be. What programs or scripts would you recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10, but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.

    Read the article

  • Very slow remote page load performance in QtWebKit (Windows)

    - by Gil
    I get very slow load times in QtWebKit on Windows 7, over ADSL. I am using the Qt demo browser on a Core 2 Quad, 64-bit Windows 7, 4GB RAM, 2GHz processor, through a VPN. Simplest example: the Google search page takes ~18 seconds to load, compared to 2.5 seconds in Chrome (cache cleared). On larger pages, with scripts etc., it is worse. I tried Qt 4.6 and also the Qt 4.7 beta, but don't see any difference. I see the same results with the Arora browser. Are there any settings or patches that can be applied to fix this? Thanks

    Read the article

  • Out of Memory on Update or Delete of Service Reference

    - by Kelly
    I have a Service Reference for a WCF project that has just over a hundred endpoints in my ServiceHost web.config. Every time I attempt to update or delete the Service Reference, it fails with an out of memory exception. I am running Vista Ultimate SP2 64-bit with 8GB RAM. I can work around it by going outside the project and deleting the Service References folder, then coming back in and re-adding the Reference. Is this the only workaround that you know of? Thanks!

    Read the article

  • mysql query performance help

    - by Stefano
    Hi, I have a quite large table storing words contained in email messages:

        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+

    The table contains about 100M rows. mwr_message_id is a FK to the messages table, mwr_word_id is a FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query:

        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;

    It runs almost forever (more than half an hour on the test server), stuck in the "Copying to tmp table" state:

        mysql> show processlist;
        | Id | User | Host           | db     | Command | Time | State                | Info
        | 41 | root | localhost:3148 | tst_db | Query   | 1955 | Copying to tmp table | SELECT SUM(mwr_count) AS word_count, mwr_word_id FROM t_message_words GROUP BY mwr_word_id

    Is there anything I can do to speed up the query, apart from adding more RAM, more CPU, or faster disks? Thank you in advance, Stefano
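
    Not part of the question, but the "Copying to tmp table" state hints at one standard fix: a composite index on (mwr_word_id, mwr_count) lets MySQL read the rows in GROUP BY order straight from the index instead of materializing a huge temporary table. A sketch (the index name is arbitrary, and building it on 100M rows will itself take a while):

        ALTER TABLE t_message_words
            ADD INDEX idx_word_count (mwr_word_id, mwr_count);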

    Read the article

  • How can I efficiently group a large list of URLs by their host name in Perl?

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign URLs to groups, based on host address:

        {
          'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
          'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
        }

    My current basic solution takes about 600 MB of RAM to do this (the file itself is about 300 MB). Could you provide some more efficient ways? My current solution simply reads line by line, extracts the host address by regex and puts the URL into a hash. EDIT: Here is my implementation (I've cut off irrelevant things):

        while ($line = <STDIN>) {
            chomp($line);
            $line =~ /(http:\/\/.+?)(\/|$)/i;
            $host = "$1";
            push @{$urls{$host}}, $line;
        }
        store \%urls, 'out.hash';

    Read the article

  • Hadoop on windows server

    - by Luca Martinetti
    Hello, I'm thinking about using Hadoop to process large text files on my existing Windows 2003 servers (about 10 quad-core machines with 16GB of RAM). The questions are:

    1. Is there any good tutorial on how to configure a Hadoop cluster on Windows?
    2. What are the requirements? Java + Cygwin + sshd? Anything else?
    3. HDFS: does it play nice on Windows?
    4. I'd like to use Hadoop in streaming mode. Any advice, tool or trick for developing my own mappers/reducers in C#?
    5. What do you use for submitting and monitoring the jobs?

    Thanks

    Read the article

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first 4 and X: used as a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents; nothing. This is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong. Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes re-synced so that failover can be accomplished? I don't know that there wasn't a K: drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • Using ManagementObject to retrieve a single WMI property

    - by Jesse
    This probably isn't the best way, but I am currently retrieving the amount of RAM on a machine using:

        manageObjSearch.Query = new ObjectQuery(
            "SELECT TotalVisibleMemorySize FROM Win32_OperatingSystem");
        manageObjCol = manageObjSearch.Get();
        foreach (ManagementObject mo in manageObjCol)
            sizeInKilobytes = Convert.ToInt64(mo["TotalVisibleMemorySize"]);

    It works well and good, but I feel I could be doing this more directly and without a foreach over a single element, but I can't figure out how to index a ManagementObjectCollection. I want to do something like this:

        ManagementObject mo = new ManagementObject("Win32_OperatingSystem.TotalVisibleMemorySize");
        mo.Get();
        Console.WriteLine(mo["TotalVisibleMemorySize"].ToString());

    or maybe even something like:

        ManagementClass mc = new ManagementClass("Win32_OperatingSystem");
        Console.WriteLine(mc.GetPropertyValue("TotalVisibleMemorySize").ToString());

    I just can't seem to figure it out. Any ideas?
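
    Two approaches that avoid the explicit foreach, sketched under the assumption that only this one property is needed. Win32_OperatingSystem is declared as a WMI singleton class, so it can be bound directly by path with the "=@" selector; alternatively, LINQ flattens the collection (assumes using System.Linq and using System.Management):

        // Option 1: bind the singleton instance directly by path.
        var os = new ManagementObject("Win32_OperatingSystem=@");
        os.Get();
        long sizeInKilobytes = Convert.ToInt64(os["TotalVisibleMemorySize"]);

        // Option 2: keep the query, but take the single result via LINQ
        // (ManagementObjectCollection is only IEnumerable, hence the Cast).
        using (var searcher = new ManagementObjectSearcher(
                   "SELECT TotalVisibleMemorySize FROM Win32_OperatingSystem"))
        {
            var mo = searcher.Get().Cast<ManagementObject>().First();
            sizeInKilobytes = Convert.ToInt64(mo["TotalVisibleMemorySize"]);
        }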

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store the following: a BLOB (which contains a JSON object) and an ID (for this JSON object). Each JSON object contains a lot of different information. Say, "city: Los Angeles" and "state: California". There are about 500k such records for now, but they are growing. And each JSON object is quite big. My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job", which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those (IDs) which qualify. Pros/cons? I understand that one might think I should utilize plain SQL power for that, but the thing is that the JSON object structure is pretty "heavy"; if I put it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it is utilizing the indexes, otherwise with a full scan it literally is a pain. And with such a structure the only way "up" is vertical scaling. But I am not sure it's the best option for me, as I see how the JSON objects will grow (the data structure), and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
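
    One caveat worth knowing: a Hadoop job typically carries tens of seconds of startup latency, so this pattern gives batch search rather than real-time search. That said, the filtering mapper for the described job is small. A sketch (the field names, the naive substring matching, and feeding rows in via something like DBInputFormat are all illustrative assumptions; real code would parse the JSON with Jackson or Gson):

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.NullWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        // Input: key = record ID, value = the JSON blob for that record.
        // Output: the IDs of records matching the search criteria.
        public class JsonFilterMapper
                extends Mapper<LongWritable, Text, LongWritable, NullWritable> {

            @Override
            protected void map(LongWritable id, Text jsonBlob, Context ctx)
                    throws IOException, InterruptedException {
                String json = jsonBlob.toString();
                if (json.contains("\"state\":\"California\"")
                        && json.contains("\"city\":\"San Francisco\"")) {
                    ctx.write(id, NullWritable.get());
                }
            }
        }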

    Read the article

  • Out of memory error while using clusterdata in MATLAB

    - by Hossein
    Hi, I am trying to cluster a matrix (size: 20057x2):

        T = clusterdata(X,cutoff);

    but I get this error:

        ??? Error using ==> pdistmex
        Out of memory. Type HELP MEMORY for your options.

        Error in ==> pdist at 211
        Y = pdistmex(X',dist,additionalArg);

        Error in ==> linkage at 139
        Z = linkagemex(Y,method,pdistArg);

        Error in ==> clusterdata at 88
        Z = linkage(X,linkageargs{1},pdistargs);

        Error in ==> kmeansTest at 2
        T = clusterdata(X,1);

    Can someone help me? I have 4GB of RAM, but I think the problem is from somewhere else.

    Read the article

  • MySQL - How To Avoid Repair With Keycache?

    - by dvancouver
    I have had some experience with optimizing the my.cnf file, but my database has around 4 million records (MyISAM). I am trying to restore from a mysqldump, but every time I do I eventually hit the dreaded "Repair with keycache" state, which can take days. Is there any way to get past this and have it run as "Repair by sorting"? I have 2GB RAM, dual cores, and lots of extra hard-drive space. Snip out of my.cnf:

        set-variable = max_connections=650
        set-variable = key_buffer=256M
        set-variable = myisam_sort_buffer_size=64M
        set-variable = join_buffer=1M
        set-variable = record_buffer=1M
        set-variable = sort_buffer_size=2M
        set-variable = read_buffer_size=2M
        set-variable = query_cache_size=32M
        set-variable = table_cache=1024
        set-variable = thread_cache_size=256
        set-variable = wait_timeout=7200
        set-variable = connect_timeout=10
        set-variable = max_allowed_packet=16M
        set-variable = max_connect_errors=10
        set-variable = thread_concurrency=8
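
    For reference, MyISAM commonly falls back from "Repair by sorting" to the much slower "Repair with keycache" when the index sort would exceed myisam_max_sort_file_size or the sort buffer is too small. A hedged sketch of the knobs usually involved; the values are illustrative, not tuned for this box:

        myisam_sort_buffer_size = 256M
        myisam_max_sort_file_size = 10G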

    Read the article

  • Writing file from HttpWebRequest periodically vs. after download finishes?

    - by WB3000
    Right now I am using this code to download files (with a Range header). Most of the files are large, and it uses 99% CPU while the file downloads. Is there any way the file can be written periodically, so that it does not remain in RAM the whole time?

        private byte[] GetWebPageContent(string url, long start, long finish)
        {
            byte[] result = new byte[finish];
            HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
            //request.Headers.Add("Range", "bytes=" + start + "-" + finish);
            request.AddRange((int)start, (int)finish);

            using (WebResponse response = request.GetResponse())
            {
                return ReadFully(response.GetResponseStream());
            }
        }

        public static byte[] ReadFully(Stream stream)
        {
            byte[] buffer = new byte[32768];
            using (MemoryStream ms = new MemoryStream())
            {
                while (true)
                {
                    int read = stream.Read(buffer, 0, buffer.Length);
                    if (read <= 0)
                        return ms.ToArray();
                    ms.Write(buffer, 0, read);
                }
            }
        }
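
    A sketch of the streaming alternative: write each chunk to a FileStream as it arrives instead of accumulating everything in a MemoryStream, so only the 32KB buffer lives in RAM (the path parameter is illustrative):

        private static void DownloadToFile(string url, long start, long finish,
                                           string path)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.AddRange((int)start, (int)finish);

            using (WebResponse response = request.GetResponse())
            using (Stream body = response.GetResponseStream())
            using (FileStream file = new FileStream(path, FileMode.Append,
                                                    FileAccess.Write))
            {
                byte[] buffer = new byte[32768];
                int read;
                while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                {
                    file.Write(buffer, 0, read);  // goes to disk, not RAM
                }
            }
        }

    (The 99% CPU is likely a separate issue; buffering in a MemoryStream costs memory rather than CPU, so this change mainly addresses the RAM concern.)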

    Read the article

  • linux display drivers

    - by salman
    I've run into a major display problem on newly installed Fedora 11, on my 6-year-old PC which has a Pentium 4 2.4 GHz processor, 1 GB DDR RAM, and an Intel 845 motherboard with an integrated graphics card. When I open an image or play a video, my whole screen turns garbled; I simply cannot make out what's on the screen. With difficulty I have to close the image/video window and move the folder window around to clean up the screen image. Is it because of my display drivers? How can I fix it? I also ran into MP3 plugin and Flash issues, which I was able to resolve. I'm new to Linux; the sole purpose of installing it on my old PC was to learn Linux, but this display problem is frustrating me. Thanks, Salman

    Read the article

  • Web App Server hardware question. Which configuration?

    - by JBeckton
    I am pricing some new servers and I am not sure which configuration to get. The server will be running some web applications for our company; some of them are ASP.NET sites and some are ColdFusion. The OS will be Win Server 2008 Web or Standard Edition. Do I need 2 processors, or will a single quad core handle it? Xeon multi-core with Hyper-Threading or without? I am going 64-bit so I can go higher than 4 gigs of RAM. I am shopping at Dell and there are so many options. I do not want to get too much hardware and not use half of it, because that would be a waste of money, and I do not want to get too little and have to ask for more money to upgrade it later.

    Read the article

  • Memory in Eclipse

    - by user247866
    I'm getting the java.lang.OutOfMemoryError exception in Eclipse. I know that Eclipse by default uses a heap size of 256M. I'm trying to increase it but nothing happens. For example: eclipse -vmargs -Xmx16g -XX:PermSize=2g -XX:MaxPermSize=2g I also tried different settings: using only the -Xmx option, using different cases of g, G, m, M, and different memory sizes, but nothing helps. No matter which params I specify, the heap exception is thrown at the same point, so I assume there's something I'm doing wrong that makes Eclipse ignore the -Xmx parameter. I'm using a 32GB RAM machine and trying to execute something very simple such as: double[][] a = new double[15000][15000]; It only works when I reduce the array size to around 10000 by 10000. I'm working on Linux, and using the top command I can see how much memory the Java process is consuming; it's less than 2%. Thanks!

    Read the article

  • Speeding up a soap powered website

    - by ChrisRamakers
    Hi all, We're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP webservice. But our servers are located in Belgium and the webservice we connect to is located in San Francisco, so it's a long-distance connection to say the least. Our website is PHP-powered, using PHP's built-in SoapClient class. On average a call to the webservice takes 0.7 seconds, and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve speed on single requests. Does anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));
                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
                );
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }
            $this->client = $client;
            return $this->client;
        }

    Read the article

  • rundll32.exe constantly running taking up resources slowing down my Win 7 computer

    - by Joe Fletcher
    Over the past week, my Windows 7 Home Premium computer (8GB RAM, 64-bit) has been running slowly. When I look at my processes, there are always two rundll32.exe's running, taking 3% and 25% CPU, their memory slowly creeping upwards from around 115MB to 160MB each in the time it has taken me to write this message, sometimes popping up to 300MB and back down. Svchost.exe is at 260MB. When I end those processes, everything returns to snappiness. I recently did some Windows Updates, and I think it was around the time my computer started acting slowly, but I can't remember if it was before or after the updates that things started running slowly. Last night I ran CCleaner and defragged. How can I diagnose what's causing the slowness?
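
    Not from the post, but a common first step on Windows 7: rundll32.exe is only a host process, so seeing which DLL each instance was launched with usually identifies the culprit. From a command prompt:

        wmic process where "name='rundll32.exe'" get ProcessId,CommandLine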

    Read the article

  • What is the typical setup for a laptop used for multi platform development?

    - by iama
    I am planning to build a new laptop for development for both Windows and Linux platforms. On Windows, my development would be primarily on .NET/C#/IIS/MS SQL Server. On Linux, preferably Ubuntu, my development would be on Ruby and Python. I am thinking of buying a laptop with Windows 7 pre-installed with 4GB RAM/Intel Core 2 Duo/320GB HD and then thinking of running 2 VMs for both Windows and Linux development, with the host OS as my workstation. Of course, I would be running DBs and web servers on the respective platforms. Is this a typical setup? My only concern is running two VMs side by side. Not sure if this configuration would be optimal. An alternative would be to do my Windows development on the host Windows 7 OS. Any thoughts?

    Read the article

  • Using a "local" S3 emulation layer as a replacement for HDFS?

    - by user183394
    I have been testing out the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook, which has 4 cores, 8GB RAM, an Intel X25-M G2 SSD, and runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04 LTS 64-bit. So far so good. Looking at "Setting up hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook. Nevertheless, I can't find where I can change jets3t.properties to set the endpoint to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer, instead of AWS S3, for our experiments. I would appreciate a hint as to how I can change the endpoint to a different hostname. Regards, --Zack
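
    If it helps, the endpoint is a JetS3t client setting rather than a Hadoop one, so a jets3t.properties file on Hadoop's classpath (e.g. in the conf directory) is the usual place for it. A sketch using JetS3t's documented property names; the port and flags are illustrative guesses for a local HTTP emulator, not verified against CDH4:

        s3service.s3-endpoint=localhost
        s3service.s3-endpoint-http-port=8080
        s3service.https-only=false
        s3service.disable-dns-buckets=true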

    Read the article

  • How do I mock memory allocation failures ?

    - by Andrei Ciobanu
    I want to extensively test some pieces of C code for memory leaks. On my machine I have 4 GB of RAM, so it's very unlikely for a dynamic memory allocation to fail. Still, I want to see how the code behaves if memory allocation fails, and whether the recovery mechanism is "strong" enough. What do you suggest? How do I emulate an environment with lower memory specs? How do I mock my tests? EDIT: I want my tests to be code-independent. I only have "access" to return values of different functions in the library I am testing. I am not supposed to write "test logic" inside the code I am testing.
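
    One code-independent approach on Linux is an LD_PRELOAD interposer: the test binary stays unchanged, and a preloaded library decides when malloc fails. A minimal sketch (glibc assumed; note that on some glibc versions dlsym itself allocates, in which case a small static fallback buffer is the usual workaround):

        /* fail_malloc.c
         * Build: gcc -shared -fPIC -o fail_malloc.so fail_malloc.c -ldl
         * Run:   FAIL_EVERY=100 LD_PRELOAD=./fail_malloc.so ./test_binary
         */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stddef.h>
        #include <stdlib.h>

        void *malloc(size_t size)
        {
            static void *(*real_malloc)(size_t);
            static unsigned long count;
            static long fail_every = -1;

            if (fail_every == -1) {
                real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
                const char *env = getenv("FAIL_EVERY");
                fail_every = env ? atol(env) : 0;
            }
            if (fail_every > 0 && ++count % fail_every == 0)
                return NULL;   /* simulated out-of-memory */
            return real_malloc(size);
        }

    A cruder alternative that needs no code at all: cap the address space with ulimit -v before running the tests.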

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help in setting up, installing and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage, and unmetered bandwidth at 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP stack guide on Linode (I will also need MySQL and PHP): http://library.linode.com/lemp-guides/centos-5 but basically what I want is to be able to host multiple websites on this webserver after everything is set up. I am used to the DirectAdmin control panel on other servers, and I want things set up so I can host multiple websites, mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.
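
    For context, nginx hosts multiple sites with one server block per site: it picks the block whose server_name matches the incoming Host header. A sketch of one such block, assuming PHP-FPM listening on 127.0.0.1:9000; the domain names and paths are placeholders, and roughly the same block is repeated per site:

        server {
            listen       80;
            server_name  example1.com www.example1.com;
            root         /var/www/example1.com;
            index        index.php index.html;

            location / {
                # Front-controller pattern used by WordPress and Drupal.
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include        fastcgi_params;
                fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass   127.0.0.1:9000;
            }
        }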

    Read the article
