Search Results

Search found 18119 results on 725 pages for 'shared memory'.


  • Gluster strange issue with shared mount point behaving like separate mounts.

    - by Satish
    I have two nodes, and as an experiment I have installed GlusterFS, created a volume, and successfully mounted it on each node. But if I create a file on node1 it does not show up on node2; the two mounts behave as if they were separate. node1: 10.101.140.10:/nova-gluster-vol 2.0G 820M 1.2G 41% /mnt node2: 10.101.140.10:/nova-gluster-vol 2.0G 33M 2.0G 2% /mnt Volume split-brain info: $ sudo gluster volume heal nova-gluster-vol info split-brain Gathering Heal info on volume nova-gluster-vol has been successful Brick 10.101.140.10:/brick1/sdb Number of entries: 0 Brick 10.101.140.20:/brick1/sdb Number of entries: 0 Test on node1: $ echo "TEST" > /mnt/node1 $ ls -l /mnt/node1 -rw-r--r-- 1 root root 5 Oct 27 17:47 /mnt/node1 On node2 the file isn't there, although the mount is supposedly shared: $ ls -l /mnt/node1 ls: cannot access /mnt/node1: No such file or directory What am I missing?
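
    A hedged guess, given the mismatched usage figures in the two df lines above: node2 may be mounting the raw brick (or a stale mount) rather than the gluster volume. A minimal check-and-remount sketch (host and volume names taken from the question):

        # on node2: confirm what is actually mounted at /mnt, then remount the volume
        mount | grep /mnt
        umount /mnt
        mount -t glusterfs 10.101.140.10:/nova-gluster-vol /mnt
        ls -l /mnt/node1   # should now show the file created on node1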

    Read the article

  • How to deploy an ASP.NET MVC application on shared hosting without losing the beautiful URIs?

    - by ZX12R
    I am trying to upload an ASP.NET MVC application to a shared server. I don't have access to IIS. What I have done, after reading from various sources, is change the route method in the global.asax file as follows: routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", "{controller}.aspx/{action}/{id}", new { action = "Index", id = "" } ); routes.MapRoute( "Root", "", new { controller = "Home", action = "Index", id = "" } ); It is working fine, but the problem is that I have lost the beautiful URIs. Is there a way to remove the ".aspx" behind the controller? Thanks in advance. :)
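
    A hedged option, if the host happens to run IIS 7 or later in integrated pipeline mode: extensionless routes can often be restored from web.config alone, with no IIS console access needed. A sketch:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    On IIS 6 without a wildcard script map, the ".aspx" trick above (or a URL-rewriting component installed by the host) is usually the only way, so this is worth checking with the host first.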

    Read the article

  • C Programming: malloc() inside another function

    - by vikramtheone
    Hi Guys, I need help with malloc() inside another function. I'm passing a pointer and a size to the function from my main(), and I would like to allocate memory for that pointer dynamically using malloc() from inside that called function. But what I see is that the memory which gets allocated is for the pointer declared within my called function, and not for the pointer which is inside main(). How should I pass a pointer to a function and allocate memory for the passed pointer from inside the called function? Can anyone throw light on this? Help!!! Vikram I have written the following code and I get the output shown below. SOURCE: main() { unsigned char *input_image; unsigned int bmp_image_size = 262144; if(alloc_pixels(input_image, bmp_image_size)==NULL) printf("\nPoint2: Memory allocated: %d bytes",_msize(input_image)); else printf("\nPoint3: Memory not allocated"); } signed char alloc_pixels(unsigned char *ptr, unsigned int size) { signed char status = NO_ERROR; ptr = NULL; ptr = (unsigned char*)malloc(size); if(ptr== NULL) { status = ERROR; free(ptr); printf("\nERROR: Memory allocation did not complete successfully!"); } printf("\nPoint1: Memory allocated: %d bytes",_msize(ptr)); return status; } PROGRAM OUTPUT: Point1: Memory allocated ptr: 262144 bytes Point2: Memory allocated input_image: 0 bytes
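
    The usual explanation: C passes the pointer itself by value, so alloc_pixels() changes only its local copy, and input_image in main() is never touched. A minimal sketch of the standard double-pointer fix (error constants simplified to return codes, and the MSVC-specific _msize() calls dropped for portability):

        #include <stdio.h>
        #include <stdlib.h>

        /* Take the ADDRESS of the caller's pointer so the allocation
           is visible back in main(). */
        static signed char alloc_pixels(unsigned char **ptr, unsigned int size)
        {
            *ptr = malloc(size);
            if (*ptr == NULL) {
                printf("ERROR: Memory allocation did not complete successfully!\n");
                return -1;
            }
            return 0;
        }

        int main(void)
        {
            unsigned char *input_image;
            unsigned int bmp_image_size = 262144;

            if (alloc_pixels(&input_image, bmp_image_size) == 0) {
                printf("Memory allocated: %u bytes\n", bmp_image_size);
                free(input_image);
            } else {
                printf("Memory not allocated\n");
            }
            return 0;
        }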

    Read the article

  • How to create an Appointment in a Shared Calendar (SharePoint) with VBA (Macro)?

    - by Diogo K.
    I am trying to create an appointment from an Excel spreadsheet. I have all the information for the appointment (subject, body, start and end dates), and I can already create an appointment, but only in my personal calendar in Outlook. How do I copy/move/create an appointment in a shared calendar on a SharePoint server? I've tried: Dim apOL As Object Dim objFolder As Folder Dim cro As String Set apOL = CreateObject("Outlook.Application") Set oItem = apOL.CreateItem(olApItem) Set MAPISession = apOL.Session ... cro = "stssync://sts/?ver=1.0&type=calendar&cmd=add-folder&base-url=(MY SP SERVER)&list-url=%2FLists%2FCronograma%20%20%2Fcalendar%2Easpx&guid=%7B02717CEF%2D404F%2D482F%2DA131%2D5C3C245CD268%7D&site-name=Testes&list-name=Cronograma%20-" ... Set objFolder = MAPISession.OpenSharedFolder(cro, Null, Null, Null) It gives me the error "Type Mismatch". My plan was to get objFolder as the SharePoint folder, then create a local appointment and try Item.Move objFolder. Is that the correct way?
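
    One hedged thing to try before restructuring: in VBA, passing Null for the optional parameters of NameSpace.OpenSharedFolder can itself raise "Type Mismatch", so omitting them is safer. This assumes Outlook 2007 or later, where OpenSharedFolder exists, and that it accepts the stssync URL, which is an assumption:

        ' omit the optional arguments instead of passing Null
        Set objFolder = MAPISession.OpenSharedFolder(cro)

    If the folder opens, creating the appointment locally and then Item.Move objFolder (as planned above) is a reasonable approach.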

    Read the article

  • Moving from dedicated to shared cPanel - any scripts to do all or some of the install tasks?

    - by mbbcat
    Hi, I have a few hundred phpLD sites to move. Each has its own cPanel (and the target may be shared cPanel). I can do a full cPanel backup on the original server, but I don't have WHM on the current host. The backups are fairly easy to organize, but so far the installs mean picking through files and setting up databases, mail, etc. by hand. I am thinking there ought to be an easier, i.e. scripted, way to do the installs, or at least some parts of them. Can anyone please suggest something? I would like to migrate the stats at the same time. Thanks, M

    Read the article

  • Could/Should I use static classes in ASP.NET/C# for shared data?

    - by death.au
    Here's the situation I have: I'm building an online system to be used by school groups. Only one school can log into the system at any one time, and from that school you'll get about 13 users. They then proceed into an educational application in which they have to co-operate to complete tasks and, from a code point of view, share variables all over the place. I was thinking that if I set up a static class with static properties that hold the variables that need to be shared, this could save me having to store/access the variables in/from a database, as long as the static variables are all properly initialized when the application starts and cleaned up at the end. Of course I would also have to put locks on the get and set methods to make the variables thread safe. Something in the back of my mind is telling me this might be a terrible way of going about things, but I'm not sure exactly why, so if people could give me their thoughts for or against using a static class in this situation, I would be quite appreciative.
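
    For reference, a minimal sketch of the locked static holder described above (all names hypothetical). One caution that may be the source of the nagging feeling: statics live per application domain, so they are lost on every app-pool recycle and are not shared across web-garden or web-farm processes:

        using System.Collections.Generic;

        public static class SharedState
        {
            private static readonly object Gate = new object();
            private static readonly Dictionary<string, object> Values =
                new Dictionary<string, object>();

            public static object Get(string key)
            {
                lock (Gate)
                {
                    object value;
                    return Values.TryGetValue(key, out value) ? value : null;
                }
            }

            public static void Set(string key, object value)
            {
                lock (Gate) { Values[key] = value; }
            }
        }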

    Read the article

  • How do I set up a shared session between two users on my Ruby on Rails powered site?

    - by ben
    Hey guys, The website that I'm building includes a section where two users can interact. I think I know how to do most of it, except the actual session-sharing part. I'm using Ruby on Rails and JavaScript (jQuery), and I've got user login and session management all working okay. Would the best way to create a shared session be to have a SharedSession model, with an accompanying database table, with participant1ID, participant2ID, etc.? Is there a better way? Thanks so much for reading!
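
    A minimal sketch of the SharedSession idea, under the assumption that the "shared session" is really shared application state rather than a shared Rack session (all names hypothetical):

        # migration
        create_table :shared_sessions do |t|
          t.integer :participant1_id
          t.integer :participant2_id
          t.string  :state          # whatever the interaction needs to share
          t.timestamps
        end

        # model
        class SharedSession < ActiveRecord::Base
          belongs_to :participant1, :class_name => "User", :foreign_key => "participant1_id"
          belongs_to :participant2, :class_name => "User", :foreign_key => "participant2_id"
        end

    Polling or pushing (e.g. periodic jQuery AJAX calls against this record) would then keep the two browsers in sync.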

    Read the article

  • Is it OK to run an array with 22k strings in PHP code on a shared web host?

    - by kuchikoo
    I'm new to writing code, so kindly bear with me if this is a very noobish question. A couple of days back I asked a question about PHP code that matches the query entered by users on my website against an array stored within the PHP code and displays the matched queries. Here is the code I'm talking about. Now I've ended up with a rather large list (over 22k) of strings that have to be stored in the array. Is it OK to run it like this? I'm hosting the site on a shared HostGator package; will this cause my site to crash? I don't know too much about DBs, but can I somehow store this in MySQL instead of having it in the code?
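
    On the MySQL idea: yes, and it is usually the better fit at this size. A hedged sketch, importing the strings into a table once and letting the database do the matching per request (table, column, and credential names hypothetical):

        <?php
        // per request: match against the table instead of a 22k-entry array
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT term FROM terms WHERE term LIKE ? LIMIT 20');
        $stmt->execute(array($userQuery . '%'));
        $matches = $stmt->fetchAll(PDO::FETCH_COLUMN);

    With an index on the term column, prefix matches like this stay fast, and the 22k strings no longer have to be parsed into memory on every page load.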

    Read the article

  • NHibernate will insert but not update after moving to a shared server running MySQL.

    - by Andy LifeBrixx
    Hi, I have a site running MVC and NHibernate (not Fluent), using a standard session-per-request HTTP module. It runs fine locally (also with MySQL), but after a move to a hosting provider no UPDATE statements are being issued. I can insert but not update, and no exceptions are raised. I have the 'show_sql' option switched on, which locally shows the UPDATE statements being issued, but on the server no UPDATE statements are logged. I don't think NHProf is an option for me, as I can only run ASP.NET apps on my shared server. Are there any other methods of diagnosing NHibernate issues like this? Has anyone had a similar issue? Cheers, A
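
    One hedged thing to verify: NHibernate only writes dirty changes when the session is flushed, which with session-per-request normally means an explicit transaction committed (or Flush() called) before the module disposes the session. If the hosting environment ends the request differently (e.g. the module's commit hook never fires there), updates can vanish silently while inserts, which execute immediately for identity-style generators, still appear. A sketch of the pattern to confirm is in place:

        using (var tx = session.BeginTransaction())
        {
            // ... changes to loaded entities are tracked here ...
            tx.Commit();   // flushes pending UPDATE statements; without it they are discarded
        }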

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application.

    The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal axis in arbitrary 'tick' time units. Click the image to see it full sized.

    Pooled connections: Beginning with the top left graph, at tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80, and a final hundred users were added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections.

    The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability in this small test. Each extra batch of users adds another 'step' of load to the system.

    The top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool starts at its default size of 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes.

    Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see that the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes, but this is almost too small to see on the graph.

    Non-pooled connections: Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed.

    The Executions per Second graph on the bottom left shows the same step increases as in the earlier DRCP case. The database is handling this load. But look at the number of server processes on the top right graph. There is now a one-to-one correspondence between Apache/PHP processes and DB server processes: each PHP process has one DB server process dedicated to it, hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.

    Summary: Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated for the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

  • Error java.lang.OutOfMemoryError: getNewTla using Oracle EPM products

    - by Marc Schumacher
    When running into a Java out-of-memory error, a very common reaction in the field is to increase the Java heap size. While this might help to solve a heap space out-of-memory error, it might not help to fix an out-of-memory error for the Thread Local Area (TLA); increasing the available heap space from 1 GB to 16 GB might not even help in this situation. The Thread Local Area (TLA) is part of the Java heap but, as the name already indicates, this memory area is local to a specific thread, so there is no need to synchronize with other threads when using this memory area. For optimization purposes the TLA size is configurable using the JRockit command line option "-XXtlasize". Depending on the JRockit version and the available Java heap, the default values vary. Using Oracle EPM System (mainly 11.1.2.x) the following setting was tested successfully: -XXtlasize:min=8k,preferred=128k More information about the "-XXtlasize" parameter can be found in the JRockit documentation: http://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionXX.html

    Read the article

  • Fast Track Data Warehouse 3.0 Reference Guide

    - by jchang
    Microsoft just released the Fast Track Data Warehouse 3.0 Reference Guide. The new changes are an increased memory recommendation and a change in disks per RAID group from 2-disk RAID 1 to 4-disk RAID 10. Memory: The earlier FTDW reference architecture cited 4GB memory per core. There was no rationale behind this, but it was felt some rule was better than no rule. The new FTDW RG correctly cites the rationale that more memory helps keep hash join intermediate results and sort operations in memory. 4-Disk...(read more)

    Read the article

  • Can my computer run Ubuntu? [duplicate]

    - by Harry B
    This question already has an answer here: How do I find out which version and derivative of Ubuntu is right for my hardware in terms of minimal system requirements? 2 answers Just want to check if my computer can run Ubuntu. It is an old IBM ThinkPad, so here are the basic stats I could find: IBM 28832ZU Processor Intel(R) Celeron(R) M processor 1300MHz Processor Speed 1.27 GHz Memory (RAM) 2048 MB Operating System Microsoft Windows XP Professional Operating System Version 5.1.2600 Intel Extreme Graphics 2M And here is some info pulled from the graphics panel INTEL(R) EXTREME GRAPHICS 2 FOR MOBILE REPORT Report Date: 06/28/2013 Report Time[hr:mm:ss]: 17:53:20 Driver Version: 6.14.10.3943 Operating System: Windows NT* 5.1.2600 Service Pack 2 Default Language: English DirectX* Version: 9.0 Physical Memory: 2038 MB Min. Graphics Memory: 8 MB Max. Graphics Memory: 64 MB Graphics Memory in use: 7 MB Processor: x86 family 6 Model 9 Stepping 5 Processor Speed: 1296 MHZ Device Revision: 2 Output Devices Connected to Graphics Accelerator * Active Notebook Displays:1

    Read the article

  • command history across multiple PuTTY sessions in SunOS 5.10

    - by foampile
    I have multiple PuTTY sessions open to my SunOS 5.10 server, and I am using ksh, and SOMETIMES the command history is shared among the different sessions and SOMETIMES it is not. I cannot figure out what determines whether it is or is not shared. By shared I mean that a command run in one session will be seen as a previous command in another session. I prefer it not to be shared; is there a config setting for that? Thanks
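
    For what it's worth: ksh keeps history in the file named by HISTFILE (default ~/.sh_history), so sessions that resolve to the same file share history, and sessions that do not (started with a different environment, for instance) do not, which would explain the "sometimes". To make history never shared, a sketch for ~/.profile:

        export HISTFILE=$HOME/.sh_history.$$   # one history file per shell PID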

    Read the article

  • Syntax error on running a batch file to replace files

    - by Ralph
    I have a batch file intended to replace all instances of tracking.js within a folder and its subfolders. FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" %%~fI When this is run I get the following syntax error: C:COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\SHAPERS_COMBINED\Smarter Communications\WhatisInfluencing\script\Tracking.js The syntax of the command is incorrect. Ideas please?
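
    The echoed command shows the destination path expanding without quotes: %%~fI strips any surrounding quotes, and since the expanded path contains spaces, COPY sees several arguments and rejects the line. Quoting the destination should fix it:

        FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" "%%~fI"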

    Read the article

  • What can lead to zone memory exhaustion and how does Nginx react to it?

    - by Miles Hughes
    What is a possible scenario for exhausting the memory designated to a connection zone with the limit_conn_zone directive, and what are the implications in this case? Suppose I have this in my configuration: http { limit_conn_zone $binary_remote_addr zone=connzone:1m; ... server { limit_conn connzone 5; which, according to the documentation, allocates 16000 states for connzone on a 64-bit server. It also says that "If the storage for a zone is exhausted, the server will return error 503 (Service Temporarily Unavailable) to all further requests." Well, OK. But what does it mean in practice? When does this happen? Who receives those 503s? Does it mean that if the number of IPs somehow associated with connzone hits 16000, everyone gets a 503 and it's all over? How does Nginx decide? The documentation is weirdly vague on this. So, considering the example config, who would actually get a 503, under which circumstances, and how would things go from there? The same questions apply to request zones.

    Read the article

  • Hard drive degradation from large memory usage and paging files?

    - by Stephen R
    I've had a question(s) regarding computer degradation going through my head for a while and haven't found many good resources for researching it. 1) First off, when is the virtual RAM/paging file on a hard drive used by Windows? Is it used when the RAM is full? Or does it use the Virtual RAM/paging file as intermediate caching between the RAM and actual hard drive space all the time? 2) If I were to run many applications on my computer at the same time and have a bad habit of doing this for the entire lifetime of the computer, does it use more of the virtual RAM/paging file than if I were to have fewer programs running? Just to note, the RAM never fills up on my computer but it is used heavily. 3) By extension of question 2, if the virtual RAM/paging file is used more heavily, would that result in rapid hard drive degradation? I have seen a pattern among all of the computers that I have owned or used in the past 5 years. I am the kind of person to leave my web browser up with 40 tabs among other programs which will eat up 40% of my memory typically. Over time my computer will slow down, browsers start crashing, programs start seizing up or crashing themselves, eventually the computer becomes essentially unusable. I have been trying to rack my mind to come up with a solution other than to purchase a new PC to have it die on me in the next couple years as well. This is the only thought that has come to mind that might have a simple hardware fix...Windows ReadyBoost...Maybe? I'd like to be able to discuss this so I can learn something about all of the above. Thanks.

    Read the article

  • ack misses results (vs. grep)

    - by techpeace
    I'm sure I'm misunderstanding something about ack's file/directory ignore defaults, but perhaps somebody could shed some light on this for me: mbuck$ grep logout -R app/views/ Binary file app/views/shared/._header.html.erb.bak.swp matches Binary file app/views/shared/._header.html.erb.swp matches app/views/shared/_header.html.erb.bak: <%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %> mbuck$ ack logout app/views/ mbuck$ Whereas... mbuck$ ack -u logout app/views/ Binary file app/views/shared/._header.html.erb.bak.swp matches Binary file app/views/shared/._header.html.erb.swp matches app/views/shared/_header.html.erb.bak 98:<%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %> Simply calling ack without options can't find the result within a .bak file, but calling with the --unrestricted option can find the result. As far as I can tell, though, ack does not ignore .bak files by default.

    Read the article

  • Nginx, Apache, MySQL, Memcached on a server with 4G RAM: how to optimize to have enough memory?

    - by TomSawyer
    I have one dedicated server with Nginx as a proxy for Apache, plus Memcached and MySQL, and 4G RAM. These days the number of visitors to my site hasn't increased, but my server always gets overloaded at certain times (9AM - 15PM). RAM in use increases second by second until it is full; at that moment my server gets overloaded, and I have to kill all Apache and MySQL services and reboot to get free memory. That's the cycle. Here is my RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf config. Can someone help me optimize it? [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql skip-locking skip-networking skip-name-resolve # enable log-slow-queries log-slow-queries = /var/log/mysql-slow-queries.log long_query_time=3 max_connections=200 wait_timeout=64 connect_timeout = 10 interactive_timeout = 25 thread_stack = 512K max_allowed_packet=16M table_cache=1500 read_buffer_size=4M join_buffer_size=4M sort_buffer_size=4M read_rnd_buffer_size = 4M max_heap_table_size=256M tmp_table_size=256M thread_cache=256 query_cache_type=1 query_cache_limit=4M query_cache_size=16M thread_concurrency=8 myisam_sort_buffer_size=128M # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 [mysqldump] quick max_allowed_packet=16M [mysql] no-auto-rehash [isamchk] key_buffer=256M sort_buffer=256M read_buffer=64M write_buffer=64M [myisamchk] key_buffer=256M sort_buffer=256M read_buffer=64M write_buffer=64M [mysqlhotcopy] interactive-timeout [mysql.server] user=mysql basedir=/var/lib [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid

    Read the article

  • Killing a process that has run for too long or is using too much memory

    - by Vedant Terkar
    I am not sure whether this question belongs on Stack Overflow or here, but here we go. I am designing an online C compiler, which will compile and invoke the program if compilation succeeds. Here is the code I am using for that: $str=shell_exec("gcc path/to/file.c -o path/to/file.exe 2>&1"); if(file_exists("path/to/file.exe")){ $res=shell_exec("path/to/file.exe <inputfile 2>&1"); echo $res; } This seems to work fine with simple program files, but when file.c (the source code entered) contains an infinite loop, this script crashes the server and uses a lot of memory and time. So here are my questions: Is there any way to detect how long the process file.exe has been running? How much memory is used by that process, file.exe? Is there any way to kill the process file.exe if its memory or time usage goes beyond a certain limit? That means: if we allocate at most 2.5 sec of time and 40 MB of memory for the process file.exe, and either of those two constraints is violated, then we should display an appropriate error message to the client. Is this possible? I am using WAMP (Windows 7).
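
    For the time limit, a hedged sketch using PHP's proc_open()/proc_get_status()/proc_terminate() (the 2.5 s budget from the question; paths assumed as above). A hard memory cap is trickier from PHP alone and is not handled here; the sketch also ignores the case of the child filling its output pipe before the poll loop drains it:

        <?php
        $desc = array(1 => array('pipe', 'w'), 2 => array('pipe', 'w'));
        $proc = proc_open('path\\to\\file.exe < inputfile', $desc, $pipes);
        $start = microtime(true);
        $timedOut = false;
        while (($status = proc_get_status($proc)) && $status['running']) {
            if (microtime(true) - $start > 2.5) {   // time budget exceeded
                proc_terminate($proc);              // kill the runaway program
                $timedOut = true;
                break;
            }
            usleep(100000);                         // poll every 100 ms
        }
        echo $timedOut ? 'Error: time limit exceeded' : stream_get_contents($pipes[1]);
        proc_close($proc);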

    Read the article

  • Installing Django on Shared Server: No module named MySQLdb?

    - by Mark
    I'm getting this error Traceback (most recent call last): File "/home/<username>/flup/server/fcgi_base.py", line 558, in run File "/home/<username>/flup/server/fcgi_base.py", line 1116, in handler File "/home/<username>/python/django/django/core/handlers/wsgi.py", line 241, in __call__ response = self.get_response(request) File "/home/<username>/python/django/django/core/handlers/base.py", line 73, in get_response response = middleware_method(request) File "/home/<username>/python/django/django/contrib/sessions/middleware.py", line 10, in process_request engine = import_module(settings.SESSION_ENGINE) File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module __import__(name) File "/home/<username>/python/django/django/contrib/sessions/backends/db.py", line 2, in ? from django.contrib.sessions.models import Session File "/home/<username>/python/django/django/contrib/sessions/models.py", line 4, in ? from django.db import models File "/home/<username>/python/django/django/db/__init__.py", line 41, in ? backend = load_backend(settings.DATABASE_ENGINE) File "/home/<username>/python/django/django/db/__init__.py", line 17, in load_backend return import_module('.base', 'django.db.backends.%s' % backend_name) File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module __import__(name) File "/home/<username>/python/django/django/db/backends/mysql/base.py", line 13, in ? raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb when I try to run this script on my shared server #!/usr/bin/python import sys, os sys.path.insert(0, "/home/<username>/python/django") sys.path.insert(0, "/home/<username>/python/django/www") # projects directory os.chdir("/home/<username>/python/django/www/<project>") os.environ['DJANGO_SETTINGS_MODULE'] = "<project>.settings" from django.core.servers.fastcgi import runfastcgi runfastcgi(method="threaded", daemonize="false") But, my web host just installed MySQLdb for me a few hours ago. When I run python from the shell I can import MySQLdb just fine. Why would this script report that it can't find it?
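
    A hedged guess: the FastCGI process may be running a different Python (or with a different sys.path) than the login shell where the import works, since shared hosts often install user modules under a per-user site-packages directory. Running python -c "import MySQLdb; print MySQLdb.__file__" in the shell reveals where it actually lives; adding that directory at the top of the dispatch script often fixes it (the path below is hypothetical):

        sys.path.insert(0, "/home/<username>/lib/python/site-packages")  # hypothetical path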

    Read the article

  • How to solve "java.io.IOException: error=12, Cannot allocate memory" calling Runtime#exec()?

    - by Andrea Francia
    On my system I can't run a simple Java application that starts a process. I don't know how to solve this; could you give me some hints? The program is: [root@newton sisma-acquirer]# cat prova.java import java.io.IOException; public class prova { public static void main(String[] args) throws IOException { Runtime.getRuntime().exec("ls"); } } The result is: [root@newton sisma-acquirer]# javac prova.java && java -cp . prova Exception in thread "main" java.io.IOException: Cannot run program "ls": java.io.IOException: error=12, Cannot allocate memory at java.lang.ProcessBuilder.start(ProcessBuilder.java:474) at java.lang.Runtime.exec(Runtime.java:610) at java.lang.Runtime.exec(Runtime.java:448) at java.lang.Runtime.exec(Runtime.java:345) at prova.main(prova.java:6) Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory at java.lang.UNIXProcess.<init>(UNIXProcess.java:164) at java.lang.ProcessImpl.start(ProcessImpl.java:81) at java.lang.ProcessBuilder.start(ProcessBuilder.java:467) ... 4 more Configuration of the system: [root@newton sisma-acquirer]# java -version java version "1.6.0_0" OpenJDK Runtime Environment (IcedTea6 1.5) (fedora-18.b16.fc10-i386) OpenJDK Client VM (build 14.0-b15, mixed mode) [root@newton sisma-acquirer]# cat /etc/fedora-release Fedora release 10 (Cambridge) EDIT: Solution. This solves my problem, though I don't know exactly why: echo 0 > /proc/sys/vm/overcommit_memory Up-votes for whoever is able to explain :) Additional information, top output: top - 13:35:38 up 40 min, 2 users, load average: 0.43, 0.19, 0.12 Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie Cpu(s): 1.5%us, 0.5%sy, 0.0%ni, 94.8%id, 3.2%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1033456k total, 587672k used, 445784k free, 51672k buffers Swap: 2031608k total, 0k used, 2031608k free, 188108k cached Additional information, free output: [root@newton sisma-acquirer]# free total used free shared buffers cached Mem: 1033456 588548 444908 0 51704 188292 -/+ buffers/cache: 348552 684904 Swap: 2031608 0 2031608

    Read the article

  • What is the correct way to synchronize a shared, static object in Java?

    - by johnrock
    This is a question concerning the proper way to synchronize a shared object in Java. One caveat is that the object I want to share must be accessed from static methods. My question is: if I synchronize on a static field, does that lock the class the field belongs to, similar to the way a synchronized static method would? Or will this only lock the field itself? In my specific example I am asking: will calling PayloadService.getPayload() or PayloadService.setPayload() lock PayloadService.payload? Or will it lock the entire PayloadService class? public class PayloadService extends Service { private static PayloadDTO payload = new PayloadDTO(); public static void setPayload(PayloadDTO payload){ synchronized(PayloadService.payload){ PayloadService.payload = payload; } } public static PayloadDTO getPayload() { synchronized(PayloadService.payload){ return PayloadService.payload ; } } ... Is this a correct/acceptable approach? In my example the PayloadService is a separate thread, updating the payload object at regular intervals; other threads need to call PayloadService.getPayload() at random intervals to get the latest data, and I need to make sure that they don't block the PayloadService from carrying out its timer task.
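
    To the direct question: synchronized(PayloadService.payload) locks the object the field currently refers to, not the PayloadService class. And because setPayload() reassigns that very field, two threads can end up synchronizing on different objects, so the code above is not safe as written. The usual sketch is a dedicated, never-reassigned lock object:

        public class PayloadService extends Service {
            private static final Object LOCK = new Object();   // never reassigned
            private static PayloadDTO payload = new PayloadDTO();

            public static void setPayload(PayloadDTO p) {
                synchronized (LOCK) { payload = p; }
            }

            public static PayloadDTO getPayload() {
                synchronized (LOCK) { return payload; }
            }
        }

    (Declaring payload volatile, or holding it in an AtomicReference, would be an alternative for this simple get/set pair.)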

    Read the article

  • Why is my UITableView with style UITableViewStyleGrouped consuming memory?

    - by prathumca
    Hello everyone, Currently in my app I'm using a UITableView with style UITableViewStyleGrouped, as shown below. CGRect imgFrame = CGRectMake(0, 0, 320, 650); UITableView *myTable = [[UITableView alloc] initWithFrame:imgFrame style:UITableViewStyleGrouped]; myTable.dataSource = self; myTable.delegate = self; //make the current object the event handler for view [self.view addSubview:myTable]; [myTable release]; The data is stored in an array "dataArray". dataArray is a collection of arrays, where each array represents a section. Currently I have only one section, with 100 records. When I installed my app onto my iPhone, I observed that this UITableView consumes 20 MB of iPhone memory. If I change the table view style to "UITableViewStylePlain", it consumes only 4 MB of memory. I'm trying to figure out where the exact problem is, but can't. What is wrong with "UITableViewStyleGrouped"? Regards, prathumca.

    Read the article

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I have lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, of which any one can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just one process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI or FastCGI bound, right? I understand memcached could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix; it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (and the occasional hit to the SQL server backend). Although it is fast (for PHP, anyway), Apache has real problems when 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).

    Read the article
