Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • Software licensing template that allows restricting usage to certain industries/uses of the software/source

    - by BSara
    *Why this question is not a duplicate of the questions specified as such: I did not ask whether there is a license that restricts specific uses, and I did not ask whether I could rewrite every line of any open source project. I asked very specifically: "Does there exist X? If not, can I Y with Z?". As far as I can tell, the two questions marked as duplicates do not answer my specific question. Please remove the duplicate status placed on this question.

    I'm developing some software that I would like to be "semi" open source. I would like to allow anyone to use my software/source unless they are using it for certain purposes. For example, I don't want to allow usage of the software/source if it is being used to create, distribute, view or otherwise support pornography, illegal activities, etc. I'm no lawyer and couldn't ever hope to write a license myself, nor do I have the time to figure out how best to do this. My question is this: Does there exist a freely available license, or a template for a license, that I can use to license my software under the conditions explained above, just as one can use the Creative Commons licenses? If not, am I allowed to simply alter one of the Creative Commons licenses to meet my needs?

    Read the article

  • XNA: after a mouse click, CPU usage goes to 100%

    - by kosnkov
    Hi, I have the following code, and simply clicking on the blue window is enough to send the CPU to 100% for at least a minute, even on my four-core i7. The same thing happens even with an empty project!

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            private Texture2D cursorTex;
            private Vector2 cursorPos;
            GraphicsDevice device;
            float xPosition;
            float yPosition;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void Initialize()
            {
                Viewport vp = GraphicsDevice.Viewport;
                xPosition = vp.X + (vp.Width / 2);
                yPosition = vp.Y + (vp.Height / 2);
                device = graphics.GraphicsDevice;
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                cursorTex = Content.Load<Texture2D>("strzalka");
            }

            protected override void UnloadContent()
            {
                // TODO: Unload any non ContentManager content here
            }

            protected override void Update(GameTime gameTime)
            {
                // Allows the game to exit
                if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                    this.Exit();
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(cursorTex, cursorPos, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }
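    A minimal first check (an assumption about the cause, not a confirmed fix): the two properties that govern how hard the XNA loop runs are Game.IsFixedTimeStep and GraphicsDeviceManager.SynchronizeWithVerticalRetrace, both of which default to true in XNA 4. A sketch of pinning them explicitly in the constructor:

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";

            // Run Update on a fixed tick instead of as fast as possible...
            IsFixedTimeStep = true;
            // ...and make Draw wait for the monitor's vertical retrace.
            graphics.SynchronizeWithVerticalRetrace = true;
        }

    If the spikes persist with both enabled, profiling the message pump itself would be the next step.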

    Read the article

  • Best Usage of Multiple Computers For a Developer

    - by whaley
    I have two MacBook Pros with comparable hardware - one 17" and one 15". The 17" has a slightly faster CPU clock speed, but beyond that the differences are completely negligible. I tried a setup a while back where the 17" was hooked up to an external monitor in the middle of my desk, with the 15" immediately to its right, using teleport to control the 15" from the 17". All development, terminal usage, etc. was done on the 17", and the 15" was used primarily for email / IM / IRC... anything secondary to what I was working on. I have a MobileMe account, so preferences were synced, but otherwise I didn't really use anything else to keep the computers in sync (I use dropbox/git, but probably not optimally). For reasons I can't put my finger on, this setup never felt quite right. A few things irked me: the 15" was way under-utilized while the 17" was over-utilized, and having two laptops plus a 21" monitor on one desk took up a lot of desk space and gave me too much to look at. I reverted to just the 17" and the external monitor, keeping the 15" around the house (and using it very sparingly). For those of you using multiple laptops (or just multiple machines, for that matter), I'd like to see the setups that give you optimal productivity with two or more machines, and why. I'd like to give this one more shot, but with a different approach than my previous one - which was using the 15" as a machine for secondary things (communication, reading documentation, etc.).

    Read the article

  • Identify the high-CPU-consuming thread in a Java app

    - by Vincent Ma
    The following Java code emulates a busy thread and an idle thread, and starts both:

        import java.util.concurrent.*;

        public class ThreadTest {
            public static void main(String[] args) {
                new Thread(new Idle(), "Idle").start();
                new Thread(new Busy(), "Busy").start();
            }
        }

        class Idle implements Runnable {
            @Override
            public void run() {
                try {
                    TimeUnit.HOURS.sleep(1);
                } catch (InterruptedException e) {
                }
            }
        }

        class Busy implements Runnable {
            @Override
            public void run() {
                while (true) {
                    "Test".matches("T.*");
                }
            }
        }

    Using Process Explorer, find the busy Java process and the ID of the thread that is consuming the CPU, as shown in the following screenshot. Converting thread ID 4044 to hexadecimal gives 0xfcc. Then use VisualVM to take a thread dump and locate that thread; see the following screenshot. On Linux you can use top -H to get per-thread information. That's it! Any questions, let me know. Thanks

    Read the article

  • Where is my RAM?

    - by gsedej
    I have 2GB of RAM installed on my machine running Ubuntu 12.04. After some time of use, much of it shows as used, and it is not freed even after I close all my programs. I included two screenshots: the first from the GNOME System Monitor (all processes) and the second from htop (run with sudo), both sorted by memory usage. Both show that the running apps cannot possibly account for 1GB of memory between them: the seven biggest processes sum to about 250MB, and the others are much smaller (all together they can't add up to even another 100MB). The 300MB of cache (yellow ||| in htop) is not included in the 1GB used, and 260MB is already in swap (which actually makes about 1.3GB of used memory). If I start Firefox (or Chrome) with many tabs, it has only 1GB available, not the potential 1.5GB (assuming 0.5GB is for the system). When I need more RAM, swapping happens. So where is my RAM? Which program is using it? How can I free it so it is available for e.g. Firefox? (Screenshots: GNOME System Monitor, htop.)

    Read the article

  • Why is my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 as a fresh install over what was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After running apt-get purge, running apt-get autoremove and emptying the Trash, I still have this problem, as shown by this screenshot of GParted: the disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and am still not sure what is happening. One hypothesis is that when installing Ubuntu 12.04 I didn't configure my disks correctly, and /dev/sda6 is not properly mounted as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you, everybody): My home directory is not encrypted. The Backup utility (Déjà Dup) is not set for automatic backups (I do them myself, manually). After I mount /dev/sda6, the command df -h gives:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda7       244G  221G   12G  96% /
        udev            3,9G  4,0K  3,9G   1% /dev
        tmpfs           1,6G  904K  1,6G   1% /run
        none            5,0M     0  5,0M   0% /run/lock
        none            3,9G  164K  3,9G   1% /run/shm
        /dev/sda6       653G  189G  433G  31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I had when I was running Ubuntu 11.10.

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it.

    The Dynamic Monitoring Service is a facility in FMW (JRF, to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server, you can use the URL http://host:port/dms/Spy and log in with admin credentials.

    DMS is essentially always running and collecting this information in the runtime, and to protect against loss of this data it also runs automatic backups - by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you can modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.

    Read the article

  • Updating to linux-image-3.2.0-26-generic occupies all disk space

    - by user42228
    I just ran a normal update through the Update Manager that wanted to update me to linux-image-3.2.0-26-generic. The installation paused halfway with a message that root was full. I checked the disk usage and noticed that one of the newly installed files, /lib/modules/3.2.0-26-generic/modules.ccwmap, took up 63GB(!). Compared to the previous version, /lib/modules/3.2.0-24-generic/modules.ccwmap, which took up only 4KB, it seems like something went awfully wrong! As mentioned, the update paused when I ran out of disk space, and there is no option in the update dialog to cancel or roll back the update. Is it asking for trouble to kill the Update Manager? Any ideas as to what went wrong, and what I can do to remedy it? I can free only a minimal amount of space on that partition (without deleting the file mentioned above). If it is any help, the update paused after this:

        Examining /etc/kernel/postinst.d.
        run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-26-generic /boot/vmlinuz-3.2.0-26-generic

    Read the article

  • DMV {dm_os_ring_buffers} - Queries to help pinpoint current Issues / usual usage patterns

    - by NeilHambly
    I've been running some queries (below) to help me identify time-sensitive performance issues around memory/CPU. I didn't want to load additional overhead onto the system (unless absolutely necessary) using traces or Profiler - naturally we have various other methods for this: Perfmon counters, DBCC, DMVs, etc. One quick approach I like is to run a few DMV queries (normally back in seconds) against the DMV dm_os_ring_buffers, to find those RECENT specific time periods when the system changed substantially in some way.

    This one helps me identify when I'm experiencing timeout errors (1222); modify the highlighted code to look for other errors:

        DECLARE @ts_now BIGINT, @dt_max BIGINT, @dt_min BIGINT

        SELECT @ts_now = cpu_ticks / CONVERT(FLOAT, cpu_ticks_in_ms)
        FROM sys.dm_os_sys_info

        SELECT @dt_max = MAX(timestamp), @dt_min = MIN(timestamp)
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'

        SELECT record_id,
               DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
               y.Error,
               UserDefined,
               b.description AS NormalizedText
        FROM (
              SELECT record.value('(./Record/@id)[1]', 'int')                   AS record_id,
                     record.value('(./Record/Exception/Error)[1]', 'int')       AS Error,
                     record.value('(./Record/Exception/UserDefined)[1]', 'int') AS UserDefined,
                     TIMESTAMP
              FROM (
                    SELECT TIMESTAMP, CONVERT(XML, record) AS record
                    FROM sys.dm_os_ring_buffers
                    WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'
                    AND record LIKE '% %'
                   ) AS x
             ) AS y
        INNER JOIN sys.sysmessages b ON y.Error = b.error
        WHERE b.msglangid = 1033 AND y.Error = 1222
        ORDER BY record_id DESC

    Sample output:

        record_id  EventTime         Error  UserDefined  NormalizedText
        15199195   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199194   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199193   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199192   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199191   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
    This one helps me identify when I have unusually high processor utilization (> 50%) or page faults:

        SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS SystemIdle,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQLProcessUtilization,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]', 'bigint')    AS UserModeTime,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]', 'bigint')  AS KernelModeTime,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/PageFaults)[1]', 'bigint')      AS PageFaults,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/WorkingSetDelta)[1]', 'bigint') AS WorkingSetDelta,
               record.value('(./Record/SchedulerMonitorEvent/SystemHealth/MemoryUtilization)[1]', 'int')  AS MemoryUtilization,
               TIMESTAMP
        FROM (
              SELECT TIMESTAMP, CONVERT(XML, record) AS record
              FROM sys.dm_os_ring_buffers
              WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
              AND record LIKE '% %'
             ) AS x

    Example, showing entries with > 50% SQL CPU:

        record_id  SystemIdle  SQLProcessUtilization  UserModeTime  KernelModeTime  PageFaults  WorkingSetDelta  MemoryUtilization  TIMESTAMP
        111916     66          29                     36718750      1374843750      21333       -40960           100                7991061289
        111917     54          41                     50156250      1954062500      26914       -28672           100                7991121290
        111918     57          39                     42968750      1838437500      30096       20480            100                7991181290
        111919     41          53                     43906250      2530156250      22088       -4096            100                7991241307
        111920     48          45                     40937500      2124062500      26395       8192             100                7991301310
        111921     52          43                     35625000      2052812500      21996       155648           100                7991361311
        111922     40          55                     36875000      2637343750      33355       -262144          100                7991421311
        111923     36          58                     44843750      2786562500      47019       28672            100                7991481311
        111924     31          64                     53437500      3046562500      31027       61440            100                7991541314
        111925     36          57                     43906250      2711250000      37074       -8192            100                7991601317
        111926     52          43                     43437500      2060156250      29176       20480            100                7991661318
        111927     71          24                     33750000      1141250000      14478       16384            100                7991721320
        111928     71          23                     34531250      1116250000      12711       -20480           100                7991781320
        111929     53          36                     46562500      1714062500      26684       200704           100                7991841323

    Finally, one to provide some understanding of the level of memory state changes that are occurring:

        SELECT record.value('(./Record/@id)[1]', 'int')                                    AS 'record_id',
               record.value('(./Record/ResourceMonitor/Notification)[1]', 'VARCHAR(100)')  AS 'ReservedMemory',
               record.value('(./Record/ResourceMonitor/Indicators)[1]', 'int')             AS 'Indicators',
               record.value('(./Record/ResourceMonitor/Effect/@state)[1]', 'VARCHAR(100)') + ' - ' +
               record.value('(./Record/ResourceMonitor/Effect/@reversed)[1]', 'VARCHAR(100)') + ' - ' +
               record.value('(./Record/ResourceMonitor/Effect)[1]', 'VARCHAR(100)')        AS 'APPLY-HIGHPM',
               record.value('(./Record/ResourceMonitor/Effect/@state)[2]', 'VARCHAR(100)') + ' - ' +
               record.value('(./Record/ResourceMonitor/Effect/@reversed)[2]', 'VARCHAR(100)') + ' - ' +
               record.value('(./Record/ResourceMonitor/Effect)[2]', 'VARCHAR(100)')        AS 'APPLY-HIGHPM',
               record.value('(./Record/ResourceMonitor/Effect/@state)[3]', 'VARCHAR(100)') + ' - ' +
               record.value('(./Record/ResourceMonitor/Effect/@reversed)[3]', 'VARCHAR(100)') + ' - ' +
               record.value('(./Record/ResourceMonitor/Effect)[3]', 'VARCHAR(100)')        AS 'REVERT_HIGHPM',
               record.value('(./Record/MemoryNode/ReservedMemory)[1]', 'int')              AS 'ReservedMemory',
               record.value('(./Record/MemoryNode/CommittedMemory)[1]', 'int')             AS 'CommittedMemory',
               record.value('(./Record/MemoryNode/SharedMemory)[1]', 'int')                AS 'SharedMemory',
               record.value('(./Record/MemoryNode/AWEMemory)[1]', 'int')                   AS 'AWEMemory',
               record.value('(./Record/MemoryNode/SinglePagesMemory)[1]', 'int')           AS 'SinglePagesMemory',
               record.value('(./Record/MemoryNode/CachedMemory)[1]', 'int')                AS 'CachedMemory',
               record.value('(./Record/MemoryRecord/MemoryUtilization)[1]', 'int')         AS 'MemoryUtilization',
               record.value('(./Record/MemoryRecord/TotalPhysicalMemory)[1]', 'int')       AS 'TotalPhysicalMemory',
               record.value('(./Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'int')   AS 'AvailablePhysicalMemory',
               record.value('(./Record/MemoryRecord/TotalPageFile)[1]', 'int')             AS 'TotalPageFile',
               record.value('(./Record/MemoryRecord/AvailablePageFile)[1]', 'int')         AS 'AvailablePageFile',
               record.value('(./Record/MemoryRecord/TotalVirtualAddressSpace)[1]', 'bigint')     AS 'TotalVirtualAddressSpace',
               record.value('(./Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint') AS 'AvailableVirtualAddressSpace',
               record.value('(./Record/MemoryRecord/AvailableExtendedVirtualAddressSpace)[1]', 'bigint') AS 'AvailableExtendedVirtualAddressSpace',
               TIMESTAMP
        FROM (
              SELECT TIMESTAMP, CONVERT(XML, record) AS record
              FROM sys.dm_os_ring_buffers
              WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
              AND record LIKE '% %'
             ) AS x

    Read the article

  • obiee memory usage

    - by user554629
    Heap memory is a frequent customer topic. Here's the quick refresher, oriented towards AIX, but the principles apply to other unix implementations.

    1. 32-bit processes have a maximum addressability of 4GB, and a usable application heap size of 2-3GB. On AIX it is controlled by an environment variable:

        export LDR_CNTRL=....=MAXDATA=0x080000000   # 2GB (the leading zero is deliberate, not required)

    1a. It is possible to get a 3.25GB heap size for a 32-bit process using @DSA (Discontiguous Segment Allocation):

        export LDR_CNTRL=MAXDATA=0xd0000000@DSA   # 3.25GB, 32-bit only

    One side effect of using AIX segments "c" and "d" is that shared libraries will be loaded privately, and not shared. If you need the additional heap space, this is worth the trade-off. This option is frequently used for 32-bit Java.

    1b. 64-bit processes have no need for the @DSA option.

    2. 64-bit processes can double the 32-bit heap size to 4GB using:

        export LDR_CNTRL=....=MAXDATA=0x100000000   # 1 with 8 zeros

    2a. But this setting would place the same memory limitations on obiee as a 32-bit process.

    2b. The major benefit of 64-bit is to break the binds of 32-bit addressing. At a minimum, use 8GB:

        export LDR_CNTRL=....=MAXDATA=0x200000000   # 2 with 8 zeros

    2c. Many large customers provide extra safety to their servers by using 16GB:

        export LDR_CNTRL=....=MAXDATA=0x400000000   # 4 with 8 zeros

    There is no performance penalty for providing virtual memory allocations larger than required by the application. If the server only uses 2GB of space in 64-bit mode, specifying 16GB just provides an upper-bound cushion. When an unexpected user query causes a sudden memory surge, the extra memory keeps the server running.

    3. The next benefit of 64-bit is that you can provide huge thread stack sizes for strange queries that might otherwise crash the server. nqsserver uses fast recursive algorithms to traverse complicated control structures, which means lots of thread space to hold the stack frames.

    3a. Stack frames mostly contain register values, and 64-bit registers are twice as large as 32-bit ones. At a minimum you should quadruple the size of the server stack threads in NQSConfig.INI when migrating from 32- to 64-bit, to prevent a rogue query from crashing the server. Allocate more than is normally necessary, for safety.

    3b. There is no penalty for allocating more stack size than you need: it is just virtual memory; no real resources are consumed until the extra space is needed.

    3c. Increasing thread stack sizes may require the process heap size (MAXDATA) to be increased. Heap space is used for dynamic memory requests and for thread stacks. There is no performance penalty to running with large heap and thread stack sizes. In a 32-bit world, this safety would require careful planning to avoid exceeding 2GB of usable storage.

    3d. Increasing the number of threads may also require additional heap storage. Most thread stacks on obiee are allocated when the server is started, and the real memory usage increases as the threads run work.

    Does 2.8GB sound like a lot of memory for an AIX application server? I guess it is, compared to what you are accustomed to seeing from "grandpa's applications". But:

    - One of the primary design goals of obiee is to trade memory for services (db, query caches, etc.).
    - 2.8GB is still well under the 4GB heap size allocated with MAXDATA=0x100000000.
    - A 2.8GB process size is possible even in 32-bit Windows applications.
    - It is not unusual to receive a sudden request for 30MB of contiguous storage on obiee. This is not a memory leak; eventually the nqsserver storage will stabilize, but it may take days to do so.

    vmstat is the tool of choice to observe memory usage. On AIX, vmstat will show something that may be startling to some people: available free memory (the 2nd column) is always trending toward zero - no available free memory. Some customers have concluded that "nearly zero memory free" means it is time to upgrade the server with more real memory. After the upgrade, the server again shows very little free memory available. Should you be concerned about this? Many customers are!! Here is what is happening:

    - AIX filesystems are built on a paging model. If you read/write a filesystem block, it is paged into memory (no read/write system calls).
    - This filesystem "page" has its own backing store on disk: the original filesystem block. When the system needs the real memory page holding the file block, there is no need to "page out"; the page can be stolen immediately, because the original is still on disk in the filesystem.
    - The filesystem pages tend to collect: every filesystem block ever seen since system boot is available in memory. If another application needs a file block, it is retrieved with no physical I/O.

    What happens if the system does need the memory - to satisfy a 30MB heap request by nqsserver, for example? Since the filesystem blocks have their own backing store (not on a paging device), the kernel can just steal any filesystem block, on a least-recently-used basis, to satisfy a new real memory request for "computation pages". No cause for alarm. vmstat is accurately displaying whether all filesystem blocks have been touched and now reside in memory.

    Back to nqsserver: when should you be worried about its memory footprint? Answer: almost never. Stop monitoring it ... stop fussing over it ... stop trying to optimize it. This is a production application, and nqsserver uses the memory it requires to accomplish the job, based on demand.

    C'mon ... never worry? I'm from New York ... worry is what we do best. OK, here is the metric you should be watching with vmstat: are you paging? There are several relevant columns of vmstat output:

        bash-2.04$ vmstat 3 3

        System configuration: lcpu=4 mem=4096MB

        kthr    memory              page              faults        cpu
        ----- ------------ ------------------------ ------------ -----------
         r  b    avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
         0  0 208492  2600   0   0   0   0    0   0  13   45  73  0  0 99  0
         0  0 208492  2600   0   0   0   0    0   0   9   12  77  0  0 99  0
         0  0 208492  2600   0   0   0   0    0   0   9   40  86  0  0 99  0

    - fre is the free-memory indicator that trends toward zero (avm is active virtual memory, not "available").
    - re is "re-page": the kernel steals a real memory page from one process, then immediately repages it back to the original process.
    - pi is "page in": a process memory page previously paged out is paged back in because the process needs it.
    - po is "page out": a process memory block was paged out because the memory was needed by some other process.

    Light paging activity (re, pi, po) is no cause for worry: processes get started, need some memory, go away. Sustained paging activity is cause for concern; obiee users are having a terrible day if these counters are always changing.

    Hang on ... if nqsserver needs that memory and I reduce MAXDATA to keep the process under control, won't the nqsserver process crash when the memory is needed? Yes, it will. It means that nqsserver is configured to require too much memory, and there are lots of options to reduce the real memory requirement:

    - the number of threads
    - the size of the query cache
    - the size of the sort

    But I need nqsserver to keep running. Then real memory is over-committed. Many things can cause this:

    - Running all application processes on a single server: DB server, web servers, WebLogic/WebSphere, sawserver, nqsserver, etc. You could move some of those to another host machine and communicate over the network; the need for real memory doesn't go away, it's just distributed to other host machines.
    - The AIX LPAR is configured with too little memory. The AIX admin needs to provide more real memory to the LPAR running obiee.
    - More memory for this LPAR affects other partitions. Then it's time to visit your friendly IBM rep and buy more memory.

    Read the article

  • Fan always on and overheating problems on HP G62

    - by Dave
    I have an HP G62 with an i5 processor, 4GB RAM and an ATI HD 5470. I started using Ubuntu 4 months ago, first 11.10 and then 12.04. Everything worked perfectly until a few weeks ago, when my CPU fan went "crazy" - by which I mean always on, noisy, and with overheating problems; if I put a glass of water on the machine, it would boil within minutes (and I mean it). Could there be a bug in the latest kernel or in some other official Ubuntu update? By the way, the CPU fan is as clean as a newborn baby. I used Jupiter and all that stuff (lm-sensors, psensor...) - same problem. Don't ask me to install the ATI drivers, because they work like sh#t (the interface is laggy, as the HD 5470 isn't well supported on Ubuntu) and I can't even open ATI Catalyst (I get an error when I try); more than this, I have always used the open-source drivers and, as I said, they worked perfectly (3D, effects). Right now I'm back on Windows and everything works perfectly, but my main problem is that I got used to Ubuntu and it feels weird to use Windows again. I want my nice and almost perfect Ubuntu back. Thanks, Dave

    Read the article

  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts, and I'd like to look at applying that concept in software. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations utilizing the shortcut vs. traditional CPU multiplication. I'm curious as to whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept: by utilizing a simulation of Japanese multiplication, is a program actually capable of improving calculation speed, or am I doomed from the start? The answer to this question isn't required to determine whether the experiment will succeed, but rather whether it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU's arithmetic unit can. I think this would be a very interesting finding, if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result via fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
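    Not from the original question, but a minimal sketch of the kind of benchmark harness this calls for. A faithful Japanese-multiplication simulation would form digit-wise partial products; the addition-only routine below is the simplest stand-in, and the class and method names are placeholders:

        using System;
        using System.Diagnostics;
        using System.Numerics;

        class MultiplyBenchmark
        {
            // The "shortcut" under test: multiplication by repeated addition.
            static BigInteger MultiplyByAddition(BigInteger a, int b)
            {
                BigInteger result = BigInteger.Zero;
                for (int i = 0; i < b; i++)
                    result += a;   // additions only, no '*' anywhere
                return result;
            }

            static void Main()
            {
                BigInteger a = BigInteger.Parse("123456789012345678901234567890");
                const int b = 100000;

                Stopwatch sw = Stopwatch.StartNew();
                BigInteger viaAddition = MultiplyByAddition(a, b);
                sw.Stop();
                Console.WriteLine("addition-only: " + sw.Elapsed);

                sw.Restart();
                BigInteger viaMultiply = a * b;   // the ALU path under comparison
                sw.Stop();
                Console.WriteLine("built-in *:    " + sw.Elapsed);

                Console.WriteLine("results match: " + (viaAddition == viaMultiply));
            }
        }

    On commodity hardware the built-in multiply should win by orders of magnitude here, which is precisely the doubt the question raises about the shortcut.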

    Read the article

  • Can certain system-hungry modules be disabled in Ubuntu?

    - by Ole Thomsen Buus
    Hi, let me add some context: I am currently using Ubuntu 9.10 64-bit (Desktop) on a relatively powerful stationary PC (Intel Core i7 920, 12GB RAM). My purpose is high-speed imaging with a Point Grey Grasshopper machine-vision camera (for research, a PhD project). This camera is capable of 200 fps at full VGA (640x480) resolution. The camera is connected via FireWire (1394b), and the drivers and software from Point Grey work great. I have developed a console C++ application that can grab a certain number of frames into preallocated memory and afterwards save the grabbed frames to the hard drive. Currently it works fine, but sometimes I observe a few frame drops (1-3). When this happens I reset the experiment and repeat the recording, and usually I am lucky the second time, with no frame drops (the camera driver has an internal frame counter that I use). Question: I usually go to tty1 and use "sudo service gdm stop" to disable the graphical frontend. It seems to release some memory, though that is not my main concern; my concern is CPU resources. Are there other system-hungry modules that can be temporarily disabled so that the CPU is less busy on Ubuntu 9.10? At some point in the future I will update to 10.10. Should I perhaps opt for the server edition instead? Thanks.

    Read the article

  • Oracle Virtual Server OEL vm fails to start - kernel panic on cpu identify

    - by Towndrunk
    I am in the process of following a guide to set up various Oracle VM templates. So far I have installed OVS 2.2 and got the OVM Manager working, imported the template for OEL5U5 and created a VM from it. The problem comes when starting that VM. The log in the OVMM console shows the following:

        Update VM Status - Running
        Configure CPU Cap
        Set CPU Cap: failed:<Exception: failed:<Exception: ['xm', 'sched-credit', '-d', '32_EM11g_OVM', '-c', '0']
        => Error: Domain '32_EM11g_OVM' does not exist.
        StackTrace:
          File "/opt/ovs-agent-2.3/OVSXXenVMConfig.py", line 2531, in xen_set_cpu_cap
            run_cmd(args=['xm',
          File "/opt/ovs-agent-2.3/OVSCommons.py", line 92, in run_cmd
            raise Exception('%s => %s' % (args, err))

    The xend.log shows:

        [2012-11-12 16:42:01 7581] DEBUG (DevController:139) Waiting for devices vtpm
        [2012-11-12 16:42:01 7581] INFO (XendDomain:1180) Domain 32_EM11g_OVM (3) unpaused.
        [2012-11-12 16:42:03 7581] WARNING (XendDomainInfo:1907) Domain has crashed: name=32_EM11g_OVM id=3.
        [2012-11-12 16:42:03 7581] ERROR (XendDomainInfo:2041) VM 32_EM11g_OVM restarting too fast (Elapsed time: 11.377262 seconds). Refusing to restart to avoid loops.
        [2012-11-12 16:42:03 7581] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=3
        [2012-11-12 16:42:12 7581] DEBUG (XendDomainInfo:2230) Destroying device model
        [2012-11-12 16:42:12 7581] INFO (image:553) 32_EM11g_OVM device model terminated

    I have set on_crash="preserve" in the vm.cfg and then run xm create -c to get the console screen while booting; this is the log of what happens:

        Started domain 32_EM11g_OVM (id=4)
        Bootdata ok (command line is ro root=LABEL=/ )
        Linux version 2.6.18-194.0.0.0.3.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Mon Mar 29 18:27:00 EDT 2010
        BIOS-provided physical RAM map:
        Xen: 0000000000000000 - 0000000180800000 (usable)
        No mptable found.
        Built 1 zonelists. Total pages: 1574912
        Kernel command line: ro root=LABEL=/
        Initializing CPU#0
        PID hash table entries: 4096 (order: 12, 32768 bytes)
        Xen reported: 1600.008 MHz processor.
        Console: colour dummy device 80x25
        Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
        Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
        Software IO TLB disabled
        Memory: 6155256k/6299648k available (2514k kernel code, 135548k reserved, 1394k data, 184k init)
        Calibrating delay using timer specific routine.. 4006.42 BogoMIPS (lpj=8012858)
        Security Framework v1.0.0 initialized
        SELinux: Initializing.
        selinux_register_security: Registering secondary module capability
        Capability LSM initialized as secondary
        Mount-cache hash table entries: 256
        CPU: L1 I Cache: 64K (64 bytes/line), D cache 16K (64 bytes/line)
        CPU: L2 Cache: 2048K (64 bytes/line)
        general protection fault: 0000 [1] SMP
        last sysfs file:
        CPU 0
        Modules linked in:
        Pid: 0, comm: swapper Not tainted 2.6.18-194.0.0.0.3.el5xen #1
        RIP: e030:[ffffffff80271280] [ffffffff80271280] identify_cpu+0x210/0x494
        RSP: e02b:ffffffff80643f70 EFLAGS: 00010212
        RAX: 0040401000810008 RBX: 0000000000000000 RCX: 00000000c001001f
        RDX: 0000000000404010 RSI: 0000000000000001 RDI: 0000000000000005
        RBP: ffffffff8063e980 R08: 0000000000000025 R09: ffff8800019d1000
        R10: 0000000000000026 R11: ffff88000102c400 R12: 0000000000000000
        R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
        FS: 0000000000000000(0000) GS:ffffffff805d2000(0000) knlGS:0000000000000000
        CS: e033 DS: 0000 ES: 0000
        Process swapper (pid: 0, threadinfo ffffffff80642000, task ffffffff804f4b80)
        Stack: 0000000000000000 ffffffff802d09bb ffffffff804f4b80 0000000000000000
               0000000021100800 0000000000000000 0000000000000000 ffffffff8064cb00
               0000000000000000 0000000000000000
        Call Trace:
        [ffffffff802d09bb] kmem_cache_zalloc+0x62/0x80
        [ffffffff8064cb00] start_kernel+0x210/0x224
        [ffffffff8064c1e5] _sinittext+0x1e5/0x1eb
        Code: 0f 30 b8 73 00 00 00 f0 0f ab 45 08 e9 f0 00 00 00 48 89 ef
        RIP [ffffffff80271280] identify_cpu+0x210/0x494
        RSP ffffffff80643f70
        Kernel panic - not syncing: Fatal exception

    Clear as mud to me. Are there any other logs that would help? I have now deployed another VM from the same template using the default VM settings, rather than adding more memory etc., and I get exactly the same error.

    Read the article

  • need assistance with my.cnf - 1500% CPU usage

    - by Alan Long
    I'm running into a few issues with our new database server. It is an HP G8 with two Intel Xeon E5-2650 processors and 32GB of RAM, dedicated as a MySQL server (5.1.69) for our intranet portal. I have been having trouble keeping this server alive: I notice high CPU usage during certain times of day (8% to 1500%+) and very low memory usage (7-15%), based on the 'top' command. When the CPU usage passes 1000%, the app usually dies. I'm trying to see what I'm doing wrong in the config file; hopefully one of the experts can chime in and let me know what they think. See below for the my.cnf file:

        [mysqld]
        default-storage-engine=InnoDB
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        #user=mysql
        large-pages
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_connections=275
        tmp_table_size=1G
        key_buffer_size=384M
        key_buffer=384M
        thread_cache_size=1024
        long_query_time=5
        low_priority_updates=1
        max_heap_table_size=1G
        myisam_sort_buffer_size=8M
        concurrent_insert=2
        table_cache=1024
        sort_buffer_size=8M
        read_buffer_size=5M
        read_rnd_buffer_size=6M
        join_buffer_size=16M
        table_definition_cache=6k
        open_files_limit=8k
        slow_query_log
        #skip-name-resolve

        # Innodb Settings
        innodb_buffer_pool_size=18G
        innodb_thread_concurrency=0
        innodb_log_file_size=1G
        innodb_log_buffer_size=16M
        innodb_flush_log_at_trx_commit=2
        innodb_lock_wait_timeout=50
        innodb_file_per_table
        #innodb_buffer_pool_instances=4
        # eliminating double buffering
        innodb_flush_method = O_DIRECT
        flush_time=86400
        innodb_additional_mem_pool_size=40M
        #innodb_io_capacity = 5000
        #innodb_read_io_threads = 64
        #innodb_write_io_threads = 64

        # increase until threads_created doesn't grow anymore
        thread_cache=1024
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=256M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 0
        wait_timeout = 1800
        connect_timeout = 10
        interactive_timeout = 60

        [mysqldump]
        max_allowed_packet=32M

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        log-slow-queries=/var/log/mysql/slow-queries.log
        long_query_time = 1
        log-queries-not-using-indexes

    We connect to one database with 75 tables; the largest table has 1,150,000 entries and the second largest has 128,036. I have also verified that our PHP queries are optimized as well as possible. For reference, the MySQLTuner output:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.69-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 420M (Tables: 75)
        [!!] Total fragmented tables: 75

        -------- Security Recommendations -------------------------------------------
        [!!] User '[email protected]' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1h 14m 50s (8M q [1K qps], 705 conn, TX: 6B, RX: 892M)
        [--] Reads / Writes: 68% / 32%
        [--] Total buffers: 19.7G global + 35.2M per thread (275 max threads)
        [!!] Maximum possible memory usage: 29.1G (93% of installed RAM)
        [OK] Slow queries: 0% (472/8M)
        [OK] Highest usage of available connections: 66% (183/275)
        [OK] Key buffer size / total MyISAM indexes: 384.0M/91.0K
        [OK] Key buffer hit rate: 100.0% (173 cached / 0 reads)
        [OK] Query cache efficiency: 96.2% (7M cached / 7M selects)
        [!!] Query cache prunes per day: 553614
        [OK] Sorts requiring temporary tables: 0% (3 temp sorts / 1K sorts)
        [!!] Temporary tables created on disk: 49% (3K on disk / 7K total)
        [OK] Thread cache hit rate: 74% (183 created / 705 connections)
        [OK] Table cache hit rate: 97% (231 open / 238 opened)
        [OK] Open file limit used: 0% (17/8K)
        [OK] Table locks acquired immediately: 100% (432K immediate / 432K locks)
        [OK] InnoDB data size / buffer pool: 420.9M/18.0G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 256M) [see warning above]

    Thanks in advance for your help!

    Read the article

  • Inverted schedctl usage in the JVM

    - by Dave
    The schedctl facility in Solaris allows a thread to request that the kernel defer involuntary preemption for a brief period. The mechanism is strictly advisory - the kernel can opt to ignore the request. Schedctl is typically used to bracket lock critical sections. That, in turn, can avoid convoying -- threads piling up on a critical section behind a preempted lock-holder -- and other lock-related performance pathologies. If you're interested, see the man pages for schedctl_start() and schedctl_stop() and the schedctl.h include file. The implementation is very efficient. schedctl_start(), which asks that preemption be deferred, simply stores into a thread-specific structure -- the schedctl block -- that the kernel maps into user-space. Similarly, schedctl_stop() clears the flag set by schedctl_start() and then checks a "preemption pending" flag in the block. Normally this will be false, but if set, schedctl_stop() will yield to politely grant the CPU to other threads. Note that you can't abuse this facility for long-term preemption avoidance, as the deferral is brief. If your thread exceeds the grace period, the kernel will preempt it and transiently degrade its effective scheduling priority. Further reading: US05937187 and various papers by Andy Tucker.

    We'll now switch topics to the implementation of the "synchronized" locking construct in the HotSpot JVM. If a lock is contended, then on multiprocessor systems we'll spin briefly to try to avoid context switching. Context switching is wasted work and inflicts various cache and TLB penalties on the threads involved. If context switching were "free" then we'd never spin to avoid switching, but that's not the case. We use an adaptive spin-then-park strategy. One potentially undesirable outcome is that we can be preempted while spinning. When our spinning thread is finally rescheduled, the lock may or may not be available. If not, we'll spin and then potentially park (block) again, thus suffering a 2nd context switch. Recall that the reason we spin is to avoid context switching.

    To avoid this scenario, I've found it useful to enable schedctl to request deferral while spinning. But while spinning I've also arranged for the code to periodically poll the "preemption pending" flag. If that's found set, we simply abandon our spinning attempt and park immediately. This avoids the double context-switch scenario above.

    One annoyance is that the schedctl blocks for the threads in a given process are tightly packed on special pages mapped from kernel space into user-land. As such, writes to the schedctl blocks can cause false sharing on other adjacent blocks. Hopefully the kernel folks will make changes to avoid this by padding and aligning the blocks to ensure that one cache line underlies at most one schedctl block at any one time.
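    A sketch of the spin-then-park shape being described. schedctl itself is a Solaris C facility, so ISchedCtl and IParkableLock below are hypothetical stand-ins, not a real API:

        // Hypothetical wrapper over schedctl_start()/schedctl_stop() and the
        // kernel-mapped "preemption pending" flag.
        interface ISchedCtl
        {
            void Start();                    // request that preemption be deferred
            void Stop();                     // end deferral; may politely yield
            bool PreemptionPending { get; }  // kernel wants the CPU back
        }

        // Hypothetical lock supporting non-blocking attempts and parking.
        interface IParkableLock
        {
            bool TryAcquire();
            void Park();
        }

        static class SpinThenPark
        {
            public static void Acquire(IParkableLock l, ISchedCtl sc, int spinLimit)
            {
                while (true)
                {
                    sc.Start();   // defer preemption while we spin
                    try
                    {
                        for (int i = 0; i < spinLimit; i++)
                        {
                            if (l.TryAcquire())
                                return;
                            if (sc.PreemptionPending)
                                break;   // abandon the spin and park immediately
                        }
                    }
                    finally
                    {
                        sc.Stop();
                    }
                    l.Park();     // block; retry the spin attempt on wakeup
                }
            }
        }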

    Read the article

  • WinAPI window taking 50% of CPU when idle

    - by henryprescott
    I'm currently working on a game that creates a window using the Windows API. However, at the moment the process takes up 50% of my CPU. All I am doing is creating the window and looping, using the code found below:

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                           LPSTR lpCmdLine, int nShowCmd)
        {
            MSG message = {0};
            WNDCLASSEX wcl = {0};
            wcl.cbSize = sizeof(wcl);
            wcl.style = CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
            wcl.lpfnWndProc = WindowProc;
            wcl.cbClsExtra = 0;
            wcl.cbWndExtra = 0;
            wcl.hInstance = hInstance;
            wcl.hIcon = LoadIcon(0, IDI_APPLICATION);
            wcl.hCursor = LoadCursor(0, IDC_ARROW);
            wcl.hbrBackground = 0;
            wcl.lpszMenuName = 0;
            wcl.lpszClassName = "GL2WindowClass";
            wcl.hIconSm = 0;

            if (!RegisterClassEx(&wcl))
                return 0;

            hWnd = CreateAppWindow(wcl, "Application");
            if (hWnd)
            {
                if (Init())
                {
                    ShowWindow(hWnd, nShowCmd);
                    UpdateWindow(hWnd);

                    while (true)
                    {
                        while (PeekMessage(&message, 0, 0, 0, PM_REMOVE))
                        {
                            if (message.message == WM_QUIT)
                                break;
                            TranslateMessage(&message);
                            DispatchMessage(&message);
                        }
                        if (message.message == WM_QUIT)
                            break;

                        if (hasFocus)
                        {
                            // While focused, this branch runs flat out: PeekMessage
                            // does not block, so the loop never sleeps between frames.
                            elapsedTime = GetElapsedTimeInSeconds();
                            lastEarth += elapsedTime;
                            lastUpdate += elapsedTime;
                            lastFrame += elapsedTime;
                            lastParticle += elapsedTime;

                            if (lastUpdate >= (1.0f / 100.0f))
                            {
                                Update(lastUpdate);
                                lastUpdate = 0;
                            }
                            if (lastFrame >= (1.0f / 60.0f))
                            {
                                UpdateFrameRate(lastFrame);
                                lastFrame = 0;
                                Render();
                                SwapBuffers(hDC);
                            }
                            if (lastEarth >= (1.0f / 10.0f))
                            {
                                UpdateEarthAnimation();
                                lastEarth = 0;
                            }
                            if (lastParticle >= (1.0f / 30.0f))
                            {
                                particleManager->rightBooster->Update();
                                particleManager->rightBoosterSmoke->Update();
                                particleManager->leftBooster->Update();
                                particleManager->leftBoosterSmoke->Update();
                                particleManager->breakUp->Update();
                                lastParticle = 0;
                            }
                        }
                        else
                        {
                            // Only blocks while the window is unfocused.
                            WaitMessage();
                        }
                    }
                }
                Cleanup();
                UnregisterClass(wcl.lpszClassName, hInstance);
            }
            return static_cast<int>(message.wParam);
        }

    So even when I am not drawing anything, while the window has focus the process still takes 50%. I don't understand how this takes up so many system resources. Am I doing something wrong? Any help would be much appreciated, thank you!

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a software RAID 5 array of 5x2TB drives. I encrypted the RAID with cryptsetup and put an ext4 partition on top. In the beginning, opening and mounting the RAID took less than 10 seconds; for a few weeks now, mounting alone has taken 1:30 minutes, with the CPU staying around 93% the whole time. The output of "time sudo mount /dev/mapper/8000 /media/8000" is:

        real    1m31.952s
        user    0m0.008s
        sys     1m25.229s

    At the same time, only one line is added to /var/log/syslog:

        kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)

    My Ubuntu version is 12.04.1 LTS and no updates are pending. I checked the partition with fsck, and it says all is OK. The "cryptsetup luksOpen" command takes only a few seconds. I also tried changing the RAID bitmap (as suggested in some forum), but it did not change the behaviour:

        sudo mdadm --grow /dev/md0 -b internal
        sudo mdadm --grow /dev/md0 -b none

    I had the idea that the hardware might be slow, but a read test with "sudo hdparm -t /dev/md0" gives values between 62 and 159 MB/sec:

        Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec
        Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec
        Timing buffered disk reads: 190 MB in 3.03 seconds =  62.65 MB/sec
        Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec

    Although I think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test reading from the mapped (decrypted) device shows similar behaviour, although it is of course much slower. "sudo hdparm -t /dev/mapper/8000":

        Timing buffered disk reads:  56 MB in 3.02 seconds = 18.54 MB/sec
        Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec
        Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec

    The output of a verbose mount, "mount -vvv /dev/mapper/8000 /media/8000", does not help much either:

        mount: fstab path: "/etc/fstab"
        mount: mtab path:  "/etc/mtab"
        mount: lock path:  "/etc/mtab~"
        mount: temp path:  "/etc/mtab.tmp"
        mount: UID:        0
        mount: eUID:       0
        mount: spec:  "/dev/mapper/8000"
        mount: node:  "/media/8000"
        mount: types: "(null)"
        mount: opts:  "(null)"
        mount: you didn't specify a filesystem type for /dev/mapper/8000
               I will try type ext4
        mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null)

    Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?

    Read the article

  • Setting Key Usage attributes with Makecert

    - by nlawalker
    Is it possible to set Key Usage attributes using makecert, or any other tool I can use to generate my own test certificates? The reason I'm interested is that certificates used for BizTalk Server AS2 transport require a key usage of Digital Signature for signing and Data Encipherment or Key Encipherment for encryption/decryption, and I want to play around with this feature. I see how to set enhanced key usage attributes with makecert, but not key usage.

    Read the article

  • Do programmable Ethernet devices (think onboard CPU) really exist?

    - by PeterM
    I've heard from various people that programmable Ethernet cards exist and are readily available, yet I have been unable to track down one of these mythical devices, so I'm wondering if they're just that - a myth. Such a programmable card has a gigabit Ethernet interface, carries a programmable CPU, and connects to the host system via PCI Express. The problem area these cards address is low-latency network applications, where the card itself does the work and "reports back" to the operating system. Basically, the card acts as a co-processor and handles all the low-latency requirements on board, thus avoiding the difficulties of writing low-latency code in user-land - think 0.4ms - 0.5ms response times. So my question is: do these cards really exist, and if so, where can I get my hands on one?

    Read the article

  • MongoDB usage best practices

    - by andresv
    The project I'm working on uses MongoDB for some things, so I'm creating some documents to help developers speed up the learning curve, avoid mistakes, and write clean, reliable code. This is my first version, so I'm pretty sure I will be adding more to it - stay tuned!

    C# Official driver notes

    The 10gen official MongoDB driver should always be referenced in projects by using NuGet. Do not manually download and reference assemblies in any project. C# driver quickstart guide: http://www.mongodb.org/display/DOCS/CSharp+Driver+Quickstart

    Reference links

    - C# Language Center: http://www.mongodb.org/display/DOCS/CSharp+Language+Center
    - MongoDB Server Documentation: http://www.mongodb.org/display/DOCS/Home
    - MongoDB Server Downloads: http://www.mongodb.org/downloads
    - MongoDB client drivers download: http://www.mongodb.org/display/DOCS/Drivers
    - MongoDB Community content: http://www.mongodb.org/display/DOCS/CSharp+Community+Projects

    Tutorials

    - Tutorial MongoDB con ASP.NET MVC - a practical example (Spanish): http://geeks.ms/blogs/gperez/archive/2011/12/02/tutorial-mongodb-con-asp-net-mvc-ejemplo-pr-225-ctico.aspx
    - MongoDB and C#: http://www.codeproject.com/Articles/87757/MongoDB-and-C
    - C# driver LINQ tutorial: http://www.mongodb.org/display/DOCS/CSharp+Driver+LINQ+Tutorial
    - C# driver reference: http://www.mongodb.org/display/DOCS/CSharp+Driver+Tutorial

    Safe Mode Connection

    The C# driver supports two connection modes: safe and unsafe. Safe mode only applies to methods that modify data in a database (inserts, deletes and updates). While the current driver defaults to unsafe mode (safeMode == false), it's recommended to always enable safe mode and to force unsafe mode only for the specific things we know aren't critical. When safe mode is enabled, the driver internally calls the MongoDB "getLastError" function to ensure the last operation has completed before returning control to the caller. For more information on using safe mode and its implications for performance and data reliability, see: http://www.mongodb.org/display/DOCS/getLastError+Command

    If safe mode is not enabled, all data-modification calls to the database are executed asynchronously (fire & forget) without waiting for the result of the operation. This mode can be useful for creating or updating non-critical data, like performance counters, usage logging and so on. It's important to know that not using safe mode implies that data loss can occur without any notification to the caller. As with any wait operation, enabling safe mode also implies dealing with timeouts. For more information about C# driver safe mode configuration, see: http://www.mongodb.org/display/DOCS/CSharp+getLastError+and+SafeMode

    The safe mode configuration can be specified at different levels:

    - Connection string: mongodb://hostname/?safe=true
    - Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method
    - Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method
    - Operation: for example, when executing the collection.Insert(document, safeMode) method

    A useful SafeMode article: http://stackoverflow.com/questions/4604868/mongodb-c-sharp-safemode-official-driver

    Exception Handling

    When using safe mode, the driver ensures that an exception will be thrown if something goes wrong (as said above, when not using safe mode no exception is thrown, no matter what the outcome of the operation is).
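    A minimal sketch tying the configuration levels above together, using the legacy 1.x driver API this document targets; the database and collection names are placeholders:

        using MongoDB.Bson;
        using MongoDB.Driver;
        using MongoDB.Driver.Builders;

        class SafeModeExample
        {
            static void Main()
            {
                // Level 1: connection string.
                MongoServer server = MongoServer.Create("mongodb://localhost/?safe=true");

                // Level 2: database. Level 3: collection.
                MongoDatabase db = server.GetDatabase("intranet", SafeMode.True);
                MongoCollection<BsonDocument> counters =
                    db.GetCollection("counters", SafeMode.False);

                // Level 4: per operation; safe mode waits on getLastError here.
                counters.Insert(new BsonDocument("name", "pageviews"), SafeMode.True);

                // With updates, DocumentsAffected tells you whether anything matched.
                SafeModeResult result = counters.Update(
                    Query.EQ("name", "pageviews"),
                    Update.Inc("value", 1),
                    SafeMode.True);
                if (result.DocumentsAffected == 0)
                {
                    // No document matched the query.
                }
            }
        }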
    As explained at https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/mS6jIq5FUiM, there is no need to check the return value of a driver method that inserts data. With updates the situation is similar to any other relational database: if an update command doesn't affect any records, the call succeeds anyway (no exception is thrown) and you have to manually check something like "records affected". With MongoDB, an update operation returns an instance of the SafeModeResult class, and you can check its DocumentsAffected property to ensure the intended document was indeed updated. Note: remember that an update method might return null instead of a SafeModeResult instance when safe mode is not enabled.

    Useful Community Articles

    - Comments about how MongoDB works and how that might affect your application: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/
    - FourSquare using MongoDB had serious scalability problems: http://mashable.com/2010/10/07/mongodb-foursquare/
    - Is MongoDB a replacement for Memcached? http://www.quora.com/Is-MongoDB-a-good-replacement-for-Memcached/answer/Rick-Branson
    - MongoDB introduction, shell, when not to use, maintenance, upgrades, backups, memory, sharding, etc.: http://www.markus-gattol.name/ws/mongodb.html
    - MongoDB collection-level locking support: https://jira.mongodb.org/browse/SERVER-1240
    - MongoDB performance tips: http://www.quora.com/MongoDB/What-are-some-best-practices-for-optimal-performance-of-MongoDB-particularly-for-queries-that-involve-multiple-documents
    - Lessons learned migrating from SQL Server to MongoDB: http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read
    - MongoDB replication performance: http://benshepheard.blogspot.com.ar/2011/01/mongodb-replication-performance.html

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index and very first question I received in email was as following. “We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created 1 non-clustered columnstore index on our large fact table. We have very unique situation but your article did not cover it. We are running few queries on our fact table which is working very efficiently but there is one query which earlier was running very fine but after creating this non-clustered columnstore index this query is running very slow. We dropped the columnstore index and suddenly this one query is running fast but other queries which were benefited by this columnstore index it is running slow. Any workaround in this situation?” In summary the question in simple words “How can we ignore using columnstore index in selective queries?” Very interesting question – you can use I can understand there may be the cases when columnstore index is not ideal and needs to be ignored the same. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index. SQL Server Engine will use any other index which is best after ignoring the columnstore index. Here is the quick script to prove the same. We will first create sample database and then create columnstore index on the same. Once columnstore index is created we will write simple query. This query will use columnstore index. We will then show the usage of the query hint. USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This Query may run upto 2-10 minutes based on your systems resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO Now we have created columnstore index so if we run following query it will use for sure the same index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO We can specify Query Hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as described in following query and it will not use columnstore index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) GO Let us clean up the database. -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO Again, make sure that you use hint sparingly and understanding the proper implication of the same. 
Make sure that you test with and without the hint and select the better option after a review with your administrator. Here is a question for you – have you started to use SQL Server 2012 for your validation and development (not on production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
TL;DR – in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, yet distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now, until I figure out how things work exactly – for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible.

Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" – all axis-aligned so far, no camera rotations. Herein lies my problem.

Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems – no matter what I try – as if I first created a rendering from the "front view", then rotated that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated looks:

Screenshot #2: camera "rotated to the right by 39 degrees" – note how the blue-shaded left-hand side of the cube from screenshot #1 is not visible in this rotation, yet by now "it really should"!

Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and the projection matrix. As I "rotate the camera to the right or left" – i.e. I rotate it around the Y-axis – those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the X-step need to be "added" to the Z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
Here's my basic per-ray pre-traversal algorithm – the syntax is Go, but take it as pseudocode:

fx and fy: pixel positions x and y
rayPos: vec3 for the ray starting position in world-space (calculated as below)
rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
rayStep: a temporary vec3
camPos: vec3 for the camera position in world space
camRad: vec3 for camera rotation in radians
pmat: typical perspective projection matrix

The algorithm / pseudocode:

// 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

// 2: rotate around Y axis depending on cam rotation. No prob since view plane is still at Origin 0,0,0
rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

// 3: a temp vec3. planeDist is -0.15 or some such -- the fov-based distance of the view plane
// from the eye, and also the non-normalized, "axis-aligned world" traversal step size
// "forward into the screen"
rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

// 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))

// 5: set up the direction vector from the still-origin-based ray position off the rotated
// view plane, plus the rotated z-step vector
rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

// 6: perspective projection
rayDir.Normalize()
rayDir.MultMat(pmat)

// 7: before traversal, the ray starting position has to be transformed from origin-relative
// to campos-relative
rayPos.Add(camPos)

I'm skipping the traversal and sampling parts – as per screens #1 through #3, those are "basically mostly correct" (though not pretty) – when axis-aligned / unrotated.
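One common way out of exactly this "flat render, then rotated" symptom is to stop transforming the view-plane position and the step vector separately: build the ray direction per pixel entirely in camera space, rotate only that direction into world space, and always start the ray at the camera position. The sketch below shows that approach in Go; the Vec3 type, its methods, and the FOV handling via planeDist are assumptions standing in for whatever math library the code above uses, not a verified fix for it:

package raycast

import "math"

// Vec3 is a minimal stand-in vector type.
type Vec3 struct{ X, Y, Z float64 }

func (v Vec3) Normalize() Vec3 {
    l := math.Sqrt(v.X*v.X + v.Y*v.Y + v.Z*v.Z)
    return Vec3{v.X / l, v.Y / l, v.Z / l}
}

// rotateY rotates v around the world Y axis by rad radians.
func rotateY(v Vec3, rad float64) Vec3 {
    s, c := math.Sin(rad), math.Cos(rad)
    return Vec3{c*v.X + s*v.Z, v.Y, -s*v.X + c*v.Z}
}

// PrimaryRay builds the ray for pixel (fx, fy). The direction is formed
// entirely in camera space (camera looks down -Z; planeDist is the positive
// distance of the view plane from the eye), then rotated once into world
// space. The origin is always the camera position -- the view-plane point
// is never used as the ray start and is never rotated on its own.
func PrimaryRay(fx, fy, width, height, planeDist float64, camPos Vec3, camRadY float64) (origin, dir Vec3) {
    d := Vec3{X: fx/width - 0.5, Y: fy/height - 0.5, Z: -planeDist}
    return camPos, rotateY(d.Normalize(), camRadY)
}

Note there is no projection matrix anywhere in the per-ray path: in a raycaster the pixel's offset on the view plane already encodes the perspective, so multiplying ray directions by pmat is itself a plausible source of the distortion.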

    Read the article

  • Clever memory usage through the years

    - by Ben Emmett
A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I'd share my favorites, and would love to hear yours too!

Interleaving on drum memory

Back in the ye olde days before I'd been born (we're talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum: they timed when an instruction or memory address would need to be accessed, then spaced information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

The Hashlife algorithm

Conway's Game of Life has attracted numerous implementations over the years, but Bill Gosper's Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it's used in a clever way, so it makes the list.

Elite's procedural generation

Ok, so this isn't exactly a memory optimization – more a storage optimization – but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K of memory was something of a limitation (for comparison, that's about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did it all in assembly language. Other games of the time used similar techniques too – The Sentinel's landscape generation algorithm being another example.

Modern Garbage Collectors

Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification, and that there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you), but they're much, much rarer.

A cautionary note

In the examples above there were good and well-understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there's actually a problem is doing it wrong. So what have I missed? Tell me about the ingenious (or stupid) tricks you've seen people use. Ben
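As an aside on the Hashlife entry above: the heart of that trick is hash-consing the quadtree, so every distinct sub-block is built exactly once and shared everywhere it recurs, which is what lets enormous repetitive grids fit in memory at all. Here is a minimal sketch of that canonicalization step in Go – the types and the map-based cache are illustrative assumptions, not Gosper's original code:

package hashlife

// node is a quadtree cell with side length 2^level; level-0 nodes are single cells.
type node struct {
    level          int
    nw, ne, sw, se *node
    alive          bool // meaningful only at level 0
}

// cache canonicalizes nodes: two structurally identical sub-blocks are always
// represented by the same pointer, no matter how often they recur in the grid.
var cache = map[[4]*node]*node{}

// join returns the unique node with the four given children, creating it only
// on first sight. Because children are themselves canonical, pointer equality
// makes the map key both cheap and correct.
func join(nw, ne, sw, se *node) *node {
    key := [4]*node{nw, ne, sw, se}
    if n, ok := cache[key]; ok {
        return n
    }
    n := &node{level: nw.level + 1, nw: nw, ne: ne, sw: sw, se: se}
    cache[key] = n
    return n
}

Hashlife then memoizes its recursive "advance this block through time" results against these canonical pointers as well, which is why large patterns often run faster as the cache warms up.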

    Read the article

  • zlib memory usage / performance. With 500kb of data.

    - by unixman83
Is zlib worth it? Are there other, better-suited compressors? I am using an embedded system, and frequently I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. I am concerned about the overhead, however. The buffers' average size will be 30 KB, which probably won't compress well with zlib; however, I will experience occasional maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify? Does anyone know of a good compressor for extremely limited memory environments? My sole considerations are the RAM overhead of the algorithm and performance at least as good as zlib's.
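Before committing to zlib it is cheap to measure its ratio and speed on representative buffers. Below is a minimal sketch in Go using the standard library's compress/zlib package; the sample data is a made-up stand-in, and real ratios depend entirely on how repetitive the actual buffers are:

package main

import (
    "bytes"
    "compress/zlib"
    "fmt"
    "log"
)

func main() {
    // Stand-in for a typical 30 KB buffer; substitute captured real data.
    data := bytes.Repeat([]byte("sensor reading: 42; "), 1536) // ~30 KB

    for _, level := range []int{zlib.BestSpeed, zlib.DefaultCompression, zlib.BestCompression} {
        var out bytes.Buffer
        w, err := zlib.NewWriterLevel(&out, level)
        if err != nil {
            log.Fatal(err)
        }
        if _, err := w.Write(data); err != nil {
            log.Fatal(err)
        }
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("level %2d: %d -> %d bytes\n", level, len(data), out.Len())
    }
}

On the RAM side, the C zlib API exposes the relevant knobs through deflateInit2: reducing the windowBits and memLevel parameters shrinks the per-stream working memory considerably at some cost in ratio, which is usually the trade that matters most under a 3 MB budget.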

    Read the article
