Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • links for 2010-06-02

    - by Bob Rhubart
    @eelzinga: Oracle Service Bus 11g communication with Oracle SOA Suite 11g, DirectBindings, part 1 -- Oracle ACE Eric Elzinga launches a series of posts in which he describes how to develop various Oracle Service Bus 11g to Oracle SOA Suite process flows. (tags: oracle otn oracleace soa servicebus)
    @Atul_Kumar: Integrate UCM (ECM/Content Server) with Microsoft Active Directory as LDAP Provider -- Atul Kumar's step-by-step instructions. (tags: oracle otn enterprise2.0 ucm ecm ldap)
    Stefan Hinker: Is my application a good fit for CMT? -- "The first and most important criterion for suitability is always the service time of your application," says Stefan Hinker. "If this is sufficient, then the application is OK on CMT. If it is not, and the reason is actually the CPU and not some other high-latency component (like a remote database), you will need to test on other CPU architectures." (tags: oracle sun cpu cmt sparc solaris)
    @deltalounge: Definitions of Services and Processes -- Peter Paul shares a collection of useful definitions gathered from the works of many of the big thinkers in the SOA space. (tags: oracle otn soa businessprocess)
    OTN TechCast: Oracle Solaris Virtualization - Oracle Solaris Video -- Joost Pronk, CTO for Oracle Solaris Product Management, provides an overview of the robust virtualization functionality built into the Oracle Solaris OS. (tags: oracle otn solaris virtualization)

    Read the article

  • How can I get the Creative Zen Touch to work?

    - by Hello71
    I've tried using gnomad2 (too lazy to configure all the dependencies) and Amarok (segfaulted when I tried to do anything with it).

    $ lsusb
    Bus 002 Device 004: ID 041e:4131 Creative Technology, Ltd Zen Touch (mtp)
    # SNIP OUTPUT #
    $ lsusb --verbose -s 002:004
    Bus 002 Device 004: ID 041e:4131 Creative Technology, Ltd Zen Touch (mtp)
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               2.00
      bDeviceClass          255 Vendor Specific Class
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0        64
      idVendor           0x041e Creative Technology, Ltd
      idProduct          0x4131 Zen Touch (mtp)
      bcdDevice            1.00
      iManufacturer           1 Creative Technology Ltd
      iProduct                2 Creative Zen Touch
      iSerial                 3 010125517D039098
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           39
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration         16 Configuration 1
        bmAttributes         0xc0 Self Powered
        MaxPower              440mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           3
          bInterfaceClass         0 (Defined at Interface level)
          bInterfaceSubClass      0
          bInterfaceProtocol      0
          iInterface             33 MTP Interface
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x01 EP 1 OUT
            bmAttributes            2
              Transfer Type            Bulk
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0200 1x 512 bytes
            bInterval               0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x82 EP 2 IN
            bmAttributes            2
              Transfer Type            Bulk
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0200 1x 512 bytes
            bInterval               0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x83 EP 3 IN
            bmAttributes            3
              Transfer Type            Interrupt
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0008 1x 8 bytes
            bInterval               4
    Device Qualifier (for other device speed):
      bLength                10
      bDescriptorType         6
      bcdUSB               2.00
      bDeviceClass          255 Vendor Specific Class
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0        64
      bNumConfigurations      1
    can't get debug descriptor: Connection timed out
    Device Status:     0x0000 (Bus Powered)

    Read the article

  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s) and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object is rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice and my game was completely CPU bound, because there is a little CPU overhead associated with every GPU draw call. The GPU stayed idle most of the time. Now, I thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch) and thought that my problems were gone... only to find out that my frame rate was even lower than before. Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate new positions (translation and rotation) for the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200*60*4, since every sprite has 4 vertices) simply seems to be too slow. What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really have to recalculate vertex positions. The only optimization I could think of is a look-up table for rotations so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.
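
    One way to make the look-up-table idea cheaper still is to compute the sine/cosine once per sprite rather than once per vertex, then reuse that pair for all four corners while filling the batch's CPU-side vertex array. The sketch below is illustrative only (plain C#, not Irrlicht's API); the Sprite and SpriteBatcher names are invented for the example:

    using System;

    struct Sprite { public float X, Y, HalfW, HalfH, Angle; }

    static class SpriteBatcher
    {
        // Fills 8 floats (4 corners * x,y) per sprite into the array backing the batch VBO.
        public static void FillVertices(Sprite[] sprites, float[] xy)
        {
            for (int i = 0; i < sprites.Length; i++)
            {
                Sprite s = sprites[i];
                float c = (float)Math.Cos(s.Angle);   // one cos per sprite, not per vertex
                float sn = (float)Math.Sin(s.Angle);  // one sin per sprite, not per vertex
                int o = i * 8;
                Write(xy, o + 0, -s.HalfW, -s.HalfH, c, sn, s.X, s.Y);
                Write(xy, o + 2,  s.HalfW, -s.HalfH, c, sn, s.X, s.Y);
                Write(xy, o + 4,  s.HalfW,  s.HalfH, c, sn, s.X, s.Y);
                Write(xy, o + 6, -s.HalfW,  s.HalfH, c, sn, s.X, s.Y);
            }
        }

        static void Write(float[] xy, int o, float lx, float ly, float c, float sn, float tx, float ty)
        {
            xy[o]     = lx * c - ly * sn + tx;  // rotate local corner, then translate
            xy[o + 1] = lx * sn + ly * c + ty;
        }
    }

    The same loop is also the natural place to skip sprites that are off-screen, and to swap Math.Cos/Math.Sin for the rotation look-up table mentioned above.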

    Read the article

  • conky stopped displaying after dual monitor setup -- works when I detect monitors

    - by synaptik
    I just recently did a clean install of Ubuntu 12.04. I was previously using 11.10. I am also using a new laptop with a Dell docking station and two external monitors. When I try to use the .conkyrc file that I used previously, my conky display simply doesn't show up anywhere. However, after I went to System Settings > Displays and made some slight change that caused the monitors to refresh, conky appeared as it should. Here is my .conkyrc file:

    background yes
    use_xft yes
    xftfont DejaVu Sans Mono:size=8
    xftalpha 0.8
    out_to_console no
    update_interval 2.0
    total_run_times 0
    draw_shades no
    short_units yes
    # Create own window instead of using desktop (required in nautilus)
    own_window yes
    # If own_window is yes, you may use type normal, desktop or override
    own_window_type override
    # Use pseudo transparency with own_window?
    own_window_transparent yes
    double_buffer yes
    default_color f0e68c
    color1 white
    color2 AD0303
    alignment bottom_left
    gap_x 2
    gap_y 30
    no_buffers yes
    use_spacer right
    pad_percents 3
    xftfont Terminus:size=10
    TEXT
    $stippled_hr
    cpu1: ${color1}${cpu cpu1}% ${color} cpu2: ${color1}${cpu cpu2}% ${color}
    load: ${color1}$loadavg ${color}
    hot proc: ${color1}${top cpu 1}% - ${top name 1}${color}
    $stippled_hr
    big proc: ${color1}${top_mem mem_res 1} - ${top_mem name 1}${color}
    memory: ${color1}$mem/$memmax $memperc%${color}
    $stippled_hr
    disk: ${color1}${fs_used /}/${fs_size /}${color}
    swap: ${color1}${swap}/${swapmax}${color}
    ${diskiograph_read 15,120 color1 0077ff 750} ${diskiograph_write 15,120 color1 0077ff 750}
    $stippled_hr
    download: ${color1}${downspeed wlan0} /s${color} ${downspeedgraph eth0 20,120 104E8B 0077ff}
    upload: ${color1}${upspeed wlan0} /s${color} ${upspeedgraph eth0 20,120 104E8B 0077ff}

    How can I fix it so that I don't have to tamper with the Displays settings in order for conky to show up?

    Read the article

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil) If your OBIEE 11.1.1.7.x is set up in the following way:
    - The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    - Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
    - The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
    - The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)
    then we've got news for you: KM Article "OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control" (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard. This diagnostic approach:
    - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system, including the repository;
    - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and the Oracle DB;
    - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.
    (layout and post: Torben, authorized: Lia)

    Read the article

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of occluded objects within walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be batched into render batches, and also a lot of objects that can be drawn with instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects into the same VBO, it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip the culling on the CPU, a lot of objects will be processed by the GPU even though they are not visible. If you skip batching completely in order to cull more easily on the CPU, there will be an excessive number of draw calls. I have done some research into existing techniques and theories as to how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room or everything within a radius of n meters; this could be simplified and optimized through the use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching, etc. in state-of-the-art modern graphics engines?
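
    One way to make the "batch only nearby objects" idea concrete is to key batches by a coarse grid cell (or octree node) as well as by material, and then frustum-test whole batches before issuing draw calls. The following is a rough sketch only, with invented stand-in types rather than any particular engine's API:

    using System;
    using System.Collections.Generic;

    struct Bounds { public float MinX, MinY, MinZ, MaxX, MaxY, MaxZ; }
    class Mesh { public Bounds Box; public int MaterialId; }

    class RenderBatch
    {
        public int MaterialId;
        public Bounds Box;                          // union of member bounds, tested once per frame
        public List<Mesh> Meshes = new List<Mesh>();
    }

    class BatchedScene
    {
        const float CellSize = 10f;                 // "objects within n meters" becomes one grid cell
        readonly Dictionary<Tuple<int, int, int, int>, RenderBatch> batches =
            new Dictionary<Tuple<int, int, int, int>, RenderBatch>();

        public void Add(Mesh mesh)
        {
            var key = Tuple.Create(
                (int)Math.Floor(mesh.Box.MinX / CellSize),
                (int)Math.Floor(mesh.Box.MinY / CellSize),
                (int)Math.Floor(mesh.Box.MinZ / CellSize),
                mesh.MaterialId);

            RenderBatch batch;
            if (!batches.TryGetValue(key, out batch))
            {
                batch = new RenderBatch { MaterialId = mesh.MaterialId, Box = mesh.Box };
                batches[key] = batch;
            }
            else
            {
                batch.Box = Union(batch.Box, mesh.Box);
            }
            batch.Meshes.Add(mesh);                 // geometry later merged into one VBO per batch
        }

        public void Draw(Func<Bounds, bool> intersectsFrustum, Action<RenderBatch> submit)
        {
            foreach (RenderBatch batch in batches.Values)
                if (intersectsFrustum(batch.Box))   // cull the whole batch, not individual objects
                    submit(batch);                  // one draw call per surviving batch
        }

        static Bounds Union(Bounds a, Bounds b)
        {
            return new Bounds
            {
                MinX = Math.Min(a.MinX, b.MinX), MinY = Math.Min(a.MinY, b.MinY), MinZ = Math.Min(a.MinZ, b.MinZ),
                MaxX = Math.Max(a.MaxX, b.MaxX), MaxY = Math.Max(a.MaxY, b.MaxY), MaxZ = Math.Max(a.MaxZ, b.MaxZ)
            };
        }
    }

    The trade-off is tunable through CellSize: smaller cells cull more aggressively but produce more draw calls, larger cells do the opposite, which is exactly the batching-versus-culling tension described above.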

    Read the article

  • What happens to remounted data/directories

    - by cauon
    According to suggestions in this post I am trying to improve my system to run better with a solid-state drive. But regarding RAM disks and /etc/fstab usage I have some understanding problems coming up. So let's say I add the following lines to /etc/fstab:

    tmpfs /tmp       tmpfs defaults,noatime,nodiratime,mode=1777 0 0
    tmpfs /var/spool tmpfs defaults,noatime,nodiratime,mode=1777 0 0
    tmpfs /var/tmp   tmpfs defaults,noatime,nodiratime,mode=1777 0 0
    tmpfs /var/log   tmpfs defaults,noatime,nodiratime,mode=0755 0 0

    I know that on startup these locations should now get mounted into RAM (hopefully). But what happens to the physical space that was mounted at those places before? Is it gone? Will it be back when I edit my /etc/fstab back to the version without tmpfs? Will the space still be allocated on my SSD in a way that I can't use it for any other data? Sometimes it is suggested to add the following line, too:

    none /var/cache aufs dirs=/tmp:/var/cache=ro 0 0

    What does this actually do? I noticed that /var/cache takes almost 1 GB of space on my hard disk. So should I clear the directory before activating this line? (This is related to the former question.) This causes me some confusion and I hope you can give me some clarification. UPDATE: I've downloaded an image 600 MB in size into /tmp, which is mounted with the tmpfs settings above. Now I wanted to compare the RAM usage before and after the download. I expected the RAM usage to increase by 600 MB after the download, but the System Monitor tool showed me no change at all. How can this be? Does tmpfs work differently than I expect it to?

    Read the article

  • SlimDX Texture2D from DataRectangle array

    - by Rebekah Bryant
    I'm totally new to DirectX. I'm using SlimDX to create a texture consisting of 13046 DataRectangles. Here's my code. It breaks on the Texture2D constructor with "E_INVALIDARG: An invalid parameter was passed to the returning function (-2147024809)." inParms is just a struct containing a handle to a Panel.

    public Renderer(Parameters inParms, ref DataRectangle[] inShapes)
    {
        Texture2DDescription description = new Texture2DDescription()
        {
            Width = 500,
            Height = 500,
            MipLevels = 1,
            ArraySize = inShapes.Length,
            Format = Format.R32G32B32_Float,
            SampleDescription = new SampleDescription(1, 0),
            Usage = ResourceUsage.Default,
            BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
            CpuAccessFlags = CpuAccessFlags.None,
            OptionFlags = ResourceOptionFlags.None
        };

        SwapChainDescription chainDescription = new SwapChainDescription()
        {
            BufferCount = 1,
            IsWindowed = true,
            Usage = Usage.RenderTargetOutput,
            ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm),
            SampleDescription = new SampleDescription(1, 0),
            Flags = SwapChainFlags.None,
            OutputHandle = inParms.Handle,
            SwapEffect = SwapEffect.Discard
        };

        Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, chainDescription, out mDevice, out mSwapChain);
        Texture2D texture = new Texture2D(Device, description, inShapes);
    }

    EDIT: Running with the Debug flag set, I got:

    D3D11 ERROR: ID3D11Device::CreateTexture2D: The format (0x6, R32G32B32_FLOAT) cannot be bound as a RenderTarget, or cast to a format that could be bound as a RenderTarget. This is because the current graphics implementation does not even support this Format. Therefore this format does not support D3D11_BIND_RENDER_TARGET. Use CheckFormatSupport to check Format support. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT]
    D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]
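
    The debug output points at the root cause: the three-component R32G32B32_Float format is generally not render-target capable. A hedged sketch of a fix, assuming SlimDX's Direct3D11 wrapper (mDevice is the device created above, and switching formats means each DataRectangle row must be padded from 12 to 16 bytes per texel):

    // Check support first; fall back to a 4-component format, or drop the
    // RenderTarget bind flag entirely if the texture is only ever sampled.
    Format format = Format.R32G32B32_Float;
    FormatSupport support = mDevice.CheckFormatSupport(format);
    if ((support & FormatSupport.RenderTarget) == 0)
    {
        format = Format.R32G32B32A32_Float;   // render-target capable on essentially all hardware
    }

    Texture2DDescription description = new Texture2DDescription()
    {
        Width = 500,
        Height = 500,
        MipLevels = 1,
        ArraySize = inShapes.Length,          // one DataRectangle per array slice
        Format = format,
        SampleDescription = new SampleDescription(1, 0),
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None
    };

    Texture2D texture = new Texture2D(mDevice, description, inShapes);

    Note as well that Direct3D 11 limits a Texture2D array to 2048 slices, so 13046 DataRectangles would have to be split across several textures (or packed differently) even once the format issue is solved.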

    Read the article

  • Announcing Oracle Environmental Accounting and Reporting

    - by Theresa Hickman
    Oracle just launched a new product called Oracle Environmental Accounting and Reporting, designed to help companies track and report their greenhouse gas emissions at the operational level. Companies around the world are facing increasing pressure to improve their energy efficiency and reduce waste in their operations. Also, new worldwide greenhouse gas legislation is putting added pressure on companies to report the impact of their emissions and energy usage on the environment. Today, companies undergo extensive and expensive data audits to maintain a ledger of up-to-date emissions factors that compare figures on an annual basis. Existing "ad hoc" approaches utilizing manual or niche solutions have a high operational cost and weak data security and auditability. The ideal solution is to embed environmental usage within mainstream business operations, such as recording energy usage at the time of invoice entry, and then report on those results. This is precisely what Oracle Environmental Accounting and Reporting is designed to do. You can now capture environmental data either electronically or manually; convert it to greenhouse gas emissions; comply with mandatory and voluntary greenhouse gas reporting schemes; and identify opportunities for CO2 emissions and cost reductions. Oracle recently acquired the intellectual property for this solution, which works with both Oracle E-Business Suite Release 12 and JD Edwards EnterpriseOne Release 9.0. For more information, visit Oracle Environmental Accounting and Reporting.

    Read the article

  • Why does 'top' say my machine is only 50% idle?

    - by Chris Moore
    What's going on here? I'm running nothing on the system, iotop and iftop show the network and hard drive are both idle, and top (sorted by %CPU) shows nothing running. So why is the system only 50% idle? What's the other 50% waiting for? How can I find out?

    top - 12:01:05 up 3 days, 15:03, 1 user, load average: 6.00, 6.01, 6.05
    Tasks: 179 total, 1 running, 178 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.7%us, 0.0%sy, 0.0%ni, 49.7%id, 49.7%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 2053996k total, 1992600k used, 61396k free, 81680k buffers
    Swap: 4092924k total, 10740k used, 4082184k free, 1338636k cached

      PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
     1042 deb   20  0 21468 1412 1000 R    1  0.1 0:00.03 top
        1 root  20  0 24188 1952 1152 S    0  0.1 0:01.44 init
        2 root  20  0     0    0    0 S    0  0.0 0:00.05 kthreadd

    Update: dmesg shows the printer driver misbehaving:

    [28858.561847] cnijnetprn[1503]: segfault at 29 ip 00007f56cf3480f7 sp 00007fffb964ec30 error 4 in libcnnet.so.1.2.0[7f56cf345000+9000]
    [68851.187802] cnijnetprn[9180]: segfault at 29 ip 00007ffe7636a0f7 sp 00007fff9a8b1990 error 4 in libcnnet.so.1.2.0[7ffe76367000+9000]
    [155412.107826] cnijnetprn[19966]: segfault at 29 ip 00007fc31de770f7 sp 00007fffc03aa8e0 error 4 in libcnnet.so.1.2.0[7fc31de74000+9000]

    and also some issue with cp:

    [248041.172067] INFO: task cp:27488 blocked for more than 120 seconds.
    [248041.172071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    [248041.172075] cp              D ffffffff81805120     0 27488  27345 0x00000004
    [248041.172080]  ffff880078d57a38 0000000000000046 ffff880078d579d8 ffffffff81032a79
    [248041.172085]  ffff880078d57fd8 ffff880078d57fd8 ffff880078d57fd8 0000000000012a40
    [248041.172090]  ffff88007b818000 ffff880069acc560 ffff880078d57a18 ffff88007f8532c0
    [248041.172095] Call Trace:
    [248041.172104]  [<ffffffff81032a79>] ? default_spin_lock_flags+0x9/0x10
    [248041.172109]  [<ffffffff8110a360>] ? __lock_page+0x70/0x70
    [248041.172114]  [<ffffffff815f0ecf>] schedule+0x3f/0x60

    I did try copying something to the USB stick that's plugged into the router and mounted onto this computer using mount.cifs. That almost always causes everything to lock up, so I'm guessing that's the problem. I'll reboot and stop using mount.cifs.

    Read the article

  • Would I be able to use code hosting services to host malware code?

    - by NlightNFotis
    Let me start by saying that I am a computer security researcher. Part of my job is to create malware to deploy in a controlled environment in order to study or evaluate several aspects of computer security. Now, I am starting to think that using an online code hosting service (such as BitBucket, GitHub, etc.) to keep all my code in one place would allow me to work on my projects more efficiently. My question is: are there any issues with this? I have studied those companies' privacy policies, and they state that they allow usage of their services for lawful purposes. Since I am not distributing malware, but am only using it on my machines and machines that I am authorized to use, aren't I allowed to use the service? For the usage that I am making of it, malware is the same as any other software. I recognize that I should be extremely careful with code hosting, as any mistake on my part could hold me liable for damages and leave me open to legal action. As such, I recognize that I should use private repositories, so the code is not available to the public. But how private is a private repository? How can I trust that companies like these will not leak or sell potential (electronic) viral weaponry that I may create in the future?

    Read the article

  • Laptop battery life drastically decreased compared to Windows 7

    - by Aron Rotteveel
    I am running Ubuntu 10.10 on my Dell Studio XPS 1640 and get about one hour of battery life from it, compared to about 2.5 hours running Windows 7. This is with wireless and Bluetooth on, but still, the difference seems incredible. What could be causing such a difference, and is there a way to close the gap without losing core functionality? EDIT: here's some output from powertop. This is with Bluetooth turned off and Wi-Fi turned on. The output seems pretty normal to me, but as indicated, this is about 1 hour of battery life on a full battery...

    Wakeups-from-idle per second : 476.2    interval: 10.0s
    Power usage (ACPI estimate): 2.5W (1.2 hours)

    Top causes for wakeups:
      30.0% (167.2)D  chrome
      21.0% (117.3)   [extra timer interrupt]
      13.9% ( 77.4)   [kernel scheduler] Load balancing tick
       3.4% ( 18.9)D  xchat
       7.1% ( 39.8)   [iwlagn] <interrupt>
       5.9% ( 32.9)   AptanaStudio3
       3.9% ( 21.6)D  java
       2.7% ( 14.9)   [TLB shootdowns] <kernel IPI>
       2.5% ( 14.1)   docky
       1.8% ( 10.0)   nautilus
       1.6% (  9.0)   thunderbird-bin
       1.0% (  5.5)   [ahci] <interrupt>
       0.9% (  5.0)   syndaemon
       0.8% (  4.3)   [kernel core] hrtimer_start (tick_sched_timer)

    EDIT: after changing /proc/sys/vm/laptop_mode to 5 (it was set to 0), wakeups seem to have decreased, although usage still seems far too high:

    Wakeups-from-idle per second : 263.8    interval: 10.0s
    Power usage (ACPI estimate): 2.6W (0.9 hours)

    EDIT: I seem to have discovered the main cause: I was using the open-source ATI drivers. I recently installed the official ATI drivers and laptop battery life seems to have doubled since. EDIT: last edit. The previous 'solution' of installing the official ATI drivers turns out to be a non-solution. Although it does increase battery life, my laptop resolution is maxed out at 1200x800 after a reboot. (Please note that this problem does not need answering in this question, as it is a separate case.)

    Read the article

  • Repeat use of Schema / Rich Snippets Markup, i.e. LocalBusiness Data

    - by bybe
    I am unable to find official wording and I'm hoping that some Rich Snippets/Schema guru can give me some insight into the proper usage of repeated content when it comes to markup. I'm building a site that wants to use Schema as the markup type, and the owner would like as much usage as possible. The business name, telephone and address will appear on every page; now, is it valid or even useful to use Rich Snippets on every page where this information is displayed? For example, this information appears in the header and footer of every page of the site, and to give you an example of my current markup, see below:

    <body itemscope itemtype="http://schema.org/LocalBusiness">
      <header>
        <a itemprop="url" href="http://www.domain.co.uk/">
          <img itemprop="logo" src="image.png" alt="Company Name Logo" />
        </a>
        <span itemprop="telephone">01202 000 000</span>
      </header>
      <div> This is where the content will go</div>
      <footer>
        <span itemprop="name">Company Name</span>
        <span itemprop="description"> A small little bit about this company</span>
        <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
          <span itemprop="streetAddress">Address Goes here</span>
          <span itemprop="addressLocality">Area Here</span>,
          <span itemprop="addressRegion">Region Here</span>
        </div>
      </footer>
    </body>
    <!-- Local Business Schema now closed -->

    So as you can see above, this information will be displayed on every single page... Is it valid, or bad practice, to repeat usage of this information in Schema format?

    Read the article

  • Fix for poor HD playback for 11.04 upwards

    - by mark kirby
    Hi guys, I've seen loads of posts on this site about poor 720p/1080p playback in recent Ubuntu versions. I had this problem and fixed it, so I thought I'd share it with everyone:
    1. Install mplayer.
    2. Install the SMPlayer frontend (in the Software Center).
    3. Open SMPlayer.
    4. Go to "Options", then "Preferences", then "General".
    5. If you have an Nvidia card, choose "Output driver" and select "vdpau". (For ATI or AMD choose xv (0 - ATI Radeon AVIVO video); I don't know if this will work, as my card is Nvidia, but it should.)
    6. Go to "Performance" on the left-hand side and set both the local and streaming cache to 99999 (this may also fix DVD playback if you set that cache as well).
    7. Check the box for "Allow hard frame drop" and set "Loop filter" to skip only on HD.
    8. Set the "Threads for decoding" option to the number of cores your CPU has; if you have more than one CPU, add up all the cores for best performance.
    9. Enjoy your HD movies again on Ubuntu.
    I have a pretty average machine; here's my spec: 2x Pentium 4 HT 3 GHz, stock Dell power supply and motherboard, GeForce 310 HDMI, and a 24 inch full HD TV as a monitor. So anyone with a dual-core CPU should have no problems getting this to work. Hope this helps someone out.

    Read the article

  • How successful is GPL in reaching its goals?

    - by StasM
    There are, broadly, two types of FOSS licenses when it comes to commercial usage of the code - let's say the GPL type and the BSD type. The first is, broadly, restrictive about commercial usage (by usage I also mean modification and redistribution, as well as creating derived works, etc.) of the code under the license, and the second is much more permissive. As I understand it, the idea behind GPL-type licenses is to encourage people to abandon the proprietary software model and instead convert to FOSS code, and the license is the instrument to entice them to do so - i.e. "you can use this nice software, but only if you agree to come to our camp and play by our rules". What I want to ask is - has this strategy been successful so far? I.e. are there any major achievements in the form of some big project going from closed to open because of the GPL, or some software being developed in the open only because the GPL made it so? How big is the impact of this strategy - compared, say, to a world where everybody used BSD-type licenses or released all open-source code into the public domain? Note that I am not asking whether the FOSS model is successful - that is beyond question. What I am asking is whether the specific way of enticing people to convert from proprietary to FOSS used by GPL-type and not by BSD-type licenses has been successful.

    Read the article

  • Laptop runs HOT after 12.10 upgrade!

    - by dinkelk
    I was running 12.04 for 6 months; my laptop ran almost silently and cool enough to hold on my lap. I updated to 12.10 and now my computer gets too hot to hold on my lap and the fan is constantly running on full blast. This is the output of sensors:

    acpitz-virtual-0
    Adapter: Virtual device
    temp1:        +84.0°C  (crit = +99.0°C)

    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0:  +84.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 0:         +74.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 1:         +72.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 2:         +75.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 3:         +84.0°C  (high = +86.0°C, crit = +100.0°C)

    radeon-pci-0100
    Adapter: PCI adapter
    temp1:        +76.0°C

    I have an HP Pavilion dv6, i7, AMD Radeon graphics. Please let me know if you need additional information. What could be different between the two Ubuntu editions that caused such a drastic change? Edit 1: Per @Paul's suggestion, I ran htop to try to narrow down the problem. Here is the result! This is about 10 minutes after boot-up; htop, yakuake, and a Chrome window with one tab opened to this question are all that I have manually opened. The most taxing program for the CPU is htop itself. I think that the problem must lie elsewhere; my temps are already up to ~65C for the CPU and ~69C for the GPU, with nearly 0% CPU usage.

    Read the article

  • what's the difference between Routed Events and Attached Events?

    - by vverma01
    I tried to find the answer through various sources but am still unable to understand the difference between routed events and attached events in WPF. In most places that reference attached events, the following example is used:

    <StackPanel Button.Click="StackPanel_Click">
        <Button Content="Click Me!" Height="35" Width="150" Margin="5" />
    </StackPanel>

    It is explained as: the StackPanel does not define a Click event, and hence the Button.Click event is "attached" to the StackPanel. Whereas MSDN says: "You can also name any event from any object that is accessible through the default namespace by using a typename.event partially qualified name; this syntax supports attaching handlers for routed events where the handler is intended to handle events routing from child elements, but the parent element does not also have that event in its members table. This syntax resembles an attached event syntax, but the event here is not a true attached event. Instead, you are referencing an event with a qualified name." According to the MSDN information pasted above, the example with the Button and StackPanel is actually a routed event example, not a true attached event example. If the example really is a true use of an attached event (Button.Click="StackPanel_Click"), then it contradicts the MSDN text, which says: "Another syntax usage that resembles typename.eventname attached event syntax but is not strictly speaking an attached event usage is when you attach handlers for routed events that are raised by child elements. You attach the handlers to a common parent, to take advantage of event routing, even though the common parent might not have the relevant routed event as a member." A similar question was raised in this Stack Overflow post, but unfortunately that question was closed before it could collect any responses. Please help me understand how attached events are different from routed events, and also clarify the ambiguity pointed out above.
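
    A code-behind sketch may make the distinction more concrete. It assumes the StackPanel above was given x:Name="stackPanel"; both calls use the same AddHandler mechanism, which is why the XAML forms look so similar:

    using System.Windows;
    using System.Windows.Controls.Primitives;
    using System.Windows.Input;

    public partial class MainWindow : Window
    {
        void WireUpHandlers()
        {
            // Button.Click is a routed event owned by ButtonBase. The StackPanel has no
            // Click member of its own, but it can still listen because the event bubbles
            // up to it -- this is the "qualified event name" case MSDN describes.
            stackPanel.AddHandler(ButtonBase.ClickEvent,
                new RoutedEventHandler((sender, e) =>
                {
                    // e.Source is the Button that originally raised the event.
                }));

            // Mouse.MouseDown is a genuinely attached event: the Mouse class defines and
            // owns it, and elements that never declared such an event can still have
            // handlers attached for it.
            stackPanel.AddHandler(Mouse.MouseDownEvent,
                new MouseButtonEventHandler((sender, e) =>
                {
                    // Fires for mouse-downs anywhere inside the panel's subtree.
                }));
        }
    }

    In other words, Button.Click on a StackPanel is routed-event handling written with a qualified name, while a true attached event is owned and registered by a separate type (like Mouse or Validation) so that any element can handle it even though the element type never declared it; the XAML syntax looks the same, which is exactly what makes the two easy to confuse.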

    Read the article

  • slow virtualbox guest

    - by ecoologic
    I run a guest Ubuntu 12.04 on a host Ubuntu 12.04 with VirtualBox, and the guest is much, much slower than the host (Alt+Tab takes 4-5 seconds). I had a look around and found contradictory opinions on VirtualBox vs the free VMware, so I decided to keep the former. Both systems are up to date, I installed the guest additions, and I evenly split memory and video memory (64 MB) between guest and host. I am running a Toshiba M200 laptop with 4 GB RAM and shared video memory. The host BIOS does not include a configuration option for machine virtualization. I have 2 CPUs and I can't give them both to the VM. Is there anything I overlooked that could solve my problem? Feel free to ask for more info, and thank you for any help. EDIT: Idling with the system monitor open, the (single) guest CPU never gets below 55% and can rise to 80-90% just by moving the mouse around; opening Firefox drives the guest monitor to 100%, while the host shows that both CPUs are evenly working at around 60%. My CPU is an Intel® Core™2 Duo CPU T5450 @ 1.66GHz × 2. If this is not a configuration problem, does it mean my machine is too weak for virtualization?

    Read the article

  • OUCH! Laptop running SUPER HOT after 12.10 upgrade!

    - by dinkelk
    I was running 12.04 for 6 months; my laptop ran almost silently and cool enough to hold on my lap. I updated to 12.10 and now my computer gets too hot to hold on my lap and the fan is constantly running on full blast. This is the output of sensors:

    acpitz-virtual-0
    Adapter: Virtual device
    temp1:        +84.0°C  (crit = +99.0°C)

    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0:  +84.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 0:         +74.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 1:         +72.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 2:         +75.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 3:         +84.0°C  (high = +86.0°C, crit = +100.0°C)

    radeon-pci-0100
    Adapter: PCI adapter
    temp1:        +76.0°C

    I have an HP Pavilion dv6, i7, AMD Radeon graphics. Please let me know if you need additional information. What could be different between the two Ubuntu editions that caused such a drastic change? Edit 1: Per @Paul's suggestion, I ran htop to try to narrow down the problem. Here is the result! (left side of terminal) (right side of terminal) This is about 10 minutes after boot-up; htop, yakuake, and a Chrome window with one tab opened to this question are all that I have manually opened. The most taxing program for the CPU is htop itself. I think that the problem must lie elsewhere; my temps are already up to ~65C for the CPU and ~69C for the GPU, with nearly 0% CPU usage.

    Read the article

  • How Do Computers Work? [closed]

    - by Rob P.
    This is almost embarrassing to ask... I have a degree in Computer Science (and a second one in progress). I've worked as a full-time .NET developer for nearly five years. I generally seem competent at what I do. But I don't know how computers work! Please, bear with me for a second. A quick Google of 'How a Computer Works' will yield lots and lots of results, but I struggled to find one that really answered what I'm looking for. I realize this is a huge, huge question, so really, if you can just give me some keywords or some direction. I know there are components... the power supply, the motherboard, RAM, CPU, etc... and I get the 'general idea' of what they do. But I really don't understand how you go from a line of code like Console.ReadLine() in .NET (or Java or C++) and have it actually do stuff. Sure, I'm vaguely aware of MSIL (in the case of .NET), and that some magic happens with the JIT compiler and it turns into native code (I think). I'm told Java is similar, and C++ cuts out the middle step. I've done some mainframe assembly; it was a few years back now. I remember there were some instructions and some CPU registers, and I wrote code... and then some magic happened... and my program would work (or crash). From what I understand, an 'emulator' would simulate what happens when you call an instruction, and it would update the CPU registers; but what makes those instructions work the way they do? Does this turn into an electronics question and not a 'computer' question? I'm guessing there isn't any practical reason for me to understand this, but I feel like I should be able to. (Yes, this is what happens when you spend a day with a small child. It takes them about 10 minutes and five iterations of asking 'Why?' for you to realize how much you don't know.)

    Read the article

  • Using ConcurrentQueue for thread-safe Performance Bookkeeping.

    - by Strenium
    Just a small tidbit that sprang up today. I had to do bookkeeping and emit diagnostics for the average thread performance in highly threaded code, over a window of the last X calls and no more. Need of the day: a thread-safe, self-managing stats container. .NET 4.0 introduced the new thread-safe 'Collections.Concurrent' objects and I've been using them frequently - one in particular seemed like a good fit for storing each thread's performance data: ConcurrentQueue. But I wanted to store only the most recent X number of calls, and since ConcurrentQueue currently does not support a size constraint, I had to come up with my own generic version, which attempts to restrict usage to numeric types only: unfortunately there is no IArithmetic-like interface which constrains to only numeric types, so the constraints here aren't as elegant as they could be. (Note the use of the Average() method; of course you can use others as well as make your own.)

    FIFO FixedSizedConcurrentQueue

    using System;
    using System.Collections.Concurrent;
    using System.Linq;

    namespace xxxxx.Data.Infrastructure
    {
        [Serializable]
        public class FixedSizedConcurrentQueue<T> where T : struct, IConvertible, IComparable<T>
        {
            private FixedSizedConcurrentQueue() { }

            public FixedSizedConcurrentQueue(ConcurrentQueue<T> queue)
            {
                _queue = queue;
            }

            ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

            public int Size { get { return _queue.Count; } }
            public double Average { get { return _queue.Average(arg => Convert.ToInt32(arg)); } }

            public int Limit { get; set; }

            public void Enqueue(T obj)
            {
                _queue.Enqueue(obj);
                lock (this)
                {
                    T @out;
                    while (_queue.Count > Limit) _queue.TryDequeue(out @out);
                }
            }
        }
    }

    The usage case is straightforward; in this case I'm using a FIFO queue with a maximum size of 200 to store doubles, to which I simply Enqueue() the calculated rates:

    Usage

    var RateQueue = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 }; /* greater size == longer history */

    That's about it. Happy coding!
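
    A minimal sketch of the multi-threaded bookkeeping scenario the class is meant for, assuming the FixedSizedConcurrentQueue type above: several workers push per-call timings concurrently while the rolling average can be read at any time.

    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;
    using System.Threading;
    using System.Threading.Tasks;

    class Demo
    {
        static void Main()
        {
            var rates = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 };

            // Many workers record how long each simulated call took.
            Parallel.For(0, 1000, i =>
            {
                var sw = Stopwatch.StartNew();
                Thread.SpinWait(10000);                       // stand-in for real work
                sw.Stop();
                rates.Enqueue(sw.Elapsed.TotalMilliseconds);
            });

            // Only the most recent 200 samples are kept, so this is a rolling average.
            Console.WriteLine("last {0} calls averaged {1} ms", rates.Size, rates.Average);
        }
    }

    One design note: because Average converts each sample with Convert.ToInt32, fractional milliseconds are rounded away, so storing ticks or microseconds keeps the rolling average meaningful for very fast calls.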

    Read the article

  • cannot boot ubuntu 13.10 with my usb, can I change the kernel on my laptop to run it?

    - by Carlos Dunick
    Currently I am running 12.04 and am looking for an upgrade to 13.10. I first tried a bootable 64-bit USB and it failed, with a message saying "Kernel requires an x86-64 CPU but only detected an i686 CPU. Unable to boot - please use a kernel appropriate for your CPU." I then tried 32-bit and the same message came up. Is this due to my laptop simply being too slow, or can/should I change the kernel somehow? Acer Aspire 5710z, Intel Pentium dual-core processor, 1.73 GHz, 533 MHz FSB, 1 MB L2 cache, 2 GB DDR2. lspci:

    00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
    00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
    00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02)
    00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02)
    00:1c.2 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 3 (rev 02)
    00:1c.3 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02)
    00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02)
    00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02)
    00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02)
    00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02)
    00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
    00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
    00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02)
    00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [AHCI mode] (rev 02)
    00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02)
    04:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit Ethernet PCI Express (rev 02)
    05:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)
    06:00.0 FLASH memory: ENE Technology Inc ENE PCI Memory Stick Card Reader Controller
    06:00.1 SD Host controller: ENE Technology Inc ENE PCI SmartMedia / xD Card Reader Controller
    06:00.2 FLASH memory: ENE Technology Inc Memory Stick Card Reader Controller
    06:00.3 FLASH memory: ENE Technology Inc ENE PCI Secure Digital / MMC Card Reader Controller

    Read the article

  • Information about rendering, batches, the graphical card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if the theory could be related to that. So what I actually want to know is: what happens when and where (CPU/GPU) when you perform specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over every step? When there's talk about bandwidth, are you talking about the graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens - that's the GPU's business - I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, to adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)
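
    For the "what is a batch" part, one concrete way to picture it in XNA terms is that each DrawIndexedPrimitives call (or each SpriteBatch flush) is one draw call, and the CPU-side cost comes from the state changes plus the call itself rather than from the triangle count. The sketch below is only illustrative of that idea and assumes the buffers and effect were created elsewhere:

    using Microsoft.Xna.Framework.Graphics;

    class BatchedDraw
    {
        GraphicsDevice device;
        VertexBuffer vertices;      // all geometry that shares one texture/material, pre-merged
        IndexBuffer indices;
        BasicEffect effect;
        int primitiveCount;

        public void Draw()
        {
            // CPU-side overhead lives here: binding buffers, applying effect state,
            // and issuing the call. This work costs roughly the same whether the call
            // draws 10 triangles or 10,000, which is why batching helps.
            device.SetVertexBuffer(vertices);
            device.Indices = indices;
            effect.CurrentTechnique.Passes[0].Apply();

            // One call, many triangles: this is "a batch".
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0, 0, vertices.VertexCount, 0, primitiveCount);
        }
    }

    On the persistence question: data placed in a VertexBuffer, IndexBuffer or Texture2D stays in GPU-managed memory until you overwrite or dispose it, whereas per-frame SetData calls or DrawUserIndexedPrimitives push data across the CPU-to-GPU bus every frame; that bus is usually the bandwidth worth worrying about before the card's internal bandwidth.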

    Read the article

  • Interface (contract), Generics (universality), and extension methods (ease of use). Is it a right design?

    - by Saeed Neamati
    I'm trying to design a simple conversion framework based on these requirements:
    - All developers should follow a predefined set of rules to convert from the source entity to the target entity.
    - Some overall policies should be applicable in a central place, without interfering with developers' code.
    - Both the creation of converters and the usage of converter classes should be easy.
    To solve these problems in C#, a thought came to my mind. I'm writing it here, though it doesn't compile at all. But let's assume that C# compiled this code. I'll create a generic interface called IConverter:

    public interface IConverter<TSource, TTarget>
        where TSource : class, new()
        where TTarget : class, new()
    {
        TTarget Convert(TSource source);
        List<TTarget> Convert(List<TSource> sourceItems);
    }

    Developers would implement this interface to create converters. For example:

    public class PhoneToCommunicationChannelConverter : IConverter<Phone, CommunicationChannel>
    {
        public CommunicationChannel Convert(Phone phone)
        {
            // conversion logic
        }

        public List<CommunicationChannel> Convert(List<Phone> phones)
        {
            // conversion logic
        }
    }

    And to make the usage of this conversion class easier, imagine that we could add the static and this keywords to its methods to turn them into extension methods, and use them this way:

    List<Phone> phones = GetPhones();
    List<CommunicationChannel> channels = phones.Convert();

    However, this doesn't even compile. With those requirements, I can think of some other designs, but each of them lacks something: either the implementation becomes more difficult or chaotic and out of control, or the usage becomes truly hard. Is this design right at all? What alternatives might I have to achieve those requirements?
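
    As a point of comparison, here is a hedged sketch of how close ordinary C# can get to that call site with a free-standing extension method; the ConverterRegistry helper is invented for the example and would be the "central place" where overall policies could be applied:

    using System.Collections.Generic;
    using System.Linq;

    public static class ConverterExtensions
    {
        // Explicit converter: List<CommunicationChannel> channels = phones.Convert(converter);
        public static List<TTarget> Convert<TSource, TTarget>(
            this IEnumerable<TSource> items, IConverter<TSource, TTarget> converter)
            where TSource : class, new()
            where TTarget : class, new()
        {
            return items.Select(x => converter.Convert(x)).ToList();
        }

        // Registry-based: List<CommunicationChannel> channels = phones.Convert<Phone, CommunicationChannel>();
        public static List<TTarget> Convert<TSource, TTarget>(this IEnumerable<TSource> items)
            where TSource : class, new()
            where TTarget : class, new()
        {
            IConverter<TSource, TTarget> converter = ConverterRegistry.Resolve<TSource, TTarget>();
            return items.Select(x => converter.Convert(x)).ToList();
        }
    }

    The reason the original phones.Convert() form cannot compile as written is that the compiler has nothing from which to infer TTarget, so either the target type or a converter instance has to appear at the call site; extension methods also cannot be declared inside an interface, which is why the static/this idea needs a separate static class like the one above.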

    Read the article

  • Series On Embedded Development (Part 1)

    - by user12612705
    This is the first in a series of entries on developing applications for the embedded environment. Most of this information is relevant to any type of embedded development (and even to desktop and server development too), not just Java. This information is based on a talk Hinkmond Wong and I gave at JavaOne 2012 entitled Reducing Dynamic Memory in Java Embedded Applications. One thing to remember when developing embedded applications is that memory matters. Yes, memory matters in desktop and server environments as well, but there's just plain less of it in embedded devices. So I'm going to be talking about saving this precious resource as well as another precious resource, CPU cycles... and a bit about power too. CPU matters too, and again, in embedded devices, there's just plain less of it. What you'll find, no surprise, is that there's a trade-off between performance and memory: to get better performance, you need to use more memory, and to save more memory, you need to use more CPU cycles. I'll be discussing three memory reduction categories:
    - Optionality, both build-time and runtime. Optionality is about providing options so you can get rid of the stuff you don't need and include the stuff you do need.
    - Tunability, which is about providing options so you can tune your application by trading performance for size, and vice versa.
    - Efficiency, which is about balancing size savings with performance.

    Read the article
