Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.


  • Is there a difference between multi-tasking and time-sharing?

    - by Dummy Derp
    Just going over my school notes: my teacher identifies a multi-tasking OS and a time-sharing OS as two different things, but I really don't see a difference between the two. MULTI-TASKING: you load a number of programs into memory and execute them. You switch to another program if the time quantum allocated to the current program expires, OR if it goes on to do I/O and leaves the CPU, OR if it finishes execution. TIME-SHARING: the same, again. The same applies in the case of serial processing versus batch processing. Although they are the same, I guess the only difference would be the way in which control information is passed to the CPU. Maybe, and again MAYBE, in serial processing you need to provide the punch cards with all the processes, while in batch processing the entire batch uses the same set of control information, like all the print jobs having the same control information.
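
    For what it's worth, both definitions reduce to the same preemptive round-robin dispatch rule stated above: a task leaves the CPU when its quantum expires, when it blocks on I/O, or when it finishes. Below is a toy C simulation of that loop (illustrative only; the task names, times, and fixed quantum are made up, and blocking on I/O would simply be another way for a task to give up its slice):

      /*
       * Toy round-robin dispatch loop illustrating the rule from the notes:
       * a task leaves the CPU when its quantum expires or when it finishes.
       * Task names and run times are made up for the example.
       */
      #include <stdio.h>

      #define QUANTUM 2  /* time units per slice */

      struct task { const char *name; int remaining; };

      int main(void)
      {
          struct task tasks[] = { {"editor", 5}, {"compiler", 3}, {"printer", 4} };
          int n = 3, alive = 3;

          while (alive > 0) {
              for (int i = 0; i < n; i++) {
                  if (tasks[i].remaining <= 0)
                      continue;
                  int slice = tasks[i].remaining < QUANTUM
                            ? tasks[i].remaining : QUANTUM;
                  tasks[i].remaining -= slice;
                  printf("run %-8s for %d unit(s), %d left\n",
                         tasks[i].name, slice, tasks[i].remaining);
                  if (tasks[i].remaining == 0)
                      alive--;  /* task finished: it leaves the CPU for good */
              }
          }
          return 0;
      }

    In textbook usage the scheduling mechanism is identical; "time-sharing" just emphasizes slicing the CPU among interactive users, while "multi-tasking" emphasizes slicing it among loaded programs.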

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence. This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business intelligence to help you meet your big data business intelligence challenges. Register now!

    Sessions:

    Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships (Tom Plunkett, Lead Author of the Oracle Big Data Handbook). What does it take to build an award-winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins]

    Getting Value from Big Data Variety (Richard Tomlinson, Director, Product Management, Oracle). Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured, and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins]

    How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture (Oracle ACE Director Kevin McGinley). More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer, and the benefits and use cases for doing so. [30 mins]

    Oracle Data Integrator 12c As Your Big Data Data Integration Hub (Oracle ACE Director Mark Rittman). Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform, and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL, and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir, and techniques and limitations when performing ETL on big data sources. [90 mins]

    Register now!

    Read the article

  • Please Help to bring back power to my machine

    - by Acess Denied
    I have a Samsung N150 Plus netbook that I have been using for a while now. I left it on and plugged into a wall outlet and went to bed. I dual boot Ubuntu and Windows 7. I tried to update Windows 7 to SP1 and dozed off. I woke up and saw the machine had been booted into Ubuntu and logged in as guest, which I take to mean one of my roommates tried to use the machine, though they have all denied it. I tried to reboot into Windows, and now there appears to be no CPU, hard disk, or CPU fan activity. Only one LED comes on when I plug it in: the LED that indicates the machine is powered on, and it lights steadily. I really can't afford to buy a new machine now, and I need this one to complete my final project in my last year of school. Help, please.

    Read the article

  • JavaOne Session Report - Java ME SDK 3.2

    - by Janice J. Heiss
    Oracle Product Manager for Java ME SDK, Sungmoon Cho, presented a session, "Developing Java Mobile and Embedded Applications with Java ME SDK 3.2," wherein he covered the basic new features of the Java ME Platform SDK 3.2, a state-of-the-art toolbox for developing mobile and embedded applications. The session began with a summary of the four main components of the Java ME SDK:

      A device emulator allows developers to quickly run and test applications before commercialization. It supports CLDC/MIDP, CLDC/IMP.NG, and CDC/AGUI.
      A development environment assists with writing, running, debugging, and deploying, and enables on-device debugging.
      Samples provide developers with useful code and frameworks.
      IDE plugins (NetBeans and Eclipse) equip developers with a CPU Profiler, Memory Monitor, Network Monitor, and Device Selector. This means that manual integration is no longer necessary.

    Cho then talked about the Java ME SDK's on-device tooling architecture:

      Java ME SDK provides an architecture ideal for on-device debugging.
      Device Manager plays the central role by managing different devices, whether it is the emulator, a device that Oracle provides or recommends, or a third-party device, as long as the device has a Java runtime that supports the designated protocol.
      The Emulator provides accurate emulation, since it uses the same code base used in Oracle's Java ME runtime.
      The Universal Emulator Interface (UEI) makes it easy for IDEs to detect the platform.

    He then focused on the Java ME SDK release highlights, which include:

      Implementation and support for the new Oracle® Java Wireless Client 3.2 runtime and the Oracle® Java ME Embedded runtime. A full emulation for the runtime is provided.
      Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG), a new profile for embedded devices.
      A new Custom Device Skin Creator.
      An Eclipse plugin for CLDC/MIDP.
      Profiling, network monitoring, and memory monitoring are now integrated with the NetBeans profiling tools.
      Java ME SDK Update Center.

    Cho summarized the main features:

      IDE Integration (NetBeans and Eclipse): enables developers to write, run, profile, and debug their applications in their favorite IDE.
      CPU Profiler: enables developers to more quickly detect the hot spots where CPU time is being used. They can double-click a method to jump directly into the source code.
      Memory Monitor: developers can monitor objects and memory usage in real time.
      Debugger on the Emulator and Device: developers can run their applications step by step and inspect the variables to pinpoint the problem. The debugging can take place either on the emulator or the device.
      Embedded Application Development: IMP-NG, Device Access, Logging, and AMS API support are now available.
      On-Device Tooling: connect your device to your computer, and run and debug the application right on your device.
      Custom Device Skin Creator: define your own device and test in an environment that is closest to your target device.

    The informative session concluded with a demo that showed more concretely how to apply the new features in Java ME SDK 3.2.

    Read the article

  • Getting software development jobs overseas [on hold]

    - by Mario Dennis
    I live in Jamaica and I am currently pursuing a BSc. in Computer Information Science. I have worked on a few projects and have learned Struts 2, the Play Framework, Spring, Mockito, JUnit, Backbone.js, etc. in my spare time. I have also learned about SOLID and DRY software development, as well as architecting software systems using Service Oriented Architecture and N-tier Architecture. What I want to know is: given all of this, can I get a job overseas before completing a degree, how difficult will it be, and what is the best way to go about doing it?

    Read the article

  • Get system info from C program?

    - by Hamid
    I'm writing a little program in C that I want to use to output some system stats to my HD44780 16x2 character display. The system I'll be working with is a Debian ARM system and, although irrelevant, the display is on the GPIO header (the system is a Raspberry Pi). As an initial (somewhat unambitious) attempt, I'd like to start with something simple like RAM and CPU usage (I'm new to C). I understand that if I make external command calls I need to fork() and execve() (or some equivalent that will let me return the results); what I would like to know is how I go about getting the information I want in a nice clean format that I can use. Surely I will not have to call, for example, free -h and then use awk or similar to chop out the piece I want? There must be a cleaner way? The question should be seen as more of a generic one: what is best practice for getting info about the system in C (RAM/CPU usage are just an initial example)?
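
    For what it's worth, on Linux there is a cleaner way: read the kernel's own interfaces rather than forking external tools. A minimal sketch of one common approach (not the only one) uses sysinfo(2) for RAM and two samples of the aggregate "cpu" line in /proc/stat for CPU usage:

      /*
       * Minimal sketch: RAM via sysinfo(2), CPU usage derived from two
       * samples of the aggregate "cpu" jiffy counters in /proc/stat.
       */
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/sysinfo.h>

      /* Read aggregate jiffy counters from /proc/stat. */
      static int read_cpu(unsigned long long *idle, unsigned long long *total)
      {
          unsigned long long user, nice, sys, idl, iowait, irq, softirq;
          FILE *f = fopen("/proc/stat", "r");
          if (!f) return -1;
          if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
                     &user, &nice, &sys, &idl, &iowait, &irq, &softirq) != 7) {
              fclose(f);
              return -1;
          }
          fclose(f);
          *idle  = idl + iowait;
          *total = user + nice + sys + idl + iowait + irq + softirq;
          return 0;
      }

      int main(void)
      {
          struct sysinfo si;
          unsigned long long idle0, total0, idle1, total1;

          if (sysinfo(&si) == 0)
              printf("RAM: %lu/%lu MB free\n",
                     si.freeram * si.mem_unit >> 20,
                     si.totalram * si.mem_unit >> 20);

          /* CPU usage = fraction of non-idle jiffies between two samples. */
          if (read_cpu(&idle0, &total0) == 0) {
              sleep(1);
              if (read_cpu(&idle1, &total1) == 0 && total1 > total0)
                  printf("CPU: %.1f%%\n",
                         100.0 * (1.0 - (double)(idle1 - idle0)
                                      / (double)(total1 - total0)));
          }
          return 0;
      }

    free and top read the same /proc interfaces under the hood, so this yields the same numbers without the fork/exec and text-scraping round trip.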

    Read the article

  • How do you deal with the details when reading code?

    - by upton
    After reading some projects, I find that it is not the architecture of the software that is really hard to understand. If the project is clearly designed and implemented, it is not hard to figure out the architecture immediately; if it is hard and unlike anything I have seen before, some days later I can usually find a pattern similar to one I have read about in the same domain. The real difficulty is that the concepts and mechanisms defined by the author are hard to guess, and these concepts may be spread across the whole project, which makes them hard to grasp. The situation is normal and universal, and in a company you can ask your colleagues questions. However, it gets worse if nobody around you knows these details. How do you handle the details that block your reading?

    Read the article

  • PC noisy due to high load - why?

    - by Jinx
    I just installed Ubuntu on my PC (Dell Inspiron I560SR-358, with an E5700 3GHz CPU, 4GB memory, and NVIDIA GeForce G310). The PC has become noisy; before, it was quiet with Windows 7 on it. How come? How do I set it to be quiet again in Ubuntu 10.04? One of the two CPU usage graphs is always at 100%; I think that is the reason. Edit: everything was OK after I restarted the computer, but the fan is still running, which makes it noisy; if I switch to Windows 7, it becomes quiet again.

    Read the article

  • What actions does Ubuntu trigger when battery is low?

    - by blueyed
    When the battery is low, the screen gets dimmed after only a few seconds. This appears to be some special power-saving mode, and might be related to the time in org.gnome.settings-daemon.plugins.power.time-low (1200 seconds (20 minutes) is the default). While this seems to get triggered by gnome-settings-daemon, I wonder what else Ubuntu does when this happens (e.g. via DBus listeners), or which other event listeners look for a "low battery" state. It seems like something in this regard causes Ubuntu / X / the system to behave more sluggishly afterwards (when the laptop is on AC again), and I would like to look into what might be causing this. I could not find anything related via dconf-editor, e.g. in org.gnome.settings-daemon.plugins.power. It appears to get set up via idle_configure in plugins/power/gsd-power-manager.c, but it is probably more related to something that listens on the DBus interface, which gets notified via e.g.:

      if (!g_dbus_connection_emit_signal (manager->priv->connection,
                                          NULL,
                                          GSD_POWER_DBUS_PATH,
                                          "org.freedesktop.DBus.Properties",
                                          "PropertiesChanged",
                                          props_changed, &error))

    I could imagine that some "power saving" property gets set, but not unset when AC is available again and/or the battery is no longer low. I have looked at the CPU governor setting (/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor), but it was ondemand. I am using gnome-settings-daemon with awesomeWM on Ubuntu 14.04 (gnome-settings-daemon=3.8.6.1-0ubuntu11.1). I've also compared gsd's plugins/power/gsd-power-manager.c with the one from Debian's gnome-settings-daemon-3.12.1, but could not find anything obvious that might have been fixed/changed in this regard.

    I have managed to trigger gnome-power-manager's gnome-settings plugin (which dims the screen etc.) by patching upower and using it after killing the system's upower daemon (note that it is probably only energy that gpm uses to calculate the percentage by itself). It does not make the system become sluggish. OTOH I have not heard the speaker's beeping, which might come from the BIOS, which might be involved here too, or from other programs using the kernel's interface in /sys/class/power_supply/BAT0/.

      --- src/linux/up-device-supply.c.orig  2014-06-07 16:48:32.735920661 +0200
      +++ src/linux/up-device-supply.c       2014-06-07 16:48:39.391920525 +0200
      @@ -821,6 +821,9 @@
                       supply->priv->energy_old_first = 0;
               }

      +        percentage = 3.1f;
      +        time_to_empty = 3*60;
      +        energy = 5;
               g_object_set (device,
                             "energy", energy,
                             "energy-full", energy_full,
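
    One way to investigate the question above, sketched as an assumption rather than a known answer: subscribe to every PropertiesChanged signal UPower emits and watch which properties flip (and whether they flip back) around the low-battery event. A minimal GDBus watcher might look like this:

      /*
       * Minimal GDBus sketch (an assumption about where to listen, not a
       * definitive diagnosis): print all PropertiesChanged signals from
       * UPower, to see which properties change when the "low battery"
       * state is entered and whether they revert once back on AC.
       *
       * Build: gcc watch-upower.c $(pkg-config --cflags --libs gio-2.0)
       */
      #include <gio/gio.h>

      static void on_props_changed(GDBusConnection *conn,
                                   const gchar *sender, const gchar *path,
                                   const gchar *iface, const gchar *signal,
                                   GVariant *params, gpointer user_data)
      {
          gchar *text = g_variant_print(params, TRUE);
          g_print("%s %s: %s\n", path, signal, text);
          g_free(text);
      }

      int main(void)
      {
          GDBusConnection *conn = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, NULL);
          if (!conn)
              return 1;

          /* NULL object path: match PropertiesChanged from any UPower object. */
          g_dbus_connection_signal_subscribe(conn,
                                             "org.freedesktop.UPower",
                                             "org.freedesktop.DBus.Properties",
                                             "PropertiesChanged",
                                             NULL, NULL,
                                             G_DBUS_SIGNAL_FLAGS_NONE,
                                             on_props_changed, NULL, NULL);

          g_main_loop_run(g_main_loop_new(NULL, FALSE));
          return 0;
      }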

    Read the article

  • How to render Minecraft on the GPU?

    - by l0b0
    Hardware:
      Intel i7
      AMD Radeon HD 6970
      SSD with plenty of space
      6 GB RAM
    Software:
      OpenJDK 6, 7, and Oracle Java 7 (reproducible with all three)
      AMD Catalyst 12.8 and the open source driver (reproducible with both)
      Ubuntu 12.04 x86_64 and older
      Minecraft 1.3.2 vanilla and older
    On this setup I am getting rubbish frame rates after a short while of playing, dropping from about 45-55 to 15 in a couple of minutes. CPU use is 40-45% even when rendering the opening screen at 1920x1280, and gameRenderer uses about 90% CPU when playing. Rather than trying to eke a few more FPS out of an obviously broken rendering pipeline, I really hope to find a solution that makes the GPU render Minecraft.
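
    A first sanity check (a suggestion, not part of the original post): confirm that GL calls actually reach the Radeon rather than a software rasterizer, since a software renderer string would explain CPU-bound frame rates like these:

      # If this reports llvmpipe or "Software Rasterizer" instead of the
      # Radeon, Java/Minecraft is not reaching the GPU at all.
      glxinfo | grep -i "renderer string"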

    Read the article

  • Examples of Android Joystick Controls? [on hold]

    - by KRB
    I can't seem to find any well-executed code examples for Android joystick controls. Whatever it may be (algorithms, pseudocode, actual code examples, strategies, or anything else to assist with the design and implementation of Android joystick controls), I can't seem to find anything decent on the net. What are some well-executed examples? More specifically:
      Pseudocode
      Current examples
      Idea/design
      Functionality description
      Controller hints
      Related directly to Android architecture
    What kind of classes will I have making this? Will there be only one? How would this be implemented in the game architecture? All things I am thinking about. Cheers! UPDATE: I've found this on the subject (Joystick Example1), though I am still looking for different examples/resources. Answered my own question with a link to the code of the above video. It's a fantastic start to Android joystick controls.

    Read the article

  • High temperature on my laptop with Radeon Mobility HD4670

    - by Lorthirk
    Like almost everyone here these days, I guess, I downloaded Quantal Quetzal to give it a try. However, I noticed that my laptop runs fairly hot, with the cooling fans almost always on, even sitting at the desktop doing nothing. I downloaded XSensor to read the temperature sensors, and I saw that while the CPU stays at about 65°C, which is quite normal I guess, the GPU sits at 75°C. In comparison, my current Windows 7 installation, which dual-boots with Quantal, stays at 59°C CPU and 65°C GPU. So I went reading and learned that AMD dropped support for my video card from the fglrx package, and that fglrx-legacy won't support Xorg 1.13, so I'm basically stuck with the OSS drivers. So I was wondering if there's anything I can try, and whether it's possible that the OSS drivers are the cause of the high temperature?

    Read the article

  • VHDL gate basics

    - by balina
    Hello. I'm learning VHDL and I've come to a halt. I'd like to create a simple gate out of smaller gates (a NAND gate here). Here's the code:

      library IEEE;
      use IEEE.STD_LOGIC_1164.all;

      entity ANDGATE2 is
        port(
          x, y : in  STD_LOGIC;
          z    : out STD_LOGIC
        );
      end ANDGATE2;

      architecture ANDGATE2 of ANDGATE2 is
      begin
        z <= x AND y;
      end ANDGATE2;

      library IEEE;
      use IEEE.STD_LOGIC_1164.all;

      entity NOTGATE1 is
        port(
          x : in  STD_LOGIC;
          z : out STD_LOGIC
        );
      end NOTGATE1;

      architecture NOTGATE1 of NOTGATE1 is
      begin
        z <= NOT x;
      end NOTGATE1;

      library ieee;
      use ieee.std_logic_1164.all;

      entity NANDGATE2 is
        port(
          x : in  STD_LOGIC;
          y : in  STD_LOGIC;
          z : out STD_LOGIC
        );
      end NANDGATE2;

      architecture NANDGATE2 of NANDGATE2 is
        signal c, d : std_logic;

        component NOTGATE1
          port(
            n_in  : in  STD_LOGIC;
            n_out : out STD_LOGIC
          );
        end component;

        component ANDGATE2
          port(
            a_in1, a_in2 : in  STD_LOGIC;
            a_out        : out STD_LOGIC
          );
        end component;

      begin
        N0: ANDGATE2 port map(x, y, c);
        N1: NOTGATE1 port map(c, d);
        z <= d;
      end NANDGATE2;

    Here's the code from some tutorial I've been using as a template; it compiles with no problems.

      library ieee;
      use ieee.std_logic_1164.all;

      -- definition of a full adder
      entity FULLADDER is
        port (
          a, b, c    : in  std_logic;
          sum, carry : out std_logic
        );
      end FULLADDER;

      architecture fulladder_behav of FULLADDER is
      begin
        sum   <= (a xor b) xor c;
        carry <= (a and b) or (c and (a xor b));
      end fulladder_behav;

      -- 4-bit adder
      library ieee;
      use ieee.std_logic_1164.all;

      entity FOURBITADD is
        port (
          a, b    : in  std_logic_vector(3 downto 0);
          Cin     : in  std_logic;
          sum     : out std_logic_vector(3 downto 0);
          Cout, V : out std_logic
        );
      end FOURBITADD;

      architecture fouradder_structure of FOURBITADD is
        signal c : std_logic_vector(4 downto 0);

        component FULLADDER
          port (
            a, b, c    : in  std_logic;
            sum, carry : out std_logic
          );
        end component;

      begin
        FA0: FULLADDER port map (a(0), b(0), Cin,  sum(0), c(1));
        FA1: FULLADDER port map (a(1), b(1), C(1), sum(1), c(2));
        FA2: FULLADDER port map (a(2), b(2), C(2), sum(2), c(3));
        FA3: FULLADDER port map (a(3), b(3), C(3), sum(3), c(4));
        V    <= c(3) xor c(4);
        Cout <= c(4);
      end fouradder_structure;

    My code compiles with no errors, but with two warnings:

      # Warning: ELAB1_0026: p2.vhd : (85, 0): There is no default binding for component "andgate2". (Port "a_in1" is not on the entity).
      # Warning: ELAB1_0026: p2.vhd : (87, 0): There is no default binding for component "notgate1". (Port "n_in" is not on the entity).

    What gives?

    Read the article

  • VMWare Player pauses often

    - by pascal
    I'm using 64-bit Windows 8 inside VMware Player, with 2 virtual processor cores; the virtual hard disk resides on a fast local disk and is not preallocated. The host CPU is an Intel i7 3770, which should be capable of hardware virtualization, but I don't know whether VMware uses it. NAT networking; sound card connected, USB connected, accelerated 3D graphics (NVIDIA 313.30 on the host). My problem is that the VM often pauses for a few seconds and then speeds up for a few seconds to reach real time again. Time in the VM actually moves faster after the pause; for example, all animations using timers speed up. When running, the vmware-vmx process shows ~150% CPU usage in top, but 0% when pausing (and D state, i.e. waiting for IO). iotop shows normal disk writes from vmware-vmx threads, but during pauses the flush kernel thread uses 99%. Are there some options to try so that VMware doesn't wait for IO? I've tried a few things available from the GUI but the issue never went away…
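
    One avenue worth trying (an assumption based on the D-state and flush-thread symptoms, not a confirmed fix): VMware Player normally backs guest RAM with a .vmem file in the VM directory, and heavy host writeback to that file can stall the whole VM. The following .vmx settings are widely suggested for exactly this symptom; treat them as experimental:

      # Added to the VM's .vmx file while the VM is powered off.
      # Back guest RAM with anonymous memory instead of a .vmem file:
      mainMem.useNamedFile = "FALSE"
      # Avoid ballooning/page trimming that triggers bursts of host IO:
      MemTrimRate = "0"
      sched.mem.pshare.enable = "FALSE"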

    Read the article

  • Make public webcam. Which protocol, which codec. (Using VLC)

    - by gsedej
    Hi! I want to use my old (1GHz) PC as a webcam video stream server (like those road cameras you can see). I thought of using VLC and already tried the HTTP output, but it was not really good: too CPU hungry, too big a stream (kBps), not stable... I've been reading VLC how-tos but there is still a question. Which output should I use: HTTP, RTSP, UDP? I want to stream to more than one computer at the same time (multicast). Which codec would be good? The PC is not so fast, so it shouldn't be a too CPU-hungry codec: MPEG-2, MPEG-4, Xvid? How much video buffer should I use (vb=?)? What about setting the IP and ports? So I need some help with ideas, but if someone can make a VLC command line it's even better :) Oh, the computer has a direct internet connection and its own IP.
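
    As a concrete starting point, here is a hedged example of the classic VLC streaming chain (the capture device path, bitrate, frame rate, and port are assumptions to adjust; the same --sout syntax lets you swap the std output for an rtp or udp target when you move to multicast):

      # MPEG-4 video at ~512 kbps from a V4L2 webcam, muxed as MPEG-TS
      # and served over HTTP on port 8080 (device path assumed):
      cvlc v4l2:///dev/video0 \
        --sout '#transcode{vcodec=mp4v,vb=512,fps=15}:std{access=http,mux=ts,dst=:8080}'

    On a 1GHz machine, MPEG-4 (mp4v) at a modest resolution and frame rate is about the heaviest transcode you can expect to sustain; for true multicast, a UDP/RTP output to a 239.x.x.x address avoids per-client work, since the one stream is shared by all receivers.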

    Read the article

  • Consulting Expertise

    - by Oracle OpenWorld Blog Team
    Consult with the Experts Onsite at Oracle OpenWorld
    by Karen Shamban

    Learn from Oracle Consulting experts how to maximize the value of your Oracle investments by attending one or more Oracle Consulting sessions. Topics include cloud architecture and implementations, Engineered Systems best practices, Oracle Fusion Applications migrations, and more. Or, stop by the Oracle Consulting Center or the Demo Stations in the Exhibition Halls to ask specific questions and get additional information. Are you an IT executive or enterprise architect? Register for the information-packed Enterprise Architecture Summit on Wednesday, October 12. To see the full range of Oracle Consulting activities at Oracle OpenWorld, click here.

    Read the article

  • Macbook 8.1 overheating

    - by timse201
    I have a Macbook 8.1 with Ubuntu 12.04 installed, but the CPU is getting very hot. On Mac OS the CPU runs at 50-60°C, but on Ubuntu the machine gets very hot: about 60°C, with the fan at a minimum of 3000 rpm instead of 2000 rpm on Mac OS, and it gets very loud at 4500 rpm when I'm browsing (without Flash) or doing something else. I set the minimum to 3000 rpm because it is less noisy that way than spinning up from 2000 rpm, but that's not what I expected. What I've done: I installed lm-sensors to see the temperatures and ran sensors-detect; I installed macfanctld, Jupiter, the newest drivers from x-updates, and the i965-va-driver; I installed mesa (with the default version my Sandy Bridge graphics were displayed as unknown); I added GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force drm.vblankoffdelay=1 pcie_aspm=force i915.semaphores=1 i915.i915_enable_rc6=1 i915.i915_enable_fbc=1"; and I added rfkill block bluetooth to /etc/rc.local to switch off Bluetooth by default on boot. My Mac is not as noisy as before, but it is still noisy and sometimes very hot. I hope you can help me.

    Read the article

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: Tips to extend battery life for laptops and notebooks (24 answers)

    How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows and installed Ubuntu 12.04; the initial battery life was only 2 hours, and with the tweaks described below it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z:

      i5-3337U processor
      32GB MSATA, 500GB HDD (5400rpm)
      AMD Radeon HD7570M graphics card

    I have put ext4 partitions on both the SSD and the HDD, and have mounted / on the SSD and /home on the HDD. I also put a 24GB Linux swap partition at the start of the HDD, though I figure this won't be used all that much (the laptop has 8GB of RAM). After googling around and reading Ask Ubuntu and other sites extensively, I have taken the following steps, which have improved the battery life by about 30 minutes (the exact improvement is not clear, but battery life is still nowhere near 4-5 hours):

      Installed Jupiter (and set Performance to "Power Saving")
      Installed laptop-mode-tools; cat /proc/sys/vm/laptop_mode now outputs 5 (previously it output 0), but it's not clear that this will help (AskUbuntu question)
      Turned down the brightness of my screen from full to 1/3

    Other things I have heard about but have not tried for fear of frying the laptop or my Linux install:

      Add "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg
      Enable ALPM (but it may already be enabled in 12.04?)
      Enable i915 framebuffer compression
      Use a proprietary driver for the graphics card?
      Turn off the graphics card? (What would happen if I relied on the internal Intel bridge?)
      Use TLP?
      Spin down the HDD more aggressively (howto, but I think laptop-mode-tools does this already)

    The only other thing I've noticed is that the plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69 celsius, and the System Monitor shows CPU load at 7%, so I don't think it's the CPU. Maybe it's the graphics card? Also, I've set up MongoDB and LAMP on the machine as well. When I run powertop, MongoDB is high in the list, but I'm not sure if that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time.

    Edit - additional info as requested:

      $ lspci -nnk | grep -iEA3 "(graphics|vga)"
      00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
              Subsystem: Dell Device [1028:057f]
              Kernel driver in use: i915
              Kernel modules: i915
      --
      02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841]
              Subsystem: Dell Device [1028:057f]
              Kernel driver in use: radeon
              Kernel modules: radeon

    Read the article

  • How to debug slow session start of Gnome 3?

    - by user65521
    After upgrading from 11.10 to 12.04, the login process of Gnome 3 is extremely slow: it takes on the order of 60 seconds, when it was on the order of a few seconds before the upgrade (the hard disk is an SSD!). Running "top" in a VT shows gnome-shell producing about 90% CPU load while dbus-daemon takes roughly 10%. The moment when the CPU load of gnome-shell drops to normal levels (around 2-3%) corresponds to the time the login process terminates and the desktop is displayed. Deactivating the four gnome-shell extensions I have installed (Alternative Status Menu, Quit Button, Remove Accessibility, system-monitor) has no effect on session startup time. Logging in to Gnome Classic does not show the slow session start. The system logs do not show anything suspicious. So, what is the best way to identify the underlying problem?

    Read the article

  • Embarcadero launches C++Builder XE3; the new multi-OS C++ development platform gains a 64-bit compiler

    Embarcadero launches C++Builder XE3. The new multi-OS C++ development platform gains a 64-bit compiler. Embarcadero has just announced the launch of C++Builder XE3, its new multi-OS, multi-device C++ development platform. C++Builder XE3 is built on a new multi-target native compilation architecture. The new 64-bit compiler architecture also delivers some of the best C++11 standards support. The tool also includes a 64-bit compiler and a VCL update for existing C++Builder customers, enabling a 64-bit upgrade path for existing Windows applications.

    Read the article

  • Logic - Time measurement

    - by user73384
    I need to measure the following for tasks:
      Last execution time and maximum execution time for each task.
      CPU load/time consumed by each task over a defined period, reported to the application at run time.
      Maximum CPU load consumed by each task.
    The tasks have the following characteristics:
      The first task runs in the background; event information is available for entering only.
      The second task is periodic; event information is available for entering and exiting the task.
      The third task is an interrupt that can start at any time; no information is available from this task.
      The fourth task is the highest-priority interrupt and can start at any time; event information is available for entering and exiting the task.
    The solution should use the least possible execution time and memory. A 32-bit incrementing timer is available for time counting. Let's prepare and discuss the logic; it's OK to have limitations...! Questions about the problem statement are welcome.
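
    A minimal sketch of one possible approach for the tasks that expose enter/exit events (the timer_read() hook and the percent-load reporting interface are assumptions, not part of the problem statement); note that unsigned 32-bit subtraction stays correct across a single timer wraparound, so no special overflow handling is needed as long as a task never runs longer than one full timer period:

      #include <stdint.h>

      typedef struct {
          uint32_t enter_ts;    /* timestamp captured at task entry   */
          uint32_t last_exec;   /* duration of the last execution     */
          uint32_t max_exec;    /* worst-case duration seen so far    */
          uint32_t total_exec;  /* accumulated time in current period */
      } task_stats_t;

      /* Assumed hook: returns the free-running 32-bit counter. */
      extern uint32_t timer_read(void);

      void task_enter(task_stats_t *t)
      {
          t->enter_ts = timer_read();
      }

      void task_exit(task_stats_t *t)
      {
          /* Wraparound-safe: modular arithmetic on uint32_t. */
          uint32_t elapsed = timer_read() - t->enter_ts;

          t->last_exec = elapsed;
          if (elapsed > t->max_exec)
              t->max_exec = elapsed;
          t->total_exec += elapsed;
      }

      /* Called by the application at the end of each measurement period:
       * returns the load in percent and resets the accumulator. */
      uint32_t task_load_percent(task_stats_t *t, uint32_t period_ticks)
      {
          uint32_t load =
              (uint32_t)(((uint64_t)t->total_exec * 100u) / period_ticks);
          t->total_exec = 0;
          return load;
      }

    The background task (entry event only) can be measured indirectly: whatever time the instrumented tasks do not account for within a period belongs to it and to the third task, which exposes no events at all and therefore cannot be measured directly with this scheme.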

    Read the article
