Search Results

Search found 4258 results on 171 pages for 'maximum degree of paralle'.


  • How to design an application for scaling?

    - by Muhammad
    I have an application that handles events from hardware connected to the same computer's PCIe slots. The motherboard has a maximum of two PCIe slots, and I have used both of them. To scale the application I either need more PCIe slots in the same computer, or I have to use another computer. So consider that I am using a second computer running the same application, with hardware connected to its PCIe slots. My problem is that I want to design an application on top of this that can access the hardware devices of both computers and process their data; the processed data should then be sent back to the respective PC's hardware. Please refer to the attached diagram for the expansion.
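
    One possible direction (a sketch only, not tied to any particular hardware API): run a small agent on each PC that exposes its local PCIe devices over TCP, and let a coordinator process collect events from both agents, process them, and send results back to the agent that owns the device. The operation names and device handling below are hypothetical placeholders.

        # A minimal per-PC agent sketch: one JSON request per line over TCP.
        # The "read_event" / "write_result" bodies are placeholders for whatever
        # the real PCIe driver API provides on each machine.
        import json
        import socketserver

        class HardwareAgent(socketserver.StreamRequestHandler):
            def handle(self):
                for line in self.rfile:                      # one JSON request per line
                    request = json.loads(line)
                    if request["op"] == "read_event":
                        # placeholder for reading from the local PCIe device
                        reply = {"event": "sample-event", "device": request["device"]}
                    elif request["op"] == "write_result":
                        # placeholder for writing processed data back to the device
                        reply = {"status": "ok"}
                    else:
                        reply = {"error": "unknown op"}
                    self.wfile.write((json.dumps(reply) + "\n").encode())

        if __name__ == "__main__":
            # Run one agent per PC; a central coordinator connects to both agents.
            with socketserver.TCPServer(("0.0.0.0", 5000), HardwareAgent) as server:
                server.serve_forever()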

    Read the article

  • How can I switch memory modules to 1600 MHz?

    - by Salvador
    Some months ago I bought 4 memory modules of 4 GB DDR3 1600 Kingston HyperX. The official Kingston manual says: "This module has been tested to run at DDR3-1600 at a low latency timing of 9-9-9-27 at 1.65V. The SPD is programmed to JEDEC standard latency DDR3-1333 timing of 9-9-9." I cannot find out what the real speed of my memory modules is. Several tools normally report that the real speed is 1333 MHz:
      srs@ubuntu:~$ sudo dmidecode -t memory
      # dmidecode 2.9
      SMBIOS 2.6 present.
      Handle 0x005D, DMI type 16, 15 bytes
      Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 32 GB
        Error Information Handle: 0x005F
        Number Of Devices: 4
      Handle 0x005C, DMI type 17, 28 bytes
      Memory Device
        Array Handle: 0x005D
        Error Information Handle: 0x0060
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 4096 MB
        Form Factor: DIMM
        Set: None
        Locator: ChannelA-DIMM0
        Bank Locator: BANK 0
        Type: <OUT OF SPEC>
        Type Detail: Synchronous
        Speed: 1333 MHz (0.8 ns)
        Manufacturer: Kingston
        Serial Number: 07288F23
        Asset Tag: 9876543210
        Part Number: 9905403-439.A00LF
    How can I switch the memory modules to 1600 MHz?

    Read the article

  • Calculate an AABB for bone animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maximum and minimum on the x, y and z axes, which produces a correct result. The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model keeps the original bounds, which no longer accurately represent it. So my question is: how can I calculate the bounding box using the armature matrices and the rotation/translation matrices for each model? Keep in mind that the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
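
    One common workaround when the skinned vertices only exist on the GPU is to recompute a conservative AABB on the CPU from a handful of reference points, for example the corners of each bone's bind-pose box transformed by the same bone and model matrices the shader uses. A minimal numpy sketch, with hypothetical names (bone_local_boxes etc.) and assuming column-vector 4x4 matrices:

        # A sketch: transform the 8 corners of each bone's bind-pose AABB by the
        # current bone matrix and the model matrix, then take the min/max of all
        # transformed points. The result is conservative but cheap to compute.
        import numpy as np

        def aabb_corners(mins, maxs):
            """The 8 corners of an axis-aligned box, as homogeneous column vectors."""
            xs, ys, zs = zip(mins, maxs)
            return [np.array([x, y, z, 1.0]) for x in xs for y in ys for z in zs]

        def skinned_aabb(bone_local_boxes, bone_matrices, model_matrix):
            """Conservative AABB of the animated model in world space.

            bone_local_boxes: list of (min_xyz, max_xyz) per bone, in bind pose
            bone_matrices:    list of 4x4 armature matrices for the current frame
            model_matrix:     4x4 rotation/translation matrix of the model
            """
            points = []
            for (mins, maxs), bone in zip(bone_local_boxes, bone_matrices):
                for corner in aabb_corners(mins, maxs):
                    points.append(model_matrix @ bone @ corner)
            points = np.array(points)[:, :3]          # drop the homogeneous coordinate
            return points.min(axis=0), points.max(axis=0)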

    Read the article

  • Orange launches its first 4G offer for its business customers at €79 per month excluding VAT

    Orange launches its first 4G offer for its business customers at €79 (excl. VAT) per month. Yesterday, Orange launched its first 4G offers for its business customers in 4 cities: Lyon, Nantes, Lille and Marseille. Its stated objective is to do the same in about ten cities by April 2013. 4G provides network coverage with a theoretical maximum speed of up to 150 Mbit/s, "that is, 10 times faster than 3G+," comments the operator, which points out that "H+, which allows speeds up to 3 times faster than 3G+, already covers 60% of the population." The LTE rollout relied on partnerships with Alcatel-Lucent, E...

    Read the article

  • Why does flushing DNS often fail to work?

    - by Sharen Eayrs
    C:\Windows\system32>ipconfig /flushdns
      Windows IP Configuration
      Successfully flushed the DNS Resolver Cache.
    C:\Windows\system32>ping beautyadmired.com
      Pinging beautyadmired.com [xxx.45.62.2] with 32 bytes of data:
      Reply from xxx.45.62.2: bytes=32 time=253ms TTL=49
      Reply from xxx.45.62.2: bytes=32 time=249ms TTL=49
      Reply from xxx.45.62.2: bytes=32 time=242ms TTL=49
      Reply from xxx.45.62.2: bytes=32 time=258ms TTL=49
      Ping statistics for xxx.45.62.2:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
        Minimum = 242ms, Maximum = 258ms, Average = 250ms
    My site should point to xx.73.42.27. I changed the name server three hours ago, but it still points to xxx.45.62.2. What actually happens after a name server change, and what exactly am I waiting for? I already flushed the DNS, so why does it still point to the wrong IP? Also, most other people, who do not have my DNS cache, still end up at the wrong IP.

    Read the article

  • 12c New Feature - Active Data Guard Far Sync

    - by Jian Zhang
    Overview
    ================
    Active Data Guard Far Sync is a new feature in Oracle 12c (also called a Far Sync standby). A Far Sync instance is deployed close to the primary database: the primary ships redo synchronously to the Far Sync instance, and the Far Sync instance then forwards the redo asynchronously to the remote standby database. A Far Sync instance has no data files; it only needs an init parameter file, a control file and standby redo logs.
    With redo shipping in Maximum Availability mode, a Far Sync instance is deployed near the primary database. The primary ships redo synchronously to the Far Sync instance, so zero data loss protection is achieved; because the Far Sync instance is close to the primary, the impact on primary performance is small. The Far Sync instance then ships the redo asynchronously to the remote standby database.
    With redo shipping in Maximum Performance mode, a Far Sync instance is likewise deployed near the primary database. The primary ships redo to the Far Sync instance, and the Far Sync instance forwards the redo to the standby database(s), offloading the work of serving redo to the standby databases from the primary.
    A Far Sync instance itself does not take part in Data Guard role transitions (switchover/failover); role transitions still happen between the primary and standby databases as before 12c. To keep the same protection after a role transition, a second Far Sync instance is typically deployed near the standby database so that the new primary can ship redo through it.
    For high availability of the Far Sync instance itself, two Far Sync instances can be configured for the same source, one acting as an alternate for the other.
    Far Sync architecture
    ================
    (architecture diagram in the original article)
    Configuring Far Sync
    ================
    1. Set up Data Guard as in 11.2; refer to «Active Database Duplication for A standby database».
    2. Create the Far Sync instance. A Far Sync instance has no data files; it only needs an init parameter file, a control file and standby redo logs. Create the Far Sync control file from the primary:
      SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/controlfs01.ctl';
    3. Configure the primary to ship redo synchronously to the Far Sync instance, for example with the following LOG_ARCHIVE_DEST_2 setting:
      LOG_ARCHIVE_DEST_2='SERVICE=dg12cfs SYNC AFFIRM MAX_FAILURE=1 ALTERNATE=LOG_ARCHIVE_DEST_3 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg12cfs'
    4. Configure the Far Sync instance to forward redo asynchronously to the standby, with the following LOG_ARCHIVE_DEST_2 setting on the Far Sync instance:
      LOG_ARCHIVE_DEST_2='SERVICE=dg12cs ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=dg12cs'
    5. For high availability of the Far Sync instance itself, a second (alternate) Far Sync instance can be configured.
    6. Verify the configuration:
      SQL> select * from V$DATAGUARD_CONFIG;
      DB_UNIQUE_NAME   PARENT_DBUN   DEST_ROLE           CURRENT_SCN   CON_ID
      ---------------- ------------- ------------------- ------------- -------
      dg12cfs          dg12cp        FAR SYNC INSTANCE        682995        0
      dg12cs           dg12cfs       PHYSICAL STANDBY         682995        0
      dg12cp           NONE          PRIMARY DATABASE         683138        0
    For detailed configuration steps, see: Oracle_12c_Active_Data_Guard_Far_Sync_v1.pdf

    Read the article

  • Software requirements specification, please help!

    - by Nicholas Chow
    For a school project, I had to create an SRS for a "fictional" application. However, they did not show us exactly what it entails, and they were very vague with their explanations. The SRS asked of us has to have at least 5 functional requirements, 5 non-functional requirements and 1 constraint. I have tried my best to write one, but I think there are still a lot of mistakes in it. Could you please look at it and give me some feedback on which parts I can improve, or just tell me which parts are plain wrong and how to make them better? (The project has a maximum of 12 pages, so it is a bit long; I will post it below.)
    FR1 Registration of Organizer
    FR1 describes the registration of an Organizer on CrowdFundum.
    FR1.1 The system shall display a registration form on the website.
    FR1.2 The system shall require a Name, Username, Document number of passport/ID card, Address, Zip code, City, Email address, Telephone number, Bank account, and Captcha code on the registration form when a user registers.

    Read the article

  • Should I implement slugs with my already fairly long URLs?

    - by Earlz
    I'm considering implementing slugs in my blog. My blog uses MongoDB. One of the side effects of using MongoDB is that it uses relatively long hex string IDs. Example before: http://lastyearswishes.com/blog/view/5070f025d1f1a5760fdfafac after: http://lastyearswishes.com/blog/view/5070f025d1f1a5760fdfafac/improvements-on-barelymvc Of course, that's a relatively short title; I have some longer ones, but I intend to limit the maximum length of slugs to something reasonable. At what point does a URL become so long that it hurts SEO instead of improving it? In this case, should I leave my URLs alone, or add slugs?
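
    For reference, a minimal slug generator with a length cap (the 60-character limit is just an assumed example, not a known SEO threshold):

        import re

        def slugify(title, max_length=60):
            """Lowercase, replace non-alphanumeric runs with '-', and cap the length."""
            slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
            if len(slug) > max_length:
                slug = slug[:max_length].rsplit("-", 1)[0]   # cut at a word boundary
            return slug

        # Example: appended after the MongoDB ObjectId, as in the URLs above.
        print("/blog/view/5070f025d1f1a5760fdfafac/" + slugify("Improvements on BarelyMVC"))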

    Read the article

  • Examples of interesting implementations of character stats?

    - by Tchalvak
    I've got this BBG going (http://ninjawars.net), and the character stats are currently simplistic. I'm looking to add a few stats to the current two (strength and maximum hitpoints, essentially). I've come up with strength (unchanged), speed, stamina, and some others that are somewhat interesting wildcard stats. However, I'm not satisfied with how boring the effects of some of these stats are, because they're very linear: a better stat means better effects, but the stats don't interact with each other, there's no rock-paper-scissors interaction, and having more is always better all the time. So what I'd really like to see is examples of interesting character stats, or interesting effects of stats. Examples I can think of off hand: Call of Cthulhu's Insanity stat (things get really weird/chaotic if you start losing sanity); White Wolf stats, to a certain extent (the stats themselves have some basic effects, and all skills base their effectiveness off of stats as well). What are some other ways people have used stats that are worth checking out?

    Read the article

  • Process Centric Banking: Loan Origination Solution

    - by Manish Palaparthy
    There is an old proverb that goes, "The difference between theory and practice is greater in practice than in theory." So we keep doing numerous proofs of concept (PoCs) with our own products on various business cases, to analyze them deeply, understand them and explain them to our customers. We then present our learnings as they happened; awareness of each PoC should help readers trust the results coming out of these PoCs. Here I present one such PoC in which we invested a lot of time and effort.
    Process Centric Banking: Loan Origination Solution
    Loan origination is the process by which a borrower applies for a new loan and the lender processes that application. It includes the series of steps taken by the bank from the point the customer shows interest in a loan product all the way to disbursal of funds. The loan origination process is relevant for many kinds of lenders in financial services: banks, credit unions, NBFCs (Non Banking Financial Companies) and so on. For simplicity's sake, I will use "bank" as the lending institution in the rest of this article.
    Loan origination is one of the core processes for banks, as it is the process by which the bank creates the assets from which it earns most of its profits. A well-tuned loan origination process can affect the bank in many positive ways, and banks have always shown great interest in automating it. However, due to constant changes in the customer environment, market dynamics, prevailing economic conditions, cost pressures and the regulatory environment, they run into a lot of challenges. Let me categorize some of these challenges for you.
    Customer environment
    - Multiple channels: customers can use any of the available channels (internet banking, email, fax, branch, phone banking, ATM, broker, mobile, snail mail) to perform all or some of the activities related to their loan.
    - Visibility into the origination process: customers expect immediate updates on the status of loan processing and alert messages.
    - Reduced turnaround time: customers expect loans to be processed with the least turnaround time.
    - Reduced loan processing fees: partly due to market dynamics, customers expect the loan processing fee to be negligible.
    Market dynamics
    - Competitive environment: the competition keeps creating many variants of loan products to attract customers; the bank needs to create similar product variants with better offers to attract new customers or keep existing ones.
    - Ability to migrate loans from one vendor to another: it has become really easy for retail customers to move from one bank to another, given the low loan processing fees and highly attractive offers. How does the bank protect its customer base while actively engaging with potential customers banking with competitor banks?
    - Flexibility to react to market developments: market developments greatly influence loan processing, underwriting, asset valuation and risk mitigation rules. Can the bank modify rules and policies? The idea is not just to react to market developments but to proactively manage new developments.
    Economic conditions
    - Constant change in various rates and their implications on the rates and rules applied when on-boarding a loan: how quickly can the bank apply changes to the rates offered to customers when the central bank changes various rates?
    - Requirements of audit by the central banker: tough economic conditions have demanded much more stringent audit rules and tests. The bank needs to produce ready reports (historic and operational) for audit compliance.
    - Risk mitigation: while risk mitigation has always been a key concern for the bank, this is the area where the bank's underwriters and risk analysts spend the most time when processing a loan application. In order to reduce turnaround time, the bank cannot compromise on its risk mitigation strategies.
    Cost pressures
    - Reduce the cost of processing per application: to deliver a reduced loan processing fee to the customer, the bank needs to keep its cost of processing a loan application low.
    - Meet customer turnaround-time expectations while reducing the queues and the systems used to process the loan application: the loan application could potentially spend a lot of time waiting in a queue for further processing. Different volumes and patterns of applications demand different queuing algorithms; the bank needs real-time visibility into these queues and the flexibility to change queuing algorithms at runtime.
    - Increase the use of electronic communication and reduce branch channel usage: less automation not only increases turnaround time, it also adds cost to reaching out to customers.
    The objective of our PoC was to implement a loan origination solution whose ownership lies with the bank and which effectively meets the challenges listed above. We built a simple storyboard for the solution and then went about implementing it using Oracle BPM Suite and WebCenter Content: Imaging. The web UI was built on ADF technologies, while the integration with core services was implemented using the underlying SOA infrastructure. The BPM process model is quite exhaustive and can meet all the challenges listed above to a reasonable degree.
    A bank intending to implement an end-to-end loan origination solution has multiple options at its disposal. It can:
    - Develop a custom loan origination application from scratch: this gives maximum opportunity to build what you want, but it is inflexible to upgrade and maintain, with a higher TCO in the long term.
    - Buy a packaged application and customize it: customizing a generic loan application can be tedious and prove just as difficult as the above.
    - Build it using many disparate, un-integrated tools: this initially seems easier than developing from scratch, but without integrated tool sets it is not a viable approach either.
    - Build a solution based on a framework: independent services and business process modeling provide a decoupled architecture that is flexible.
    We built this framework end-to-end, with the core process of loan origination and several sub-processes such as analyse and define customer needs, customer credit verification, identity check, legal review, new customer registration and risk assessment.

    Read the article

  • The Calmest IT Guy in the World

    - by Markus Weber
    Unplanned outages and downtime still result not only in major productivity losses, but also in major financial losses. Along the same lines, if zero planned downtime and zero data loss are key to your IT environment and your business requirements, planning for them becomes very important, all while balancing performance, high availability, and cost. Oracle Database high availability technologies will help you achieve these mission-critical goals, and they are reflected in Oracle's best-practices offering, the Maximum Availability Architecture, or MAA. We created three neat, short videos showcasing some typical use cases and highlighting three important components (amongst many more) of MAA:
    - Oracle Real Application Clusters (RAC)
    - Oracle Active Data Guard
    - Oracle Flashback Technologies
    Make sure to watch those videos here, and learn about challenges and solutions around high-availability database environments from a recent Independent Oracle Users Group (IOUG) survey.

    Read the article

  • SSD Tweaks for Ubuntu 12.04

    - by Mustafa Erdinç
    I need to tweak my Dell XPS 13z SSD for maximum performance and life cycle, so I read the solutions explained here, but they are for 11.10 and my fstab is different. For now my fstab looks like this:
      proc /proc proc nodev,noexec,nosuid 0 0
      # / was on /dev/sda1 during installation
      UUID=abf5ce9e-bdb7-4b2f-a7bd-bbd9efa72a98 / ext4 errors=remount-ro 0 1
      # /home was on /dev/sda2 during installation
      UUID=491427b2-7482-4483-b6eb-7c564b991aff /home ext4 defaults 0 2
      # swap was on /dev/sda3 during installation
      #UUID=7551000d-e708-4e0f-9fd2-9f93119f63fb none swap sw 0 0
      /dev/mapper/cryptswap1 none swap sw 0 0
      tmpfs /tmp tmpfs mode=1777
    And my rc.local looks like this:
      echo noop > /sys/block/sda/queue/scheduler
      echo deadline > /sys/block/sda/queue/scheduler
      echo 1 > /sys/block/sda/queue/iosched/fifo_batch
      exit 0
    Do you have any suggestions on what I should do? Regards

    Read the article

  • Need help with dragging a sprite with animation (Cocos2d)

    - by Zishan
    I want to play an animation when someone drags a sprite from its default position to another selected position. If they drag halfway to the selected position, then the animation should play halfway. For example, I have 15 frames of an animation and a projectile arm. The projectile arm can be rotated a maximum of 30°; if someone rotates the arm 2°, the animation sprite should show the 2nd frame, if they rotate it 12°, it should show the 6th frame... and so on. Also, when the arm is released, it should return to its default position and the animation frames should also reverse back to the default first frame. I am new to cocos2d. I know how to make an animation and how to drag a sprite, but I have no idea how to do this. Can anyone please give me an idea or a tutorial on how to do this? It would be very helpful for me. Thank you in advance.
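
    The frame selection described above is essentially a linear mapping from the drag angle to a frame number. A minimal sketch (plain Python, not cocos2d API, with assumed constants and names):

        # 15 frames spread over a 30 degree rotation, i.e. roughly 2 degrees per
        # frame. On release, stepping the angle back toward 0 and reapplying the
        # same mapping on every update replays the frames in reverse.
        MAX_ANGLE = 30.0
        NUM_FRAMES = 15
        DEGREES_PER_FRAME = MAX_ANGLE / NUM_FRAMES   # 2 degrees here

        def frame_for_angle(angle):
            """Clamp the drag angle to [0, MAX_ANGLE] and map it to a 1-based frame number."""
            angle = max(0.0, min(MAX_ANGLE, angle))
            return min(NUM_FRAMES, max(1, round(angle / DEGREES_PER_FRAME)))

        print(frame_for_angle(12.0))   # -> 6, the 6th frame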

    Read the article

  • Brightness issues on MSI GT683

    - by Mitu
    I'm experiencing brightness-related issues on my MSI GT683 (Intel Core i5-2410M, nVidia GeForce GTX 560M, Ubuntu 12.04 LTS x64, nVidia driver version 295.40). To start with, when I open the brightness settings, the slider works incorrectly: the screen is dimmest at about 1/3 of the bar, and moving the slider to the left increases brightness. Next, when pressing the keyboard buttons (Fn+up, Fn+down), the brightness changes incorrectly: sometimes 2 steps up/down, but sometimes it's totally irregular. For example, in order to set the lowest brightness, I have to set the maximum, then hit Fn+down, then Fn+up (after 2 moves it goes from 8 to 2)... What's even stranger, changing the brightness from the keyboard on boot and on the login screen works perfectly. I tried adding ACPI-related switches to GRUB, but then the brightness settings wouldn't work at all. Any help appreciated.

    Read the article

  • Could not apply stored configuration for monitors

    - by Hernantz
    Well, this happened when I upgraded to Natty. Not only can I not change my resolution to higher than 1024x768, the image also appears at the left and uses only 70% of the monitor's width. I tried logging in in Ubuntu classic mode, and I was able to change it, but that trick does not work anymore. (May this be a Compiz problem?) Anyway, here is my /var/log/Xorg.0.log: http://pastebin.com/Ew4wwLab and lspci -nn | grep VGA:
      00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller [8086:27a2] (rev 03)
    I tried using xrandr to manually add a resolution of 1280x1768, but without luck. Here is the xrandr output:
      Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
      LVDS1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
        1024x768 60.0*+
        800x600 60.3 56.2
        640x480 59.9
      VGA1 disconnected (normal left inverted right x axis y axis)
      TV1 disconnected (normal left inverted right x axis y axis)
      1280x1024 (0xc6) 109.0MHz
        h: width 1280 start 1368 end 1496 total 1712 skew 0 clock 63.7KHz
        v: height 1024 start 1027 end 1034 total 1063 clock 59.9Hz

    Read the article

  • Best way to address this Magento issue

    - by robgt
    I am in need of some advice/pointers on how best to attack a problem I am faced with in Magento 1.4.0.1. Here is the scenario: consider a product catalog of many thousands of products with various retail markup percentages. There is a need to sell some of those products to other retailers, and I need a way to easily categorise them into percentage markup groups, so that the other traders can easily purchase items from us without needing to call and confirm pricing. All product prices are added to the Magento database including VAT, so the actual retail value is the starting point. I think this needs to be done by adding an attribute on the products that determines the maximum discount level that can be applied to any given product. Obviously, this would entail a massive amount of work to update every product. Is there a way to achieve what we need that I don't yet know about (within Magento)? Can single attribute values be mass-populated somehow, without affecting other attributes such as name/description/etc.?

    Read the article

  • Slow internet browsing in Ubuntu

    - by Mark
    I have a dual boot setup with Windows and Ubuntu. When I'm using Windows, internet browsing is a lot faster than when I'm using Ubuntu, and I don't know why. It's as if there's just a big latency rather than a lower maximum speed: there's a big delay before anything happens when using Ubuntu, and it happens with all websites, all the time. I've never configured the internet connection because it just worked straight away. I have a broadband connection through a router shared with some other computers; when we set up the router and internet connection, everything was done with Windows. Any ideas on what I could do to fix this? Thanks to anyone who replies.

    Read the article

  • Is a bigger display (monitor) always better for development?

    - by Jitendra Vyas
    Is a bigger display (monitor) always better for development? I'm going to buy a new LCD monitor. I mostly work in Adobe Photoshop, HTML, CSS, jQuery and WordPress. Budget is not a problem, and there are many options for LCD monitor size. My questions are: Is the maximum size always better, or are large monitors not always good? Would it be better to buy two 21.5 inch monitors rather than one 30 inch monitor? Which monitor size would you prefer between 21.5 inch and 30 inch, if budget is not a problem?

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved a throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).
    Performance Landscape
    Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows
      System       Queries/hour   Users*   Avg Response Time (sec)   Median Response Time (sec)
      SPARC T4-4   430,000        7,300    0.85                      0.43
      * Users - the supported number of users with a given think time of 60 seconds
    Configuration Summary and Results
    Hardware configuration:
      SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 1 TB memory
      Data storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
      Redo storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)
    Software configuration:
      Oracle Solaris 11 11/11
      Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option
    Benchmark Description
    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1; the primary difference is the type of queries along with the query mix.
    Key Points and Best Practices
    Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support while achieving the measured response time, assuming some non-zero think time.
    The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.
    See Also
      Quantitative System Performance: Computer System Analysis Using Queueing Network Models - Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik
      Oracle Database 11g – Oracle OLAP (oracle.com, OTN)
      SPARC T4-4 Server (oracle.com, OTN)
      Oracle Solaris (oracle.com, OTN)
      Oracle Database 11g Release 2 (oracle.com, OTN)
    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
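
    As a quick check of the response-time law arithmetic quoted above:

        # Response-time law from the text: N = (rt + tt) * tp
        rt = 0.85                 # average response time, seconds
        tt = 60.0                 # think time, seconds
        tp = 430_000 / 3600       # throughput in queries/second (~119.44)

        N = (rt + tt) * tp
        print(round(N))           # ~7268, reported in the text as roughly 7,300 users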

    Read the article

  • Can't change brightness on Toshiba l755 laptop

    - by albert
    I have a Toshiba L755 laptop, and I installed 12.04 64-bit on it. The default brightness of the laptop is set to the maximum value, and I can't change it. If I go to Configuration → Brightness, the brightness doesn't change, and neither does pressing the function keys. I tried changing some parameters of the grub file, setting GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor". When changing the brightness from the configuration, the value of /sys/class/backlight/toshiba/brightness changes, but the screen remains the same. If I use the function keys, it changes the "actual_brightness" file instead. I don't know what else I can do; any suggestions?

    Read the article

  • How do I simplify terrain with tunnels or overhangs?

    - by KKlouzal
    I'm attempting to store vertex data in a quadtree with C++, such that far-away vertices can be combined to simplify the object and speed up rendering. This works well with a reasonably flat mesh, but what about terrain with overhangs or tunnels? How should I represent such a mesh in a quadtree? After the initial generation, each mesh is roughly 130,000 polygons and about 300 of these meshes are lined up to create the surface of a planetary body. A fully generated planet is upwards of 10,000,000 polygons before applying any culling to the individual meshes. Therefore, this second optimization is vital for the project. The rest of my confusion focuses around my inexperience with vertex data: How do I properly loop through the vertex data to group them into specific quads? How do I conclude from vertex data what a quad's maximum size should be? How many quads should the quadtree include?
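
    For what it's worth, a sketch of the grouping step the question asks about: bucketing vertices into (x, z) cells of an implicit quadtree. With tunnels or overhangs, a single cell ends up holding several disconnected vertical spans, which is the usual argument for moving to a fully 3D structure such as an octree (splitting on y as well). The names below are illustrative only:

        # A sketch: bucket vertices into leaf cells of an implicit quadtree by (x, z).
        # A large y-spread inside one cell hints at an overhang or tunnel that a
        # heightfield-style quadtree cannot represent cleanly.
        from collections import defaultdict

        def bucket_vertices(vertices, cell_size):
            """vertices: iterable of (x, y, z); returns {(cell_x, cell_z): [vertices]}."""
            cells = defaultdict(list)
            for x, y, z in vertices:
                cells[(int(x // cell_size), int(z // cell_size))].append((x, y, z))
            return cells

        def cell_y_extent(cell_vertices):
            """Min/max height in a cell, as a rough overhang/tunnel indicator."""
            ys = [y for _, y, _ in cell_vertices]
            return min(ys), max(ys)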

    Read the article

  • Why doesn't my graphics card software support 1280*1024?

    - by Allwar
    Hi, I have an external 20" monitor which is 1280*1024. In Windows 7 it works fine at that resolution, but in Ubuntu it can't. Example: in Windows I connect it and activate it, done. In Ubuntu I connect it and the only resolutions available are the ones my 12" 1366*768 laptop screen supports. My laptop is an Asus 1201N. If I force it to use 1280*1024, both screens crash and I have to force a reboot. When I force it, I only force the external monitor; the laptop is already at its maximum of 1366*768. I connect it through VGA. ((The graphics card supports 1280*1024 in Windows 7, #Fail))
      alvar@alvars-laptop:~$ disper -l
      display DFP-0: HSD121PHW1
      resolutions: 320x175, 320x200, 360x200, 320x240, 400x300, 416x312, 512x384, 640x350, 576x432, 640x400, 680x384, 720x400, 640x480, 720x450, 640x512, 700x525, 800x512, 840x525, 800x600, 960x540, 832x624, 1024x768, 1366x768
      display CRT-0: CRT-0
      resolutions: 320x240, 400x300, 512x384, 680x384, 640x480, 800x600, 1024x768, 1152x864, 1360x768

    Read the article

  • Algorithm to Solve Most of a Problem

    - by Mike G
    I need an algorithm/design pattern that allows me to satisfy the maximum number of rules. I have a couple of teams, and I need to pair them with a referee and against each other in a round robin. There are rules on who can compete with whom and who can judge whom, so I need to find the configuration that satisfies the most of these. Some rules are more important than others and are "worth more" when evaluating what satisfies the most of them. There probably isn't an algorithm for exactly this, but is there a design pattern that could help me maximize my chances of finding this configuration?
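
    One straightforward way to frame this is weighted constraint optimization: give every rule a weight, score a candidate configuration by the total weight of the rules it satisfies, and search for the best-scoring one (brute force for small inputs, greedy or local-search swaps for larger ones). A small sketch with made-up matches, referees and rules:

        # A sketch: enumerate referee assignments for a fixed list of matches and
        # keep the configuration with the highest total rule weight. The data and
        # rule functions below are illustrative placeholders.
        from itertools import permutations

        matches = [("Team A", "Team B"), ("Team C", "Team D")]
        referees = ["Ref 1", "Ref 2"]

        # Each rule returns True if satisfied; more important rules carry more weight.
        rules = [
            (10, lambda assign: all(ref not in match for match, ref in assign)),  # referee never judges own team
            (3,  lambda assign: assign[0][1] != assign[1][1]),                    # different referee per match
        ]

        def score(assignment):
            return sum(weight for weight, rule in rules if rule(assignment))

        best = max(
            (list(zip(matches, perm)) for perm in permutations(referees, len(matches))),
            key=score,
        )
        print(best, score(best))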

    Read the article

  • How can I approach creating an efficient algorithm for maximizing value with these specific constraints?

    - by sway
    I'm having trouble coming up with an approach that isn't n^2 for this problem. Here's a contrived, simplified version I've come up with: let's say you're a company that needs 4 employees to launch in a new city (a manager, two salespeople, and a customer support rep), and you magically know how much impact every candidate will have and how much salary they require to take the job. Your table of potential employees looks something like this:
      Name            Position      Salary   Impact
      Adam Smith      Manager       60,000   11
      Allison Brown   Salesperson   40,000   9
      Brad Stewart    Manager       55,000   9
      ...etc (thousands of records)
    What algorithmic approach can be taken to find the maximum "impact" while still filling all the positions and remaining under, say, a 200,000 budget? Thanks!
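
    This looks like a multiple-choice knapsack, which suggests a dynamic-programming approach rather than comparing whole teams pairwise. A sketch under assumed simplifications (budget rounded to 1,000s, illustrative candidate rows, and the two salesperson slots handled as a count-limited knapsack so the same person is never hired twice):

        # Each role is reduced to a "best impact within budget b" table, and the
        # tables are then combined by splitting the total budget across roles.
        NEG = float("-inf")
        STEP = 1_000
        BUDGET = 200_000 // STEP

        candidates = [
            ("Adam Smith",    "Manager",     60_000, 11),
            ("Allison Brown", "Salesperson", 40_000,  9),
            ("Brad Stewart",  "Manager",     55_000,  9),
            ("Carla Jones",   "Salesperson", 45_000, 10),   # hypothetical extra rows
            ("Dan Lee",       "Support",     35_000,  7),
        ]

        def best_k(role, k):
            """table[b] = max total impact of k distinct `role` hires within budget b*STEP."""
            dp = [[0] * (BUDGET + 1)] + [[NEG] * (BUDGET + 1) for _ in range(k)]
            for name, r, salary, impact in candidates:
                if r != role:
                    continue
                cost = salary // STEP
                for picked in range(k, 0, -1):              # 0/1 knapsack: each person at most once
                    for b in range(BUDGET, cost - 1, -1):
                        cand = dp[picked - 1][b - cost] + impact
                        if cand > dp[picked][b]:
                            dp[picked][b] = cand
            return dp[k]

        manager = best_k("Manager", 1)
        sales = best_k("Salesperson", 2)
        support = best_k("Support", 1)

        best = NEG
        for bm in range(BUDGET + 1):                        # split the budget across the roles
            for bs in range(BUDGET + 1 - bm):
                best = max(best, manager[bm] + sales[bs] + support[BUDGET - bm - bs])
        print(best)   # maximum achievable impact, or -inf if a role cannot be filled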

    Read the article

  • Application qos involving priority and bandwidth

    - by Steve Peng
    Our manager wants us to do application QoS, which is quite different from the well-known system QoS. We have many services of three types, and they have priorities. The manager wants to suspend low-priority service requests when there is not enough bandwidth for the high-priority services. But if the high-priority service requests decrease, the bandwidth for low-priority services should increase and low-priority service requests should be allowed again. There should be an algorithm involving priority and bandwidth, but I don't know how to design it. Is there any example on the internet? Can somebody give a suggestion? Thanks.
    UPDATE: All these services are within the same process. We are setting the maximum bandwidth for the three types of services via the services' ports using TC (TC is the Linux QoS tool whose name means traffic control).
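
    At the application level this often comes down to admission control on top of whatever TC enforces: measure recent bandwidth per priority class, always admit the high class, and admit lower classes only while the higher classes leave headroom. A sketch with illustrative numbers and names (not a TC configuration):

        # A sketch of priority-aware admission control inside one process. TC still
        # enforces the per-port limits; this only decides whether to accept or defer
        # lower-priority requests based on the headroom left by higher-priority traffic.
        import time
        from collections import deque

        TOTAL_BPS = 10_000_000            # illustrative total budget: 10 Mbit/s
        WINDOW = 5.0                      # seconds of history used to measure usage

        class BandwidthTracker:
            def __init__(self):
                self.samples = {p: deque() for p in ("high", "medium", "low")}

            def record(self, priority, nbytes):
                self.samples[priority].append((time.monotonic(), nbytes))

            def usage_bps(self, priority):
                now = time.monotonic()
                q = self.samples[priority]
                while q and now - q[0][0] > WINDOW:
                    q.popleft()
                return sum(n for _, n in q) * 8 / WINDOW

            def admit(self, priority):
                if priority == "high":
                    return True                              # never block the top class
                higher = {"medium": ["high"], "low": ["high", "medium"]}[priority]
                used = sum(self.usage_bps(p) for p in higher)
                return used + self.usage_bps(priority) < TOTAL_BPS * 0.9   # keep 10% headroom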

    Read the article
