Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Async CTP Refresh for Visual Studio 2010 SP1 Released

    - by Reed
    The Visual Studio team today released an update to the Visual Studio Async CTP which allows it to be used with Visual Studio 2010 SP1. This new CTP includes some very nice additions over the previous one. The main highlights of this release include:

    - Compatibility with Visual Studio 2010 SP1
    - APIs for Windows Phone 7
    - Compatibility with non-English installations
    - Compatibility with Visual Studio Express Edition
    - More efficient async methods due to a change in the API
    - Numerous bug fixes
    - A new EULA which allows distribution in production environments

    Anybody using the Async CTP should consider upgrading to the new version immediately. For details, visit the Visual Studio Asynchronous Programming page on MSDN.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latency and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes: for example, a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next “hop” to the destination, and it has to determine how to prioritize the packet. If it’s a high-priority packet, it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability: if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the “owner” of the packet, etc. It looks up an internal database of “rules” for how to process the packet and handles it accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low-priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

    - Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability, etc.
    - Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data FTP).
    - Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session; the context for the “header” packet needs to be stored in the router in order to make this work.
    - If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward it to the next hop.
    - Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction, so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.

    First and foremost, Berkeley DB (or BDB for short) is very fast: it can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades, for mission-critical applications such as the one described here. You have a choice: spend valuable resources to implement similar functionality, or simply embed BDB in your application and off you go. I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
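    As a rough sketch of the single-transaction lookup-and-forward pattern described above (this illustration is not from the article): the Berkeley DB C API calls below are real, but the database layout, key format, and the forward() helper are hypothetical.

        #include <db.h>       // Berkeley DB C API (usable from C++)
        #include <cstring>
        #include <string>

        // Hypothetical helper: pushes the packet onto the wire (not part of BDB).
        void forward(const std::string& next_hop, const void* pkt, size_t len);

        // Look up the next hop and send the packet inside one transaction,
        // so the routing entry cannot change mid-send.
        void route_packet(DB_ENV* env, DB* routes, const char* dest,
                          const void* pkt, size_t len) {
            DB_TXN* txn = nullptr;
            env->txn_begin(env, nullptr, &txn, 0);

            DBT key, val;
            std::memset(&key, 0, sizeof key);
            std::memset(&val, 0, sizeof val);
            key.data = const_cast<char*>(dest);
            key.size = std::strlen(dest) + 1;

            if (routes->get(routes, txn, &key, &val, 0) == 0) {
                forward(static_cast<const char*>(val.data), pkt, len);
                txn->commit(txn, 0);     // lookup + send treated as one unit
            } else {
                txn->abort(txn);         // unknown destination: give up cleanly
            }
        }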

    Read the article

  • ORACLE OPENWORLD - DAY 3 LINUX SESSIONS and ICE CREAM SOCIAL

    - by Zeynep Koch
    It has been two days of amazing sessions, but we have more to come. Day 3 will bring the following sessions for Oracle Linux fans on Wednesday, October 3rd:

    - Hands-On Lab: Oracle Linux Package Management, 10:15am, Marriott Salon 14/15, YB level
    - Hands-On Lab: Oracle Linux Storage Management, 12:45pm, Marriott Salon 14/15, YB level
    - Why Switch to Oracle Linux, 3:30pm, Moscone South #270

    We also have a great Ice Cream Social to cool you down in this weather. Visit our Oracle Linux Pavilion, Moscone South #1033, between 1-2pm to see partners that support Oracle Linux and Oracle VM, and grab your ticket for an ice cream sponsored by QLogic. We look forward to seeing you at these great events.

    Read the article

  • Google I/O 2010 - Making smart & scalable Wave robots

    Wave 201 – David Byttow, Marcel Prasetya. A smart robot must be able to store persistent data. Wave robots can store data in wave structures, like wavelets, datadocs, and annotations, instead of traditional datastores. A scalable robot must perform operations with minimal bandwidth. Wave robots can optimize by selecting the appropriate amount of context, the optimal events, and narrow filters for events. In this talk, we'll share best practices on data storage and scaling. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 58:25.

    Read the article

  • How are minimum system requirements determined?

    - by Michael McGowan
    We've all seen countless examples of software that ships with "minimum system requirements" like the following:

    - Windows XP/Vista/7
    - 1GB RAM
    - 200 MB storage

    How are these generally determined? Obviously sometimes there are specific constraints (if the program takes 200 MB on disk, then that is a hard requirement). Aside from those situations, many times for things like RAM or processor it turns out that more/faster is better, with no hard constraint. How are these determined? Do developers just make up numbers that seem reasonable? Does QA go through some rigorous process, testing various requirements until they find the lowest settings with acceptable performance? My instinct says it should be the latter, but I suspect it is often the former in practice.

    Read the article

  • Implementing a bit shift using AND, NOT, ADD [closed]

    - by fdart17
    I'm implementing a 16-bit left bit shift by r bits, and I only have access to AND, NOT, and ADD. There are three condition codes – negative, zero, and positive – which are set when you use any of these operations. How I went about it was: (1) AND the number with 1000 0000 0000 0000 to set the condition codes to positive if the most significant bit is 1. (2) Add the number to itself; this shifts the bits one to the left. (3) If the MSB was 1, add 1 to the result. (4) Loop through (1)-(3) r times. I'm wondering if anyone has any hints toward more efficient methods? Thanks!
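    Not from the original post: here is a minimal C++ sketch of the loop described above. Note that carrying the tested MSB back into bit 0 (step 3) makes this a rotate left rather than a pure shift.

        #include <cstdint>
        #include <cstdio>

        // Rotate a 16-bit value left by r bits using only AND and ADD,
        // mirroring steps (1)-(4) from the question.
        uint16_t rotate_left16(uint16_t value, unsigned r) {
            for (unsigned i = 0; i < r; ++i) {
                uint16_t msb = value & 0x8000;   // (1) test the most significant bit
                value = value + value;           // (2) ADD the number to itself = shift left by 1
                if (msb)                         // (3) if the MSB was 1...
                    value = value + 1;           //     ...wrap it around into bit 0
            }                                    // (4) repeated r times
            return value;
        }

        int main() {
            std::printf("%04x\n", rotate_left16(0x8001, 1));  // prints 0003
        }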

    Read the article

  • Are your merchandise systems limiting growth? Oracle Retail's Merchandise Operations Management could be the answer

    - by user801960
    In this video, Lara Livgard, Director of Oracle Retail Strategy, introduces Oracle Retail Merchandise Operations Management (MOM), a set of integrated, modular solutions that support buying, pricing, inventory management, and inventory valuation across a retailer’s channels, countries, and business models. MOM is the backbone of successful retail operations, providing timely and accurate visibility across the entire enterprise and enabling efficient supply-chain execution driven by plans and forecasts. Its modular architecture facilitates tailored, high-value implementations, giving retailers the information they need in order to offer a quality customer experience through a truly integrated multi-channel approach. Further information is available on the Oracle Retail website regarding Merchandise Operations Management.

    Read the article

  • Q&A: Does it make sense to run a personal blog on the Windows Azure Platform?

    - by Eric Nelson
    I keep seeing people wanting to do this (or something very similar) and then being surprised at how much it might cost them if they went with Windows Azure. Time for a Q&A. Short answer: No, definitely not. Madness, sheer madness. (Hopefully that was clear enough.) Longer answer: No, because:

    - It would cost you a heck of a lot more than just about any other approach to running a blog.
    - A site that can easily be run on a shared hosting solution (as many blogs do today) does not require the rich capabilities of Windows Azure – capabilities such as simplified deployment and management, dedicated resources, elastic resources, “unlimited” storage, etc.
    - It is simply not the type of application the Windows Azure Platform has been designed for.

    Related Links:
    - Q&A: How can I calculate the TCO and ROI when considering the Windows Azure Platform?
    - Q&A: When do I get charged for compute hours on Windows Azure?
    - Q&A: What are the UK prices for the Windows Azure Platform?

    Read the article

  • About Ubuntu upgrade for version 11.02

    - by Mr Myo MIn Hein
    In my Ubuntu version, I cannot get the essence of Ubuntu because my version is old: most of the software cannot be used, and some commands are not really working. Also, my 300GB HDD is not found. My primary HDD is 750GB and has Windows 7 and Ubuntu on it in a dual-boot; on Windows, a 450GB HDD for storage is found. The next question is about my launcher: it is not on the left side, so I cannot use it efficiently, because it is in one third of my desktop.

    Read the article

  • System does not detect USB pendrives

    - by cshubhamrao
    This USB thing is driving me crazy: two problems in a time span of three hours. I was already trying to cope with a "wrong fs type, bad option, bad superblock" error while mounting FAT drives when, to my amazement, I discovered that none of the USB storage devices showed up in the system. Useful output:

    root@shubham-pc:~# tail /var/log/syslog
    Nov 7 21:41:47 shubham-pc colord: device removed: sysfs-HP-v250w
    Nov 7 21:41:51 shubham-pc kernel: [ 3441.529542] usb 1-1: USB disconnect, device number 11
    Nov 7 21:41:53 shubham-pc kernel: [ 3443.820029] usb 1-2: new high-speed USB device number 14 using ehci-pci
    Nov 7 21:41:54 shubham-pc kernel: [ 3443.952897] usb 1-2: New USB device found, idVendor=0781, idProduct=5530
    Nov 7 21:41:54 shubham-pc kernel: [ 3443.952905] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    Nov 7 21:41:54 shubham-pc kernel: [ 3443.952909] usb 1-2: Product: Cruzer
    Nov 7 21:41:54 shubham-pc kernel: [ 3443.952913] usb 1-2: Manufacturer: SanDisk
    Nov 7 21:41:54 shubham-pc kernel: [ 3443.952917] usb 1-2: SerialNumber: 20060876420EC6016847
    Nov 7 21:41:54 shubham-pc mtp-probe: checking bus 1, device 14: "/sys/devices/pci0000:00/0000:00:1d.7/usb1/1-2"
    Nov 7 21:41:54 shubham-pc mtp-probe: bus: 1, device: 14 was not an MTP device

    Read the article

  • How to monetize and/or protect framework rights?

    - by Arthur Wulf White
    I made a game engine/framework for ActionScript 3 that allows very efficient and flexible level design for platformers, tower defense games, RPGs, RTSs, and racing games. The algorithms I used are new and are not available in any other level editor I've seen. What are the best ways to benefit myself and others with my new framework? It is written for ActionScript 3, so unless I translate it to C# I'm guessing it will be decompiled and used by others. I want to have some license allowing me to share the framework and still benefit from it. Any advice would be appreciated. This issue has been on my mind a lot this year, and I am hoping to find a solution that will bring me some relief.

    Read the article

  • What is Logical Volume Management and How Do You Enable It in Ubuntu?

    - by Justin Garrison
    Logical Volume Management (LVM) is a disk management option that every major Linux distribution includes. Whether you need to set up storage pools or just need to dynamically create partitions, LVM is probably what you are looking for.

    Read the article

  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command (like the instructions in assembler). When parsing a line I have to map the requested command to actual code. My current solution looks like this:

        std::string op, param1, param2;
        // parse line, identify op, param1, param2
        ...
        // call command-specific code
        if(op == "MOV") ExecuteMov(AsNumber(param1));
        else if(op == "ROT") ExecuteRot(AsNumber(param1));
        else if(op == "SZE") ExecuteSze(AsNumber(param1));
        else if(op == "POS") ExecutePos(AsNumber(param1), AsNumber(param2));
        else if(op == "DIR") ExecuteDir(AsNumber(param1), AsNumber(param2));
        else if(op == "SET") ExecuteSet(param1, AsNumber(param2));
        else if(op == "EVL") ...

    The more commands are supported, the more string comparisons I'll have to do to identify and call the associated method. Can you point me to a more efficient implementation for the described scenario?
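    Not part of the original question, but one common answer: replace the if/else chain with a hash-map dispatch table, so identifying a command is a single hash lookup instead of up to N string comparisons. A minimal sketch; the Execute* handlers and AsNumber are assumed from the question's code:

        #include <functional>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // Assumed from the question's code:
        double AsNumber(const std::string& s);
        void ExecuteMov(double);
        void ExecuteRot(double);
        void ExecutePos(double, double);

        using Params  = std::vector<std::string>;
        using Handler = std::function<void(const Params&)>;

        const std::unordered_map<std::string, Handler>& CommandTable() {
            static const std::unordered_map<std::string, Handler> table = {
                {"MOV", [](const Params& p) { ExecuteMov(AsNumber(p[0])); }},
                {"ROT", [](const Params& p) { ExecuteRot(AsNumber(p[0])); }},
                {"POS", [](const Params& p) { ExecutePos(AsNumber(p[0]), AsNumber(p[1])); }},
                // ...one entry per opcode
            };
            return table;
        }

        void Dispatch(const std::string& op, const Params& params) {
            auto it = CommandTable().find(op);   // O(1) average, regardless of opcode count
            if (it != CommandTable().end())
                it->second(params);
            // else: report an unknown opcode
        }

    For a fixed opcode set, a perfect hash or an enum produced by the tokenizer can shave off more, but the map version usually suffices and stays easy to extend.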

    Read the article

  • Is there a modern (eg NoSQL) web analytics solution based on log files?

    - by Martin
    I have been using AWStats for many years to process my log files, but I am missing many possibilities (like cross-domain reports) and I hate being stuck with extra fields I created years ago. Anyway, I am not going to continue to use this script. Is there a modern Apache log analytics solution based on modern storage technologies like NoSQL, or at least somehow ready to cope with large datasets efficiently? I am primarily looking for something that generates nice sortable and searchable outputs with a focus on web analytics, before having to write my own frontends (so graylog2 is not an option). This question is purely about log-file-based solutions.

    Read the article

  • Hardware from Oracle, Pricing for Education (HOPE) Program: New version now available!

    - by Cinzia Mascanzoni
    With HOPE Version 5, Oracle offers education institutions even greater savings on its award-winning systems products, making it more affordable for educational institutions to create scalable, high-performing, and low-TCO teaching and learning environments. With special discounts for you on selected Sun products from Oracle, the net result is that you can assist your Resellers in reducing the impact on their customers' budgets in two ways:

    - Lower the total cost of technology acquisition of systems and hardware for the end user
    - Reduce the environmental impact of the educational institutions served by your Resellers, by running and maintaining a lower-cost, more efficient infrastructure

    Start today to take advantage of the new release of this exciting program from Oracle. Check the EMEA VAD Resource Center for a description of the products and discounts offered to you, and to find links to more detailed information about these Sun products.

    Read the article

  • Cannot mount Android phone and sync with Banshee

    - by Brett Alton
    I can't get my LG Optimus One to sync with Banshee. I read somewhere that the root needs to have an empty file called '.is_audio_player'. I did that, and it still doesn't mount. However, I ran dmesg, and it appears that the card is unmounting before I even have a chance to run Banshee.

    [ 7250.321359] usb 1-1.4: new high speed USB device using ehci_hcd and address 10
    [ 7250.444795] scsi12 : usb-storage 1-1.4:1.0
    [ 7251.567946] scsi 12:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
    [ 7251.568839] sd 12:0:0:0: Attached scsi generic sg3 type 0
    [ 7252.232433] sd 12:0:0:0: [sdc] 15564800 512-byte logical blocks: (7.96 GB/7.42 GiB)
    [ 7252.233299] sd 12:0:0:0: [sdc] Write Protect is off
    [ 7252.233306] sd 12:0:0:0: [sdc] Mode Sense: 03 00 00 00
    [ 7252.233309] sd 12:0:0:0: [sdc] Assuming drive cache: write through
    [ 7252.235658] sd 12:0:0:0: [sdc] Assuming drive cache: write through
    [ 7252.235666] sdc: sdc1
    [ 7252.239132] sd 12:0:0:0: [sdc] Assuming drive cache: write through
    [ 7252.239140] sd 12:0:0:0: [sdc] Attached SCSI removable disk
    [ 7272.573437] usb 1-1.4: USB disconnect, address 10

    Suggestions?

    Read the article

  • Azure Full trust permissions

    - by kaleidoscope
    Under Windows Azure full trust, your role has access to a variety of system resources that are not available under partial trust.

    File System Resources
    A role running in Windows Azure has permissions to read and write to certain file, directory, and volume resources on the server. These permissions are outlined below.

    - System root directory: no access
    - Subdirectories of the system root directory: no access
    - Windows directory: read access only
    - Machine configuration files: no access
    - Service configuration file: read access only
    - Local storage resource: full access

    Registry Resources
    The following permissions are available to the role when accessing the registry while running in Windows Azure.

    - HKEY_CLASSES_ROOT: read access
    - HKEY_CURRENT_USER: no access
    - HKEY_LOCAL_MACHINE: read access
    - HKEY_USERS: read access
    - HKEY_CURRENT_CONFIG: read access

    More details can be found at: http://msdn.microsoft.com/en-us/library/dd573363.aspx

    Amit, S

    Read the article

  • Google I/O 2012 - Deep Dive into the Next Version of the Google Drive API

    Ali Afshar, Ivan Lee. This session discusses a number of best practices with the new Google Drive API. We'll cover how to properly sync files, how to manage sharing, and how to make your applications faster and more efficient than ever before. We'll go through an entire working application that exposes best practices. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 45:50.

    Read the article

  • Extending the Value of Your Oracle Financials Applications Investment with Document Capture, Imaging and Workflow

    Learn how Oracle's end-to-end document imaging system extends the value and increases the automation of your Oracle Financials applications by using intelligent capture and imaging technologies to streamline high-volume operations like accounts payable. Oracle Imaging and Process Management 11g (Oracle I/PM 11g) offers an integrated system that digitizes paper invoices, intelligently extracts header information and line-item details, initiates automated workflows, and enables in-context access to imaged invoices directly from Oracle Applications, including Oracle E-Business Suite Financials and PeopleSoft Enterprise Financial Management. Come hear more about these certified, standards-based application integrations, as well as how document imaging can help your organization achieve quick, measurable ROI by increasing efficiencies across financial departments and reducing costs related to paper storage and handling.

    Read the article

  • Why ISVs Run Applications on Oracle SuperCluster

    - by Parnian Taidi-Oracle
    In this short video, Michael Palmeter, Senior Director of Product Development for Oracle Engineered Systems, discusses how ISVs can run up to 20x faster, gain 28:1 storage compression, and grow their presence in the market, all without any changes to their code. One of the family of Oracle engineered systems, Oracle SuperCluster provides maximum end-to-end database and application performance, with minimal initial and ongoing support and maintenance effort, at the lowest total cost of ownership. Java or enterprise applications running on Oracle Database 11gR2 or higher and Oracle Solaris 11 can run up to 20x faster on Oracle SuperCluster than on traditional platforms, without any changes to their code. A large number of customers are consolidating hundreds of their applications and databases on Oracle SuperCluster and are requiring their ISVs to support it. ISVs can become Oracle SuperCluster Ready and Oracle SuperCluster Optimized by joining the Oracle Exastack program.

    Read the article

  • Oracle's Director of NoSQL Database Product Management talks with ODBMS.ORG

    - by thegreeneman
    I was pinged today by one of my favorite database technology sites, ODBMS.ORG, informing me that Dave Segleau, the Director of Oracle NoSQL Database product management, spent some time talking with their editor Roberto Zicari about the product. It's a great interview and I highly recommend the read. I think it's important to understand the connectivity that Oracle NoSQL Database (ONDB) has with BerkeleyDB, as it says a lot about the maturity of ONDB as it relates to data integrity and reliability. BerkeleyDB has been living the NoSQL life since the beginning of this transition, embracing the right-tool-for-the-job approach to data management. Several of the biggest names in NoSQL (e.g. LinkedIn's Voldemort) built their NoSQL scale-out solutions leveraging the robust BerkeleyDB storage engine under their distribution architectures. Oracle commercializing the same via ONDB makes perfect sense, given the demonstrated need for this category of technology.

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted an online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative:

    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    A: You should have an OEM 12c environment for DBaaS administration, and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    Q: How does this benefit companies that have their own data center?
    A: This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and faster response to changes in business requirements.

    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    Q: The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self-service.
    A: Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once; each user then simply selects from the options offered in the catalog.

    Q: Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    Q: How does Database as a Service impact or help with "Click to Compute" or deploying a "database in cloud infrastructure"?
    A: DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    Q: How exactly is DBaaS different from a traditional database, with storage/OS/DB all together 'transparently' providing service to applications? Will there be cross-database access by applications/users?
    A: Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (< 15 minutes); 3) the services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    Q: With 24x7x365 databases, it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this, and the different needs and patching downtime of the application databases it might be serving?
    A: You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patches and upgrades very efficient in that you can pre-provision a new-version or patched multitenant database in a different ORACLE_HOME, and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • What will be a good Python script (or your favorite language goes here) to test a system's performance and capabilities?

    - by dassouki
    Let's say you're in a computer store looking at 10 laptops, and you want to really compare the systems' capabilities. What would be an efficient "your favorite language goes here" script that will allow you to do this? As an example, when I go to the store I usually open a MacBook's and a Pro's terminal, write an equation in Python, iterate it a million or so times, and time them. I like to compare the difference in time. What would be an ideal and simple script that can efficiently compare systems?
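    Not from the original post, and the asker mentions Python, but in the spirit of "your favorite language goes here", here is a minimal C++ sketch of the same idea: time a fixed arithmetic loop and compare the wall-clock results across machines. The loop body and iteration count are arbitrary choices.

        #include <chrono>
        #include <cstdio>

        int main() {
            using clock = std::chrono::steady_clock;

            volatile double x = 1.0;        // volatile keeps the loop from being optimized away
            const long iterations = 10000000;

            const auto start = clock::now();
            for (long i = 0; i < iterations; ++i)
                x = x * 1.0000001 + 0.0000001;   // arbitrary floating-point work
            const std::chrono::duration<double> elapsed = clock::now() - start;

            std::printf("%ld iterations in %.3f s\n", iterations, elapsed.count());
        }

    Bear in mind that a single-threaded arithmetic loop only measures one core's scalar throughput; memory bandwidth, disk speed, and multi-core behavior need separate tests.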

    Read the article

  • Subdividing a polygon into boxes of varying size

    - by Michael Trouw
    I would like to be pointed to information/resources for creating algorithms like the one illustrated on this blog, which subdivides a polygon (in my case a Voronoi cell) into several boxes of varying size: http://procworld.blogspot.nl/2011/07/city-lots.html In the comments, a paper co-authored by the author of the blog can be found; however, the only formula listed is about candidate-location suitability: http://www.groenewegen.de/delft/thesis-final/ProceduralCityLayoutGeneration-Preprint.pdf Any language will do, but if examples can be given, JavaScript is preferred (as it is the language I am currently working with). A similar question is this one: What is an efficient packing algorithm for packing rectangles into a polygon?
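    Not from the thread, and much simpler than the paper's method, but the core idea can be sketched for the axis-aligned case: recursively split the longest side of a box at a randomized point until the pieces drop below a target area. (The asker prefers JavaScript; C++ is used here only for consistency with the other snippets on this page.)

        #include <cstdio>
        #include <random>
        #include <vector>

        struct Box { double x, y, w, h; };

        // Recursively split the longest side of b at a random ratio until
        // the area drops below min_area; collects the resulting lots in out.
        void subdivide(const Box& b, double min_area, std::mt19937& rng,
                       std::vector<Box>& out) {
            if (b.w * b.h <= min_area) { out.push_back(b); return; }
            std::uniform_real_distribution<double> ratio(0.3, 0.7);  // avoid slivers
            const double t = ratio(rng);
            if (b.w >= b.h) {   // split vertically
                subdivide({b.x, b.y, b.w * t, b.h}, min_area, rng, out);
                subdivide({b.x + b.w * t, b.y, b.w * (1 - t), b.h}, min_area, rng, out);
            } else {            // split horizontally
                subdivide({b.x, b.y, b.w, b.h * t}, min_area, rng, out);
                subdivide({b.x, b.y + b.h * t, b.w, b.h * (1 - t)}, min_area, rng, out);
            }
        }

        int main() {
            std::mt19937 rng(42);
            std::vector<Box> lots;
            subdivide({0, 0, 100, 60}, 150.0, rng, lots);
            std::printf("%zu lots\n", lots.size());
        }

    For an arbitrary polygon such as a Voronoi cell, the same recursion can be run on the cell's oriented bounding box, clipping each lot against the polygon afterwards, which is roughly the direction the linked paper takes much further.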

    Read the article

  • OpenWorld hands on labs: HOL9558, HOL9559 and HOL9870

    - by cpauliat
    At the upcoming Oracle OpenWorld event, which starts in two days in San Francisco, Olivier Canonge, Simon Coter, Eric Bezille, and I will run three hands-on labs about cloud using Oracle VM for x86 virtualization (details below). For each lab, a detailed document (in PDF format) explains all the steps. If you don't have the opportunity to attend the OpenWorld lab sessions, you can still run the labs at home or in the office using those documents.

    Lab 9558: Deploying Infrastructure as a Service (IaaS) with Oracle VM
    Session ID: HOL9558
    Tuesday, October 2nd, 2012, 10:15am – 11:15am
    Location: Marriott Marquis - Salon 14/15
    PDF Document: part1 part2 part3 (right-click and save the link for each part, then use winzip on file .001 to extract the PDF doc from the 3 zip files)

    Lab 9559: Virtualize and Deploy Oracle Applications Using Oracle VM Templates
    Session ID: HOL9559
    Tuesday, October 2nd, 2012, 11:45am – 12:45pm
    Location: Marriott Marquis - Salon 14/15
    PDF Document

    Lab 9870: x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
    Session ID: HOL9870
    Wednesday, October 3rd, 2012, 5:00pm – 6:00pm
    Location: Marriott Marquis - Salon 14/15
    PDF Document: part1 part2 part3 (right-click and save the link for each part, then use winzip on file .001 to extract the PDF doc from the 3 zip files)

    Read the article
