Search Results

Search found 8687 results on 348 pages for 'per'.


  • Xsigo and Oracle's Storage

    - by Philippe Deverchère
    Xsigo, a virtual network infrastructure provider, has recently been acquired by Oracle. Following this acquisition, one might ask why it is important to Oracle and how Oracle's storage is going to benefit in the long term from this virtualized infrastructure layer. Well, the first thing to understand is that Virtual Networking addresses both network and storage connectivity. Oracle Virtual Networking, as the Xsigo technology is now called, connects any server to any network and storage, so this is not just about connecting servers to the Internet or intranet. It is also, in large part, about connecting servers to NAS and SAN storage.

    Connecting servers to storage has become increasingly complex in the past few years because of the strong emergence of virtualization at the operating system level. 50% of enterprise workloads are now virtualized, up from 18% in 2009, resulting in a strong consolidation of various applications in a high-density server footprint. At the same time, server I/O capability increased 8x in the last 8 years. All this has pushed IT administrators to multiply the number of I/O connections in the back-end of their physical servers, resulting in a messy and very hard to manage networking infrastructure. Consider a typical rack back-end when no virtual networking is used. We consider that today:

    - 75% of users have ten or more Ethernet ports per server
    - 85% of users have two or more SAN ports per server
    - 58% have had to add connectivity to a server specifically for VMs
    - 65% consider cable reduction a priority

    The average is 12 or more ports per server, resulting in an extremely complex infrastructure to manage. What Oracle wants to achieve with its Oracle Virtual Networking offering is pretty simple. The objective is to eliminate the complexity through a dramatic reduction of cabling between servers and storage/networks. It is also to provide a software-based management system so that any server can be connected to any network or any storage, on demand, and without physical intervention on the infrastructure. At the end of the day, what one wants for the back-end of customers' racks is just a couple of connections on each physical server, providing a simple, agile and fast network infrastructure for both storage and networking access. This is exactly what the Oracle Virtual Networking solution does. It transforms a complex, error-prone, difficult to manage and expensive networking infrastructure into a simple, high-performance and agile solution for the data center.

    Practically speaking, and for the sake of simplicity, imagine that each server hosts just a minimal number of physical InfiniBand HCAs (Host Channel Adapters) with two links (for redundancy) onto the Oracle Fabric Interconnect director. Using the Oracle Fabric Manager software, you'll then be able to create virtual NICs and HBAs (called vNICs and vHBAs) that will be seen by the servers as standard NICs and HBAs, and associate them with networks and storage systems which are physically connected to the back-end of the director through standard Fibre Channel and Ethernet GbE/10GbE ports. In addition to this incredibly simple "at-a-click" connectivity capability, the Oracle Virtual Networking solution offers powerful features such as network isolation, Quality of Service, advanced performance monitoring and non-disruptive reconfiguration, migration and scalability of the networking infrastructure.
    So let's go back now to our initial question: why is Oracle Virtual Networking especially important to Oracle's storage solutions? After all, one could connect any storage to the back-end of the Oracle Fabric Interconnect directors, right? The answer is pretty simple: since Oracle owns both the virtualized networking infrastructure and the storage (ZFS-SA, Pillar Axiom and tape), it is possible to imagine several ways in the future to add value when it comes to connecting storage to a virtualized storage network: enhanced storage capabilities, converged management between storage and network, improved diagnostic capabilities, and optimized integration resulting in higher performance and unique features/functions. Of course, all this is not going to be done overnight, and the future will tell us which evolutions come first. But there is little doubt that the integration of Xsigo within Oracle is going to create opportunities for Oracle's storage!

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data - think blind spots on intermittent days, or user activity regularly sparse for the afternoon/evening. If this fits your scenario, read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem: The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions, and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days]. Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away, because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following that gets reported in the ULS:

        04/08/2012 09:30:04.78  OWSTIMER.EXE (0x1E24)  0x2C98  SharePoint Foundation  Health  i0m6  High  Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; this error message indicates that the usage reporting has exceeded the partition size for the given day.

    What it means: The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise periodically checking the ULS logs for this message. Down the road, I plan to explore whether [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built Custom Health Rules, I'd be interested to hear about your experiences).
    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention: Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data - the lower the retention, the lower the divisor and thus a bigger partition. For example, halving the retention from 14 to 7 days would halve the number of partitions, but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions... and so on.

    Recreate the LoggingDB with an increased size: The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact, because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()

    Then create a new logging database and attach it to the Usage Service Application using the following command:

        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

        SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (NOLOCK)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'

    Read the article

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary: Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry.

    Company Overview: Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars.

    Business Challenges: To maximize the company's growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to:

    - Introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs
    - Replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees
    - Improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leave the company
    - Eliminate delays when downloading files from the central server to a PC
    - Build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company's headquarters
    - Establish a scalable system that can be extended to Hyundai offices around the world

    Solution Deployed: After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors.
    Business Results:

    - Lowered the overall time spent each day on all document-related work by approximately 85% - from 4.5 hours to around 42 minutes on an average day
    - Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment
    - Reduced staff's time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository
    - Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system
    - Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff
    - Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide

    "We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry." - Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company

    Additional Information: Hyundai Motor Company Customer Snapshot | Oracle WebCenter Content

    Read the article

  • Upcoming Carbon Tax in South Africa

    - by Evelyn Neumayr
    By Elena Avesani, Principal Product Strategy Manager, Oracle

    In 2012, the South Africa National Treasury announced a plan to impose a carbon tax to cut the carbon emissions that are blamed for climate change. South Africa is ranked among the top 20 countries measured by absolute carbon dioxide emissions, with emissions per capita in the region of 10 metric tons per annum and over 90% of South Africa's energy produced by burning fossil fuels. The 40 largest companies in the country are responsible for 207 million tons of carbon dioxide, directly emitting 20 percent of South Africa's carbon output. The legislation, originally scheduled to run from January 2015 to 31 December 2019, is now delayed to January 2016. It will levy a carbon tax of R120 (US$11) per ton of CO2, rising by 10 percent a year until 2020, while all sectors bar electricity will be able to claim additional relief of at least 10 percent. The South African treasury proposed a 60 percent tax-free threshold on emissions for all sectors, including electricity, petroleum, iron, steel and aluminum.

    Oracle Environmental Accounting and Reporting (EA&R) supports these needs and guarantees consistency across organizations in how data is collected, retained, controlled, consolidated and used in calculating and reporting emissions inventory. EA&R also enables companies to develop an enterprise-wide data view that includes all five of the key sustainability categories: carbon emissions, energy, water, materials and waste. Thanks to its native integration with Oracle E-Business Suite and JD Edwards EnterpriseOne ERP Financials and Inventory Systems, and its capability of capturing environmental data across business silos, Oracle Environmental Accounting and Reporting is uniquely positioned to support a strategic approach to carbon management that drives business value.

    Sources: African Utility Week, BDlive
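    As a rough illustration of the announced escalation (assuming the 10 percent increase compounds annually from the R120 base at the 2016 start; the end figure is a derived estimate, not from the Treasury announcement):

        \[
        \text{tax}(y) = 120 \times 1.1^{\,y-2016}\ \text{R/ton}, \qquad
        \text{tax}(2020) = 120 \times 1.1^{4} \approx 175.7\ \text{R/ton}
        \]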

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl(). Segment operations such as creation and locking are typically single-threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time.

    To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.

    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245       6X
        X7560     64    1016    153       7X
        M9000    512    1196    206       6X
        T5240    128    2506    234      11X
        T4-2     128    1197    107      11X

        DISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265       6X
        X7560     64    1116    158       7X
        M9000    512    1165    152       8X
        T5240    128    2796    198      14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:

        ISM, T4-2
                 before  after
                 ------  -----
        create      702     64
        destroy     495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
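    The perspective numbers follow directly from the per-gigabyte creation times in the last table:

        \[
        512\ \mathrm{GB} \times 702\ \mathrm{ms/GB} \approx 359\ \mathrm{s} \approx 6\ \mathrm{min}
        \qquad\text{vs.}\qquad
        512\ \mathrm{GB} \times 64\ \mathrm{ms/GB} \approx 33\ \mathrm{s}
        \]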

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster.
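    The arithmetic behind those rates, written out: with unicast only, each cluster-wide message costs one packet per peer, so a member's cluster-wide message rate is bounded by its packet rate divided by the number of peers,

        \[
        R_{\mathrm{msg}} \le \frac{R_{\mathrm{pkt}}}{N-1}
        \approx \frac{70{,}000}{499} \approx 140\ \text{messages/s} \quad (N = 500)
        \]

    and with 10 members sharing each machine's NIC, each member's packet budget (and thus its message rate) drops another tenfold, to ~14 messages/s.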
    Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly: even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.

    Read the article

  • How to improve WinForms MSChart performance?

    - by Marcel
    Hi all, I have created some simple charts (of type FastLine) with MSChart and update them with live data. To do so, I bind an observable collection of a custom type to the chart like so:

        // set chart data source
        this._Chart.DataSource = value; // is of type ObservableCollection<SpectrumLevels>

        // define x and y value members for each series
        this._Chart.Series[0].XValueMember = "Index";
        this._Chart.Series[1].XValueMember = "Index";
        this._Chart.Series[0].YValueMembers = "Channel0Level";
        this._Chart.Series[1].YValueMembers = "Channel1Level";

        // bind data to chart
        this._Chart.DataBind(); // takes 1.5 seconds for 8000 points per series

    At each refresh, the dataset completely changes; it is not a scrolling update! With a profiler I have found that the DataBind() call takes about 1.5 seconds. The other calls are negligible. How can I make this faster? Should I use another type than ObservableCollection? An array, probably? Should I use another form of data binding? Is there some tweak for the MSChart that I may have missed? Should I use a sparser set of data, having one value per pixel only? Have I simply reached the performance limit of MSChart? Given the type of the application, to keep it "fluent" we should have multiple refreshes per second. Thanks for any hints!

    Read the article

  • We have multiple app servers running against a single database. How do I ensure that each row in a q

    - by Dave
    We have about 7 app servers running .NET Windows services that ping a single SQL Server 2005 queue table and fetch a fixed number of records to process at fixed intervals. The number of records to process and the amount of time between fetches are both configurable and are initially set to 100 and 30 seconds. Currently, my queue table has an int status column which can be "Ready", "Processing", "Complete", or "Error". The proc that fetches the records wraps the following steps in a SQL transaction:

    1) Fetch x number of records into a temp table where the status is "Ready". The select uses a HOLDLOCK hint.
    2) Update the status on those records in the queue table to "Processing".

    The .NET services do some processing that may take seconds or even minutes per record. Another proc is called per record that simply updates the status to "Complete". The update proc has no transaction, as I'm leaning on the implicit transaction that is part of the update statement here. I don't know the traffic expectations for this, but figure it will be under 10k records per day. Is this the best way to handle this scenario? If so, are there any details that I've left out, such as a hint here or there? Thanks! Dave
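    For reference, a minimal sketch of one common SQL Server 2005 dequeue pattern (not the poster's exact proc; table name and status codes are illustrative). The UPDLOCK/READPAST/ROWLOCK hint combination lets each app server skip rows another server has already claimed, so no two servers grab the same records:

        DECLARE @claimed TABLE (Id INT);

        BEGIN TRAN;

        -- Atomically claim up to 100 "Ready" rows and mark them "Processing".
        -- READPAST skips rows locked by other app servers instead of blocking.
        UPDATE TOP (100) QueueTable WITH (UPDLOCK, READPAST, ROWLOCK)
        SET    Status = 1                 -- 1 = Processing
        OUTPUT inserted.Id INTO @claimed
        WHERE  Status = 0;                -- 0 = Ready

        COMMIT TRAN;

        -- @claimed now holds the ids this server is responsible for
        SELECT Id FROM @claimed;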

    Read the article

  • Which key:value store to use with Python?

    - by Kurt
    So I'm looking at various key:value (where the value is either strictly a single value or possibly an object) stores for use with Python, and have found a few promising ones. I have no specific requirement as of yet because I am in the evaluation phase. I'm looking for what's good, what's bad, what are the corner cases these things handle well or don't, etc. I'm sure some of you have already tried them out, so I'd love to hear your findings/problems/etc. on the various key:value stores with Python. I'm looking primarily at:

    memcached - http://www.danga.com/memcached/
      Python clients: http://pypi.python.org/pypi/python-memcached/1.40 and http://www.tummy.com/Community/software/python-memcached/
    CouchDB - http://couchdb.apache.org/
      Python client: http://code.google.com/p/couchdb-python/
    Tokyo Tyrant - http://1978th.net/tokyotyrant/
      Python client: http://code.google.com/p/pytyrant/
    Lightcloud - http://opensource.plurk.com/LightCloud/
      Based on Tokyo Tyrant, written in Python
    Redis - http://code.google.com/p/redis/
      Python client: http://pypi.python.org/pypi/txredis/0.1.1
    MemcacheDB - http://memcachedb.org/

    So I started benchmarking (simply inserting keys and reading them) using a simple counter to generate numeric keys and a value of "A short string of text":

    memcached: CentOS 5.3/python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 gig memory: 14,000 inserts per second, 16,000 reads per second. No real optimization, nice.

    memcachedb claims on the order of 17,000 to 23,000 inserts per second, 44,000 to 64,000 reads per second.

    I'm also wondering how the others stack up speed-wise.

    Read the article

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory-mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory-mapped files work.

    If anyone knows the O/S implementation details behind memory-mapped files, I would appreciate comments on the following theory: if two processes share a memory-mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. Also, the number of soft page faults is therefore directly proportional to the size of the data write.

    My experiments seem to bear out the above theory. When I share data I write one contiguous block of data. In other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory-mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose to use a memory-mapped file instead of a TCP socket connection because I thought it would be more efficient.

    Note, if the soft page faults are harmless, please note that. I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults intrinsically are not significantly harmful, then if anyone has any guidelines as to what number per second is "excessive" I'd like to hear that. Thanks.
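    If the theory holds, the fault count should track the write size divided by the page size. Assuming the standard 4 KB x86 page size (the transfer size below is an inference from the observed fault count, not a figure from the post):

        \[
        N_{\mathrm{faults}} \approx \frac{\text{bytes written}}{\text{page size}}, \qquad
        10 \times 4\ \mathrm{KB} = 40\ \mathrm{KB\ per\ transfer}
        \]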

    Read the article

  • Figuring out the performance limitation of an ADC on a PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a microcontroller like PIC for an analog-to-digital application. This would be preferable to using external A/D chips. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right - would appreciate a check! Here's the simplest example: PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8MHz and has an instruction cycle of 0.5us (4 clock steps per instruction). So:

    Taking Tacq = 6.06 us (acquisition time for ADC, assume chip temp. = 50*C) [datasheet p34]
    Taking Fosc = 8MHz (? clock speed)
    Taking divisor = 4 (4 clock steps per CPU instruction)
    This gives TAD = 0.5us (TAD = 1/(Fosc/divisor))
    Conversion time is 13*TAD [datasheet p31]
    This gives a conversion time of 6.5us
    ADC duration is then 12.56 us [? Tacq + 13*TAD]
    Assuming at least 2 instructions for load/store: this is another 1 us [0.5 us per instruction]
    Which would give a max sampling rate of 73.7 ksps (1/13.56)
    Supposing 8 more instructions for real-time processing: this is another 4 us
    Thus, total ADC/handling time = 17.56us (12.56us + 1us + 4us)
    So the expected upper sampling rate is 56.9 ksps, and the Nyquist frequency for this sampling rate is therefore 28 kHz.

    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet? Any pointers would be much appreciated! AKE
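    Written out as formulas, the chain of reasoning above is:

        \[
        T_{AD} = \frac{\text{divisor}}{F_{osc}} = \frac{4}{8\ \mathrm{MHz}} = 0.5\ \mu\mathrm{s}, \qquad
        t_{ADC} = T_{acq} + 13\,T_{AD} = 6.06 + 6.5 = 12.56\ \mu\mathrm{s}
        \]
        \[
        f_s = \frac{1}{t_{ADC} + t_{\mathrm{overhead}}} = \frac{1}{(12.56 + 1 + 4)\ \mu\mathrm{s}} \approx 56.9\ \mathrm{ksps}, \qquad
        f_{\mathrm{Nyquist}} = \frac{f_s}{2} \approx 28\ \mathrm{kHz}
        \]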

    Read the article

  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and in the questions asked here about MySQL caching, and most of them seem very non-specific about a couple of questions that I have about performance and MySQL query caching. Specifically I want answers to these questions; assume for all of them that I have the query cache enabled and it is of type 2, or "DEMAND":

    1. Is the query cache per table, per database, or per server? Meaning, if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?

    2. If I have table T1, which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2, which I never do, when I query T2 with a SELECT query will it check through the cache first before performing the query? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)

    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?

    For answers to 2 and 3: is this processing time negligible if T2 is constantly being altered and is the target of a majority of my SELECT queries?
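    For context, a sketch of the DEMAND-mode usage the question assumes (T1/T2 and the columns are the question's placeholders): with query_cache_type = 2, only statements carrying the SQL_CACHE hint are considered for caching.

        -- Explicitly opts in to the query cache (DEMAND mode)
        SELECT SQL_CACHE col_a, col_b FROM T1 WHERE id = 42;

        -- No hint: under query_cache_type = 2 this result is not cached
        SELECT col_a FROM T2 WHERE id = 42;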

    Read the article

  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right - would appreciate a check! (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example, to check that I'm interpreting Tacq, Fosc, TAD, and the divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).)

    A simple example: PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8MHz and has an instruction cycle of 0.5us (4 clock steps per instruction). So:

    Taking Tacq = 6.06 us (acquisition time for ADC, assume chip temp. = 50*C) [datasheet p34]
    Taking Fosc = 8MHz (? clock speed)
    Taking divisor = 4 (4 clock steps per CPU instruction)
    This gives TAD = 0.5us (TAD = 1/(Fosc/divisor))
    Conversion time is 13*TAD [datasheet p31]
    This gives a conversion time of 6.5us
    ADC duration is then 12.56 us [? Tacq + 13*TAD]
    Assuming at least 2 instructions for load/store: this is another 1 us [0.5 us per instruction]
    Which would give a max sampling rate of 73.7 ksps (1/13.56)
    Supposing 8 more instructions for real-time processing: this is another 4 us
    Thus, total ADC/handling time = 17.56us (12.56us + 1us + 4us)
    So the expected upper sampling rate is 56.9 ksps, and the Nyquist frequency for this sampling rate is therefore 28 kHz.

    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet in obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC / dsPIC chips would be much appreciated! AKE

    Read the article

  • Internet Explorer cannot 'fully' load ActiveX Control

    - by K Browne
    Context: I am migrating an installer for an ActiveX control from per-machine to per-user. I did this by having the installer write to HKCU\Software\Classes instead of HKLM\Software\Classes.

    Problem: On my machine (Windows 7 with UAC enabled), the ActiveX control successfully loads. On the other Windows 7 test machines (one with UAC enabled, one with UAC disabled), the control 'partially' loads.

    What is "partially"? When a user visits a page with the ActiveX control, Internet Explorer displays a warning message in a yellow bar at the top of the window. If you click the 'Run add-on' button in the bar, the control becomes visible and begins to run, but JavaScript code that tries to access properties of the control returns the error: Library not registered.

    Differences between machines: On the dev machine, reads from HKCR\CLSID\<GUID> succeed, while on the test machines these reads fail. Reads from HKCU succeed on both dev and test machines. Reads from HKLM fail on both test and dev machines. (I collected the reads using Sysinternals Process Monitor.) Strangely, the keys that Internet Explorer fails to read are clearly visible if I use regedit to view HKCR\CLSID\<GUID> on the test machines.

    Questions: What can I do to get the per-user control to load on the test machines? What could cause this difference between the dev machine and the test machines? Why can I see the key in HKCR with RegEdit but Internet Explorer cannot see the key? Any help is appreciated. Thank you.

    Read the article

  • MySQL - How do I inner join sorting the joined data

    - by Gary
    I'm trying to write a report which will join a person, their work, and their hourly wage at the time of the work. I cannot seem to figure out the best way to join the person's cost when the cost date is less than the date of the work. Let's say a person cost $30 per hour at the start of the year, then got a $10 raise on Feb 5 and another raise on Mar 1:

        01/01/2010  $30.00 (per hour)
        02/05/2010  $40.00
        03/01/2010  $45.00

    The person put in hours on several days which span the raises:

        01/05/2010  10 hours (should be at $30/hr)
        01/27/2010   5 hours (again at $30)
        02/10/2010  10 hours (at $40/hr)
        03/03/2010   5 hours (at $45/hr)

    I'm trying to write one SQL statement which will pull the hours, the cost per hour, and the hours*cost. The cost is the hourly rate last entered into the system, so the cost date is less than the work date, ordered by cost date descending, limit 1.

        SELECT person.id, person.name, work.hours, person_costs.value,
               work.hours * person_costs.value AS value
        FROM person
        INNER JOIN work ON (person.id = work.person_id)
        INNER JOIN person_costs ON (person.id = person_costs.person_id
            AND person_costs.date < work.date)
        WHERE person.id = 1234
        ORDER BY work.date ASC

    The problem I'm having is that person_costs isn't ordered by date in descending order. It's pulling out "any" value (naturally sorted by record position) which matches the condition. How do I select the most recent person_cost value that is older than the work date? Thanks!
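    One common way to express "the latest cost dated before the work date" is a correlated subquery in the join condition - a sketch against the question's schema (untested):

        SELECT p.id, p.name, w.hours, pc.value,
               w.hours * pc.value AS cost
        FROM person p
        INNER JOIN work w ON (p.id = w.person_id)
        INNER JOIN person_costs pc ON (pc.person_id = p.id
            -- pick exactly the most recent cost dated before this work entry
            AND pc.date = (SELECT MAX(pc2.date)
                           FROM person_costs pc2
                           WHERE pc2.person_id = p.id
                             AND pc2.date < w.date))
        WHERE p.id = 1234
        ORDER BY w.date ASC;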

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

        pagegroup
          * pagegroupid
          * name
        has 3600 rows

        page
          * pageid
          * pagegroupid
          * data
        references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup;
        the data column is of type mediumtext and contains 100k-200k bytes of data per row

        userdata
          * userdataid
          * pageid
          * column1
          * column2
          * column9
        references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward. The problem is that a join from userdata to pagegroup is terribly, terribly slow, even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING( pageid )
        INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1 - EXPLAIN returns the following:

        id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
        1   SIMPLE       userdata   ALL     pageid                                                             372420
        1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
        1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
        1   SIMPLE       u      ALL     pageid                                               372420
        1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id  select_type  table  type   possible_keys  key          key_len  ref                      rows  Extra
        1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                 3646  Using index
        1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid  3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
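    As a sketch of the "moral" above (the page_content table name is illustrative): moving the mediumtext column into its own table keeps the page rows narrow, so joins through page touch far less data.

        -- Move the wide column out of `page` so joins scan narrow rows
        CREATE TABLE page_content (
            pageid INT NOT NULL PRIMARY KEY,  -- same key as page.pageid
            data   MEDIUMTEXT
        ) ENGINE=MyISAM;

        INSERT INTO page_content (pageid, data)
            SELECT pageid, data FROM page;

        ALTER TABLE page DROP COLUMN data;

        -- Fetch the blob only when it is actually needed:
        -- SELECT data FROM page_content WHERE pageid = 123;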

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC,
        [Region] ASC,
        [KeyValue1] ASC,
        [KeyValue2] ASC

    ...because when we query the table, we always have the entire primary key. So these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.

        SELECT [Region]
             , MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
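    A sketch of the supporting index (the index name is illustrative): with [Region] leading and [FileDate] second, the MAX per region can be answered from the index alone, without touching the wide base rows.

        CREATE NONCLUSTERED INDEX IX_HugeTable_Region_FileDate
        ON HugeTable ([Region] ASC, [FileDate] DESC);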

    Read the article

  • Log4Net GetLogger creates rolling files even for the unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, and all three share the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file further below). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referenced as an appender-ref in that executable's logger configuration. For instance, when I start my sending daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates the 3 files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then when I start another of the executables, for instance the rule scheduler daemon, this other process cannot write to Log/RuleScheduler.txt because it has been created and locked by the sending daemon process.

    I am guessing that there may be three different solutions to my problem:

    1. GetLogger should only create the rolling file appenders that are referenced in the config.
    2. I could have one config file per executable; this way each config file could list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am however reluctant to do this, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way to have a config file include another one?
    3. Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because none of the daemons should be creating the rolling files of the other daemons.

    Thanks in advance for your help! I had difficulty posting the config file properly here (this website interprets it as HTML). Please go to the following link to see my log4net configuration file: log4Net configuration file

    Read the article

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OS X (POSIX) + FTDI-chipset USB serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baud rate (8N1 = a 10-bit serial frame per 8 data bits; 19200 bps serial = 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time - it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights. This is the code that's restricting the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;

                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken - time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );

                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try to keep that below a fixed threshold. Any advice appreciated.

    Read the article

  • Optimization in Python - do's, don'ts and rules of thumb.

    - by JV
    Well, I was reading this post and then I came across some code which was:

        jokes=range(1000000)
        domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]

    I thought, wouldn't it be better to calculate the value of len(jokes) once outside the list comprehension? Well, I tried it and timed three snippets:

        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]'
        10000000 loops, best of 3: 0.0352 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes);domain=[(0,(l*2)-i-1) for i in range(0,l*2)]'
        10000000 loops, best of 3: 0.0343 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes)*2;domain=[(0,l-i-1) for i in range(0,l)]'
        10000000 loops, best of 3: 0.0333 usec per loop

    Observing the marginal difference of 2.55% between the first and the second made me think - is the first list comprehension optimized internally by Python? Or is 2.55% a big enough optimization (given that len(jokes)=1000000)? If so - what are the other implicit/internal optimizations in Python? What are the developer's rules of thumb for optimization in Python?

    Edit 1: Since most of the answers are "don't optimize, do it later if it's slow" and I got some tips and links from Triptych and Ali A for the do's, I will change the question a bit and ask for the don'ts. Can we have some experiences from people who faced the 'slowness' - what was the problem and how was it corrected?

    Edit 2: For those who haven't, here is an interesting read.

    Edit 3: Incorrect usage of timeit in the question; please see dF's answer for correct usage and hence the timings for the three snippets.

    Read the article

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week) and the latest version of ANKH. My web project is a whopping 1.5 megabytes (that's with all images, css files, dll's after compiling, pdb files... etc). Checking in even super small changes (literally adding the letter "x" to a few files for testing)... takes FOREVER! (about 10 seconds - I almost killed myself). The ANKH client is measuring in BYTES PER SECOND... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can turn that off or something? Oh, also, changing one "big" file of around 10k gains speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project, which uses Visual SourceSafe 2005 (I know, ouch), uploads files at about 200-500kbps from this very same computer/internet connection.

    Read the article

  • Concept of WNDCLASSEX, good programming habits and WndProc for system classes

    - by luiscubal
    I understand that the Windows API uses "classes", relying on the WNDCLASS/WNDCLASSEX structures. I have successfully gone through Windows API Hello World applications and understand that this class is used by our own windows, but also by Windows core controls, such as "EDIT", "BUTTON", etc. I also understand that it is somehow related to WndProc (it allows me to define a function for it). Although I can find documentation about this class, I can't find anything explaining the concept. So far, the only thing I found about it was this: "A Window Class has NOTHING to do with C++ classes." Which really doesn't help (it tells me what it isn't but doesn't tell me what it is). In fact, this only confuses me more, since I'd be tempted to associate WNDCLASSEX with C++ classes and think that "WNDCLASSEX" represents a control type. So, my first question is: what is it? In second place, I understand that one can define a WndProc in a class. However, a window can also get messages from its child controls (or windows, or whatever they are called in the Windows API). How can this be? Finally, when is it good programming practice to define a new class? Per application (for the main frame), per frame, or one per control I define (if I create my own progress bar class, for example)? I know Java/Swing, C#/Windows Forms, C/GTK+ and C++/wxWidgets, so I'll probably understand comparisons with these toolkits.

    Read the article

  • Screen information while Windows system is locked (.NET)

    - by Matt
    We have a nightly process that updates applications on a user's PC, and that requires bringing the application down and back up again (not looking to get into changing that process). The problem is that we are building a Windows AppBar on launch, which requires a valid screen, and when the system is locked there isn't one in the Screen class. So none of the visual effects are enabled and it shows up real ugly. The only way we currently have around this is to detect a locked screen and just spin and wait until the user unlocks the desktop, then continue launching. Leaving it down isn't an option, as this is a key part of our users' workflow, and they expect it to be up and running if they left it that way the night before. Any ideas?? I can't seem to find the display information anywhere, but it has to be stored off someplace, since the user is still logged in. The contents of the Screen.AllScreens array:

        ** When locked:
        Device Name    : DISPLAY
        Primary        : True
        Bits Per Pixel : 0
        Bounds         : {X=-1280,Y=0,Width=2560,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=1024}

        ** When unlocked:
        Device Name    : \\.\DISPLAY1
        Primary        : True
        Bits Per Pixel : 32
        Bounds         : {X=0,Y=0,Width=1280,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=994}

        Device Name    : \\.\DISPLAY2
        Primary        : False
        Bits Per Pixel : 32
        Bounds         : {X=-1280,Y=0,Width=1280,Height=1024}
        Working Area   : {X=-1280,Y=0,Width=1280,Height=964}

    Read the article

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At this rate, I was worried about blowing past a 1GB MySQL limit in no time. To deal with this, I created distributed SQLite databases per user to store large data objects. It's worked wonderfully so far. SQLite is super fast and simple. I know that reading from and writing to files is different in a grid hosting environment, so I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched by maybe two locations at the same time (one updating the data hourly at the most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
    I need to have some object hanging around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get to the object). After the second event the object is not needed for my purposes and should be discarded, either by the storage or by my explicit action. So the ideal context where my object should be stored is per request (with guaranteed no sharing issues when different threads are serving requests... or processes/servers :) ). Take into account that the actual implementation I can do is being made from an HttpModule and is supposed to be a pluggable solution for already-written web apps (so the option to provide some state using static/instance variables in Global.asax doesn't look good - I would have to modify Global.asax in every web application). Cache seems to be too broad for this use. I tried to see whether httpContext.Application (of type HttpApplicationState) is good for me or not, but cannot tell whether it is exactly per HttpApplication instance or not (AFAIK you can have several HttpApplication instances used on different threads, and therefore serving several requests simultaneously - in which case storage shared between threads will not work correctly; otherwise I would use it, because one HttpApplication instance serves exactly one request at a time). Something could be done with storing state on the HttpModule instances if I knew for sure that each one is bound 1-to-1 with a running HttpApplication instance (but again, I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance). Any valuable and reputable links on these topics are much appreciated... It would be great to find something particularly well-suited for the per-request situation (because otherwise I may end up with something ugly... probably either some 'broader'-scoped storage with hacks to use different keys in the storage for different requests, OR a thread-local thing, thereby committing to the theory that IIS/ASP.NET will not ever serve the first event from one thread and the second event from another thread, and so on).

    Read the article
