Search Results

Search found 3339 results on 134 pages for 'frequency analysis'.

Page 112 of 134

  • My rhythm game runs choppy even with high frame rate

    - by felipedrl
    I'm coding a rhythm game and the game runs smoothly with uncapped fps. But when I try to cap it around 60, the game updates in little chunks, like hiccups, as if it were skipping frames or running at a very low frame rate. The reason I need to cap the frame rate is that on some computers I tested, the fps varies a lot (from ~80 to ~250 fps), and those drops are noticeable and degrade response time. Since this is a rhythm game, that is very important. This issue is driving me crazy. I've spent a few weeks on it already and still can't figure out the problem, so I hope someone more experienced than me can shed some light on it. I'll put here all the hints I've tried along with two pseudo-code game loops, so I apologize if this post gets too lengthy.

    1st game loop:

        const uint UPDATE_SKIP = 1000 / 60;
        uint nextGameTick = SDL_GetTicks();
        while (isNotDone) {
            // processEvents() is only false when a QUIT event is generated!
            if (processEvents()) {
                if (SDL_GetTicks() > nextGameTick) {
                    update(UPDATE_SKIP);
                    render();
                    nextGameTick += UPDATE_SKIP;
                }
            }
        }

    2nd game loop:

        const uint UPDATE_SKIP = 1000 / 60;
        // frequency is assumed to be initialized once at startup via
        // QueryPerformanceFrequency(&frequency); frameTime carries over
        // from the previous iteration.
        uint frameTime = UPDATE_SKIP;
        while (isNotDone) {
            LARGE_INTEGER startTime;
            QueryPerformanceCounter(&startTime);
            // processEvents() will return false in case a QUIT event is processed
            if (processEvents()) {
                update(frameTime);
                render();
            }
            LARGE_INTEGER endTime;
            do {
                QueryPerformanceCounter(&endTime);
                frameTime = static_cast<uint>((endTime.QuadPart - startTime.QuadPart) * 1000.0 / frequency.QuadPart);
            } while (frameTime < UPDATE_SKIP);
        }

    [1] At first I thought it was a timer resolution problem. I was using SDL_GetTicks, but even when I switched to QueryPerformanceCounter, supposedly much finer-grained, I saw no difference.

    [2] Then I thought it could be due to a rounding error in my position computation, and since game updates are smaller at high fps, such an error would be less noticeable there. There is indeed a small error, but from my tests I realized it is not enough to produce the position jumps I'm getting. Another intriguing factor is that if I enable vsync I get smooth updates at 60 fps regardless of the frame cap code. So why not rely on vsync? Because some computers can force it off in the graphics card configuration.

    [3] I started printing the maximum and minimum frame times measured over a 1-second span, in the hope that every few frames one would take a long time, yet still not enough to drop my fps computation. It turns out that with the frame cap code I always get frame times in the range of [16, 18] ms, and still the game "does not move like Jagger".

    [4] My process' priority is set to HIGH (Windows doesn't allow me to set REALTIME for some reason). As far as I know there is only one thread running along with the game: a sound callback that I don't really have access to (I'm using AudiereLib). I then disabled Audiere by removing it from the project and still got the issue. Maybe there are some other threads running and one of them takes too long to come back right in between my frame-time measurements, I don't know. Is there a way to know which threads are attached to my process?

    [5] Some dynamic data is created while the game runs. It is a bit hard to strip out for testing, so maybe I'll have to try harder on this one.

    Well, as I said, I really don't know what to try next; anything at all would be of great help. What bugs me most is why I get a smooth update at 60 fps with vsync enabled but not at 60 fps without it. Is there a way to implement a software vsync, i.e. query the display sync info?

    Thanks in advance. I appreciate those of you who got this far, and again I apologize for the long post. Best regards from a fellow coder.
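    The symptom described here (steady 16-18 ms frames but visible hiccups) is exactly what a fixed-timestep loop with render interpolation is usually recommended for: without interpolation, the simulation advances on a rigid 16.6 ms grid while frames are presented at slightly drifting times, so positions appear to stutter. Below is a minimal sketch of such a loop, assuming SDL timing and the poster's processEvents/update/render functions; the alpha parameter on render() is an assumed addition, not part of the original code.

        #include <SDL.h>

        // Assumed to exist in the game code, as in the question.
        bool processEvents();
        void update(Uint32 stepMs);
        void render(float alpha);  // assumed new parameter: blend factor in [0, 1)

        const Uint32 TICK_MS = 1000 / 60;  // fixed simulation step

        void gameLoop() {
            Uint32 previous = SDL_GetTicks();
            Uint32 lag = 0;                 // real time not yet simulated
            bool isNotDone = true;
            while (isNotDone) {
                Uint32 current = SDL_GetTicks();
                lag += current - previous;
                previous = current;

                isNotDone = processEvents();

                // Catch up with real time in fixed-size steps.
                while (lag >= TICK_MS) {
                    update(TICK_MS);
                    lag -= TICK_MS;
                }

                // Draw between the last two simulated states instead of
                // snapping to the latest one; this hides the mismatch between
                // the fixed simulation grid and the actual frame times.
                render(static_cast<float>(lag) / TICK_MS);
            }
        }

    Inside render(), each object would be drawn at previousPos + alpha * (currentPos - previousPos), which requires keeping the previous simulated position around; that extra state is the price of the smoothness vsync otherwise gives for free.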

    Read the article

  • Fast Data - Big Data's Achilles' heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.

    Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous queries to filter the Big Data fire hose, chains intelligent events to real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events such as a breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around a grid architecture to provide memory-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.

    Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads. For example, large sensor networks may generate data that needs to be captured while analysts simultaneously extract it using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, while also leveraging that data in holistic MapReduce operations on your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency, scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service.

    The question then becomes how these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or another of these technologies, when it comes to Big Data you often need to use them together. You may need the memory latencies of the Coherence cache but have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database.

    On the stream processing side it is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to keep the real-time loop performing while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load it into a Windows Azure SQL Database using SQL Server Integration Services (SSIS).

    Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (OK, SQL Azure database) is both costly (SQL Azure isn't free) and time consuming (the provided instructions aren't exactly an idiot's guide, and getting SSIS to work properly with Excel isn't a barrel of laughs either). To ease the pain for all you BI folks out there who simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; I'll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and supported by voluntary donations. To view the data, the credentials you need are:

        Server:   mhknbn2kdz.database.windows.net
        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly

    Type those into SSMS and away you go; the data is provided in four tables: [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist].

    I figured this would be a good candidate for a Power View report, so I fired up Excel 2013 and built such a report to slice'n'dice through the data. Here are some screenshots that should give you a flavour of what is available: a view of all the available data; where do all the gymnastics medals go?; which countries do the top ten all-time medal winners come from? You get the idea. There is masses of information here, and if you have Excel 2013 handy, Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself, you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx, so just hit the link and download to play to your heart's content. Party on, people!

    As I said above, the data is hosted on a SQL Azure database that I use for hosting "AdventureWorks on Azure", which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I'll repeat the pertinent parts of that blog post here:

    I am pleased to announce that as of today ... [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use, but SQL Azure is of course not free, so before I give you the credentials please lend me your ears (well, eyes) for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year, please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2, that would just about be enough to keep this up for a year.

    If the community contributes more than we need, there are a number of additional things that could be done: host additional databases (Northwind anyone??); host in more datacentres (this first one is in Western Europe); make a charitable donation. That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution.

    I'd like to emphasize that last point. If my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance.

    @Jamiet

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from simple keyword-based "search boxes" in other Information Discovery products, also include:

    The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search) as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query - not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this, and it would be used to adjust relevance accordingly.

    The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms by which to divide up the results. Determining this second-order relevance is the key to providing effective guidance.

    Support for Queries and Filters. Search is the most common query type, but hardly the only one, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, Endeca provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, an analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.

    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important: they have helped guide our great customers to success.

    Read the article

  • New Oracle BI Applications released

    - by THE
    Oracle has just released two new applications for Oracle Business Intelligence Analytics with the 7.9.6.x Extension Pack:

    · Oracle Manufacturing Analytics, part of the Oracle BI Applications product family, helps discrete and process manufacturing organizations optimize their supply networks by integrating data from across the enterprise value chain, thereby enabling executives, operations managers, cost accountants and production supervisors to make informed and actionable decisions related to manufacturing execution.

    · Oracle Enterprise Asset Management Analytics, part of the Oracle BI Applications product family, offers complete and enhanced visibility into enterprise-wide maintenance information. Pre-built reports covering Maintenance History, Maintenance Cost Analysis and Maintenance Work Orders provide Maintenance Managers with the information to maximize performance, identify potential issues well in advance, and address them before they escalate into serious problems.

    More information about the existing Business Intelligence Analytics applications can be found on this page: http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html

    If you are not familiar with Oracle Manufacturing or Oracle Enterprise Asset Management, these PDFs might get you started:

    http://www.oracle.com/us/products/applications/060289.pdf
    http://www.oracle.com/us/products/applications/057127.pdf

    Read the article

  • HERMES Medical Solutions Helps Save Lives with MySQL

    - by Bertrand Matthelié
    HERMES Medical Solutions was established in 1976 in Stockholm, Sweden, and is a leading innovator in medical imaging hardware/software products for health care facilities worldwide. HERMES delivers a plethora of different medical imaging solutions to optimize hospital workflow. HERMES' advanced algorithms make it possible to detect the smallest changes under therapy, which is important and necessary for optimizing different therapeutic methods and doses.

    Challenges: Fighting illness and disease requires state-of-the-art imaging modalities and software in order to diagnose accurately, stage disease appropriately and select the best treatment available. That meant selecting and implementing a new database platform that would deliver the performance, reliability, security and flexibility required by the high-end medical solutions offered by HERMES.

    Solution: A decision to migrate from an in-house database to an embedded SQL database powering the HERMES products, delivered either as software, as integrated hardware and software solutions, or via the cloud in a software-as-a-service configuration. After evaluating several databases, HERMES selected MySQL based on its high performance, ease of use and integration, and low Total Cost of Ownership. On average, between 4 and 12 terabytes of data are stored in the MySQL databases underpinning the HERMES solutions; the data generated by each medical study is stored for 10 years or more after the treatment was performed. MySQL-based HERMES systems also allow doctors worldwide to conduct new drug research projects leveraging the large amount of medical data collected. Hospitals and other HERMES customers worldwide highly value the "zero administration" capabilities and reliability of MySQL, enabling them to perform medical analysis without any downtime. Relying on MySQL as their embedded database, the HERMES team has been able to increase their focus on further developing their clinical applications. HERMES Medical Solutions could also leverage the Oracle Financing payment plan to spread its investment over time and make the MySQL choice even more valuable.

    "MySQL has proven to be an excellent database choice for us. We offer high-end medical solutions, and MySQL delivers the reliability, security and performance such solutions require." - Jan Bertling, CEO

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) does include a performance and memory profiler, which costs 614€ and is currently on sale with a special offer for 306€. It does include one year of free upgrades. The uneven € numbers are calculated from the 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just-My-Code profiler. For stuff where you do not have the pdbs, or when you want to dig deeper into the BCL code, you will not get far. After getting the profile data, the All Methods grid shows a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code.

    But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I do really miss YourKit's thread stack window. How am I supposed to get a clue, when much memory is allocated and CPU consumption is high, about which places I should look at? The Snapshot summary gives a rough overview, which is ok for a first impression. Next is the Assemblies view. This gives you a list of all loaded assemblies. Not terribly useful.

    The By Type view gives you exactly what it is supposed to. You have to keep in mind that this list is filtered by the types you did check in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft: by default, mscorlib and System are not checked. That is the reason why, the first time, my By Type window looked almost empty. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized and what will happen if you have thousands of objects referencing you.

    That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by looking only at his own code and instances. Showing them more would not help, because the sheer amount of information would overwhelm them, and you need a pretty good understanding of how the GC and the CLR work. When you have a performance issue at a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, ...) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • Speed up executable program Linux. Bit Toggling

    - by AK_47
    I have a ZyBo circuit board, which has an ARMv7 processor. I wrote a C program to output a clock and a corresponding data sequence on a PMOD. The PMOD has a switching speed of up to 50MHz. However, the clock my program creates only reaches a max frequency of 115 Hz. I need this program to output as fast as possible, because the PMOD I'm using is capable of 50MHz. I compiled my program with the following command line:

        gcc -ofast (c_program)

    Here is some sample code:

        #include <stdio.h>
        #include <stdlib.h>

        #define ARRAYSIZE 511

        //________________________________________
        // macros for the SIGNAL PMOD (DATA), ZyBo pin JE1
        //________________________________________
        #define INIT_SIGNAL system("echo 54 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio54/direction");
        #define SIGNAL_ON   system("echo 1 > /sys/class/gpio/gpio54/value");
        #define SIGNAL_OFF  system("echo 0 > /sys/class/gpio/gpio54/value");

        //________________________________________
        // macros for the "CLOCK" PMOD, ZyBo pin JE4
        //________________________________________
        #define INIT_MYCLOCK system("echo 57 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio57/direction");
        #define MYCLOCK_ON   system("echo 1 > /sys/class/gpio/gpio57/value");
        #define MYCLOCK_OFF  system("echo 0 > /sys/class/gpio/gpio57/value");

        int main(void) {
            int myarray[ARRAYSIZE] = { // hard-coded array of signal data
                1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,0,0,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,1,0,0,1,0,0,0,1,0,1,0,0,1,1,1,0,0,1,0,1,0,1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,1,0,0,1,0,1,0,0,1,1,1,1,1,1,0,0,1,0,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,1,0,0,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,1,0,0,0,1,0,0,0,1,0,1,0,1,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,1,0,1,1,1,1,1,0,0,1,1,1,0,0,1,1,0,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,1,1,0,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,0,0,1,1,1,0,1,0,0,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,1,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,0,1,1,1,1,1,1,1,1,0,1,1,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0
            };
            INIT_SIGNAL
            INIT_MYCLOCK

            // infinite loop
            int i;
            do {
                i = 0;
                do {
                    /* 1020 is chosen because it is twice the size needed,
                       allowing for the changes in the clock.
                       (511 = 0-510, 510*2 = 1020 ==> 0-1020 needed, so 1021 it is) */
                    if ((i % 2) == 0) {
                        MYCLOCK_ON;
                        if (myarray[i/2] == 1) {
                            SIGNAL_ON;
                        } else {
                            SIGNAL_OFF;
                        }
                    } else if ((i % 2) == 1) {
                        MYCLOCK_OFF;
                        // no need to change the signal; it just stays at whatever it was
                    }
                    ++i;
                } while (i < 1021);
            } while (1);
            return 0;
        }

    I'm using the 'system' call to tell the system to output 1 volt or 0 volts onto a pin on the board, to represent the data signal and the clock signal (one pin for the data and another for the clock). That was the only way I knew to tell the system to output a voltage. What can I do to get my program's output rate at least up into the megahertz range?
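    Each system() call above forks a shell and runs echo, which costs milliseconds per transition and by itself explains a ceiling in the low hundreds of hertz. A common first fix, sketched below under the assumption that the pins are already exported and configured exactly as in the question, is to open the sysfs value files once and write single bytes to them directly. This typically raises the toggle rate by several orders of magnitude, though sysfs writes still go through the kernel; getting anywhere near the PMOD's 50 MHz on a Zynq board generally means memory-mapping the GPIO controller registers (e.g. via /dev/mem) or generating the waveform in the FPGA fabric instead, which this sketch does not attempt.

        // Sketch: drive the same sysfs GPIOs through pre-opened file
        // descriptors instead of spawning a shell per transition. Assumes
        // gpio54 (data) and gpio57 (clock) are already exported and set to
        // "out"; the bits[] array is a short stand-in for myarray.
        #include <fcntl.h>
        #include <unistd.h>

        static void set_pin(int fd, int level) {
            // pwrite always writes at offset 0 of the sysfs attribute.
            pwrite(fd, level ? "1" : "0", 1, 0);
        }

        int main(void) {
            int data_fd  = open("/sys/class/gpio/gpio54/value", O_WRONLY);
            int clock_fd = open("/sys/class/gpio/gpio57/value", O_WRONLY);
            if (data_fd < 0 || clock_fd < 0)
                return 1;

            const int bits[] = {1, 0, 0, 1, 0, 1};
            const int n = sizeof bits / sizeof bits[0];

            for (;;) {
                for (int i = 0; i < n; ++i) {
                    set_pin(data_fd, bits[i]);   // data valid before the clock edge
                    set_pin(clock_fd, 1);        // rising edge
                    set_pin(clock_fd, 0);        // falling edge
                }
            }
        }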

    Read the article

  • Crystal Ball Live Webcast: Expert insight from EpiX Analytics

    - by Melissa Centurio Lopes
    Register today for the November 2nd live Crystal Ball webcast - Expert insight from EpiX Analytics: Techniques for Improved Risk Management and Decision-Making.

    Join our speaker Dr Huybert Groenendaal, PhD, MSc, MBA, of EpiX Analytics LLC, and learn how to realize the full value of decision-making techniques, and:

    • Gain insight into risks and uncertainties
    • Account for risk in quantitative analysis and decision making
    • Generate a range of possible outcomes and the probabilities they will occur for any choice of action
    • Learn best practice for the use of Crystal Ball to support decision making in your own environment
    • Learn how to avoid common mistakes when using Monte Carlo simulations
    • Maximize your existing investment in spreadsheet technology

    Register now for this November 2nd live webcast and don't miss this opportunity to learn how you can model, predict and forecast with better results. For more information view the evite.

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial consolidation and reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of financial consolidation and reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

    • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes.

    • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.

    • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.

    • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.

    • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis-related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.

    • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which, in extreme circumstances, can impact a company's corporate image and share value. The good news is that businesses realize these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address them. While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:

    • Improving data integrity
    • Optimizing processes
    • Integrating the extended financial close process

    By addressing these issues, and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92

    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which limited scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and stay backward compatible with plugins, there were limits to how far we could go with this approach.

    Memory optimizations almost always come with a related cost; in this case it is the additional I/O that has to be performed to load data on request. On a small site with frequent traffic, this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior.

    All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and one for builds.

    For the jobs cache:

    hudson.jobs.cache.evict_in_seconds (default=60): Seconds from last access (whether from a servlet request or a background cron thread) before a job is purged from the cache. Set this to 0 to never purge based on time.

    hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the builds cache instead.

    hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page you might consider increasing this. First verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache:

    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent jobs cache setting, applied to builds.

    hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and frequently accesses many of them, you might consider bumping this up.

    hudson.job.builds.cache.max_entries (default=10240): The default maximum is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing its upper limit. See the section on monitoring below.

    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
             -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage

    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

        num     #instances         #bytes  class name
        523:            28            896  hudson.model.RunMap$LazyRunValue$Key
        1200:            3             96  hudson.model.LazyTopLevelItem$Key

    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in each cache at any given moment. The size in bytes can be ignored; it is just the size of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory (the 28 builds could all be from 1 job or spread across all 3 jobs). Over time, on an idle system, these should get evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall all the way to zero: access of a job or a build by a cron thread resets the eviction timer.

    Read the article

  • Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF

    - by Echiban
    I think I am stuck in the paralysis of analysis. Please help! I currently have a project that:

    - Uses NHibernate on SQLite
    - Implements the Repository and Unit of Work patterns: http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/04/10/nhibernate-and-the-unit-of-work-pattern.aspx
    - Uses an MVVM strategy in a WPF app

    The Unit of Work implementation in my case supports one NHibernate session at a time. I thought at the time that this made sense; it hides the inner workings of the NHibernate session from the ViewModel. Now, according to Oren Eini (Ayende) in http://msdn.microsoft.com/en-us/magazine/ee819139.aspx, NHibernate sessions should be created and disposed when the view associated with the presenter/viewmodel is disposed. He convinces the audience with the reasons why you don't want one session per Windows app, nor a session created and disposed per transaction. This unfortunately poses a problem, because my UI can easily have 10+ views/viewmodels present in the app at once. He is presenting an MVP strategy, but does his advice translate to MVVM? Does this mean that I should scrap the Unit of Work and have the viewmodel create NHibernate sessions directly? Should a WPF app only have one working session at a time? If that is true, when should I create and dispose an NHibernate session? And I still haven't considered how NHibernate stateless sessions fit into all this! My brain is going to explode. Please help!

    Read the article

  • pylucene: install error

    - by Pradeep
    I am trying to install PyLucene (pylucene-3.3-3-src.tar.gz) on Ubuntu Linux 11.10 with Python 2.7.2. I was able to compile JCC (I think), because I didn't see any errors when I installed it. When I try to install PyLucene I get the following error. Can someone help? Thanks.

        ICU not installed
        /usr/bin/python -m jcc --shared --jar lucene-java-3.3/lucene/build/lucene-core-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/analyzers/common/lucene-analyzers-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/memory/lucene-memory-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/highlighter/lucene-highlighter-3.3.jar --jar build/jar/extensions.jar --jar lucene-java-3.3/lucene/build/contrib/queries/lucene-queries-3.3.jar --jar lucene-java-3.3/lucene/build/contrib/grouping/lucene-grouping-3.3.jar --package java.lang java.lang.System java.lang.Runtime --package java.util java.util.Arrays java.util.HashMap java.util.HashSet java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator --package java.util.regex --package java.io java.io.StringReader java.io.InputStreamReader java.io.FileInputStream --exclude org.apache.lucene.queryParser.Token --exclude org.apache.lucene.queryParser.TokenMgrError --exclude org.apache.lucene.queryParser.QueryParserTokenManager --exclude org.apache.lucene.queryParser.ParseException --exclude org.apache.lucene.search.regex.JakartaRegexpCapabilities --exclude org.apache.regexp.RegexpTunnel --exclude org.apache.lucene.analysis.cn.smart.AnalyzerProfile --python lucene --mapping org.apache.lucene.document.Document 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping java.util.Properties 'getProperty:(Ljava/lang/String;)Ljava/lang/String;' --sequence java.util.AbstractList 'size:()I' 'get:(I)Ljava/lang/Object;' --rename org.apache.lucene.search.highlight.SpanScorer=HighlighterSpanScorer --version 3.3 --module python/collections.py --module python/ICUNormalizer2Filter.py --module python/ICUFoldingFilter.py --module python/ICUTransformFilter.py --files 3 --build
        /usr/bin/python: No module named jcc
        make: *** [compile] Error 1

    Here are the Makefile settings I uncommented:

        PREFIX_PYTHON=/usr
        ANT=ant
        PYTHON=$(PREFIX_PYTHON)/bin/python
        JCC=$(PYTHON) -m jcc --shared
        NUM_FILES=3

    Read the article

  • java.lang.ClassCastException: org.apache.xerces.jaxp.DocumentBuilderFactoryImpl while starting the w

    - by venkat
    Hi,

    As part of our application we are using Apache's Xerces JAXP parser. When we deploy the application on WebLogic 9.2, we get the following error:

        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.cxf.wsdl.WSDLManager' defined in class path resource [META-INF/cxf/cxf.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.apache.cxf.wsdl11.WSDLManagerImpl]: Constructor threw exception; nested exception is java.lang.ClassCastException: org.apache.xerces.jaxp.DocumentBuilderFactoryImpl

    As per our analysis, WebLogic is trying to load its own DocumentBuilderFactoryImpl, which is present in weblogic.jar, instead of Apache's Xerces. We tried the following to force WebLogic to load DocumentBuilderFactoryImpl from Xerces:

    i) We added the following tag to weblogic.xml: <prefer-web-inf-classes>true</prefer-web-inf-classes>

    ii) We put the latest versions of Xalan in the jre/lib/endorsed folder. This didn't resolve our problem.

    iii) We added entries in weblogic-application.xml (webapp.encoding.default set to UTF-8, and package preferences for javax.jws., org.apache.xerces., and org.apache.xerces.jaxp.*), as well as the following entry:

        <parser-factory>
            <saxparser-factory>org.apache.xerces.jaxp.SAXParserFactoryImpl</saxparser-factory>
            <document-builder-factory>org.apache.xerces.jaxp.DocumentBuilderFactoryImpl</document-builder-factory>
            <transformer-factory>org.apache.xalan.processor.TransformerFactoryImpl</transformer-factory>
        </parser-factory>

    iv) We added a jaxp.properties file to jre/lib to load DocumentBuilderFactoryImpl from Xerces and started the server. In this case, WebLogic didn't start.

    v) We then started the server first and copied the jaxp.properties file in at runtime while the server was starting, but with no success.

    None of the above worked for us. Any help is highly appreciated.

    Thanks in advance,
    Venkat

    Read the article

  • How far did DevExpress get with Javascript refactoring?

    - by MattX
    Over a year ago, I remember watching one of DevExpress's evangelists previewing, or at least promoting, rich JavaScript refactoring (beyond just limited IntelliSense) within the Visual Studio shell - as I recall, part of the CodeRush/DevExpress product line. I was excited. On checking today (lmgtfy) I can find only very limited reference to it: just one small italic line about a beta in the product description - no videos, no blog posts, no community buzz. Was it dropped? Vapourware? An implementation so poor that they don't even promote it?

    With JavaScript arguably the most popular programming language ever, and with a VM for it on practically every machine of the last 10 years, why is editor support so poor compared with that for Java and C#? You see the likes of ScottGu bragging that we now have jQuery IntelliSense, but compared to the richness of C# support in the IDE, that is a joke. Someone once said that since there are many styles of writing JavaScript, a rich IDE (beyond IntelliSense) with refactoring support is difficult. But if several engines can interpret/compile JS with the same result, surely it shouldn't be that hard to analyse it to support stuff like rename variable, extract method, move to another namespace (or the JS mimic of it), etc. Am I wrong?

    Read the article

  • How to implement progress bar and backgroundworker for database calls C#?

    - by go-goo-go
    How do I implement a progress bar and a BackgroundWorker for database calls in C#? I have some methods that deal with large amounts of data and take a long time to run, so in my Windows application I want to show users that the data is being processed. I thought of using a progress bar or a status strip label, but since the database methods execute on the single UI thread, UI controls are not updated, and a progress bar or status strip label is useless to me. I've already seen some examples, but they deal with for-loops, e.g.:

        for (int i = 0; i < count; i++) {
            System.Threading.Thread.Sleep(70);
            // ... do analysis ...
            bgWorker.ReportProgress((100 * i) / count);
        }

        private void bgWorker_ProgressChanged(object sender, ProgressChangedEventArgs e) {
            progressBar.Value = Math.Min(e.ProgressPercentage, 100);
        }

    Can anybody give an example where I can use a method call, not a for-loop, and let the progress bar run while this method is executing? Thanks in advance; any help and hint is highly appreciated.

    Read the article

  • MP3 Decoding on Android

    - by Rob Szumlakowski
    Hi. We're implementing a program for Android phones that plays audio streamed from the internet. Here's approximately what we do:

    1. Download a custom encrypted format.
    2. Decrypt it to get chunks of regular MP3 data.
    3. Decode the MP3 data to raw PCM data in a memory buffer.
    4. Pipe the raw PCM data to an AudioTrack.

    Our target devices so far are Droid and Nexus One. Everything works great on the Nexus One, but the MP3 decode is too slow on the Droid: the audio playback starts to skip if we put the Droid under load. We are not permitted to decode the MP3 data to the SD card, but I know that's not our problem anyway. We didn't write our own MP3 decoder; we used MPADEC (http://sourceforge.net/projects/mpadec/). It's free, was easy to integrate with our program, and we compile it with the NDK. After exhaustive analysis with various profiling tools, we're convinced that it's this decoder that is falling behind.

    Here are the options we're thinking about:

    1. Find another MP3 decoder that we can compile with the Android NDK. This decoder would have to be optimized to run on mobile ARM devices, maybe using integer-only math or other optimizations to increase performance.
    2. Since the built-in Android MediaPlayer service will take URLs, we might be able to implement a tiny HTTP server in our program and serve the MediaPlayer with the decrypted MP3s. That way we can take advantage of the built-in MP3 decoder.
    3. Get access to the built-in MP3 decoder through the NDK. I don't know if this is possible.

    Does anyone have any suggestions on what we can do to speed up our MP3 decoding?

    -- Rob Sz

    Read the article

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example: http://github.com/hadley/ and the comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/. However, I'm evaluating whether adopting a version control system such as git would be worthwhile.

    A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX.

    With regard to version control, there are many benefits which I have read about, yet they seem less relevant to the solo data analyst:

    - Backup: I have a backup system already in place.
    - Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly; etc.).
    - Collaboration: Most of the time I am analysing data myself, so I wouldn't get the collaboration benefits of version control.

    There are also several potential costs involved in adopting version control:

    - Time to evaluate and learn a version control system.
    - A possible increase in complexity over my current file management system.

    However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above: Is version control worth the effort? What are the main pros and cons of adopting version control? What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?

    Read the article

  • Objective C - displaying data in NSTextView

    - by Leo
    Hi, I'm having difficulty displaying data in a TextView in iPhone programming. I'm analyzing incoming audio data (from the microphone). In order to do that, I create an object "analyzer" from my SignalAnalyzer class, which performs the analysis of the incoming data. What I would like to do is display each new piece of incoming data in a TextView in real time. So when I push a button, I create the "analyzer" object, which analyzes the incoming data. Each time there is new data, I need to display it on the screen in a TextView.

    My problem is that I'm getting an error because (I think) I'm trying to send a message to the parent class (the one taking care of displaying stuff in my TextView; it has a TextView instance variable linked in Interface Builder). What should I do in order to tell my parent class what it needs to display? Or how should I design my classes so something is displayed automatically? Thank you for your help.

    PS: Here is my error:

        2010-04-19 14:59:39.360 MyApp[1421:5003] void WebThreadLockFromAnyThread(), 0x14a890: Obtaining the web lock from a thread other than the main thread or the web thread. UIKit should not be called from a secondary thread.
        2010-04-19 14:59:39.369 MyApp[1421:5003] bool _WebTryThreadLock(bool), 0x14a890: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now...
        Program received signal: "EXC_BAD_ACCESS".

    Read the article

  • XtraGrid Suite - is there a way to add a button or hyperlink to a cell?

    - by calico-cat
    I'm working with the XtraGrid Suite made by DevExpress. I can't find any sort of functionality to do this, but I'm curious whether you can add a button or hyperlink to a grid cell.

    Context: I've got an Events list. Each Event has a Time, Start/End, and a Category (Utility and Maintenance). There can be Start events and Stop events. Having done my analysis of the problem, I've decided that having a StartTime and an EndTime on each Event would not work. So when an event starts, I'd record the current time in the Event object and set it as a 'Start' event. I'd then like to add a "Stop" button/hyperlink to a cell in that row. If the user wishes to log a Stop event, the event type, etc. would be copied to a new Event with the type 'Stop' and the button would disappear. I hope this makes sense.

    EDIT: Aaronaught's answer is actually better than what I was originally asking for (a button), so I've updated the question. That way, anyone looking to put a hyperlink in a cell can benefit from his example : )

    Read the article

  • RotatingFileHandler throws an exception when delay parameter is set

    - by Eli Courtwright
    When I run the following code under Python 2.6

        import logging
        from logging.handlers import RotatingFileHandler

        rfh = RotatingFileHandler("testing.log", delay=True)
        logging.getLogger().addHandler(rfh)
        logging.warning("Boo!")

    the last line throws AttributeError: RotatingFileHandler instance has no attribute 'level'. So I add the line rfh.setLevel(logging.DEBUG) before the call to addHandler, and then the last line throws AttributeError: RotatingFileHandler instance has no attribute 'filters'. If I manually set filters to be an empty list, it then complains about not having the attribute lock, and so on. When I remove delay=True, leaving it at the default value of False as documented here, the problem completely goes away. Am I missing something? How do I properly use the delay parameter of the RotatingFileHandler class?

    EDIT: Upon further analysis (presented in my own answer below), this looks like a bug, but I can't find a bug report on it in the Python bug tracker, even trying different search terms, so I guess I'll report it. However, if someone can locate the actual bug report, then I can accept that answer for this question and avoid submitting a duplicate report and wasting the time of the Python developers. I'll hold off on reporting the bug for a few hours, and if someone posts an answer with the existing bug report, I'll accept that answer.

    Read the article

  • Gmail IMAP OAuth for desktop clients

    - by Sabya
    Recently Google announced that they are supporting OAuth for Gmail IMAP/SMTP. I browsed through several of their documents, but I am still confused about whether they support OAuth for installed applications. 1. In this documentation they say: "Note: Though the OAuth protocol supports the desktop/installed application use case, Google only supports OAuth for web applications." But they also have a document for OAuth for installed applications. 2. When I read the OAuth specification they point to, it says (in section 11.7): "In many applications, the Consumer application will be under the control of potentially untrusted parties. For example, if the Consumer is a freely available desktop application, an attacker may be able to download a copy for analysis. In such cases, attackers will be able to recover the Consumer Secret used to authenticate the Consumer to the Service Provider." Also, I think the disclaimer in point 1 above is about the Google Data APIs, and IMAP/SMTP is surely not part of them. I understand that for installed applications I can have a setup like this: have a small web app at, say, example.com for my application; this web app talks to Google and gets the access token; the installed application talks to example.com only to get the access token; the installed application then talks to Google with the access token. I am now confused. Is this the only way?
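    A rough sketch of how thin the installed-application side of that setup can be, in plain Python: the example.com token endpoint is hypothetical, and the web app is assumed to have already completed the OAuth dance and built the signed XOAUTH string; only imaplib's AUTHENTICATE call is standard library.

        import imaplib
        import urllib2  # Python 2, contemporary with the post

        # Hypothetical endpoint on your own web app: it has already done the OAuth
        # dance with Google and returns the finished, signed XOAUTH string.
        xoauth_string = urllib2.urlopen("https://example.com/token?user=me").read()

        imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
        # imaplib calls the lambda with the server challenge and sends back the
        # returned value as the SASL response; here that is just the XOAUTH string.
        imap.authenticate("XOAUTH", lambda challenge: xoauth_string)
        imap.select("INBOX")

    Whether Google formally supports this flow for installed apps is exactly the question above; the sketch only illustrates that once a token exists, the IMAP side is straightforward.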

    Read the article

  • Eclipse RCP: Using a different JUnit version (4.3.1) than the one shipped with Eclipse Galileo (4.5.0)

    - by Skrrytch
    Hello, I have the following situation: we are developing an Eclipse RCP application and want to switch from Eclipse 3.4 to Eclipse 3.5. Our JUnit tests use JUnit 4.3.1 and we have a launch configuration to start our test suite. I don't think I need to go into more detail here. The problem is: running the tests with Eclipse 3.5 does not work. JUnit cannot find any annotations in the test classes (neither @Test nor @RunWith). I patched the JUnit library with some logging output to check what is going on, and found that this is a classloading issue: the test class passed to JUnit lives in a ClassLoader that is different from the one JUnit uses to load the annotation classes like RunWith. This was not the case in Eclipse 3.4. In org.junit.internal.requests.ClassRequest:

        public Runner getRunner() {
            log("TestClass ClassLoader: " + this.fTestClass.getClassLoader());
            log("RunWith.class ClassLoader: " + RunWith.class.getClassLoader());
            ... // validating the test class: searching for annotations and more
        }

    The first line prints a different classloader than the second line. This is bad because JUnit cannot match the annotations in the test class against the annotation class (here: RunWith.class): "RunWith" in CL1 is not equal to "RunWith" in CL2. I have a solution which points to the core problem: replace JUnit 4.5 in Eclipse Galileo with JUnit 4.3.1 so that there is only one JUnit version, and the test run and the test classes both use JUnit 4.3.1 (I had to patch "org.eclipse.jdt.junit4.runtime" to accept an earlier JUnit version). I think I could also replace JUnit 4.3.1 in my test classes with version 4.5, but that is not an option yet. My guess: the classloaders are different because the classes come from different JUnit bundles: the test class with its annotations comes from version 4.3.1, while the test run uses version 4.5. What I want to know: is there any other solution besides patching Eclipse (replacing JUnit versions)? Any command-line argument or the like? Any configuration to force Eclipse to use JUnit 4.3.1? Any hints on the analysis described above are welcome!

    Read the article

  • How to avoid Xcode framework weak-linking problems?

    - by Frank R.
    Hi, I'm building an application that takes advantage of Mac OS X 10.6-only technologies, but without giving up backwards compatibility to 10.5 Leopard. The way I do this is by setting the 10.6 SDK as the base SDK, weak-linking all frameworks and setting the deployment target to 10.5, as described in: http://developer.apple.com/mac/library/DOCUMENTATION/MacOSX/Conceptual/BPFrameworks/Concepts/WeakLinking.html This works fine; before making a call that is Snow Leopard-only I need to check that the selector, or indeed the class, actually exists. Or I can just check the OS version before making the call. The problem is that this is incredibly fragile. If I make a single call that is 10.6-only, I break Leopard compatibility. So even using the normal code-completion feature can be dangerous. My question: is there any way of checking which calls are not defined on 10.5 before doing a release build? Some kind of static analysis, or even just a trick (a target set to the other SDK?), would do. I obviously should test on a Leopard machine before releasing anything, but even so I can't possibly go through all paths of the program before every release. Any advice would be appreciated. Best regards, Frank

    Read the article

  • Check for valid IMEI

    - by Tim
    Hi, does somebody know how to check for a valid IMEI? I have found a function to check on this page: http://www.dotnetfunda.com/articles/article597-imeivalidator-in-vbnet-.aspx But it returns False for valid IMEIs (e.g. 352972024585360). I can validate them online on this page: http://www.numberingplans.com/?page=analysis&sub=imeinr What is the correct way (in VB.NET) to check whether a given IMEI is valid? Regards, Tim PS: This function from the above page must be incorrect in some way:

        Public Shared Function isImeiValid(ByVal IMEI As String) As Boolean
            Dim cnt As Integer = 0
            Dim nw As String = String.Empty
            Try
                For Each c As Char In IMEI
                    cnt += 1
                    If cnt Mod 2 <> 0 Then
                        nw += c
                    Else
                        Dim d As Integer = Integer.Parse(c) * 2 ' every second digit has to be doubled
                        nw += d.ToString()                      ' build a new string from the doubled digits
                    End If
                Next
                Dim tot As Integer = 0
                For Each ch As Char In nw.Remove(nw.Length - 1, 1)
                    tot += Integer.Parse(ch)                    ' add all digits together
                Next
                Dim chDigit As Integer = 10 - (tot Mod 10)      ' check digit: the remainder of the sum subtracted from 10
                If chDigit = Integer.Parse(IMEI(IMEI.Length - 1)) Then ' compare with the last digit of the given IMEI
                    Return True
                Else
                    Return False
                End If
            Catch ex As Exception
                Return False
            End Try
        End Function
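    For comparison, here is a small Luhn-check sketch in Python (a reference implementation, not the VB.NET fix itself). One thing worth noting against the function above: when the digit sum is a multiple of 10, the check digit must be 0, so the 10 - (tot Mod 10) step needs a final modulo 10; for 352972024585360 the doubled-digit sum works out to 50, which appears to be exactly the case where the original returns False.

        def is_imei_valid(imei):
            """Luhn check for a 15-digit IMEI (reference sketch)."""
            if len(imei) != 15 or not imei.isdigit():
                return False
            total = 0
            for i, ch in enumerate(imei[:-1]):   # every digit except the trailing check digit
                d = int(ch)
                if i % 2 == 1:                   # double every second digit...
                    d *= 2
                    if d > 9:
                        d -= 9                   # ...and sum its digits (equivalent to subtracting 9)
                total += d
            check = (10 - total % 10) % 10       # the final % 10 maps a sum ending in 0 to check digit 0
            return check == int(imei[-1])

        print(is_imei_valid("352972024585360"))  # expected: True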

    Read the article
