Search Results

Search found 1074 results on 43 pages for 'late'.


  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3-15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming “parent” orders into smaller “child” orders that were then transmitted to various exchanges or trading venues for execution. A tracking ‘cumulative quantity’ function counted the number of ‘child’ orders and stopped the process once the total of child orders matched the ‘parent’, at which point the parent order was complete. Back in the mists of time, some code had been added which was executed only if a particular flag was set. It was called ‘Power Peg’ and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had the flag ever been set, the old ‘Power Peg’ code would have been executed like Godzilla bursting from the ice, creating child orders without limit and without any tracking function. It wasn’t, presumably because the software that set the flag was removed. In 2012, nearly a decade after ‘Power Peg’ was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused, and someone made the fateful decision to reuse it and to replace the old ‘Power Peg’ code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn’t. To quote:

    “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15)

    “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16)

    SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time Knight stopped sending the orders, it had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight’s shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading.

    Obviously, I’m interested in all this because, at one time, I used to write trading systems for the City of London. The US SEC is in a far better position than any of us to work out the failings of Knight’s IT department, and the report makes for painful reading. I can’t help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was ‘all-or-nothing’, and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along, one wants to shout ‘No! Not that way!’ and ‘Aargh! Don’t do it!’. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight’s mistakes weren’t made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they’re obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10-minute speech. He replied "Two weeks". He was then asked how long it would take for a 1-hour speech. "One week", he replied. A 2-hour speech? "I'm ready right now," he replied.  Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude.  Yes, the speakers are a big part of the draw; after all, they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • Separating text strings into a table of individual words in SQL via XML.

    - by Phil Factor
    Nearly nine years ago, Mike Rorke of the SQL Server 2005 XML team blogged ‘Querying Over Constructed XML Using Sub-queries’. I remember reading it at the time without being able to think of a use for what he was demonstrating. Just a few weeks ago, whilst preparing my article on searching strings, I got out my trusty function for splitting strings into words and something reminded me of the old blog. I’d been trying to think of a way of using XML to split strings reliably into words. The routine I devised turned out to be slightly slower than the iterative word chop I’ve always used in the past, so I didn’t publish it. It was then I suddenly remembered the old routine. Here is my version of it. I’ve unwrapped it from its obvious home in a function or procedure just so it is easy to appreciate. What it does is to chop a text string into individual words using XQuery and the good old nodes() method. I’ve benchmarked it and it is quicker than any of the SQL ways of doing it that I know about. Obviously, you can’t use the trick I described here to do it, because it is awkward to use REPLACE() on 1…n characters of whitespace. I’ll carry on using my iterative function since it is able to tell me the location of each word as a character-offset from the start, and also because this method leaves punctuation in (removing it takes time!). However, I can see other uses for this in passing lists as input or output parameters, or as return values.   if exists (Select * from sys.xml_schema_collections where name like 'WordList')   drop XML SCHEMA COLLECTION WordList go create xml schema collection WordList as ' <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="words">        <xs:simpleType>               <xs:list itemType="xs:string" />        </xs:simpleType> </xs:element> </xs:schema>'   go   DECLARE @string VARCHAR(MAX) --we'll get some sample data from the great Ogden Nash Select @String='This is a song to celebrate banks, Because they are full of money and you go into them and all you hear is clinks and clanks, Or maybe a sound like the wind in the trees on the hills, Which is the rustling of the thousand dollar bills. Most bankers dwell in marble halls, Which they get to dwell in because they encourage deposits and discourage withdrawals, And particularly because they all observe one rule which woe betides the banker who fails to heed it, Which is you must never lend any money to anybody unless they don''t need it. I know you, you cautious conservative banks! If people are worried about their rent it is your duty to deny them the loan of one nickel, yes, even one copper engraving of the martyred son of the late Nancy Hanks; Yes, if they request fifty dollars to pay for a baby you must look at them like Tarzan looking at an uppity ape in the jungle, And tell them what do they think a bank is, anyhow, they had better go get the money from their wife''s aunt or ungle. But suppose people come in and they have a million and they want another million to pile on top of it, Why, you brim with the milk of human kindness and you urge them to accept every drop of it, And you lend them the million so then they have two million and this gives them the idea that they would be better off with four, So they already have two million as security so you have no hesitation in lending them two more, And all the vice-presidents nod their heads in rhythm, And the only question asked is do the borrowers want the money sent or do they want to take it withm. Because I think they deserve our appreciation and thanks, the jackasses who go around saying that health and happiness are everything and money isn''t essential, Because as soon as they have to borrow some unimportant money to maintain their health and happiness they starve to death so they can''t go around any more sneering at good old money, which is nothing short of providential. '   --we now turn it into XML declare @xml_data xml(WordList)  set @xml_data='<words>'+ replace(@string,'&', '&amp;')+'</words>'    select T.ref.value('.', 'nvarchar(100)')  from (Select @xml_data.query('                      for $i in data(/words) return                      element li { $i }               '))  A(list) cross apply A.List.nodes('/li') T(ref)     …which gives (truncated, of course)…

    Read the article

  • Thinking differently about BI delivery

    - by jamiet
    My day job involves implementing Business Intelligence (BI) solutions, which, as I have said before, is simply about giving people the information they need to do their jobs. I’m always interested in learning about new ways of achieving that aim, and that is my motivation for writing blog entries that are not concerned with SQL or SQL Server per se. Implementing BI systems usually involves hacking together a bunch of third-party products with some in-house “glue” and delivering information using some shiny, expensive web-based front-end tool; the list of vendors that supply such tools is big and ever-growing. No doubt these tools have their place, and of late I have started to wonder whether they can be supplemented with different ways of delivering information. The problem I have with these separate web-based tools is exactly that – they are separate web-based tools. What’s the problem with that, you might ask? I’ll explain! They force the information worker to go somewhere unfamiliar in order to get the information they need to do their jobs. Would it not be better if we could deliver information into the tools that those information workers are already using and not force them to go somewhere else? I look at the rise of blogging over recent years and I realise that what made blogs popular is that people can subscribe to RSS feeds and have information pushed to them in their tool of choice rather than having to go and find the information for themselves in a tool that has been foisted upon them. Would it not be a good idea to adopt the principle of subscription for the benefit of delivering BI information as well? I think it would, and in the rest of this blog entry I’ll outline such a scenario where the power of subscription could be used to enhance the delivery of information to information workers. Typical questions that information workers ask might be: What are my year-on-year sales figures? What was my footfall yesterday? How many widgets have I sold so far today? Each of those questions includes a time element, and that shouldn’t surprise us; any BI system that I have worked on includes the dimension of time. Now, what do people use to view and organise their time-oriented information? It’s not a trick question: they use a calendar, and in the enterprise space, more often than not, that calendar is managed using Outlook. Given that information workers are already looking at their calendar in Outlook anyway, would it not make sense to deliver information into that same calendar? Of course it would. Calendars are a great way of visualising information such as sales figures. Observe: Just in this single screenshot I have managed to convey a multitude of information. The information worker can see, at a glance, information about hourly/daily/weekly/monthly sales and, moreover, he/she is viewing that information right inside the tool that they use every day. There is no effort required on his/her part; the information just appears hour after hour, day after day. Taking the idea further, each one of those calendar items could be a mini-dashboard in its own right. Double-clicking on an item could show a plethora of other information about that time slot, such as breaking the sales down per region, or year-over-year comparisons. Perhaps the title could employ a sparkline? Loads of possibilities. The point is that calendars are a completely natural way to visualise information; we should make more use of them!
    The real beauty of delivering information using calendars for us BI developers is that it should be so easy. In the case of Outlook we don’t need to write complicated VBA code that can go and manipulate a person’s calendar; simply publishing data in a format that Outlook can understand is sufficient, and happily such formats already exist: iCalendar is the accepted format, and the even more flexible xCalendar is hopefully on its way as well. I’d like to make one last point, and this one is with my SQL Server hat on. Reporting Services 2008 R2 introduced the ability to publish data as subscribable Atom feeds, so it seems logical that it could also be a vehicle for delivering calendar feeds. If you think this would be a good idea go and vote for it at Publish data as iCalendar feeds and please please please add some comments (especially if you vote it down). Work smarter, not harder! @Jamiet
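    P.S. To make the calendar-delivery idea concrete, here is a hand-written sketch of what a single calendar item in such a feed might look like in iCalendar (RFC 5545) format; the feed name, times and figures below are all invented for illustration:

      BEGIN:VCALENDAR
      VERSION:2.0
      PRODID:-//Example BI Feed//Sales//EN
      BEGIN:VEVENT
      UID:sales-20110314-0900@bi.example.com
      DTSTART:20110314T090000Z
      DTEND:20110314T100000Z
      SUMMARY:Sales 09:00-10:00: GBP 12,450 (+8% week-on-week)
      DESCRIPTION:Units sold: 311\nTop region: North\nYear-on-year: +14%
      END:VEVENT
      END:VCALENDAR

    A feed like this, one VEVENT per time slot, is exactly the sort of thing Outlook can subscribe to as an internet calendar, which is all the “delivery” the scenario above requires.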

    Read the article

  • The design of a generic data synchronizer, or, an [object] that does [actions] with the aid of [helpers]

    - by acheong87
    I'd like to create a generic data-source "synchronizer," where data-source "types" may include MySQL databases, Google Spreadsheets documents, CSV files, among others. I've been trying to figure out how to structure this in terms of classes and interfaces, keeping in mind (what I've read about) composition vs. inheritance and is-a vs. has-a, but each route I go down seems to violate some principle. For simplicity, assume that all data-sources have a header-row-plus-data-rows format. For example, assume that the first rows of Google Spreadsheets documents and CSV files will have column headers, a.k.a. "fields" (to parallel database fields). Also, eventually, I would like to implement this in PHP, but avoiding language-specific discussion would probably be more productive. Here's an overview of what I've tried. Part 1/4: ISyncable class CMySQL implements ISyncable GetFields() // sql query, pdo statement, whatever AddFields() RemFields() ... _dbh class CGoogleSpreadsheets implements ISyncable GetFields() // zend gdata api AddFields() RemFields() ... _spreadsheetKey _worksheetId class CCsvFile implements ISyncable GetFields() // read from buffer AddFields() RemFields() ... _buffer interface ISyncable GetFields() AddFields($field1, $field2, ...) RemFields($field1, $field2, ...) ... CanAddFields() // maybe the spreadsheet is locked for write, or CanRemFields() // maybe no permission to alter a database table ... AddRow() ModRow() RemRow() ... Open() Close() ... First Question: Does it make sense to use an interface, as above? Part 2/4: CSyncer Next, the thing that does the syncing. class CSyncer __construct(ISyncable $A, ISyncable $B) Push() // sync A to B Pull() // sync B to A Sync() // Push() and Pull() only differ in direction; factor. // Sync()'s job is to make sure that the fields on each side // match, to add fields where appropriate and possible, to // account for different column-orderings, etc., and of // course, to add and remove rows as necessary to sync. ... _A _B Second Question: Does it make sense to define such a class, or am I treading dangerously close to the "Kingdom of Nouns"? Part 3/4: CTranslator? ITranslator? Now, here's where I actually get lost, assuming the above is passable. Sometimes, two ISyncables speak different "dialects." For example, believe it or not, Google Spreadsheets (accessed through the Google Data API "list feed") returns column headers lower-cased and stripped of all spaces and symbols! That is, sys_TIMESTAMP is systimestamp, as far as my code can tell. (Yes, I am aware that the "cell feed" does not strip the name so; however, cell-by-cell manipulation is too slow for what I'm doing.) One can imagine other hypothetical examples. Perhaps even the data itself can be in different "dialects." But let's take it as given for now, and not argue this if possible. Third Question: How would you implement "translation"? Note: Taking all this as an exercise, I'm more interested in the "idealized" design, rather than the practical one. (God knows that ship sailed when I began this project.) Part 4/4: Further Thought Here's my train of thought to demonstrate I've thunk, albeit unfruitfully: First, I thought, primitively, "I'll just modify CMySQL::GetFields() to lower-case and strip field names so they're compatible with Google Spreadsheets." But of course, then my class should really be called CMySQLForGoogleSpreadsheets, and that can't be right. So, the thing which translates must exist outside of an ISyncable implementor.
And surely it can't be right to make each translation a method in CSyncer. If it exists outside of both ISyncable and CSyncer, then what is it? (Is it even an "it"?) Is it an abstract class, i.e. abstract CTranslator? Is it an interface, since a translator only does, not has, i.e. interface ITranslator? Does it even require instantiation? e.g. If it's an ITranslator, then should its translation methods be static? (I learned what "late static binding" meant, today.) And, dear God, whatever it is, how should a CSyncer use it? Does it "have" it? Is it, "it"? Who am I? ...am I, "I"? I've attempted to break up the question into sub-questions, but essentially my question is singular: How does one implement an object A that conceptually "links" (has) two objects b1 and b2 that share a common interface B, where certain pairs of b1 and b2 require a helper, e.g. a translator, to be handled by A? Something tells me that I've overcomplicated this design, or violated a principle much higher up. Thank you all very much for your time and any advice you can provide.
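    Addendum: one possible shape for the translation piece, sketched in C++ purely to stay language-neutral (all names invented; a sketch, not a prescription): a decorator that wraps any ISyncable and rewrites field names on the way in and out, so neither CSyncer nor the wrapped source ever knows a dialect exists.

      #include <string>
      #include <vector>

      // Hypothetical sketch: translation as a decorator around ISyncable.
      // CSyncer still sees two plain ISyncables; the adapter rewrites field
      // names in both directions, so neither end knows about dialects.
      struct ISyncable {
          virtual ~ISyncable() = default;
          virtual std::vector<std::string> GetFields() = 0;
          virtual void AddFields(const std::vector<std::string>& fields) = 0;
      };

      struct IFieldTranslator {
          virtual ~IFieldTranslator() = default;
          virtual std::string ToCanonical(const std::string& name) = 0;
          virtual std::string FromCanonical(const std::string& name) = 0;
      };

      class TranslatingSyncable : public ISyncable {
      public:
          TranslatingSyncable(ISyncable& inner, IFieldTranslator& tr)
              : inner_(inner), tr_(tr) {}

          std::vector<std::string> GetFields() override {
              std::vector<std::string> out;
              for (const auto& f : inner_.GetFields())
                  out.push_back(tr_.ToCanonical(f));   // dialect -> canonical
              return out;
          }
          void AddFields(const std::vector<std::string>& fields) override {
              std::vector<std::string> in;
              for (const auto& f : fields)
                  in.push_back(tr_.FromCanonical(f));  // canonical -> dialect
              inner_.AddFields(in);
          }
      private:
          ISyncable& inner_;
          IFieldTranslator& tr_;
      };
      // CSyncer(a, TranslatingSyncable(b, googleDialect)) then needs no
      // translation logic of its own.

    On this shape, the translator is an "it" with instances (one per dialect pair), it is owned by the adapter rather than by CSyncer, and nothing needs to be static.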

    Read the article

  • How do I dig myself out of this DEEP hole? [closed]

    - by user74847
    I may be a bit biased in the way I word this, but any opinions and suggestions are welcome. I should start by saying I have an MSc in CS and a degree in new media, plus 6 years' experience, and I'm probably around a middleweight developer. I started a web development company with my friend from uni a year ago; there was a 4-month gap in the middle where I went miles away to work on a big project. I've since returned and picked up where we left off. A year on, though, I find I'm still staying up til 5am and getting up at 9, sometimes going 2-3 days without sleep. While I was away I was working 9-5 and struggling to keep up with doing stuff for my clients 8 hours ahead, after work, so things stagnated. We currently have about 12 active projects, with one other part-time developer and a full-time freelancer who is dealing with one of our major projects. I am solely responsible for concurrently developing 2 big sites similar to Gumtree in functionality, at the same time as about 5-6+ small WordPress-based 5-10-page sites. A lot of the content isn't in yet, or the client is delaying, so I chop and change project every other day, which does my head in. Is it reasonable to expect myself to remember the intricate details of each project when I come back to it a week later? And remember the details of a task which hasn't been written down? My business partner seems to think so. Or am I just forgetful? I'm particularly bad at estimating timescales, which doesn't help. Added to that, a lot of the technologies I am using are new to me (a Magento site took weeks to theme rather than days and was full of bugs, even after 1000s of Google searches and hours reading forums). I'm still trying to learn and find the best CMS for us to use, and getting my head around the likes of Bootstrap and jQuery, and cPanel/Linux (we just got a blank VPS for me to set up, with no experience). Even installing an SSL certificate caused everyone's mail clients to go down, which was more stress for me to sort out. I find the pressure of the workload and timescales, and trying to learn this stuff so fast, is beginning to turn me against my career path. The fact that I never seem to get anything done really winds up my business partner, and I've come to associate him with the stress and pain of the whole situation, especially when I get berated or a look that says "oh you retard" when I forget something. Even today I spent hours learning how a particular ThemeForest theme worked with WordPress and how I could twist it to work for our particular needs; on the surface I had done no work, and that triggered a 30-minute tirade of anger and stress and questioning of what I had done from my business partner. Had I taken too long to work on that? Should I have done it in 2 hours instead of 6? I told him I would take 2 hours. I was wrong. I feel like I'm running myself into the ground. My sleeping pattern has got so bad that when I'm working I'm half asleep and making mistakes. My eyes are constantly purple underneath, and I literally fall asleep at my desk. It's affecting my social life too. I've not slept more than lightly for the last year, and I grind through impossible code puzzles in my half-sleep, which keeps me awake when I'm already exhausted. Plus the work is rushed and buggy when it does get done, so it drags on into the next project. I also procrastinate quite badly, pacing the living room and looking out the window when I'm alone for three days straight in the flat, and I start to get cabin fever, which means I do even less work, and the negative feedback loop continues.
    I get told I'm the only one with the problem when I say that I can't work from home any more, and examples of other freelancers get brought up. An office wouldn't bring any extra cash into the company, but I'm convinced that moving more than 2 meters away from my bed to go to "work" would get me working; at the moment I feel guilty, like I should be working 24-7. It is important that we do all this work to raise enough cash to get our business to the next level, but every month still feels like a struggle to pay the rent (there is about £20K coming in by Jan), and I have to borrow money from friends often to buy food or get a taxi to a meeting, so it is vital the money keeps coming in. (I'm also 20 mins late for nearly all meetings, but that's a different issue.) Have you experienced anything similar? How can I deal with the issues I've raised? Is it realistic to develop 10 sites at once? How can I improve my relationship with my business partner? Do you struggle to work at home? How do you deal with that? I think if I don't get my life on track by Feb I will seriously consider giving it all up, but that seems like such a waste. Any ideas!? I need help! Thanks.

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product. A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there’s a hidden issue in this story that most of the American people don’t understand: delivering a great commerce customer experience (CX) is hard. It shouldn’t be, but it is. The reality of the government’s issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well.  If there’s one thing the botched launch of the site has taught us, it’s that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don’t go as planned. It may even give you a moment of solace – we have the same issues! But why?  Organizations engage too many separate vendors with different technologies, running sections or pieces of a site to get it live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But, it’s the reality of running a site with legacy technology constraints in today’s demanding, customer-centric market. This approach also means there are a lot of cooks in lots of different kitchens. You’ve got development and IT, the business and the marketing team, an external Systems Integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union. Digital commerce has been commonplace for 15 years. Yet, getting a site live, maintained and performing requires orchestrating a cast of thousands (or at least, dozens), big dollars, and some finger-crossing. But it shouldn’t. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it’s forced organizations to think from the outside, in. Consumers – whether they’re shopping for shoes or a new healthcare plan – don’t care about what technology issues or processes you have behind the scenes. They just want it to work.  They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn’t sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly.  But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels.  The last thing on a customer’s mind is making excuses for why they can’t buy a product – just get it to work. So what is the government doing? My guess is that they are working day and night to get the site performing, and having to throw big money at the problem. In the meantime they’re sending frustrated online users to the call center, or even a location where a trained “navigator” can help them in person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
    One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless of whether you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority, and those early to the game have already begun to outpace their competition. So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick-and-mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and HTTP networking code. All C++. Most of it originated from other developers who have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread, and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge-case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempted fixes are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include the following: Common mistake #1 - fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is executing. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common mistake #2 - fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case that the object just got updated by the other thread! Common mistake #3 - even though the objects are reference-counted, the shutdown sequence "releases" its pointer but forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, then spurious or late callbacks are invoked on an object in a state not expecting any more calls. There are other edge cases, but the bottom line is this: multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer on developing a more appropriate fix. But I suspect they are often confused about how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this.
    For each significant platform feature, have a dedicated single thread onto which all events and network callbacks get marshalled. Similar to COM apartment threading in Windows, with use of a message loop. Long blocking operations could still get dispatched to a work-pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world (a rough sketch of this idea follows below). Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following: Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for. (That is, a UML equivalent for discussing threads across objects and classes.) Educating your development team on the issues with multithreaded code. What would you do?
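    For illustration only, here is a minimal C++11 sketch of that dedicated-thread idea (invented names; a sketch under the assumptions above, not code from the codebase in question): a component owns one thread that drains a task queue, so everything posted to it runs serially.

      #include <condition_variable>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>

      // Minimal sketch: a component thread that executes posted tasks one at
      // a time. Code invoked via Post() can assume a single-threaded world.
      class ComponentThread {
      public:
          ComponentThread() : done_(false), worker_([this] { Run(); }) {}

          ~ComponentThread() {
              {
                  std::lock_guard<std::mutex> lock(mutex_);
                  done_ = true;
              }
              wake_.notify_one();
              worker_.join();  // drains any remaining tasks, then exits
          }

          // Callable from any thread, e.g. a pool thread's completion handler.
          void Post(std::function<void()> task) {
              {
                  std::lock_guard<std::mutex> lock(mutex_);
                  tasks_.push(std::move(task));
              }
              wake_.notify_one();
          }

      private:
          void Run() {
              for (;;) {
                  std::function<void()> task;
                  {
                      std::unique_lock<std::mutex> lock(mutex_);
                      wake_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                      if (tasks_.empty()) return;  // done_ set and queue drained
                      task = std::move(tasks_.front());
                      tasks_.pop();
                  }
                  task();  // runs outside the lock, serially with other tasks
              }
          }

          std::mutex mutex_;
          std::condition_variable wake_;
          std::queue<std::function<void()>> tasks_;
          bool done_;
          std::thread worker_;
      };

    A pool thread's completion handler would then call something like component.Post([=]{ foo->OnHttpRequestComplete(status); }); instead of invoking the method directly, which removes the need for most of the per-object locking described above.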

    Read the article

  • The application called an interface that was marshalled for a different thread

    - by X-Ray
    I'm writing a Delphi app that communicates with Excel. One thing I noticed is that if I call the Save method on the Excel workbook object, it can appear to hang, because Excel has a dialog box open for the user. I'm using late binding. I'd like my app to be able to notice when Save takes several seconds and then take some kind of action, like showing a dialog box telling the user what's happening. I figured this'd be fairly easy: all I'd need to do is create a thread that calls Excel's Save routine. If it takes too long, I can take some action. procedure TOfficeConnect.Save; var Thread:TOfficeSaveThread; begin // spin off as thread so we can control timeout Thread:=TOfficeSaveThread.Create(m_vExcelWorkbook); if WaitForSingleObject(Thread.Handle, 5 {s} * 1000 {ms/s})=WAIT_TIMEOUT then begin Thread.FreeOnTerminate:=true; raise Exception.Create(_('The Office spreadsheet program seems to be busy.')); end; Thread.Free; end; TOfficeSaveThread = class(TThread) private { Private declarations } m_vExcelWorkbook:variant; protected procedure Execute; override; procedure DoSave; public constructor Create(vExcelWorkbook:variant); end; { TOfficeSaveThread } constructor TOfficeSaveThread.Create(vExcelWorkbook:variant); begin inherited Create(true); m_vExcelWorkbook:=vExcelWorkbook; Resume; end; procedure TOfficeSaveThread.Execute; begin m_vExcelWorkbook.Save; end; I understand this problem happens because the OLE object was created from another thread (absolutely). How can I get around this problem? Most likely I'll need to "re-marshall" it for this call somehow... any ideas? Thank you!
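    For what it's worth, the textbook cure for this error is to marshal the interface into the worker thread instead of handing over the raw pointer. Below is a minimal C++ sketch of the mechanism; the same two calls are exposed to Delphi via the ActiveX unit, and all names here are invented for illustration:

      #include <windows.h>

      // Sketch: marshal an IDispatch from the thread that created it (the
      // main thread) into a worker thread, then call through the proxy there.

      static IStream *g_stream = NULL;

      // Main thread: package the interface for exactly one other thread.
      HRESULT PrepareForWorker(IDispatch *excelWorkbook) {
          return CoMarshalInterThreadInterfaceInStream(IID_IDispatch,
                                                       excelWorkbook, &g_stream);
      }

      // Worker thread: initialize COM, then unmarshal a proxy that is legal
      // to call from this thread; COM routes the calls back correctly.
      DWORD WINAPI WorkerProc(LPVOID) {
          CoInitialize(NULL);
          IDispatch *proxy = NULL;
          HRESULT hr = CoGetInterfaceAndReleaseStream(g_stream, IID_IDispatch,
                                                      (void **)&proxy);
          if (SUCCEEDED(hr)) {
              // ... invoke Save through 'proxy' (IDispatch::Invoke) ...
              proxy->Release();
          }
          CoUninitialize();
          return 0;
      }

    Translated to the Delphi code above, that would mean marshalling m_vExcelWorkbook's IDispatch in TOfficeConnect.Save and unmarshalling it at the top of TOfficeSaveThread.Execute, rather than copying the variant into the thread.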

    Read the article

  • Usage of IcmpSendEcho2 with an asynchronous callback

    - by Ben Voigt
    I've been reading the MSDN documentation for IcmpSendEcho2 and it raises more questions than it answers. I'm familiar with asynchronous callbacks from other Win32 APIs such as ReadFileEx... I provide a buffer which I guarantee will be reserved for the driver's use until the operation completes with any result other than IO_PENDING, I get my callback in case of either success or failure (and call GetCompletionStatus to find out which). Timeouts are my responsibility and I can call CancelIo to abort processing, but the buffer is still reserved until the driver cancels the operation and calls my completion routine with a status of CANCELLED. And there's an OVERLAPPED structure which uniquely identifies the request through all of this. IcmpSendEcho2 doesn't use an OVERLAPPED context structure for asynchronous requests. And the documentation is excessively minimalist about what happens if the ping times out or fails (failure would be lack of a network connection, a missing ARP entry for local peers, an ICMP destination-unreachable response from an intervening router for remote peers, etc.). Does anyone know whether the callback occurs on timeout and/or failure? And especially, if no response comes, can I reuse the buffer for another call to IcmpSendEcho2, or is it forever reserved in case a reply comes in late? I want to use this function from a Win32 service, which means I have to get the error-handling cases right and I can't just leak buffers (or if the API does leak buffers, I have to use a helper process so I have a way to abandon requests). There's also an ugly incompatibility in the way the callback is made. It looks like the first parameter is consistent between the two signatures, so I should be able to use the newer PIO_APC_ROUTINE as long as I only use the second parameter if an OS version check returns Vista or newer? Although MSDN says "don't do a Windows version check", it seems like I need to, because the set of versions with the new argument isn't the same as the set of versions where the function exists in iphlpapi.dll. Pointers to additional documentation or working code which uses this function and an APC would be much appreciated. Please also let me know if this is completely the wrong approach - i.e. if either using raw sockets or some combination of IcmpCreateFile+WriteFileEx+ReadFileEx would be more robust.
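    For orientation, here is a minimal sketch of the call pattern in question, assembled from the documented signatures. Whether the APC actually fires on timeout or failure is exactly the open question above, so treat this as a starting point, not an answer; the destination address and payload are arbitrary.

      #include <winsock2.h>
      #include <iphlpapi.h>
      #include <icmpapi.h>
      #include <stdio.h>
      // link with iphlpapi.lib

      // Reply buffer: one ICMP_ECHO_REPLY plus payload plus 8 bytes for an
      // ICMP error, per the IcmpSendEcho2 docs. It must stay valid until the
      // request completes -- which is the crux of the question above.
      static unsigned char g_reply[sizeof(ICMP_ECHO_REPLY) + 32 + 8];

      // Vista+ callback shape; the second parameter is typed loosely so the
      // same routine can be cast to the older single-argument form as well.
      static VOID NTAPI OnPingComplete(PVOID context, PVOID iosb, ULONG reserved) {
          DWORD replies = IcmpParseReplies(g_reply, sizeof(g_reply));
          printf("APC delivered, IcmpParseReplies -> %lu\n", replies);
      }

      int main(void) {
          HANDLE icmp = IcmpCreateFile();
          if (icmp == INVALID_HANDLE_VALUE) return 1;

          char payload[32] = "ping";
          IPAddr dest = 0x08080808;  // 8.8.8.8, already in network byte order

          DWORD rc = IcmpSendEcho2(icmp, NULL, (PIO_APC_ROUTINE)OnPingComplete,
                                   NULL, dest, payload, sizeof(payload), NULL,
                                   g_reply, sizeof(g_reply), 1000 /* ms */);
          if (rc == 0 && GetLastError() == ERROR_IO_PENDING)
              SleepEx(2000, TRUE);  // APCs are only delivered in alertable waits
          IcmpCloseHandle(icmp);
          return 0;
      }

    Note the alertable wait: without SleepEx/WaitForSingleObjectEx with the alertable flag set, the APC never runs on the requesting thread, which can masquerade as "the callback doesn't fire".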

    Read the article

  • To connect GStreamer with Qt in order to play a GStreamer video in a Qt widget

    - by raggio
    I tried using Phonon to play the video but could not succeed. Of late, I came to know through the Qt forums that even the latest version of Qt does not support Phonon. That's when I started using GStreamer. Any suggestions as to how to connect the GStreamer window with the Qt widget? My aim is to play a video using GStreamer on the Qt widget. So how do I link the GStreamer window and the Qt widget? I am successful in getting the id of the widget through winId(). Further, with the help of Gregory Pakosz, I have added the below 2 lines of code in my application - QApplication::syncX(); gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId()); However I am not able to link the Qt widget with the GStreamer video window. This is what my sample code would look like :- int main(int argc, char *argv[]) { QApplication app(argc, argv); QWidget w; w.show(); printf("winid=%lu\n", (unsigned long)w.winId()); gst_init (NULL,NULL); GstElement *bin, *filesrc, *demux, *vdecoder, *videosink; /* create a new bin to hold the elements */ bin = gst_pipeline_new ("pipeline"); /* create a disk reader */ filesrc = gst_element_factory_make ("filesrc", "disk_source"); g_assert (filesrc); g_object_set (G_OBJECT (filesrc), "location", "PATH_TO_THE_EXECUTABLE", NULL); demux = gst_element_factory_make ("mpegtsdemux", "demuxer"); if (!demux) { g_print ("could not find plugin \"mpegtsdemux\""); return -1; } vdecoder = gst_element_factory_make ("mpeg2dec", "decode"); if (!vdecoder) { g_print ("could not find plugin \"mpeg2dec\""); return -1; } videosink = gst_element_factory_make ("xvimagesink", "play_video"); g_assert (videosink); /* add objects to the main pipeline */ gst_bin_add_many (GST_BIN (bin), filesrc, demux, vdecoder, videosink, NULL); /* link the elements */ gst_element_link_many (filesrc, demux, vdecoder, videosink, NULL); gst_element_set_state(videosink, GST_STATE_READY); QApplication::syncX(); gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId()); /* start playing */ gst_element_set_state (bin, GST_STATE_PLAYING); return app.exec(); } Could you explain in more detail the usage of gst_x_overlay_set_xwindow_id() with respect to my context? Could I get any hint as to how I can integrate GStreamer under Qt? Please help me solve this problem.
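    As a sketch of the usual GStreamer 0.10 answer (not guaranteed for other versions): rather than calling gst_x_overlay_set_xwindow_id() once up front, watch the pipeline's bus for the sink's "prepare-xwindow-id" element message and hand over the widget's native window id at the moment the sink asks for it. Something along these lines, with the wiring shown in comments:

      #include <gst/gst.h>
      #include <gst/interfaces/xoverlay.h>

      /* Runs synchronously on the streaming thread, which is the point: the
       * window id is installed before the sink creates its own window. */
      static GstBusSyncReply
      bus_sync_handler (GstBus *bus, GstMessage *message, gpointer user_data)
      {
        if (GST_MESSAGE_TYPE (message) != GST_MESSAGE_ELEMENT ||
            !gst_structure_has_name (message->structure, "prepare-xwindow-id"))
          return GST_BUS_PASS;   /* not ours, forward it */

        gulong win_id = *(gulong *) user_data;  /* widget's winId(), saved earlier */
        gst_x_overlay_set_xwindow_id (GST_X_OVERLAY (GST_MESSAGE_SRC (message)),
                                      win_id);
        gst_message_unref (message);
        return GST_BUS_DROP;     /* handled, don't forward */
      }

      /* Wiring, before setting the pipeline to PLAYING:
       *   static gulong win_id;
       *   win_id = w.winId();
       *   GstBus *bus = gst_pipeline_get_bus (GST_PIPELINE (bin));
       *   gst_bus_set_sync_handler (bus, bus_sync_handler, &win_id);
       *   gst_object_unref (bus);
       */

    This avoids the race in the code above, where the sink may create its own window before the one-shot gst_x_overlay_set_xwindow_id() call takes effect.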

    Read the article

  • WPF ListBox not binding to INotifyCollectionChanged or INotifyPropertyChanged Events

    - by Gabe Anzelini
    I have the following test code: private class SomeItem{ public string Title{ get{ return "something"; } } public bool Completed { get { return false; } set { } } } private class SomeCollection : IEnumerable<SomeItem>, INotifyCollectionChanged { private IList<SomeItem> _items = new List<SomeItem>(); public void Add(SomeItem item) { _items.Add(item); CollectionChanged(this, new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset)); } #region IEnumerable<SomeItem> Members public IEnumerator<SomeItem> GetEnumerator() { return _items.GetEnumerator(); } #endregion #region IEnumerable Members System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return _items.GetEnumerator(); } #endregion #region INotifyCollectionChanged Members public event NotifyCollectionChangedEventHandler CollectionChanged; #endregion } private SomeCollection collection = new SomeCollection(); private void Expander_Expanded(object sender, RoutedEventArgs e) { var expander = (Expander) sender; var list = expander.DataContext as ITaskList; var listBox = (ListBox)expander.Content; //list.Tasks.CollectionChanged += CollectionChanged; collection.Add(new SomeItem()); collection.Add(new SomeItem()); listBox.ItemsSource = collection; } and the XAML. The outer listbox gets populated on load. When the expander gets expanded, I then set the ItemsSource property of the inner listbox (the reason I do this here instead of using binding is that this operation is quite slow, and I only want it to take place if the user chooses to view the items). The inner listbox renders fine, but it doesn't actually subscribe to the CollectionChanged event on the collection. I have tried this with ICollection instead of IEnumerable and adding INotifyPropertyChanged, as well as replacing INotifyCollectionChanged with INotifyPropertyChanged. The only way I can actually get this to work is to gut my SomeCollection class and inherit from ObservableCollection. My reasoning for trying to roll my own INotifyCollectionChanged instead of using ObservableCollection is that I am wrapping a COM collection in the real code. That collection will notify on add/change/remove, and I am trying to convert these to INotify events for WPF. Hope this is clear enough (it's late).

    Read the article

  • How do I create a partial function with generics in scala?

    - by Matteo Caprari
    Hello. I'm trying to write a performance measurements library for Scala. My idea is to transparently 'mark' sections so that the execution time can be collected. Unfortunately I wasn't able to bend the compiler to my will. An admittedly contrived example of what I have in mind: // generate a timing function val myTimer = mkTimer('myTimer) // see how the timing function returns the right type depending on the // type of the function that is passed to it val act = actor { loop { receive { case 'Int => val calc = myTimer { (1 to 100000).sum } val result = calc + 10 // calc must be Int self reply (result) case 'String => val calc = myTimer { (1 to 100000).mkString } val result = calc + " String" // calc must be String self reply (result) } Now, this is the farthest I got: trait Timing { def time[T <: Any](name: Symbol)(op: => T) :T = { val start = System.nanoTime val result = op val elapsed = System.nanoTime - start println(name + ": " + elapsed) result } def mkTimer[T <: Any](name: Symbol) : (() => T) => () => T = { type c = () => T time(name)(_ : c) } } Using the time function directly works, and the compiler correctly uses the return type of the anonymous function to type the 'time' function: val bigString = time('timerBigString) { (1 to 100000).mkString("-") } println (bigString) Great as it seems, this pattern has a number of shortcomings: it forces the user to reuse the same symbol at each invocation, it makes it more difficult to do more advanced stuff like predefined project-level timers, and it does not allow the library to initialize a data structure for 'timerBigString once. So here comes mkTimer, which would allow me to partially apply the time function and reuse it. I use mkTimer like this: val myTimer = mkTimer('aTimer) val myString= myTimer { (1 to 100000).mkString("-") } println (myString) But I get a compiler error: error: type mismatch; found : String required: () => Nothing (1 to 100000).mkString("-") I get the same error if I inline the currying: val timerBigString = time('timerBigString) _ val bigString = timerBigString { (1 to 100000).mkString("-") } println (bigString) This works if I do val timerBigString = time('timerBigString) (_: String), but this is not what I want. I'd like to defer typing of the partially applied function until application. I conclude that the compiler is deciding the return type of the partial function when I first create it, choosing "Nothing" because it can't make a better informed choice. So I guess what I'm looking for is a sort of late binding of the partially applied function. Is there any way to do this? Or maybe there is a completely different path I could follow? Well, thanks for reading this far -teo

    Read the article

  • Securing a license key with an RSA key.

    - by Jesse Knott
    Hello, it's late, I'm tired, and probably being quite dense... I have written an application that I need to secure so it will only run on machines that I generate a key for. What I am doing for now is getting the BIOS serial number and generating a hash from that; I then encrypt it using an XML RSA private key. I then sign the XML to ensure that it is not tampered with. I am trying to package the public key to decrypt and verify the signature with, but every time I try to execute the code as a different user than the one that generated the signature, I get a failure on the signature. Most of my code is modified from sample code I have found, since I am not as familiar with RSA encryption as I would like to be. Below is the code I was using, and the code I thought I needed to use to get this working right... Any feedback would be greatly appreciated, as I am quite lost at this point. The original code I was working with was this; it works fine as long as the user launching the program is the same one that signed the document originally... CspParameters cspParams = new CspParameters(); cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; cspParams.Flags = CspProviderFlags.UseMachineKeyStore; // Create a new RSA signing key and save it in the container. RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(cspParams) { PersistKeyInCsp = true, }; This code is what I believe I should be doing, but it fails to verify the signature no matter what I do, regardless of whether it's the same user or a different one... RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(); //Load the private key from xml file XmlDocument xmlPrivateKey = new XmlDocument(); xmlPrivateKey.Load("KeyPriv.xml"); rsaKey.FromXmlString(xmlPrivateKey.InnerXml); I believe this has something to do with the key container name (being a real dumbass here, please excuse me). I am quite certain that this is the line that is both causing it to work in the first case and preventing it from working in the second case.... cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; Is there a way for me to sign/encrypt the XML with a private key when the application license is generated, and then drop the public key in the app directory and use that to verify/decrypt the code? I can drop the encryption part if I can get the signature part working right. I was using it as a backup to obfuscate the origin of the license code I am keying from. Does any of this make sense? Am I a total dunce? Thanks for any help anyone can give me in this.

    Read the article

  • Conceal packet loss in PCM stream

    - by ZeroDefect
    I am looking to use 'Packet Loss Concealment' to conceal lost PCM frames in an audio stream. Unfortunately, I cannot find a library that is accessible without all the licensing restrictions and code bloat (I'm up for some suggestions, though). I have located some GPL code written by Steve Underwood for the Asterisk project which implements PLC. There are several limitations, although, as Steve suggests in his code, the algorithm can be applied to different streams with a bit of work. Currently, the code works with 8kHz 16-bit signed mono streams. Variations of the code can be found through a simple search of Google Code Search. My hope is that I can adapt the code to work with other streams. Initially, the goal is to adjust the algorithm for 8+ kHz, 16-bit signed, multichannel audio (all in a C++ environment). Eventually, I'm looking to make the code available under the GPL license in hopes that it could be of benefit to others... The code below includes my efforts so far. It includes a main function that will "drop" a number of frames with a given probability. Unfortunately, the code does not quite work as expected. I'm receiving EXC_BAD_ACCESS when running in gdb, but I don't get a trace from gdb when using the 'bt' command. Clearly, I'm trampling on memory somewhere, but I am not sure exactly where. When I comment out the amdf_pitch function, the code runs without crashing... #include <cstdlib> #include <ctime> #include <fstream> #include <iostream> #include <stdint.h> #include "audio/PcmConcealer.hpp" int main (int argc, char *argv[]) { std::ifstream fin("C:\\cc32kHz.pcm"); if(!fin.is_open()) { std::cout << "Failed to open input file" << std::endl; return 1; } std::ofstream fout_repaired("C:\\cc32kHz_repaired.pcm"); if(!fout_repaired.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } std::ofstream fout_lossy("C:\\cc32kHz_lossy.pcm"); if(!fout_lossy.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } audio::PcmConcealer Concealer; Concealer.Init(1, 16, 32000); //Generate random numbers; srand( time(NULL) ); int value = 0; int probability = 5; while(!fin.eof()) { char arr[2]; fin.read(arr, 2); //Generate's random number; value = rand() % 100 + 1; if(value <= probability) { char blank[2] = {0x00, 0x00}; fout_lossy.write(blank, 2); //Fill in data; Concealer.Fill((int16_t *)blank, 1); fout_repaired.write(blank, 2); } else { //Write data to file; fout_repaired.write(arr, 2); fout_lossy.write(arr, 2); Concealer.Receive((int16_t *)arr, 1); } } fin.close(); fout_repaired.close(); fout_lossy.close(); return 0; } PcmConcealer.hpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #ifndef __PCMCONCEALER_HPP__ #define __PCMCONCEALER_HPP__ #include <vector> /** 1. What does it do? The packet loss concealment module provides a suitable synthetic fill-in signal, to minimise the audible effect of lost packets in VoIP applications. It is not tied to any particular codec, and could be used with almost any codec which does not specify its own procedure for packet loss concealment. Where a codec specific concealment procedure exists, the algorithm is usually built around knowledge of the characteristics of the particular codec. It will, therefore, generally give better results for that particular codec than this generic concealer will. 2. How does it work? While good packets are being received, the plc_rx() routine keeps a record of the trailing section of the known speech signal.
If a packet is missed, plc_fillin() is called to produce a synthetic replacement for the real speech signal. The average magnitude difference function (AMDF) is applied to the last known good signal, to determine its effective pitch. Based on this, the last pitch period of signal is saved. Essentially, this cycle of speech will be repeated over and over until the real speech resumes. However, several refinements are needed to obtain smooth pleasant sounding results. - The two ends of the stored cycle of speech will not always fit together smoothly. This can cause roughness, or even clicks, at the joins between cycles. To soften this, the 1/4 pitch period of real speech preceding the cycle to be repeated is blended with the last 1/4 pitch period of the cycle to be repeated, using an overlap-add (OLA) technique (i.e. in total, the last 5/4 pitch periods of real speech are used). - The start of the synthetic speech will not always fit together smoothly with the tail of real speech passed on before the erasure was identified. Ideally, we would like to modify the last 1/4 pitch period of the real speech, to blend it into the synthetic speech. However, it is too late for that. We could have delayed the real speech a little, but that would require more buffer manipulation, and hurt the efficiency of the no-lost-packets case (which we hope is the dominant case). Instead we use a degenerate form of OLA to modify the start of the synthetic data. The last 1/4 pitch period of real speech is time reversed, and OLA is used to blend it with the first 1/4 pitch period of synthetic speech. The result seems quite acceptable. - As we progress into the erasure, the chances of the synthetic signal being anything like correct steadily fall. Therefore, the volume of the synthesized signal is made to decay linearly, such that after 50ms of missing audio it is reduced to silence. - When real speech resumes, an extra 1/4 pitch period of synthetic speech is blended with the start of the real speech. If the erasure is small, this smoothes the transition. If the erasure is long, and the synthetic signal has faded to zero, the blending softens the start up of the real signal, avoiding a kind of "click" or "pop" effect that might occur with a sudden onset. 3. How do I use it? Before audio is processed, call plc_init() to create an instance of the packet loss concealer. For each received audio packet that is acceptable (i.e. not including those being dropped for being too late) call plc_rx() to record the content of the packet. Note this may modify the packet a little after a period of packet loss, to blend the real and synthetic data smoothly. When a real packet is not available in time, call plc_fillin() to create a synthetic substitute. That's it! */ /*! Minimum allowed pitch (66 Hz) */ #define PLC_PITCH_MIN(SAMPLE_RATE) ((double)(SAMPLE_RATE) / 66.6) /*! Maximum allowed pitch (200 Hz) */ #define PLC_PITCH_MAX(SAMPLE_RATE) ((SAMPLE_RATE) / 200) /*! Maximum pitch OLA window */ //#define PLC_PITCH_OVERLAP_MAX(SAMPLE_RATE) ((PLC_PITCH_MIN(SAMPLE_RATE)) >> 2) /*! The length over which the AMDF function looks for similarity (20 ms) */ #define CORRELATION_SPAN(SAMPLE_RATE) ((20 * (SAMPLE_RATE)) / 1000) /*! History buffer length. The buffer must also be at least 1.25 times PLC_PITCH_MIN, but that is much smaller than the buffer needs to be for the pitch assessment. */ //#define PLC_HISTORY_LEN(SAMPLE_RATE) ((CORRELATION_SPAN(SAMPLE_RATE)) + (PLC_PITCH_MIN(SAMPLE_RATE))) namespace audio { typedef struct { /*! 
Consecutive erased samples */ int missing_samples; /*! Current offset into pitch period */ int pitch_offset; /*! Pitch estimate */ int pitch; /*! Buffer for a cycle of speech */ float *pitchbuf;//[PLC_PITCH_MIN]; /*! History buffer */ short *history;//[PLC_HISTORY_LEN]; /*! Current pointer into the history buffer */ int buf_ptr; } plc_state_t; class PcmConcealer { public: PcmConcealer(); ~PcmConcealer(); void Init(int channels, int bit_depth, int sample_rate); //Process a block of received audio samples. int Receive(short amp[], int frames); //Fill-in a block of missing audio samples. int Fill(short amp[], int frames); void Destroy(); private: int amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames); void save_history(plc_state_t *s, short *buf, int channel_index, int frames); void normalise_history(plc_state_t *s); /** Holds the states of each of the channels **/ std::vector< plc_state_t * > ChannelStates; int plc_pitch_min; int plc_pitch_max; int plc_pitch_overlap_max; int correlation_span; int plc_history_len; int channel_count; int sample_rate; bool Initialized; }; } #endif PcmConcealer.cpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #include "audio/PcmConcealer.hpp" /* We do a straight line fade to zero volume in 50ms when we are filling in for missing data. */ #define ATTENUATION_INCREMENT 0.0025 /* Attenuation per sample */ #if !defined(INT16_MAX) #define INT16_MAX (32767) #define INT16_MIN (-32767-1) #endif #ifdef WIN32 inline double rint(double x) { return floor(x + 0.5); } #endif inline short fsaturate(double damp) { if (damp > 32767.0) return INT16_MAX; if (damp < -32768.0) return INT16_MIN; return (short)rint(damp); } namespace audio { PcmConcealer::PcmConcealer() : Initialized(false) { } PcmConcealer::~PcmConcealer() { Destroy(); } void PcmConcealer::Init(int channels, int bit_depth, int sample_rate) { if(Initialized) return; if(channels <= 0 || bit_depth != 16) return; Initialized = true; channel_count = channels; this->sample_rate = sample_rate; ////////////// double min = PLC_PITCH_MIN(sample_rate); int imin = (int)min; double max = PLC_PITCH_MAX(sample_rate); int imax = (int)max; plc_pitch_min = imin; plc_pitch_max = imax; plc_pitch_overlap_max = (plc_pitch_min >> 2); correlation_span = CORRELATION_SPAN(sample_rate); plc_history_len = correlation_span + plc_pitch_min; ////////////// for(int i = 0; i < channel_count; i ++) { plc_state_t *t = new plc_state_t; memset(t, 0, sizeof(plc_state_t)); t->pitchbuf = new float[plc_pitch_min]; t->history = new short[plc_history_len]; ChannelStates.push_back(t); } } void PcmConcealer::Destroy() { if(!Initialized) return; while(ChannelStates.size()) { plc_state_t *s = ChannelStates.at(0); if(s) { if(s->history) delete s->history; if(s->pitchbuf) delete s->pitchbuf; memset(s, 0, sizeof(plc_state_t)); delete s; } ChannelStates.erase(ChannelStates.begin()); } ChannelStates.clear(); Initialized = false; } //Process a block of received audio samples. 
int PcmConcealer::Receive(short amp[], int frames) { if(!Initialized) return 0; int j = 0; for(int k = 0; k < ChannelStates.size(); k++) { int i; int overlap_len; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples) { /* Although we have a real signal, we need to smooth it to fit well with the synthetic signal we used for the previous block */ /* The start of the real data is overlapped with the next 1/4 cycle of the synthetic data. */ pitch_overlap = s->pitch >> 2; if (pitch_overlap > frames) pitch_overlap = frames; gain = 1.0 - s->missing_samples * ATTENUATION_INCREMENT; if (gain < 0.0) gain = 0.0; new_step = 1.0/pitch_overlap; old_step = new_step*gain; new_weight = new_step; old_weight = (1.0 - new_step)*gain; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->pitchbuf[s->pitch_offset] + new_weight * amp[index]); if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->missing_samples = 0; } save_history(s, amp, j, frames); j++; } return frames; } //Fill-in a block of missing audio samples. int PcmConcealer::Fill(short amp[], int frames) { if(!Initialized) return 0; int j =0; for(int k = 0; k < ChannelStates.size(); k++) { short *tmp = new short[plc_pitch_overlap_max]; int i; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; short *orig_amp; int orig_len; orig_amp = amp; orig_len = frames; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples == 0) { // As the gap in real speech starts we need to assess the last known pitch, //and prepare the synthetic data we will use for fill-in normalise_history(s); s->pitch = amdf_pitch(plc_pitch_min, plc_pitch_max, s->history + plc_history_len - correlation_span - plc_pitch_min, j, correlation_span); // We overlap a 1/4 wavelength pitch_overlap = s->pitch >> 2; // Cook up a single cycle of pitch, using a single of the real signal with 1/4 //cycle OLA'ed to make the ends join up nicely // The first 3/4 of the cycle is a simple copy for (i = 0; i < s->pitch - pitch_overlap; i++) s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]; // The last 1/4 of the cycle is overlapped with the end of the previous cycle new_step = 1.0/pitch_overlap; new_weight = new_step; for ( ; i < s->pitch; i++) { s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]*(1.0 - new_weight) + s->history[plc_history_len - 2*s->pitch + i]*new_weight; new_weight += new_step; } // We should now be ready to fill in the gap with repeated, decaying cycles // of what is in pitchbuf // We need to OLA the first 1/4 wavelength of the synthetic data, to smooth // it into the previous real data. To avoid the need to introduce a delay // in the stream, reverse the last 1/4 wavelength, and OLA with that. 
gain = 1.0; new_step = 1.0/pitch_overlap; old_step = new_step; new_weight = new_step; old_weight = 1.0 - new_step; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->history[plc_history_len - 1 - i] + new_weight * s->pitchbuf[i]); new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->pitch_offset = i; } else { gain = 1.0 - s->missing_samples*ATTENUATION_INCREMENT; i = 0; } for ( ; gain > 0.0 && i < frames; i++) { int index = (i * channel_count) + j; amp[index] = s->pitchbuf[s->pitch_offset]*gain; gain -= ATTENUATION_INCREMENT; if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; } for ( ; i < frames; i++) { int index = (i * channel_count) + j; amp[i] = 0; } s->missing_samples += orig_len; save_history(s, amp, j, frames); delete [] tmp; j++; } return frames; } void PcmConcealer::save_history(plc_state_t *s, short *buf, int channel_index, int frames) { if (frames >= plc_history_len) { /* Just keep the last part of the new data, starting at the beginning of the buffer */ //memcpy(s->history, buf + len - plc_history_len, sizeof(short)*plc_history_len); int frames_to_copy = plc_history_len; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + frames - plc_history_len)) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = 0; return; } if (s->buf_ptr + frames > plc_history_len) { /* Wraps around - must break into two sections */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*(plc_history_len - s->buf_ptr)); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = plc_history_len - s->buf_ptr; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } frames -= (plc_history_len - s->buf_ptr); //memcpy(s->history, buf + (plc_history_len - s->buf_ptr), sizeof(short)*len); frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + (plc_history_len - s->buf_ptr))) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = frames; return; } /* Can use just one section */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*len); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } s->buf_ptr += frames; } void PcmConcealer::normalise_history(plc_state_t *s) { short *tmp = new short[plc_history_len]; if (s->buf_ptr == 0) return; memcpy(tmp, s->history, sizeof(short)*s->buf_ptr); memcpy(s->history, s->history + s->buf_ptr, sizeof(short)*(plc_history_len - s->buf_ptr)); memcpy(s->history + plc_history_len - s->buf_ptr, tmp, sizeof(short)*s->buf_ptr); s->buf_ptr = 0; delete [] tmp; } int PcmConcealer::amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames) { int i; int j; int acc; int min_acc; int pitch; pitch = min_pitch; min_acc = INT_MAX; for (i = max_pitch; i <= min_pitch; i++) { acc = 0; for (j = 0; j < frames; j++) { int index1 = (channel_count * (i+j)) + channel_index; int index2 = (channel_count * j) + channel_index; //std::cout << "Index 1: " << index1 << ", Index 2: " << index2 << std::endl; acc += abs(amp[index1] - amp[index2]); } if (acc < min_acc) { min_acc = acc; pitch = i; } } std::cout << "Pitch: " << pitch << std::endl; return pitch; } } P.S. - I must confess that digital audio is not my forte...
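
    For reference, here is the AMDF search from the code above in a minimal single-channel form, with the buffer requirement spelled out in comments. This is a simplification for illustration, not a fix for the posted multichannel version:

        #include <cstdlib>
        #include <climits>

        // Single-channel AMDF pitch estimate. amp must hold at least
        // min_pitch + span samples: the candidate offset i reaches min_pitch
        // (the longest candidate period, in samples) and j adds up to span - 1
        // on top of that. Reading past that bound is exactly the kind of
        // out-of-range access that surfaces as EXC_BAD_ACCESS.
        int amdf_pitch_mono(int min_pitch, int max_pitch,
                            const short amp[], int span)
        {
            int best_pitch = min_pitch;
            int min_acc = INT_MAX;
            // Note the inversion: min_pitch (66 Hz) is a *longer* period in
            // samples than max_pitch (200 Hz), so the loop counts upwards.
            for (int i = max_pitch; i <= min_pitch; i++)
            {
                int acc = 0;
                for (int j = 0; j < span; j++)
                    acc += std::abs(amp[i + j] - amp[j]);
                if (acc < min_acc)
                {
                    min_acc = acc;
                    best_pitch = i;
                }
            }
            return best_pitch;
        }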

    Read the article

  • How do I reward my developers for the little things they get right?

    - by Nat
    I am in a tech lead role and my developers get stuff right most of the time. How do I communicate their value to me? (i.e. they have value because I do not have to go through and point out mistakes, which means I do not have to watch them like a hawk, which frees me to do more useful things.) In summary: for doing the mundane well on a day-to-day basis, it is good to recognise the developers' effort verbally to them. An honest thank-you that mentions the specific behaviour and its positive repercussions to you personally will be well received; adjust the language to suit each individual. (Note that other developers within earshot may also respond to this by increasing their efforts in this specific activity.) Other things that should be done regularly are: Team drinks. In many cultures this is an entirely worthy way of giving the team some time to socialise and relax; be sure that you do not exclude people who do not drink or are not keen on pub culture. Shared meals are another option. Formal written (email) acknowledgment and praise to senior managers of the team's efforts and successes. (Note that acknowledging individuals alone may damage team spirit.) Work the hours you expect your team to work; if they absolutely must work late for a deadline, be there in support. Go to bat for the team: refuse to let them be forced to work long periods of overtime without compensation, and protect them from politics and stress. Give your team the best equipment you can afford; good tools show respect and improve productivity. Small or large team rewards where appropriate can consist of many interesting activities/items. If it allows the team to get together in a fun and even lightly competitive manner it will work (foosball table, go-karting, darts board, video game console, etc.). Don't forget to listen to what the team wants; each team will have different ideas. Ensure they are getting a fair deal financially from the company. While different people may have different expectations of their pay, someone being paid unfairly will rot morale for the entire team.

    Read the article

  • Going from a math career to a cs career: how to do it?

    - by Joseph
    Hey, I'm looking for some advice on how to successfully make the transition from mathematics to CS. My academic background is in mathematics (BS and MSc), and I've taken loads of math courses as well. You name it, and I took it: Measure Theory, Algebra, PDEs, Manifolds, Complex Analysis, etc. I progressed quite far along this track, and at one point I thought I would be a professional mathematician... but around the time I was finishing my MSc, I really got sick of it. Studying very abstract mathematics was fun, but it really lost its appeal to me. Outside of a couple hundred people, I'm not sure if anybody would understand my research. I did not want to be 60 years old and say that my only contribution to the world consisted of published papers. Anyways, I've been an off-and-on hobbyist programmer since 2002. I've programmed in C and Java (just small projects), and I really started to be drawn to the area as time passed. There's a real appeal to CS work because, well, it actually means something to other people out there! I enjoy all parts of it: designing webpages has a real artistic appeal, and on the other end, I enjoy toying with compilers and more nitty-gritty stuff as well. Suffice to say, I have broad interests out there. Anyways, I know it's a bit late, but I was wondering if there were other folks out there who made the change, and if so, how I could do so. I know I have some fairly big gaps to fill in terms of data structures, lack of internship experience, etc., but I really would like to make this work. So my question is simply: how can I make the switch from math to CS? To pay the bills, I'll be doing financial analysis for a company, but I'd like to eventually transition into a developer-type position. I've been reading "Algorithm Design" by Tardos and doing all the problems. It's not hard to make progress, since the problems are far more concrete than the stuff I've been doing for the past six years. I feel I can make fairly rapid progress in picking up all the material on data structures, etc., but none of it can substitute for the past several years I've lost. Anyways, I'm eager to learn but would love some advice/concrete direction. Thanks, Joseph

    Read the article

  • Speed Problem with Wireless Connectivity on Cisco 877w

    - by Carl Crawley
    Having a bit of a weird one with my local LAN setup. I recently installed a Cisco 877W router on my ADSL2+ connection and all is working really well. I upgraded the IOS to 12.4, and my wired clients are streaming super-fast at 1.3Mb/s. However, there seems to be an issue with my wireless clients: I can't seem to stream any data across the local wireless connection (LAN), and Internet use, whilst responsive enough, isn't really comparable with the wired connection speed. For example, all devices are connected to an 8-port Gb switch on FE0 from the router, along with a NAS disk. On my wired clients, I can transfer/stream etc. absolutely fine; however, transferring a 700MB file across the local wireless LAN gives an estimate of 7-8 hours :( The wireless config is as follows:

        interface Dot11Radio0
         description WIRELESS INTERFACE
         no ip address
         !
         encryption mode ciphers tkip
         !
         ssid [MySSID]
         !
         speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0
         channel 2462
         station-role root
         rts threshold 2312
         world-mode dot11d country GB indoor
         bridge-group 1
         bridge-group 1 subscriber-loop-control
         bridge-group 1 spanning-disabled
         bridge-group 1 block-unknown-source
         no bridge-group 1 source-learning
         no bridge-group 1 unicast-flooding

    All devices are connected to the Gb switch, which is connected to FE0 with the following:

        Hardware is Fast Ethernet, address is 0021.a03e.6519 (bia 0021.a03e.6519)
        Description: Uplink to Switch
        MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
          reliability 255/255, txload 1/255, rxload 1/255
        Encapsulation ARPA, loopback not set
        Keepalive set (10 sec)
        Full-duplex, 100Mb/s
        ARP type: ARPA, ARP Timeout 04:00:00
        Last input never, output never, output hang never
        Last clearing of "show interface" counters never
        Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
        Queueing strategy: fifo
        Output queue: 0/40 (size/max)
        5 minute input rate 14000 bits/sec, 19 packets/sec
        5 minute output rate 167000 bits/sec, 23 packets/sec
           177365 packets input, 52089562 bytes, 0 no buffer
           Received 919 broadcasts, 0 runts, 0 giants, 0 throttles
           260 input errors, 260 CRC, 0 frame, 0 overrun, 0 ignored
           0 input packets with dribble condition detected
           156673 packets output, 106218222 bytes, 0 underruns
           0 output errors, 0 collisions, 2 interface resets
           0 babbles, 0 late collision, 0 deferred
           0 lost carrier, 0 no carrier
           0 output buffer failures, 0 output buffers swapped out

    Not sure why I'm having problems on the wireless, and I've reached the end of my Cisco knowledge... Thanks for any pointers! Carl
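
    With a gap like this between wired and wireless throughput, a first step is usually to confirm what data rates each wireless client actually negotiated and whether the radio itself is reporting errors (the 260 CRC input errors on FE0 above are also worth ruling out). A hypothetical diagnostic pass, assuming standard Aironet IOS 12.4 exec commands; verify these against your image before relying on them:

        ! Assumed diagnostic commands - check availability on your IOS image
        ! What rate and signal each associated client negotiated:
        show dot11 associations
        ! Radio-side errors, retries and load:
        show interfaces dot11Radio 0
        ! Active channel and supported/basic rates on the radio:
        show controllers dot11Radio 0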

    Read the article

  • Another IKImageView Question: copying a region

    - by Brian Postow
    I'm trying to use the select-and-copy feature of the IKImageView. If all you want to do is have an app with an image, select a portion and copy it to the clipboard, it's easy: you set the copy menu pick to the first responder's copy:(id) method and magically everything works. However, if you want something more complicated, like copying as part of some other operation, I can't seem to find the method to do this. IKImageView doesn't seem to have a copy method; it doesn't even seem to have a method that will tell you the selected rectangle! I have gone through Hillegass' book, so I understand how the clipboard works, just not how to get the portion of the image out of the view... Now, I'm starting to think that I made a mistake in basing my project on IKImageView, but it's what Preview is built on (or so I've read), so I figured it had to be stable... and anyway, now it's too late; I'm too deep in this to start over... So, other than not using IKImageView, any suggestions on how to copy the selected region to the clipboard manually? EDIT: Actually, I have found the copy:(id) method, but when I call it, I get <Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 16 bits/pixel; 1-component color space; kCGImageAlphaPremultipliedLast; 2624 bytes/row. This obviously doesn't happen when I do a normal copy through the first responder... I understand the error message, but I'm not sure where it's getting those parameters from... Is there any way to trace through this and see how this is happening? A debugger won't help, for obvious reasons, as well as the fact that I'm doing this in Mozilla, so a debugger isn't an option anyway... EDIT 2: It occurs to me that the copy:(id) method I found may be copying the VIEW rather than copying a chunk of the image to the clipboard, which is what I need. The reason I thought it was the clipboard copy is that in another project, where I'm copying from an IKImageView to the clipboard straight from the edit menu, it just sends a copy:(id) to the firstResponder, but I'm not actually sure what the first responder does with it...
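
    For context on that error: CoreGraphics supports only a fixed list of pixel formats, and 8 bits per component at 16 bits per pixel in a one-component (grayscale) colour space with premultiplied alpha is not on it. A hedged sketch, in plain C, of a combination CGBitmapContextCreate does accept (8 bits/component, 32 bits/pixel RGBA); the function name is invented for illustration:

        #include <ApplicationServices/ApplicationServices.h>

        /* A supported parameter combination: 8 bits per component,
           four components (RGBA), premultiplied alpha last. */
        CGContextRef make_rgba_context(size_t width, size_t height)
        {
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(
                NULL,          /* let CoreGraphics allocate the backing store */
                width, height,
                8,             /* bits per component */
                width * 4,     /* bytes per row: 32 bits per pixel */
                space,
                kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(space);
            return ctx;        /* caller releases with CGContextRelease() */
        }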

    Read the article

  • Partial overriding in Java (or dynamic overriding while overloading)

    - by Lie Ryan
    If I have a parent and a child class that define some method .foo() like this:

        class Parent {
            public void foo(Parent arg) {
                System.out.println("foo in Parent");
            }
        }

        class Child extends Parent {
            public void foo(Child arg) {
                System.out.println("foo in Child");
            }
        }

    When I call them like this:

        Child f = new Child();
        Parent g = f;
        f.foo(new Parent());
        f.foo(new Child());
        g.foo(new Parent());
        g.foo(new Child());

    the output is:

        foo in Parent
        foo in Child
        foo in Parent
        foo in Parent

    But I want this output:

        foo in Parent
        foo in Child
        foo in Parent
        foo in Child

    I have a Child class that extends the Parent class. In the Child class, I want to "partially override" the Parent's foo(); that is, if the argument arg's type is Child, then Child's foo() is called instead of Parent's foo(). That works OK when I call f.foo(...) as a Child, but if I refer to it through its Parent alias, as in g.foo(...), then the Parent's foo(...) gets called irrespective of the type of arg. As I understand it, what I'm expecting doesn't happen because method overloading in Java is early binding (i.e. resolved statically at compile time) while method overriding is late binding (i.e. resolved dynamically at run time), and since I defined a function with a technically different argument type, I'm technically overloading the Parent's class definition with a distinct definition, not overriding it. But what I want is conceptually "partial overriding" for when .foo()'s argument is a subclass of the parent foo()'s argument. I know I can define a catch-all override foo(Parent arg) in Child that checks whether arg's actual type is Parent or Child and dispatches accordingly, but if I have twenty Child classes, that would be lots of duplicated, type-unsafe code. In my actual code, Parent is an abstract class named "Function" that simply throws NotImplementedException(). The children include "Polynomial", "Logarithmic", etc., and .foo() includes things like Child.add(Child), Child.intersectionsWith(Child), etc. Not all combinations of Child.foo(OtherChild) are solvable, and in fact not even every Child.foo(Child) is solvable. So I'm best left with defining everything as undefined (i.e. throwing NotImplementedException) and then defining only those that can be defined. So the question is: is there any way to override only part of the parent's foo()? Or is there a better way to do what I want to do?
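
    What the post describes is essentially double dispatch, which Java does not do natively; the standard workaround is the visitor pattern, where the argument performs the second dispatch itself. A minimal sketch of that idea for the classes above; the method names beFooedBy, fooParent and fooChild are invented for illustration:

        class Parent {
            public void foo(Parent arg) {
                // First dispatch chose the receiver's dynamic type (as usual);
                // now let the argument's dynamic type choose the handler.
                arg.beFooedBy(this);
            }
            protected void beFooedBy(Parent caller) {
                caller.fooParent(this);
            }
            void fooParent(Parent arg) { System.out.println("foo in Parent"); }
            void fooChild(Child arg)   { System.out.println("foo in Parent"); }
        }

        class Child extends Parent {
            @Override
            protected void beFooedBy(Parent caller) {
                caller.fooChild(this);   // the argument is known to be a Child here
            }
            @Override
            void fooChild(Child arg) { System.out.println("foo in Child"); }
        }

    With this wiring, the four calls in the post print Parent, Child, Parent, Child: no instanceof checks, and Child no longer declares a competing foo(Child) overload. Each new subtype adds one beFooedBy override plus whichever fooX handlers it actually supports; unsupported combinations simply keep the base behaviour (in the Function case, throwing NotImplementedException).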

    Read the article

  • Asynchronous Silverlight 4 call to the World of Warcraft armoury streaming XML in C#

    - by user348446
    Hello - I have been stuck on this all weekend and failed miserably! Please help me claw back my sanity!! Your challenge: for my first Silverlight application, I thought it would be fun to use the World of Warcraft armoury to list the characters in my guild. This involves making an asynchronous call from Silverlight (duh!) to the WoW armoury, which is XML based. SIMPLE, EH? Take a look at this link and open the source; you'll see what I mean: http://eu.wowarmory.com/guild-info.xml?r=Eonar&n=Gifted and Talented Below is code for getting the XML (the call to ShowGuildies will cope with the returned XML - I have tested this locally and I know it works). I have not managed to get the expected returned XML at all. Notes: If the browser is capable of transforming the XML it will do so; otherwise HTML will be provided. I think it examines the User-Agent. I am a seasoned ASP.NET/C# web developer, so go easy if you start talking about native Windows Forms / WPF. I can't seem to set the User-Agent in .NET 4.0 - it doesn't seem to be a property on the HttpWebRequest object for some reason; I think it used to be available. Silverlight 4.0 (created as 3.0 originally, before I updated my installation of Silverlight to 4.0). Created using C# 4.0. Please explain as if you were talking to a web developer and not a proper programmer, lol! Below is the code - it should return the XML from the WoW armoury.

        private void button7_Click(object sender, RoutedEventArgs e)
        {
            // URL for armoury lookup
            string url = @"http://eu.wowarmory.com/guild-info.xml?r=Eonar&n=Gifted and Talented";

            // Create the web request
            HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(url);

            // Set the user agent so we are returned XML and not HTML
            //httpWebRequest.Headers["User-Agent"] = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)";

            // Not sure about this dispatcher thing - it's late so I have started to guess.
            Dispatcher.BeginInvoke(delegate()
            {
                // Call asynchronously
                IAsyncResult asyncResult = httpWebRequest.BeginGetResponse(ReqCallback, httpWebRequest);

                // End the response and use the result
                using (HttpWebResponse httpWebResponse = (HttpWebResponse)httpWebRequest.EndGetResponse(asyncResult))
                {
                    // Load an XML document from a stream
                    XDocument x = XDocument.Load(httpWebResponse.GetResponseStream());

                    // Basic function that will use LINQ to XML to get the list of characters.
                    ShowGuildies(x);
                }
            });
        }

        private void ReqCallback(IAsyncResult asynchronousResult)
        {
            // Not sure what to do here - maybe update the interface?
        }

    Really hope someone out there can help me! Thanks mucho! Dan. PS Yes, I have noticed the irony in the name of the guild :)
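
    For what it's worth, the Begin/End pair is normally split across threads: the callback calls EndGetResponse on a worker thread, and only the UI update goes back through the Dispatcher. A rough sketch under that assumption (ShowGuildies is the poster's own method; note too that Silverlight only permits the cross-domain call at all if the target site serves a clientaccesspolicy.xml or crossdomain.xml):

        private void button7_Click(object sender, RoutedEventArgs e)
        {
            string url = "http://eu.wowarmory.com/guild-info.xml?r=Eonar&n=Gifted and Talented";
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            // Kick off the call; the callback fires on a background thread.
            request.BeginGetResponse(ResponseCallback, request);
        }

        private void ResponseCallback(IAsyncResult asyncResult)
        {
            HttpWebRequest request = (HttpWebRequest)asyncResult.AsyncState;
            using (HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(asyncResult))
            {
                XDocument x = XDocument.Load(response.GetResponseStream());
                // Marshal back onto the UI thread before touching any controls.
                Dispatcher.BeginInvoke(() => ShowGuildies(x));
            }
        }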

    Read the article

  • Delphi LoadLibrary Failing to find DLL other directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both apps load the DLL with this code:

        FOOLib := LoadLibrary('foo.dll');
        ...
        if FOOLib <> 0 then
        begin
          FOOProc := GetProcAddress(FOOLib, 'xInjectCert');
          FOOProc(myHttpRequest, Data, CertName);
        end;

    It works great for foo.exe, as the DLL is right there. sadapp.exe fails to load the library, so FOOLib is 0 and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production, the cert is missing, so the connection fails. Obviously, we should have fully qualified the path to the DLL. Without going into a lot of detail, there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options I've overlooked. While we can't change the code (for this release), we CAN tweak the installer. I've found that placing c:\fooapp into the path works, as does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find it that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? This is the sort of "magic bullet" that I'm looking for. I know I can: Modify the Windows path, probably in the installer. That's ugly. Add a second copy of the DLL, directly into the unwantedstepchild folder. Also ugly. Delay the project while we code and test a proper fix. Unacceptable. Other? Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!
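
    For the release after this one, the code-side fix is a one-liner: qualify the path so the DLL search order never enters into it. A sketch in Delphi; the hard-coded path is illustrative and would in practice come from wherever the installer records foo.dll's location (registry value, config file, etc.):

        uses Windows;

        function LoadFooLib: HMODULE;
        var
          DllPath: string;
        begin
          // A fully qualified path bypasses the search-order problem entirely.
          // 'c:\fooapp\foo.dll' stands in for a value read from the registry
          // or a config file written by foo.dll's installer.
          DllPath := 'c:\fooapp\foo.dll';
          Result := LoadLibrary(PChar(DllPath));
        end;

    Installer-only options are thinner on the ground: Windows' standard search order simply never looks in another product's folder, and mechanisms like the App Paths registry key apply to launching executables, not to LoadLibrary, so the two "ugly" choices already listed are close to the full menu.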

    Read the article

  • Reliable and fast way to convert a zillion ODT files in PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually I would stay low-level in a case like this and compose everything with a library like ReportLab, but I joined the project late. Currently, I have a template.odt and use markers in the content.xml files to fill in data from a DB. I can smoothly create the ODT files, and they always look right. For the ODT-to-PDF conversion, I'm using OpenOffice in server mode (and PyODConverter with a named pipe), but it's not very reliable: in a batch of documents, there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data), and happens in OOo 2.3 and 3.2, on Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried reducing the size of the batches and restarting OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile, I have a delivery and have lost too much time already. Where do I go? Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow; OOo takes around a second, and that already sums to 15 days of processing time. I had to write a program for clustering the jobs over several clients. Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying first, some cannot work from the command line, etc. Open-source tools tend not to reinvent the wheel and often depend on OpenOffice. Converting to an intermediate .DOC format could help to avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy. Try to produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look identical, I know of no way to compare the binary content. Restart OOo after processing each document. It would take a lot more time to produce them; it would lower the percentage of wrong files, but make it very hard to identify them. Go for ReportLab and recreate the pages programmatically. This is the approach I'm going to try in a few minutes. Learn to properly format bulleted lists. Thanks a lot.
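
    Since the post ends by leaning towards ReportLab, here is a rough sketch of that template-free route: compose a few flowables per record and write the PDF directly, with no OOo process involved. The record fields and file names are invented for illustration, and a real run would also register the TTF fonts that need embedding:

        # Minimal ReportLab sketch: one simple PDF per database record.
        from reportlab.lib.pagesizes import A4
        from reportlab.lib.styles import getSampleStyleSheet
        from reportlab.platypus import SimpleDocTemplate, Paragraph, Table

        styles = getSampleStyleSheet()

        def build_pdf(record, path):
            # One title paragraph and one table, standing in for the
            # "few pages and tables" of the real template.
            doc = SimpleDocTemplate(path, pagesize=A4)
            story = [
                Paragraph("Statement for %s" % record["name"], styles["Title"]),
                Table([["Item", "Amount"]] + record["rows"]),
            ]
            doc.build(story)

        records = [{"name": "Example Customer", "rows": [["Widget", "10.00"]]}]
        for i, rec in enumerate(records):
            build_pdf(rec, "out_%06d.pdf" % i)

    Direct generation of a simple document is typically much quicker than the one-second OOo round trip, though the template's layout has to be re-created by hand.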

    Read the article

  • Visual Studio soft-crashing when encountering XAML Errors in initialize.

    - by Aren
    I've been having some serious issues with Visual Studio 2010 as of late. It's been crashing in a peculiar way when I encounter certain types of XAML errors during the InitializeComponent() of a control/window. The program breaks, Visual Studio gears up like it's catching an exception (because it is), and then stops midway, displaying a broken highlight in my XAML file with no details as to what is wrong. There are no pop-outs or details anywhere about what is wrong, only a call stack that points to my InitializeComponent() call. Now, normally I'd just do some trial and error to fix this problem and find out where I messed up, but the real problem isn't my code: Visual Studio is rendered completely useless at this point. It reports my application as still in "Running" mode. The Stop/Break/Restart buttons on the toolbar or in the menus don't do anything (but grey out). Closing the application does not stop this behaviour; closing Visual Studio gets it stuck in a massive loop where it yells at me, complaining that every open file is not in the debug project, then repeats this process when I have exhausted every open file. I have to force-close devenv.exe, and after this happening 3-4 times in a row it's a lot of wasted time (as my projects are usually pretty big and Studio can be quite slow at loading). To the point: has anyone else experienced this? How can I stop Studio from locking up? Can I at LEAST get information out of this beast another way, so I can fix my XAML error sooner rather than after 3-4 trial-and-error compiles yielding the same crash? Any and all help would be appreciated. Visual Studio 2010 version: 10.0.30319.1 RTM. Edit & Update: FWIW, the errors that cause this are mostly XamlParseExceptions (I figured this out after I found what was wrong with my XAML). I think I need to be clearer, though: I'm not looking for the solution to my code problem, as these are usually typos / small things; I'm looking for a solution to Visual Studio getting all buggered up as a result. The particular error that 100% for sure caused this was a XamlParseException caused by forgetting a Value attribute on a DataTrigger. I've fixed that part, but it still doesn't tell me why my Studio becomes a lump of neutered program when a perfectly normal exception is thrown in the parsing of the XAML. Code that will cause this issue (at least for me): the base template WPF Application, with a Window.xaml whose template contains a <DataTrigger ...> missing its Value="True". It generates a XamlParseException, and Visual Studio crashes as described above when debugging it. Final notes: the following solutions did not help me: Restarting Visual Studio. Rebooting. Reinstalling Visual Studio.
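
    The poster's Window.xaml is not reproduced here, but the shape of the offending markup is clear from the description: a DataTrigger with its Value attribute missing. A hypothetical before/after, with an invented binding path and setter; imagine either fragment inside a style's <Style.Triggers> block:

        <!-- Broken: no Value attribute, so InitializeComponent() throws a
             XamlParseException when the window is constructed. -->
        <DataTrigger Binding="{Binding IsSelected}">
            <Setter Property="Background" Value="LightBlue"/>
        </DataTrigger>

        <!-- Fixed: the trigger states which value it matches. -->
        <DataTrigger Binding="{Binding IsSelected}" Value="True">
            <Setter Property="Background" Value="LightBlue"/>
        </DataTrigger>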

    Read the article

  • How safe is my safe rethrow?

    - by gustafc
    (Late edit: This question will hopefully be obsolete when Java 7 comes, because of the "final rethrow" feature which seems like it will be added.) Quite often, I find myself in situations looking like this:

        do some initialization
        try {
            do some work
        } catch any exception {
            undo initialization
            rethrow exception
        }

    In C# you can do it like this:

        InitializeStuff();
        try
        {
            DoSomeWork();
        }
        catch
        {
            UndoInitialize();
            throw;
        }

    For Java, there's no good substitute, and since the proposal for improved exception handling was cut from Java 7, it looks like it'll take at best several years until we get something like it. Thus, I decided to roll my own: (Edit: Half a year later, final rethrow is back, or so it seems.)

        public final class Rethrow {
            private Rethrow() { throw new AssertionError("uninstantiable"); }

            /** Rethrows t if it is an unchecked exception. */
            public static void unchecked(Throwable t) {
                if (t instanceof Error)
                    throw (Error) t;
                if (t instanceof RuntimeException)
                    throw (RuntimeException) t;
            }

            /** Rethrows t if it is an unchecked exception or an instance of E. */
            public static <E extends Exception> void instanceOrUnchecked(
                    Class<E> exceptionClass, Throwable t) throws E, Error, RuntimeException {
                Rethrow.unchecked(t);
                if (exceptionClass.isInstance(t))
                    throw exceptionClass.cast(t);
            }
        }

    Typical usage:

        public void doStuff() throws SomeException {
            initializeStuff();
            try {
                doSomeWork();
            } catch (Throwable t) {
                undoInitialize();
                Rethrow.instanceOrUnchecked(SomeException.class, t);
                // We shouldn't get past the above line as only unchecked or
                // SomeException exceptions are thrown in the try block, but
                // we don't want to risk swallowing an error, so:
                throw new SomeException("Unexpected exception", t);
            }
        }

        private void doSomeWork() throws SomeException { ... }

    It's a bit wordy, catching Throwable is usually frowned upon, I'm not really happy about using reflection just to rethrow an exception, and I always feel a bit uneasy writing "this will not happen" comments, but in practice it works well (or seems to, at least). What I wonder is: Do I have any flaws in my rethrow helper methods? Some corner cases I've missed? (I know that the Throwable may have been caused by something so severe that my undoInitialize will fail, but that's OK.) Has someone already invented this? I looked at Commons Lang's ExceptionUtils, but that does other things. Edit: finally is not the droid I'm looking for; I'm only interested in doing stuff when an exception is thrown. Yes, I know catching Throwable is a big no-no, but I think it's the lesser evil here compared to having three catch clauses (for Error, RuntimeException and SomeException, respectively) with identical code. Note that I'm not trying to suppress any errors - the idea is that any exceptions thrown in the try block will continue to bubble up through the call stack as soon as I've rewound a few things.
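
    One contrast worth sketching: the same "undo only on failure" shape can be written with a success flag and finally, which avoids catching Throwable altogether. The poster rules finally out, but note that this variant does run the undo only on the exceptional path; a small sketch reusing the post's method names:

        public void doStuff() throws SomeException {
            initializeStuff();
            boolean succeeded = false;
            try {
                doSomeWork();
                succeeded = true;   // reached only if no exception escaped
            } finally {
                if (!succeeded) {
                    undoInitialize();   // runs for any Throwable, checked or not
                }
            }
        }

    The trade-off: the original exception is rethrown by the JVM itself rather than by hand, so no instanceof juggling or "this will not happen" comment is needed, at the cost of one flag per method.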

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43  | Next Page >