Search Results

Search found 16604 results on 665 pages for 'jd long'.


  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests, as well as why it’s important to follow a goal based pattern while performance testing your application. In Part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts.
    Test Results – I see some creepy worms!
    In Part 2 we put together a web performance test and a load test; let’s run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:
    Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.
    After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.
    Analyse the load test results of a previously run load test – We’ll see this in the section where I discuss comparison between two test runs.
    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart where you can hover over to see more details of the test run.
    Generate Report => Test Run Comparisons
    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side by side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.
    Compare results
    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts
While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better, let’s look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated to the large object heap. The large object heap is non compacting, and remember that large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through. As you can see in the picture below, all objects under 85 KB in size are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started; the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1’s dead objects are collected and the rest are moved to Gen-2, and Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. This may explain why page requests are getting queued up; apart from this it could also be the case that you are waiting for a long running database process to complete.
Let’s explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, then write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but this would clear the cache. The problem with this is, if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, give it another try!
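As an aside, the generation behaviour described above can be observed directly from code. The following C# sketch is illustrative only and is not part of the original post; it uses the documented System.GC APIs to show a small object starting life in Gen-0 and being promoted as collections occur, while a large allocation is reported as Gen-2 because it lives on the large object heap:

    using System;

    class GcGenerationDemo
    {
        static void Main()
        {
            byte[] small = new byte[1024];         // well under 85 KB, allocated in Gen-0
            byte[] large = new byte[100 * 1024];   // over ~85 KB, allocated on the large object heap

            Console.WriteLine("small object generation: {0}", GC.GetGeneration(small)); // typically 0
            Console.WriteLine("large object generation: {0}", GC.GetGeneration(large)); // reported as 2 (LOH)

            GC.Collect(0);  // collect Gen-0; the still-referenced small object is usually promoted to Gen-1
            Console.WriteLine("after Gen-0 collection: {0}", GC.GetGeneration(small));

            GC.Collect(1);  // collect Gen-0 and Gen-1; survivors are usually promoted to Gen-2
            Console.WriteLine("after Gen-1 collection: {0}", GC.GetGeneration(small));

            Console.WriteLine("Gen-0 collections so far: {0}", GC.CollectionCount(0));
        }
    }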
How can you address the worker process recycling described above? Well, there are two ways of addressing it:
1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in ‘Any CPU’ mode, the application is built such that it will run in x86 mode if run on an x86 processor and in x64 mode if run on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compiled your application in x86 mode by changing the ‘Any CPU’ selection to x86 in the team build, you will be able to run your application on an x64 machine in x86 mode (under WOW64 – Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.
2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great starting point; it definitely gets you started in the right direction. Let’s have a look at how.
Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right click the performance session settings and choose Properties from the context menu to bring up the Performance Session properties page and, as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET Object lifetime information’. Now if you fire off the profiling session on your pages you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in and narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product.
Caching
One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
Let’s look at some code. Trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, when a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache: no more concurrency issues. But let’s say you are processing 10 requests per second; by the time I have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after locking, before the actual call to the db, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn’t I say that the code won’t be very intuitive? Maybe you should leave a comment; you don’t want another developer to come in and think “what a fresher, why is he checking the cache key for null twice?!”
The downside of caching is that you are storing the data outside of the database, and the data could be wrong because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let’s say only one entry point to the data update, you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let’s say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains: you would reduce hits to the database for searching by 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites or price compare engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It’s a perfect trade-off: by micro caching the results the site gains 100% of the performance benefits but every once in a while annoys a customer because the fare has expired.
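The snippet referred to above is not reproduced in this excerpt. A minimal C# sketch of the pattern being described, a lock around the cache load plus a second null check, with a short absolute expiration for micro caching, might look like the following; the cache key format, the 60-second window and the GetResultsFromDatabase helper are assumptions for illustration, not the author’s original code:

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public class SearchService
    {
        private static readonly object CacheLock = new object();

        public IList<Result> GetSearchResults(string searchTerm)
        {
            string cacheKey = "SearchResults_" + searchTerm;            // hypothetical key format
            var results = HttpRuntime.Cache[cacheKey] as IList<Result>;
            if (results == null)
            {
                lock (CacheLock)
                {
                    // Second check: another request may have populated the cache
                    // while this one was waiting on the lock.
                    results = HttpRuntime.Cache[cacheKey] as IList<Result>;
                    if (results == null)
                    {
                        results = GetResultsFromDatabase(searchTerm);   // hypothetical data access call
                        // Micro caching: keep the results for a short, fixed window only.
                        HttpRuntime.Cache.Insert(cacheKey, results, null,
                            DateTime.Now.AddSeconds(60), Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }

        private IList<Result> GetResultsFromDatabase(string searchTerm)
        {
            // Placeholder for the real database query.
            return new List<Result>();
        }
    }

    public class Result { /* fields omitted */ }

With this shape, only the first request in each expiry window pays for the database query and takes the lock; every other request within that window is served straight from memory, which is exactly the micro caching trade-off discussed above.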
But the trade-off works in the favour of these sites, as they are still able to process 30+ page requests per second, which means they can cater to the site traffic while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique; what are the odds that the user will not come back to their site sooner or later?
Recap
Resources
Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.
The Road Ahead
Thank you for taking the time out and reading this blog post; you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc.: please leave a comment. Next, in ‘Load Testing in the Cloud’, I’ll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank You!

    Read the article

  • Stop Spinning Your Wheels… Sage Advice for Aspiring Developers

    - by Mark Rackley
    So… lately I’ve been tasked with helping bring some non-developers over the hump and become full-fledged, all around, SharePoint developers. Well, only time will tell if I’m successful or a complete failure. Good thing about failures though, you know what NOT to do next time! Anyway, I’ve been writing some sort of code since I was about 10 years old; so I sometimes take for granted the effort some people have to go through to learn a new technology. I guess if I had to say I was an “expert” in one thing it would be learning (and getting “stuff” done) in new technologies. Maybe that’s why I’ve embraced SharePoint and the SharePoint community. SharePoint is the first technology I haven’t been able to master or get everything done without help from other people. I KNOW I’ll never know it all and I learn something new every day. It keeps it interesting, it keeps me motivated, and keeps me involved. So, what some people may consider a downside of SharePoint, I definitely consider a plus. Crap.. I’m rambling. Where was I? Oh yeah… me trying to be helpful. Like I said, I am able to quickly and effectively pick up new languages, technology, etc. and put it to good use. Am I just brilliant? Well, my mom thinks so.. but maybe not. Maybe I’ve just been doing it for a long time…. 25 years in some form or fashion… wow I’m old… Anyway, what I lack in depth I make up for in breadth and being the “go-to” guy wherever I work when someone needs to “get stuff done”. Let’s see if I can take some of that experience and put it to practical use to help new people get up to speed faster, learn things more effectively, and become that go-to guy. First off… make sure you…
    Know The Basics
    I don’t have the time to teach new developers the basics, but you gotta know them. I’ve only been “taught” two languages.. Fortran 77 and C… everything else I’ve picked up from “doing”. I HAD to know the basics though, and all new developers need to understand the very basics of development. 97.23% of all languages will have the following:
    Variables
    Functions
    Arrays
    If statements
    For loops / While loops
    If you think about it, most development is “if this, do this… or while this, do this…”. “This” may be some unique method to your language or something you develop, but the basics are the basics. YES there are MANY other development topics you need to understand, but you shouldn’t be scratching your head trying to figure out what a ”for loop” is… (Also learn about classes and hashtables as quickly as possible). Once you have the basics down it makes it much easier to…
    Learn By Doing
    This may just apply to me and my warped brain. I don’t learn a new technology by reading or hearing someone speak about it. I learn by doing. It does me no good to try and learn all of the intricacies of a new language or technology inside-and-out before getting my hands dirty. Just show me how to do one thing… let me get that working… then show me how to do the next thing.. let me get that working… Now, let’s see what I can figure out on my own. Okay.. now it starts to make sense. I see how the language works, I can step through the code, and before you know it.. I’m productive in a new technology. Be careful here though…. make sure you…
    Don’t Reinvent The Wheel
    People have been writing code for what… 50+ years now? So, why are you trying to tackle ANYTHING without first Googling it with Bing to see what others have done first? When I was first learning C# (I had come from a Java background) I had to call a web service. Sure! No problem!
I’d done this many times in Java. So, I proceeded to write an HTTP Handler, called the Web Service and it worked like a charm!!! Probably about 2.3 seconds after I got it working completely someone says to me “Why didn’t you just add a Web Reference?” Really? You can do that? oops… I just wasted a lot of time. Before undertaking the development of any sort of utility method in a new language, make sure it’s not already handled for you… Okay… you are starting to write some code and are curious about the possibilities? Well… don’t just sit there…
Try It And See What Happens
This is actually one of my biggest pet peeves. “So… ‘x++’ works in C#, but does it also work in JavaScript?” Really? Did you just ask me that? In the time it took for you to type that email, press the send button, for me to receive the email, get around to reading it, and reply with “yes”, you could have tested it 47 times and known the answer! Just TRY it! See what happens! You aren’t doing brain surgery. You aren’t going to kill anyone, and you BETTER not be developing in production. So, you are not going to crash any production systems!! Seriously! Get off your butt and just try it yourself. The extra added benefit is that if it doesn’t work, the absolute best way to learn is to…
Learn From Your Failures
I don’t know about you… but if I screw up and something doesn’t work, I learn A LOT more debugging my problem than if everything magically worked. It’s okay that you aren’t perfect! Not everyone can be me? In the same vein… don’t ask someone else to debug your problem until you have made a valiant attempt to do so yourself. There’s nothing quite like stepping through code line by line to see what it’s REALLY doing… and you’ll never feel more stupid sometimes than when you realize WHY it’s not working.. but you realize... you learn... and you remember. There is nothing wrong with failure as long as you learn from it. As you start writing more and more and more code make sure that you ALWAYS…
Develop for Production
You will soon learn that the “prototype” you wrote last week to show as a “proof of concept” is going to go directly into production no matter how much you beg and plead and try to explain it’s not ready to go into production… it’s going to go straight there.. and it’s like herpes.. it doesn’t go away and there’s no fixing it once it’s in there. So, why not write ALL your code like it will be put in production? It MIGHT take a little longer, but in the long run it will be easier to maintain, get help on, and you won’t be embarrassed that it’s sitting on a production server for everyone to use and see. So, now that you are getting comfortable and writing code for production it is important to remember the…
KISS Principle… Learn It… Love It… Keep It Simple Stupid
Seriously.. don’t try to show how smart you are by writing the most complicated code in history. Break your problem up into discrete steps and write each step. If it turns out you have some redundancy, you can always go back and tweak your code later. How bad is it when you write code that LOOKS cocky? I’ve seen it before… some of the most abstract and complicated classes when a class wasn’t even needed! Or the most elaborate unreadable code jammed into one really long line when it could have been written in three lines, performed just as well, and been SOOO much easier to maintain. Keep it clear and simple.. baby steps people.
This will help you learn the technology, debug problems, AND it will help others help you find your problems if they don’t have to decipher the Dead Sea Scrolls just to figure out what you are trying to do…. Really.. don’t be that guy… try to curb your ego and… Keep an Open Mind No matter how smart you are… how fast you type… or how much you get paid, don’t let your ego get in the way. There is probably a better way to do everything you’ve ever done. Don’t become so cocky that you can’t think someone knows more than you. There’s a lot of brilliant, helpful people out there willing to show you tricks if you just give them a chance. A very super-awesome developer once told me “So what if you’ve been writing code for 10 years or more! Does your code look basically the same? Are you not growing as a developer?” Those 10 years become pretty meaningless if you just “know” that you are right and have not picked up new tips, tricks, methods, and patterns along the way. Learn from others and find out what’s new in development land (you know you don’t have to specifically use pointers anymore??). Along those same lines… If it’s not working, first assume you are doing something wrong. You have no idea how much it annoys people who are trying to help you when you first assume that the help they are trying to give you is wrong. Just MAYBE… you… the person learning is making some small mistake? Maybe you didn’t describe your problem correctly? Maybe you are using the wrong terminology? “I did exactly what you said and it didn’t work.”  Oh really? Are you SURE about that? “Your solution doesn’t work.”  Well… I’m pretty sure it works, I’ve used it 200 times… What are you doing differently? First try some humility and appreciation.. it will go much further, especially when it turns out YOU are the one that is wrong. When all else fails…. Try Professional Training Some people just don’t have the mindset to go and figure stuff out. It’s a gift and not everyone has it. If everyone could do it I wouldn’t have a job and there wouldn’t be professional training available.  So, if you’ve tried everything else and no light bulbs are coming on, contact the experts who specialize in training. Be careful though, there is bad training out there. Want to know the names of some good places? Just shoot me a message and I’ll let you know. I’m boycotting endorsing Andrew Connell anymore until I get that free course dangit!! So… that’s it.. that’s all I got right now. Maybe you thought all of this is common sense, maybe you think I’m smoking crack. If so, don’t just sit there, there’s a comments section for a reason. Finally, what about you? What tips do you have to help this aspiring to learn the dark arts??

    Read the article

  • On Writing Blogs

    - by Tony Davis
    Why are so many blogs about IT so difficult to read? Over at SQLServerCentral.com, we do a special subscription-only newsletter called Database Weekly. Every other week, it is my turn to look through all the blogs, news and events that might be of relevance to people working with databases. We provide the title, with the link, and a short abstract of what you can expect to read. It is a popular service with close to a million subscribers. You might think that this is a happy and fascinating task. Sometimes, yes. If a blog comes to the point quickly, and says something both interesting and original, then it has our immediate attention. If it backs up what it says with supporting material, then it is more-or-less home and dry, featured in DBW's list. If it also takes trouble over the formatting and presentation, maybe with an illustration or two and any code well-formatted, then we are agog with joy and it is marked as a must-visit destination in our blog roll. More often, however, a task that should be fun becomes a routine chore, and the effort of trawling so many badly-written blogs is enough to make any conscientious Health & Safety officer whistle through their teeth at the risk to the editor's spiritual and psychological well-being. And yet, frustratingly, most blogs could be improved very easily. There is, I believe, a simple formula for a successful blog. First, choose a single topic that is reasonably fresh and interesting. Second, get to the point quickly; explain in the first paragraph exactly what the blog is about, and then stay on topic. In writing the first paragraph, you must picture yourself as a pilot, hearing the smooth roar of the engines as your plane gracefully takes air. Too often, however, the accompanying sound is that of the engine stuttering before the plane veers off the runway into a field, and a wheel falls off. The author meanders around the topic without getting to the point, and takes frequent off-radar diversions to talk about themselves, or the weather, or which friends have recently tagged them. This might work if you're J.D Salinger, or James Joyce, but it doesn't help a technical blog. Sometimes, the writing is so convoluted that we are entirely defeated in our quest to shoehorn its meaning into a simple summary sentence. Finally, write simply, in plain English, and in a conversational way such that you can read it out loud, and sound natural. That's it! If you could also avoid any references to The Matrix then this is a bonus but is purely personal preference. Cheers, Tony.

    Read the article

  • AutoVue at the Oracle Asset Lifecycle Management Summit

    - by celine.beck
    I recently had the opportunity to attend and present the integration between AutoVue and Primavera P6 during the Oracle ALM Summit, which was held in March at Redwood Shores, on Oracle Headquarters grounds. The ALM Summit brought together over 300 Oracle maintenance practitioners who endured the foggy and rainy San Francisco weather to attend the 4th edition of this Oracle-driven conference. Attendees have roles in maintenance management and IT. Following a general session, Ralph Rio from ARC Advisory Group provided a very interesting keynote session discussing Asset Management directions, both in the short and long run. An interesting point that Ralph raised is that most organizations have done a good job of improving performance at the design/build, operate and maintain, and portfolio management phases by leveraging solutions like Asset Lifecycle Management and Project & Portfolio Management solutions; however, there seems to be room for improvement in between those phases, when information flows from one group to the other, during the data handover phase, or when the time comes to update or modify drawings to reflect the reality of physical assets. This is where AutoVue comes into play. By integrating with enterprise applications like content management systems, asset lifecycle management applications and project management solutions, AutoVue can be a real process enabler, streamlining information flows from concept/design to decommissioning and ensuring that all project stakeholders have access to asset information and engineering data throughout the asset lifecycle. AutoVue's built-in digital annotation capabilities allow maintenance workers and technicians to report changes in configuration and visually capture the delta between as-built and as-maintained versions of asset documents. This information can then be easily handed over to engineers, who can identify changes and incorporate these modifications into the drawings during the next round of document revisions. PPL Power Generation, an electric utility headquartered in Allentown, Pennsylvania, discussed this usage of AutoVue during an interesting webcast around AutoVue's role in the utilities space. After the keynote sessions, participants broke off into product-centric tracks around Oracle's Asset Lifecycle Management solutions (E-Business Suite, PeopleSoft, and JD Edwards). The second day of the conference was the occasion for us to present the integration between AutoVue and Primavera P6 to the Maintenance Summit audience. The presentation was a great success and generated much discussion with partners and customers during breaks. People seemed highly interested in learning more about our plans for integrating AutoVue and Primavera P6 with Oracle's ALM solutions... stay tuned for further information on the subject!

    Read the article

  • Oracle AIM, Oracle ABF, and Siebel Results Roadmap Officially Retired as of January 31, 2011

    - by tom.spitz
    It seems somehow appropriate that the first entry of the Oracle® Unified Method (OUM) blog is about the retirement of several of our legacy methods, most notably AIM Foundation. If you're reading this, you're probably aware that Oracle has been developing OUM to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. As Oracle has continued to acquire new companies and technologies, it has become essential that we also create a single, unified language and approach for implementation - across the Oracle ecosystem. With the release of OUM 5.1 in 2009, OUM provided full support for all enterprise application implementation projects including Oracle E-Business Suite R12, Siebel CRM, PeopleSoft Enterprise, and JD Edwards EnterpriseOne projects. In 2010, we released OUM training that supports the use of OUM on these types of projects. That support represented a major milestone in the evolution of OUM and enabled implementers to transition to OUM. Consequently, we announced a staggered retirement schedule for Oracle's legacy methods. On January 31, 2011 we announced the retirement of:
    Oracle Application Implementation Method (AIM)
    Oracle AIM for Business Flows (ABF)
    Siebel Results Roadmap
    Later this year, we will announce the retirement of Compass - the legacy PeopleSoft method - and Data Warehouse Method Fast Track. OUM is available free of charge to Oracle Gold, Platinum, and Diamond partners through the Oracle Partner Network (OPN) [OUM on OPN]. The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for an engagement of two weeks or longer meeting some additional minimum criteria. There will be more retirement announcements in the coming months. For now it's "Adios AIM." Thanks for the memories...

    Read the article

  • Understanding Oracle: Demystifying OpenWorld

    - by mseika
    Seminar: Wednesday 24th October 2012: Avnet, Bracknell
    Oracle OpenWorld is the world's largest event dedicated to helping enterprises harness the power of technology, held during a full week in October. Oracle Corporation always uses Oracle OpenWorld to make its most important product announcements, and this year is no exception. We realise that not all our partners can attend this prestigious event in San Francisco, primarily due to time and cost pressures. Oracle OpenWorld is the only conference that goes this deep and wide with Oracle technology, providing thousands of sessions and hundreds of demonstrations geared toward helping partners and customers get better results with the technology they have, and plan strategically for the technology they will need to keep ahead of the competition in the years to come. With the sheer number of announcements planned, it is sometimes difficult to find your way through the fog and identify the opportunities relevant to your business to take advantage of in the coming year. So why not engage with Oracle's UK team via Avnet and get the announcements shared with you face-to-face, in the UK? As a key Value Added Distributor of Oracle Applications, Technology and Hardware solutions, Avnet has been attending Oracle OpenWorld for a number of years and invites our partners to attend a half day summary event which will share the keynote announcements. We will also help prioritise for you the announcements of greatest interest and business opportunity for the UK channel.
    Agenda
    12:00-13:15 | Registration and lunch
    13:15-14:00 | Introductions and Key Hardware announcements - Discover how Oracle's complete and integrated application-aware virtualization solutions, including virtualization for SPARC and x86 architectures, can help you gain better efficiencies across your business. Get updates on how Oracle storage products and solutions can accelerate database performance, improve application responsiveness, and meet your data protection needs.
    14:00-14:15 | Q&A and Break
    14:15-15:00 | Key Technology announcements - Technology products, encompassing Oracle's Database 12c and Middleware, are revolutionizing the industry with record-breaking performance, helping customers consolidate onto private clouds and achieve high returns on investment.
    15:00-15:15 | Q&A and Break
    15:15-16:00 | Key Applications announcements - Presentations focused on Oracle's strategy and vision for its applications business, including Oracle E-Business Suite; Oracle's PeopleSoft, JD Edwards, Siebel, Hyperion, and Agile products; and the newly available Oracle Fusion Applications.
    16:00-16:30 | Oracle-on-Oracle announcements & business opportunities with Avnet - Learn about Oracle's cloud computing and Oracle-on-Oracle strategies and find out more about Oracle's engineered systems for the broad market.
    16:30 | Close
    * Please note agenda may be subject to change
    What do you need to do now? Register now or, for more information, email our Oracle events team at [email protected]. N.B. Places are limited, so please register early to avoid disappointment.

    Read the article

  • Oracle Optimized Solutions at Oracle OpenWorld 2012

    - by ferhatSF
    Have you registered for Oracle OpenWorld 2012 in San Francisco from September 30 to October 4? Visit the Oracle OpenWorld 2012 site today for registration and more information. Come join us to hear how Oracle Optimized Solutions can help you save money, reduce integration risks, and improve user productivity. Oracle Optimized Solutions are designed, pre-tested, tuned and fully documented architectures for optimal performance and availability. They provide written guidelines to help size, configure, purchase and deploy enterprise solutions that address common IT problems. Built with flexibility in mind, Oracle Optimized Solutions can be deployed as complete solutions or easily tailored to meet your specific needs - they are proven to save money, reduce integration risks and improve user productivity. Here is a preview of the planned Oracle OpenWorld sessions(*) on Oracle Optimized Solutions.
    October 1, 2012 (Monday)
    12:15 PM | CON7916 | Accelerate Oracle E-Business Suite Deployment with SPARC SuperCluster | Moscone West - 2001
    03:15 PM | GEN9691 | General Session: Accelerate Your Business with the Oracle Hardware Advantage | Moscone North - Hall D
    04:45 PM | CON4821 | Building a Flexible Enterprise Cloud Infrastructure on Oracle SPARC Systems | Moscone West - 2001
    October 2, 2012 (Tuesday)
    10:15 AM | CON4561 | Backup-and-Recovery Best Practices with Oracle Engineered Systems Products | Moscone South - 252
    11:45 AM | CON3851 | Optimizing JD Edwards EnterpriseOne on SPARC T4 Servers for Best Performance | Moscone West - 2000
    01:15 PM | GEN11472 | General Session: Breakthrough Efficiency in Private Cloud Infrastructure | Moscone West - 3014
    01:15 PM | CON4600 | Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization | Moscone South - 252
    05:00 PM | CON9465 | Next-Generation Directory: Oracle Unified Directory | Moscone West - 3008
    05:00 PM | CON4088 | Accelerate Your SAP Landscape with the Oracle SPARC SuperCluster | Moscone West - 2001
    05:00 PM | CON7743 | High-Performance Security for Oracle Applications Using SPARC T4 Systems | Moscone West - 2000
    05:00 PM | CON3857 | Archive Strategies for 100 Percent Data Availability | Moscone South - 270
    October 3, 2012 (Wednesday)
    10:15 AM | CON6528 | Configure Oracle Hybrid Columnar Compression to Optimize Query Database Performance up to 10x | Moscone South - 252
    11:45 AM | CON2590 | Breakthrough in Private Cloud Management on SPARC T-Series Servers | Moscone South - 270
    01:15 PM | CON4289 | Oracle Optimized Solution for Siebel CRM at ACCOR | Moscone West - 2000
    05:00 PM | CON7570 | Improve PeopleSoft HCM Performance and Reliability with SPARC SuperCluster | Moscone South - 252
    * Schedule subject to change
    In addition, there will be Oracle Optimized Solutions Hands-On-Labs sessions planned. Please enroll ahead of time as space is limited:
    Oracle Optimized Solutions: Hands on Labs in Oracle OpenWorld
    Place: Marriott Marquis - Salon 14/15
    Monday October 1, 2012, 01:45 PM | HOL9868 | Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c
    Monday October 1, 2012, 03:15 PM | HOL9907 | Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility
    Wednesday October 3, 2012, 05:00 PM | HOL9870 | x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
    Thursday October 4, 2012, 11:15 AM | HOL9869 | 0 to Database Backup and Recovery in 60 Minutes
    Oracle Optimized Solutions executives and experts will also be at hand for discussions and follow ups.
And don’t forget to catch live demonstrations of our complete Oracle Optimized Solutions while at Oracle OpenWorld 2012 in San Francisco. We recommend the use of the Schedule Builder tool to plan your visit to the conference and for pre-enrollment in sessions of your interest. We hope to see you there!

    Read the article

  • Oracle 5th Annual Maintenance Summit - Orlando March 22-23, 2011

    - by stephen.slade(at)oracle.com
    It's not too late to register today or tomorrow for this exclusive 'Maintenance Professionals Only' event. In 4 tracks, 27 customer and partner speakers will present case studies and success stories in these 'no-sell zone' sessions. The take-aways will make attending worth it! This "2 in 1" event combines a Customer Showcase featuring Orlando Utilities Commission (OUC) and the Maintenance Summit. OUC - the local municipal utility providing residential, commercial, and industrial customers with clean, reliable, and affordable electric and water services - will open the event with their CIO as keynote speaker, and host tours of their fleet, facility, and power generation operations. Recognized as a green leader, OUC has been the most reliable power provider in Florida for the past 9 years due, in large part, to the operational efficiencies of its plant and asset maintenance systems. This Summit will feature breakout session tracks for EBS, JD Edwards, PeopleSoft and Sustainability. Highlights include over 12 Oracle solution demo stations, over 25 interactive breakout sessions, a pool-side networking reception with live band, a partner exhibit pavilion and a special appearance by Sean D. Tucker, Team Oracle Stunt-Pilot!
    Dates: March 22-23, 2011
    Location: Orlando World Center Marriott, Orlando, Florida
    Evite: http://www.oracle.com/us/dm/h2fy11/65971-nafm10019768mpp191c003-oem-304204.html
    Highlights: Keynotes, Oracle Expert Demo Stations, Interactive Breakout Sessions, Networking Reception, Partner Pavilion, Speakers
    Tracks: EBS, JDE, PSFT, Sustainability
    Tours: Orlando Utility Operations, Fleet and Facility
    Oracle Demo Stations: Agile, AutoVue, Primavera, MOC/SSDM, Utilities, PIM, PDQ, UCM, On Demand, Business Accelerators, Facilities Work Management, EBS Enterprise Asset Management, PeopleSoft Maintenance Management, Technology, Hardware/Sun.
    Partner-Sponsors: Viziya, Global PTM, MiPro, Asset Management Solutions, Venutureforth, Impac Services, EAM Master, LLC, Meridium

    Read the article

  • Office 2010: It’s not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing.  The bits have left the (product team’s) building.  Will you upgrade? This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word.  There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions.  Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform.  Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms.  We’ve come a long way baby.  Or have we? As it does every three years or so, debate will now start to rage on over whether we need a “14th” version the PC platform’s standard word processor, or a “13th” version of the spreadsheet.  If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative.  Thing is, that premise is valid for certain customers and not others. The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services.  The core apps thus grow in mission: Excel is a BI tool.  Word is a collaborative editorial system for the production of publications.  PowerPoint is a media production platform for for live presentations and, increasingly, for delivering more effective presentations online.  Outlook is a time and task management system.  Access is a rich client front-end for data-driven self-service SharePoint applications.  OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents. Google Docs and other cloud productivity platforms like Zoho don’t really do these things.  And there is a growing chorus of voices who say that they shouldn’t, because those ancillary capabilities are over-engineered, over-produced and “under-necessary.”  They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office’s core capabilities, the ones people really need, have become commoditized. It’s hard to take sides in that argument, because different people, and the different companies that employ them, have different needs.  For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers?  I need my time back.  I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them.  And I also need my customers to be able to get more value out of the services I provide. Help me triage my inbox, help me get proposals done more quickly and make them easier to read.  Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations.  And, since I’m in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online. Those are my criteria.  And, with those in mind, Office 2010 is looking like a worthwhile upgrade.  
Perhaps it’s not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling.  I provide a brief roundup of them here.  It’s admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively. Across the Suite More than any other, this release of Office aims to give collaboration a real workout.  In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues’ changes appearing in near real-time.  Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network. The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook’s main window).  It’s also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs. Microsoft has also taken the File menu (which was the “Office Button” menu in the 2007 release) and made it into a full-screen “Backstage” view where document-wide operations, like saving, printing and online publishing are performed. And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed.  That’s much nicer than stripping it off, or adding it back, afterwards. And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents.  I know that’s useful to me, because I often document or critique applications and need to show them in action.  For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we’ll have to see whether this feature becomes popular.   Excel At first glance, Excel 2010 looks and acts nearly identically to the 2007 version.  But additional glances are necessary.  It’s important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet.  And it’s also important to understand that Excel wasn’t designed to handle such workloads past a certain scale.  That all changes with this release. The first reason things change is that Excel has been tuned for performance.  It’s been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version.  For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to. On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is).  But most important, Excel 2010 supports the new PowerPIvot add-in which brings true self-service BI to Office.  PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.  
Rather than forcing users to build “spreadmarts” or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance. And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back.  Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home.  Support for write-back is long overdue, but I’m still glad it’s there, because I had almost given up on it.   PowerPoint This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool.  Whether or not it’s successful in this pursuit, and if offering this is even a sensible goal, is another question. Regardless, the new capabilities are kind of interesting.  A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint’s transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output. Congruent with that is PowerPoint’s new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft’s own LiveMeeting.  Slides presented through this broadcast feature retain full color fidelity and transitions and animations are preserved as well.   Outlook Microsoft’s ubiquitous email/calendar/contact/task management tool gains long overdue speed improvements, especially against POP3 email accounts.  Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live).  A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in. I don’t know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn.  But among the other features, there’s very little not to like.   OneNote To me, OneNote is the part of Office that just keeps getting better.  There is one major caveat to this, which I’ll cover in a moment, but let’s first catalog what new stuff OneNote 2010 brings.  The best part of OneNote, is the way each of its versions have managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation.  None of that is new to 2010, but the new version does make creation of pages and subpages easier and also makes simple work out of promoting and demoting pages from sub page to full page status.  And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page’s name in double-square-brackets (“[[…]]”) creates a link to it. OneNote is also great at integrating content outside of its notebooks.  
With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it’s an Office document or a Web page, links the notes you’re typing, at the time, to it.  A single click from your notes later on will bring that same document or Web page back on-screen.  Embedding content from Web pages and elsewhere is also easier.  Using OneNote’s Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area.  Using the Send to OneNote buttons in Internet Explorer and Outlook result in the same choice. Collaboration gets better too.  Real-time multi-author editing is better accommodated and determining author lineage of particular changes is easily carried out. My one pet peeve with OneNote is the difficulty using it when I’m not one a Windows PC.  OneNote’s main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers.  Since I have an Android phone and an iPad, I am practically forced to use it.  However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7.  In the mean time, it turns out that using OneNote’s Email Page ribbon button lets you move a OneNote page easily into EverNote (since every EverNote account gets a unique email address for adding notes) and that Evernote’s Email function combined with Outlook’s Send to OneNote button (in the Move group of the ribbon’s Home tab) can achieve the reverse.   Access To me, the big change in Access 2007 was its tight integration with SharePoint lists.  Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint’s Access Services.  Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself. To me this makes all kinds of sense.  Although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010’s new Business Connectivity Services.  Each of these tools provide overlapping data entry and data maintenance functionality. But if you do prefer Access, then you’ll like  things like templates and application parts that make it easier to get off the blank page.  These features help you quickly get tables, forms and reports built out.  To make things look nice, Access even gets its own version of Excel’s Conditional Formatting feature, letting you add data bars and data-driven text formatting.   Word As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite’s flagship word processing application. So are there any enhancements in Word worth mentioning?  I think so.  The most important one has to be the collaboration features.  Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited.  Word also shows you who’s editing what and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently. 
There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck.  Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy.  Not earth shattering, but nice.   Other Apps and Summarized Findings What about InfoPath, Publisher, Visio and Project?  I haven’t looked at them yet.  And for this post, I think that’s fine.  While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post service the general purpose needs of most users.  And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go.  But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake.  I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • Sustainability Activities at Oracle OpenWorld

    - by Evelyn Neumayr
    Close to 50,000 participants will come to San Francisco for the Oracle OpenWorld and JavaOne events, held September 30-October 4, 2012 at Moscone Center. Oracle is very conscious of the impact that these events have on the environment and, as part of its ongoing commitment to sustainability, has developed a sustainable event program (now in its fifth year) that aims to maximize positive benefits and minimize negative impacts in a variety of ways. Click here for more details. At the Oracle OpenWorld conference, there will be many sessions and even a hands-on lab which discuss the sustainability solutions that Oracle provides for our customers. I wanted to highlight a few of those sessions here so that, if you will be at Oracle OpenWorld, you can make sure to attend them. One of the most compelling sessions promises to be our "Eco-Enterprise Innovation Awards and the Business Case for Sustainability" session on Wednesday, October 3 from 10:15 a.m. to 11:15 a.m. in Moscone West 3005. Oracle Chairman of the Board Jeff Henley, Chief Sustainability Officer Jon Chorley, and other Oracle executives will honor select customers with Oracle's Eco-Enterprise Innovation award. This award recognizes customers and their respective partners who rely on Oracle products to support their green business practices in order to reduce their environmental impact, while improving business efficiencies and reducing costs. Another interesting session is "Tracking, Reporting, and Reducing Environmental Impact with Oracle Solutions", which occurs on Monday, October 1 from 4:45 p.m. to 5:45 p.m. in Moscone West Room 2022. This session covers Oracle's overall sustainability strategy as well as Oracle Environmental Accounting and Reporting (EA&R), which leverages Oracle ERP and BI solutions for accurate, efficient tracking of energy, emissions, and other environmental data. If you want more details, make sure to visit the hands-on lab titled "Oracle Environmental Accounting & Reporting for Integrated Sustainability Reporting". This hour-long lab will take place on Tuesday, October 2 at 5:00 p.m. in the Marriott Marquis Hotel-Nob Hill CD. Here you can learn how to use Oracle EA&R to collect sustainability-related data in an efficient and reliable manner as part of existing business processes in Oracle E-Business Suite or JD Edwards EnterpriseOne. Register for this hands-on lab here.

    Read the article

  • WPF CheckBox style with the TextWrapping

    - by Shurup
    I need TextWrapping in a WPF CheckBox. Please look at these two samples: <CheckBox> <TextBlock TextWrapping="Wrap" Text="_This is a long piece of text attached to a checkbox."/> </CheckBox> <CheckBox> <AccessText TextWrapping="Wrap" Text="_This is a long piece of text attached to a checkbox."/> </CheckBox> If I use a TextBlock in the Content of the CheckBox, the check element (vertical alignment is top) and the text display properly, but the accelerator does not. If I use an AccessText in the Content of the CheckBox, the check element is displayed incorrectly (vertical alignment is center). How can I change the style of the elements to display this CheckBox correctly?
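    One possible workaround, sketched below as an assumption on my part rather than something from the question: keep the AccessText so the "_T" access key still works, give it TextWrapping, and top-align the CheckBox content via VerticalContentAlignment. The stand-alone C# sketch builds the same UI in code only to stay self-contained; the class name and sizes are illustrative, and whether the check glyph actually follows the content alignment depends on the theme's control template.

    using System;
    using System.Windows;
    using System.Windows.Controls;

    // Hypothetical stand-alone sketch (assumes a WPF project with the usual references).
    class WrappingCheckBoxSketch
    {
        [STAThread]
        static void Main()
        {
            // AccessText keeps the "_T" accelerator; TextWrapping makes the text wrap.
            var content = new AccessText
            {
                Text = "_This is a long piece of text attached to a checkbox.",
                TextWrapping = TextWrapping.Wrap
            };
            var checkBox = new CheckBox
            {
                Content = content,
                // Ask the template not to center the content vertically (theme-dependent).
                VerticalContentAlignment = VerticalAlignment.Top,
                MaxWidth = 200,
                Margin = new Thickness(10)
            };
            new Application().Run(new Window { Content = checkBox, Width = 240, Height = 160 });
        }
    }

    The same idea can be expressed in XAML by setting VerticalContentAlignment="Top" on the CheckBox and keeping the wrapped AccessText as its content; if the glyph still centers, the CheckBox template itself would have to be restyled.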

    Read the article

  • Problem setting up Master-Master Replication in MySQL

    - by Andrew
    I am attempting to setup Master-Master Replication on two MySQL database servers. I have followed the steps in this guide, but it fails in the middle of Step 4 with SHOW MASTER STATUS; It simply returns an empty set. I get the same 3 errors in both servers' logs. MySQL errors on SQL1: [ERROR] Failed to open the relay log './sql1-relay-bin.000001' (relay_log_pos 4) [ERROR] Could not find target log during relay log initialization [ERROR] Failed to initialize the master info structure MySQL Errors on SQL2: [ERROR] Failed to open the relay log './sql2-relay-bin.000001' (relay_log_pos 4) [ERROR] Could not find target log during relay log initialization [ERROR] Failed to initialize the master info structure The errors make no sense because I'm not referencing those files in any of my configurations. I'm using Ubuntu Server 10.04 x64 and my configuration files are copied below. I don't know where to go from here or how to troubleshoot this. Please help. Thanks. /etc/mysql/my.cnf on SQL1: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = <SQL1's IP> # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. 
server-id = 1 replicate-same-server-id = 0 auto-increment-increment = 2 auto-increment-offset = 1 master-host = <SQL2's IP> master-user = slave_user master-password = "slave_password" master-connect-retry = 60 replicate-do-db = db1 log-bin= /var/log/mysql/mysql-bin.log binlog-do-db = db1 binlog-ignore-db = mysql relay-log = /var/lib/mysql/slave-relay.log relay-log-index = /var/lib/mysql/slave-relay-log.index expire_logs_days = 10 max_binlog_size = 500M # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ /etc/mysql/my.cnf on SQL2: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = <SQL2's IP> # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! 
#general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. server-id = 2 replicate-same-server-id = 0 auto-increment-increment = 2 auto-increment-offset = 2 master-host = <SQL1's IP> master-user = slave_user master-password = "slave_password" master-connect-retry = 60 replicate-do-db = db1 log-bin= /var/log/mysql/mysql-bin.log binlog-do-db = db1 binlog-ignore-db = mysql relay-log = /var/lib/mysql/slave-relay.log relay-log-index = /var/lib/mysql/slave-relay-log.index expire_logs_days = 10 max_binlog_size = 500M # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/

    Read the article

  • IBatis: "Unable to cast object of type 'Castle.Proxies.IDaoProxy' to type 'SysProt.Dao.ICustomerDao'."

    - by j_maly
    I am trying to set up IBatis.NET. I have downloaded the sources from http://mybatisnet.googlecode.com/svn/branches/ibatis-1-maintenance/src. This is my initialization DomDaoManagerBuilder builder = new DomDaoManagerBuilder(); builder.Configure("dao.config"); IDaoManager daoManager = DaoManager.GetInstance("SqlMapDao"); customerDao = daoManager[typeof(ICustomerDao)]; ICustomerDao cd = (ICustomerDao) customerDao; The last line throws InvalidCastException "Unable to cast object of type 'Castle.Proxies.IDaoProxy' to type 'SysProt.Dao.ICustomerDao'." I am not sure, what I did wrong, my dao.config files contains Here are the definitions of the classes/interfaces: public interface ICustomerDao { Customer Load(long id); } public class CustomerDao: BaseDao, ICustomerDao { public Customer Load(long id) { throw new NotImplementedException(); } } public class BaseDao : IDao { protected DaoSession GetContext() { IDaoManager daoManager = DaoManager.GetInstance(this); return (daoManager.LocalDaoSession as DaoSession); } }

    Read the article

  • why is LZMA SDK (7-zip) so slow

    - by Tono Nam
    I find 7-Zip great and I would like to use it in .NET applications. I have a 10MB file (a.001) and it takes 2 seconds to encode. It would be nice if I could do the same thing in C#. I have downloaded the http://www.7-zip.org/sdk.html LZMA SDK C# source code. I basically copied the CS directory into a console application in Visual Studio: Then I compiled and everything compiled smoothly. So in the output directory I placed the file a.001, which is 10MB in size. In the main method that came with the source code I placed: [STAThread] static int Main(string[] args) { // e stands for encode args = "e a.001 output.7z".Split(' '); // added this line for debug try { return Main2(args); } catch (Exception e) { Console.WriteLine("{0} Caught exception #1.", e); // throw e; return 1; } } When I execute the console application, the application works great and I get the output a.7z in the working directory. The problem is that it takes so long. It takes about 15 seconds to execute! I have also tried the http://stackoverflow.com/a/8775927/637142 approach and it also takes very long. Why is it 10 times slower than the actual program? Also, even if I set it to use only one thread, it still takes much less time (3 seconds vs 15): (Edit) Another possibility: could it be because C# is slower than assembly or C? I notice that the algorithm does a lot of heavy operations. For example, compare these two blocks of code. They both do the same thing: C void main() { time_t now; int i,j,k,x; long counter ; counter = 0; now = time(NULL); /* LOOP */ for(x=0; x<10; x++) { counter = -1234567890 + x+2; for (j = 0; j < 10000; j++) for(i = 0; i< 1000; i++) for(k =0; k<1000; k++) { if(counter > 10000) counter = counter - 9999; else counter= counter +1; } printf (" %d \n", time(NULL) - now); // display elapsed time } printf("counter = %d\n\n",counter); // display result of counter printf ("Elapsed time = %d seconds ", time(NULL) - now); gets("Wait"); } output C# static void Main(string[] args) { DateTime now; int i, j, k, x; long counter; counter = 0; now = DateTime.Now; /* LOOP */ for (x = 0; x < 10; x++) { counter = -1234567890 + x + 2; for (j = 0; j < 10000; j++) for (i = 0; i < 1000; i++) for (k = 0; k < 1000; k++) { if (counter > 10000) counter = counter - 9999; else counter = counter + 1; } Console.WriteLine((DateTime.Now - now).Seconds.ToString()); } Console.Write("counter = {0} \n", counter.ToString()); Console.Write("Elapsed time = {0} seconds", DateTime.Now - now); Console.Read(); } Output Note how much slower C# was. Both programs were run outside Visual Studio in Release mode. Maybe that is the reason why it takes so much longer in .NET than in C++. Conclusion I cannot figure out what is causing the problem. I guess I will use 7z.dll and invoke the necessary methods from C#. A library that does that is at http://sevenzipsharp.codeplex.com/, and that way I am using the same library that 7-Zip is using: // don't forget to add a reference to SevenZipSharp, located at the link I provided static void Main(string[] args) { // load the dll SevenZip.SevenZipCompressor.SetLibraryPath(@"C:\Program Files (x86)\7-Zip\7z.dll"); SevenZip.SevenZipCompressor compress = new SevenZip.SevenZipCompressor(); compress.CompressDirectory("MyFolderToArchive", "output.7z"); }
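    One thing worth ruling out before blaming C# itself (this is my own suggestion, not something from the post): the benchmark above prints (DateTime.Now - now).Seconds, which reports only the seconds component and wraps at 60, and a Debug/F5 run of the LZMA SDK is typically several times slower than a Release build started from the command line. A Stopwatch-based harness avoids the timing pitfall; Main2 below stands in for the SDK's console entry point and is not defined in this sketch.

    using System;
    using System.Diagnostics;

    class EncodeTimingSketch
    {
        static void Main()
        {
            var args = "e a.001 output.7z".Split(' '); // e stands for encode
            var sw = Stopwatch.StartNew();
            // Main2(args);  // call into the LZMA SDK console code here (omitted in this sketch)
            sw.Stop();
            // TotalSeconds is the full elapsed time; .Seconds would wrap at 60.
            Console.WriteLine("Encode took {0:F1} s", sw.Elapsed.TotalSeconds);
        }
    }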

    Read the article

  • Why won't OpenCV compile in NVCC?

    - by zenna
    Hi there, I am trying to integrate CUDA and OpenCV in a project. The problem is that OpenCV won't compile when NVCC is used, while a normal C++ project compiles just fine. This seems odd to me, as I thought NVCC passed all host code to the C/C++ compiler, in this case the Visual Studio compiler. The errors I get are: c:\opencv2.0\include\opencv\cxoperations.hpp(1137): error: no operator "=" matches these operands operand types are: const cv::Range = cv::Range c:\opencv2.0\include\opencv\cxoperations.hpp(2469): error: more than one instance of overloaded function "std::abs" matches the argument list: function "abs(long double)" function "abs(float)" function "abs(double)" function "abs(long)" function "abs(int)" argument types are: (ptrdiff_t) So my question is: why the difference, considering the same compiler is (or should be) being used, and secondly, how could I remedy this?

    Read the article

  • Play Framework: Error getting sequence nextval using H2 in-memory database

    - by alexhanschke
    As the title suggests, I get an error running Play 2.0.1 Tests using a FakeApplication w/ H2 in memory. I set up a basic unit test: public class ModelTest { @Test public void checkThatIndustriesExist() { running(fakeApplication(inMemoryDatabase()), new Runnable() { public void run() { Industry industry = new Industry(); industry.name = "Some name"; industry.shortname = "some-name"; industry.save(); assertThat(Industry.find.all()).hasSize(1); } }); } Which yields the following exception: [info] test.ModelTest [error] Test test.ModelTest.checkThatIndustriesExist failed: Error getting sequence nextval [error] at com.avaje.ebean.config.dbplatform.SequenceIdGenerator.getMoreIds(SequenceIdGenerator.java:213) [error] at com.avaje.ebean.config.dbplatform.SequenceIdGenerator.loadMoreIds(SequenceIdGenerator.java:163) [error] at com.avaje.ebean.config.dbplatform.SequenceIdGenerator.nextId(SequenceIdGenerator.java:118) [error] at com.avaje.ebeaninternal.server.deploy.BeanDescriptor.nextId(BeanDescriptor.java:1218) [error] at com.avaje.ebeaninternal.server.persist.DefaultPersister.setIdGenValue(DefaultPersister.java:1304) [error] at com.avaje.ebeaninternal.server.persist.DefaultPersister.insert(DefaultPersister.java:403) [error] at com.avaje.ebeaninternal.server.persist.DefaultPersister.saveEnhanced(DefaultPersister.java:345) [error] at com.avaje.ebeaninternal.server.persist.DefaultPersister.saveRecurse(DefaultPersister.java:315) [error] at com.avaje.ebeaninternal.server.persist.DefaultPersister.save(DefaultPersister.java:282) [error] at com.avaje.ebeaninternal.server.core.DefaultServer.save(DefaultServer.java:1577) [error] at com.avaje.ebeaninternal.server.core.DefaultServer.save(DefaultServer.java:1567) [error] at com.avaje.ebean.Ebean.save(Ebean.java:538) [error] at play.db.ebean.Model.save(Model.java:76) [error] at test.ModelTest$1.run(ModelTest.java:24) [error] at play.test.Helpers.running(Helpers.java:277) [error] at test.ModelTest.checkThatIndustriesExist(ModelTest.java:21) [error] ... 
[error] Caused by: org.h2.jdbc.JdbcSQLException: Syntax Fehler in SQL Befehl "SELECT INDUSTRY_SEQ.NEXTVAL UNION[*] SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL "; erwartet "identifier" [error] Syntax error in SQL statement "SELECT INDUSTRY_SEQ.NEXTVAL UNION[*] SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL UNION SELECT INDUSTRY_SEQ.NEXTVAL "; expected "identifier"; SQL statement: [error] select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval union select industry_seq.nextval [42001-158] [error] at org.h2.message.DbException.getJdbcSQLException(DbException.java:329) [error] at org.h2.message.DbException.get(DbException.java:169) [error] at org.h2.message.DbException.getSyntaxError(DbException.java:194) [error] at org.h2.command.Parser.readColumnIdentifier(Parser.java:2777) [error] at org.h2.command.Parser.readTermObjectDot(Parser.java:2336) [error] at org.h2.command.Parser.readTerm(Parser.java:2453) [error] at org.h2.command.Parser.readFactor(Parser.java:2035) [error] at org.h2.command.Parser.readSum(Parser.java:2022) [error] at org.h2.command.Parser.readConcat(Parser.java:1995) [error] at org.h2.command.Parser.readCondition(Parser.java:1860) [error] at org.h2.command.Parser.readAnd(Parser.java:1841) [error] at org.h2.command.Parser.readExpression(Parser.java:1833) [error] at org.h2.command.Parser.parseSelectSimpleSelectPart(Parser.java:1746) [error] at org.h2.command.Parser.parseSelectSimple(Parser.java:1778) [error] at org.h2.command.Parser.parseSelectSub(Parser.java:1673) [error] at org.h2.command.Parser.parseSelectUnion(Parser.java:1518) [error] at org.h2.command.Parser.parseSelect(Parser.java:1506) [error] at org.h2.command.Parser.parsePrepared(Parser.java:405) [error] at org.h2.command.Parser.parse(Parser.java:279) [error] at org.h2.command.Parser.parse(Parser.java:251) [error] 
at org.h2.command.Parser.prepareCommand(Parser.java:217) [error] at org.h2.engine.Session.prepareLocal(Session.java:415) [error] at org.h2.engine.Session.prepareCommand(Session.java:364) [error] at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1119) [error] at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:71) [error] at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:267) [error] at com.jolbox.bonecp.ConnectionHandle.prepareStatement(ConnectionHandle.java:820) [error] at com.avaje.ebean.config.dbplatform.SequenceIdGenerator.getMoreIds(SequenceIdGenerator.java:193) [error] ... 80 more My model looks like this: @Entity @Table(name = "industry") public class Industry extends Model { @Id public Long id; public String name; public String shortname; // called in the view to trigger lazy-loading public String getName() { return name; } public static Finder<Long, Industry> find = new Finder<Long, Industry>(Long.class, Industry.class); } ... and finally the relevant part from my initial evolution: create table industry ( id bigint not null, name varchar(255), shortname varchar(255), constraint pk_industry primary key (id) } create sequence industry_seq start with 1000; Everything works fine running on my PostgreSQL DB, and from my point of view the code is not any different from the Play2.0 Computer Database Sample. I am happy for any help - thanks! Regards, Alex

    Read the article

  • iPad issue with a modal view: modal view label null after view controller is created

    - by iPhone Guy
    This is a weird issue. I have created a view controller with a nib file for my modal view. On that view there is a label, number and text view. When I create the view from the source view, I tried to set the label, but it shows that the label is null (0x0). Kinda weird... Any suggestions? Now lets look at the code (I put all of the code here because that shows more than I can just explain): The modal view controller - in IB the label is connected to the UILabel object: @implementation ModalViewController @synthesize delegate; @synthesize goalLabel, goalText, goalNumber; // Done button clicked - (void)dismissView:(id)sender { // Call the delegate to dismiss the modal view if ([delegate respondsToSelector:@selector(didDismissModalView: newText:)]) { NSNumber *tmpNum = goalNumber; NSString *tmpString = [[NSString alloc] initWithString:[goalText text]]; [delegate didDismissModalView:tmpNum newText:tmpString]; [tmpNum release]; [tmpString release]; } } - (void)cancelView:(id)sender { // Call the delegate to dismiss the modal view if ([delegate respondsToSelector:@selector(didCancelModalView)]) [delegate didCancelModalView]; } -(void) setLabelText:(NSString *)text { [goalLabel setText:text]; } /* // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if ((self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil])) { // Custom initialization } return self; } */ -(void) viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; // bring up the keyboard.... [goalText becomeFirstResponder]; } // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; // set the current goal number to -1 so we know none was set goalNumber = [NSNumber numberWithInt: -1]; // Override the right button to show a Done button // which is used to dismiss the modal view self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(dismissView:)] autorelease]; // and now for the cancel button self.navigationItem.leftBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemCancel target:self action:@selector(cancelView:)] autorelease]; self.navigationItem.title = @"Add/Update Goals"; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Overriden to allow any orientation. return YES; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; } @end And here is where the view controller is created, variables set, and displayed: - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // put a checkmark.... UITableViewCell *tmpCell = [tableView cellForRowAtIndexPath:indexPath]; [tmpCell setAccessoryType:UITableViewCellAccessoryCheckmark]; // this is where the popup is gonna popup! // ===> HEre We Go! 
// Create the modal view controller ModalViewController *mdvc = [[ModalViewController alloc] initWithNibName:@"ModalDetailView" bundle:nil]; // We are the delegate responsible for dismissing the modal view [mdvc setDelegate:self]; // Create a Navigation controller UINavigationController *navController = [[UINavigationController alloc] initWithRootViewController:mdvc]; // set the modal view type navController.modalPresentationStyle = UIModalPresentationFormSheet; // set the label for all of the goals.... if (indexPath.section == 0 && indexPath.row == 0) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Long Term Goal 1:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:1]]; } if (indexPath.section == 0 && indexPath.row == 1) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Long Term Goal 2:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:2]]; } if (indexPath.section == 0 && indexPath.row == 2) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Long Term Goal 3:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:3]]; } if (indexPath.section == 0 && indexPath.row == 3) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Long Term Goal 4:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:4]]; } if (indexPath.section == 1 && indexPath.row == 0) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Short Term Goal 1:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:5]]; } if (indexPath.section == 1 && indexPath.row == 1) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Short Term Goal 2:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:6]]; } if (indexPath.section == 1 && indexPath.row == 2) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Short Term Goal 3:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:7]]; } if (indexPath.section == 1 && indexPath.row == 3) { [mdvc setLabelText:[[[NSString alloc] initWithString:@"Short Term Goal 4:"] autorelease]]; [mdvc setGoalNumber:[NSNumber numberWithInt:8]]; } // show the navigation controller modally [self presentModalViewController:navController animated:YES]; // Clean up resources [navController release]; [mdvc release]; // ==> Ah... we are done... }

    Read the article

  • Is there a way to avoid putting the Perl version number into the "use lib" line for Perl modules in

    - by Kinopiko
    I am trying to install some Perl modules into a non-standard location, let's call it /non/standard/location. In the script which uses the module, it seems to be necessary to specify a long directory path including the version of Perl, like so: #!/usr/local/bin/perl use lib '/non/standard/location/lib/perl5/site_perl/5.8.9/'; use A::B; Is there any use lib or other statement which I can use which is not so long and verbose, and which does not include the actual version of Perl, in order that I don't have to go back and edit this out of the program if the version of Perl is upgraded?

    Read the article

  • JPQL: unknown state or association field (EclipseLink)

    - by Kawu
    I have an Employee entity which inherits from Person and OrganizationalUnit: OrganizationalUnit: @MappedSuperclass public abstract class OrganizationalUnit implements Serializable { @Id private Long id; @Basic( optional = false ) private String name; public Long getId() { return this.id; } public void setId( Long id ) { this.id = id; } public String getName() { return this.name; } public void setName( String name ) { this.name = name; } // ... } Person: @MappedSuperclass public abstract class Person extends OrganizationalUnit { private String lastName; private String firstName; public String getLastName() { return this.lastName; } public void setLastName( String lastName ) { this.lastName = lastName; } public String getFirstName() { return this.firstName; } public void setFirstName( String firstName ) { this.firstName = firstName; } /** * Returns names of the form "John Doe". */ @Override public String getName() { return this.firstName + " " + this.lastName; } @Override public void setName( String name ) { throw new UnsupportedOperationException( "Name cannot be set explicitly!" ); } /** * Returns names of the form "Doe, John". */ public String getFormalName() { return this.lastName + ", " + this.firstName; } // ... } Employee entity: @Entity @Table( name = "EMPLOYEES" ) @AttributeOverrides ( { @AttributeOverride( name = "id", column = @Column( name = "EMPLOYEE_ID" ) ), @AttributeOverride( name = "name", column = @Column( name = "LASTNAME", insertable = false, updatable = false ) ), @AttributeOverride( name = "firstName", column = @Column( name = "FIRSTNAME" ) ), @AttributeOverride( name = "lastName", column = @Column( name = "LASTNAME" ) ), } ) @NamedQueries ( { @NamedQuery( name = "Employee.FIND_BY_FORMAL_NAME", query = "SELECT emp " + "FROM Employee emp " + "WHERE emp.formalName = :formalName" ) } ) public class Employee extends Person { @Column( name = "EMPLOYEE_NO" ) private String nbr; // lots of other stuff... } I then attempted to find an employee by its formal name, e.g. "Doe, John" using the query above: SELECT emp FROM Employee emp WHERE emp.formalName = :formalName However, this gives me an exception on deploying to EclipseLink: Exception while preparing the app : Exception [EclipseLink-8030] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.JPQLException Exception Description: Error compiling the query [Employee.FIND_BY_CLIENT_AND_FORMAL_NAME: SELECT emp FROM Employee emp JOIN FETCH emp.client JOIN FETCH emp.unit WHERE emp.client.id = :clientId AND emp.formalName = :formalName], line 1, column 115: unknown state or association field [formalName] of class [de.bnext.core.common.entity.Employee]. Local Exception Stack: Exception [EclipseLink-8030] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.JPQLException Exception Description: Error compiling the query [Employee.FIND_BY_CLIENT_AND_FORMAL_NAME: SELECT emp FROM Employee emp JOIN FETCH emp.client JOIN FETCH emp.unit WHERE emp.client.id = :clientId AND emp.formalName = :formalName], line 1, column 115: unknown state or association field [formalName] of class [de.bnext.core.common.entity.Employee]. Qs: What's wrong? Is it prohibited to use "artificial" properties in JPQL, here the WHERE clause? What are the premises here? I checked the capitalization and spelling many times, I'm out of luck.

    Read the article

  • Parsing concatenated, non-delimited XML messages from TCP-stream using C#

    - by thaller
    I am trying to parse XML messages which are sent to my C# application over TCP. Unfortunately, the protocol cannot be changed, the XML messages are not delimited, and no length prefix is used. Moreover, the character encoding is not fixed, but each message starts with an XML declaration <?xml>. The question is: how can I read one XML message at a time, using C#? Up to now, I have tried to read the data from the TCP stream into a byte array and use it through a MemoryStream. The problem is, the buffer might contain more than one XML message, or the first message may be incomplete. In these cases, I get an exception when trying to parse it with XmlReader.Read or XmlDocument.Load, but unfortunately the XmlException does not really allow me to distinguish the problem (except by parsing the localized error string). I tried using XmlReader.Read and counting the number of Element and EndElement nodes. That way I know when I am finished reading the first entire XML message. However, there are several problems. If the buffer does not yet contain the entire message, how can I distinguish that XmlException from an actually invalid, non-well-formed message? In other words, if an exception is thrown before reading the first root EndElement, how can I decide whether to abort the connection with an error, or to collect more bytes from the TCP stream? If no exception occurs, the XmlReader is positioned at the start of the root EndElement. Casting the XmlReader to IXmlLineInfo gives me the current LineNumber and LinePosition; however, it is not straightforward to get the byte position where the EndElement really ends. In order to do that, I would have to convert the byte array into a string (with the encoding specified in the XML declaration), seek to LineNumber/LinePosition and convert that back to the byte offset. I tried to do that with StreamReader.ReadLine, but the stream reader gives no public access to the current byte position. All this seems very inelegant and not robust. I wonder if you have ideas for a better solution. Thank you. EDIT: I looked around and think that the situation is as follows (I might be wrong, corrections are welcome): I found no method by which the XmlReader can continue parsing a second XML message (at least not if the second message has an XmlDeclaration). XmlTextReader.ResetState could do something similar, but for that I would have to assume the same encoding for all messages. Therefore I could not connect the XmlReader directly to the TcpStream. After closing the XmlReader, the buffer is not positioned at the reader's last position. So it is not possible to close the reader and use a new one to continue with the next message. I guess the reason for this is that the reader could not successfully seek on every possible input stream. When the XmlReader throws an exception, it cannot be determined whether it happened because of a premature EOF or because of non-well-formed XML. XmlReader.EOF is not set in case of an exception. As a workaround I derived my own MemoryBuffer, which returns the very last byte as a single byte. This way I know that the XmlReader was really interested in the last byte, and the following exception is likely due to a truncated message (this is kinda sloppy, in that it might not detect every non-well-formed message; however, after appending more bytes to the buffer, sooner or later the error will be detected). I could cast my XmlReader to the IXmlLineInfo interface, which gives access to the LineNumber and the LinePosition of the current node. 
So after reading the first message I remember these positions and use it to truncate the buffer. Here comes the really sloppy part, because I have to use the character encoding to get the byte position. I am sure you could find test cases for the code below where it breaks (e.g. internal elements with mixed encoding). But up to now it worked for all my tests. The parser class follows here -- may it be useful (I know, its very far from perfect...) class XmlParser { private byte[] buffer = new byte[0]; public int Length { get { return buffer.Length; } } // Append new binary data to the internal data buffer... public XmlParser Append(byte[] buffer2) { if (buffer2 != null && buffer2.Length > 0) { // I know, its not an efficient way to do this. // The EofMemoryStream should handle a List<byte[]> ... byte[] new_buffer = new byte[buffer.Length + buffer2.Length]; buffer.CopyTo(new_buffer, 0); buffer2.CopyTo(new_buffer, buffer.Length); buffer = new_buffer; } return this; } // MemoryStream which returns the last byte of the buffer individually, // so that we know that the buffering XmlReader really locked at the last // byte of the stream. // Moreover there is an EOF marker. private class EofMemoryStream: Stream { public bool EOF { get; private set; } private MemoryStream mem_; public override bool CanSeek { get { return false; } } public override bool CanWrite { get { return false; } } public override bool CanRead { get { return true; } } public override long Length { get { return mem_.Length; } } public override long Position { get { return mem_.Position; } set { throw new NotSupportedException(); } } public override void Flush() { mem_.Flush(); } public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); } public override void SetLength(long value) { throw new NotSupportedException(); } public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); } public override int Read(byte[] buffer, int offset, int count) { count = Math.Min(count, Math.Max(1, (int)(Length - Position - 1))); int nread = mem_.Read(buffer, offset, count); if (nread == 0) { EOF = true; } return nread; } public EofMemoryStream(byte[] buffer) { mem_ = new MemoryStream(buffer, false); EOF = false; } protected override void Dispose(bool disposing) { mem_.Dispose(); } } // Parses the first xml message from the stream. // If the first message is not yet complete, it returns null. // If the buffer contains non-wellformed xml, it ~should~ throw an exception. // After reading an xml message, it pops the data from the byte array. public Message deserialize() { if (buffer.Length == 0) { return null; } Message message = null; Encoding encoding = Message.default_encoding; //string xml = encoding.GetString(buffer); using (EofMemoryStream sbuffer = new EofMemoryStream (buffer)) { XmlDocument xmlDocument = null; XmlReaderSettings settings = new XmlReaderSettings(); int LineNumber = -1; int LinePosition = -1; bool truncate_buffer = false; using (XmlReader xmlReader = XmlReader.Create(sbuffer, settings)) { try { // Read to the first node (skipping over some element-types. // Don't use MoveToContent here, because it would skip the // XmlDeclaration too... while (xmlReader.Read() && (xmlReader.NodeType==XmlNodeType.Whitespace || xmlReader.NodeType==XmlNodeType.Comment)) { }; // Check for XML declaration. // If the message has an XmlDeclaration, extract the encoding. 
switch (xmlReader.NodeType) { case XmlNodeType.XmlDeclaration: while (xmlReader.MoveToNextAttribute()) { if (xmlReader.Name == "encoding") { encoding = Encoding.GetEncoding(xmlReader.Value); } } xmlReader.MoveToContent(); xmlReader.Read(); break; } // Move to the first element. xmlReader.MoveToContent(); // Read the entire document. xmlDocument = new XmlDocument(); xmlDocument.Load(xmlReader.ReadSubtree()); } catch (XmlException e) { // The parsing of the xml failed. If the XmlReader did // not yet look at the last byte, it is assumed that the // XML is invalid and the exception is re-thrown. if (sbuffer.EOF) { return null; } throw e; } { // Try to serialize an internal data structure using XmlSerializer. Type type = null; try { type = Type.GetType("my.namespace." + xmlDocument.DocumentElement.Name); } catch (Exception e) { // No specialized data container for this class found... } if (type == null) { message = new Message(); } else { // TODO: reuse the serializer... System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(type); message = (Message)ser.Deserialize(new XmlNodeReader(xmlDocument)); } message.doc = xmlDocument; } // At this point, the first XML message was sucessfully parsed. // Remember the lineposition of the current end element. IXmlLineInfo xmlLineInfo = xmlReader as IXmlLineInfo; if (xmlLineInfo != null && xmlLineInfo.HasLineInfo()) { LineNumber = xmlLineInfo.LineNumber; LinePosition = xmlLineInfo.LinePosition; } // Try to read the rest of the buffer. // If an exception is thrown, another xml message appears. // This way the xml parser could tell us that the message is finished here. // This would be prefered as truncating the buffer using the line info is sloppy. try { while (xmlReader.Read()) { } } catch { // There comes a second message. Needs workaround for trunkating. truncate_buffer = true; } } if (truncate_buffer) { if (LineNumber < 0) { throw new Exception("LineNumber not given. Cannot truncate xml buffer"); } // Convert the buffer to a string using the encoding found before // (or the default encoding). string s = encoding.GetString(buffer); // Seek to the line. int char_index = 0; while (--LineNumber > 0) { // Recognize \r , \n , \r\n as newlines... char_index = s.IndexOfAny(new char[] {'\r', '\n'}, char_index); // char_index should not be -1 because LineNumber>0, otherwise an RangeException is // thrown, which is appropriate. char_index++; if (s[char_index-1]=='\r' && s.Length>char_index && s[char_index]=='\n') { char_index++; } } char_index += LinePosition - 1; var rgx = new System.Text.RegularExpressions.Regex(xmlDocument.DocumentElement.Name + "[ \r\n\t]*\\>"); System.Text.RegularExpressions.Match match = rgx.Match(s, char_index); if (!match.Success || match.Index != char_index) { throw new Exception("could not find EndElement to truncate the xml buffer."); } char_index += match.Value.Length; // Convert the character offset back to the byte offset (for the given encoding). int line1_boffset = encoding.GetByteCount(s.Substring(0, char_index)); // remove the bytes from the buffer. buffer = buffer.Skip(line1_boffset).ToArray(); } else { buffer = new byte[0]; } } return message; } }
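    If the line/column truncation in the parser class above proves too fragile, a simpler splitting strategy (my own suggestion, not part of the original post) is to lean on the one delimiter the protocol does provide: every message begins with an XML declaration. Scanning the accumulated bytes for the next "<?xml" marker yields complete candidate messages without parsing at all, assuming "<?xml" never appears inside a message body (e.g. in CDATA); the marker is plain ASCII in UTF-8 and the ISO-8859 encodings, although a UTF-16 stream would need a different probe. A rough sketch, with hypothetical names:

    using System;
    using System.Collections.Generic;
    using System.Text;

    static class XmlMessageSplitter
    {
        static readonly byte[] Marker = Encoding.ASCII.GetBytes("<?xml");

        // Returns every segment that is followed by another "<?xml" marker and
        // leaves the trailing (possibly incomplete) segment in 'buffer'.
        public static List<byte[]> Split(ref byte[] buffer)
        {
            var messages = new List<byte[]>();
            int start = 0, next;
            while ((next = IndexOf(buffer, Marker, start + 1)) >= 0)
            {
                var msg = new byte[next - start];
                Array.Copy(buffer, start, msg, 0, msg.Length);
                messages.Add(msg);
                start = next;
            }
            if (start > 0)
            {
                var rest = new byte[buffer.Length - start];
                Array.Copy(buffer, start, rest, 0, rest.Length);
                buffer = rest;
            }
            return messages;
        }

        // Naive byte-pattern search; sufficient for the short "<?xml" marker.
        static int IndexOf(byte[] haystack, byte[] needle, int from)
        {
            for (int i = from; i <= haystack.Length - needle.Length; i++)
            {
                bool hit = true;
                for (int j = 0; j < needle.Length && hit; j++)
                    hit = haystack[i + j] == needle[j];
                if (hit) return i;
            }
            return -1;
        }
    }

    A segment is only known to be complete once the next declaration (or the end of the connection) shows up, so the tail stays buffered until then; each returned byte array can still be fed to XmlDocument.Load exactly as in the deserialize method above.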

    Read the article

  • c# Network Programming - HTTPWebRequest Scraping

    - by masterguru
    Hi, I am building a web scraping application. It should scrape a complex web site with concurrent HttpWebRequests from a single host to a single target web server. The application should run on Windows Server 2008. A single HttpWebRequest for data could take from 1 minute to 4 minutes to complete (because of long-running DB operations). I should have at least 100 parallel requests to the target web server, but I have noticed that when I use more than 2-3 long-running requests I have big performance issues (request timeouts/hanging). How many concurrent requests can I have in this scenario from a single host to a single target web server? Can I use thread pools in the application to run parallel HttpWebRequests to the server? Will I have any issues with the default outbound HTTP connection/request limits? What about request timeouts when I reach the outbound connection limits? What would be the best setup for my scenario? Any help would be appreciated. Thanks
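    For what it's worth, the limit that usually bites first in this scenario is the per-host outbound connection cap (ServicePointManager.DefaultConnectionLimit, which defaults to 2 for client applications), together with request timeouts that are shorter than a 1-4 minute server round trip. The sketch below shows the knobs involved; the URL and the numbers are placeholders of mine, not recommendations from the question.

    using System;
    using System.Net;
    using System.Threading.Tasks;

    class ScraperSetupSketch
    {
        static void Main()
        {
            // Allow ~100 concurrent connections to the single target host
            // (the default of 2 effectively serializes everything).
            ServicePointManager.DefaultConnectionLimit = 100;

            Parallel.For(0, 100, new ParallelOptions { MaxDegreeOfParallelism = 100 }, i =>
            {
                var request = (HttpWebRequest)WebRequest.Create("http://target.example/page?id=" + i);
                request.Timeout = 5 * 60 * 1000;          // wait up to 5 minutes for the response headers
                request.ReadWriteTimeout = 5 * 60 * 1000; // and for reading the (slow) body
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    // ... read and parse the page here ...
                }
            });
        }
    }

    Because each synchronous GetResponse call parks a thread-pool thread for minutes, genuinely running 100 requests at once is better served by the asynchronous BeginGetResponse/EndGetResponse pattern; the synchronous loop above is only the shortest way to show the connection limit and timeout settings.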

    Read the article

  • JLabel animation in JPanel

    - by Trizicus
    After scratching around I found that it's best to implement a custom image component by extending a JLabel. So far that has worked great as I can add multiple "images" (jlabels without the layout breaking. I just have a question that I hope someone can answer for me. I noticed that in order to animate JLabels across the screen I need to setlayout(null); and setbounds of the component and then to animate eventually setlocation(x,y);. Is this a best practice or a terrible way to animate a component? I plan on eventually making an animation class but I don't want to do so and end up having to chuck it. I have included relevant code for a quick review check. import java.awt.Color; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.JPanel; import javax.swing.Timer; public class GraphicsPanel extends JPanel { private Timer timer; private long startTime = 0; private int numFrames = 0; private float fps = 0.0f; private int x = 0; GraphicsPanel() { final Entity ent1 = new Entity(); ent1.setBounds(x, 0, ent1.getWidth(), ent1.getHeight()); add(ent1); //ESSENTIAL setLayout(null); //GAMELOOP timer = new Timer(30, new ActionListener() { public void actionPerformed(ActionEvent e) { getFPS(); incX(); ent1.setLocation(x, 0); repaint(); } }); timer.start(); } public void incX() { x++; } @Override public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2 = (Graphics2D) g.create(); g2.setClip(0, 0, getWidth(), getHeight()); g2.setColor(Color.BLACK); g2.drawString("FPS: " + fps, 1, 15); } public void getFPS() { ++numFrames; if (startTime == 0) { startTime = System.currentTimeMillis(); } else { long currentTime = System.currentTimeMillis(); long delta = (currentTime - startTime); if (delta > 1000) { fps = (numFrames * 1000) / delta; numFrames = 0; startTime = currentTime; } } } } Thank you!

    Read the article

  • the scope of a pointer ???

    - by numerical25
    OK, so I did find some questions that were almost similar, but they actually confused me even more about pointers. http://stackoverflow.com/questions/2715198/c-pointer-objects-vs-non-pointer-objects-closed In the link above, they say that if you declare a pointer it is actually saved on the heap and not on the stack, regardless of where it was declared. Is this true? Or am I misunderstanding? I thought that, pointer or non-pointer, if it's a global variable it lives as long as the application. If it's a local variable or declared within a loop or function, its life is only as long as the code within it.

    Read the article

  • Dynamically change the width of Datagrid column in FLEX.

    - by user120118
    Hi, can we change the width of a DataGrid column dynamically by clicking on the border of the column, in order to display a complete string that is too long to be displayed and needs to be scrolled? If so, how? Also, how can we ensure that the column width changes dynamically based on the number of characters (the length of the string), since often the data is too long to be displayed? Can we set the column width to take the length of the data into consideration before displaying it in the DataGrid?

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >