Search Results

Search found 1897 results on 76 pages for 'beat detection'.


  • Convert a PDF eBook to ePub Format

    - by Matthew Guay
    Would you like to read a PDF eBook on an eReader or mobile device, but aren’t happy with the performance? Here’s how you can convert your PDFs to the popular ePub format so you can easily read them on any device. PDFs are a popular format for eBooks since they render the same on any device and can preserve the exact layout of the print book. However, this benefit is their major disadvantage on mobile devices, as you often have to zoom and pan back and forth to see everything on the page. ePub files, on the other hand, are an increasingly popular option. They can reflow to fill your screen instead of sticking to a strict layout. With the free Calibre program, you can quickly convert your PDF eBooks to ePub format.

    Getting Started
    Download the Calibre installer (link below) for your operating system, and install as normal. Calibre works on recent versions of Windows, OS X, and Linux. The Calibre installer is very streamlined, so the install process was quite quick. Calibre is a great application for organizing your eBooks. It can automatically sort your books by their metadata, and even display their covers in a Coverflow-style viewer. To add an eBook to your library, simply drag-and-drop the file into the Calibre window, or click Add books at the top. Here you can choose to add all the books from a folder and more. Calibre will then add the book(s) to your library, import the associated metadata, and organize them in the catalog.

    Convert your Books
    Once you’ve imported your books into Calibre, it’s time to convert them to the format you want. Select the book or books you want to convert, and click Convert E-books. Select whether you want to convert them individually or bulk convert them. The converter window has lots of options, so you can get your ePub book exactly the way you want it. You can simply click OK and go with the defaults, or you can tweak the settings. Do note that the conversion will only work successfully with PDFs that contain actual text. Some PDFs are actually images scanned in from the original books; these will appear just like the PDF after the conversion, and won’t be any easier to read. On the first tab, you’ll notice that Calibre will repopulate most of the metadata fields with info from your PDF. It will also use the first page of the PDF as the cover. Edit any of the information that may be incorrect, and add any additional information you want associated with the book. If you want to convert your eBook to a format other than ePub, Calibre’s got you covered, too. On the top right, you can choose to output the converted eBook into many different file formats, including the Kindle-friendly MOBI format. One other important settings page is the Structure Detection tab. Here you can choose to have it remove headers and footers in the converted book, as well as automatically detect chapter breaks. Click OK when you’ve finished choosing your settings and Calibre will convert the book. This may take a few minutes, depending on the size of the PDF. If the conversion seems to be taking too long, you can click Show job details for more information on the progress. The conversion usually works well, but we did have one job freeze on us. When we checked the job details, it indicated that the PDF was copy-protected. Most PDF eBooks, however, worked fine. Now, back in the main Calibre window, select your book and save it to disk.
You can choose to save only the EPUB format, or you can select Save to disk to save all formats of the book to your computer. You can also view the ePub file directly in Calibre’s built-in eBook viewer. This is the PDF book we converted, and it looks fairly good in the converted format. It does have some odd line breaks and some misplaced numbers, but on the whole, the converted book is much easier to read, especially on small mobile devices. Even images get included inline, so you shouldn’t be missing anything from the original eBook.

    Conclusion
    Calibre makes it simple to read your eBooks in any format you need. It is a project in constant development, and regular updates add stability and features. Whether you want to read your PDF eBooks on a Sony Reader, Kindle, netbook or smartphone, your books will now be more accessible than ever. And with thousands of free PDF eBooks out there, you’ll be sure to always have something to read. If you’d like some geeky PDF eBooks, Microsoft Press is offering a number of free PDF eBooks right now. Check them out at this link (account required). Download the Calibre eBook program.

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure, and end users cannot change it. They are simple for end users to consume. Reports can use real-time data or snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users, and any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures; however, the development is more complex because a developer is needed to create these semantic models. For OLAP, you typically use specialized database management systems and get lightning-fast analyses. End users can use rich and thin clients to interactively change the structure of the report, typically graphically. However, the development of an OLAP system is often quite complex: it involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process. With data mining, you do not select a structure in advance; the algorithms search for the structure in the data. As a result, data mining can give you the most valuable results, because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes. Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.

    Books
    Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
    Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
    Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools.
    Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both a business and a technical perspective.
    Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
    Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
Jiawei Han and Micheline Kamber: Data Mining: Concepts and Techniques – in-depth explanation of the most popular data mining algorithms.
    Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
    Paolo Giudici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.

    Forthcoming presentations
    I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:
    Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
    Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
    Sinergija 2013, Belgrade, Serbia - Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
    SQL Rally Amsterdam, Netherlands - Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel. Why three different titles for the same presentation? I don’t know; I guess I forgot the name I proposed every time right after I sent the proposal.

    Courses
    Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary, or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well. OK, now you know: no more excuses. Start learning data mining and get the most out of your data.

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP’s streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that. For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.

    Application Input
    Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache of all credit card transactions. The cache is configured in the EPN as shown below:

        <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
        <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                     key-properties="cardID, transactionTime"
                     caching-system="CohCacheSystem">
        </wlevs:cache>

    Application Output
    The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. Source for the cardHistory property can be seen here.

        <wlevs:event-type type-name="FraudWarningEvent">
            <wlevs:properties type="tuple">
                <wlevs:property name="cardID" type="CHAR"/>
                <wlevs:property name="transactionTime" type="BIGINT"/>
                <wlevs:property name="transactionAmount" type="DOUBLE"/>
                <wlevs:property name="cardHistory" type="OBJECT"/>
            </wlevs:properties>
        </wlevs:event-type>

    Cache Data Aggregation using a Java Cartridge
    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence’s query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below:

        <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
            <property name="cache" ref="CCTransactionsCache"/>
        </bean>

    This is used by the Java class to set a static property:

        public void setCache(Map cache)
        {
            s_cache = (NamedCache) cache;
        }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more about using Coherence's API to query a cache, please refer to the Coherence Developer’s Guide.
        public static CardHistoryData execute(String cardID)
        {
            …
            Filter filter = QueryHelper.createFilter(
                "cardID = :cardID and transactionTime >= :transactionTime", map);
            CardHistoryData history = new CardHistoryData();
            Double sum = (Double) s_cache.aggregate(filter,
                new DoubleSum("getTransactionAmount"));
            history.setTotalAmount(sum);
            …
            return history;
        }

    The Java cartridge method is used from CQL as seen below:

        select cardID, transactionTime, transactionAmount,
               CCTransactionsAggregator.execute(cardID) as cardHistory
        from inputChannel
        where transactionAmount > 1000

    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream. A fuller sketch of the aggregator class follows below for readers who cannot reach the linked source.
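    For reference, here is a minimal, hypothetical sketch of what the complete cartridge class might look like, based only on the snippets above. The accessor names setMaxAmount and setTransactionCount on CardHistoryData and the 30-day cutoff constant are assumptions of mine, not taken from the sample's actual source.

        import java.util.HashMap;
        import java.util.Map;

        import com.tangosol.net.NamedCache;
        import com.tangosol.util.Filter;
        import com.tangosol.util.QueryHelper;
        import com.tangosol.util.aggregator.Count;
        import com.tangosol.util.aggregator.DoubleMax;
        import com.tangosol.util.aggregator.DoubleSum;

        // Hypothetical reconstruction of the Java cartridge class called from CQL.
        public class CCTransactionsAggregator {

            private static NamedCache s_cache;

            // Injected from the Spring context file shown above.
            public void setCache(Map cache) {
                s_cache = (NamedCache) cache;
            }

            public static CardHistoryData execute(String cardID) {
                // Only consider transactions for this card over the past 30 days (assumed window).
                long cutoff = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;

                Map<String, Object> params = new HashMap<String, Object>();
                params.put("cardID", cardID);
                params.put("transactionTime", cutoff);

                Filter filter = QueryHelper.createFilter(
                        "cardID = :cardID and transactionTime >= :transactionTime", params);

                // Run Coherence aggregators against the matching cache entries.
                CardHistoryData history = new CardHistoryData();
                history.setTotalAmount((Double) s_cache.aggregate(
                        filter, new DoubleSum("getTransactionAmount")));
                history.setMaxAmount((Double) s_cache.aggregate(           // assumed setter
                        filter, new DoubleMax("getTransactionAmount")));
                history.setTransactionCount((Integer) s_cache.aggregate(   // assumed setter
                        filter, new Count()));
                return history;
            }
        }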

    Read the article

  • Adventures in Windows 8: Understanding and debugging design time data in Expression Blend

    - by Laurent Bugnion
    One of my favorite features in Expression Blend is the ability to attach a Visual Studio debugger to Blend. First let’s start by answering the question: why exactly would you want to do that? Note: If you are familiar with the creation and usage of design time data, feel free to scroll down to the paragraph titled “When design time data fails”.

    Creating design time data for your app
    When a designer works on an app, he needs to see something to design. For “static” UI such as buttons, backgrounds, etc., the user interface elements are going to show up in Blend just fine. If, however, the data is fetched dynamically from a service (web, database, etc.) or created dynamically, Blend will most probably show just an empty element. The classical way to design at that stage is to run the application, navigate to the screen that is under construction (which can involve delays, the need to log in, etc.), measure what is on the screen (colors, margins, width and height, etc.) using various tools, go back to Blend, edit the properties of the elements, run again, and so on. Obviously this is not ideal. The solution is to create design time data. For more information about the creation of design time data by mocking services, you can refer to two talks of mine, “Deep dive MVVM” and “MVVM Applied From Silverlight to Windows Phone to Windows 8”. The source code for these talks is here and here.

    Design time data in MVVM Light
    One of the main reasons why I developed MVVM Light is to facilitate the creation of design time data. To illustrate this, let’s create a new MVVM Light application in Visual Studio.
    Install MVVM Light from here: http://mvvmlight.codeplex.com (use the MSI in the Download section). After installing, make sure to read the Readme that opens up in your favorite browser; you will need one more step to install the Project Templates.
    Start Visual Studio 2012.
    Create a new MvvmLight (Win8) app.
    Run the application. You will see a string showing “Welcome to MVVM Light”.
    In the Solution Explorer, right click on MainPage.xaml and select Open in Blend. Now you should see “Welcome to MVVM Light [Design]”.
    What happens here is that Expression Blend runs different code at design time than the application runs at runtime. To do this, we use design-time detection (as explained in a previous article) and use that information to initialize a different data service at design time. To understand this better, open the ViewModelLocator.cs file in the ViewModel folder and see how the DesignDataService is used at design time, while the DataService is used at runtime. In a real-life application, DataService would be used to connect to a web service, for instance.

    When design time data fails
    Sometimes however, the creation of design time data fails. It can be very difficult to understand exactly what is happening, and Expression Blend does not give a lot of information about what went wrong. Thankfully, we can use a trick: attaching a debugger to Expression Blend and debugging the design time code. In WPF and Silverlight (including Windows Phone 7), you could simply attach the debugger to Blend.exe (using the “Managed (v4.5, v4.0) code” option even for Silverlight!!). In Windows 8 however, things are just a bit different. This is because the designer that renders the actual representation of the Windows 8 app runs in its own process. Let’s illustrate that:
    Open the file DesignDataService in the Design folder.
Modify the GetData method to look like this:

        public void GetData(Action<DataItem, Exception> callback)
        {
            throw new Exception();

            // Use this to create design time data
            var item = new DataItem("Welcome to MVVM Light [design]");
            callback(item, null);
        }

    Go to Blend and build the application. The build succeeds, but now the page is empty. The creation of the design time data failed, but we don’t get a warning message. We need to investigate what’s wrong.
    Close MainPage.xaml.
    Go to Visual Studio and select the menu Debug, Attach to Process. Update: Make sure that you select “Managed (v4.5, v4.0) code” in the “Attach to” field.
    Find the process named XDesProc.exe. You should have at least two: one for the Visual Studio 2012 designer surface, and one for Expression Blend. Unfortunately in this screen it is not obvious which is which. Let’s find out in the Task Manager.
    Press Ctrl-Alt-Del and select Task Manager.
    Go to the Details tab and sort the processes by name. Find the one that says “Blend for Microsoft Visual Studio 2012 XAML UI Designer” and write down the process ID.
    Go back to the Attach to Process dialog in Visual Studio. Sort the processes by ID and attach the debugger to the correct instance of XDesProc.exe.
    Open the MainViewModel (in the ViewModel folder).
    Place a breakpoint on the first line of the MainViewModel constructor.
    Go to Blend and open MainPage.xaml again.
    At this point, the debugger breaks in Visual Studio and you can execute your code step by step. Simply step inside the data service call, and find the exception that you had placed there. Visual Studio gives you additional information which helps you to solve the issue.

    More info and Conclusion
    I want to thank the amazing people on the Expression Blend team for being very fast in guiding me in that matter and encouraging me to blog about it. More information about the XDesProc.exe process can be found here. I had to work on a Windows 8 app for a few days without design time data because of an exception thrown somewhere in the code, and it was really painful. With the debugger, finding the issue was a simple matter of stepping into the code until it threw the exception.

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Summit (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22-25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented there. (Photo: Pluralsight Shades)
    This was a great event and I had fantastic fun presenting here. I was also very excited that many of my friends were presenting at the event along with me. I want to thank all of you for attending my sessions and giving me a standing-room-only crowd every single time. I have already sent resources in my newsletter; you can sign up for the newsletter over here. (Photo: Indexing is an Art)
    I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and performance tuning. (Photo: Audience at GIDS) I believe events like this provide a great platform to meet and share knowledge. (Photo: Pinal at Pluralsight Booth)
    Here are the abstracts of the sessions I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

    Indexes, the Unsung Hero (Relevant Pluralsight Course)
    Slow running queries are the most common problem developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate indexes, redundant indexes and missing indexes, as well as best practices around indexes.

    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (Relevant Pluralsight Course)
    Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and memory. In this session we will focus on detecting high-CPU scenarios and their resolutions. If time permits we will cover other performance related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance tuning and the best practices associated with it. We will discuss performance tuning in this session with the help of demos.
(Photo: Pinal Dave at GIDS)

    MySQL Performance Tuning – Unexplored Territory (Relevant Pluralsight Course)
    Performance is one of the most essential aspects of any application. Everyone wants their servers to perform optimally and at their best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance related tips and tricks. At the end of this session, attendees will not only have a clear idea, but also carry home action items regarding what to do when facing any of the above resource intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance tuning and the best practices associated with it. You will also witness some impressive performance tuning demos in this session.

    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (Relevant Pluralsight Course)
    SQL Trio Session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done an amazing job of making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child’s play to some, the reality is far from this notion. Though the basics and fundamentals are simple and uniform across databases, understanding the behavior and the nuts and bolts of SQL Server is something we need to master over a period of time. With a collective experience of more than 30 years with databases amongst the speakers, we will try to take a unique tour of various aspects of SQL Server and bring you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, and productivity tips on tools and more. This is a highly demo-filled session for practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server.
    I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite, Govind Kanshi.

    Summary
    If you missed this event, here are two action items: 1) sign up for the resource newsletter, and 2) watch my video courses on Pluralsight.
    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Solaris 11 Live CD-based installation

    - by AndrasF
    Instead of the two installation procedures promised in the previous part, in this post I have to confine myself to the Live CD variant. I had not expected that demonstrating even this would require more than 50 screenshots, so I had to change my earlier plan. The Solaris 11 Live CD installation primarily targets the needs of desktop users and is supported exclusively on x86-architecture machines (despite the fact that SPARC systems also have graphics cards - e.g. the T4-1). The process can be split into two parts: first the guest machine is set up in a VirtualBox environment, and then Solaris 11 is installed on the virtual machine.

    HCL and helper tools (DDT, DDU)
    Before installing the Solaris operating system, it is worth checking whether our physical system is supported. The hardware compatibility list (HCL) already mentioned is well suited for this, as are the following two tools: Device Detection Tool and Device Driver Utility. Both applications can survey the hardware components of our system and check their driver coverage. They differ in that the DDT requires Java to run, whereas the DDU requires Solaris. The latter will be mentioned briefly during the installation.

    Where to download the installation media
    Unless we do a network installation (*), we need installation media, which can be downloaded from the page below. It is worth downloading all three files, as well as the so-called repository medium containing the packages (the last item in the following list): sol-11-1111-live-x86.iso, sol-11-1111-text-x86.iso, sol-11-1111-ai-x86.iso, sol-11-1111-repo-full.iso. The first three variants are also available as bootable USB images - in that case the file names end in usb instead of iso. A brief note on the purpose of each image can be found in the previous blog entry (link). If we want to install on a SPARC-architecture system, we will need the files containing 'sparc' instead of 'x86'. (*) - it is also possible to perform the installation over the network by booting from the AI image. This matters when the target machine has no PXE (Preboot Execution Environment) boot support.

    Configuring VirtualBox
    Solaris 11 can also be used as a guest in a virtual environment, without dedicating a separate physical machine to it. VirtualBox offers a convenient way to do this. The installer or package appropriate for our host (Windows, Unix, Linux) - currently version 4.1.16 is the latest - and the user manual, which also covers the installation, can be downloaded from the product page. After a successful installation, the following steps lead us to the new virtual machine:
    1. After starting VirtualBox, the main window shows our existing virtual machines (Sol11demo, Sol11u1b07, Sol11.1B16, Sun_ZFS_Storage_7000) and the main properties of the currently selected one (Sol11demo): name, memory size, list of virtual storage devices, and so on.
    2. Clicking the New button starts the wizard that creates a virtual machine.
    3. Next we have to name the guest machine and select the operating system type (if we use a descriptive name, VirtualBox can pick the operating system family itself, and we only have to set the version): enter Solaris11 as the name and choose the 64-bit variant (provided our host supports it).
    4. For installation and the initial steps, 1536 MB of memory is perfectly adequate (this can be changed later according to your needs).
    5. Like its physical counterparts, no virtual machine can exist without a hard disk (in this case a virtual disk). We can use an existing one (a file containing a virtual disk), or we can create a brand new instance. Let's stay with the latter (Create new hard disk).
    6. Of the possible formats - for simplicity's sake - let's go with the default type offered (VDI - VirtualBox Disk Image).
    7. During creation, the virtual disk can be allocated all at once (Fixed size) or grown dynamically in several steps (Dynamically allocated). The first variant puts far less load on the system, while the advantage of the second is that it saves space. Choose the fixed-size variant.
    8. Now only one piece of information remains unknown to VirtualBox: the size of the virtual disk to be created. An 8 GB area is suitable in this case for starting to get acquainted with the system.
    9. If every setting was entered correctly, pressing the Create button starts the creation of the virtual disk.
    10. Depending on the values given, this operation finishes within a few minutes.
    11. After a similar confirmation (pressing the Create button), the creation of the requested virtual machine begins as well.
    12. After successful completion, the new virtual machine immediately appears in the list of available virtual machines on the left-hand side of the main window.
    This blog post is being updated continuously... the remaining content of this part will appear on the page soon.

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 5

    - by MarkPearl
    Learning Outcomes
    Describe the operation of a memory cell
    Explain the difference between DRAM and SRAM
    Discuss the different types of ROM
    Explain the concepts of a hard failure and a soft error respectively
    Describe SDRAM organization

    Semiconductor Main Memory
    RAM is divided into two technologies: dynamic and static. The two traditional forms of RAM used in computers are DRAM and SRAM.
    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as a 1 or 0.
    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.
    SRAM vs. DRAM
    DRAM is simpler and smaller than SRAM. Thus it is more dense and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, thus SRAM is generally used for cache memory and DRAM is used for main memory.
    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce the costs of fabrication, we have PROMs. PROMs are written only once, are non-volatile, and are written after fabrication. Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely EPROM, EEPROM and flash memory.
    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories:
    Hard failure – a permanent physical defect so that the memory cell or cells cannot reliably store data
    Soft error – a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)
    Most modern main memory systems include logic for both detecting and correcting errors.
Error detection works as follows:
    When data is to be read into memory, a calculation is performed on the data to produce a code. Both the code and the data are stored. When the previously stored word is read out, the code is used to detect and possibly correct errors. The error checking provides one of three possible results:
    No errors are detected – the fetched data bits are sent out.
    An error is detected, and it is possible to correct the error. The data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out.
    An error is detected, but it is not possible to correct it. This condition is reported.
    Hamming Code
    See the wiki for a detailed explanation. We will probably need to know how to compute a Hamming code – refer to the textbook (pg. 188 – 189). A small illustrative sketch also follows at the end of these notes.
    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including SDRAM (Synchronous DRAM), DDR-DRAM and RDRAM.
    SDRAM (Synchronous DRAM)
    SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access. In burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism. SDRAM performs best when it is transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM or DDR-SDRAM that overcomes the once-per-cycle limitation of SDRAM.
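    The notes above only name the Hamming code, so here is a small, self-contained illustration (mine, not from the textbook) of a Hamming(7,4) encoder with single-bit error correction. The bit layout used here, with parity bits at positions 1, 2 and 4, is one common convention.

        // Illustrative Hamming(7,4) encode/correct with single-bit error correction.
        // Codeword positions are 1..7; parity bits sit at positions 1, 2 and 4.
        public class Hamming74 {

            // data: 4 bits (d1..d4) in the low bits of an int; returns the 7-bit codeword
            static int encode(int data) {
                int d1 = (data >> 0) & 1, d2 = (data >> 1) & 1,
                    d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
                int p1 = d1 ^ d2 ^ d4;   // parity over data at positions 3, 5, 7
                int p2 = d1 ^ d3 ^ d4;   // parity over data at positions 3, 6, 7
                int p3 = d2 ^ d3 ^ d4;   // parity over data at positions 5, 6, 7
                // layout: pos1=p1, pos2=p2, pos3=d1, pos4=p3, pos5=d2, pos6=d3, pos7=d4
                return (p1 << 0) | (p2 << 1) | (d1 << 2) | (p3 << 3)
                     | (d2 << 4) | (d3 << 5) | (d4 << 6);
            }

            // returns the corrected codeword; the syndrome names the flipped position (0 = no error)
            static int correct(int code) {
                int s1 = bit(code, 1) ^ bit(code, 3) ^ bit(code, 5) ^ bit(code, 7);
                int s2 = bit(code, 2) ^ bit(code, 3) ^ bit(code, 6) ^ bit(code, 7);
                int s3 = bit(code, 4) ^ bit(code, 5) ^ bit(code, 6) ^ bit(code, 7);
                int syndrome = s1 | (s2 << 1) | (s3 << 2);
                return syndrome == 0 ? code : code ^ (1 << (syndrome - 1));
            }

            static int bit(int code, int pos) { return (code >> (pos - 1)) & 1; }

            public static void main(String[] args) {
                int word = 0b1011;                              // 4 data bits
                int sent = encode(word);
                int received = sent ^ (1 << 4);                 // flip codeword position 5 in transit
                System.out.println(correct(received) == sent);  // prints true
            }
        }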

    Read the article

  • Opposite Force to Apply to a Collided Rigid Body?

    - by Milo
    I'm working on the physics for my GTA2-like game so I can learn more about game physics. The collision detection and resolution are working great. I'm now just unsure how to compute the force to apply to a body after it collides with a wall. My rigid body looks like this: /our simulation object class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private float mass; private Vector2D v = new Vector2D(); //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; private Matrix mat = new Matrix(); private float[] Vector2Ds = new float[2]; private Vector2D tangent = new Vector2D(); private static Vector2D worldRelVec = new Vector2D(); private static Vector2D relWorldVec = new Vector2D(); private static Vector2D pointVelVec = new Vector2D(); private static Vector2D acceleration = new Vector2D(); public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; setLayer(LAYER_OBJECTS); } protected void rectChanged() { if(getWorld() != null) { getWorld().updateDynamic(this); } } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position.x,position.y, getWidth(), getHeight(), angle); rectChanged(); } public Vector2D getPosition() { return getRect().getCenter(); } @Override public void update(float timeStep) { doUpdate(timeStep); } public void doUpdate(float timeStep) { //integrate physics //linear acceleration.x = forces.x / mass; acceleration.y = forces.y / mass; velocity.x += (acceleration.x * timeStep); velocity.y += (acceleration.y * timeStep); //velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); v.x = getRect().getCenter().getX() + (velocity.x * timeStep); v.y = getRect().getCenter().getY() + (velocity.y * timeStep); setCenter(v.x, v.y); forces.x = 0; //clear forces forces.y = 0; //angular float angAcc = torque / inertia; angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { mat.reset(); Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); relWorldVec.x = Vector2Ds[0]; relWorldVec.y = Vector2Ds[1]; return relWorldVec; } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { mat.reset(); Vector2Ds[0] = world.x; Vector2Ds[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vector2Ds); worldRelVec.x = Vector2Ds[0]; worldRelVec.y = Vector2Ds[1]; return worldRelVec; } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { tangent.x = -worldOffset.y; tangent.y = worldOffset.x; pointVelVec.x = (tangent.x * angularVelocity) + 
velocity.x; pointVelVec.y = (tangent.y * angularVelocity) + velocity.y; return pointVelVec; } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces.x += worldForce.x; forces.y += worldForce.y; //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } public Vector2D getVelocity() { return velocity; } public void setVelocity(Vector2D velocity) { this.velocity = velocity; } } The way it is given force is by the applyForce method, this method considers angular torque. I'm just not sure how to come up with the vectors in the case where: RigidBody hits static entity RigidBody hits other RigidBody that may or may not be in motion. Would anyone know a way (without too complex math) that I could figure out the opposite force I need to apply to the car? I know the normal it is colliding with and how deep it collided. My main goal is so that say I hit a building from the side, well the car should not just stay there, it should slowly rotate out of it if I'm more than 45 degrees. Right now when I hit a wall I only change the velocity directly which does not consider angular force. Thanks!
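    One common approach (offered here only as a hedged sketch, not as part of the original question) is to resolve the collision with an impulse along the contact normal rather than a force. The snippet below reuses the RigidBody fields shown above, but the accessors getMass(), getInertia(), getAngularVelocity() and setAngularVelocity(), as well as the restitution value, are assumptions that do not appear in the posted code.

        // Sketch: impulse response for "RigidBody hits a static entity" (e.g. a wall).
        // normal       - unit collision normal pointing from the wall towards the body
        // contactPoint - world-space contact point reported by the collision detection
        // penetration  - how deep the body sank into the wall
        public void resolveWallCollision(RigidBody body, Vector2D normal,
                                         Vector2D contactPoint, float penetration) {
            float restitution = 0.2f; // assumed "bounciness", 0 = dead stop, 1 = perfect bounce

            // offset from the body's centre to the contact point
            Vector2D r = new Vector2D();
            r.x = contactPoint.x - body.getPosition().x;
            r.y = contactPoint.y - body.getPosition().y;

            // velocity of the contact point (linear + angular contribution)
            Vector2D v = body.pointVelocity(r);
            float velAlongNormal = v.x * normal.x + v.y * normal.y;
            if (velAlongNormal > 0) {
                return; // already separating, nothing to resolve
            }

            // impulse magnitude: j = -(1 + e) * (v . n) / (1/m + (r x n)^2 / I)
            float rCrossN = r.x * normal.y - r.y * normal.x;
            float invMass = 1.0f / body.getMass();       // assumed getter
            float invInertia = 1.0f / body.getInertia(); // assumed getter
            float j = -(1.0f + restitution) * velAlongNormal
                    / (invMass + rCrossN * rCrossN * invInertia);

            // apply the impulse directly to the velocities (not via applyForce,
            // which would be scaled by the timestep during integration)
            body.getVelocity().x += j * normal.x * invMass;
            body.getVelocity().y += j * normal.y * invMass;
            body.setAngularVelocity(body.getAngularVelocity() + rCrossN * j * invInertia); // assumed accessors

            // positional correction so the body does not stay embedded in the wall
            body.setCenter(body.getPosition().x + normal.x * penetration,
                           body.getPosition().y + normal.y * penetration);
        }

    Hitting a wall off-centre makes rCrossN non-zero, so part of the impulse turns into angular velocity and the car rotates away from the wall instead of sticking to it; body-versus-body collisions use the same formula with both bodies' inverse masses and inertias summed and the impulse applied with opposite signs to each body.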

    Read the article

  • Computer Networks UNISA - Chap 14 – Ensuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to:
    Identify the characteristics of a network that keep data safe from loss or damage
    Protect an enterprise-wide network from viruses
    Explain network and system level fault tolerance techniques
    Discuss issues related to network backup and recovery strategies
    Describe the components of a useful disaster recovery plan and the options for disaster contingencies

    What are integrity and availability?
    Integrity – the soundness of a network's programs, data, services, devices, and connections
    Availability – how consistently and reliably a file or system can be accessed by authorized personnel
    A number of phenomena can compromise both integrity and availability, including security breaches, natural disasters, malicious intruders, power flaws, human error, users, etc. Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines:
    Allow only network administrators to create or modify NOS and application system users
    Monitor the network for unauthorized access or changes
    Record authorized system changes in a change management system
    Install redundant components
    Perform regular health checks on the network
    Check system performance, error logs, and the system log book regularly
    Keep backups
    Implement and enforce security and disaster recovery policies
    These are just some of the basics.

    Malware
    Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of malware include boot sector viruses, macro viruses, file infector viruses, worms, Trojan horses, network viruses and bots.
    Malware characteristics
    Some common characteristics of malware include encryption, stealth, polymorphism and time dependence.
    Malware protection
    There are various tools available to protect you from malware, called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware, including signature scanning, integrity checking, and monitoring for unexpected file changes or virus-like behaviours. It is important to decide where anti-malware tools will be installed and find a balance between performance and protection. There are several general-purpose malware policies that can be implemented to protect your network, including:
    Every computer in an organization should be equipped with malware detection and cleaning software that runs regularly
    Users should not be allowed to alter or disable the anti-malware software
    Users should know what to do in case the anti-malware program detects a malware virus
    Users should be prohibited from installing any unauthorized software on their systems
    System-wide alerts should be issued to network users notifying them if a serious malware virus has been detected

    Fault Tolerance
    Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance. Fault tolerance is the ability of a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally, the more fault tolerant the system, the more expensive it is. The following are some of the areas that need to be considered for fault tolerance:
    Environment (temperature and humidity), power, topology and connectivity, servers, and storage.
    Power
    Typical power flaws include:
    Surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power
    Noise – fluctuation in voltage levels caused by other devices on the network or electromagnetic interference
    Brownout – a sag in voltage for just a moment
    Blackout – a complete power loss
    There are various alternate power sources to consider, including UPSs and generators. UPSs fall into two categories:
    Standby UPS – provides continuous power when the mains goes down (with a brief period of switching over)
    Online UPS – is online all the time and the device receives power from the UPS all the time (the UPS is charged continuously)
    Servers
    There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another. It is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it.
    Storage
    There are various techniques available, including RAID arrays, NAS (Network Attached Storage) and SANs (Storage Area Networks).
    Data Backup
    A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist, with various media including optical media, tape backup, external disk drives and network backups. These vary in cost and speed.
    Backup Strategy
    After selecting the appropriate tool for performing your server backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include:
    What data must be backed up?
    At what time of day or night will the backups occur?
    How will you verify the accuracy of the backups?
    Where and for how long will backup media be stored?
    Who will take responsibility for ensuring that backups occurred?
    How long will you save backups?
    Where will backup and recovery documentation be stored?
    Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up, including the following (a small illustrative sketch follows at the end of these notes):
    Full backup – all data on all servers is copied to storage media
    Incremental backup – only data that has changed since the last full or incremental backup is copied to a storage medium
    Differential backup – only data that has changed since the last full backup is copied to a storage medium
    Disaster Recovery
    Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some appropriately configured devices. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
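    As a concrete illustration of the three backup methods above (not part of the study notes), the difference comes down to which timestamp a file's last modification is compared against. A minimal sketch, assuming backup times are tracked in milliseconds:

        import java.io.File;

        // Illustrative only: decides whether a file would be copied by each backup method.
        public class BackupSelector {

            enum Method { FULL, INCREMENTAL, DIFFERENTIAL }

            // lastFullBackup: time of the last full backup
            // lastAnyBackup:  time of the last backup of any kind (full or incremental)
            static boolean shouldCopy(File f, Method method,
                                      long lastFullBackup, long lastAnyBackup) {
                switch (method) {
                    case FULL:
                        return true;                              // everything, every time
                    case INCREMENTAL:
                        return f.lastModified() > lastAnyBackup;  // changed since the last backup of any kind
                    case DIFFERENTIAL:
                        return f.lastModified() > lastFullBackup; // changed since the last full backup
                    default:
                        return false;
                }
            }
        }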

    Read the article

  • Siemens AG, Sector Healthcare, Increases Transparency and Improves Customer Loyalty with Web Portal Solution

    - by Kellsey Ruppel
    Siemens AG, Sector Healthcare, Increases Transparency and Improves Customer Loyalty with Web Portal Solution CUSTOMER AND PARTNER INFORMATION Customer Name – Siemens AG, Sector Healthcare Customer Revenue – 73,515 Billion Euro (2011, Siemens AG total) Customer Quote – “The realization of our complex requirements within a very short amount of time was enabled through the competent implementation partner Sapient, who fully used the  very broad scope of standard functionality provided in the Oracle WebCenter Portal, and the management of customer services, who continuously supported the project setup. ” – Joerg Modlmayr, Project Manager, Healthcare Customer Service Portal, Siemens AG The Siemens Healthcare Sector is one of the world's largest suppliers to the healthcare industry and a trendsetter in medical imaging, laboratory diagnostics, medical information technology and hearing aids. Siemens offers its customers products and solutions for the entire range of patient care from a single source – from prevention and early detection to diagnosis, and on to treatment and aftercare. By optimizing clinical workflows for the most common diseases, Siemens also makes healthcare faster, better and more cost-effective. To ensure greater transparency, increased efficiency, higher user acceptance, and additional services, Siemens AG, Sector Healthcare, replaced several existing legacy portal solutions that could not meet the company’s future needs with Oracle WebCenter Portal. Various existing portal solutions that cannot meet future demands will be successively replaced by the new central service portal, which will also allow for the efficient and intuitive implementation of new service concepts.  With Oracle, doctors and hospitals using Siemens medical solutions now have access to a central information portal that provides important information and services at just the push of a button.  Customer Name – Siemens AG, Sector Healthcare Customer URL – www.siemens.com Customer Headquarters – Erlangen, Germany Industry – Industrial Manufacturing Employees – 360,000  Challenges – Replace disparate medical service portals to meet future demands and eliminate an  unnecessarily high level of administrative work caused by heterogeneous installations Ensure portals meet current user demands to improve user-acceptance rates and increase number of total users Enable changes and expansion through standard functionality to eliminate the need for reliance on IT and reduce administrative efforts and associated high costs Ensure efficient and intuitive implementation of new service concepts for all devices and systems Ensure hospitals and clinics to transparently monitor and measure services rendered for the various medical devices and systems  Increase electronic interaction and expand services to achieve a higher level of customer loyalty Solution –  Deployed Oracle WebCenter Portal to ensure greater transparency, and as a result, a higher level of customer loyalty  Provided a centralized platform for doctors and hospitals using Siemens’ medical technology solutions that provides important information and services at the push of a button Reduced significantly the administrative workload by centralizing the solution in the new customer service portal Secured positive feedback from customers involved in the pilot program developed by design experts from Oracle partner Sapient. The interfaces were created with customer needs in mind. 
The first survey taken shortly after implementation came back with 2.4 points on a scale of 0-3 in the category “customer service portal intuitiveness level” Met all requirements including alignment with the Siemens Style Guide without extensive programming Implemented additional services via the portal such as benchmarking options to ensure the optimal use of the Customer Device Park Provided option for documentation of all services rendered in conjunction with the medical technology systems to ensure that the value of the services are transparent for the decision makers in the hospitals  Saved and stored all machine data from approximately 100,000 remote systems in the central service and information platform Provided the option to register errors online and follow the call status in real-time on the portal Made  available at the push of a button all information on the medical technology devices used in hospitals or clinics—from security checks and maintenance activities to current device statuses Provided PDF format Service Performance Reports that summarize information from periods of time ranging from previous weeks up to one year, meeting medical product law requirements  Why Oracle – Siemens AG favored Oracle for many reasons, however, the company ultimately decided to go with Oracle due to the enormous range of functionality the solutions offered for the healthcare sector.“We are not programmers; we are service providers in the medical technology segment and focus on the contents of the portal. All the functionality necessary for internet-based customer interaction is already standard in Oracle WebCenter Portal, which is a huge plus for us. Having Oracle as our technology partner ensures that the product will continually evolve, providing a strong technology platform for our customer service portal well into the future,” said Joerg Modlmayr project manager, Healthcare Customer Service Portal, Siemens AG. Partner Involvement – Siemens AG selected Oracle Partner Sapient because the company offered a service portfolio that perfectly met Siemens’ requirements and had a wealth of experience implementing Oracle WebCenter Portal. Additionally, Sapient had designers with a very high level of expertise in usability—an aspect that Siemens considered to be of vast importance for the project.  “The Sapient team completely met all our expectations. Our tightly timed project was completed on schedule, and the positive feedback from our users proves that we set the right measures in terms of usability—all thanks to the folks at Sapient,” Modlmayr said.  Partner Name – Sapient GmbH Deutschland Partner URL – www.sapient.com

    Read the article

  • Second Day of Data Integration Track at OpenWorld 2012

    - by Doug Reid
    Our second day at OpenWorld, and the Data Integration team was very active with customer meetings, product updates, product demonstrations, sessions, plus much more. If the volume of traffic by our demo pods is any indicator, this is a record year for attendance at OpenWorld. The DIS team has had a tremendous number of people stop by our demo pods to learn about the latest product releases or to speak to one of our product managers. For Oracle GoldenGate, there has been a great deal of interest in Integrated Capture and the Oracle GoldenGate Monitor plug-in for Enterprise Manager. Our customer panels this year have been very well attended, and on Tuesday we held the “Real World Operational Reporting with Oracle GoldenGate Customer Panel”. On this panel this year we had Michael Wells from Raymond James, Joy Mathew and Venki Govindarajan from Comcast, and Serkan Karatas from Turk Telekom. Our panelists have a great mix of experiences and all are passionate about using Oracle Data Integration products to solve very complex use cases. Each panelist was given ten minutes to give an overview of their use of our product, followed by a barrage of questions from the audience. Michael Wells spoke about using Oracle GoldenGate for heterogeneous real-time replication from HP (Tandem) NonStop to SQL Server and emphasized the need for using standard naming conventions when customers configure GoldenGate, as the practice is immensely helpful when debugging a problem. Joy Mathew and Venkat Govindarajan from Comcast described how they have used GoldenGate for over a decade and their experiences of using the product for replicating data from HP NonStop to Teradata. Serkan Karatas from Turk Telekom dove into using Oracle GoldenGate and the value of archiving data in extremely large databases, which in Turk Telekom's case resulted in a one-month ROI for the entire project. Thanks again to our panelists and audience participants for making the session interactive and informative. For Wednesday we have a number of sessions available to attendees plus two hands-on labs, which I have listed below. If you are unable to attend our hands-on lab for Oracle GoldenGate Veridata, it is available online at youtube.com.
Sessions
11:45 AM - 12:45 PM: Best Practices for High Availability with Oracle GoldenGate on Oracle Exadata - Moscone South - 102
1:15 PM - 2:15 PM: Customer Perspectives: Oracle Data Integrator - Marriott Marquis - Golden Gate C3
1:15 PM - 2:15 PM: Oracle GoldenGate Case Study: Real-Time Operational Reporting Deployment at Oracle - Moscone West - 2003
1:15 PM - 2:15 PM: Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform - Moscone West - 3000
3:30 PM - 4:30 PM: Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active - Moscone West - 3000
5:00 PM - 6:00 PM: Tuning and Troubleshooting Oracle GoldenGate on Oracle Database - Moscone South - 102
Hands-on Labs
10:15 AM - 11:15 AM: Introduction to Oracle GoldenGate Veridata - Marriott Marquis - Salon 1/2
11:45 AM - 12:45 PM: Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab - Marriott Marquis - Salon 1/2
If you are at OpenWorld, please join us in these sessions. For a full review of the data integration track at OpenWorld, please see our Focus-On Document.

    Read the article

  • Exposing model object using bindings in custom NSCell of NSTableView

    - by Hooligancat
    I am struggling trying to perform what I would think would be a relatively common task. I have an NSTableView that is bound to its array via an NSArrayController. The array controller has its content set to an NSMutableArray that contains one or more NSObject instances of a model class. What I don't know how to do is expose the model inside the NSCell subclass in a way that is bindings friendly. For the purpose of illustration, we'll say that the object model is a person consisting of a first name, last name, age and gender. Thus the model would appear something like this: @interface PersonModel : NSObject { NSString * firstName; NSString * lastName; NSString * gender; int age; } Obviously the appropriate setters, getters, init, etc. for the class. In my controller class I define an NSTableView, NSMutableArray and an NSArrayController: @interface ControllerClass : NSObject { IBOutlet NSTableView * myTableView; NSMutableArray * myPersonArray; IBOutlet NSArrayController * myPersonArrayController; } Using Interface Builder I can easily bind the model to the appropriate columns: myPersonArray --> myPersonArrayController --> table column binding This works fine. So I remove the extra columns, leaving one column hidden that is bound to the NSArrayController (this creates and keeps the association between each row and the NSArrayController) so that I am down to one visible column in my NSTableView and one hidden column. I create an NSCell subclass and put the appropriate drawing method in place to create the cell. In my awakeFromNib I establish the custom NSCell subclass: PersonModel * aCustomCell = [[[PersonModel alloc] init] autorelease]; [[myTableView tableColumnWithIdentifier:@"customCellColumn"] setDataCell:aCustomCell]; This, too, works fine from a drawing perspective. I get my custom cell showing up in the column and it repeats for every managed object in my array controller. If I add an object or remove an object from the array controller the table updates accordingly. However... I was under the impression that my PersonModel object would be available from within my NSCell subclass. But I don't know how to get to it. I don't want to set each NSCell using setters and getters because then I'm breaking the whole model concept by storing data in the NSCell instead of referencing it from the array controller. And yes, I do need to have a custom NSCell, so having multiple columns is not an option. Where to from here? In addition to the Google and StackOverflow search, I've done the obligatory walk-through of Apple's docs and don't seem to have found the answer. I have found a lot of references that beat around the bush but nothing involving an NSArrayController. The controller makes life very easy when binding to other elements of the model entity (such as a master/detail scenario). I have also found a lot of references (although no answers) when using Core Data, but I'm not using Core Data. As per the norm, I'm very grateful for any assistance that can be offered!
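    One possible direction, sketched here purely as an illustration (the binding setup and the PersonCell name below are assumptions, not something established above): if the visible column's value binding points at the array controller's arrangedObjects with an empty (self) model key path, bindings typically hand each row's PersonModel to the cell as its objectValue, so the cell can read the model at draw time without storing its own copy:
        // Hypothetical sketch: a custom cell that treats its objectValue as the row's PersonModel.
        // Assumes the column's value binding targets the NSArrayController's arrangedObjects with
        // no model key path, so the whole model object (not a single property) reaches the cell.
        #import <Cocoa/Cocoa.h>
        #import "PersonModel.h"   // the model class from the question

        @interface PersonCell : NSCell
        @end

        @implementation PersonCell
        - (void)drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView *)controlView
        {
            PersonModel *person = [self objectValue];   // the row's model, if the binding is set up as assumed
            NSString *text = [NSString stringWithFormat:@"%@ %@", [person firstName], [person lastName]];
            [text drawInRect:cellFrame withAttributes:nil];
        }
        @end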

    Read the article

  • Mauritius Software Craftsmanship Community

    There we go! I finally managed to push myself forward and pick up an old, actually too old, idea since I ever arrived here in Mauritius more than six years ago. I'm talking about a community for all kind of ICT connected people. In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany due to an FAQ on Visual FoxPro, actually Active FoxPro Pages (AFP) to be more precise. Then in 2003/2004 I addressed the responsible person of the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly. Why did it take you so long? Well, I don't want to bother you with the details but short version is that I was too busy on either job (building up new companies) or private life (got married and we have two lovely children, eh 'monsters') or even both. But now is the time where I was starting to look for new fields given the fact that I gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself again to community activities. And I love to do that! Why a new user group? Good question... And 'easy' to answer. Since back in 2007 I did my usual research, eh Google searches, to see whether there existing user groups in Mauritius and in which field of interest. And yes, there are! If I recall this correctly, then there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (which used to be even two). But... either they do not exist anymore, they are dormant, or there is only a low heart-beat, frankly speaking. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how to approach future members. Don't get me wrong, I'm not criticizing others doing a very good job, I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it. Ok, so what's up with 'Mauritius Software Craftsmanship Community' or short: MSCC? As I've already written in my article on 'Communities - The importance of exchange and discussion' I think it is essential in a world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamic and every day's news that it is almost impossible to keep on track with all of them. The MSCC is going to provide a common platform to exchange experience and share knowledge between each other. You might be a newbie and want to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator, or you're an experienced geek that loves to share your ideas or solutions that you implemented to solve a specific problem, or you're the business (or HR) guy that is looking for 'fresh' blood to enforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people. Meetup of 26.06.2013 @ L'arabica: Of course there are laptops around. Free WiFi, power outlet, coffee, code and Linux in one go. The MSCC is technology-agnostic and spans an umbrella over any kind of technology. Simply because you can't ignore other technologies anymore in a connected IT world as we have. 
A front-end developer for iOS applications should have the chance to connect with a Python back-end coder and eventually with a DBA for MySQL or PostgreSQL and exchange their experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure Web developers - with all that HTML5, CSS3, JavaScript and JS libraries stuff - and passionate C# or Java coders at the same table. This diversity of knowledge can assist and boost your personal situation. And last but not least, there are projects and open positions 'flying' around... People might like to hear others opinion about an employer or get new impulses on how to tackle down an issue at their workspace, etc. This is about community. And that's how I see the MSCC in general - free of any limitations be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy. Compared to dusty books and remote online resources. It's human! Organising meetups (meetings, get-together, gatherings - you name it!) As of writing this article, the MSCC is currently meeting every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change in the future eventually but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-) The MSCC's main online presence is located at Meetup.com because it allows me to handle the organisation of events and meeting appointments very easily, and any member can have a look who else is involved so that an exchange of contacts is given at any time. In combination with the other entities (G+ Communities, FB Pages or in Groups) I advertise and manage all future activities here: Mauritius Software Craftsmanship Community This is a community for those who care and are proud of what they do. For those developers, regardless how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know, there are not many 'craftsmen' yet but it's a start... Let's see how it looks like by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and to stay informed on latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as free smartphone app. Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to cover as many people as possible here in Mauritius. Following is an overview of the current networks: Twitter - Latest updates and quickies Google+ - Community channel Facebook - Community Page LinkedIn - Community Group Trello - Collaboration workspace to share and develop ideas Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC between your colleagues, your friends and other interested 'geeks'. Your future looks bright Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone. 
On the one side it is very joyful for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or of a recent problem they had to tackle, and on the other side there are lots of companies that have various support programmes or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kinds of goodies, books free of charge, or sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your point of view on the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees. Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here in the blog or join the conversations in the above-mentioned social networks. Let's get this community up and running, my fellow Mauritians! Recent updates The MSCC is now officially participating in the O'Reilly UK User Group programme and we are allowed to request review copies of recent titles. Additionally, we have a discount code for any books or ebooks that you might like to order on shop.oreilly.com. More applications for user group sponsorship programmes are pending, and I'm looking forward to a couple of announcements very soon. And... we need some kind of 'corporate identity': over at the MSCC website there is a call for action (or better said, a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc. and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
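A quick illustration of the join-hint form mentioned near the top of the post (a sketch only, not part of the original article): the same hash-join behaviour can be requested per join with the INNER HASH JOIN syntax, which, unlike the OPTION (HASH JOIN) query hint, also fixes the join order to the order written:

    SELECT P.ProductMainID AS PID, P.Name, D.OrderQty
    FROM #Prods AS P
    INNER HASH JOIN #OrdDetail AS D     -- join hint: forces a hash join here and fixes the join order
        ON  P.ProductMainID   = D.ProductMainID
        AND P.ProductSubID    = D.ProductSubID
        AND P.ProductSubSubID = D.ProductSubSubID
    OPTION (MAXDOP 1);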

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about: it was done in strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the 'Engine', and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy 'note' to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper, as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical Engine had the device's construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best-known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dated from her teenage zest for mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as an intellectual equal. 
He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’ which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affairs, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature.  From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844)  And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it'. (Nov 1844)  The two of them were obviously intent on the work: She writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada. 
Secondly, Babbage would have very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts his type of intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would have not had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher.  All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely, the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects, the writing of a book 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians’ military dispatches in the Crimean war. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research seems to have been lost: or has it? I've argued in the past http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx that the cracking of the Vigenère autokey cipher, was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, whose understanding of the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project. 
If the three friends were actually doing the work of cracking codes by mathematical techniques that used the techniques of key periodicity, and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending to then use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes). This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the Nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably on her request by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean war put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytic Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean war, and the American Civil war, led to the British and Americans  subsequently giving much more weight and funding to the science of decryption. Paradoxically, this makes Ada's contribution even closer to the creation of Colossus, the first digital computer, at Bletchley Park, specifically to crack the Nazi’s secret codes.

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
    Michelle Juett is one of the more interesting people I know, even though we’ve never met face to face. She’s part artist, part techie and all cool. We “met” via my good buddy George Clingerman and have plotting to take over the world, errr… I mean “collaborating” ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I’ll let her tell you the rest of the story in these NINE Questions: 1. Where are you from? I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle, it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water. 2. What do you do? I fight evil by moonlight and win love by daylight.. or something like that.  I’ve been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue.  At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up. 3. How did you get to where you are now? I'm still not where I want to be but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself but I've never regretted it. You can want something really bad but if you aren't willing to work for it, then you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I’d grow out of.  I knew I would not ‘grow out’ of games and that real adults make them and I could too. After I graduated, in searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I’ve worked with consoles, websites and now, PC games.  I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard working testers and an insight into creative problem solving. 4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a given day for you? Game testers don't get a lot of respect because of their stigmas and the fact most people don't actually know what we do.  People hear about the opening and closing disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything obvious: “can I beat the level and save the princess?” to the more obscure: ‘What happens when I lose internet connection while trying to save right before falling into a pit to my death while holding the jump key then my cat pulls out my memory card and hides it in her litter box?” On the dev side, for developers, testers can be very scary people. Especially when the test team is not in house and you can’t see each other’s faces.  
I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love. 5. You are also an accomplished artist. Got any major projects right now you'd like to talk about? LOL, I don't know if I’d say I'm an accomplished artist just yet. I’m still a long way from where I want to be. I figure that’s what makes you grow though: the desire to never stop improving. I like QA but I want to be a full time artist. I was lucky enough to register for a table at Sakura Con in the 11 second window that the tables sold out. As such, I’ll be selling my wares in the Artist Alley April 2-4th. Part of preparing for this is actually making the art to be sold there. Anime is a fun pass time but I don’t draw a whole lot of it so I’m making up for lost time. As I seem to enjoy burying myself in work, I’m an art lead for a secret project that’s so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for Indies when I was writing a weekly newsletter on XBLA news. I’m a little late to the party but I find myself in a unique position where I am an artist and also have technical skills in games. While not programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen. Lastly, I have an ongoing web comic Shell’s Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future. 6. Can you talk about any of the other freelance projects you're doing or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything. All my projects are currently 2d. I have personal projects such as the ongoing comic as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con.  I see it as a great way to get exposure and convention experience. I found out I love conventions a couple years ago and I want to get more involved in them. 7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done? I am a Photoshop Hero and I have the hoodie to prove it. (http://www.pennyarcademerch.com/pah090011.html) I've dabbled in other paint programs but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time because of their animation abilities. I've worked on frame by frame animation forever but I would love to learn 2d rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist it will be an ideal weapon for me. 8. You did some videos too, depicting the art creation process. What was the motivation behind those? The creative process is just as important as the final product, if not more so.  I've always loved watching speed paint videos and wanted to try it out myself. Turns out it's a lot of work and time but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself. 9. Got any interesting tattoos? Designed any for yourself or other people? 
Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games but I couldn't find something I felt would stand the test of time even with all the classic sprite games. I'm very picky so we'll see if I can get something solid decided. Come see me at Sakura Con April 2 -4!!!

    Read the article

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my group's C# coding standards the other day and there were a couple of legacy items in there that caught my eye.  They had been passed down from committee to committee so many times that no one had thought to second-guess or try them in a long time.  It's yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be for the sake of squeezing an extra ounce of performance out of our software. So the two standards in question were these, in paraphrase: Prefer StringBuilder or string.Format() to string concatenation. Prefer string.Equals() with case-insensitive option to string.ToUpper().Equals(). Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it's always worth repeating and trying these yourself.  So let's dig in. The first test was a pretty standard one.  When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()? So before we begin, I read in a number of iterations from the console and a length of each string to generate.  Then I generate that many random strings of the given length and an array to hold the results.  Why am I so keen to keep the results?  Because I want to be able to snapshot the memory and don't want garbage collection to collect the strings, hence the array to keep hold of them.  I also didn't want the random strings to be part of the allocation, so I pre-allocate them and the array up front before the snapshot.  So in the code snippets below: num – Number of iterations. strings – Array of randomly generated strings. results – Array to hold the results of the concatenation tests. timer – A System.Diagnostics.Stopwatch() instance to time code execution. start – Beginning memory size. stop – Ending memory size. after – Memory size after final GC. So first, let's look at the concatenation loop: // build num strings using concatenation. for (int i = 0; i < num; i++) { results[i] = "This is test #" + i + " with a result of " + strings[i]; } Pretty standard, right?  Next for string.Format(): // build strings using string.Format() for (int i = 0; i < num; i++) { results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]); }   Finally, StringBuilder: // build strings using StringBuilder for (int i = 0; i < num; i++) { var builder = new StringBuilder(); builder.Append("This is test #"); builder.Append(i); builder.Append(" with a result of "); builder.Append(strings[i]); results[i] = builder.ToString(); } So I take each of these loops, and time them by using a block like this: // get the total amount of memory used, true tells it to run GC first. start = System.GC.GetTotalMemory(true);  // restart the timer timer.Reset(); timer.Start();  // *** code to time and measure goes here. ***  // get the current amount of memory, stop the timer, then get memory after GC. stop = System.GC.GetTotalMemory(false); timer.Stop(); after = System.GC.GetTotalMemory(true); So let's look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations: Operator + - Time: 547, Memory: 56104540/55595960 - 500000 string.Format() - Time: 749, Memory: 57295812/55595960 - 500000 StringBuilder - Time: 608, Memory: 55312888/55595960 – 500000   Egad!  
string.Format brings up the rear and + triumphs, well, at least in terms of speed.  The concat burns more memory than StringBuilder but less than string.Format().  This shows two main things: StringBuilder is not always the panacea many think it is. The difference between any of the three is minuscule! The second point is extremely important!  You will often hear people who will grasp at results and say, “look, operator + is 10% faster than StringBuilder, so always use +.”  Statements like this are a disservice and often misleading.  For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuilder like so:   for (int i = 0; i < num; i++) { // pre-declare StringBuilder to have a 100-char buffer. var builder = new StringBuilder(100); builder.Append("This is test #"); builder.Append(i); builder.Append(" with a result of "); builder.Append(strings[i]); results[i] = builder.ToString(); }   Now let's look at the times: Operator + - Time: 551, Memory: 56104412/55595960 - 500000 string.Format() - Time: 753, Memory: 57296484/55595960 - 500000 StringBuilder - Time: 525, Memory: 59779156/55595960 - 500000   Whoa!  All of a sudden StringBuilder is back on top again!  But notice, it takes more memory now.  This makes perfect sense if you examine the IL behind the scenes.  Whenever you concatenate strings with + in your code, the compiler emits a single String.Concat call, which examines the lengths of the arguments and allocates a result string of exactly the right size for you. But even IF we know the approximate size of our StringBuilder, look how much less readable it is!  That's why I feel you should always take into account both readability and performance.  After all, consider that all these timings are over 500,000 iterations.   That's at most a 0.0004 ms difference per call, which is negligible.  The key is to pick the best tool for the job.  What do I mean?  Consider these awesome words of wisdom: Concatenate (+) is best at concatenating.  StringBuilder is best at building. Format is best at formatting. Totally Earth-shattering, right?  But if you consider it carefully, it actually has a lot of beauty in its simplicity.  Remember, there is no magic bullet.  If one of these always beat the others, we'd only have one choice, not three. The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length.  Use it when you are looping until you hit a stop condition and building up a result, and it won't steer you wrong. String.Format seems to be the loser from the stats, but consider which of these is more readable.  Yes, ignore the fact that you could do this with ToString() on a DateTime.  // build a date via concatenation var date1 = (month < 10 ? "0" : string.Empty) + month + '/' + (day < 10 ?
"0" : string.Empty) + day + '/' + year;  // build a date via StringBuilder var builder = new StringBuilder(10); if (month < 10) builder.Append('0'); builder.Append(month); builder.Append('/'); if (day < 10) builder.Append('0'); builder.Append(day); builder.Append('/'); builder.Append(year); var date2 = builder.ToString();  // build a date via string.Format var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year);  So the strength in string.Format is that it makes constructing a formatted string easy to read.  Yes, it's slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does. So my lesson is, don't look for the silver bullet!  Choose the best tool.  Micro-optimization almost always bites you in the end because you're sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization.  They've been stated before in many forms, but here's how I always remember them: For Beginners: Do not optimize. For Experts: Do not optimize yet. It's so true.  Most of the time on today's modern hardware, a micro-second optimization at the expense of readability will net you nothing, because it won't be your bottleneck.  Code for readability, and choose the best tool for the job, which will usually be the most readable and maintainable choice as well.  Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options… then you can start to think about optimizing.
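The excerpt above only works through the first of the two standards; purely as a sketch of what the second one refers to (an illustration, not the author's benchmark), a case-insensitive comparison can be done without creating throwaway upper-cased copies:

    // Coding standard #2, illustrated (assumes using System;).
    string a = "OpenWorld";
    string b = "OPENWORLD";

    bool viaToUpper = a.ToUpper().Equals(b.ToUpper());                         // builds upper-cased copies just to compare
    bool viaEquals  = string.Equals(a, b, StringComparison.OrdinalIgnoreCase); // compares in place, no intermediate strings

    Console.WriteLine(viaToUpper + " " + viaEquals);   // True True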

    Read the article

  • Authorizing a computer to access a web application

    - by HackedByChinese
    I have a web application, and am tasked with adding secure sign-on to bolster security, akin to what Google has added to Google accounts. Use Case Essentially, when a user logs in, we want to detect if the user has previously authorized this computer. If the computer has not been authorized, the user is sent a one-time password (via email, SMS, or phone call) that they must enter, where the user may choose to remember this computer. In the web application, we will track authorized devices, allowing users to see when/where they logged in from that device last, and deauthorize any devices if they so choose. We require a solution that is very light touch (meaning, requiring no client-side software installation), and works with Safari, Chrome, Firefox, and IE 7+ (unfortunately). We will offer x509 security, which provides adequate security, but we still need a solution for customers that can't or won't use x509. My intention is to store authorization information using cookies (or, potentially, using local storage, degrading to flash cookies, and then normal cookies). At First Blush Track two separate values (local data or cookies): a hash representing a secure sign-on token, as well as a device token. Both values are driven (and recorded) by the web application, and dictated to the client. The SSO token is dependent on the device as well as a sequence number. This effectively allows devices to be deauthorized (all SSO tokens become invalid) and mitigates replay (not effectively, though, which is why I'm asking this question) through the use of a sequence number, and uses a nonce. Problem With this solution, it's possible for someone to just copy the SSO and device tokens and use in another request. While the sequence number will help me detect such an abuse and thus deauthorize the device, the detection and response can only happen after the valid device and malicious request both attempt access, which is ample time for damage to be done. I feel like using HMAC would be better. Track the device, the sequence, create a nonce, timestamp, and hash with a private key, then send the hash plus those values as plain text. Server does the same (in addition to validating the device and sequence) and compares. That seems much easier, and much more reliable.... assuming we can securely negotiate, exchange, and store private keys. Question So then, how can I securely negotiate a private key for authorized device, and then securely store that key? Is it more possible, at least, if I settle for storing the private key using local storage or flash cookies and just say it's "good enough"? Or, is there something I can do to my original draft to mitigate the vulnerability I describe?
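    To make the HMAC idea above concrete, here is a minimal sketch (Java, for illustration only; the class name, token layout, and field names are assumptions, not part of the question). The client stores the plain values and sends them with the tag on each login; the server recomputes the tag with its copy of the per-device key and compares the two, ideally with a constant-time comparison such as MessageDigest.isEqual:
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        // Hypothetical helper: signs deviceId|sequence|nonce|timestamp with a per-device secret.
        public class DeviceTokenSigner {
            private final byte[] deviceKey;   // negotiated per authorized device; storing it safely is the open question above

            public DeviceTokenSigner(byte[] deviceKey) {
                this.deviceKey = deviceKey;
            }

            public String sign(String deviceId, long sequence, String nonce, long timestamp) throws Exception {
                String payload = deviceId + "|" + sequence + "|" + nonce + "|" + timestamp;
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(deviceKey, "HmacSHA256"));
                byte[] tag = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(tag);
            }
        }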

    Read the article

  • AVFoundation buffer comparison to a saved image

    - by user577552
    Hi, I am a long-time reader, first-time poster on StackOverflow, and must say it has been a great source of knowledge for me. I am trying to get to know the AVFoundation framework. What I want to do is save what the camera sees and then detect when something changes. Here is the part where I save the image to a UIImage:

        if (shouldSetBackgroundImage) {
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

            // Create a bitmap graphics context with the sample buffer data
            CGContextRef context = CGBitmapContextCreate(rowBase, bufferWidth, bufferHeight, 8, bytesPerRow,
                                                         colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

            // Create a Quartz image from the pixel data in the bitmap graphics context
            CGImageRef quartzImage = CGBitmapContextCreateImage(context);

            // Free up the context and color space
            CGContextRelease(context);
            CGColorSpaceRelease(colorSpace);

            // Create an image object from the Quartz image
            UIImage *image = [UIImage imageWithCGImage:quartzImage];
            [self setBackgroundImage:image];
            NSLog(@"reference image actually set");

            // Release the Quartz image
            CGImageRelease(quartzImage);

            // Signal that the image has been saved
            shouldSetBackgroundImage = NO;
        }

    and here is the part where I check whether there is any change in the image seen by the camera:

        else {
            CGImageRef cgImage = [backgroundImage CGImage];
            CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
            CFDataRef bitmapData = CGDataProviderCopyData(provider);
            unsigned char *data = (unsigned char *) CFDataGetBytePtr(bitmapData);

            if (data != NULL) {
                int64_t numDiffer = 0, pixelCount = 0;
                NSMutableArray *pointsMutable = [NSMutableArray array];

                for (int row = 0; row < bufferHeight; row += 8) {
                    for (int column = 0; column < bufferWidth; column += 8) {
                        // we get one pixel from each source (buffer and saved image)
                        unsigned char *pixel = rowBase + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);
                        unsigned char *referencePixel = data + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);

                        pixelCount++;
                        if (!match(pixel, referencePixel, matchThreshold)) {
                            numDiffer++;
                            [pointsMutable addObject:[NSValue valueWithCGPoint:
                                CGPointMake(SCREEN_WIDTH - (column / (float) bufferHeight) * SCREEN_WIDTH - 4.0,
                                            (row / (float) bufferWidth) * SCREEN_HEIGHT - 4.0)]];
                        }
                    }
                }

                numberOfPixelsThatDiffer = numDiffer;
                points = [pointsMutable copy];
            }
        }

    For some reason this doesn't work, meaning that the iPhone detects almost everything as being different from the saved image, even though I set a very low threshold for detection in the match function... Do you have any idea what I am doing wrong?
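
    Setting AVFoundation aside, the comparison logic itself can be sketched in a few lines (Python/NumPy here purely to show the idea, with invented names). It assumes the two frames really do share the same size, channel order, and row stride, which is worth double-checking on the Objective-C side as well (CGImageGetBytesPerRow on the stored image is the authoritative stride for the copied data):

        import numpy as np

        def changed_points(frame: np.ndarray, reference: np.ndarray,
                           threshold: int = 25, step: int = 8):
            """frame/reference: (height, width, 4) uint8 arrays with identical layout."""
            sampled = frame[::step, ::step, :3].astype(np.int16)
            ref = reference[::step, ::step, :3].astype(np.int16)
            differs = (np.abs(sampled - ref) > threshold).any(axis=2)   # per-channel test
            rows, cols = np.nonzero(differs)
            return int(differs.sum()), list(zip(rows * step, cols * step))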

    Read the article

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites I found online directly and clearly answered some of the questions I have. So here are my questions about hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10, I think), is hardware acceleration enabled by default? I read somewhere that someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping() is supposed to give an indication of whether hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple of articles here and there about the difference between a BufferedImage and a VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated in my environment too? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption about the advantage of VolatileImage, in the case that both are accelerated, is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built into it as well, just hidden from the user, in case the memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading about setting up the rendering window properly, the getCompatibleImage method (which I believe returns a BufferedImage) is used to get "hardware accelerated" images for fast drawing, which ties into question 2 about whether it is hardware accelerated.

    4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever stated this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer them.

    Read the article

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little bit confusing, so let me try to explain it better. I am building a logging program. The program will have three main states:

    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring the time (record all data).
    3. Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1).

    Now, the use case is this. I have been experiencing some network bottlenecks from time to time in our network, so I want to build a system to record TCP traffic when it detects a bottleneck (detection via Nagios). However, by the time the bottleneck is detected, most of the useful data has already been transmitted. So, what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file.

    Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones if in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point.

    Any ideas? Should I just do a rotating file system and combine the files into one at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that there is no need for any post-processing? Thanks.

    Oh, and the language doesn't matter as much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design)...
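
    One way to get the "alert sits at the 10:00 mark" behaviour without repairing files is to keep the rolling window in memory (or in small timestamped chunks) and only write a file out when the incident ends. A minimal sketch of that idea, with invented names:

        import time
        from collections import deque

        WINDOW = 600  # seconds of history to keep in normal mode

        class RollingCapture:
            def __init__(self):
                self.chunks = deque()          # (timestamp, raw packet data)
                self.keep_everything = False   # flipped by the alert signal

            def add(self, data: bytes):
                now = time.time()
                self.chunks.append((now, data))
                if not self.keep_everything:
                    # prune anything older than the rolling window
                    while self.chunks and now - self.chunks[0][0] > WINDOW:
                        self.chunks.popleft()

            def start_incident(self):
                # alert received: stop pruning, so the buffer starts with ~10 minutes of history
                self.keep_everything = True

            def end_incident(self, path: str):
                # recovery received: flush everything to disk and go back to pruning
                with open(path, "wb") as out:
                    for _, data in self.chunks:
                        out.write(data)
                self.chunks.clear()
                self.keep_everything = False

    Because pruning simply stops at alert time, the saved file always begins roughly ten minutes before the alert, so no post-processing or file repair is needed; the trade-off is that the pre-alert window has to fit in memory, or be spilled to small chunk files pruned by the same rule.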

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws. I manually annotated the paws (RF = right front, RH = right hind, LF = left front, LH = left hind). As you can see, there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated.

    My initial thought was to use heuristics to do the sorting, like:

    - There's a ~60-40% ratio in weight bearing between the front and hind paws;
    - The hind paws are generally smaller in surface;
    - The paws are (often) spatially divided in left and right.

    However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like. Based on the answers I received on my question about peak detection within the paw, I'm hoping there are more advanced solutions to sort the paws, especially because the pressure distribution and the progression thereof are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence.

    So I'm looking for a better way to sort the results with their corresponding paw. For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is partially measured (in space or time), can be ignored. They are only useful because they can help in recognizing a pattern, but they won't be analyzed.

    And for anyone interested, I'm keeping a blog with all the updates regarding the project!
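
    The three heuristics listed above are easy to prototype, which at least gives a baseline to compare any clustering approach against. A rough sketch (the impact layout here is invented, not taken from the pickled data):

        import numpy as np

        def label_paws(impacts):
            """impacts: list of dicts with a 'pressure' 2D array and an 'x' plate coordinate."""
            loads = np.array([float(np.sum(i["pressure"])) for i in impacts])
            xs = np.array([float(i["x"]) for i in impacts])
            front = loads > np.median(loads)   # front paws carry roughly 60% of the weight
            left = xs < np.median(xs)          # crude spatial split into left and right
            return [("L" if l else "R") + ("F" if f else "H") for f, l in zip(front, left)]

    The fingerprint idea would replace these thresholds with a comparison of the pressure distributions themselves, for example by correlating each new impact against a running template per paw and assigning it to the best match.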

    Read the article

  • Creating a music catalog in C# and extracting first 30 seconds as soon as the first words are sung

    - by Rad
    I already read a question: Separation of singing voice from music. I don't need that complex audio processing; I only need some detection mechanism that would detect that there is some voice/vocal playing while the music is playing (or not playing). I need to extract the first 30 seconds from the point where a vocalist starts singing along with the full band. See question 2 below.

    I want to create a music catalog using ASP.NET MVC 2, Silverlight clients, and the C#/.NET 4.0 programming language, which would be the front store. On the backend I would also like to create a desktop WPF/Windows application to build the music catalog from already existing music files, most of which have metadata in them: ID3v1, ID3v2.3, ID3v2.4, iTunes MP4, WMA, Vorbis Comments, APE tags, etc. I would possibly like to create a web service that would allow catalog contributors to upload a zipped album and trigger extraction of the metadata and of the music segments described below. I would be happy if I achieve no. 1 below.

    Let's say I have thousands of songs in mp3 (or other formats) grouped in subfolders using some classification (genre, artists, albums, composers, or other groupings). I want to create tables in a DB that would organize songs so they can be searched based on different criteria (year, length, the above classification, or song title, description, etc.), like what the iTunes Store offers its customers. I want to extract metadata from various formats (I will try to get songs in mp3 format, but there may be other popular formats) and allow the music catalog manager to add missing data from either the desktop or the web application. He or other contributors can upload zipped music via an HTML, Silverlight, or WPF upload. Can anybody suggest open source libraries, articles, or code snippets that can do that in an automatic way using .NET and possibly SQL Server?

    My main questions are these. This is an audio processing challenge; I want to extract two segments of music (questions 1 and 2): 1. How do I extract a music segment from 1-2 seconds before a vocal starts singing up to 30 seconds from that point in time? 2. Much more challenging: how do I find repeating segments (one usually recognizes a song by these refrains)? How would I go about creating a list of songs that go great together, like what Genius in iTunes does? Are there any characteristics of music that can be used to match songs? The goal is for people to quickly scan and recognize songs, i.e. associate a melody and words with a title/album, so they can make intelligent decisions like buying a song or creating similar-mood playlists. Thanks, Rad
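
    For question 1, a very rough vocal-onset heuristic can be sketched quickly; the code below (Python/NumPy with invented names, shown only to illustrate the idea rather than the .NET stack mentioned above) flags the first sustained jump in the share of spectral energy between roughly 200 Hz and 4 kHz, where voices tend to sit. It will also fire on other spectral changes, so treat it as a starting point, not a solution:

        import numpy as np

        def first_vocal_onset(samples: np.ndarray, rate: int,
                              frame_len: float = 0.05, ratio_jump: float = 1.5):
            """samples: mono float array in [-1, 1]; returns onset time in seconds, or None."""
            hop = int(rate * frame_len)
            window = np.hanning(hop)
            freqs = np.fft.rfftfreq(hop, 1.0 / rate)
            band = (freqs >= 200) & (freqs <= 4000)
            ratios = []
            for start in range(0, len(samples) - hop, hop):
                spectrum = np.abs(np.fft.rfft(samples[start:start + hop] * window))
                ratios.append(spectrum[band].sum() / (spectrum.sum() + 1e-9))
            if len(ratios) < 21:
                return None
            baseline = np.median(ratios[:20]) + 1e-9   # assumes an instrumental intro
            for i, r in enumerate(ratios):
                if i >= 20 and r > baseline * ratio_jump:
                    return i * frame_len
            return None

    Given an onset time t, the requested clip would simply be the samples from max(0, t - 2) seconds to t + 30 seconds. Question 2 (finding repeating refrains and matching similar songs) is a genuinely hard audio-fingerprinting problem and is not addressed by this sketch.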

    Read the article

  • Moving a unit precisely along a path in x,y coordinates

    - by Adam Eberbach
    I am playing around with a strategy game where squads move around a map. Each turn a certain amount of movement is allocated to a squad, and if the squad has a destination the points are applied each turn until the destination is reached. Actual distance is used, so if a squad moves one position in the x or y direction it uses one point, but moving diagonally takes ~1.4 points. The squad maintains its actual position as floats, which are then rounded to allow drawing the position on the map. The path is described by touching the squad and dragging to the end position, then lifting the pen or finger. (I'm doing this on an iPhone now, but Android/Qt/Windows Mobile would work the same.) As the pointer moves, x, y points are recorded so that the squad gains a list of intermediate destinations on the way to the final destination. I'm finding that the destinations are not evenly spaced but can be further apart depending on the speed of the pointer movement. Following the path is important because obstacles or terrain matter in this game. I'm not trying to remake Flight Control, but that's a similar mechanic. Here's what I've been doing, but it just seems too complicated (pseudocode):

        getDestination() {
            self.nextDestination = remove_from_array(destinations)
            self.gradient = delta y to destination / delta x to destination
            self.angle = atan(self.gradient)
            self.cosAngle = cos(self.angle)
            self.sinAngle = sin(self.angle)
        }

        move() {
            get movement allocation for this turn
            if self.nextDestination not valid
                getNextDestination()
            while (nextDestination valid) && (movement allocation remains) {
                find xStep and yStep using the movement allocation and the sinAngle/cosAngle calculated for the current self.nextDestination
                if current position + xStep crosses the destination
                    find the x movement remaining after self.nextDestination is reached
                    calculate the remaining direct-path movement allocation (xStep remaining / cosAngle)
                    make self.position equal to self.nextDestination
                else
                    apply xStep and yStep to the current position
            }
            round the squad's float coordinates to integer screen coordinates
            draw the squad image on the map
        }

    That's simplified, of course; stuff like sign needs to be tweaked to ensure movement is in the right direction. If trig is the best way to do it, then lookup tables can be used, or maybe it doesn't matter on modern devices like it used to. Suggestions for a better way to do it?

    An update: the iPhone has zero issues with trig while tracking tens of positions and tracks implemented as described above, and it draws in floats anyway. The Bresenham method is more efficient, trig is more precise. If I were to use integer Bresenham, I would want to multiply by ten or so to maintain a little more positional accuracy, to benefit collision/terrain detection.
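
    Since the waypoints are already (x, y) pairs, the per-turn stepping can be done with plain vector math, which avoids the angle/sin/cos bookkeeping entirely. A sketch of that alternative (Python here, just to show the shape of the loop):

        import math

        def advance(position, waypoints, budget):
            """position: [x, y] floats; waypoints: list of (x, y) tuples; budget: movement points this turn."""
            x, y = position
            while waypoints and budget > 0:
                tx, ty = waypoints[0]
                dx, dy = tx - x, ty - y
                dist = math.hypot(dx, dy)
                if dist <= budget:
                    # reach this waypoint and carry the leftover budget to the next one
                    x, y = tx, ty
                    budget -= dist
                    waypoints.pop(0)
                else:
                    # spend the whole budget moving along the normalized direction
                    x += dx / dist * budget
                    y += dy / dist * budget
                    budget = 0
            return [x, y]

    The float position stays authoritative and is only rounded for drawing, which matches what the squad already does; diagonal moves automatically cost ~1.4 points because distance is measured with hypot rather than counted per axis.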

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >