Search Results

Search found 74 results on 3 pages for 'confess'.


  • How to query a CGI-based webserver from an app written in MFC (MSVC 2008) and process the result?

    - by shan23
    Hi, I am exploring the option of querying a web page, which has a CGI script running on its end, with a search string (say, in the form of http://sw.mycompany.com/~tools/cgi-bin/script.cgi?param1=value1&param2=value2&param3=value3) and displaying the result in my app (after due processing, of course). My app is written in MFC C++, and I must confess that I have never attempted anything related to network programming before. Is what I'm trying to do feasible? If so, could anyone point me at the resources I need to look at in order to go about this? Thanks!
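
    For what it's worth, MFC's WinInet wrappers (CInternetSession and friends) handle this kind of HTTP GET with very little code. Below is a minimal, hedged sketch, using the asker's placeholder URL and untested against any real server:

        // Query the CGI script with CInternetSession::OpenURL and collect
        // the response body into a CString for further processing.
        #include <afxinet.h>

        CString QueryCgiScript()
        {
            CString strResult;
            CInternetSession session(_T("MyMfcApp"));   // agent name is arbitrary
            try
            {
                CStdioFile* pFile = session.OpenURL(
                    _T("http://sw.mycompany.com/~tools/cgi-bin/script.cgi?param1=value1&param2=value2&param3=value3"));
                CString strLine;
                while (pFile->ReadString(strLine))       // read the CGI output line by line
                    strResult += strLine + _T("\n");
                pFile->Close();
                delete pFile;
            }
            catch (CInternetException* pEx)
            {
                pEx->Delete();                           // log or report the failure as needed
            }
            session.Close();
            return strResult;
        }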

  • MS Access vs SQL Server and others? Is it worth taking a DB server when less than 2 GB and only 20 users?

    - by asksuperuser
    After my experiment with MS Access vs MySQL, which showed MS Access hugely outperforming MySQL's ODBC inserts by a factor of 1000%, and before doing the same experiment with SQL Server, I searched for other people's results and found this one: http://blog.nkadesign.com/2009/access-vs-sql-server-some-stats-part-1/ which says "As a side note, in this particular test, Access offers much better raw performance than SQL Server. In more complex scenarios it's very likely that Access' performance would degrade more than SQL Server, but it's nice to see that Access isn't a sloth." So is it worth bothering with a DB server when the data is less than 2 GB and there are about 20 users (knowing that MS Access theoretically supports up to 255 concurrent users, though in practice it handles only around a dozen)? Are there any real-world studies that really compare MS Access with other databases in this specific use case? I ask because, professionally speaking, I keep hearing a DB server systematically recommended by people who have never used Access, just because they think a DB server can only perform better in every case, which I used to think myself, I confess.

  • Serialize an object to a string

    - by Vaccano
    I have the following method to save an object to a file:

        // Save an object out to the disk
        public static void SerializeObject<T>(this T toSerialize, String filename)
        {
            XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
            TextWriter textWriter = new StreamWriter(filename);
            xmlSerializer.Serialize(textWriter, toSerialize);
            textWriter.Close();
        }

    I confess I did not write it (I only converted it to an extension method that took a type parameter). Now I need it to give the XML back to me as a string (rather than save it to a file). I am looking into it, but I have not figured it out yet. I thought this might be really easy for someone familiar with these objects. If not, I will figure it out eventually.
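
    For anyone landing here with the same question, the usual approach is to swap the StreamWriter for a StringWriter, which captures the XmlSerializer output in memory. A minimal sketch of that pattern:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        public static class SerializationExtensions
        {
            // Serialize an object to an XML string instead of a file.
            public static string SerializeToString<T>(this T toSerialize)
            {
                XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
                using (StringWriter textWriter = new StringWriter())
                {
                    xmlSerializer.Serialize(textWriter, toSerialize);
                    return textWriter.ToString();   // the serialized XML
                }
            }
        }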

  • git merge, git rebase: neither seems to work; should I delete my GitHub fork and re-fork from the upstream master?

    - by Joan Yin
    I have to confess my GitHub sins. Four months ago I forked an upstream repo and, without knowing much about git and pull requests, did some work on the master branch locally. Later on I realized the mistake, created a new branch, squashed the changes into one, and successfully sent a PR from that branch. The PR was accepted, and I moved on. Now I need to submit another PR, but my master branch is so messed up that when I merge or rebase there are so many conflicts. I probably committed a few more sins this morning; I have been battling this for the whole morning now. So it has come to the point where I want a clean start. Can I delete the GitHub fork and re-fork from the upstream master? What are the correct steps? See also the sketch below.
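
    One hedged answer, assuming the accepted PR already contains everything worth keeping: deleting the fork isn't necessary. You can point your local master at the upstream's and force-push it over the fork's broken master, roughly like this (the upstream URL is a placeholder):

        git remote add upstream https://github.com/UPSTREAM_OWNER/REPO.git
        git fetch upstream
        git checkout master
        git reset --hard upstream/master   # throw away the local mess
        git push --force origin master     # overwrite the fork's master on GitHub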

  • Going Metro

    - by Tony Davis
    When it was announced, I confess I was somewhat surprised by the striking new "Metro" User Interface for Windows 8, based on Swiss typography, Bauhaus design, tiles, touches and gestures, and the new Windows Runtime (WinRT) API on which Metro apps were to be built. It all seemed to have come out of nowhere, like field mushrooms in the night, and seemed quite out of character for a company like Microsoft, which has hung on determinedly for over twenty years to its quaint windowing system. Many were initially puzzled by the lack of support for plug-ins in the "Metro" version of IE10, which ships with Win8, and by the apparent demise of Silverlight, Microsoft's previous 'radical new framework'. Win8 signals the end of the road for Silverlight apps in the browser, but then its importance there has been waning for some time anyway, now that HTML5 has usurped its most compelling use case, streaming video. As Shawn Wildermuth and others have noted, if you're doing enterprise desktop development with Silverlight then nothing much changes immediately, though it seems clear that ultimately Silverlight will die off in favor of a single WPF/XAML framework that supports those technologies that were pioneered on the phones and tablets. There is a mystery here. Is Silverlight dead, or merely repurposed? The more you look at Metro, the more it seems to resemble Silverlight. A lot of the philosophies underpinning Silverlight applications, such as the fundamentally asynchronous nature of the design, have moved wholesale into Metro, along with most of the Microsoft Silverlight dev team. As Simon Cooper points out, "Silverlight developers, already used to all the principles of sandboxing and separation, will have a much easier time writing Metro apps than desktop developers". Metro has certainly given the framework formerly known as Silverlight a new purpose. It has enabled Microsoft to bestow on Windows 8 a new "duality", as both a traditional desktop OS supporting 'legacy' Windows applications, and an OS that supports a new breed of application that can share functionality such as search, that understands, and can react to, the full range of gestures and screen sizes, and that has location-awareness. It's clear that Win8 is developed in the knowledge that the 'desktop computer' will soon be a very large, tilted, touch-screen monitor. Windows owes its new-found versatility to the lessons learned from Windows Phone, but it's developed for the big screen, and with full support for familiar .NET desktop apps as well as the new Metro apps. But the old mouse-driven Windows applications will soon look very passé, just as MS-DOS character-mode applications did in the nineties. Cheers, Tony.

  • Step Away From That Computer! You’re Not Qualified to Use It!

    - by Michael Sorens
    Most things tend to come with warnings and careful instructions these days, but sadly not one of the most ubiquitous appliances of all, your computer. If a chainsaw is missing its instructions, you're well advised not to use it, even though you probably know roughly how it's supposed to work. I confess, there are days when I feel the same way about computers. Long ago, during the renaissance of the computer age, it was possible to know everything about computers. But today, it is challenging to be fully knowledgeable even in one small area, and most people aren't as savvy as they like to think. And, if I may borrow from Edwin Abbott Abbott's classic Flatland, that includes me. And you. Need an example of what I mean? Take a look at almost any recent month's batch of Windows updates. Just two quick questions for you: Do you need all of those updates? Is it safe to install all of those updates? I do software design and development for a living on Windows and the .NET platform, but I will be quite candid: I often have little clue what the heck some of those updates are going to do or why they are needed. So, if you do not know why they are needed or what they do, how do you know if they are safe? Of course, one can sidestep both questions by accepting Microsoft's recommended Windows Update setting of "install updates automatically". That leads you to infer that you need all of them (which is not always the case) and, more significantly, that they are safe. Quite safe. Ah, lest reality intrude upon such a pretty picture! Sadly, there is no such thing as risk-free software installation, and payloads from Windows Update are no exception. Earlier this year, a Windows Secrets Patch Watch article touted this headline: "Keep this troublesome kernel update on hold". It discusses KB 2862330, a security update originally published more than four months earlier, and yet the article still recommends not installing it! Most people simply do not have the time, resources, or interest to go about figuring out which updates to install, postpone, or skip for safety reasons. Windows Secrets Patch Watch is the best service I have encountered for getting advice, but it is still no panacea, and using the service effectively requires a degree of computer literacy that I still think is beyond a good number of people. Which brings us full circle: Step Away From That Computer! You're Not Qualified to Use It!

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: "As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above." -- Udi Dahan, Clarified CQRS. From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter, among many other benefits. I realize that network chatter is less of an issue when you've got highly distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point, sort of like 3NF in databases, and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
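
    To make the granularity contrast concrete, here is a hedged sketch (type names are mine, purely illustrative, not from Udi's article) of a coarse CRUD-style command versus the intent-capturing commands the quote argues for:

        using System;

        // CRUD-style: one coarse command that mirrors the whole entity,
        // so unrelated edits all contend on the same unit of work.
        public class UpdateCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string Address { get; set; }
            public bool IsPreferred { get; set; }
            public string MaritalStatus { get; set; }
            // ...and every other field that might have changed
        }

        // Task-based: each command captures a single user intention.
        public class MakeCustomerPreferredCommand
        {
            public Guid CustomerId { get; set; }
        }

        public class RecordCustomerMoveCommand
        {
            public Guid CustomerId { get; set; }
            public string NewAddress { get; set; }
        }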

  • One Year Oracle SocialChat - The Movie

    - by mprove
    Watch on Vimeo. You've just watched – hopefully – my first short movie. Thank you! Here is a bit of the back-stage story. About six weeks ago, colleagues from SNBC (Social Network and Business Collaboration) announced a Social Use Case Competition. Entrants were expected to submit a video of 2 to 5 minutes' duration on the Social Enterprise (our internal phrase for Enterprise 2.0). Hmm – I had a few vague ideas, but no script, no actors, no experience in film making. Really the best conditions to try something! I chose our weekly SocialChats as my main topic. But if you don't do Danish Dogma cinema, you still need a script. Hence I played around with the SocialChat archive, and all of a sudden a script and even the actors appeared in front of me. The words that you have just seen are weekly topics, slightly abridged and rearranged to form a story. Exciting; next phase: how to get it onto digital celluloid? I have to confess I am still impressed by epic. (Keep in mind, epic was done in 2004.) And my actors – words – call for a typographic style already. The main part was done over a weekend with Apple Keynote, and I even found a wonderful matching soundtrack among my albums: Didge Goes World by Delago. I picked parts of Second Day and Seventh Day. Literally, the rhythm was set, and I "just" had to complete the movie. Tools used, apart from trial and error: Keynote, Pixelmator, GarageBand, iMovie. Finally, I want to mention that I am extremely thankful to BSC Music for granting permission to use the tracks for this short film! Without this sound it would have been just an ordinary slide show. – Internal note: the next SocialChat is on Death by PowerPoint vs. Presentation Zen. CU this Friday, 3pm Greenwich / 7am Pacific.

  • client flips between internal and external IP addresses?

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works. Except: one of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem for the way I have some of the clients' SSH systems configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong. I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and it gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are also showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it. Does anybody out there have any advice? This is driving me crazy...
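
    A hedged diagnostic suggestion (a guess, assuming split DNS is the culprit): when the flip happens, check which DNS server the client actually consulted and what answer it got. If the Mac occasionally falls through to the ISP's resolver, it will resolve the public 216.103.x.x address, and the traffic then hairpins through the router, which can rewrite the apparent source address. On OS X, something like:

        # Which resolvers is the client actually using right now?
        scutil --dns | grep nameserver

        # Compare the answers from the internal server and the ISP's.
        # (www.example.com and both resolver addresses are placeholders.)
        dig www.example.com @192.168.0.1      # internal named
        dig www.example.com @203.0.113.53     # ISP resolver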

  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 virtual server where the installed software and configuration are basically the defaults, plus a Jetty 6 installation which serves a few websites. To keep things simple I didn't install Apache httpd, and used iptables to expose Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination

    I must confess I have a shallow comprehension of how iptables works, in particular of the different kinds of chains. This setup works, but sometimes I have an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages the servlets (they are handled by Jetty), I can't fix the problem by patching my code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty. I've looked around for similar problems with CLOSE_WAIT, and only found cases related to the programmer's own code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
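
    For what it's worth, a hedged first step before suspecting iptables: CLOSE_WAIT means the remote end closed the connection and the local application never called close(), so a NAT REDIRECT rule would not normally produce it by itself. Counting the stuck sockets per port should show whether they really belong to Jetty:

        # Tally sockets by TCP state.
        netstat -tan | awk 'NR > 2 {print $6}' | sort | uniq -c

        # List only CLOSE_WAIT sockets on Jetty's port.
        netstat -tan | grep ':8080 ' | grep CLOSE_WAIT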

  • Which Windows 8 tool should I use to "read" or "upload" my latest Windows 7 backup DVD (is it possible)?

    - by Robert
    I've just installed Windows 8 and hadn't made any changes to my new ecosystem when, as I was managing my new drivers, some mess* occurred, I confess. What I have left is every backup I made under Windows 7 (system images, restore DVDs, monthly up-to-date backups and so on), and I would like to use them with Windows 8. I'm one of those with problems managing the AMD switchable GPU drivers. Now I want to stay with Windows 8 (download version, not a clean install) but with my old personal files. I don't care about program updates; I've got everything of interest, original, on DVDs. Yesterday I tried refreshing Windows 8 once, but it didn't work. Maybe I'll try again tonight. What would you guys do in my place, please? *The mess I am talking about: I disabled my Intel GPU (the only driver left) in the Device Manager tool in Windows 8, and got a black screen on system boot. Cheers, C.C.

  • Principles of Big Data by Jules J Berman, O'Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx A fantastic book! It must become part, if it is not yet, of the fundamentals of Big Data as a field of science. I highly recommend it to those who are into Big Data practice. I confess this book is one of my best reads this year, for a number of reasons: it is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, yes, sometimes die. Not only that: the book is, most importantly, filled with valuable advice and an accurate, even overwhelming, amount of references (meant positively), and the author does not even stop there: there are numerous technical excerpts, links and examples that let you quickly accomplish many daunting tasks, or make you aware of what you need to do as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute for referring to all who face Big Data). Be aware that Jules Berman's background is in medicine; naturally, the book discusses this subject a lot, as I believe it is very dear to the author's heart. This does not make the book any less significant, however; quite the opposite. I trust that if there is an area of science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science. Let's make cancer history! On a personal note, as a database and BI professional, the book helped me to better understand the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered. I must tell you this made me very curious, and tempted, to find out more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go more in depth on the most popular of them. My favorite parts of the book: well, all of them actually, but especially chapter 9, Analysis; it is just very close to my heart, and the real reason is that it let me see what I do with data from a different angle. And then the next, "Special Considerations"; they are just two logical parts. The language of this book is very approachable for all levels, and I had no technical problem reading it in ebook format on my 8" tablet or on a large-screen monitor. If I were asked to say at least something negative, I have to state I had a feeling that the book's first part reads like academic material, though it relaxes the reader as the book progresses. I admit I am impressed with Jules' ability to use several programming languages and OSS tools, bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which seems to be becoming a de facto standard item of knowledge for any modern human being. So grab a copy of this book, read it end to end, and shield yourself from making mistakes at any stage of your Big Data initiative; by the way, this book also helps build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

  • PHP and Ruby: how to leverage both? and, is it worth it?

    - by dukeofgaming
    As you might have noticed from the title, this is not a "PHP or Ruby" or a "PHP vs. Ruby" question. This is a question on how to leverage PHP + Ruby in the same business. I myself am a PHP developer. I love the language because of its convenience, and I especially love the ecosystem of resources that surrounds it: Joomla, Drupal, Wordpress, Symfony2, Doctrine2, etc. However, the language itself can be a little disappointing sometimes. OTOH, Ruby looks like a very beautiful language and, from studying it superficially in several aspects, I could say it is leaner than Python as a language per se. However, from what I've seen, pretty much only RoR is making noise, and I don't like RoR so much (mainly because of its model layer). As Co-CEO and CTO at my company, I'm trying to think outside of the box, since I want to start to focus on the human side of technology and see if it's sane to use both PHP and Ruby. Here are some random thoughts: Ruby folk seem to be generally better programmers than PHP folk (in terms of averages). I know the previous statement is somewhat baloney, because very good and well-architected PHP can be written, but I'd say the Ruby programmer culture is better than PHP's. The thing about Ruby is that it seems better suited for rapid development; I don't really know if this is only the case for RoR, but I do know that there are certain practices (perhaps not so good) like monkey patching that let business needs be satisfied more quickly. From a marketing point of view (yep, sometimes you need to leverage the marketing BS for the sake of your company), Ruby seems better, while PHP carries some stigmas. PHP 5.4 is bringing traits, and that is better/cleaner than mixins; that could really make PHP as lean as Ruby, or more so, for certain stuff. Now, concretely, my questions: Would a PHP programmer want to learn Ruby? (I know I do.) But conversely, would a Ruby programmer want to learn PHP? What kinds of projects or situations would be better suited to Ruby that are not suited to PHP? What is the actual ecosystem of Ruby? Aside from RoR, I have not seen other hyped technologies/frameworks (I've seen RSpec, but I confess to being a total noob on what BDD really consists of and its implications). Supposing there is a certain type of project ideal for Ruby, would there come a moment when it is better to move it to PHP? I know PHP can handle lots of stuff, but I've read that Ruby has its limitations when scaling (or is that RoR? or is that baloney for both?). Finally, and most importantly, would it be sane to maintain projects in two languages, or is that just stupid? As I said, it looks like Ruby is leaner in the short term, and that can make a project happen and succeed, but I'm not so sure about that in the long run. I'm looking for insights mainly from people who know well the strengths and weaknesses of the languages (preferably both of them) and Ruby's ecosystem in real practice, meaning frameworks and applications like the ones I quoted from PHP's ecosystem. Best regards and thanks for your time.

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review.aspx The Building a Data Mart with Pentaho Data Integration video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (I even heard the name of Ralph Kimball in the video). Indeed, a viewer should be familiar with concepts such as the Star Schema, Slowly Changing Dimension types, etc., so before watching this course I suggest skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph Kimball's excellent The Data Warehouse Toolkit. By the way, the author expands beyond Pentaho alone to MySQL and MonetDB, which is real icing on the cake! Indeed, I would even suggest the course should be named 'Building a Data Warehouse with Pentaho'. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the vi editor and the Bash command shell, but it seems that similar requirements would also apply on the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used in this course several times; it seems to be no more complex than, say, MySQL, but based on what I have read it is very well suited to fast querying of big volumes of data thanks to its columnstore (vertical data storage). I don't see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: Pentaho: the free Community Edition must be more than anyone needs to learn it, or even to go into a POC. MonetDB can be downloaded (it exists for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left). The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN. To create or edit database entities and/or schema otherwise, one can use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net. Next, I must confess Diethard is very knowledgeable in what he does and beyond. There is some accent to be heard, especially if one's mother tongue is English, but I got over it in a few chapters. I liked the rate at which the material is presented; it made me feel I paid for every second. Eventually, my impressions are: Pentaho is an awesome ETL offering, very much worth learning (I am an ETL fan and a heavy user of SSIS); MonetDB is nice, and it tickles my fancy to know it better; Data Warehousing, despite all the Big Data tool offerings (Hive, Sqoop, Pig on Hadoop), still rocks with the traditional tools; chapters 2 to 6 were the most fun for me, with chapter 8 being the most difficult. In closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a "supposed to be done yesterday" type of project. It is for a beginner-to-intermediate level ETL/DW developer, and one would still need to learn more on Data Warehousing and Pentaho; for that I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

  • What Counts For a DBA – Decisions

    - by Louis Davidson
    It's Friday afternoon, and the lead DBA, a very talented guy, is getting ready to head out for two well-earned weeks of vacation with his family, when this error message pops up in his inbox: Msg 211, Level 23, State 51, Line 1. Possible schema corruption. Run DBCC CHECKCATALOG. His heart sinks. It's ten…no, eight…minutes till it's time to walk out the door. He glances around at his coworkers, competent to handle many problems, but probably not up to the challenge of fixing possible database corruption. What does he do? After a few agonizing moments of indecision, he clicks shut his laptop. He'll just wait and see. It was unlikely to come to anything; after all, it did say "possible" schema corruption, not definite. In that moment, his fate was sealed. The start of the solution to the problem (run DBCC CHECKCATALOG) had been right there in the error message. Had he done this, or at least taken two of those eight minutes to delegate the task to a coworker, then he wouldn't have ended up spending two-thirds of an idyllic vacation (for the rest of the family, at least) dealing with a problem that got consistently worse as the weekend progressed, until the entire system was down. When I told this story to a friend of mine, an opera fan, he smiled and said it described the basic plotline of almost every opera or 'Greek tragedy' ever written. The particular joy in opera, he told me, isn't the warbly-voiced leading ladies, or the plump middle-aged romantic leads, or even the music. No, what packs the opera houses in Italy is the drama of characters who, by the very nature of their life experiences and emotional baggage, make all sorts of bad choices when faced with ordinary decisions, and so move inexorably to their fate. The audience is gripped by the spectacle of exotic characters doomed by their inability to see the obvious. I confess, my personal experience with opera is limited to Bugs Bunny in "What's Opera, Doc?" (Elmer Fudd is a great example of a bad decision maker, if ever one existed), but I was struck by my friend's analogy. If all the DBA cubicles were a stage, I think we would hear many similarly tragic tales, played out to music: "Error handling? We write our code to never experience errors, so nah…" "Backups failed today, but it's okay, we'll back up tomorrow (we'll back up tomorrow)" And similarly, they would leave their audience gasping, not necessarily at the beauty of the music, or the poetry of the lyrics, but at the inevitable, grisly fate of the protagonists. If you choose not to use proper error handling, or if you choose to skip a backup because, hey, you haven't had a server crash in 10 years, then inevitably, in that moment you expected to be enjoying a vacation, or a football game, with your family and friends, you will instead be sitting in front of a computer screen, paying for your poor choices. Tragedies are very much part of IT. Most of a DBA's day-to-day work has limited potential to wreak havoc: paperwork, timesheets, random anonymous threats to developers, routine maintenance and whatnot. However, just occasionally, you, as a DBA, will face one of those decisions that really matter, and which have the possibility to greatly affect your future and the future of your users' data. Make those decisions count, and you'll avoid the tragic fate of many an operatic hero or villain.

  • Graph colouring algorithm: typical scheduling problem

    - by newba
    Hi, I'm practicing coding problems (like those on UVa) and I have this one, in which, given a set of n exams and k students enrolled in the exams, I have to find whether it is possible to schedule all exams in two time slots.

    Input: several test cases. Each one starts with a line containing n (1 < n < 200), the number of different examinations to be scheduled. The second line has the number of cases k in which there is at least one student enrolled in two examinations. Then k lines follow, each containing two numbers that specify the pair of examinations for each case above. (An input with n = 0 means end of input and is not to be processed.)

    Output: you have to decide whether the examination plan is possible or not with two time slots.

    Example input:

        3
        3
        0 1
        1 2
        2 0
        9
        8
        0 1
        0 2
        0 3
        0 4
        0 5
        0 6
        0 7
        0 8
        0

    Output:

        NOT POSSIBLE.
        POSSIBLE.

    I think the general approach is graph colouring, but I'm really a newb, and I may confess I had some trouble understanding the problem. Anyway, I'm trying to do it and then submit it. Could someone please help me with some code for this problem? I will have to handle and understand this algorithm now in order to use it later, over and over. I prefer C or C++, but if you want, Java is fine to me ;) Thanks in advance
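
    Since each pair of conflicting exams just needs different time slots, the question reduces to whether the conflict graph is bipartite, i.e. 2-colorable, which a BFS can check. Here is a minimal C++ sketch along those lines (untested against the actual judge, so treat it as a starting point):

        #include <cstdio>
        #include <queue>
        #include <vector>

        int main() {
            int n;
            while (scanf("%d", &n) == 1 && n != 0) {
                int k;
                scanf("%d", &k);
                std::vector<std::vector<int> > adj(n);   // adjacency list of conflicts
                for (int i = 0; i < k; ++i) {
                    int a, b;
                    scanf("%d %d", &a, &b);
                    adj[a].push_back(b);
                    adj[b].push_back(a);
                }
                std::vector<int> color(n, -1);           // -1 = no slot assigned yet
                bool ok = true;
                for (int s = 0; s < n && ok; ++s) {      // handle every component
                    if (color[s] != -1) continue;
                    color[s] = 0;
                    std::queue<int> q;
                    q.push(s);
                    while (!q.empty() && ok) {
                        int u = q.front(); q.pop();
                        for (int j = 0; j < (int)adj[u].size(); ++j) {
                            int v = adj[u][j];
                            if (color[v] == -1) {
                                color[v] = 1 - color[u]; // opposite time slot
                                q.push(v);
                            } else if (color[v] == color[u]) {
                                ok = false;              // odd cycle: not 2-colorable
                                break;
                            }
                        }
                    }
                }
                puts(ok ? "POSSIBLE." : "NOT POSSIBLE.");
            }
            return 0;
        }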

  • jqModal/JQuery problem, div not updating with new content?

    - by echoesofspring
    I'm hoping someone can point a relative jQuery/jqModal newbie in the right direction for debugging this error. I'm loading an HTML fragment into a div and then using jqModal to display that div as a modal dialog. The problem is that the div is displayed, but not with my updated HTML. I'm showing my jqModal dialog in the response from a jQuery call; function foo is called from an onclick event:

        function foo(url) {
            $.ajax({
                type: "GET",
                url: url,
                success: function(msg) {
                    $('#ajaxmodal').html(msg);
                    $('#ajaxmodal').jqmShow();
                }
            });
        }

    ajaxmodal is a simple div. Initially I thought the problem must be in the HTML snippet (msg) I'm passing to the callback, but I don't think that's it: I get the error (see below) even when I comment out the $('#ajaxmodal').html(msg) line or pass it hard-coded HTML. I think I have jqModal configured correctly; other calls using our ajaxmodal div work correctly, and I'm able to display the modal, update the content based on the server response, etc. When I try to debug in Firebug, I get the following error after the call to .jqmShow(), from line 64 of the jquery.jqModal.js file: "$(':input:visible',h.w)[0] is undefined" in the line

        f=function(h){try{$(':input:visible',h.w)[0].focus();}catch(_){}}

    I have seen the error on occasion in other places, when it seemed maybe the page hadn't loaded yet, and I confess I'm confused about that, since we've wrapped our jqModal selectors in a $(document).ready() call. So maybe I have a larger issue that this call just happens to trigger? When I step through this in Firefox, h.w[0] seems OK; it references our '#ajaxmodal' div. Thanks in advance for any suggestions in tracking this down.
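
    One hedged thing to double-check (a guess, not a verified fix): jqmShow() assumes the element was initialized as a jqModal dialog with jqm() once the DOM is ready, and a dialog containing no visible :input elements can also trip that focus helper. A minimal setup for comparison:

        $(document).ready(function() {
            // Initialize #ajaxmodal once, up front; jqmShow() may then be
            // called on it later from the ajax success callback.
            $('#ajaxmodal').jqm({ modal: true });
        });

        function foo(url) {
            $.ajax({
                type: "GET",
                url: url,
                success: function(msg) {
                    $('#ajaxmodal').html(msg).jqmShow();
                }
            });
        }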

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system, whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        --   Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivation in concept behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it with PVCS, VSS and Subversion (Subversion keyword substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.
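
    For reference, a hedged sketch of how keyword expansion is switched on in two of the systems mentioned. Subversion enables it per file via a property (and deliberately supports only compact keywords such as $Id$ and $Revision$, not an ever-growing $Log$ block), while Git's nearest analogue is the "ident" attribute:

        # Subversion: expand $Id$ and $Revision$ in this file from now on.
        svn propset svn:keywords "Id Revision" Sleep.bdy
        svn commit -m "Enable keyword expansion" Sleep.bdy

        # Git: expand $Id$ to the blob's SHA-1 on checkout.
        echo "*.bdy ident" >> .gitattributes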

  • Core Data - Entity Relationships Not Working as expected

    - by slimms
    I have set up my data model in Xcode like so:

        EntityA
            AttA1
            AttA2
        EntityB
            AttB1
            AttB2
            AttB3

    I then set up the relationships:

        EntityA:  Name: rlpToEntityB,  Destination: EntityB,  Inverse: rlpToEntityA,  To-Many: checked
        EntityB:  Name: rlpToEntityA,  Destination: EntityA,  Inverse: rlpToEntityB,  To-Many: unchecked

    i.e. a relationship between the two where each EntityA can have many EntityB's. It is my understanding that if I fetch a subset of EntityB's, I can then retrieve the values for the related EntityA's. I have this working so that I can retrieve the EntityB values using:

        NSManagedObject *objMO = [fetchedResultsController objectAtIndexPath:indexPath];
        strValueFromEntityB = [objMO valueForKey:@"AttB1"];

    However, if I try to retrieve a related value from EntityA by doing the following:

        strValueFromEntityA = [objMO valueForKey:@"AttA1"];

    I get the error "The entity EntityB is not key value coding-compliant for the key AttA1". Not surprisingly, I suppose, if I switch things around to fetch from EntityA, I cannot access attributes of EntityB. So it appears the defined relationships are being ignored. Can anyone spot what I am doing wrong? I confess I'm very new to iPhone programming, and especially to Core Data, so please go easy on me and provide verbose explanations, or point me in the direction of a specific resource. I have downloaded the Apple sample apps (Core Data Books, Top Songs and Recipes) but I still can't work this out. Thanks in advance, Nev.
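
    For what it's worth, a likely explanation (a sketch against the model as described, untested): the related entity's attributes are not flattened onto the EntityB object, so you have to traverse the rlpToEntityA relationship first, either explicitly or with a key path:

        // Follow the to-one relationship from the EntityB object to its
        // EntityA, then read the attribute off the related object.
        NSManagedObject *objMO = [fetchedResultsController objectAtIndexPath:indexPath];
        NSManagedObject *related = [objMO valueForKey:@"rlpToEntityA"];
        NSString *strValueFromEntityA = [related valueForKey:@"AttA1"];

        // Equivalent one-liner using a key path:
        NSString *sameValue = [objMO valueForKeyPath:@"rlpToEntityA.AttA1"];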

  • Mercurial Hg Clone fails on C# project with GUID

    - by AnneTheAgile
    UPDATE: In trying to replicate this problem one more time to answer your questions, I could not! I can only conclude that my initial setup of Mercurial was problematic, and/or possibly that I was trying to check in a build that had failed compilation before the check-in. Sigh! Thank you so very much for your help. I gave credit for the help on how to do a script; I need to try that for general purposes. Hi all, I hope you can help me :). I am trying to see if Mercurial would be a good DVCS for my project at work, and I'm surely a newbie to many things. We have a fairly large codebase in C# (.NET 3.0, not 3.5; Windows XP) and it utilizes the GUID feature. I confess to knowing little about how or why we use the GUID, but I do know that I cannot touch it. So, when I try hg clone, it fails unless I change the GUID in the cloned directory (i.e. create a new GUID in Visual Studio and then paste that new GUID to replace the old one). To me, this completely defeats the purpose and utility of quick, easy clones. It also makes difficult all the many workflows that require multiple clones. Is there a workaround, or is there something I'm doing wrong? How can I simplify and/or remove this problem? Would Bazaar make this easier? Thank you!

  • Help understanding .NET delegates, events, and eventhandlers

    - by Seth Spearman
    Hello, in the last couple of days I asked a couple of questions about delegates HERE and HERE. I confess... I don't really understand delegates. And I REALLY REALLY REALLY want to understand and master them. (I can define them: type-safe function pointers. But since I have little experience with C-type languages, that is not really helpful.) Can anyone recommend some online resource(s) that will explain delegates in a way that presumes nothing? This is one of those moments where I suspect that VB actually handicaps me, because it does some wiring for me behind the scenes. The ideal resource would just explain what delegates are, without reference to anything else (like events and eventhandlers), would show me how everything is wired up, and would explain (as I just learned) that delegates are types, and what makes them unique as a type (perhaps using a little ildasm magic). That foundation would then expand to explain how delegates are related to events and eventhandlers, which would need a pretty good explanation in their own right. Finally, this resource could tie it all together using real examples, and explain what wiring DOES happen automatically by the compiler, how to use them, etc. And, oh yeah, when you should and should not use delegates; in other words, the downsides of and alternatives to using delegates. What say ye? Can any of you point me to resource(s) that can help me begin my journey to mastery? EDIT: One last thing. The ideal resource will explain how you can and cannot use delegates in an interface declaration. That is something that really tripped me up. Thanks for your help. Seth
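
    Not a resource list, but for readers who want the ten-line version first, here is a minimal C# sketch (all names illustrative) showing the three layers the question asks about: a delegate type, an event built on it, and a handler wired up by hand:

        using System;

        // A delegate declaration creates a TYPE: any method with this
        // signature can be stored in a variable (or event) of this type.
        public delegate void TemperatureChangedHandler(double newTemp);

        public class Thermostat
        {
            // An event is a delegate member that outside code can only += / -=.
            public event TemperatureChangedHandler TemperatureChanged;

            public void SetTemperature(double t)
            {
                // Raising the event invokes every method currently attached.
                if (TemperatureChanged != null)
                    TemperatureChanged(t);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                Thermostat thermostat = new Thermostat();
                // Wire a handler to the event; the compiler converts the
                // lambda into a TemperatureChangedHandler instance.
                thermostat.TemperatureChanged += temp =>
                    Console.WriteLine("Now " + temp + " degrees");
                thermostat.SetTemperature(21.5);
            }
        }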

  • How to exit grid with ctrl-TAB when grid is on a tabpage (onkeydown works when grid not on tabpage)

    - by Charles Hankey
    WinForms, .NET 3.5, UltraWinGrid 9.2. In my subclass of UltraWinGrid.UltraGrid:

        Protected Overrides Sub OnKeyDown(ByVal e As System.Windows.Forms.KeyEventArgs)
            If e.KeyCode = Windows.Forms.Keys.Tab AndAlso e.Control = True Then
                SetFocusToNextControl(True)
            End If
            MyBase.OnKeyDown(e)
        End Sub

    This works fine. But when the grid is dropped on a TabControl tab page, Ctrl+Tab looks very different to the sub above: e.KeyCode is seen as ControlKey {17}. I realize that by default Ctrl+Tab moves between tab pages; I need to override this behavior. My thought is that I probably need a subclass of the TabControl which will pass the key combo through just as the form does, but I confess to being clueless as to how to accomplish that. I tried to override the OnKeyDown of a TabControl subclass and just issue a Return, with no base call to OnKeyDown, if the Ctrl+Tab combo was pressed, but it too seemed to see the e.KeyCode as ControlKey. FWIW, I tried a different combination like Ctrl+E and got pretty much the same result, with focus disappearing from the grid but not going anywhere I could detect; the sub still saw the key as ControlKey. Oddly, Ctrl+X, Ctrl+A etc. all work in the grid, and a Ctrl+Delete combo I put in the subclass for deleting a row works fine. Once again: grid directly on the form, and it all works. I'm definitely over my head on this one. Guidance much appreciated. VB or C# fine. TIA
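
    A hedged suggestion (untested, and in C#, which the asker said is fine): command keys like Ctrl+Tab are routed through ProcessCmdKey before KeyDown ever fires, which is likely why OnKeyDown sees only ControlKey. Intercepting the combination in a TabControl subclass should keep the built-in page switching out of the way:

        using System.Windows.Forms;

        public class PassThroughTabControl : TabControl
        {
            protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
            {
                if (keyData == (Keys.Control | Keys.Tab))
                {
                    // Move focus ourselves instead of letting the TabControl
                    // switch pages; adjust to the desired behavior.
                    Form form = FindForm();
                    Control start = (form != null && form.ActiveControl != null)
                                        ? form.ActiveControl : this;
                    SelectNextControl(start, true, true, true, true);
                    return true;   // handled: suppress the page switch
                }
                return base.ProcessCmdKey(ref msg, keyData);
            }
        }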

  • Intellisense for custom config section problem with namespaces

    - by Quick Joe Smith
    I have just rolled a custom configuration section, created an accompanying schema document for IntelliSense, and added it to the Web.config's Schemas property, as per Michael Stum's answer to another similar question. Unfortunately, and possibly because I created the XSD by hand with limited knowledge, the IntelliSense relies on an xmlns attribute pointing to my XSD file's namespace being present in the custom config element. However, when running the project I get an "Unrecognized attribute 'xmlns'. Note that attribute names are case-sensitive" error. I could probably just modify my XSD file to define the xmlns attribute for that element; however, I am wondering if this is just a band-aid fix for a larger problem. I must confess I don't have a very good understanding of XML namespaces, so this might be an opportunity to set me straight on a few things. Here are the attributes for my XSD file's root xs:schema element:

        <xs:schema id="awesomeConfig"
                   targetNamespace="http://awesome.com/schemas"
                   xmlns="http://awesome.com/schemas"
                   elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
            ...
        </xs:schema>

    And on creating the element in the Web.config file, Visual Studio 2008 automatically appends:

        <awesomeConfig xmlns="http://awesome.com/schemas"></awesomeConfig>

    So have I misunderstood the meaning of the xs:schema attributes at all, or is the proper solution as simple as it seems?
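
    For what it's worth, a commonly suggested workaround (a sketch from memory, worth verifying against your section class): keep the xmlns attribute for IntelliSense's sake, and have the section swallow it at runtime by overriding ConfigurationElement's unrecognized-attribute hook:

        using System.Configuration;

        public class AwesomeConfigSection : ConfigurationSection
        {
            // Called for any attribute without a matching property; returning
            // true means "handled", so xmlns no longer throws at runtime.
            protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
            {
                if (name == "xmlns")
                    return true;
                return base.OnDeserializeUnrecognizedAttribute(name, value);
            }
        }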
