Search Results

Search found 922 results on 37 pages for 'patrick klingemann'.

Page 5/37 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How to install Ubiquity into a Live CD installation image?

    - by Patrick L
    I am trying to create a small Ubuntu installation ISO image. I am using a tool called Ubuntu-Builder. To make the final ISO as small as possible, I have decided to use Ubuntu Mini Remix. It is a small Live CD without a GUI, and it does not come with any installer software like Ubiquity. I want to embed installer software into the ISO image so that users can install it to the hard disk. In Ubuntu-Builder, I have tried the following: install the LXDE desktop, then install Ubiquity, but the final ISO boots into the command line; install the Openbox desktop, then install Ubiquity, but the final ISO boots into the command line; install no desktop environment and install Ubiquity directly, but the final ISO still boots into the command line. After booting from the ISO, I checked the installed software in the OS. Ubiquity appears to be installed, but it never shows up when I boot the ISO image. Does anyone know how to install Ubiquity into a Live CD ISO image, or of a text-mode installer that can replace Ubiquity?

    Read the article

  • How I might think like a hacker so that I can anticipate security vulnerabilities in .NET or Java before a hacker hands me my hat [closed]

    - by Matthew Patrick Cashatt
    Premise: I make a living developing web-based applications for all form factors (mobile, tablet, laptop, etc.). I make heavy use of SOA, and send and receive most data as JSON objects. Although most of my work is completed on the .NET or Java stacks, I am also recently delving into Node.js. This new stack has got me thinking that I know reasonably well how to secure applications using the known facilities of .NET and Java, but I am woefully ignorant when it comes to best practices or, more importantly, the driving motivation behind those best practices. As I gain more prominent clientele, I need to be able to assure them that their applications are secure and, in order to do that, I feel that I should learn to think like a malevolent hacker. 1) What motivates a malevolent hacker: what is their prime mover, and what is it that they are most after? Ultimately the answer is money or notoriety, I am sure, but I think it would be good to understand the nuanced motivators that lead to those ends: credit card numbers, damning information, corporate espionage, shutting down a highly visible site, etc. 2) As an extension of question #1, but more specific: what are the things most likely to be sought out by a hacker in almost any application? Passwords? Financial info? Profile data that will gain them access to other applications a user has joined? Let me be clear here: this is not judgement for or against the aforementioned motivations, because that is not the goal of this post. I simply want to know what motivates a hacker regardless of our individual judgement. 3) What are some heuristics followed to accomplish hacker goals? Specific processes would be great to know; however, in order to think like a hacker, I would really value your comments on the broader heuristics followed. For example: "A hacker always looks first for the low-hanging fruit such as HTTP spoofing," or "In the absence of a CAPTCHA or other deterrent, a hacker will likely run a cracking script against a login prompt and then go from there," or possibly "A hacker will try to attack a site via browser Foo first, as it is known for vulnerability Bar." 4) What are the most common hacks employed when following those heuristics? Specifics here: HTTP spoofing, password cracking, SQL injection, etc. Disclaimer: I am not a hacker, nor am I judging hackers (heck, I even respect their ingenuity). I simply want to learn how I might think like a hacker so that I may begin to anticipate vulnerabilities myself, rather than waiting for .NET or Java to hand me a way to defend against them after the fact.
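    To make the last item concrete: of the attacks listed, SQL injection is usually both the cheapest for an attacker to attempt and the cheapest to defend against in .NET. A minimal ADO.NET sketch of the standard defence, a parameterized query; the Users table, column name and connection string are invented for illustration and are not from the question:

        using System.Data.SqlClient;

        static class LoginChecks
        {
            // The user-supplied value travels as a parameter, never concatenated
            // into the SQL text, so input such as "' OR 1=1 --" is treated as
            // data rather than as part of the query.
            public static bool UserExists(string userName)
            {
                const string connectionString =
                    "Server=.;Database=AppDb;Integrated Security=true"; // illustrative only
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT COUNT(*) FROM Users WHERE UserName = @userName", connection))
                {
                    command.Parameters.AddWithValue("@userName", userName);
                    connection.Open();
                    return (int)command.ExecuteScalar() > 0;
                }
            }
        }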

    Read the article

  • SharePoint 2010 Launch

    - by Patrick Olurotimi Ige
    Some great news for SharePoint developers, architects, consultants, etc.: May 12, 2010 is the official release date for SharePoint 2010 and Office 2010. Microsoft has also announced its intent to RTM (release to manufacturing) in April 2010. Read more here

    Read the article

  • rsnapshot for remote backups...

    - by Patrick
    I want to use rsnapshot to make backups from my production server to a remote backup server. I should install rsnapshot on the remote backup server rather than on the production one, right? That is, rsnapshot will pull the files to back up from the production server and store them locally on the backup server? I've also just realized that I don't have sudo privileges on the backup server. Does this mean I cannot use rsnapshot for remote backups? Thanks

    Read the article

  • Is it just me, or is this a baffling tech interview question?

    - by Matthew Patrick Cashatt
    Background I was just asked in a tech interview to write an algorithm to traverse an "object" (notice the quotes) where A is equal to B and B is equal to C and A is equal to C. That's it. That is all the information I was given. I asked the interviewer what the goal was but apparently there wasn't one, just "traverse" the "object". I don't know about anyone else, but this seems like a silly question to me. I asked again, "am I searching for a value?". Nope. Just "traverse" it. Why would I ever want to endlessly loop through this "object"?? To melt my processor maybe?? The answer according to the interviewer was that I should have written a recursive function. OK, so why not simply ask me to write a recursive function? And who would write a recursive function that never ends? My question: Is this a valid question to the rest of you and, if so, can you provide a hint as to what I might be missing? Perhaps I am thinking too hard about solving real world problems. I have been successfully coding for a long time but this tech interview process makes me feel like I don't know anything. Final Answer: CLOWN TRAVERSAL!!! (See @Matt's answer below) Thanks! Matt

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in hopes of making clearer what I am after: I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC program, it goes to production. Recently, however, we had a bug in production that required an immediate fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA the code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?

    Read the article

  • Where to put code documentation?

    - by Patrick
    I am currently using two systems to write code documentation (I am using C++): documentation about methods and class members is added next to the code in the Doxygen format, and Doxygen is run on the sources on a server so the output can be seen in a web browser; overview pages (describing a set of classes, the structure of the application, example code, ...) are added to a wiki. I personally think that this approach is easy because the documentation about members and classes is really close to the code, while the overview pages are really easy to edit in the wiki (and it's also easy to add images, tables, ...). A web browser lets you view both sets of documentation. My co-worker now suggests putting everything in Doxygen, because we can then create one big help file with everything in it (using either Microsoft's HTML Help Workshop or Qt Assistant). My concern is that editing Doxygen-style documentation is much harder (compared to the wiki), especially when you want to add tables, images, ... (or is there a 'preview' tool for Doxygen that doesn't require you to generate the documentation before you can see the result?). What do big open-source (or closed-source) projects use to write their code documentation? Do they also split this up between Doxygen-style and a wiki? Or do they use another system? What is the most appropriate way to expose the documentation: via a web server/browser, or via a big (several hundred MB) help file? Which approach do you take when writing code documentation?
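    For reference, the "documentation next to the code" half of that setup is just specially formatted comments that Doxygen picks up, and it recognises the same commands across the C-family languages. A minimal sketch of a documented member, written here in C# with invented names purely for illustration:

        /// \brief Minimal example of Doxygen-style member documentation.
        public class AccountService
        {
            /// \brief Transfers an amount between two accounts.
            /// \param fromAccountId Identifier of the account to debit.
            /// \param toAccountId   Identifier of the account to credit.
            /// \param amount        Amount to transfer, in the account's currency.
            /// \return True if the transfer succeeded; otherwise false.
            public bool Transfer(int fromAccountId, int toAccountId, decimal amount)
            {
                // The implementation is beside the point; Doxygen turns the comment
                // block above into the member's page in the generated output.
                return amount > 0;
            }
        }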

    Read the article

  • I can't get Gutenprint to install with 13.10

    - by Patrick
    Canon Pixma iP4200 printer... the install-printer dialog offers Gutenprint drivers, then stalls out when I try to install them. I ran alien to convert Canon's Linux drivers for the printer and got this message:
        patrickxxxAspire-5253:~/Downloads$ sudo alien cnijfilter-common-2.60-4.src.rpm
        cnijfilter-common-2.60-4.src.rpm is for architecture i386; the package cannot be built on this system
    What am I to do?

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF.  But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct?  Well, I was reading my monthly MSDN magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests.  So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF?  Let me outline and show a few things below.  I plan on releasing this code in a short while; I just wanted to preview what I was thinking. Before we begin… First, if you have not read the article in MSDN, here is the link to the article where I found my inspiration.  It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the “Steps” logic to create your own tests. Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here.  It basically takes the power of XAF and the beauty of your application and allows you to create text-based files that execute automated commands within your application. Why would we do this?  Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest the business rules from.  You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls located here.  The basic syntax is Given X, When Y, Then Z.  For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen.  Pretty easy syntax to follow. Finally, we will need to download and install SpecFlow.  You can find it on their website here.  Once you have this installed, let’s write our first test. Let’s get started… So where to start?  Create a new testing project within your solution.  I typically name it using a convention similar to XAF's: your project name plus .FunctionalTests (e.g. AlbumManager.FunctionalTests).  Remove the basic test that is created for you.  We will not use the default test but rather create our own SpecFlow “Feature” files.  Add a new item to your project and select the SpecFlow Feature file under C#.  Name your feature file, as you do your class files, after the test it performs. Now you can crack open your new feature file and write the actual test.  Make sure to have your Ninja Scrolls from above, as they provide valuable guidance on how to write your test syntax.  In the test below you can see how I defined the documentation in the Feature section.  This is strictly for readability and does not affect the test.  The next section is the Scenario Outline, which is considered a test template.  You can see the brackets <> around the fields that will be filled in for each test.  So in the example below you can see Given I am starting a new test and the application is open.  This means I want a new EasyTest file and the Windows application generated by XAF is open.  Next, When I am at the Albums screen tells XAF to navigate to the Albums list view.  And I click the New:Album button tells XAF to click the New button on the list grid.  And I enter the following information tells XAF which fields to complete with the mapped values.  And I click the Save and Close button causes the record to be saved and the detail form to be closed.  
Then I verify results tests the input data against what is visible in the grid to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test.  This way you can use the same test to make multiple passes with different data. Almost there.  Now we must save the feature file and the BDD tests will be written using standard unit test syntax.  This is all handled for you by SpecFlow so just save the file.  What you will see in your Test List Editor is a unit test for each of the above scenarios you just built. You can now use standard unit testing frameworks to execute the test as you desire.  As you would expect then, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time. How does it work? What we have done is to intercept the testing logic at runtime to interpret the SpecFlow syntax into EasyTest syntax.  This is the basic StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override and make your own rules as you see fit for your project.  Follow the MSDN magazine above to start your own.  You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves your information from the feature file into an EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds the test is passed.  If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review. Again we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready with all details about usage and setup. Thanks
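    Since the post refers to step definitions without showing a complete binding, here is a minimal sketch of what a SpecFlow binding class for steps like the ones above can look like. The step text and the recorded commands are illustrative only; they are not the actual Liekhus implementation, which translates the steps into EasyTest scripts and executes them.

        using System.Collections.Generic;
        using TechTalk.SpecFlow;

        [Binding]
        public class AlbumSteps
        {
            // Collected step descriptions; a real implementation would translate
            // these into EasyTest commands, run the generated script, and parse
            // the XML results to pass or fail the unit test.
            private readonly List<string> _recordedSteps = new List<string>();

            [Given(@"I am starting a new test and the application is open")]
            public void GivenIAmStartingANewTestAndTheApplicationIsOpen()
            {
                _recordedSteps.Add("start a new EasyTest script and open the application");
            }

            [When(@"I am at the (.*) screen")]
            public void WhenIAmAtTheScreen(string screenName)
            {
                _recordedSteps.Add("navigate to the " + screenName + " list view");
            }

            [When(@"I click the (.*) button")]
            public void WhenIClickTheButton(string buttonName)
            {
                _recordedSteps.Add("click the " + buttonName + " button");
            }

            [Then(@"I verify results")]
            public void ThenIVerifyResults()
            {
                // Here the EasyTest file would be executed and its results checked;
                // a failure would surface as a failed test with the captured screenshot.
            }
        }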

    Read the article

  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics, but also sometimes contains technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions: First, to me it seems more logical to have a separate technical backlog, where developers themselves can add pure technical items, like: we could improve performance in this method, this class lacks some technical documentation, ... With one backlog, all developers always have to go through the product owner to have their topics added to the backlog, which seems like additional, unnecessary work for the product owner. Second, if you have a product owner who only focuses on the pure-functional items, the pure-technical items (like missing technical documentation, code that erodes and should be refactored, classes that always give problems during debugging because they don't have a stable foundation and should be refactored, ...) always end up at the end of the list because "they don't serve the customer directly". By having a separate technical backlog, and time reserved in every sprint for these pure technical items, we can improve the applications functionally, but also keep them healthy inside. What is the best approach? One backlog or two?

    Read the article

  • Stupid Geek Tricks: 6 Ways to Open Windows Task Manager

    - by Patrick Bisch
    Bringing up Windows Task Manager is not much of a task itself, but when a virus disables Ctrl+Alt+Del and takes it hostage, how else are you going to open task manager? Or maybe you’re just looking for some diversity in your life, so here are 6 different ways to open Windows Task Manager.

    Read the article

  • Automating the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?
        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"
    1) First of all, is this a good approach? 2) It is not clear to me how to specify the repository and the working copy with svn. 3) How can I run svn only when the mysqldump is done, and not before, to avoid conflicts? Any other tips? Thanks

    Read the article

  • coding after a couple beers

    - by Patrick
    Sometimes after work I'll come home and have a beer or two, and I've found that once I have a beer in me, my desire to code drops precipitously. I'm not talking about getting hammered or anything, but I can't seem to get up the gumption to do any coding. I'm still fine to do other things, e.g. paying bills, playing games, reading; it's just coding. I know that some people prefer to code with some beer in them. Is this normal, or do I need to practice coding under the influence so that if the need ever arises, I'm ready for it? (I will only be coding on pet projects, nothing serious while CUI.)

    Read the article

  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    // Upload using some wrapper for an ORM and
                    someInterface.Upload(meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
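    For what it's worth, the "Worker" class that takes the needed interfaces as constructor parameters might look something like the sketch below. The interface and class names are invented for illustration; they are not part of DbAmp or the SFDC APIs.

        using System.Collections.Generic;

        // Illustrative abstractions only; the real project would define its own.
        public interface IDocumentQueue
        {
            IEnumerable<string> GetFiles(string filterPattern);
        }

        public interface IContentUploader
        {
            void Upload(string filePath, IDictionary<string, string> metadata);
        }

        public class EtlWorker
        {
            private readonly IDocumentQueue _queue;
            private readonly IContentUploader _uploader;

            // Dependencies arrive through the constructor, which keeps Main() thin
            // and makes the worker easy to unit test with fakes.
            public EtlWorker(IDocumentQueue queue, IContentUploader uploader)
            {
                _queue = queue;
                _uploader = uploader;
            }

            public void Run(string filterPattern)
            {
                foreach (var file in _queue.GetFiles(filterPattern))
                {
                    // Metadata extraction and validation omitted; see the outline above.
                    _uploader.Upload(file, new Dictionary<string, string>());
                }
            }
        }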

    Read the article

  • Why does DataContractJsonSerializer not include generic like JavaScriptSerializer?

    - by Patrick Magee
    So the JavaScriptSerializer was deprecated in favor of the DataContractJsonSerializer.

        var client = new WebClient();
        var json = await client.DownloadStringTaskAsync(url); // http://example.com/api/people/1

        // Deprecated, but clean looking and generally fits in nicely with
        // other code in my app domain that makes use of generics
        var serializer = new JavaScriptSerializer();
        Person p = serializer.Deserialize<Person>(json);

        // Now I have to make use of an ugly typeof to get the Type when I
        // already know the type at compile time. Why no generic type T?
        var serializer = new DataContractJsonSerializer(typeof(Person));
        Person p = serializer.ReadObject(json) as Person;

    The JavaScriptSerializer is nice and allows you to deserialize using a generic type T in the method name. Understandably, it's been deprecated for good reason: with the DataContractJsonSerializer you can decorate the type to be deserialized with various attributes so it isn't as brittle as the JavaScriptSerializer, for example:

        [DataMember(Name = "personName")]
        public string Name { get; set; }

    Is there a particular reason why they decided to only allow users to pass in the Type?

        Type type = typeof(Person);
        var serializer = new DataContractJsonSerializer(type);
        Person p = serializer.ReadObject(json) as Person;

    Why not this?

        var serializer = new DataContractJsonSerializer();
        Person p = serializer.ReadObject<Person>(json);

    They can still use reflection with the DataContract-decorated attributes based on the T that I've specified in ReadObject<T>(json).
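    A thin generic wrapper is easy to add yourself. A minimal sketch follows; note that the real ReadObject overloads take a Stream rather than a string, which is why the JSON is wrapped in a MemoryStream, and the extension-method name is my own invention:

        using System.IO;
        using System.Runtime.Serialization.Json;
        using System.Text;

        public static class JsonExtensions
        {
            // Generic convenience wrapper around DataContractJsonSerializer.
            public static T DeserializeJson<T>(this string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(T));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (T)serializer.ReadObject(stream);
                }
            }
        }

    Usage is then Person p = json.DeserializeJson<Person>();, which is essentially the call shape the question is asking for.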

    Read the article

  • I want to consolidate two sites into a third. Will my search engine rankings be penalized if I rewrite and redirect pages one by one?

    - by Patrick Kenny
    I have two Drupal sites with different content-- let's call them Apple and Orange. I recently developed a much more sophisticated third Drupal site-- let's call it Tree. For a large number of reasons, the content on Apple and Orange is useful for the users of Tree, so I want to move the content to Tree. However, much of the content is out of date. (This whole process took about five years.) To update the content, I will rewrite it one article at a time myself. Now here's my question: if I move the articles one by one (as I rewrite them) and then redirect the old articles (using a 301 redirect) on Apple/Orange to the new site on Tree, will this have a huge negative effect on my search engine rankings? Is there a good way to redirect among sites when they merge like this, or would I be better off keeping the old articles on Apple/Orange and simply linking them to the new, rewritten articles on Tree?

    Read the article

  • Kernel error during upgrade due to "/etc/default/grub: Syntax error: newline unexpected"

    - by Patrick - Developer
    Summary: upgrading linux-image-3.5.0-2-generic to linux-image-3.5.0-3-generic. The default Ubuntu 12.04 update has been generating the following error for weeks (see the link below). Note: I'm using the default update mechanism of Ubuntu 12.04, i.e. apt-get update. Error log: https://gist.github.com/3036775 Overall, it is trying to upgrade linux-image-3.5.0-2-generic to linux-image-3.5.0-3-generic and fails with this error every time. What should I do?

    Read the article

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping ponging": take the output of the first pass and then send it in for the second pass. In my example I want to draw to the R channel and then draw to the G channel to produce a simple Venn diagram in the shader, but I need to detect the overlap. I can currently detect one or the other, but not the overlap. There are a red and a green circle overlapping, and I want to put a dynamic texture map in the overlap region. I can currently put it in one or the other, but not both. Below is how it looks in the shader.

        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0 : TEXCOORD0;
            float2 tex1 : TEXCOORD1;
            float4 color : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            float4 textureColor0;
            float4 textureColor1;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());

    Read the article

  • Character creation using spritesheets

    - by Patrick Developer
    I am currently creating a 2D fighting game and have implemented a system where, upon starting a new game, the player is presented with the option to create a custom character. I have a set of string arrays with values that correspond to hair, headgear, chest, lower body and shoes. When the player is done selecting a variety of items from the lists, a code is generated based on the index of each item (e.g. 01123), which is then used to assign the correct spritesheet to the player character. This has already presented a lot of work, as I have had to create quite a few spritesheets based on the possible combinations, but I am now looking at a massive amount of work to implement each variation. I have started to look into setting layers for each item to reduce the workload, but I am also looking at having different stances for the character - depending on the currently equipped weapon - so this may present a lot of work either way. My question is, do I have any alternatives or am I stuck creating masses of spritesheets to cover all combinations? As a side note, how much impact will assigning layered items have on overall performance?
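    On the layering idea: rather than baking every combination into its own sheet, each equipment slot can be drawn as its own layer in a fixed back-to-front order every frame, so the appearance code just selects one sheet per slot. A minimal sketch of that approach, assuming an XNA/MonoGame-style SpriteBatch and using invented class and field names:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class LayeredCharacter
        {
            // One sheet per equipment slot, chosen from the digits of the
            // appearance code (e.g. "01123" -> hair 0, headgear 1, chest 1, legs 2, shoes 3).
            private readonly Texture2D[] _layers;

            public LayeredCharacter(Texture2D hair, Texture2D headgear,
                                    Texture2D chest, Texture2D legs, Texture2D shoes)
            {
                // Drawn in this order so the later layers appear on top.
                _layers = new[] { shoes, legs, chest, headgear, hair };
            }

            // Draws every layer at the same position with the same source frame,
            // so the layers line up as long as all sheets share one frame layout.
            public void Draw(SpriteBatch spriteBatch, Vector2 position, Rectangle frame)
            {
                foreach (var layer in _layers)
                {
                    spriteBatch.Draw(layer, position, frame, Color.White);
                }
            }
        }

    In practice a handful of extra draw calls per character is cheap; the larger performance factor is usually how many distinct textures the batcher must switch between per frame, so how the layer sheets are grouped tends to matter more than the layer count itself.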

    Read the article

  • Live Virtual Class for Partners: Application Management

    - by Patrick Rood
    November 11-12th: Manageability Partner Community Application Management Suite Live Virtual Training. This training will be offered to Oracle Partners over a live webcast during business hours. Each day will consist of approximately 2-3 hours of lecture and demos. It will be recorded and available for playback. Purpose: This virtual course is a comprehensive program of training sessions, prepared and presented by Product Managers. This ensures you have all the information you need to position and sell Oracle Application Management Suites. The sessions will be lecture based with demonstrations to complement. These sessions are interactive and everyone will be required to participate. Customer case studies will be used as appropriate and there will be plenty of opportunity for in-depth discussion. Please bring to the training an understanding of what Enterprise Manager 12c does for our customers, along with your own experiences to date. Logistics: Topic: Oracle Application Management Suite Training (2 days, approximately 2-3 hours per day). WebEx session details to be provided upon registration. Monday 11th November | 14:00 GMT | 18:00 Gulf (GMT+4). Tuesday 12th November | 14:00 GMT | 18:00 Gulf (GMT+4).

    Read the article

  • Can a NodeJS webserver handle multiple hostnames on the same IP?

    - by Matthew Patrick Cashatt
    I have just begun learning NodeJS and LOVE it so far. I have set up a Linux box to run it and, in learning to use the event-driven model, I am curious if I can use a common IP for multiple domain names. Could I point, for example, www.websiteA.com, www.websiteB.com, and www.websiteC.com all to the same IP (node webserver) and then route to the appropriate source files based on the request? Would this cause certain doom when it came to scaling to any reasonable size?

    Read the article

  • Transitioning from a mechanical engineer to software developer. What path to take?

    - by Patrick
    Hi all, I am 24 years old and have a BS in mechanical engineering from Mizzou. I have been working as a mechanical engineer in a big corporation for about 2 years since graduation. In this time, I have decided that I'm just not that interested in mechanical engineering topics, and I'm much more excited about web technology, website design, etc. I do a very small amount of freelance web design just to get experience, but I have no formal training yet. Ultimately I could see myself being very excited about working for a small startup in web technology, or creating and selling my own programs in more of an entrepreneurial role. What path do you recommend for transitioning into this field? Go back to school and get a BS in CS? MS in CS? Tech school for classes? Self study? I'm feeling overwhelmed by the options available, but the one thing I know for sure is that I'm excited to make a change. Thanks!

    Read the article

  • Is it common to purchase an insurance policy for contract development work?

    - by Matthew Patrick Cashatt
    I am not sure if this is the best place for the question, but I am not sure where else to ask. Background I am a contract developer and have just been asked to provide a general liability policy for my next gig. In 6-7 years this has never been asked of me. Question Is this common? If so, can anyone recommend a good underwriter that focuses on what we do as contract software developers? I realize that Google could help me find underwriters but it won't give me unbiased public opinion about which companies actually understand what we do and factor that into the price of the policy. Thanks, Matt

    Read the article

  • Are there plans for handwriting recognition?

    - by Patrick
    This is a big feature when it comes to putting Ubuntu onto tablets. Currently, Netbook edition works great for that purpose and the pen digitiser is perfect, but the handwriting would be a real dealmaker (especially for my business - we could actually move to Linux) to compete with the Windows one. CellWriter exists, but that only handles character and keyboard input (but I don't know about multitouch on the keyboard). It also needs to handle print and cursive, because character mode can be slow and uncomfortable (unless you're writing passwords). Lastly, CellWriter needs to have some default letter shapes rather than having to be trained from the start. There is a software package called MyScript (by Vision Objects) that handles all four modes (keyboard, character, print, cursive) plus calculator and fullscreen, but it's only free as a trial. Still, it would be nice to see it in the For Purchase section and the trial in the free section of the Software Centre. The only other ones are for Chinese/Japanese/Korean characters. What would really make a difference for us is the integration of some formal API with the OS that can automatically activate when running on a tablet to pass ink data to whatever recognition system is installed, and have something available (however rudimentary) to use it.

    Read the article

  • The Solution

    - by Patrick Liekhus
    So I recently attended a class about time management as well as read the book “The Seven Habits of Highly Effective People” by Stephen Covey.  Both have been instrumental in helping me get my priorities aligned as well as keeping me focused. The reason I bring this up is that it gave me a great idea for a small application with which to create a great technical-stack solution that would be easy to demo and explain.  Therefore, the project from this point forward will be the Liekhus.TimeTracker application, which will bring some of the time management skills that I have acquired into a technical implementation.  The idea is rather simple, but leverages some of the basic principles of Covey along with some of the worksheets that I garnered from class.  The basics are as such: 1) a plan is a must-have and 2) write it down!  A plan not written down is just an idea.  How many times have you had an idea that didn't materialize?  Exactly.  Hence why I am writing it all down now! The worksheet consists of a few simple columns that I will outline below, as well as some modifications that I made according to the Covey habits.  The worksheet looks like the following:

        Status   Issue   Area   CQ     Notes
        P F L                   1234
        P F L                   1234
        P F L                   1234
        P F L                   1234
        P F L                   1234
        P F L                   1234
        P F L                   1234
        P F L                   1234
        P F L                   1234

    The idea is really simple and straightforward; you write down all your tasks and keep track of them along the way.  The status stands for (P)ending, (F)inished or (L)ater.  You write a quick title for the issue and select the CQ (Covey Quadrant) in which the issue occurs.  The notes section is for things that happen while you are working through the issue.  And last, but not least, is the Area column, which I added as a way to identify the role or area of your life that this task falls within, based upon Covey's teachings. The second part of this application is a simple phone log that allows you to track your phone conversations throughout the day.  All of this is currently done on a sheet of paper, but being involved in technology, I want it to have bells and whistles.  Therefore, this is my simple idea for a project that will allow me to test my theories about coding and implementations.  Stay tuned as the next session will be fleshing out the concept and coming up with user stories to begin the SCRUM process. Thanks

    Read the article
