Search Results

Search found 255 results on 11 pages for 'charging'.

Page 8/11 | < Previous Page | 4 5 6 7 8 9 10 11  | Next Page >

  • Did Oracle make public any plans to charge for JDK in the near future? [closed]

    - by Eduard Florinescu
    I recently read an article, Twelve Disaster Scenarios Which Could Damage the Technology Industry, which mentioned among other possible "disaster scenarios" this one: Oracle starts charging for the JDK, giving the following as the argument: Oracle could start requiring license fees for the JDK from everyone but desktop users who haven't uninstalled the Java plug-in for some reason. This would burn down half the Java server-side market, but allow Oracle to fully monetize its acquisitions and investments. [...] Oracle tends to destroy markets to create products it can fully monetize. Even if you're not a Java developer, this would have a ripple effect throughout the market. [...] I actually haven't figured out why Larry hasn't decided Java should go this route yet. Some version of this scenario is actually in my company's statement of risks. I know guessing the future is impossible, and speculating about it would be endless, so I will try to frame my question in an objective, answerable way: did Oracle, or someone from Oracle speaking anonymously, ever make such a possibility public, hint at it, or leak it, or is the above plain journalistic speculation? I am unable to find the answer myself, since searching for "JDK" on Google generates a lot of noise.

    Read the article

  • Do we have enough time to build an electric car future?

    - by julien.groues
    A recent article from Greenbang has posed the question 'Do we have enough time to build an electric car future?'. The writer argues that, although the future of transport might lie with electric cars, there is concern about whether we'll be able to build the market and infrastructure required to support them before carbon and oil constraints create difficulties in powering the vehicles. Of course, the increasing use of electric vehicles (EVs) is going to put excessive pressure on energy grids, as large volumes of electricity will need to be directed to charging points, which in turn must handle fluctuating demand at peak times. EVs are increasing in popularity as a sustainable method of transport to reduce carbon consumption, and electric utilities will have the opportunity, and the challenge, to quickly determine the best methods to fuel these vehicles and accommodate the associated increases in demand for energy. Critically, efficient software is required to provide diagnostic and predictive capabilities related to EV refuelling - for example, anticipated electricity flow will need to be addressed as the number of EVs on the road increases, and electricity will need to be directed to specific areas on demand as vehicles attempt to recharge en masse. But a smart grid infrastructure can meet these demands, intelligently. The implementation of a smart grid is not in the distant future; it is an achievable reality for utilities via the simple installation of new software and technologies, which can be done incrementally by those facing existing legacy systems or concerned about upfront costs. The smart grid is integral to the monitoring and control of energy use as well as the future-proofing of the energy grid. A smart grid will be critical to meeting the electricity requirements of new EVs and will ensure their successful deployment by providing a reliable foundation for the data handling required to record and manage electricity distribution - from recording and assessing energy usage, to analysing data and sharing information with consumers via green billing. http://www.greenbang.com/do-we-have-enough-time-to-build-an-electric-car-future_14248.html

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and then wondered: why don't they know this? Such as this article here - in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience - just build the mapping and the appropriate KM is used - and improves out-of-the-box performance for file-to-file data movement. This improvement to the handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop - the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, and - a big plus - it was much faster than before. So if you are doing any file-to-file transformations, check it out!
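
    As a rough illustration of the threading idea behind this KM (a hypothetical sketch, not the KM's actual code - the file paths and glob pattern are made up): each source file matched by a wildcard is handed to a small thread pool, so two files can be copied concurrently on a dual-core machine.

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.concurrent.*;

        public class ParallelFileMover {
            public static void main(String[] args) throws Exception {
                // Thread count = 2, matching the KM option used in the post.
                ExecutorService pool = Executors.newFixedThreadPool(2);
                // Wildcard resource name: every matching source file is
                // processed on its own worker thread.
                try (DirectoryStream<Path> matches =
                         Files.newDirectoryStream(Paths.get("/data/in"), "orders_*.csv")) {
                    for (Path src : matches) {
                        pool.submit(() -> {
                            try {
                                Files.copy(src, Paths.get("/data/out").resolve(src.getFileName()),
                                        StandardCopyOption.REPLACE_EXISTING);
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        });
                    }
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }
        }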

    Read the article

  • What technology or skillset should I learn today in order to be able to charge $250+ / hr in 2-3 years? [closed]

    - by Ryan Waggoner
    I've been doing PHP freelance development for the last 4-5 years and I'm starting to max out my hourly rate. So in 2010 I decided to transition to a new language. I played with Python and Ruby, but ended up settling on iOS, for three reasons: (1) I'm enjoying the challenge of working on a completely different type of development, instead of another flavor of web development; (2) the demand seems higher right now than for Ruby or Python; (3) I see iOS developers charging $150-250/hr. Whether these reasons are right or wrong, I've been learning iOS for the last year and I'm starting to get more work in that field. I feel confident that in six months (barring any major shifts in the ecosystem), I can be billing iOS work at $150/hr or more. However, I'm feeling that I should have done this earlier, that I've missed the boat, and that iOS development is going to dry up or get much more commoditized. Whether this is true or not isn't really my question (though feel free to comment). What I want to know is: what should I start learning right now so that I can be ahead of the curve in a couple of years when the demand is far outstripping supply? What technologies or skillsets are going to be so heavily in demand in 2-3 years that you'll be able to charge $250/hr or more and stay busy? These don't have to be new technologies either... the answer could be iOS or COBOL or whatever.

    Read the article

  • How do I mount my Android phone?

    - by Amanda
    I'm puzzled because my phone used to just appear when I plugged it in. It doesn't anymore, and the development options are definitely set to allow USB debugging. The phone is charging via USB but doesn't appear in lsusb:

        [0 amanda@luna android-sdk-linux_86]$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 17ef:4807 Lenovo UVC Camera
        Bus 003 Device 012: ID 413c:1003 Dell Computer Corp. Keyboard Hub
        Bus 003 Device 003: ID 08ff:2810 AuthenTec, Inc. AES2810
        Bus 003 Device 013: ID 413c:2010 Dell Computer Corp. Keyboard
        Bus 003 Device 014: ID 046d:c001 Logitech, Inc. N48/M-BB48 [FirstMouse Plus]

    adb devices -l shows nothing. In my Wireless and Network settings I changed the USB connection settings to "Mass storage" -- they were set to "Ask on connection", though I definitely wasn't getting asked. I don't get any "Click here to connect via USB" alert either. I'm not even sure whether the issue is my phone or my computer; it seems odd that it isn't even appearing in lsusb. Not for nothing, the thumb drive on my keyring also does not appear in lsusb -- I've tried both in a bunch of different ports. I kind of assume the thumb drive is just borked, but it could be my OS.

    Read the article

  • My site has crashed... does anyone have some info?

    - by marwan
    Hi all, I booked a domain name for my website from a hosting provider. I gave the domain name, along with the FTP details, to a freelancer to develop the site in WordPress. The freelancer developed it, he got full payment, and the site was working fine, etc. Since then, I have not changed the admin login or FTP details, which means such info is still known to the freelancer. A week ago, I found that some links on my site were not working. I sent him a mail about this, and he said that he would fix it if I gave him the FTP details, and I did so. Next I found that the entire site was gone. Then he sent me a mail, without my asking, in which he said that someone had got access to my server, removed all the files of my site and installed Drupal instead, and that he could rebuild the site in one day, charging a full fee of 250 USD again. Can anyone tell me what I can do in this situation to find out who did such an act? Could it be the host provider or that freelancer? And is there a possibility of getting my site back onto the server? I will appreciate any info on this. Regards, thanks

    Read the article

  • CA For A Large Intranet

    - by Tim Post
    I'm managing what has become a very large intranet (over 100 different hosts/services) and will be stepping down from my role in the near future. I want to make things easy for the next victim person who takes my place. All hosts are secured via SSL. This includes various portals, wikis, data entry systems, HR systems and other sensitive things. We're using self-signed certificates, which worked OK in the past but are now problematic because: browsers make it harder for users to understand exactly what is going on when a self-signed certificate is encountered, much less accept one; and putting up a new host means 100 phone calls asking what "Add an exception" means. What we were doing is just importing the self-signed certs when we set up a new workstation. This was fine when we only had a dozen to deal with, but now it's just overwhelming. Our I.T. department has classified this as y'all's problem; all we get from them is support for switch and router configurations. Beyond the user having connectivity, everything else is up to the intranet administrators. We have a mix of Ubuntu and Windows workstations. We'd like to set up our own self-signed CA root, which can sign certificates for each host that we deploy on the intranet. Client browsers would of course be told to trust our CA. My question is: would this be dangerous, and would we be better off going with intermediate certificates from someone like Verisign? Either way, I still have to import the root for the intermediate CA, so I really don't see what the difference is. Other than charging us money, what would Verisign be doing that we could not, beyond protecting the root CA cert so it can't be used to make forgeries?

    Read the article

  • How should I describe the process of learning someone else's code? (In an invoicing situation.)

    - by MattyG
    I have a contract to upgrade some in-house software for a large company. The company has requested multiple feature additions and a few bug fixes. This is my first freelance-style job. First, I needed to become familiar with how the application worked - I learnt it as if I were a user. Next, I had to learn how the software worked. I started with broad concepts, and then narrowed down into necessary detail before working on each bug fix and feature. At least at the start of the project, it took me a lot longer to learn the existing code than it did to write the additional features. How can I describe the process of learning the existing code on the invoice? (This part of the company usually does things in-house, so it doesn't have much experience dealing with software contractors like me, and I fear they may not understand the overhead of learning someone else's code.) I don't want to just tack the learning time onto the actual feature upgrade, because in some cases this would make a 'simple task' look like it took me way too long. I want to break the invoice into relevant steps, and communicate that I'm charging for the large overhead of learning someone else's code before being able to add my own to it. Is there a standard way of describing this sort of activity when billing for a job?

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Now, making them public in a reliable way is more complicated than most would think; for example, all methods that assume nullable types must now contain argument checking, while not charging internal utilities with the price of doing so. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times and have come up with the following solutions so far (a minimal sketch of the split appears after this list):

    1. Have an internal and a public class for each helper. The internal class contains the actual code while the public class serves as an access point which does argument checking. Cons: the internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types), and it isn't possible to discriminate methods that don't need argument checking.

    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first this might sound like the best possible solution, but it has the same first unpleasant con as solution 1. Cons: internal methods require a prefix to avoid ambiguity.

    3. Have an internal class which is implemented by a public class that overrides any members that require argument checking. Cons: it is non-static, so at least one instantiation is required. This doesn't really fit the helper-class idea, since a helper generally consists of independent fragments of code and should not require instantiation. Non-static methods are also slower, by a negligible degree, which doesn't really justify this option either.

    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first con can be avoided by putting both classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".
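
    As a rough, hypothetical sketch of the checked/unchecked split described in solutions 1 and 2 (in Java here, though the question concerns .NET - the pattern translates directly; the names are made up):

        public final class StringHelper {
            private StringHelper() {} // static helper class, no instances

            // Public access point: validates arguments on behalf of external callers.
            public static String repeat(String s, int count) {
                if (s == null) throw new IllegalArgumentException("s must not be null");
                if (count < 0) throw new IllegalArgumentException("count must be >= 0");
                return repeatUnchecked(s, count);
            }

            // Package-private fast path: internal callers that have already
            // validated their inputs skip the cost of re-checking.
            static String repeatUnchecked(String s, int count) {
                StringBuilder sb = new StringBuilder(s.length() * count);
                for (int i = 0; i < count; i++) sb.append(s);
                return sb.toString();
            }
        }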

    Read the article

  • The battery indicator in Unity panel not showing up

    - by user61415
    I installed Ubuntu 12.04 with Wubi. After being completely dazzled by the amount of free content in the Software Centre, I decided to go deeper and start messing with settings. After changing the screen brightness to the highest level, I noticed that there wasn't an indicator for how much battery was left in my laptop. I looked online and got two suggestions on how to fix it: (1) right-click on the Unity panel and add an indicator; (2) set it to show in the power settings menu. Well, I did both: when I right-click on the top menu nothing comes up, and setting it to show does nothing either. Then I tried installing something from the Software Centre. I got something, but when I activated it, it said I had 0% power left even though I was charging and at 100% according to the light on the front of my laptop. So now I'm thinking that it doesn't even recognise my computer as a laptop, which is weird because the display settings say my screen size is set to laptop. How can I install it? I don't know what version it is other than Ubuntu 12.04, and no matter what I try, the icon does not appear.

    Read the article

  • Dim (NEARLY blank) laptop screen, secondary screen works - why?

    - by LIttle Ancient Forest Kami
    My laptop screen is (almost) black while my secondary screen is fine. I believe it to be backlight/brightness related.

    Problem description: it starts when I start the laptop; the system loads and works fine, just the screen has problems. I can see the screen, though very faintly/dimly - it's hard to see anything which ain't very white, e.g. the starting screen has a big Thinkpad logo in white, large font - I can see it, though very dimly. The second screen works very well. Official backlight debugging: using the acpi settings prescribed there for Thinkpads didn't help. I can see an entry in /sys/class/backlight/ and it changes when I press the hotkeys for brightness (current backlight power, for instance, goes up or down). acpi=off didn't help, and neither did acpi_backlight=vendor.

    Hardware data: the laptop is a Thinkpad Edge with a glossy screen. 4 processors, 2 cores; sample CPU data from cat /proc/cpuinfo reports a Genuine Intel i5 (M 480 @ 2.67GHz). The OS is Ubuntu Lucid, 10.04 LTS, 64-bit, with the generic Linux kernel (2.6.32-44) and GNOME 2.32.2 (though I doubt the problem lies there).

        $ lspci | grep VGA
        01:00.0 VGA compatible controller: ATI Technologies Inc M92 [Mobility Radeon HD 4500 Series]

        $ lshw -C display
        *-display
             description: VGA compatible controller
             product: M92 [Mobility Radeon HD 4500 Series]
             vendor: ATI Technologies Inc
             physical id: 0
             bus info: pci@0000:01:00.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: pm pciexpress msi bus_master cap_list rom
             configuration: driver=radeon latency=0
             resources: irq:33 memory:c0000000-dfffffff(prefetchable) ioport:2000(size=256) memory:f0300000-f030ffff memory:f0320000-f033ffff(prefetchable)

    Driver: I was NOT running any proprietary drivers, I just checked with "Hardware drivers". There is one for ATI that is suggested there, though I haven't needed it so far. UPDATE: changing the driver to the proprietary one (ATI/AMD FGLRX) didn't help.

    Tried and failed: resetting / running on power or battery / charging / getting rid of static electricity / warming up *doesn't help*. This is NOT a blank-screen problem, at least according to the official Ubuntu black-screen diagnostics - I can see my screen, though barely.

    What I will try next: check the last updates I've made; IIRC I am running with nomodeset already, but will verify this. Any ideas how to proceed best? What is the most probable cause?

    Read the article

  • How to track humans, but not spiders?

    - by johnnietheblack
    I am writing a nice little PHP/MySQL/JS ad system for my site - charging by impression. Therefore, my clients would like to be sure that they aren't paying for impressions caused by robots/spiders/etc. Is there a decently effective way to distinguish between human and robot without doing something obtrusive like a captcha, or requesting "no index" or something? Basically, is there some sort of "name tag" that legit spiders wear so that my site knows their presence and can react accordingly? And if there isn't... how do ad companies like Google defend against this?
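
    There is such a name tag for well-behaved crawlers: they identify themselves in the User-Agent header (and the major ones can additionally be verified with a reverse-DNS lookup on the requesting IP). A minimal sketch of the idea - in Java rather than the asker's PHP, with an illustrative rather than exhaustive token list - follows; note it only catches honest bots, which is why ad systems layer heuristics (JS execution, mouse events, IP reputation) on top.

        import java.util.List;
        import java.util.Locale;

        public final class BotDetector {
            // Tokens that well-known, self-identifying crawlers include in
            // their User-Agent header (illustrative, not exhaustive).
            private static final List<String> BOT_TOKENS = List.of(
                    "googlebot", "bingbot", "slurp", "duckduckbot", "baiduspider");

            // Returns true if the request's User-Agent looks like a declared crawler.
            public static boolean looksLikeBot(String userAgent) {
                if (userAgent == null || userAgent.isBlank()) {
                    return true; // real browsers always send a User-Agent
                }
                String ua = userAgent.toLowerCase(Locale.ROOT);
                return BOT_TOKENS.stream().anyMatch(ua::contains);
            }
        }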

    Read the article

  • Developing iPhone apps on Windows - is it worth the hassle?

    - by kalpaitch
    I'm only after a simple solution and won't be developing anything particularly complex. But I'm wondering whether the hassles of developing an iPhone app NOT on Mac OS are really significant enough to avoid giving it a shot, bearing in mind that I do have access to a Mac every now and again. So I would be able to compile it using the official Apple-supported SDK, but I just want to be able to develop it in my own environment (a Windows laptop). I heard someone mention a while ago that there are various Objective-C compilers, and tools that allow writing the code in various other web technologies as well. Are these really viable? And am I alone in thinking Apple's whole attitude towards this is totally immoral? Charging $200 for the privilege of having your app unequivocally rejected, etc. etc., and then not being allowed to look directly at Steve Jobs or his golden retrievers.

    Read the article

  • Good memory profiling, leak and error detection for Windows

    - by Fernando
    I'm currently looking for a good memory/leak detection tool for Windows. A few years ago, I used NuMega's BoundsChecker, which was VERY good. Right now it seems to have been sold to Compuware, which apparently sold it again to some other company. Trying to evaluate a demo of the current version has been very frustrating so far, in the best "enterprisy" tradition: (a) no advertised prices on their website (Great Red Flashing Lights of Warning); (b) the contact form asked for number of employees and other private information; (c) no response to my emails asking for an evaluation and price. I had to conclude that BoundsChecker is now one of "those" products. Y'know, the type where you innocently call and tomorrow 3 men in black suits turn up at your building wanting to talk to you about "partnerships" and not-so-secretly gauge the size of your company, and therefore how much they can get away with charging you. SO, rant aside, can anyone recommend an excellent memory checking/leak detection tool, how much it costs, and suggestions for where to buy it?

    Read the article

  • Paypal API for preapproved payments: is the merchant charged for pre-approved transactions?

    - by Michael
    I'm running a classified ads website and I'm charging a specific fee for each ad placed. As you may know, PayPal charges a percentage of each transaction plus a fixed fee (e.g. 2.9% + $0.30). I have customers who place about 30 ads per month, so I would like to implement a scheme that cuts down on the fixed fees. Basically, I'm looking to make a "pre-approve" call each time a client places an ad, to make sure that he has the money to pay, and then at the end of the month cancel all the scheduled pre-approved transactions and make a single payment request for the whole amount. The question that I have is: will I be charged for the pre-approved transactions that I cancel?
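
    To illustrate the saving being sought here (using the example rate above and a hypothetical $1.00 fee per ad): 30 separate charges would cost 30 × ($1.00 × 2.9% + $0.30) ≈ $9.87 in transaction fees, while one aggregated $30.00 charge would cost $30.00 × 2.9% + $0.30 ≈ $1.17 - the fixed $0.30 fee is paid once instead of 30 times.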

    Read the article

  • One SVN repository or many?

    - by nickf
    If you have multiple, unrelated projects, is it a good idea to put them in the same repository?

        myRepo/projectA/trunk
        myRepo/projectA/tags
        myRepo/projectA/branches
        myRepo/projectB/trunk
        myRepo/projectB/tags
        myRepo/projectB/branches

    or would you create new repositories for each?

        myRepoA/trunk
        myRepoA/tags
        myRepoA/branches
        myRepoB/trunk
        myRepoB/tags
        myRepoB/branches

    What are the pros and cons of each? All that I can currently think of is that you get mixed revision numbers (so what?), and that you can't use svn:externals unless the repository is actually external (I think?). The reason I ask is that I'm considering consolidating my multiple repos into one, since my SVN host has started charging per repo.

    Read the article

  • Catch/Raise event on table data update C#

    - by Incognito
    Hi, I have a 24/7 service which keeps the setup (configuration data) for charging, routing, etc. in SQL Server. Once it is started, it loads the data from a table using LINQ to SQL and uses the data throughout the application. We need a way to update the setup data in the table without restarting the application. So I am interested: is it possible to catch/determine that the table was updated, so I can refresh the setup data in the application? I mean, is it possible to have events which will be raised when there is any delete, update or insert on the table? Thank you.
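
    Since the setup lives in SQL Server and the service is C#, the built-in route is SqlDependency (query notifications), which raises an OnChange event when the results of a registered query change; it does require Service Broker to be enabled on the database. A cruder, stack-agnostic alternative is to have INSERT/UPDATE/DELETE triggers bump a version row that the service polls. A minimal sketch of that polling idea follows - written in Java here, with a hypothetical SetupVersion table and JDBC URL:

        import java.sql.*;

        public class SetupWatcher {
            // Polls a trigger-maintained version value and fires a callback
            // when it changes. Table, column and URL are hypothetical.
            public static void watch(String jdbcUrl, Runnable onChange) throws Exception {
                long lastSeen = Long.MIN_VALUE;
                try (Connection con = DriverManager.getConnection(jdbcUrl)) {
                    while (true) { // runs for the lifetime of the 24/7 service
                        try (Statement st = con.createStatement();
                             ResultSet rs = st.executeQuery("SELECT Version FROM SetupVersion")) {
                            if (rs.next()) {
                                long v = rs.getLong(1);
                                if (lastSeen != Long.MIN_VALUE && v != lastSeen) {
                                    onChange.run(); // reload the cached setup data
                                }
                                lastSeen = v;
                            }
                        }
                        Thread.sleep(5_000); // poll every 5 seconds
                    }
                }
            }
        }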

    Read the article

  • Enhance GIMP’s Image Editing Power with Gimp Paint Studio

    - by Asian Angel
    Does your GIMP installation need a little super-charging? Using Gimp Paint Studio you can add a wonderful set of brushes, tools, and more to GIMP and take your work up to the next level. For our example we chose to install the beta version of Gimp Paint Studio on Ubuntu 10.10. Once you download the .zip file and unzip it, all that you need to do is manually transfer the contents shown here to the appropriate GIMP folders on your system. You can see the location of the destination folders here on our system. Note: make certain to create a back-up copy of the sessionrc and toolrc files before you transfer Gimp Paint Studio into your installation (in case you would like to, or need to, revert back to the originals later). When you finish transferring the files, start GIMP up and get ready to have fun. And if your experience is like ours, then you should see a noticeable difference in window size and arrangement from the default settings. Here are some samples of the exceptional artwork done by Ramon Miranda and Mozart Couto using Gimp Paint Studio. Really impressive! Watch the introduction video and see Gimp Paint Studio in action. Download Gimp Paint Studio for Linux, Windows, and Mac [Gimp Paint Studio Homepage] (keep in mind that there are stable and beta releases available, so choose the version that you are most comfortable using). View the Installation Guides for Gimp Paint Studio (the page contains wonderful video and written versions for adding/installing Gimp Paint Studio to your system). Gimp Paint Studio Video Tutorials Library. Visit the Gimp Paint Studio Gallery.

    Read the article

  • .NET Reflector Pro Coming…

    The very best software is almost always originally the creation of a single person. Readers of our 'Geek of the Week' will know of a few of them. Even behemoths such as MS Word or Excel started out with one programmer. There comes a time with any software that it starts to grow up, and has to move from this form of close parenting to being developed by a team. This has happened several times within Red-Gate: SQL Refactor, SQL Compare, and SQL Dependency Tracker, not to mention SQL Backup, were all originally the work of a lone coder, who subsequently handed over the development to a structured team of programmers, test engineers and usability designers. Because we loved .NET Reflector when Lutz Roeder wrote and nurtured it, and, like many other .NET developers, used it as a development tool ourselves, .NET Reflector's progress from being the apple of Lutz's eye to being a Red-Gate team-based development seemed natural. Lutz, after all, eventually felt he couldn't afford the time to develop it to the extent it deserved. Why, then, did we want to take on .NET Reflector? Different people may give you different answers, but for us in the .NET team, it just seemed a natural progression. We're always very surprised when anyone suggests that we want to change the nature of the tool since it seems right just as it is. .NET Reflector will stay very much the tool we all use and appreciate, although the new version will support .NET 4, and will have many improvements in the accuracy of its decompiling. Whilst we've made a lot of improvements to Reflector, the radical addition, which we hope you'll want to try out as well, is '.NET Reflector Pro'. This is an extension to .NET Reflector that allows the debugging of decompiled code using the Visual Studio debugger. It is an add-in, but we'll be charging for it, mainly because we prefer to live indoors with a warm meal, rather than outside in tents, particularly when the winter's been as cold as this one has. We're hoping (we're even pretty confident!) that you'll share our excitement about .NET Reflector Pro. .NET Reflector Pro integrates .NET Reflector into Visual Studio, allowing you to seamlessly debug into third-party code and assemblies, even if you don't have the source code for them. You can now treat decompiled assemblies much like your own code: you can step through them and use all the debugging techniques that you would use on your own code. Try the beta now.

    Read the article

  • Four Emerging Payment Stories

    - by David Dorf
    The world of alternate payments has been moving fast of late.  Innovation in this area will help both consumers and retailers, but probably hurt the banks (at least that's the plan).  Here are four recent news items in this area: Dwolla, a start-up in Iowa, is trying to make credit cards obsolete.  Twelve guys in Des Moines are using $1.3M they raised to allow businesses to skip the credit card networks and avoid the fees.  Today they move about $1M a day across their network with an average transaction size of $500. Instead of charging merchants 2.9% plus $.30 per transaction, Dwolla charges a quarter -- yep, that coin featuring George Washington. Dwolla (Web + Dollar = Dwolla) avoids the credit networks and connects directly to bank accounts using the bank's ACH network.  They are signing up banks and merchants targeting both B2B and C2B as well as P2P payments.  They leverage social networks to notify people they have a money transfer, and also have a mobile app that uses GPS location. However, all is not rosy.  There have been complaints about unexpected chargebacks and with debit fees being reduced by the big banks, the need is not as pronounced.  The big banks are working on their own network called clearXchange that could provide stiff competition. VeriFone just bought European payment processor Point for around $1B.  By itself this would not have caught my attention except for the fact that VeriFone also announced the acquisition of GlobalBay earlier this month.  In addition to their core business of selling stand-beside payment terminals, with GlobalBay they get employee-operated mobile selling tools and with Point they get a very big payment processing platform. MasterCard and Intel announced a partnership around payments, starting with PayPass, MasterCard's new payment technology.  Intel will lend its expertise to add additional levels of security, which seems to be the biggest barrier for consumer adoption.  Everyone is scrambling to get their piece of cash transactions, which still represents 85% of all transactions. Apple was awarded another mobile payment patent further cementing the rumors that the iPhone 5 will support NFC payments.  As usual, Apple is upsetting the apple cart (sorry) by moving control of key data from the carriers to Apple.  With Apple's vast number of iTunes accounts, they have a ready-made customer base to use the payment infrastructure, which I bet will slowly transition people away from credit cards and toward cheaper ACH.  Gary Schwartz explains the three step process Apple is taking to become a payment processor. Below is a picture I drew representing payments in the retail industry. There's certainly a lot of innovation happening.

    Read the article

  • Nokia vs. The World

    - by Michael B. McLaughlin
    I’m looking forward to the launch of the Nokia Lumia 920. Why? Well, it stacks up better than the competition for one thing. Then there’s also that security problem that certain other phones have. Mostly, though, it’s because I love my Lumia 900 and the 920, with Windows Phone 8, will be even better. Before I got my Lumia 900, I just took it as given that smart phone cameras couldn’t be good. The Lumia taught me that smart phone cameras can be good if the manufacturer treats them as an important component worth spending time and money on (rather than some thing that consumers expect such that they’d better throw one in). I’m extremely pleased with the quality of pictures that my Lumia 900 gives me as well as the range of settings it provides (you can delve in to tell it a film speed, an f-stop, and a whole range of other settings). And the image stabilization features in the Lumia 920 deliver far better results than the others. Nokia has had great maps for a long time and they continue to improve. Even better, they made a deal that puts many of their excellent maps into Windows Phone 8 itself. There are still Nokia-exclusive features such as Nokia City Lens, of course. But by giving the core OS a great set of fundamental map data and technologies, they help ensure that customers know that buying a Windows Phone 8 will give them a great map experience no matter who made the phone. I’ll be getting a 920, myself, but the HTC and Samsung devices that have been announced have some compelling features, too, and it’s great to know that people who buy one of these won’t need to worry about where their maps might lead them. I’m looking forward to the NFC capabilities and Qi wireless charging my Lumia 920 will have. With the availability of DirectX and C++ programming on Windows Phone 8, I’m also excited about all the great games that will be added to the Windows Phone environment. I love my Xbox Phone. I love my Office phone. I love my Facebook phone. I love my GPS phone. I love my camera phone. I love my SkyDrive phone. In short, I love my Windows Phone!

    Read the article

  • Am I experienced enough to learn and develop immediately using Ruby on Rails?

    - by acheong87
    General Question: I understand that discussions revolving around questions of this form run the risk of becoming too specific to help others. So perhaps a better, general question would be: what kind of experience, if any, translates easily to Ruby on Rails; and if none, then what's the learning curve like in comparison to other popular languages?

    Background: I have the opportunity to build a website using whatever technologies I wish to use. It's a fairly simple website, for listing products, taking payments, managing customer data, providing a back-end portal for employees to manage data, possibly hooking in flight information (the products are travel related), and possibly integrating a blog and all the social-networking goodies.

    Specific Problem: I have to let the client know by tonight whether I'm interested in taking up this project, before he talks to other potential developers, but I'm on the fence. I already work a full-time C++ development job, so the money doesn't do it for me. It's the opportunity to (be paid to) learn some new technologies and to have a real, running product in the end. I've heard and read great things about Ruby, and am really intrigued. I zipped through some introductory Ruby tutorials, no sweat. However, I found the Rails tutorials a little overwhelming, especially not being able to try it out anywhere. And researching Rails hosts like Heroku and EngineYard makes me think that maybe I don't know what I'm getting myself into. The ship's leaving port! I wish I had more time to learn, better yet play with the language, but I have to decide soon! Should I venture or pass?

    Additional Details: My experience is in C/C++/Tcl/Perl/PHP/jQuery, with basic knowledge of Java/C#. I didn't study C.S. formally, so I wasn't exposed to design principles, programming paradigms, etc., which is my greatest concern. Will my lack of understanding in this realm make RoR frustrating to learn? Will it be so incompatible with a C++ "way" of thinking that I'll wish I never started? Am I putting my client at risk by attempting this? If it helps, I'm quick to learn new things (self-taught so far) and care a great deal about correctness, using things for their intended purposes, and so on. I've read numerous recommendations of Agile Development with Rails and would love to read it (though perhaps, for shortness of time, while developing in parallel). Worse comes to worst, I'd give up and do the standard LAMP gig, of course not charging the client for wasted time. But I'm hoping to avoid that altogether if it's going to come down to it! Thanks in advance for any tips, insights, votes of confidence, votes of discouragement (for the better), and such.

    Read the article

  • Ubuntu 12.04.3 Graphics Issues: Broken Pipes, Reinstalled Xorg and Bumblebee

    - by user190488
    It seems I have a problem, and am only making it worse by following what I find online. I have a new Asus N550JV-D71 (not sure about the part after the dash, though I definitely know it includes 71). I decided to downgrade Windows 8 to 7, then dual-boot Ubuntu 12.04 with it (there were issues with Windows 8, and I had a Windows 7 disk handy). It did work, and after installing Bumblebee in a tty (because the system wouldn't boot when Bumblebee was first installed), it worked marvelously for a little less than a week. However, I restarted it last night and got the "Could not write bytes: Broken pipe" error. (I see it's a very common error, but I've already looked at the majority of the suggested Similar Questions.) I followed what I could find online, followed those instructions (making sure not to install any graphics drivers other than what Bumblebee provides), and it just seems to go further and further downhill. I'm afraid I didn't write down the exact steps that got me to this point (it was late by the time I gave up the night before), but they involved reinstalling lightdm, xorg (and xserver?), and Bumblebee. I then changed the Bumblebee.conf file so that Device=nvidia. I'm pretty new to Linux in general (I've used it since 10.04, but I hadn't had issues up until this computer, so it let me stay a newbie), so I'm not exactly sure which log files to look at to find the errors to look up. However, I did look at lshw and noticed that the display was marked as unassigned. Also, if I try to start lightdm from the command line, it always stops at "Stopping Mount network filesystems". I should note that there isn't an xorg.conf file, and no .Xauthority. I would really, really prefer not to reinstall 12.04 if possible. I managed to get grub to display only a short time ago, and I can't boot to the DVD drive unless I go into the BIOS settings and manually change the boot order (that was an issue from the beginning, before the Ubuntu install), and getting into those settings often means rebooting several times because the window to get into them is extremely small. I have most of what I need backed up, however, in case it does get to that point. If I really have to, I can just use the latest Ubuntu version instead of the LTS, but the reason I chose 12.04 in the first place is that I need something stable-ish, and Windows isn't suitable for what I need to do. I should note that the reason I restarted last night in the first place was that it wasn't charging the battery, and the wifi kept going out. Hardware: Nvidia GeForce GT 750M; Intel HD Graphics 4600.

    Read the article
