Search Results

Search found 66801 results on 2673 pages for 'near real time'.


  • How to deal with the "programming blowhard"?

    - by Peter G.
    (Repost; I posted this in the wrong section before, sorry.) So I'm sure everyone has run into this person at one point or another: someone catches wind of your project or idea and initially shows some interest. You get to talking about some of your methods, and usually around this time they interject, stating how you should use method X instead, or just use library Y, not as a friendly suggestion but as something bordering on a commandment, often repeating the same advice over and over like an overzealous parrot. Personally, I like to reinvent the wheel when I'm learning, or even just for fun, even if it turns out worse than what's been done before. But this person apparently cannot fathom recreating ANY utility for such purposes, or trying something that doesn't strictly follow traditional OOP practices, and will settle for nothing except their sense of perfection, and thus naturally heave their criticism sludge down my ears full force. To top it off, they eventually start justifying their advice by listing all the incredibly complex things they've coded single-handedly (usually along the lines of "trust me, I've made/used program X for a long time, blah blah blah"). Now, I'm far from being a programming master; I'm probably not even that good, and as such I value advice and critique, but I think advice and critique have a time and place. There is also a big difference between being helpful and being narcissistic. In the past I probably would have used a somewhat stronger George Carlin-style dismissal, but I don't think burning bridges is the best approach anymore. Maybe I'm just an asshole, but do you have any advice on how to deal with this kind of verbal flogging?

    Read the article

  • How to become an expert web-developer?

    - by John Smith
    I am currently a junior PHP developer and I really LOVE it. I have loved the internet from the first time I got into it, I always loved smartly created websites, always wondered how it all works, always admired websites with good design and rich functionality, and finally I am creating websites on my own and it feels really great. My goals are to become an expert web developer (aiming at creating websites for small and medium businesses, not enterprise-sized systems), to have a great full-time job, to do freelance work and to create my own startup in the future. General question: What do I do to become an expert, professional and in-demand web programmer? More concrete questions: 1) How do I choose the languages and technologies I need? I know that every web developer must know HTML + CSS + JS + AJAX + jQuery, and I am doing some design as well because I like it and I need it for freelance work too. But what about backend languages? Currently I picked PHP because it's the most in demand in my area and most of the web uses it, but what happens in the future? Say, in 3 years I am good at PHP and PHP frameworks, but what if some other language becomes the most popular? Do I switch to it? I know that being a good programmer is not about languages and frameworks but about the ability to learn and to reach goals, but I still think that learning the frameworks for a language can take quite some time. Am I wrong? 2) In general, what are the basic guidelines for becoming an expert web developer? What are the most important things I should focus on? Thank you!

    Read the article

  • SQL SERVER – 2012 – List All The Column With Specific Data Types in Database

    - by pinaldave
    Five years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types; when I read it again it is still very much relevant and I liked it. This is one of those scripts which every developer would like to keep handy. I have upgraded the script a bit and included a few additional pieces of information which I believe I should have added from the beginning. It is difficult to visualize the final script when writing it the first time. I use every script which I write on this blog; as a matter of fact, I write only those scripts here which I was using at the time. It is quite possible that as time passes my needs change and I change my script. Here is the updated script on this subject. If there are any user-defined data types, it will list them as well.

        SELECT s.name AS 'schema',
               ts.name AS TableName,
               c.name AS column_name,
               c.column_id,
               SCHEMA_NAME(t.schema_id) AS DatatypeSchema,
               t.name AS Datatypename,
               t.is_user_defined,
               t.is_assembly_type,
               c.is_nullable,
               c.max_length,
               c.PRECISION,
               c.scale
        FROM sys.columns AS c
        INNER JOIN sys.types AS t ON c.user_type_id = t.user_type_id
        INNER JOIN sys.tables ts ON ts.OBJECT_ID = c.OBJECT_ID
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
        ORDER BY s.name, ts.name, c.column_id

    I would be very interested to see your script which lists all the columns of the database with their data types. If I am missing something in my script, I will modify it based on your comments. This way this page will be a good bookmark for the future for all of us. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Best practice for setting Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting Effect parameters in XNA. Or in other words, what exactly happens when I call pass.Apply()? I can imagine multiple scenarios:

    1. Each time Apply is called, all effect parameters are transferred to the GPU, and therefore it makes no real difference how often I set a parameter.
    2. Each time Apply is called, only the parameters that got set are transferred, so Set operations that don't actually set a new value should be avoided.
    3. Each time Apply is called, only the parameters whose values actually changed are transferred, so caching Set operations is useless.
    4. This whole question is bootless because none of the mentioned ways has any noteworthy impact on game performance.

    So the final question: is it useful to implement some caching of Set operations like this?

        private Matrix _world;
        public Matrix World
        {
            get { return _world; }
            set
            {
                if (value == _world) return;
                _effect.Parameters["xWorld"].SetValue(value);
                _world = value;
            }
        }

    Thanking you in anticipation.

    Read the article

  • Microsoft MVP Award – Data Platform Development

    - by Dane Morgridge
    For those who don't already know, yesterday I received my first Microsoft MVP Award, in Data Platform Development.  With fewer than 5,000 MVPs in the world overall and about 20 in the Data Platform category, saying I am honored would be an understatement.  From the first time I spoke at a code camp, I was totally hooked, and I have had a blast travelling around the east coast speaking at code camps and user groups.  I'd like to take the time to thank Dani Diaz (@danidiaz) for the nomination and everyone who supported me, especially my wife Lisa for letting me travel and speak as much as I have and for putting up with the late nights and such.  Roska Digital, my employer, also deserves a shout out for supporting me and giving me the necessary time off to get to speaking engagements.  With any luck, the next year will be at least as much fun as the last one, if not more.  I hope to see you at a code camp or user group meeting soon! I would also like to send congratulations to the other new Philly Area MVPs: John Angelini (@johnangelini) & Ned Ames (@nedames). You can find out more about the Microsoft MVP Award at https://mvp.support.microsoft.com/

    Read the article

  • Problem with signal handlers

    - by Hristo
    How can something print 3 times when it only goes through the printing code twice? I'm coding in C and the code is in a SIGCHLD signal handler I created.

        void chld_signalHandler() {
            int pidadf = (int) getpid();
            printf("pidafdfaddf: %d\n", pidadf);
            while (1) {
                int termChildPID = waitpid(-1, NULL, WNOHANG);
                if (termChildPID == 0 || termChildPID == -1) {
                    break;
                }
                dll_node_t *temp = head;
                while (temp != NULL) {
                    printf("stuff\n");
                    if (temp->pid == termChildPID && temp->type == WORK) {
                        printf("inside if\n");
                        // read memory mapped file b/w WORKER and MAIN
                        // get statistics and write results to pipe
                        char resultString[256];
                        // printing TIME
                        int i;
                        for (i = 0; i < 24; i++) {
                            sprintf(resultString, "TIME; %d ; %d ; %d ; %s\n", i, 1, 2, temp->stats->mboxFileName);
                            fwrite(resultString, strlen(resultString), 1, pipeFD);
                        }
                        remove_node(temp);
                        break;
                    }
                    temp = temp->next;
                }
                printf("done printing from sigchld \n");
            }
            return;
        }

    The output for my MAIN process is this:

        MAIN PROCESS 16214 created WORKER PROCESS 16220 for file class.sp10.cs241.mbox
        pidafdfaddf: 16214
        stuff
        stuff
        inside if
        done printing from sigchld
        MAIN PROCESS 16214 created WORKER PROCESS 16221 for file class.sp10.cs225.mbox
        pidafdfaddf: 16214
        stuff
        stuff
        inside if
        done printing from sigchld

    And the output for the MONITOR process is this:

        MONITOR: pipe is open for reading
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs241.mbox
        MONITOR: end of readpipe

    (I've taken out repeating lines so I don't take up so much space.) Thanks, Hristo

    Read the article

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank, in RAIDZ1, with 5 x 1TB SATA HDDs. I'm using Ubuntu Server 11.10 Oneiric, kernel 3.0.0-15-server. I installed ZFS from a PPA, and I'm also using zfs-auto-snapshot. The ZFS file system, once the zfs module is loaded into the kernel, hangs my computer. Before that I created a few new file systems:

        zfs create -V 10G tank/iscsi1
        zfs create -V 10G tank/iscsi2
        zfs create -V 10G tank/iscsi3

    I shared them through iSCSI by the /dev/tank/iscsiX path, and my computer started hanging sometimes when I used tank/iscsiX over iSCSI; I do not know why exactly. I switched off iSCSI and started to remove these file systems with zfs destroy tank/iscsi3. I'm also using zfs-auto-snapshot, so I had snapshots, and without the -r key my command was not destroying the FS. So I issued the next command: zfs destroy tank/iscsi3 -r. The tank/iscsi3 FS was clean and contained nothing; it was destroyed without an issue. But tank/iscsi2 and tank/iscsi1 contained a lot of information. I tried zfs destroy tank/iscsi2 -r. After some time my computer hung. I rebooted the computer. It didn't boot very fast; the HDDs started working like crazy, making a lot of noise, and after 15 minutes the HDDs stopped going crazy and the OS booted at last. All seemed to be OK: tank/iscsi2 was destroyed. Once the file systems in the tank were accessible again, zpool status showed no corruption. I issued a new command: zfs destroy tank/iscsi1 -r. The situation repeated itself: after some time my computer hung. But this time ZFS seems not to have healed itself. After the computer was switched on it started to work, loading scripts and kernel modules, but once zfs started to work it hung my computer. I need to recover the other ZFS file systems which live in the same zpool. A few months ago I backed up the OS to a flash drive. Booting from the backed-up OS and importing the pool has the same result: the OS starts hanging. How do I recover my data on the ZFS tank?

    Read the article

  • wubi dual-boot installation of ubuntu 12.04 on Windows 7 fails to boot

    - by Andrew
    I am trying to use the wubi installation process to create an Ubuntu 12.04 / Windows 7 dual-boot setup on my Windows 7 machine (Dell Inspiron 17R). The installation initially works fine, and I am able to load Ubuntu several times after selecting it from the boot menu. However, when I boot into Windows 7 it seems to corrupt the Ubuntu boot process, because after running Windows 7, Ubuntu won't boot on the machine. It is still listed as an option in the boot menu, but when it is selected, the machine does one of the following: it hangs at the load screen and says that Ubuntu is preparing to run for the first time (although it isn't the first time the OS has been loaded), or it hangs with a black screen and does nothing. I have uninstalled Ubuntu and then reinstalled it (using wubi) three times. Each time Ubuntu initially boots okay (including rebooting the laptop into Ubuntu several times). However, whenever I switch over and boot into Windows 7 it breaks the Ubuntu installation. Windows 7 continues to boot and work fine without issues. I have successfully installed Ubuntu using wubi onto a different Windows 7 machine before without problems; it seems that there is something different about this laptop configuration. I am not sure how to debug the issue, as I see no error messages during the Ubuntu boot process when it hangs.

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
    PARTNER FOCUS: Oracle Exadata Marketing Campaign. Steve McNickle, VP Europe, cVidya. Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade. Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations? "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution." What unique capabilities and customer benefits does Oracle Exastack add to your applications? "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads." "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention." How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: “I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud”. The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them." -- Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel. "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands." "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves." "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."

    Read the article

  • Central Ohio Day of .Net 2010 Slides and Files

    - by Brian Jackett
    This weekend I presented my “The Power of PowerShell + SharePoint 2007” session at the Central Ohio Day of .Net conference in Wilmington, OH.  This is the second year I’ve attended this conference, and the first time as a presenter.  For those unfamiliar, Day of .Net conferences are one-day conferences on all things .NET, organized by developers for developers.  These events are usually offered at no cost to anyone interested in .NET development.     The attendees of my session had some great questions and I hope they all got something worthwhile out of it.  Below are my slides and demo scripts (some of which I didn’t have time to demo) along with my sample profiles.  If you have any questions, comments, or feedback feel free to leave comments here or send me an email at [email protected].   Slides and Files: SkyDrive link.   Technology and Friends Interview Experience: On a side note, any of you familiar with one of my Sogeti co-workers in Detroit, David Giard, may know that he hosts a web series called Technology and Friends.  After my session David tracked me down and asked to interview me about PowerShell.  I was happy to oblige, so we sat down and taped some material.  I don’t know when that interview will be going live, but look for it on www.davidgiard.com.   Conclusion: A big thanks goes out to all of the sponsors, speakers, and attendees of the Central Ohio Day of .Net conference.  Without all of them this conference couldn’t have been possible.  I had a great time at the conference and look forward to coming back next year, whether that is as a speaker or attendee.       -Frog Out

    Read the article

  • The Splendiferous Array of Culinary Tools [Infographic]

    - by ETC
    If your geeking out extends from the workbench to the kitchen counter, you’ll love this swanky infographic detailing the families of utensils in your kitchen drawers and cupboards. The poster showcases everything from scissors to strainers in a retro-style layout. If you can find a culinary tool in your kitchen that isn’t on the chart then you’re obviously a culinary wizard of the highest order. You can hit up the link below to check out the poster in full-size and downloadable glory or head over to the design company that created it here (and pre-order a printed copy for your kitchen). A Complete Guide to Your Kitchen Tools [Fast Co. Design via Design Sponge]

    Read the article

  • Teaching myself, as a physicist, to become a better programmer

    - by user787267
    I've always liked physics, and I've always liked coding, so when I got the offer for a PhD position doing numerical physics (the details are not relevant; it's mostly parallel programming for a cluster) at a university, it was a no-brainer for me. However, like most physicists, I'm self-taught. I don't have broad background knowledge about how to code in an object-oriented way, or the name of that specific algorithm that optimizes the search in some kD tree. Since all my work so far has been more concerned with the physics and the scientific results, I undoubtedly have some bad habits, more so because my coding is my own, and not really teamwork. I have mostly used C since it is very straightforward and "what you write is what you get", with no need for fancy abstractions. However, I have recently switched to C++ since I'd like to learn more about the power that comes with abstraction, and it's pretty C-like (syntax-wise at least). How do I teach myself to code in a good, abstract way like a graduate in computer science? I know my code is efficient, but I want it to be elegant as well, and readable. Keep in mind that I don't have time to read several 1000-page tomes about abstract programming. I need to spend time on actual, physics-related research (my supervisor would laugh at me if he knew I spent time thinking about how to program elegantly). How do I assess whether my work is also good from a programmer's perspective?

    Read the article

  • Only One Month to OpenWorld-San Francisco!

    - by Stephen Slade
    From around the world, the city is expecting 50,000+ guests to flock to this annual extravaganza.  Over 2,000 sessions will focus on Oracle’s latest product offerings, customer case studies, panels of experts and a variety of other hardware, technology, middleware and applications topics. For those interested in the latest capabilities delivered by Oracle’s supply chain applications, the ‘Focus-On’ documents are now available to help guide you in your schedule builder. Schedule builder gives you the ability to create a personalized agenda for the sessions you wish to attend, such as:

        Monday, October 1, 2012, 3:15 pm – 4:15 pm
        General Session: Supply Chain Management—Strategy, Update, and Roadmap
        Richard Jewell, Senior Vice President, Applications Development, Oracle
        Moscone West, Level 2, Room 3014

        Tuesday, October 2, 2012, 10:15 am – 11:15 am
        Oracle Fusion Supply Chain Management: Overview, Strategy, Customer Experiences, and Roadmap
        Jon Chorley, CSO & VP, Product Strategy, Oracle
        Moscone West, Level 2, Room 2006

    There is an exciting lineup of about 100 supply chain sessions at OpenWorld. Contact your sales rep or Oracle partner to obtain a copy of the most current Focus-On document, segmented by pillars such as Manufacturing, Maintenance/EAM, Value Chain Planning, Value Chain Execution, Procurement and Agile/Product Lifecycle Management.  They will provide you with a better informed view as you schedule your time in San Francisco.

    Read the article

  • Life and Career guidance

    - by Andrei TheGiant Haxtor
    Hello programmers. I have a dilemma I'm pondering over. I will be graduating from high school with ~60 credits' worth of community college work (pre-engineering courses), and I am wondering what experienced programmers would suggest I do with my time since I have all of the bull courses out of the way. Should I start taking computer science/engineering courses, or should I take some other courses that interest me (psych, math)? The reason I am asking this is that I like doing a lot of self-study, especially relating to software and tech. I don't like to have the pressure of hard classes on me, so I could make up for the time lost doing the CC courses and dive deep into programming and books. Unfortunately I've only started getting into programming recently, since I didn't have much time because of my course load. Right now I am doing Java and messing around with Android. I would like to get involved in web & mobile development, operating systems, and finance software. If any of you experienced people could please give me some guidance and words of wisdom, I would greatly appreciate it. Sorry that this isn't necessarily related to programming. All the best.

    Read the article

  • XQuery method question, trying to sum values read from xml

    - by Buck
    I'm pretty new to XQuery and I'm trying to write an example function that I can't get to work. I want to read an XML file, parse out the "time" values, sum them as they're read, and return the sum. This is trivial and I'm looking to build more functionality into it, but I'd like to get this working first. Also, I know there's a "sum" function in XQuery that would do just this, but I want to add more to it, so the built-in sum is insufficient for my needs. Here's my function:

        bool example(Zorba* aZorba)
        {
            XQuery_t lQuery = aZorba->compileQuery(
                "for $i in fn:doc('/tmp/products.xml')//time"
                "let $sum := xs:integer($i)"
                " return $sum"
            );
            DynamicContext* lCtx = lQuery->getDynamicContext();
            lCtx->setContextItemAsDocument("temp_measurements.xml", lDocStream);
            try {
                std::cout << lQuery << std::endl;
            } catch (DynamicException& e) {
                std::cerr << e.getDescription() << std::endl;
                return false;
            } catch (StaticException& f) {
                std::cerr << f.getDescription() << f.getErrorCodeAsString(f.getErrorCode()) << std::endl;
                return false;
            }
        }

    It's called with an appropriate main(). If I comment out the line that starts "let $sum..." then this works, in that it returns the time values as a series of integers like this: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3....

    The input file looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <temps>
          <temp>
            <time>0</time>
            <lat>0</lat>
            <long>0</long>
            <value>0</value>
          </temp>
          <temp>
            <time>1</time>
            <lat>0</lat>
            <long>1</long>
            <value>0</value>
          </temp>
          ...
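
    (For readers who just want to see the aggregation the poster describes, here is a minimal sketch in plain Python with the standard library rather than XQuery/Zorba; the file path and element name simply mirror the ones used in the question and are assumptions for illustration.)

        import xml.etree.ElementTree as ET

        def sum_times(path="/tmp/products.xml"):
            """Parse the XML document and total the integer values of every <time> element."""
            tree = ET.parse(path)
            total = 0
            for node in tree.iter("time"):
                # xs:integer($i) in the XQuery corresponds to int(...) here
                total += int(node.text)
            return total

        if __name__ == "__main__":
            print(sum_times())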

    Read the article

  • How to Handle frame rates and synchronizing screen repaints

    - by David Kroukamp
    I would first off say sorry if the title is worded incorrectly. Okay, now let me give the scenario: I'm creating a 2-player fighting game. An average battle will include a map (moving or still) and 2 characters (which are rendered by redrawing a varying number of sprites one after the other). Now at the moment I have a single game loop limiting me to a set number of frames per second (using Java):

        Timer timer = new Timer(0, new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                long beginTime;  // the time when the cycle began
                long timeDiff;   // the time it took for the cycle to execute
                int sleepTime;   // ms to sleep (< 0 if we're behind)
                int fps = 1000 / 40;

                beginTime = System.nanoTime() / 1000000;
                // execute loop to update, check collisions and draw
                gameLoop();
                // calculate how long the cycle took
                timeDiff = System.nanoTime() / 1000000 - beginTime;
                // calculate sleep time
                sleepTime = fps - (int) (timeDiff);
                if (sleepTime > 0) { // if sleepTime > 0 we're OK
                    ((Timer) e.getSource()).setDelay(sleepTime);
                }
            }
        });
        timer.start();

    In gameLoop() characters are drawn to the screen (a character holds an array of images which consists of their current sprites); every gameLoop() call will change the character's current sprite to the next one and loop back if the end is reached. But as you can imagine, if a sprite sequence is only 3 images long then calling gameLoop() 40 times will cause the character's movement to be drawn 40/3 = 13 times. This causes a few minor anomalies in the sprites for some characters. So my question is: how would I go about delivering a set number of frames per second when I have 2 characters on screen with varying numbers of sprites?
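
    (One common way to keep animation speed independent of the render rate is to advance sprites by elapsed time rather than by loop iteration. A minimal sketch of that idea, written in Python rather than the poster's Java, is below; the frame_duration value is an illustrative assumption, not something from the question.)

        import time

        class SpriteAnimation:
            """Picks the current sprite from elapsed time instead of from the number of game-loop calls."""
            def __init__(self, frame_count, frame_duration=0.1):
                self.frame_count = frame_count        # number of images in the sprite sequence
                self.frame_duration = frame_duration  # seconds each image stays on screen
                self.start = time.monotonic()

            def current_frame(self):
                elapsed = time.monotonic() - self.start
                # advance one frame per frame_duration, wrapping around at the end
                return int(elapsed / self.frame_duration) % self.frame_count

        # Example: a 3-image walk cycle queried from a 40 FPS render loop
        walk = SpriteAnimation(frame_count=3, frame_duration=0.15)
        frame_to_draw = walk.current_frame()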

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution and not just a single component part in isolation to what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012); I’ve not included FusionIO ioDrive because there is no public pricing available for it – something I never understand and personally when companies do this I immediately think what are they hiding, luckily in FusionIO’s case the product is proven though is expensive compared to OCZ enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit people! What is the requirement? The requirement is the database will have a static size of 400GB kept static through archiving so growth and trim will balance the database size, the client requires resilience, there will be several hundred call centre staff querying the database where queries will read a small amount of data but there will be no hot spot in the data so the randomness will come across the entire 400GB of the database, estimates predict that the IOps required will be approximately 4,000IOps at peak times, because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO. Something to consider with regard SQL Server; write activity requires synchronous IO to the storage media specifically the transaction log; that means the write thread will wait until the IO is completed and hardened off until the thread can continue execution, the requirement has stated that 30% of the system activity will be write so we can expect a high amount of synchronous activity. The hardware solution needs to be defined; two possible solutions: hard disk or solid state based; the real question now is how many hard disks are required to achieve the IO throughput, the latency and resilience, ditto for the solid state. Hard Drive solution On a test on an HP DL380, P410i controller using IOMeter against a single 15Krpm 146GB SAS drive, the throughput given on a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only partition on the disk thus the 40GiB file is on the outer edge of the drive so more sectors can be read before head movement is required: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but the test file was 130GiB: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 528 IOps at an average latency of 217.49ms (4 MiB/s). 
From the result it is clear random performance gets worse as the disk fills up – I’m currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, looking at the random performance of the single drive when only 40 GiB of the 146 GB is used gives nearly the IOps required, but the latency is way out. Luckily I have tested 6 x 146GB 15Krpm SAS drives in a RAID 0 using the same test methodology; for the same 130 GiB test above, the performance boost is near linear as drives are added: for each drive added throughput goes up by 5 MiB/sec, IOps by 700 IOps, and latency reduces by nearly 50% per drive added (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130 GiB is spread out more as you add drives (130/1, 130/2, 130/3 etc.), so implicit short stroking is occurring: there is less of the file on each drive, so less head movement is required. The best latency is still 30 ms, but we have the IOps required now; however, that’s on a 130 GiB file and not the 400 GiB we need. A reality check here: the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data the worse the effect. For argument's sake let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get both the throughput and the resilience (RAID 5 would degrade performance). Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will need disk controllers and a separate external disk array, because the likelihood is that the server itself won’t take the drives. A quick spec from DELL for a PowerVault MD1220, which gives dual pathing with 16 x 146GB 15Krpm 2.5” disks, is priced at £7,438.00; note it's probably more once we add the two controller cards to sit in the server, racking, etc. The minimum cost, taking the DELL quote as an example, is therefore: {Cost of Hardware} / {Storage Required} = £7,438.00 / 400 = £18.59 per GB. £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage capacity of 1,168GB, but the actual storage requirement is only 400GB, and the extra disks have had to be purchased to get the IOps up. Solid State Drive solution: A single card significantly exceeds the IOps and latency required; for resilience, two will be required. (£2,316.54 * 2) / 400 = £11.58 per GB. With the SSD solution only two PCIe sockets are required: no external disk units, no additional controllers, no redundant controllers, etc. Conclusion: I hope that by showing you an example the myth that hard disk drives are cheaper per GiB than solid state has now been dispelled: £11.58 per GB for SSD compared to £18.59 for hard disk. I’ve not even touched on the running costs; compare the cost of running 16 hard disks, that’s a lot of heat and power compared to two PCIe cards! Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date as well - that's just way overdone; yes, 5 years ago, but now - no.
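
(The price-per-usable-gigabyte arithmetic above is easy to sanity-check. The short Python sketch below is purely illustrative; it just reproduces the article's two divisions using the figures quoted in the text.)

    # Figures quoted in the article (GBP, 22nd March 2012 prices)
    storage_required_gb = 400
    hdd_solution_cost = 7438.00        # DELL PowerVault MD1220 quote: 16 x 146GB 15Krpm SAS disks
    ssd_solution_cost = 2 * 2316.54    # two OCZ 600GB Z-Drive R4 cards for resilience

    hdd_per_gb = hdd_solution_cost / storage_required_gb   # roughly 18.59, the article's £18.59/GB
    ssd_per_gb = ssd_solution_cost / storage_required_gb   # roughly 11.58, the article's £11.58/GB
    print(f"HDD solution: about £{hdd_per_gb:.2f} per usable GB")
    print(f"SSD solution: about £{ssd_per_gb:.2f} per usable GB")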

    Read the article

  • Recommended: IntelliCommand for Visual Studio 2010/2012

    - by WeigeltRo
    The Morning Brew has been a great news source for developers for many years now. In its most recent post it mentioned an extension for Visual Studio 2010 and 2012 called IntelliCommand that implements something I had wanted for quite some time: some kind of dynamic help for hotkeys. IntelliCommand shows a popup when you press and hold Ctrl, Shift or Alt (or combinations thereof) for a configurable amount of time, or after you press the first key combination of a chord shortcut key (e.g. Ctrl-E) and wait for an (independently configurable) amount of time. In the following screenshot I pressed and released Ctrl-E, and after a short delay the popup appeared. The extension is available in the Visual Studio Gallery, so finding, downloading and installing it via the Extension Manager is extremely simple. The default delays (2000 / 1600 milliseconds) are a bit long for my liking, but this can be changed in Tools – Options. So far things are working great on my machine. Some known issues do seem to exist, though (e.g. that the extension doesn’t work on non-EN versions of Visual Studio). See the author’s comments in the announcement blog post and in the Visual Studio Gallery for more information.

    Read the article

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little bit confusing. Let me try to explain it a little bit better. I am building a logging program. The program will have 3 main states:

    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring the time (record all data).
    3. Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1).

    Now, the use case is this. I have been experiencing some network bottlenecks from time to time in our network, so I want to build a system to record TCP traffic when it detects the bottleneck (detection via Nagios). However, by the time it detects the bottlenecking, most of the useful data has already been transmitted. So, what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file. Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones if in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point. Any ideas? Should I just do a rotating file system and combine them into one at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that there is no need for any post-processing? Thanks. Oh, and the language doesn't matter as much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design)...
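
    (As a rough illustration of the three states the poster describes, here is one way the buffering logic could be sketched in Python; the class and method names are made up for the example, and a real capture daemon would be feeding it packets rather than strings.)

        import time
        from collections import deque

        class RollingCapture:
            """Keeps only the last `window` seconds of records unless 'store everything' mode is on."""

            def __init__(self, window=600):        # 600 s = the 10-minute window from the question
                self.window = window
                self.records = deque()             # (timestamp, data) pairs
                self.keep_everything = False       # state 2: triggered by the Nagios alert

            def add(self, data):
                now = time.time()
                self.records.append((now, data))
                if not self.keep_everything:
                    # state 1: drop anything older than the window
                    while self.records and self.records[0][0] < now - self.window:
                        self.records.popleft()

            def start_alert(self):
                # Nagios alert fired: stop discarding, keep all data from here on
                self.keep_everything = True

            def end_alert(self, path):
                # Nagios recovered (state 3): dump the last 10 minutes plus the whole incident, then reset
                with open(path, "w") as f:
                    for ts, data in self.records:
                        f.write(f"{ts}\t{data}\n")
                self.records.clear()
                self.keep_everything = False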

    Read the article

  • Optimizing hash lookup & memory performance in Go

    - by Moishe
    As an exercise, I'm implementing HashLife in Go. In brief, HashLife works by memoizing nodes in a quadtree so that once a given node's value in the future has been calculated, it can just be looked up instead of being re-calculated. So e.g. if you have a node at the 8x8 level, you remember it by its four children (each at the 4x4 level). So the next time you see an 8x8 node, when you calculate the next generation, you first check whether you've already seen a node with those same four children. This is extended up through all levels of the quadtree, which gives you some pretty amazing optimizations if e.g. you're 10 levels above the leaves. Unsurprisingly, it looks like the performance crux of this is the lookup of nodes by child-node values. Currently I have a hashmap of {&upper_left_node, &upper_right_node, &lower_left_node, &lower_right_node} -> node, so my lookup function is this:

        func FindNode(ul, ur, ll, lr *Node) *Node {
            var node *Node
            var ok bool
            nc := NodeChildren{ul, ur, ll, lr}
            node, ok = NodeMap[nc]
            if ok {
                return node
            }
            node = &Node{ul, ur, ll, lr, 0, ul.Level + 1, nil}
            NodeMap[nc] = node
            return node
        }

    What I'm trying to figure out is whether the "nc := NodeChildren..." line causes a memory allocation each time the function is called. If it does, can I/should I move the declaration to the global scope and just modify the values each time this function is called? Or is there a more efficient way to do this? Any advice/feedback would be welcome (even coding style nits; this is literally the first thing I've written in Go, so I'd love any feedback).
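
    (For readers unfamiliar with the memoization trick the question relies on, the idea of canonicalizing quadtree nodes by their four children can be sketched in a few lines of Python; this is a rough analogue of the poster's Go FindNode, not a drop-in replacement, and the names are illustrative only.)

        # Canonical-node cache: one Node instance per unique (ul, ur, ll, lr) combination.
        node_cache = {}

        class Node:
            __slots__ = ("ul", "ur", "ll", "lr", "level", "next_gen")

            def __init__(self, ul, ur, ll, lr, level):
                self.ul, self.ur, self.ll, self.lr = ul, ur, ll, lr
                self.level = level
                self.next_gen = None   # memoized "next generation" result, filled in later

        def find_node(ul, ur, ll, lr):
            """Return the canonical node with these children, creating it on first sight."""
            # Children are assumed to be canonical already (built bottom-up),
            # so hashing them by object identity is enough to identify the combination.
            key = (ul, ur, ll, lr)
            node = node_cache.get(key)
            if node is None:
                node = Node(ul, ur, ll, lr, ul.level + 1)
                node_cache[key] = node
            return node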

    Read the article

  • A Trip to the Moon (Le Voyage dans la Lune) [Super Retro Classic Sci-Fi Video]

    - by Asian Angel
    If you are into retro sci-fi movies, then you will definitely want to have a look at this French classic from 1902. This silent movie is only 10.5 minutes long, but is well worth watching and makes for a fun romp through the early days of sci-fi. From YouTube: A Trip to the Moon (French: Le Voyage dans la lune) is a 1902 French black and white silent science fiction film. It is loosely based on two popular novels of the time: From the Earth to the Moon by Jules Verne and The First Men in the Moon by H. G. Wells. The film was written and directed by Georges Melies, assisted by his brother Gaston. The film runs 14 minutes if projected at 16 frames per second, which was the standard frame rate at the time the film was produced. It was extremely popular at the time of its release and is the best-known of the hundreds of fantasy films made by Melies. A Trip to the Moon is the first science fiction film, and utilizes innovative animation and special effects, including the iconic shot of the rocketship landing in the Moon’s eye. A Trip to the Moon / Le Voyage dans la lune – 1902 [via 20 best designs in sci-fi movies - Page 3 (Creative Bloq)]

    Read the article

  • unit level testing, agile, and refactoring

    - by dsollen
    I'm working in a very agile development environment with a small number of people, and I do the vast majority of the programming myself. I've gotten to the testing phase and find myself writing mostly functional-level tests, which in theory I should be leaving to our tester (in practice I don't entirely... trust our tester to detect and identify defects well enough to leave him the sole writer of functional tests). In theory what I should be writing is unit-level tests. However, I'm not sure it's worth the expense. Unit testing takes more time than functional testing, since I have to set up mocks and plugs for smaller units that weren't designed to run in isolation. More importantly, I refactor and redesign heavily; part of this is due to my inheriting code that needed heavy redesign and is still being cleaned up, but even once I've finished removing the parts that need work, I'm sure that in the act of expanding the code I'll still do a decent amount of refactoring and redesign. It feels as if I will break my unit tests, forcing wasted time refactoring them as well, because unit tests, by definition, have to be coupled so closely to the code structure. So: is it worth all that time when functional tests, which will never break when I refactor/redesign, should find most defects? Do unit tests really provide that much extra defect detection over thorough functional tests? And how does one create good unit tests that work with quick, agile code that is modified rapidly? P.S. I would be happy with links to anything one considers an excellent resource on how to 'do' unit testing in a rapidly changing environment. Edit: to clarify, I am doing a bit of very unofficial TDD; I just seem to be writing tests at what would be considered a functional level rather than a unit level. I think part of this is because I own nearly all of the project, so I don't feel the need to limit the scope as much; and part of it is that it's daunting to think of going back and retroactively adding enough unit tests to cover enough code that I can feel comfortable testing only a unit without the full functionality, and trust that the unit still works with the rest of the units.
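
    The question doesn't name a language, but the usual way to keep unit tests cheap under heavy refactoring is to pin them to a unit's observable behaviour (its public contract) rather than to its internal structure, so that reshuffling the internals doesn't touch the tests. A minimal Go sketch of that idea, with an invented Discounter example that is not taken from the question:

// discount_test.go - a behaviour-level unit test; the Discounter contract and the
// 10%-over-100 rule are invented for illustration.
package pricing

import "testing"

// Discounter is the behaviour callers rely on; the test targets this contract only.
type Discounter interface {
    Apply(total float64) float64
}

// StandardDiscounter takes 10% off totals above 100. Its internals can be
// refactored freely; the test below only cares about inputs and outputs.
type StandardDiscounter struct{}

func (StandardDiscounter) Apply(total float64) float64 {
    if total > 100 {
        return total - total/10
    }
    return total
}

func TestStandardDiscounterApply(t *testing.T) {
    cases := []struct {
        name  string
        total float64
        want  float64
    }{
        {"no discount at or below 100", 100, 100},
        {"ten percent off above 100", 200, 180},
    }
    var d Discounter = StandardDiscounter{} // exercise the contract, not the concrete type
    for _, c := range cases {
        if got := d.Apply(c.total); got != c.want {
            t.Errorf("%s: Apply(%v) = %v, want %v", c.name, c.total, got, c.want)
        }
    }
}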

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will at the end make the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available both in general data mining as well as in specific fraud detection techniques: supervised or directed and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. 
Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model (a short numeric sketch of lift follows this excerpt). Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds that include predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established. There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore, it can be quite difficult if not impossible to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms. The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. PowerPivot for Excel, which is freely downloadable for Excel 2010 and already included in Excel 2013, brings OLAP functionalities directly into Excel; together, these add-ins make it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
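
To make the notion of lift from the excerpt above concrete (the numbers are invented, not taken from the whitepaper): if 0.5% of all transactions are fraudulent but 5% of the transactions flagged by the model turn out to be fraudulent, the model's sample is ten times richer in frauds than a random sample of the same size, i.e. a lift of 10. A tiny Go sketch of that arithmetic:

// lift.go - toy illustration of model lift versus random sampling.
package main

import "fmt"

// lift returns how many times more frauds a model-selected sample contains
// than a random sample of the same size would contain on average.
func lift(fraudRateInSelected, fraudRateOverall float64) float64 {
    return fraudRateInSelected / fraudRateOverall
}

func main() {
    overall := 0.005 // 0.5% of all transactions are fraudulent
    selected := 0.05 // 5% of model-flagged transactions are fraudulent
    fmt.Printf("lift over random sampling: %.1f\n", lift(selected, overall)) // prints 10.0
}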

    Read the article

  • On the frontier between work and home

    - by MPelletier
    I think we've all been there: you hear someone say "hey, wouldn't it be nice if platform X had feature Y?" You look around (on SO!), and the feature really doesn't exist, even though it would probably be useful in many contexts. So it's pretty generic. Your mind wanders for a bit. "How tough would it be? Well, it'd probably be just a snippet. And an ad-hoc function. And maybe a wrapper." And boom, before you know it, you've spent a dozen hours of your free time implementing a FooFeature that's really neat and generic. The kind of code you might not even have the time to polish at work; it would be a bit rushed and not so well documented. So now you wonder, "wouldn't this be useful to others?" And you've got your blog, maybe a CodeProject account, and the colleague who asked whether FooFeature exists might, haphazardly, have come across that blog entry, had it existed before they told you. On the other hand, there's the NDA. It's sort of vague and general. It doesn't forbid you from coding at home, but it's clear on sharing company code: that's a big NO. But this isn't company code. Or is it? Or will it be? So, what do you do with code (that's more than just a snippet) that you wrote in your off time with universality in mind, but whose idea came from work, and that will most likely be used at work? Can it be published?

    Read the article

  • Bad style programming, am I pretending too much?

    - by Luca
    I have realized that I work in an office with quite a bad code base. The base library, built up over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could call myself a "perfectionist" (though often I'm not), and I thought about refactoring an application (really, a portion of it) which needs a new (complex) feature. But today I really realized that it's not possible to refactor that application's modules in a reasonable time (say, 24-26 hours out of the 160 hours available for the task). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut & paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, ... and I'm sure I could continue ... buffer overflows because of sprintf, indentation (!), spacing, non-existent use of the const modifier. I would say that every source line was written quite randomly, as needed, without any design in mind (not even the obvious one). (Am I in hell?) The problem is that the application was developed by a colleague of mine, and I felt very frustrated. So I decided to raise the "situation" with my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant solution" .... Am I asking too much of my colleague when I expect readable, maintainable and documented code? Do I expect too much in not wanting to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

< Previous Page | 387 388 389 390 391 392 393 394 395 396 397 398  | Next Page >