Search Results

Search found 8083 results on 324 pages for 'total newbie'.


  • Setting user's group and umask has no effect

    - by Andrew Vit
    I'm trying to allow my "deploy" user to have access to files created by www-data: I added "deploy" to the www-data group, and I set the umask to 002. When I run the following commands, I'm not seeing the result I expect:

        deploy@ubuntu-lucid-32-generic:/var/www$ groups
        www-data adm dialout cdrom plugdev lpadmin sambashare admin deploy sysadmin
        deploy@ubuntu-lucid-32-generic:/var/www$ newgrp www-data
        deploy@ubuntu-lucid-32-generic:/var/www$ umask
        0002
        deploy@ubuntu-lucid-32-generic:/var/www$ mkdir test
        deploy@ubuntu-lucid-32-generic:/var/www$ ls -la test
        total 0
        drwxr-xr-x 1 deploy deploy  68 Nov  7 20:37 .
        drwxr-xr-x 1 deploy deploy 476 Nov  7 20:37 ..

    I see that the folder doesn't belong to the www-data group, and the folder permissions don't have group-write (775). Note that the /var/www directory is owned by the deploy user:

        drwxr-xr-x 1 deploy deploy 510 Nov  7 20:45 .

    How can I give www-data selective access to directories? Or, how can I share the /var/www directory with my deploy user? I don't care who owns it, as long as I can write to it, and so can www-data. (Ideally I would set up a directory with SGID access for www-data.)
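
    A minimal sketch of the SGID setup mentioned at the end, assuming a shared directory at /var/www/shared (the path is illustrative; the setgid bit makes new files inherit the www-data group, and the umask must still be 002 for group-write to stick):

        sudo mkdir -p /var/www/shared
        sudo chgrp www-data /var/www/shared
        sudo chmod 2775 /var/www/shared     # leading 2 = setgid: new files inherit the www-data group
        umask 002                           # make files created in this shell group-writable
        touch /var/www/shared/demo && ls -l /var/www/shared/demo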

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to solving this problem, and I am introducing this approach in my PASS Summit 2013 session. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, make up the whole whitepaper.

    Abstract

    With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge to the customer, thus making it possible for the customer to continue learning independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction

    Fraud is a criminal or deceptive activity undertaken with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS.

    Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism that tries to disable fraud by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, a probability between 0 and 1, that serves as a score for evaluating whether the transaction is fraudulent. A fraud detection mechanism cannot detect fraud with 100% certainty; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example random sampling.

    There are two principal data mining techniques, available both in general data mining and in specific fraud detection techniques: supervised (directed) and unsupervised (undirected). Supervised techniques, or data mining models, use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent. Customers do, at some point in time, report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. Once the patterns and rules that lead to frauds are learned through the model training process, they can be used to predict the fraud flag on new incoming transactions.

    Unsupervised techniques analyze data without prior knowledge and without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, the data set selected for checking by using data mining knowledge should contain more frauds than a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. Supervised methods typically give a much better lift than unsupervised ones; however, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for checking whether the supervised models are still efficient. The accuracy of predictions drops over time: patterns of credit card usage, for example, change over time, and fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, including both predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As already mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small: typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with such a low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS) that supports all the activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. SSIS is typically used for loading a DW, and in addition it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the occasional data cleansing process by maintaining a knowledge base. Master Data Services (MDS) is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data in any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite, which includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part, which includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics there. Together with PowerPivot for Excel, which is freely downloadable for Excel 2010 and already included in Excel 2013, this brings OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.

  • Oracle Number One in Supply Chain Planning

    - by Stephen Slade
    Something nice to write home about! I saw this accomplishment and it is worth promoting, with special congrats to the VCP team. Read on:

    Summary: Oracle is the #1 player in Supply Chain Planning, according to research firm ARC Advisory Group.

    Details: The report (Source: ARC Advisory Group, "Supply Chain Planning Worldwide Outlook, Market Analysis and Forecast through 2016," Clint Reiser, Steve Banker) gives Oracle 21.1% of revenue share, compared to SAP, which was second at 18.6%. JDA Software, Aspen, Logility, and Infor were the next players in the market. The total market was valued at $1.506B. ARC counts software (new licenses and upgrades), implementation services, maintenance and support, and SaaS in its definition. ARC defines supply chain planning to include four key application areas: Extended SCP, Manufacturing Planning, Inventory/Distribution Planning, and Demand Management. Extended SCP consists of Network Design, Capable to Promise, SCP Composites, and Extended Supply Chain BI software. In the report, ARC further gives Oracle the number one spot in both the Software Revenues and Services Revenues subsegments, as well as in many vertical areas such as Government, Electronics and Electrical, Medical Products, Pharmaceutical, and Wholesale/Distribution. ARC also issued a forecast that predicts SCP revenue will grow from $1.506B in 2011 to $2.172B in 2016, a CAGR of 7.6%. The report has several positive quotes about Oracle, including calling Oracle a "visionary," and states that "Oracle has leveraged a broad set of home-grown and acquired offerings to create a comprehensive, integrated, yet modular suite with applicability to a wide range of industries."

    Blog Link: http://blog.us.oracle.com/marketdata/?97119896  (shawn willett@oracle com)
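
    As a back-of-the-envelope check on the forecast (my own arithmetic, not from the report), growth from $1.506B in 2011 to $2.172B in 2016, i.e. over five years, does work out to the quoted 7.6% CAGR:

        $ awk 'BEGIN { printf "CAGR: %.1f%%\n", (exp(log(2.172 / 1.506) / 5) - 1) * 100 }'
        CAGR: 7.6%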

  • The biggest ADF conference "down under"

    - by Chris Muir
    While Oracle Open World is the place to be for ADF presentations, for Aussies living in Perth, San Francisco is a tad far away (believe me from experience, the 23-hour flight from PER-SYD-SFO is tedious). That's why I'm very excited to see that this year's Australian Oracle User Group conference in Perth is running its largest set of ADF presentations to date: 5! Okay, it doesn't compare to the 60 ADF sessions at OOW, but it's a small conference of around 300 people that runs for 2 days with 54 sessions total, not 40,000 people over 5 days with 1900+ sessions, so I think that's a good effort for a conference that's at the end of the earth! What's even better about this year's conference is that AUSOUG is moving beyond just consultants and Oracle staff presenting, to also include customers presenting on ADF. This again proves Perth is a little ADF hotspot, which brings a tear to an ADF product manager's eye, let me tell you ;-) The ADF sessions will include:

        Kevin Payne - JWH Group - ADF Mobile Application Development
        Matthew Carrigy - Department of Finance Western Australia - The times, they are a-changin' - An Oracle Forms to JDeveloper ADF Case Study
        Penny Cookson & Chris Noonan - Sage Computing Services - Impress your bosses with JDeveloper ADF dashboards on their iPads

    ...oh and...

        Chris Muir - Oracle Corporation - Speed-Dating Oracle JDeveloper 12c and Oracle ADF New Features
        Chris Muir - Oracle Corporation - Develop Mobile Apps for Smart Devices: Converging Web and Native Applications

    You can check out the conference schedule here. I hope you'll support these ADF presenters by attending the AUSOUG Perth conference; I look forward to seeing you there.

  • The Raspberry Pi Now Has Its Own App Store

    - by Jason Fitzpatrick
    Raspberry Pi, the credit-card sized computer with an ARM processor, now has its own app store, where Raspberry Pi hobbyists and developers can share their creations in a one-stop location accessible to all Raspberry Pi users. In today's press release about the store, the Raspberry Pi Foundation writes: We've been amazed by the variety of software that people have written for, or ported to, the Raspberry Pi. Today, together with our friends at IndieCity and Velocix, we're launching the Pi Store to make it easier for developers of all ages to share their games, applications, tools and tutorials with the rest of the community. The Pi Store will, we hope, become a one-stop shop for all your Raspberry Pi needs; it's also an easier way into the Raspberry Pi experience for total beginners, who will find everything they need to get going in one place, for free. The store runs as an X application under Raspbian, and allows users to download content, and to upload their own content for moderation and release. At launch, we have 23 free titles in the store, ranging from utilities like LibreOffice and Asterisk to classic games like Freeciv and OpenTTD and Raspberry Pi exclusive Iridium Rising. We also have one piece of commercial content: the excellent Storm in a Teacup from Cobra Mobile. For more information about the store, including how to install the app store on your Pi, check out the full press release here. To get started browsing the store, hit up the link below.

  • WinUSB formatting and installation error, lost data storage on 32GB flash drive

    - by Cary Felton
    I recently purchased a PNY brand 32GB USB drive to use for configuring a dual boot on my computer and for HDD file backups. I also recently installed Zorin OS 6.1 as my main OS, then downloaded WinUSB for Linux and set it to install the ISO for Windows 8 Release Preview (the activation code is still good until Jan 2013), and there was an error during the installation. I rebooted the computer and plugged in the USB drive, and I went from 32GB of usable space to somehow having a partitioned drive (mounted as two separate drives), one being 3.7GB and the other being, I believe, 25GB or so. I then formatted the drive with GParted back to FAT32, and it still read as a Windows USB with the installation files on it. So then I took my USB drive to my library, which runs Windows 8, installed BootICE, and completely reformatted my USB drive; somehow it is usable, but still not perfect. It reads that I only have 29.9GB of usable space. I have already thought of the non-usable area, and that doesn't account for this error, as when I first bought the drive and plugged it in on Linux, it read the total drive space as 32.2GB with exactly 32GB usable. Somehow I am short by 2GB of space, which is very critical for what I am doing. I love Linux; I only needed Windows for Blu-ray playback with DAPlayer, as it won't work well in Wine, but if I keep losing space on my USB drives I'm afraid I'll have to switch back. Any help would be appreciated, as I already visited PNY's site and they have no support for this issue, and I'm not that fond of partitions, file systems, or formatting.
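
    For what it's worth, 32GB in decimal units is about 29.8GiB in the binary units many tools report, so the "missing" 2GB may just be a units difference rather than lost space. A hedged sketch for wiping the stick and rebuilding a single FAT32 partition from Linux (assuming the drive is /dev/sdb; verify with lsblk first, because these commands destroy everything on the device):

        sudo umount /dev/sdb1 || true                  # unmount the stick if it is mounted
        sudo wipefs -a /dev/sdb                        # clear old partition and filesystem signatures
        sudo parted -s /dev/sdb mklabel msdos          # write a fresh MBR partition table
        sudo parted -s /dev/sdb mkpart primary fat32 1MiB 100%
        sudo mkfs.vfat -F 32 -n USBDRIVE /dev/sdb1     # format the new partition as FAT32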

  • Oracle Systems and Solutions at CloudExpo NY 2012

    - by ferhat
    Oracle's Larry Ellison and Mark Hurd just unveiled the industry's broadest cloud strategy on June 6, with services based on industry standards and 100+ enterprise applications live in the Cloud today! It is the broadest strategy to support your journey into the cloud, along any path you choose and at any pace your business requires. This is great assurance for your journey into the clouds and, at the same time, quite a temptation, don't you think? We will be at the Cloud Expo conference taking place June 11-14 in New York. Oracle is a Platinum Plus sponsor of the 10th International Cloud Computing Conference & Expo 2012 East, and is also glad to offer complimentary VIP Gold Passes to the conference. We wish everyone a great and productive time with all the fellow cloudsters. We, the systems solutions group at Oracle, have prepared the Oracle Optimized Solution for Enterprise Cloud Infrastructure to help you start your Infrastructure-as-a-Service journey with ease, confidence, speed, and savings. In this solution we are now bringing together the power of Oracle Solaris and SPARC T4 servers. We will be at the Cloud Boot Camp on Wednesday, June 13th, discussing how this combination can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. We will also be at booth #511 on the Expo floor throughout the conference. Join us for the keynote, general session, and technical sessions with Oracle:

        Keynote Session: A Pragmatic Journey to the Cloud - Tuesday, June 12, 2012
        General Session: Oracle Cloud - An Enterprise Cloud for Business-Critical Applications - Monday, June 11, 2012
        Conference Session: Accelerate Enterprise Cloud Deployment and Gain Total Cloud Control - Monday, June 11, 2012
        Conference Session: The Java EE 7 Platform: Developing for the Cloud - Monday, June 11, 2012
        Conference Session: Integrating Big Data into Your Data Center: A Big Data Reference Architecture - Monday, June 11, 2012
        Conference Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder - Tuesday, June 12, 2012
        Conference Session: Building a Private, Public, or Hybrid Cloud? Simplify Your Cloud with Oracle's Complete Cloud Solution - Tuesday, June 12, 2012
        Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC - Wednesday, June 13, 2012

  • SQL SERVER – New Look for CodePlex – Project Hosting for Open Source Software

    - by pinaldave
    CodePlex is my favorite site. CodePlex is Microsoft's free open source project hosting site: you can create projects to share with the world, collaborate with others on their projects, and download open source software. It is a great place to find many open source projects to explore. All the software is free and open source. I often go there at intervals to check what is new in the SQL Server field as well as in other technologies. Yesterday when I visited, I had a nice surprise: it has had a total makeover and now looks both decent and elegant. I have noticed that when I talk about CodePlex in the user community, not everybody knows about it. The quickest way I explain what CodePlex is, is to start naming a few of the projects available there; suddenly I notice a few hands going up, recognizing the projects. This indirectly proves that many of us use CodePlex but do not pay special attention to what it actually is. Let me name a few popular CodePlex projects here:

        SQL Server Sample Database [link]
        Image Resizer for Windows [link]
        Ajax Control Toolkit [link]
        Skype Voice Changer [link]
        Silverlight Toolkit [link]
        Windows 7 USB/DVD Download Tool [link]
        Orchard Project [link]

    There are very interesting SQL Server projects available on CodePlex as well. I am listing a few of them here for reference, in no particular order:

        SQL Server Sample Database [link]
        SQL Server Compact ToolBox [link]
        Microsoft Drivers for PHP for SQL Server [link]
        Internals Viewer for SQL Server [link]
        SQL Server Spatial Tools [link]
        SQL Monitor - managing SQL Server performance [link]
        SQL Server 2008 Extended Events SSMS Addin [link]

    How many of the above-mentioned projects have you come across before? Leave a comment; it will be interesting to know what our community is familiar with. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Big project layout: adding a new feature on multiple sub-projects

    - by Shiplu
    I want to know how to manage a big project with many components under a version control management system. In my current project there are 4 major parts:

        Web
        Server
        Admin console
        Platform

    The web and server parts use 2 libraries that I wrote. In total there are 5 git repositories and 1 mercurial repository. The project build script is in the Platform repository; it automates the whole building process. The problem is that when I add a new feature that affects multiple components, I have to create a branch for each of the affected repos, implement the feature, and merge it back (a scripted sketch of that workflow follows below). My gut feeling is "something is wrong". So should I create a single repo and put all the components there? I think branching will be easier in that case. Or should I just keep doing what I am doing right now? In that case, how do I solve this problem of creating a branch on each repository?
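
    One hedged way to live with the multi-repo layout is to script the cross-repo branching. A minimal sketch for the git repositories, assuming illustrative directory and branch names (none of these come from the original post):

        #!/usr/bin/env bash
        set -e
        repos="web server admin-console platform lib-one"   # hypothetical repo directories
        feature="feature/new-thing"                         # hypothetical branch name

        for r in $repos; do
            git -C "$r" checkout -b "$feature"              # open the same branch in each affected repo
        done

        # ...implement and commit in each repo, then merge everything back:
        for r in $repos; do
            git -C "$r" checkout master
            git -C "$r" merge --no-ff "$feature"
        done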

  • Oracle(R) Buys Pre-Paid Software Assets From eServGlobal

    - by Paulo Folgado
    Oracle to Deliver Scalable Carrier-Grade Pre-Paid Solution Based on Open, Flexible IT-Based Platform

    News Facts

      • Oracle has agreed to acquire certain pre-paid assets of eServGlobal, a provider of advanced IT-based, pre-paid charging solutions for the communications industry.
      • eServGlobal's Universal Service Platform (USP) includes a pre-paid charging application, a network-services platform and a messaging gateway. The ChargingMax, NumberMax, uVOMS, MessageMax, PromoMax Express and Social Relationship Management software currently supports more than 25 tier-one customers including the world's largest IT-based installation of pre-paid services.
      • The combination of Oracle Communications Billing and Revenue Management and the USP applications is expected to accelerate the shift from network- to IT-based pre-paid systems by providing the first convergent, open IT-based platform from a leading business software and hardware systems company.
      • Customers are expected to benefit from traditional carrier-grade, pre-paid service authorization with IT-grade flexibility that supports any service or network, is easier to deploy and maintain and delivers an overall lower total cost of ownership.
      • The transaction is expected to close in the second half of this year.

    Supporting Quote

      • "The majority of mobile phone users worldwide use pre-paid plans, and that number is growing exponentially. Oracle Communications applications combined with the pre-paid software assets from eServGlobal will provide our customers with highly available and scalable carrier-grade, pre-paid software on an open, convergent platform. This will enable our customers to deliver traditional pre-paid voice services and easily introduce hybrid pre-paid and post-paid plans with targeted pricing, promotions and service bundles that include voice, data and network services," said Liam Maxwell, vice president of products, Oracle Communications.

    Supporting Resources

      • About Oracle and eServGlobal
      • USP General Presentation
      • FAQ

  • Scientists Demonstrate First-Person Shooter Games Improve Vision

    - by Jason Fitzpatrick
    Need an excuse to log a few more hours playing Call of Duty or Medal of Honor? Scientists demonstrated improved vision in test subjects after daily doses of first-person shooter games. Scientists at McMaster University took subjects who, as the result of surgery correcting congenital cataracts, had less than 20/20 vision. Subjects played Medal of Honor for a total of 40 hours over the course of 4 weeks before having their vision retested. The results? The CBC reports: The participants found improvements in detail, perception of motion and in low contrast settings. In essence, players could now read about one to one-and-a-half more lines on an optometrist's eye chart. "We were thrilled," Lewis said. "It's very exciting to open up a new world of hope for these people."

  • Oracle at Work video: Banca Transilvania, Exadata and Exalogic, applications and data warehouse

    - by user645740
    In this installment of the ORACLE AT WORK video series, senior executives of the Romanian bank Banca Transilvania share their hands-on experience of running the Exadata Database Machine and Exalogic solutions in a bank. Video link: Video Case Study with Banca Transilvania.

    Headquartered in Cluj-Napoca, Banca Transilvania is the third-largest bank in Romania, serving 1.5 million active customers, and the bank is growing. When the second-generation Exadata V2 series was released in 2009, Banca Transilvania was the first banking customer for it in the world. They first moved their data warehouse to Exadata and saw a huge performance jump, which the business was able to exploit: more reports, extremely good response times, and faster batch runs. The goal is to consolidate the bank's architecture onto Exadata and Exalogic. In the video, Banca Transilvania's leaders (Robert C. Rekkers, CEO; Leontin Toderici, COO; Marius Ursuti, IT Director) talk about the benefits of their Exadata + Exalogic solution:

      • they are consolidating their applications on Oracle, including the card system
      • the new core-banking system, FLEXCUBE, will also run on Exadata, helping the bank grow and handle the increasing number of customers and transactions
      • Siebel CRM
      • they are also testing Oracle E-Business Suite applications on the Exa architecture
      • best performance for Oracle databases and lower response times; for the data warehouse and Oracle OLAP: cost reduction and 30x faster operation
      • Exalogic: consolidation of application servers on WebLogic Server
      • the bank's own tests show that Exalogic's advantages stand out even more when it is used together with Exadata
      • work could start within 24 hours of setting up the machines, thanks to the pre-installed software and the thoroughly factory-tested and tuned systems
      • a simpler architecture, which is also cheaper to manage
      • in general, deployment time for every system drops on the Exa architecture
      • it enables faster introduction of new products and services
      • cost reduction: the total cost decreased
      • they highlighted the flexibility of Oracle Corporation's well-prepared team
      • Oracle's continuous development, innovation and support

    Video link: Video Case Study with Banca Transilvania.

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started at an informatics university and I'm attending a programming course about C(++). The professor prefers to teach very few things (in 3 months we have just reached the topic of functions) and connects every topic with a style of program design that is somehow similar to Design by Contract. Basically, what he asks us to do is to write every exercise with comments stating pre-conditions, post-conditions and invariants that should prove the correctness of each program we write. But this doesn't make any sense to me. I mean, OK: maybe writing down your thoughts prevents you from making some mistakes, but if this is all abstract, then if your intuition about the program is wrong, you'll write the program wrong, and then you'll probably also write the pre- and post-conditions wrong, auto-convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed OK and that had correct pre- and post-conditions too, but at the moment of testing they were just completely wrong. I had some programming experience before this course, and I had written a lot of lines of code before; I found myself comfortable with just writing a program and unit testing it. It takes less time to accomplish and is less "abstract" than thinking about what every single piece of your program should do in every case (which is kind of like mentally testing it). Finally, all these pre- and post-conditions take me like 80% of the total time of the exercise. It's harder to think about getting these pre- and post-conditions right than to write the program itself. Since ours is like the only course at the only university, probably in the entire world, that does these things, could someone please tell me how I should handle this? Am I right in thinking that this isn't worth anything? Should I change university? (There are like double the number of people attending that course, and it seems that usually very few people pass the exam in the first year.) Or should I convince myself that this method is right?

  • Global vs. Local Monthly Searches in Adwords keyword tool

    - by Gregory
    I'm trying to learn how to use the keyword tool in Adwords. Here's what I entered:

        Country: Russia
        Language: Russian
        Desktop and laptop devices

    And the keyword was туры в Израиль (tours to Israel, in Russian Cyrillic letters), as a broad match type. Now, the results that I got were:

        Global monthly: 60,500
        Local monthly: 40,500

    If I got it right, "Global monthly" means in this context: worldwide average monthly searches for this search term, in ANY language, on any Google search site (google.ru, google.com.ua, google.com, google.fr etc.). It's all nice, BUT... then I made a query for tours to Israel in English in the US, and I got:

        Global monthly: 60,500
        Local monthly: 27,100

    That doesn't make any sense to me though! How come the total sum (the global) is actually a smaller number than the combined sum of just TWO countries??? (27,100 + 40,500 = 67,600 > 60,500.) By "any language" do they mean a translation of the term into ANY possible language? Or maybe by "language" Google means the language of the searchers' operating system? Or their browsers' language?

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need eight vertices in total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is: how do I now deal with per-vertex attribute data such as color, texture coordinates, and normals when reusing vertices via the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, however I would like to render a cube; also, this still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely: there is a fabulous free tool available to generate load on SQL Server on CodePlex. This tool was released in 2008, but it is still extremely relevant for generating load on SQL Server and works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. One of the things which I felt needed improvement was the default configuration: every single time I added a query, the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default available. Here one can set a default username, password, and database name, as well as various configuration-related settings. Additionally, application logging is also possible through the options. A couple of other important points I noticed were the button to reset counters and the status bar containing useful information on Total Threads, Completed Queries and Failed Queries. I use this frequently for my load testing. What tool do you use to generate load on SQL Server? Download SQL Load Generator. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

  • Open source clone for Starcraft

    - by sinekonata
    Two questions about an SC:Brood War clone: is there one yet, and how likely is legal pursuit? Since almost all the games I usually play now have a FOS alternative, from alpha quality to way polished, I was wondering why I can't find one for SC, one of the biggest titans of the gaming community. So my first question is: is there a game that was made with the intention of emulating SC? Is it that I didn't look well enough? Could it really be that no one tackled what seems like a small effort compared to the creation of a game engine like Spring or games like Rigs of Rods or Minetest? And since SC is not being maintained at all, shouldn't the incentive to see a bug-free, moddable, balanced version be huge? What am I not getting here? In the event that there is none, is it a legal problem? Could it be that people expect Blizzard to release the sources themselves? Or that developers don't see the point in having SC mechanics without the patented lore and aesthetics? And the trickier question: if I were to make SC an open source game, a total clone of it, for the purposes of maintenance, moddability, etc., would Blizzard really sue a team of developer fans that just do them a favour, knowing they don't lose any money from Korea broadcasts? Or would they do it so as not to set a precedent? Thanks for reading all that; I hope I'm not the only one to think it's weird that no one talks about it. See you.

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donators have been incredibly generous and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:

        [dbo].[bigProduct] with 25,200 rows
        [dbo].[bigTransactionHistory] with 7,827,579 rows

    The credentials to log in and use AdventureWorks on Azure are as they were before:

        Server:   mhknbn2kdz.database.windows.net
        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly

    Remember, if you want to support AdventureWorks on Azure, simply click here to launch a pre-populated PayPal Send Money form; all you have to do is log in, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please please donate. I mentioned that I had to adapt Adam's script, the main reasons being:

        Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master]
        SELECT...INTO is not supported in SQL Azure
        The 1GB limit of the SQL Azure Web edition meant there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows

    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755 @Jamiet
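
    A quick way to poke at the database from a command line is sqlcmd; a hedged sketch using the credentials above (the user@server login form is what SQL Azure typically expects, and the query itself is just an illustrative example):

        sqlcmd -S mhknbn2kdz.database.windows.net -d AdventureWorks2012 -U sqlfamily@mhknbn2kdz -P sqlf@m1ly -Q "SELECT COUNT(*) FROM dbo.bigTransactionHistory;"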

  • Tuning B2B Server Engine Threads in SOA Suite 11g

    - by Shub Lahiri, A-Team
    Background

    B2B 11g has a number of parameters that can be tweaked to tune the engine for handling high volumes of messages. These parameters are also known as B2B server properties and are managed via the EM console. This note highlights one aspect of the tuning exercise and describes the different threads that can be configured to tune the performance of a B2B server.

    Symptoms

    The most common indicator of a B2B engine in need of tuning is the constant build-up of messages in an internal JMS queue within the B2B server. It is called B2B_EVENT_QUEUE and can be monitored via the WebLogic server console. Whenever such behaviour is seen, it usually results in a general degradation of performance.

    Remedy

    There could be many contributing factors behind a B2B server's degradation of performance. However, one of the first things to tune away from the out-of-the-box default configuration is the number of internal engine threads allocated within the B2B server. Usually the default configuration for the B2B server engine threads is not suitable for high-volume messaging loads, so it is necessary to increase the counts for 3 types of such threads by specifying the appropriate B2B server properties via the EM console, namely:

        Inbound - b2b.inboundThreadCount
        Outbound - b2b.outboundThreadCount
        Default - b2b.defaultThreadCount

    The function of these threads is fairly self-explanatory. In other words, the inbound threads process the inbound messages that are coming into the B2B server from an external endpoint. Similarly, the outbound threads process the messages that are sent out from the B2B server. The default threads are responsible for certain B2B server-specific special tasks. In case the inbound and outbound thread counts are not specified, the default thread count also dictates the total number of inbound and outbound threads. As in any tuning exercise, the optimisation of these threads is usually reached via an iterative process. The best working combination of the thread counts is directly related to the system infrastructure, traffic load and several other environmental factors.

  • ArchBeat Link-o-Rama for October 22, 2013

    - by OTN ArchBeat
    The road ahead for WebLogic 12c | Edwin Biemond
    Oracle ACE Edwin Biemond shares his thoughts on announced new features in Oracle WebLogic 12.1.3 & 12.1.4 and compares those upcoming releases to Oracle WebLogic 12.1.2.

    Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 2: Setup and Configuration | Michael Rainey
    Michael Rainey continues his series with another technical article for you GoldenGate fans.

    There's A Virtual Developer Day in Your Future
    Have you experienced OTN VDD? Relax, it's not something that requires medical attention. But an OTN Virtual Developer Day event will enlarge your brain with hands-on information on Oracle technologies. Upcoming events will cover Oracle WebLogic and Coherence (Nov 5) and Oracle ADF (Nov 19).

    My Summary of Oracle Open World 2013 | Luis Weir
    SOA/Middleware specialist Luis Weir's first trip to Oracle OpenWorld was what you might call a total immersion experience. His blog post includes details about what kept him very, very busy during his OOW13 experience.

    Live Blog: Book Review of Building Modular Cloud Apps with OSGi by Bert Ertman and Paul Bakker | Lucas Jellema
    This interesting post from Oracle ACE Director Lucas Jellema is a work in progress. He's updating as he goes. Check it out.

    Thought for the Day

    "In the information age, you don't teach philosophy as they did after feudalism. You perform it. If Aristotle were alive today he'd have a talk show." - Timothy Leary, American psychologist and writer (October 22, 1920 – May 31, 1996)

    Source: brainyquote.com

  • How can I find the shortest path between two subgraphs of a larger graph?

    - by Pops
    I'm working with a weighted, undirected multigraph (loops not permitted; most node connections have multiplicity 1; a few node connections have multiplicity 2). I need to find the shortest path between two subgraphs of this graph that do not overlap with each other. There are no other restrictions on which nodes should be used as start/end points. Edges can be selectively removed from the graph at certain times (as explained in my previous question) so it's possible that for two given subgraphs, there might not be any way to connect them. I'm pretty sure I've heard of an algorithm for this before, but I can't remember what it's called, and my Google searches for strings like "shortest path between subgraphs" haven't helped. Can someone suggest a more efficient way to do this than comparing shortest paths between all nodes in one subgraph with all nodes in the other subgraph? Or at least tell me the name of the algorithm so I can look it up myself? For example, if I have the graph below, the nodes circled in red might be one subgraph and the nodes circled in blue might be another. The edges would all have positive integer weights, although they're not shown in the image. I'd want to find whatever path has the shortest total cost as long as it starts at a red node and ends at a blue node. I believe this means the specific node positions and edge weights cannot be ignored. (This is just an example graph I grabbed off Wikimedia and drew on, not my actual problem.)

  • Introducing the Industry's First Analytics Machine, Oracle Exalytics

    - by Manan Goel
    Analytics is all about gaining insights from data for better decision making. The business press is abuzz with examples of leading organizations across the world using data-driven insights for strategic, financial and operational excellence. A recent study on "data-driven decision making" conducted by researchers at MIT and Wharton provides empirical evidence that "firms that adopt data-driven decision making have output and productivity that is 5-6% higher than the competition". The potential payoff for firms can range from higher shareholder value to a market leadership position.

    However, the vision of delivering fast, interactive, insightful analytics has remained elusive for most organizations. Most enterprise IT organizations continue to struggle to deliver actionable analytics due to time-sensitive, sprawling requirements and ever-tightening budgets. The issue is further exacerbated by the fact that most enterprise analytics solutions require dealing with a number of hardware, software, storage and networking vendors, and precious resources are wasted integrating the hardware and software components to deliver a complete analytical solution.

    Oracle Exalytics In-Memory Machine is the world's first engineered system specifically designed to deliver high-performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers answers to all your business questions with unmatched speed, intelligence, simplicity and manageability. Oracle Exalytics's unmatched speed, visualizations and scalability deliver extreme performance for existing analytical and enterprise performance management applications and enable a new class of intelligent applications like Yield Management, Revenue Management, Demand Forecasting, Inventory Management, Pricing Optimization, Profitability Management, Rolling Forecast and Virtual Close.

    Requiring no application redesign, Oracle Exalytics can be deployed in existing IT environments by itself or in conjunction with Oracle Exadata and/or Oracle Exalogic to enable extreme performance and a best-in-class user experience. Based on proven hardware, software and in-memory technology, Oracle Exalytics lowers the total cost of ownership, reduces operational risk and provides unprecedented analytical capability for workgroup, departmental and enterprise-wide deployments. Click here to learn more about Oracle Exalytics.

  • How to know which partition is which?

    - by user206870
    Well, I was just wondering which partition belongs to which system. On my computer I have Windows 7 and two Ubuntu systems (the second one was an accident, which is why I need to know which partition is which). So how do I know which one is which?? PS, here are the outputs:

        jp@jp-Satellite-L555D:~$ sudo update-grub
        [sudo] password for jp:
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.11.0-12-generic
        Found initrd image: /boot/initrd.img-3.11.0-12-generic
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2
        Found Windows Recovery Environment (loader) on /dev/sda3
        Found Ubuntu 13.10 (13.10) on /dev/sda7
        done

        jp@jp-Satellite-L555D:~$ sudo fdisk -l
        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0006bc6d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048     3074047     1536000   27  Hidden NTFS WinRE
        /dev/sda2         3074048   213421022  105173487+    7  HPFS/NTFS/exFAT
        /dev/sda3       469676032   488396799     9360384   17  Hidden HPFS/NTFS
        /dev/sda4       213422078   469676031   128126977    5  Extended
        /dev/sda5       300185600   463910911    81862656   83  Linux
        /dev/sda6       463912960   469676031     2881536   82  Linux swap / Solaris
        /dev/sda7       213422080   300185599    43381760   83  Linux

        Partition table entries are not in disk order

    Thanks to whoever can answer this. Another quick question: what is the extended partition??
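
    A hedged sketch of commands that usually make the mapping obvious (device names taken from the fdisk output above; the mount is read-only so it won't touch the other install):

        lsblk -f                               # filesystem type, label and UUID for every partition
        findmnt /                              # which partition the currently running system booted from
        sudo mkdir -p /mnt/probe
        sudo mount -o ro /dev/sda5 /mnt/probe  # peek inside the other Linux partition
        cat /mnt/probe/etc/issue               # prints the release banner, e.g. "Ubuntu 13.10 \n \l"
        sudo umount /mnt/probe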

  • SD card reader does not show in Ubuntu

    - by shantanu
    I bought an Acer Aspire 4250. It has a built-in SD card reader, but it is not working: nothing shows up in /media or fdisk, but there is something in dmesg.

    dmesg:

        new high-speed USB device number 3 using ehci_hcd
        [  127.396733] scsi5 : usb-storage 2-2:1.0
        [  128.526562] scsi 5:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
        [  128.532512] sd 5:0:0:0: Attached scsi generic sg2 type 0
        [  129.008110] ohci_hcd 0000:00:12.0: PCI INT A disabled
        [  129.032083] ohci_hcd 0000:00:13.0: PCI INT A disabled
        [  129.056411] ohci_hcd 0000:00:16.0: PCI INT A disabled
        [  129.338026] sd 5:0:0:0: [sdb] Attached SCSI removable disk
        [  129.808328] ohci_hcd 0000:00:14.5: PCI INT C disabled
        [  167.728616] usb 2-2: USB disconnect, device number 3
        [  169.872284] ehci_hcd 0000:00:13.2: PCI INT B disabled
        [  169.872340] ehci_hcd 0000:00:13.2: PME# enabled

    fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0006bc6d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    48828415    24413184    7  HPFS/NTFS/exFAT
        /dev/sda2        48828416    50829311     1000448   82  Linux swap / Solaris
        /dev/sda3        50829312    99657727    24414208   83  Linux
        /dev/sda4        99659774   625141759   262740993    5  Extended
        /dev/sda5        99659776   275439615    87889920    7  HPFS/NTFS/exFAT
        /dev/sda6       275441664   451221503    87889920    7  HPFS/NTFS/exFAT
        /dev/sda7       451223552   625141759    86959104    7  HPFS/NTFS/exFAT

        Partition 4 does not start on physical sector boundary.

    I found another problem just now: I formatted the last three drives as EXT4 with Disk Utility, but they are showing as NTFS/exFAT in fdisk. :-(
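
    For what it's worth, the dmesg output above does show a removable disk appearing as /dev/sdb and then disconnecting, and fdisk reports nothing for a reader with no media inserted. A hedged sketch of checks to run with a card actually in the slot (the device name is assumed from the log). Note also that fdisk lists the partition type ID from the partition table, not the actual filesystem, which is why partitions reformatted as EXT4 can still show up as "HPFS/NTFS/exFAT":

        lsusb                      # is the "Multiple Card Reader" device listed at all?
        dmesg | tail -n 20         # look for new [sdb] lines right after inserting a card
        sudo fdisk -l /dev/sdb     # with media present this should print a partition table
        ls /dev/sd*                # /dev/sdb without /dev/sdb1 usually means "no media detected"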

  • How to install Oracle Database 11g Express Edition on Ubuntu 12.10?

    - by Praneeth Pj
    I installed the Oracle database following the steps mentioned in this blog:

    1. Downloaded 11g Express Edition.
    2. Created a new user oracle under the group dba. The following steps were executed using this user.
    3. Unzipped oracle-xe-11.2.0-1.0.x86_64.rpm.zip and then converted the rpm to an Ubuntu package by running:

        sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm

    4. Created the /sbin/chkconfig file and added the entries as specified there.
    5. Created /etc/sysctl.d/60-oracle.conf and added the entries as specified in the same link as above.
    6. Ran the commands:

        ln -s /usr/bin/awk /bin/awk
        mkdir /var/lock/subsys
        touch /var/lock/subsys/listener

    7. Installed the .deb generated in step 3:

        sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb

    8. Left the default values as they are:

        sudo /etc/init.d/oracle-xe configure

    9. Set the following env variables in the ~/.bashrc file:

        export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
        export ORACLE_SID=XE
        export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
        export ORACLE_BASE=/u01/app/oracle
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
        export PATH=$ORACLE_HOME/bin:$PATH

    10. Ran the commands:

        chown -R oracle:dba /var/tmp/.oracle
        chmod -R 755 /var/tmp/.oracle
        chown -R oracle:dba /tmp/.oracle
        chmod -R 755 /tmp/.oracle

    11. Started the Oracle Database 11g Express Edition instance:

        sudo service oracle-xe start

    12. Ran sqlplus / as sysdba and got the following:

        SQL*Plus: Release 11.2.0.2.0 Production on Thu Jan 3 09:41:58 2013
        Copyright (c) 1982, 2011, Oracle. All rights reserved.
        Connected to an idle instance.

    Now when executing any SQL statement in SQL*Plus, I end up with the following error:

        SQL> select * from dual;
        select * from dual
                      *
        ERROR at line 1:
        ORA-01034: ORACLE not available
        Process ID: 0
        Session ID: 0 Serial number: 0

    I have increased the swap memory as specified here:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:          3901       3428        473          0        182       1988
        -/+ buffers/cache:       1258       2643
        Swap:         5066          0       5066
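
    "Connected to an idle instance" together with ORA-01034 means SQL*Plus can reach the Oracle software but the XE instance itself has not been started. A hedged sketch of starting it by hand (assuming the environment variables from step 9 are loaded; startup is the standard SQL*Plus command for this):

        . ~/.bashrc                       # pick up ORACLE_HOME, ORACLE_SID=XE, PATH
        sudo service oracle-xe start      # (re)start the listener and instance via the init script
        sqlplus / as sysdba <<'EOF'
        -- start the instance manually if the init script did not
        startup
        select * from dual;
        exit
        EOF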
