Search Results

Search found 60012 results on 2401 pages for 'training data'.


  • I have an apache process that takes 98% CPU. How can I find which HTTP call it is serving?

    - by Nir
    As you can see below, a single Apache process hangs and consumes a large amount of CPU. How can I find which HTTP call this Apache process is serving?

      PID   USER     PR NI VIRT RES  SHR  S %CPU %MEM TIME+    COMMAND
      12554 www-data 20  0 776m 285m 199m R   97  3.7 67:15.84 apache2
      14580 www-data 20  0 748m 372m 314m S    4  4.8  0:13.60 apache2
      12561 www-data 20  0 784m 416m 322m S    3  5.4  0:58.10 apache2
      12592 www-data 20  0 785m 427m 332m S    2  5.6  0:57.06 apache2
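    One hedged way to see what a busy worker is doing (a sketch, assuming the Debian/Ubuntu layout and that mod_status can be enabled):

      # ExtendedStatus makes /server-status show the request each worker PID is serving.
      sudo a2enmod status
      echo "ExtendedStatus On" | sudo tee /etc/apache2/conf.d/extended-status
      sudo service apache2 reload

      # Find the busy PID in the scoreboard (the status page allows localhost by default):
      curl -s http://localhost/server-status | grep 12554

      # Or watch the process's system calls directly:
      sudo strace -p 12554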

    Read the article

  • Bayes misclassification error and plot: pattern recognition [closed]

    - by user1214586
    Below is Matlab code for a Bayes classifier which assigns arbitrary numbers to their classes:

      training     = [3;5;17;19;24;27;31;38;45;48;52;56;66;69;73;78;84;88];
      target_class = [0;0;10;10;20;20;30;30;40;40;50;50;60;60;70;70;80;80];
      test  = [1:2:90]';
      class = classify(test, training, target_class, 'diaglinear'); % Naive Bayes classifier
      [test class]

    Could someone provide code snippets for calculating the Bayes misclassification error and the accuracy? Also, is it possible to plot a scatter plot and a histogram indicating the number of data points belonging to different classes? Thank you.
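    A minimal Matlab sketch (assuming the Statistics Toolbox; since the test vector has no ground-truth labels, the error below is the resubstitution error on the training set):

      % Classify the training points themselves and compare with the known labels.
      pred = classify(training, training, target_class, 'diaglinear');
      err  = mean(pred ~= target_class);   % misclassification rate
      acc  = 1 - err;                      % accuracy
      fprintf('error = %.3f, accuracy = %.3f\n', err, acc);

      % Scatter of test values coloured by predicted class, and a class histogram.
      figure; gscatter(test, class, class);
      figure; hist(class, unique(class));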

    Read the article

  • Database Security Events in April

    - by Troy Kitch
    Wed, Apr 18: Executive Oracle Database Security Round Table - Tampa, FL
    Tue, Apr 24: ISC(2) Leadership Regional Event Series - San Diego, CA
    Apr 24 - May 17: Independent Oracle Users Group (IOUG) Enterprise Data at Risk Seminar Series:
      Tue, Apr 24: Toronto
      Wed, Apr 25: New York
      Thu, Apr 26: Boston
    Thu, Apr 26: ISC(2) Leadership Regional Event Series - San Jose, CA

    Read the article

  • Don't Miss This Week's Webinars!

    - by [email protected]
    Wednesday, April 14th, 11:00 am - 12:00 pm PT: Oracle User Productivity Kit: Best Practices for Getting the Most out of Your Student Information System and ERP. Register now! K-12 organizations cannot afford to risk deploying mission-critical applications like student information systems and ERPs without complete confidence they will live up to expectations. Find out how Oracle UPK can ensure success.

    Wednesday, April 14th, 10:00 am - 11:00 am PT: Utilizing Oracle UPK for More than Just Training. Register now! HEUG webinar featuring Beth Renstrom, Senior Manager, Oracle UPK Product Management, and James Barber, Partner PM with ERP Analysts. Discover how Oracle UPK can be utilized well beyond training development and delivery.

    Thursday, April 15th, 10:00 am - 11:00 am PT: UPK Productive Day One. Register now! Learn how to maximize your applications investment, increase employee productivity, and mitigate risk through all phases of the project lifecycle with Oracle UPK.

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea:

      - Build a compute platform optimized to run your technologies
      - Develop application-aware, intelligently caching storage components
      - Take an impressively fast network technology and interconnect it with the compute nodes
      - Tune the application to scale with the nodes to yet unseen performance
      - Reduce the amount of data moved, via compression
      - Provide all this in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time. The real Oracle advantage is the last one: putting them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you have probably guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes it includes SPARC T4-4 servers, which we have learned to appreciate and respect for their features:

      - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets, so a single compute node can simultaneously execute 256 threads. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword: demigod.
      - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar on single-threaded performance too - a humble 5x improvement over their ancestors.
      - The SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded or multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
      - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
      - A PCI controller right on the chip, for customers who need I/O performance.
      - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms, or Logical Domains) is not a server-emulation virtualization technology but a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' hardware resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated and independent of each other within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide the DB backend storage for the Oracle Databases you run on the SPARC SuperCluster, and they are built and tuned for DB performance. These storage cells are SQL-aware.
    That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.

      - They don't only pre-filter, but also provide data-preprocessing features - e.g., if a DB node requests an aggregate of the data, they can calculate it and hand over only the result, not the whole set. Again, less data to transfer.
      - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
      - Of course one can't simply rely on disks for performance, so Flash storage is included there for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members, with:

      - Real high speed: 40 Gbit/s, full duplex, of course. Oh, and really low latency.
      - RDMA, Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely and directly place SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, and placing requests directly into the process queue.
      - You can also run IP over InfiniBand if you please - that's the way the compute nodes communicate with each other.

    IV. It includes a general-purpose storage too: the ZFSSA, a unified storage providing both NAS and SAN access, with the following features:

      - NFS over RDMA over InfiniBand. Nothing is faster, network-filesystem-wise.
      - All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
      - Storage heads in an HA-cluster configuration, providing availability of the data.
      - DTrace Live Analytics in a web-based administration UI.
      - It serves as a general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

    What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines or even Oracle Solaris Zones, and monitor and manage those from a central UI.
    Here are the key takeaways once again. The SPARC SuperCluster:

      - Is a pre-integrated Engineered System
      - Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
      - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
      - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
      - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
      - Can grow from a single half rack to several full racks
      - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, like in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful, and actually very time-consuming, integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie

    Read the article

  • Getting developers and support to work together

    - by Matt Watson
    Agile development has ushered in the norm of rapid iterations and change within products. One of the biggest challenges of agile development is educating the rest of the company. At my last company, our biggest challenge was trying to continually train 100 employees in our customer support and training departments. It's easy to write release notes and email them to everyone. But for complex software products, release notes usually don't carry enough detail. You really have to educate your employees on the WHO, WHAT, WHERE, WHY, and WHEN of every item. If you don't do this, you end up with customer service people who know less about your product than your users do. Ever call a company and feel like you know more about their product than their customer service people do? Yeah. I'm talking about that problem.

      WHO does the change affect?
      WHAT was the actual change?
      WHERE do I find the change in the product?
      WHY was the change made? (It's hard to support something if you don't know why it was done.)
      WHEN will the change be released?

    One thing I want to stress is the importance of the WHY. For customer support people to be really good at their job, they need to understand the product and how people use it. Knowing how to enable a feature is one thing. Knowing why someone would want to enable it is a whole different thing, and the difference in good customer service. Another challenge is getting support people to better test and document potential bugs before escalating them to development. Trying to fix bugs without examples is always fun... NOT. They might as well say "The sky is falling, please fix it!" We need to over-train the support staff on product changes and continually stress how to document and test potential product bugs. You also have to train the sales staff and the marketing team. Then there is updating sales materials, your website, product documentation, and other items that are always out of date. Every product release sets off this vicious circle of trying to educate the rest of the company about the changes. Do we need to record a simple video explaining the changes and email it to everyone? Maybe we should use a simple online training app to help with this problem. Ultimately the struggle is taking the time to do the training, but it is time well spent. It may save you a lot of time answering questions and fixing bugs later. How do we efficiently transfer key product knowledge from developers and product owners to the rest of the company? How have you solved these issues at your company?

    Read the article

  • Characteristics of a Web service that promote reusability and change

    Characteristics of a Web service that promote reusability and change:

      - Standardized Data Exchange Formats (XML, JSON)
      - Standardized Communication Protocols (SOAP, REST)
      - Promotion of Loosely Coupled Systems

    Standardized Data Exchange Formats (XML, JSON)

    XML: W3.org defines Extensible Markup Language (XML) as a simple text format derived from SGML. XML was designed to solve challenges found in large-scale electronic publishing. In addition, XML plays an important role in the exchange of data, primarily focusing on data exchange on the web.

    JSON: JavaScript Object Notation (JSON) is a human-readable, text-based standard designed for data interchange. This format is used for serializing and transmitting data over a network connection in a structured form. The primary use of JSON is to transmit data between a server and a web application. JSON is an alternative to XML.

    Standardized Communication Protocols (SOAP, REST)

    SOAP: W3Schools.com defines SOAP as a simple XML-based protocol that lets applications exchange data over HTTP. SOAP provides a way to communicate between applications running on different operating systems, with different technologies and programming languages.

    REST: In 2007, Stefan Tilkov defined Representational State Transfer (REST) as a set of principles that outlines how Web standards are supposed to be used. Using REST in an application will ensure that it exploits the Web's architecture to its benefit.

    Promotion of Loosely Coupled Systems

    "Loose coupling [is] an approach to interconnecting the components in a system or network so that those components, also called elements, depend on each other to the least extent practicable. Coupling refers to the degree of direct knowledge that one element has of another." (TechTarget.com, 2007)

    "[A] loosely coupled system can be easily broken down into definable elements. The extent of coupling in a system can be measured by mapping the maximum number of element changes that can occur without adverse effects. Examples of such changes include adding elements, removing elements, renaming elements, reconfiguring elements, modifying internal element characteristics and rearranging the way in which elements are interconnected." (TechTarget.com, 2007)

    References:
    W3C. (2011). Extensible Markup Language (XML). Retrieved from W3.org: http://www.w3.org/XML/
    W3Schools.com. (2011). SOAP Introduction. Retrieved from W3Schools.com: http://www.w3schools.com/soap/soap_intro.asp
    Tilkov, Stefan. (2007). A Brief Introduction to REST. Retrieved from InfoQ.com: http://www.infoq.com/articles/rest-introduction
    TechTarget.com. (2011). Loose coupling. Retrieved from TechTarget.com: http://searchnetworking.techtarget.com/definition/loose-coupling

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • The Basics of Project Management / Software Development

    - by Sam
    It suddenly struck me today that I have never developed any large application or worked with a team of programmers, and so am missing out on a lot - both in terms of technical knowledge and the social, fun part of it. I would like to rectify that. One idea is to start an open source group, training college students (for no charge) and developing some open source application with them. Please give me some basic advice on the whole process of how to (1) plan and (2) manage projects in a team. What new skill sets would you recommend? (I have read Joel on Software and 37signals, and got many insightful tips from them, but I'd like a little more technical knowledge.) Background (freelancer, past 4+ years): computer engineer; graphic / web designer; online marketing; moved on to programming in PHP, Perl, Python; did Oracle DBA OCP training to understand DBs; current self-assigned title: web application developer.

    Read the article

  • Web developer has become uncooperative, what should we do to rescue our site? [closed]

    - by TOM
    What can an individual or a company do if the web designer who designed their website becomes completely uncooperative? In our case:

      He refuses to meet with us for discussions.
      He refuses to give us training on the effective use and management of the website he designed for us (and has been paid in full for), despite this training being part of the original contract.
      Last year he disappeared for a number of months, refused to answer emails or phone calls, and was totally unavailable to help us.
      He refuses to give us the details of the hosting of our website.

    We have totally lost faith in this arrogant and unreasonable guy and would like to break off all relations with him, but, it appears, he's got us over a barrel.

    Read the article

  • links for 2011-02-10

    - by Bob Rhubart
    Manish Devgan: Extending WebCenter Spaces Using JDeveloper - In addition to being able to customize WebCenter Spaces using the browser-based tools, you can now also customize and "extend" WebCenter Spaces in many ways in JDeveloper. (tags: oracle enterprise2.0 webcenter jdeveloper)

    Oracle University: New Personalized Training Catalog - "Searching for training classes just got easier with Oracle University's new Personalized Training Catalog. View upcoming course schedules for the topics that you select in your preferred locations. Browse courses when you need to or request your personalized catalog to be emailed to you." (tags: oracle oracleuniversity)

    René van Wijk: Hibernate and Coherence « Middleware Magic - "A major justification for the claim that applications using an object/relational persistence layer are expected to outperform applications built using direct JDBC is the potential for caching." - René van Wijk (tags: oracle coherence middleware)

    Sten Vesterli on Fusion Applications: "It's (almost) here!" - Speaking of Fusion Applications, Oracle ACE Director Sten Vesterli says: "The usability revolution has finally caught up with enterprise applications; they will no longer be built based on the capabilities of the database, but on the needs of users." (tags: oracle otn oracleace fusionapplications)

    The Myth of Oracle Fusion | The ORACLE-BASE Blog - "I can totally understand when people on the outside of our little goldfish bowl have a really bad and confused impression of anything containing the term 'Fusion', because it does have a very long and sordid history." - Oracle ACE Director Tim Hall (tags: oracle otn oracleace fusionapplications)

    The Other Side of XBRL (Enterprise Performance Management Blog) - With the United States SEC's mandate for XBRL filings entering its third year, and impacting over 7000 additional companies in 2011, there's a lot of buzz in the industry about how companies should address the new reporting requirements. (tags: oracle xbrl compliance)

    Database Vault integration available (The Shorten Spot) - Anthony Shorten shares information on the Database Vault solution included in the Oracle Utilities Application Framework. (tags: oracle database)

    SOASuite 11.1.1.4: Error Logging into BPM11g Composer? (Angelo Santagata's Blog) - Angelo Santagata shares simple solutions to a few minor SOA Suite 11.1.1.4 issues. (tags: oracle soa soasuite bpm)

    Thierry Vergult: No electricity, but the application is up - "Dakar is having more trouble than normal with electricity. Never thought that the SaaS model would be that useful when the light goes out. And the extra battery in the office dies, and the router goes down. But you can still access the application over your smartphone and finish your payroll run." (tags: oracle cloud saas)

    Read the article

  • PHP may be executing as a "privileged" group and user, which could be a serious security vulnerability

    - by Martin
    I ran some security tests on an Ubuntu 12.04 server, and I got these warnings:

      PHP may be executing as a "privileged" group, which could be a serious security vulnerability.
      PHP may be executing as a "privileged" user, which could be a serious security vulnerability.

    In /etc/apache2/envvars, I have this:

      export APACHE_RUN_USER=www-data
      export APACHE_RUN_GROUP=www-data

    And all files in /var/www have this user and group: www-data:www-data. Am I setting this up correctly? What should I do to fix this problem?
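    A hedged way to check what the scanner is flagging: confirm the identity the workers actually run under, and that www-data is unprivileged (uid != 0, no admin group memberships).

      # User/group of the running Apache (and, with mod_php, PHP) workers:
      ps -eo user,group,args | grep [a]pache2

      # www-data should be an ordinary account with no root/sudo group memberships:
      id www-data

    If both look right, the warning may be keying off file ownership instead; /var/www content generally does not need to be owned or writable by the same account PHP runs as, except where uploads require it.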

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you're a .NET web developer today, no doubt you've enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added a slew of great (and sometimes overdue) features. Where the platform was once just an endpoint to host a solution, developers today have tremendous flexibility and options. Organizations are building new solutions and offerings on the platform, and others have migrated, or are in the process of migrating, existing applications out of their own data centers into the Azure cloud. Whether doing new application development or migrating legacy systems, every development shop and IT organization needs to monitor its applications in the cloud, the same as it does on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it's either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let's take a look below and compare and contrast the options.

    Azure Diagnostics

    Let's crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high-level steps are:

    Step 1: Import the diagnostics module. Oh, I've already deployed my app without the diagnostics module. Guess I can't do anything until I do this and re-deploy.

    Step 2: Configure the diagnostics (and multiple sub-steps). Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter; I guess I'll have to deploy again. Wait a minute... I have to specifically code these performance counters into my role's OnStart() method, compile, and deploy again? And query and consume it myself?

    Step 3: (Optional) Permanently store diagnostic data. Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that's out now as well.

    Step 4: (Optional) View stored diagnostic data. Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based, and they all just give you access to raw data, with very little charting or real-time intelligence. Just... data. Never mind that one product seems to have gone stale since a recent acquisition, and doesn't even have screenshots!

    So, let's summarize: lots of diagnostics data is available, but think realistically. Think DevOps. What happens when you are in the middle of a major production performance issue and you don't have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don't you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer's web browser? Forget it: the best you get is a spark line in the Azure portal. If it's real pointy, you probably have an issue; but since there is no alert based on a threshold, your customers have likely already let you know. And high CPU, memory, I/O, or network doesn't tell you anything about where the problem is.

    The Better Way - Stackify

    Stackify supports application and server monitoring in real time, all through a great web interface. All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don't need to know ahead of time that you'll need it. It's always there, it's always on. Azure deployments are essentially no different from on-premises ones. It's a Windows Server (or Linux) in the cloud, behind a different firewall than your corporate servers. That's it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps.

    Step 1: Add a startup task to your web or worker role and deploy (declared in ServiceDefinition.csdef; see the sketch at the end of this post). If you can't deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a PowerShell script to download and install Stackify.

    Step 2: Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want, and see the results instantly. WMI? It's there. Event Viewer? You've got it. File system access? Yes, please! Would love to make sure my web.config is correct. IIS / app pool info? Yep. You can even restart it. Running services? All of them. Start and stop them to your heart's content. SQL database access? You bet'cha. Alerts and notification? Of course! You should know before your customers let you know. ... and so much more.

    Conclusion

    Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they've given you a canvas, which is exactly what Azure is. It's great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times when we need data that just isn't available. And having to go through a lot of cumbersome steps to get that data, and then having to find a friendlier way to consume it... well, that just doesn't make a lot of sense to me. I'd rather spend my time writing and developing features and completing bug fixes for my applications than writing code to monitor and diagnose.
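    For reference, the startup task mentioned in Step 1 is declared in the role's ServiceDefinition.csdef; a sketch (the script name is a placeholder, not Stackify's actual installer):

      <WebRole name="MyWebRole">
        <Startup>
          <!-- Runs once, elevated, before the role starts; install-agent.cmd is
               a hypothetical stand-in for whatever installer script you use. -->
          <Task commandLine="install-agent.cmd" executionContext="elevated" taskType="simple" />
        </Startup>
      </WebRole>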

    Read the article

  • Is it important for reflection-based serialization to maintain consistent field ordering?

    - by Matchlighter
    I just finished writing a packet builder that dynamically loads data into a data stream for eventual network transmission. Each builder operates by finding fields in a given class (and its superclasses) that are marked with a @data annotation. When I finished my implementation, I remembered that getFields() does not return results in any specific order. Should reflection-based methods for serializing arbitrary data (like my packets) attempt to preserve a specific field ordering (such as alphabetical), and if so, how?
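    One defensive sketch, since getFields() makes no ordering guarantee: impose an explicit order before serializing, so writer and reader derive the same layout on any JVM. (The @data annotation below is a stand-in for the one described in the question.)

      import java.lang.annotation.Retention;
      import java.lang.annotation.RetentionPolicy;
      import java.lang.reflect.Field;
      import java.util.Arrays;
      import java.util.Comparator;

      // Stand-in for the question's @data marker annotation.
      @Retention(RetentionPolicy.RUNTIME)
      @interface data {}

      final class StableFieldOrder {
          // Return the @data-annotated public fields in a stable, alphabetical
          // order, so serialization is deterministic across JVMs and versions.
          static Field[] dataFields(Class<?> type) {
              return Arrays.stream(type.getFields())
                      .filter(f -> f.isAnnotationPresent(data.class))
                      .sorted(Comparator.comparing(Field::getName))
                      .toArray(Field[]::new);
          }
      }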

    Read the article

  • The need for user-defined index types

    - by Greg Low
    Since the removal of the 8KB limit on serialization, the ability to define new data types using SQL CLR integration is now almost at a usable level, apart from one key omission: indexes. We have no ability to create our own types of index to support our data types. As a good example of this, consider that when Microsoft introduced the geometry and geography (spatial) data types, they did so as system CLR data types but also needed to introduce a spatial index as a new type of index. Those of us that...(read more)

    Read the article

  • Microsoft Virtual Academy

    Carpe Diem. It's been a while since I last wrote an article for this blog, but alas, I was (and still am) very busy with customer work. Which is actually good. So, what is this article going to tell you? Well, in general, just what I already tweeted: that life is a constant process of learning - especially as a software craftsman. Due to an upcoming new customer project in ASP.NET, I had to seize the opportunity to get my head deeper into the latest available technologies, like Windows Azure and SQL Azure. I know... cloud computing and so on is not a recent development and has been available for quite a while, but I never had the means to get into it until roughly two weeks ago.

    Microsoft Virtual Academy. I can't remember exactly what guided me towards the Microsoft Virtual Academy (MVA)... oh wait, yes: it was a posting on Facebook from an old CLIP community friend. He posted a shortened URL with an #MVA tag that caught my attention. Thanks for that, Thomas Kuberek. After the usual sign-in or registration via Live ID, I was a little bit surprised that Mauritius is not an available country option... A quick mail exchange with the MVA Dean, and yeah, apologies for the missing entry. So, currently I'm learning about Microsoft products and services, and collecting points under "Not Listed Country" until Mauritius is added. Hopefully soon, as MVA honors your effort with different knowledge ranks that are compared with those of other students with public profiles. I think it's a nice move to add some game and competition factor into the learning game. The tracks and their different modules are mainly references to publicly available material online, namely on MSDN, TechNet, Channel9, or other Microsoft sites. The course material therefore also varies in media and format, ranging from simple online articles through downloadable documents (.docx or .pdf) to Silverlight / Windows Media streams with download options.

    Self-assessment and student rankings. Each module in a track can be finished by taking part in a self-assessment. Up to now, the assessments I did (and passed) were limited to 10 minutes of available time and consisted of six to seven questions on the module's training material. Nothing too serious, but it gives you a glimpse of how Microsoft certification exams are structured.

    Conclusion. Nothing really new, but nicely gathered, assembled, and presented to the MVA students. At the moment I wouldn't dare to compare the richness and quality of those courses with professional training offerings like Pluralsight .NET Training, LearnDevNow, VTC, etc., but I think that MVA has potential. Give it a try, and let me know your opinion.

    Read the article

  • Vancouver .NET User Group

    - by pluginbaby
    While in Vancouver for my Silverlight training in early May, I will give a free Silverlight presentation at the local .NET user group, .netBC.

    When: May 5, 2010, 6:30 PM
    Where: Building SW3, room 1750, BCIT Burnaby Campus, 3700 Willingdon Ave, Burnaby, BC, V5G 3H2
    What: Silverlight 4 Business Applications - "In this session a live demo will be built to show the new features of Silverlight 4 that help you create business-oriented applications more easily than ever: importing/exporting data, printing, drag-and-drop targets, data visualization, context menus, WCF RIA Services, and design-time enhancements in Visual Studio 2010."

    Read all the details here: http://www.netbc.ca/DNCal/EventDetail.aspx?date=2010/05/05 See you there! Thanks Telerik for helping this event to happen!

    Technorati Tags: Silverlight, Silverlight Training

    Read the article

  • Isn't MVC anti OOP?

    - by m3th0dman
    The main idea behind OOP is to unify data and behavior in a single entity - the object. In procedural programming there is data and, separately, the algorithms that modify that data. In the Model-View-Controller pattern, the data and the logic/algorithms are placed in distinct entities: the model and the controller, respectively. In an equivalent OOP approach, shouldn't the model and the controller be placed in the same logical entity?

    Read the article

  • logrotate isn't rotating a particular log file (and I think it should be)

    - by Max Williams
    Hi all. For a particular app, I have log files in two places. One of the places has just one log file that I want to use with logrotate; for the other location I want to use logrotate on all log files in that folder. I've set up an entry called millionaire-staging in /etc/logrotate.d and have been testing it by calling logrotate -f millionaire-staging. Here's my entry:

      #/etc/logrotate.d/millionaire-staging
      compress
      rotate 1000
      dateext
      missingok
      sharedscripts
      copytruncate
      /var/www/apps/test.millionaire/log/staging.log {
        weekly
      }
      /var/www/apps/test.millionaire/shared/log/*log {
        size 40M
      }

    So, for the first folder, I want to rotate weekly (this seems to have worked fine). For the other, I want to rotate only when the log files get bigger than 40 meg. When I look in that folder (using the same locator as in the logrotate config), I can see a file in there that's 54M and which hasn't been rotated:

      $ ls -lh /var/www/apps/test.millionaire/shared/log/*log
      -rw-r--r-- 1 www-data root   33M 2010-12-29 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.access-log
      -rw-r--r-- 1 www-data root   54M 2010-09-10 16:57 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.debug-log
      -rw-r--r-- 1 www-data root   53K 2010-12-14 15:48 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.error-log
      -rw-r--r-- 1 www-data root  3.8M 2010-12-29 14:30 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.access-log
      -rw-r--r-- 1 www-data root   16K 2010-12-17 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.error-log
      -rw-r--r-- 1 deploy   deploy   0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stderr.log
      -rw-r--r-- 1 deploy   deploy   0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stdout.log

    Some of the other log files in that folder have been rotated, though:

      $ ls -lh /var/www/apps/test.millionaire/shared/log
      total 91M
      -rw-r--r-- 1 www-data root   33M 2010-12-29 15:05 test.millionaire.charanga.com.access-log
      -rw-r--r-- 1 www-data root   54M 2010-09-10 16:57 test.millionaire.charanga.com.debug-log
      -rw-r--r-- 1 www-data root   53K 2010-12-14 15:48 test.millionaire.charanga.com.error-log
      -rw-r--r-- 1 www-data root  3.8M 2010-12-29 14:30 test.millionaire.charanga.com.ssl.access-log
      -rw-r--r-- 1 www-data root   16K 2010-12-17 15:00 test.millionaire.charanga.com.ssl.error-log
      -rw-r--r-- 1 deploy   deploy   0 2010-12-29 14:49 unicorn.stderr.log
      -rw-r--r-- 1 deploy   deploy  41K 2010-12-29 11:03 unicorn.stderr.log-20101229.gz
      -rw-r--r-- 1 deploy   deploy   0 2010-12-29 14:49 unicorn.stdout.log
      -rw-r--r-- 1 deploy   deploy 1.1K 2010-10-15 11:05 unicorn.stdout.log-20101229.gz

    I think what might have happened is that I first ran this config with a pattern matching *.log, and that means it only rotated the two files that end in .log (as opposed to -log). Then, when I changed the config and ran it again, it won't do any more, since it thinks it's already had its weekly run, or something. Can anyone see what I'm doing wrong? Is it to do with those top folders being owned by root rather than deploy, do you think? Thanks, Max
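    A hedged diagnosis: logrotate records per-file state (in /var/lib/logrotate/status on Ubuntu), so files it saw under the earlier *.log pattern can be skipped until their interval or size condition triggers again. Two quick checks:

      # Dry run: prints, for each matched file, why it would or wouldn't rotate.
      logrotate -d /etc/logrotate.d/millionaire-staging

      # Inspect the recorded last-rotation times for these files:
      grep millionaire /var/lib/logrotate/status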

    Read the article


  • Sales topics you can specialize in!

    - by [email protected]
    Oracle Database 11g Release 2. Following the training, you will have the opportunity to take the specialization assessment test (which is independent of this training) in the presence of Oracle Presales. Passing the assessment test requires self-study and working through the corresponding Guided Learning Path. Please bring a WLAN-capable laptop to the assessment test; it must be equipped with a standard virus scanner and a firewall. We recommend familiarizing yourself with the assessment in advance.

    Dates:
      June 9 at Oracle in Stuttgart, with Azlan
      September 16 at Oracle in Frankfurt
      November 9 at Oracle in Berlin, with Actebis-Peacock

    Registration: For further information and to register, please click here.

    Read the article

  • laptop crashed: why?

    - by sds
    My Linux (Ubuntu 12.04) laptop crashed, and I am trying to figure out why.

      # last
      sds      pts/4        :0               Tue Sep  4 10:01   still logged in
      sds      pts/3        :0               Tue Sep  4 10:00   still logged in
      reboot   system boot  3.2.0-29-generic Tue Sep  4 09:43 - 11:23  (01:40)
      sds      pts/8        :0               Mon Sep  3 14:23 - crash  (19:19)

    This seems to indicate a crash at 09:42 (= 14:23 + 19:19). As per another question, I looked at /var/log. auth.log:

      Sep  4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root
      Sep  4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)

    There is no messages file. syslog:

      Sep  4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
      Sep  4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.

    kern.log:

      Sep  4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal
      Sep  4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal
      Sep  4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal
      Sep  4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
      Sep  4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpuset
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpu

    I had a computation running until 09:24, but the system crashed 18 minutes later! kern.log has many pages of these:

      Sep  4 09:43:16 t520sds kernel: [ 0.000000] total RAM covered: 8086M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 38M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: 0G
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: 0G
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: 0G
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: 0G
      Sep  4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1G

    Does this mean that my RAM is bad?! It also says:

      Sep  4 09:43:16 t520sds kernel: [ 2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem
      Sep  4 09:43:16 t520sds kernel: [ 2.944126] EXT4-fs (sda1): write access will be enabled during recovery
      Sep  4 09:43:16 t520sds kernel: [ 3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400
      Sep  4 09:43:16 t520sds kernel: [ 8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs
      Sep  4 09:43:16 t520sds kernel: [ 8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984
      ...
      Sep  4 09:43:16 t520sds kernel: [ 9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343
      Sep  4 09:43:16 t520sds kernel: [ 9.343270] EXT4-fs (sda1): 56 orphan inodes deleted
      Sep  4 09:43:16 t520sds kernel: [ 9.343271] EXT4-fs (sda1): recovery complete
      Sep  4 09:43:16 t520sds kernel: [ 9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)

    Does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors:

      smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build)
      Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

      === START OF INFORMATION SECTION ===
      Model Family:     Seagate Momentus 7200.4
      Device Model:     ST9500420AS
      Serial Number:    5VJE81YK
      LU WWN Device Id: 5 000c50 0440defe3
      Firmware Version: 0003LVM1
      User Capacity:    500,107,862,016 bytes [500 GB]
      Sector Size:      512 bytes logical/physical
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   8
      ATA Standard is:  ATA-8-ACS revision 4
      Local Time is:    Mon Sep 10 16:40:04 2012 EDT
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      === START OF READ SMART DATA SECTION ===
      SMART overall-health self-assessment test result: PASSED
      See vendor-specific Attribute list for marginal Attributes.

      General SMART Values:
      Offline data collection status: (0x82)  Offline data collection activity was completed without error.
                                              Auto Offline Data Collection: Enabled.
      Self-test execution status:     (   0)  The previous self-test routine completed without error or
                                              no self-test has ever been run.
      Total time to complete Offline data collection: (0) seconds.
      Offline data collection capabilities: (0x7b) SMART execute Offline immediate.
                                              Auto Offline data collection on/off support.
                                              Suspend Offline collection upon new command.
                                              Offline surface scan supported.
                                              Self-test supported.
                                              Conveyance Self-test supported.
                                              Selective Self-test supported.
      SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                              Supports SMART auto save timer.
      Error logging capability:        (0x01) Error logging supported.
                                              General Purpose Logging supported.
      Short self-test routine recommended polling time:      (  1) minutes.
      Extended self-test routine recommended polling time:   (109) minutes.
      Conveyance self-test routine recommended polling time: (  2) minutes.
      SCT capabilities:              (0x103b) SCT Status supported.
                                              SCT Error Recovery Control supported.
                                              SCT Feature Control supported.
                                              SCT Data Table supported.

      SMART Attributes Data Structure revision number: 10
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x000f 117   099   034    Pre-fail Always  -           162843537
        3 Spin_Up_Time            0x0003 100   100   000    Pre-fail Always  -           0
        4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  -           571
        5 Reallocated_Sector_Ct   0x0033 100   100   036    Pre-fail Always  -           0
        7 Seek_Error_Rate         0x000f 069   060   030    Pre-fail Always  -           17210154023
        9 Power_On_Hours          0x0032 095   095   000    Old_age  Always  -           174362787320258
       10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
       12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  -           571
      184 End-to-End_Error        0x0032 100   100   099    Old_age  Always  -           0
      187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
      188 Command_Timeout         0x0032 100   100   000    Old_age  Always  -           1
      189 High_Fly_Writes         0x003a 100   100   000    Old_age  Always  -           0
      190 Airflow_Temperature_Cel 0x0022 061   043   045    Old_age  Always  In_the_past 39 (0 11 44 26)
      191 G-Sense_Error_Rate      0x0032 100   100   000    Old_age  Always  -           84
      192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           20
      193 Load_Cycle_Count        0x0032 099   099   000    Old_age  Always  -           2434
      194 Temperature_Celsius     0x0022 039   057   000    Old_age  Always  -           39 (0 15 0 0)
      195 Hardware_ECC_Recovered  0x001a 041   041   000    Old_age  Always  -           162843537
      196 Reallocated_Event_Count 0x000f 095   095   030    Pre-fail Always  -           4540 (61955, 0)
      197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
      198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline -           0
      199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
      254 Free_Fall_Sensor        0x0032 100   100   000    Old_age  Always  -           0

      SMART Error Log Version: 1
      No Errors Logged

      SMART Self-test log structure revision number 1
      Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Extended offline  Completed without error  00%        4545             -

      SMART Selective self-test log data structure revision number 1
       SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
          1        0        0  Not_testing
          2        0        0  Not_testing
          3        0        0  Not_testing
          4        0        0  Not_testing
          5        0        0  Not_testing
      Selective self-test flags (0x0):
        After scanning selected spans, do NOT read-scan remainder of disk.
      If Selective self-test is pending on power-up, resume after 0 minute delay.

    Googling for the messages proved inconclusive; I can't even figure out whether they are routine or catastrophic. So, what do I do now?
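    Since the SMART self-test log shows a pass, a hedged next step is to rule out RAM and force a fresh disk self-test (commands assume Ubuntu and /dev/sda):

      # Userspace RAM test (or boot memtest86+ from the GRUB menu):
      sudo apt-get install memtester
      sudo memtester 1024M 3        # lock and test 1 GB of RAM, 3 passes

      # Fresh long SMART self-test; check the result when it completes (~109 min):
      sudo smartctl -t long /dev/sda
      sudo smartctl -l selftest /dev/sda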

    Read the article

  • Announcement: Ramp-Up Days

    - by michaela.seika(at)oracle.com
    We are glad to announce a new training format: Ramp-Up Days, to support some of the most preferred specialization topics (e.g., Oracle DB 11g Implementation Specialist and Oracle DB 11g RAC Implementation Specialist). It is a one-day, trainer-led course on one Oracle specialization topic. The course takes place at an Oracle partner with a Pearson VUE test center nearby. Partners can take their assessment at the end of the training and walk home as implementation specialists the very same day. Please have a look at our schedule of Ramp-Up Days so far (more to come): Ramp-Up Days in Germany & Switzerland; Ramp-Up Days in Italy.

    Read the article

  • Building an MVC application using QuickBooks

    - by dataintegration
    RSSBus ADO.NET Providers can be used from many tools and IDEs. In this article we show how to connect to QuickBooks from an MVC3 project using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process can be used with any of our ADO.NET Providers.

    Creating the Model

    Step 1: Download and install the QuickBooks Data Provider from RSSBus.
    Step 2: Create a new MVC3 project in Visual Studio. Add a data model to the Models folder using the ADO.NET Entity Data Model wizard.
    Step 3: Create a new RSSBus QuickBooks Data Source by clicking "New Connection", specify the connection string options, and click Next.
    Step 4: Select all the tables and views you need, and click Finish to create the data model.
    Step 5: Right-click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'.

    Creating the Controller and the Views

    Step 6: Add a new controller to the Controllers folder. Give it a meaningful name, such as ReceivePaymentsController. Also, make sure the template selected is 'Controller with empty read/write actions'. Before adding new methods to the controller, create views for your model. We will add the List, Create, and Delete views.
    Step 7: Right-click on the Views folder and go to Add -> View. Create a new view for each of the List, Create, and Delete templates. Make sure to also associate your model with the new views.
    Step 10: Now that the views are ready, go back and edit the ReceivePayments controller. Update your code to handle the Index, Create, and Delete methods (a sketch follows below).

    Sample Project

    We are including a sample project that shows how to use the QuickBooks Data Provider in an MVC3 application. You may download the C# project here or download the VB.NET project here. You will also need to install the QuickBooks ADO.NET Data Provider to run the demo. You can download a free trial here. To use this demo, you will also need to modify the connection string in the 'web.config'.
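    A sketch of what the edited controller might look like (the context and entity names below are hypothetical stand-ins for whatever the DbContext Generator produced in Step 5):

      using System.Linq;
      using System.Web.Mvc;

      public class ReceivePaymentsController : Controller
      {
          // Hypothetical DbContext generated from the data model.
          private readonly QuickBooksEntities db = new QuickBooksEntities();

          public ActionResult Index()
          {
              // List view: all payments from the provider-backed entity set.
              return View(db.ReceivePayments.ToList());
          }

          [HttpPost]
          public ActionResult Create(ReceivePayment payment)
          {
              if (!ModelState.IsValid) return View(payment);
              db.ReceivePayments.Add(payment);
              db.SaveChanges();
              return RedirectToAction("Index");
          }
      }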

    Read the article

  • How do I start the postgreSQL service upon boot?

    - by Homunculus Reticulli
    I am running PostgreSQL (v8.4) on Ubuntu 10.04. The PG service currently starts on reboot (after I installed PG on my machine); however, I want the service to use a new data directory. Currently, after a reboot, I have to:

      1. Stop the currently running PG service.
      2. Manually type: /usr/local/pgsql/bin/pg_ctl start -D /my/preffered/data/directory -l /usr/local/pgsql/data/logfile

    Which file do I need to edit to ensure that the service always uses the correct data folder?
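    One sketch of an answer, assuming PG was built from source (the /usr/local/pgsql prefix suggests it): the source tree ships an init script whose PGDATA variable controls the data directory the service starts with.

      # From the PostgreSQL source tree (the data path is copied from the question):
      sudo cp contrib/start-scripts/linux /etc/init.d/postgresql
      sudo sed -i 's|^PGDATA=.*|PGDATA="/my/preffered/data/directory"|' /etc/init.d/postgresql
      sudo chmod +x /etc/init.d/postgresql
      sudo update-rc.d postgresql defaults   # register the script for boot on Ubuntu 10.04

    If the boot-time service instead comes from the Ubuntu postgresql package, the equivalent knob is the data_directory setting in /etc/postgresql/8.4/main/postgresql.conf.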

    Read the article
