Search Results

Search found 9011 results on 361 pages for 'common'.

Page 104 of 361

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Conference (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22 to 25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented there.

    Pluralsight Shades

    This was a great event and I had fantastic fun presenting technology here. I was indeed very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and for making them standing room only every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here.

    Indexing is an Art

    I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and performance tuning.

    Audience at GIDS

    I believe events like this provide a great platform to meet and share knowledge.

    Pinal at Pluralsight Booth

    Here are the abstracts of the sessions I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

    Indexes, the Unsung Hero (Relevant Pluralsight Course)

    Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies in the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning, one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate indexes, redundant indexes, and missing indexes, as well as best practices around indexes.

    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (Relevant Pluralsight Course)

    Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and memory. In this session we will focus on detecting high CPU scenarios and their resolutions. If time permits we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance tuning and the best practices associated with it. We will discuss performance tuning in this session with the help of demos.
    Pinal Dave at GIDS

    MySQL Performance Tuning – Unexplored Territory (Relevant Pluralsight Course)

    Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but will also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance tuning and the best practices associated with it. You will also witness some impressive performance tuning demos in this session.

    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (Relevant Pluralsight Course)

    SQL Trio Session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done an amazing job of making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child's play to some, the reality is far from this notion. The basics and fundamentals are simple and uniform across databases, but the behavior and the nuts and bolts of SQL Server are something we need to master over a period of time. With a collective experience of more than 30 years on databases amongst the speakers, we will try to take a unique tour of various aspects of SQL Server and bring you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, productivity tips on tools and more. This is a highly demo-filled session with practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server.

    I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite, Govind Kanshi.

    Summary

    If you missed this event, here are two action items: 1) Sign up for the Resource Newsletter; 2) Watch my video courses on Pluralsight.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL. Tagged: GIDS

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-30

    - by Bob Rhubart
    The One Skill All Leaders Should Work On | Scott Edinger (blogs.hbr.org)
    Assertiveness, according to HBR blogger Scott Edinger, has the "power to magnify so many other leadership strengths."

    When Your Influence Is Ineffective | Chris Musselwhite and Tammie Plouffe (blogs.hbr.org)
    "Influence becomes ineffective when individuals become so focused on the desired outcome that they fail to fully consider the situation," say Chris Musselwhite and Tammie Plouffe.

    BPM in Retail Industry | Sanjeev Sharma (blogs.oracle.com)
    Sanjeev Sharma shares links to a pair of blog posts that address common BPM use-cases in the Retail industry.

    Oracle VM: What if you have just 1 HDD system | Yury Velikanov (www.pythian.com)
    "To start playing with Oracle VM v3 you need to configure some storage to be used for new VM hosts," says Yury Velikanov. He shows you how in this post.

    Thought for the Day
    "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger Dijkstra

    Read the article

  • Retrieve a command from another, remote bash session

    - by Oli
    So I was on our laptop, SSH'd into my desktop, dropping some mad bash-fu skills. There was one command I ran which was particularly skilful. I'm now about a minute's walk from the laptop and I really want that command here on my desktop, so that I can run it again. I realise I've already spent more time than it would have taken to rewrite it, but this has raised a common issue I have with bash history. I know I can force it to update after each command, but I haven't... so: is there any way to get the history from a different, live bash session?
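
    Not part of the original question, but a minimal sketch of the usual workaround, assuming both shells belong to the same user on the desktop and the remote (SSH) session can still be reached, e.g. from the laptop:

      # In the remote SSH session that ran the command:
      history -a                         # flush that session's new history lines to ~/.bash_history

      # In the local desktop session:
      history -n                         # read lines added to ~/.bash_history since this session started
      history | grep 'part-of-command'   # or search interactively with Ctrl-R

      # To avoid the problem in future, share history continuously (~/.bashrc):
      shopt -s histappend
      PROMPT_COMMAND='history -a; history -n'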

    Read the article

  • Obtaining In game Warcraft III data to program standalone AI

    - by Slav
    I am implementing a common-purpose behavioral algorithm and would like to test it in my lovely Warcraft III game and watch how it fights against real players. The problem is how to obtain information about the in-game state (units, structures, environment, etc.). The algorithm needs access to the hard drive and possibly distributed computing, which is why using JASS (the WC3 editor language) doesn't solve the issue. Direct3D hooking is one approach, but it hasn't been done for WC3 yet, and a significant drawback is the inability to watch online how the AI performs, since it uses the viewport to issue commands. How can in-game data be obtained by a different process in the fastest and easiest way? Thank you.

    Read the article

  • Does software rot refer primarily to performance, or to messy code?

    - by Kazark
    Wikipedia's definition of software rot focuses on the performance of the software. This is a different usage than I am used to; I had thought of it much more in terms of the cleanliness and design of the code—in terms of the code's having all the standard quality characteristics: readability, maintainability, etc. Now, performance is likely to go down when the code becomes unreadable, because no one knows what is going on. But does the term software rot have special reference to performance? Or am I right in thinking it refers to the cleanliness of the code? Or is this perhaps a case of multiple senses of the term being in common usage—from the user's perspective, it has to do with performance; but for the software craftsman, it has to do more specifically with how the code reads?

    Read the article

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed Solaris 11 includes OpenSSL 1.0.0, which is a different version to what we have in Solaris 10. I hope the following explains why that is and how it fits with the expectations on binary compatibility between Solaris releases.

    Solaris 10 was the first release where we included OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9). At the time we were building and releasing Solaris 10 the current train of OpenSSL was 0.9.7. The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high-level APIs you would be fine. For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10.

    Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in the older taxonomy):

        Volatile interfaces can change at any time and for any reason. The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer. The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers. It is also common to apply the Volatile classification level to interfaces in the process of being defined by trusted or widely accepted organizations. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development. Volatile can also be applied to experimental interfaces.

        No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release.

    Note that last paragraph! OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile. At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris-specific APIs that we have no intention of changing in an incompatible way. There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output not intended for scripting against, only to be read by humans).

    Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10 there are allowed exceptions, again quoting from attributes(5):

        4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface.
        5. Not making the incompatible change would be incomprehensible to our customers.

    In our opinion, and that of our large and small customers, keeping up with the OpenSSL community is important, and certainly both of the above cases apply.

    Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train). The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago, in Feb 2007.

    Now let's roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8, with the 1.0.0 release known to be coming soon. Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express. First, in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib, which means OpenSSL is now in /usr/lib; secondly, we upgraded it to the then current version stream of OpenSSL (0.9.8), as was expected by our customers.

    In between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011 the OpenSSL community released version 1.0.0. This was a huge milestone for a long-standing and highly respected open source project. It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release. It is the latest, best supported and best performing version. In fact Solaris 11 isn't 'just' OpenSSL 1.0.0: we have added our SPARC T4 engine and the AES-NI engine to support the on-chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL. The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11.

    When OpenSSL 1.0.1 and 1.1.0 come out we will assess what is best for Solaris customers. It might be an upgrade or it might be parallel delivery of more than one version stream. At this time Solaris 11 still classifies OpenSSL as a Volatile interface; it is our hope that at some point in a future release we will be able to give it a higher interface stability level.

    Happy crypting! And thank you, OpenSSL community, for all the work you have done that helps Solaris.

    Read the article

  • How to hide (in Thunar and Nautilus) a directory without putting a dot in its name?

    - by Ivan
    Usually Linux programs store a user's settings in ~/.* directories. But unfortunately some developers (of some applications I need) do not follow this rule and don't start the names of their settings storage folders with a dot. This results in folders the user never uses cluttering (not the right word perhaps, as there are not many, but they annoy anyway) the home directory. Renaming them is not an option, as the applications won't find them in that case (and will create them again). Is there a way to hide a folder whose name does not start with a dot from being displayed in common file system browsers? (I actually use Thunar on XFCE, alongside Midnight Commander and Krusader, but wouldn't mind knowing about Nautilus too.)
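
    Not part of the original question, but one commonly suggested option that avoids renaming: GTK-based file managers such as Nautilus and Thunar honour a plain-text .hidden file that lists names (one per line) to hide in that directory; Midnight Commander and Krusader may not respect it. A minimal sketch, using a hypothetical folder name:

      # Hide ~/SomeAppSettings from Nautilus/Thunar without renaming it:
      echo "SomeAppSettings" >> ~/.hidden
      # Refresh the file manager view (e.g. Ctrl+R) for the change to take effect.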

    Read the article

  • Dealing with a developer continuously ignoring edge cases in his work

    - by Alex N.
    I have an interesting, and I guess fairly common, issue with one of the developers in my team. The guy is a great developer: he works fast and productively, produces fairly good quality code, and all that. A good engineer. But there is a problem with him - very often he fails to address edge cases in his code. We have spoken with him about it many times and he is trying, but I guess he just doesn't think that way. So what ends up happening is that QA finds plenty of issues with his code and returns it for development again and again, ultimately resulting in missed deadlines and everyone in the team being unhappy. I don't know what to do with him or how to help him overcome this problem. Perhaps someone with more experience could advise? Thank you!

    Read the article

  • Can't decide between Java or Python for college [on hold]

    - by Will Harrison
    I'm returning to college in about a month for Computer Science. My problem is, I have been programming on the web since I left (4 years ago), using PHP, ASP.NET, and JavaScript. I want to bone up on a more general purpose language that is cross platform before I begin. I would also like to be using a language that is common at my school and I know that they teach the students C++, Java, and Python. I would like to choose between Java or Python but I'm not sure which one. What do you think would be better based on job prospects in the next 2 years and community?

    Read the article

  • Stored procedure Naming conventions?

    - by Chris
    One of our senior developers has stated that we should be using a naming convention for stored procedures with an "objectVerb" style of naming, such as "MemberGetById", instead of a "verbObject" style of naming ("GetMemberByID"). The reasoning for this standard is that all related stored procedures would be grouped together by object rather than by action. While I see the logic of this way of naming things, this is the first time I have seen stored procedures named this way. My opinion of the naming convention is that the name cannot be read naturally, and it takes some time to determine what the words are saying and what the procedure might do. What is your take on this? Which way is the more common way of naming a stored proc, and what types of stored proc naming conventions have you used or gone by?
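
    For illustration only (the dbo.Member table and its columns are hypothetical), here are the two conventions side by side; the object-first names sort next to each other in Object Explorer, which is the grouping argument described above:

      -- Object-first ("objectVerb"): related procedures cluster together alphabetically
      CREATE PROCEDURE dbo.MemberGetById @MemberId INT
      AS
          SELECT MemberId, Name FROM dbo.Member WHERE MemberId = @MemberId;
      GO

      -- Verb-first ("verbObject"): reads more naturally, but sorts by action instead
      CREATE PROCEDURE dbo.GetMemberById @MemberId INT
      AS
          SELECT MemberId, Name FROM dbo.Member WHERE MemberId = @MemberId;
      GO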

    Read the article

  • Java 7 Adoption at 79%

    - by Henrik Stahl
    According to a recent blog post from the cloud hosting company Jelastic, Java 7 adoption on their platform is now at 79%. While this is a single data point and should not be read too broadly, it does match other indicators we have that Java 7 is picking up, such as uptake among Oracle middleware customers, download statistics and online activity. The spike in adoption in April coincided with the release of JDK 7 Update 4. This is in line with our expectations since that release added Mac OS X support as well as java.com moving to Java 7 as the default download for end-users; two events that marked the maturity of Java 7 to the community. Since the original release of Java 7, Oracle has shipped 7 update releases, added ports to Mac OSX and Linux/ARM and expanded JavaFX to all common desktop platforms.

    Read the article

  • Cross platform mobile development VS Native Mobile Development: Present And Future.

    - by MobileDev123
    I just completed one year in smartphone development, working on BlackBerry and Android, and I also developed one application exclusively targeted at Nokia feature phones. Just a month ago I came to know about the Titanium Appcelerator tool that enables cross-platform development, but there are some developers who complain about its sub-par functionality. Even my limited experience says that developing in a native environment rather than with these cross-platform tools gives you more advantages, by giving a developer the chance to add more features with better performance. Do you have the same experience? Or do you find such cross-platform tools really useful with regard to advanced functionality and performance? As porting (or co-developing) the same application for different mobile platforms is a common thing nowadays, what do you think: will these cross-platform tools evolve and force developers to take a hands-on approach to them, or will the majority stick to native development environments?

    Read the article

  • IL and case-sensitivity

    - by Ali .NET
    Quoted from A Brief Introduction To IL code, CLR, CTS, CLS and JIT In .NET:

        CLS stands for Common Language Specification. It is a subset of the CTS. The CLS is a set of rules or guidelines which, if followed, ensure that code written in one .NET language can be used by another .NET language. For example, one rule is that we cannot have member functions with the same name differing only by case, i.e. we should not have add() and Add(). This may work in C# because it is case-sensitive, but if we try to use that C# code from VB.NET, it is not possible because VB.NET is not case-sensitive.

    Based on the above text I want to confirm two points here:

    1. Is this case-sensitivity rule a condition for member functions only, and not for member properties?
    2. Is it true that C# wouldn't be interoperable with VB.NET if it didn't take care of case sensitivity?
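
    A small C# sketch of the rule the quote describes (the Calculator class and its methods are hypothetical). When the assembly is marked CLS-compliant, the C# compiler emits a CLS-compliance warning for the pair of public methods below, because a case-insensitive consumer such as VB.NET could not tell them apart:

      using System;

      [assembly: CLSCompliant(true)]

      public class Calculator
      {
          // Legal C# (case-sensitive), but flagged as non-CLS-compliant because
          // the two public members differ only by case.
          public void add(int x) { Console.WriteLine(x); }
          public void Add(int x) { Console.WriteLine(x + 1); }
      }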

    Read the article

  • How can a student programmer improve his teamwork skill?

    - by xiao
    I am a student right now. Recently, I have been working on a project as the leader, with three other students. Due to our lack of experience, the project is progressing slowly and our members are frustrated. They do not feel a sense of accomplishment in the project. I am pressured and frustrated, too. As a team leader, I think I need to push them, but I do not know how. Should I help them solve coding problems, or just offer encouragement? If I pay too much attention to this, it will slow down my own progress. It is not a technical question, but it is very common in software development. I hope veteran programmers can give me some suggestions. Thanks!

    Read the article

  • Internal Mic Ubuntu 12.04 on Compaq Presario CQ50

    - by Genius FDE
    I have a problem with the internal microphone not working on Ubuntu 12.04. It looks like a common problem on many of the laptops running Ubuntu 12.04. I tried a lot of the possible solutions available online but nothing seems to work! sudo aplay -l gives:

      **** List of PLAYBACK Hardware Devices ****
      card 0: NVidia [HDA NVidia], device 0: CONEXANT Analog [CONEXANT Analog]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0

    By the way, the laptop has no HDMI port! I've seen the same problem with a Sony laptop, but there the mic was working, just with too much static noise. Thank you.

    Read the article

  • Cyclic Dependencies.

    - by PhilCK
    Are cyclic dependencies a common thing in games dev? I ask as I keep getting into situations where I'm using them, and I have been told more than once that they should be avoided. I am wondering if this is just what people say as a general rule of thumb in the software development business, and whether the nature of game programming produces such dependencies.

      // Foo
      #include <Bar.hpp>

      class Foo
      {
          Bar& m_bar;
      };

      // Bar
      class Foo;

      class Bar
      {
          Foo* m_foo;
      };

    I do this a lot in Ruby, but dynamic languages are more forgiving in this instance, whereas static ones, not so much.

    Read the article

  • *Code owner* system: is it an efficient way?

    - by sergzach
    There is a new developer on our team. An agile methodology is in use at our company, but the developer comes from a different background: he considers that particular parts of the code must be assigned to particular developers. So if one developer has created a procedure or module, it would be considered normal that all changes to that procedure/module are made by him only. On the plus side, supposedly, with the proposed approach we save overall development time, because each developer knows his part of the code well and makes fixes fast. The downside is that developers don't know the system as a whole. Do you think the approach will work well for a medium-size system (development of a social network site)?

    Read the article

  • Using Java classes in JRuby

    - by kerry
    I needed to do some reflection today, so I fired up interactive JRuby (jirb) to inspect a jar. Surprisingly, I couldn't remember exactly how to use Java classes in JRuby. So I did some searching on the internet and found this to be a common question with many answers. So I figure I will document it here in case I forget how in the future. Add its folder to the load path, require it, then use it!

      $: << '/path/to/my/jars/'
      require 'myjar'

      # so we don't have to reference it absolutely every time (optional)
      include Java::com.goodercode

      my_object = SomeClass.new

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly.

    You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build, so you can fix and redeploy the database very quickly. The alternative approach (i.e. making changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views.

    Using SQL Projects in SSMS

    A great many developers fail to make use of SQL Projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project name. The number in the folder or file name ensures that the SQL scripts are concatenated together in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc.

    Concatenating SQL Scripts

    To concatenate the SQL scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL script files into a combined file. As the SQL scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file, while two angled brackets (>>) mean append the output of the program to a file. So the command line DIR > filelist.txt would write the output of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag.

    Using SQLCmd to execute the concatenated file

    Now that the SQL scripts are all in one big file, we can execute the script against a database using SQLCmd in another batch file, as shown below. SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log.
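
    Since the screenshots are not reproduced in this excerpt, here is a minimal sketch of the two batch files described above. The SB_DDS.sql file, the SB_DDS_DB database and the ATTRIB/TYPE/SQLCMD commands come from the text; the folder names and the local server connection are assumptions:

      REM concat_scripts.bat - concatenate the project scripts into one file
      ATTRIB -R SB_DDS.sql
      TYPE "100 Tables\*.sql"            >  SB_DDS.sql
      TYPE "200 Views\*.sql"             >> SB_DDS.sql
      TYPE "300 Stored Procedures\*.sql" >> SB_DDS.sql

      REM deploy_db.bat - run the concatenated script and log errors
      SQLCMD -S localhost -E -d SB_DDS_DB -i SB_DDS.sql -o SB_DDS.log
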
    So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition.

    Using SSIS to execute the concatenated file

    To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package, set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.

    Tips and Tricks

    Add a new-line at the end of every file. The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix is to ensure all your files have a new-line at the end.

    Remove all USE database statements. SQLCmd identifies which database the script should be run against, so you should remove all USE database commands from your scripts - otherwise you may get unintentional side effects!

    Do the CREATE DATABASE separately. If you are using SSIS to create the database as well as create the objects and populate it, then invoke the CREATE DATABASE command against the master database using a separate package, before calling the package that executes the concatenated SQL script.

    Read the article

  • "AND Operator" in PAM

    - by d_inevitable
    I need to prevent users from authenticating through Kerberos when the encrypted /home/users has not yet been mounted. (This is to avoid corrupting the ecryptfs mountpoint.) Currently I have these lines in /etc/pam.d/common-auth:

      auth required                   pam_group.so use_first_pass
      auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000 try_first_pass
      auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass

    I am planning to use pam_exec.so to execute a script that will exit 1 if the ecryptfs mounts are not ready yet. Doing this:

      auth required pam_exec.so /etc/security/check_ecryptfs

    will lock me out for good if ecryptfs for some reason fails. In that case I would like to at least be able to log in with a local (non-Kerberos) user to fix the issue. Is there some sort of AND operator with which I can say that login through Kerberos+LDAP is only sufficient if both the Kerberos authentication and the ecryptfs mount have succeeded?
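
    Not an answer from the original thread, but one way this is commonly expressed with PAM's skip syntax, shown here as a sketch only. It assumes the stock Debian/Ubuntu common-auth tail of pam_deny/pam_permit after these lines, and that /etc/security/check_ecryptfs exits non-zero when the mount is not ready; if the check fails, the Kerberos line is skipped, so only the local pam_unix entry can satisfy the stack:

      auth required                   pam_group.so use_first_pass
      # If check_ecryptfs fails, jump over the next 1 module (pam_krb5), so Kerberos
      # alone can never succeed while the ecryptfs mount is missing.
      auth [success=ignore default=1] pam_exec.so quiet /etc/security/check_ecryptfs
      auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000 try_first_pass
      auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass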

    Read the article

  • Has RFC2324 been implemented?

    - by anthony-arnold
    I know RFC2324 was an April Fools joke. However, it seems pretty well thought out, and after reading it I figured it wouldn't be out of the question to design an automated coffee machine that used this extension to HTTP. We programmers love to reference this RFC when arguing web standards ("418 I'm a Teapot lolz!") but the joke's kind of on us. Ubiquitous computing research assumes that network-connected coffee machines are probably going to be quite common in the future, along with Internet-connected fruit and just about everything else. Has anyone actually implemented a coffee machine that is controlled via HTCPCP? Not necessarily commercial, but hacked together in a garage, maybe? I'm not talking about just a web server that responds to HTCPCP requests; I mean a real coffee machine that actually makes coffee. I haven't seen an example of that.

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
    In my previous posting I introduced the latest Oracle Real Application Clusters / Oracle VM template. I mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going.

    Here is an example... I built a 4 node RAC cluster, completely configured, in just over 40 minutes - starting from importing the template into Oracle VM and creating the VMs, to a fully up and running Oracle RAC. And what was needed? One text file with some hostnames and IP addresses, and deploycluster.py.

    The setup is a 4 node cluster where each VM has 8GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100GB, 5 x 20GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed. In fact, I connected to the database remotely before logging in at all.

      # ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
      Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation
      (com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64)
      Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012)
      Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini

      INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
      Password:
      INFO: Attempting to connect to Oracle VM Manager...
      INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e)
      INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined...
      INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini
      INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3']
      INFO: Verifying all (4) VMs are in Running state
      INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK.
      INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK.
      INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK.
      INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK.
      INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5)
      INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
            netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini
            buildcluster: yes
      INFO: Starting to send cluster details to all (4) VM(s).......
      INFO: Sending to VM with a simple name of "racovm.0"....
      INFO: Sending to VM with a simple name of "racovm.1".....
      INFO: Sending to VM with a simple name of "racovm.2".....
      INFO: Sending to VM with a simple name of "racovm.3"......
      INFO: Cluster details sent to (4) VMs...
            Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)...
      INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s)
      Logfile at: /root/rac/deploycluster/deploycluster2.log

    My netconfig.ini:

      # Node specific information
      NODE1=db11rac1
      NODE1VIP=db11rac1-vip
      NODE1PRIV=db11rac1-priv
      NODE1IP=192.168.1.56
      NODE1VIPIP=192.168.1.65
      NODE1PRIVIP=192.168.2.2

      NODE2=db11rac2
      NODE2VIP=db11rac2-vip
      NODE2PRIV=db11rac2-priv
      NODE2IP=192.168.1.58
      NODE2VIPIP=192.168.1.66
      NODE2PRIVIP=192.168.2.3

      NODE3=db11rac3
      NODE3VIP=db11rac3-vip
      NODE3PRIV=db11rac3-priv
      NODE3IP=192.168.1.173
      NODE3VIPIP=192.168.1.174
      NODE3PRIVIP=192.168.2.4

      NODE4=db11rac4
      NODE4VIP=db11rac4-vip
      NODE4PRIV=db11rac4-priv
      NODE4IP=192.168.1.175
      NODE4VIPIP=192.168.1.176
      NODE4PRIVIP=192.168.2.5

      # Common data
      PUBADAP=eth0
      PUBMASK=255.255.255.0
      PUBGW=192.168.1.1
      PRIVADAP=eth1
      PRIVMASK=255.255.255.0
      RACCLUSTERNAME=raccluster
      DOMAINNAME=wimmekes.net
      DNSIP=

      # Device used to transfer network information to second node
      # in interview mode
      NETCONFIG_DEV=/dev/xvdc

      # 11gR2 specific data
      SCANNAME=db11vip
      SCANIP=192.168.1.57

    The last few lines of the in-VM log file:

      2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s)
      2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster
      2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s)

    From start_vm to completely configured: 29m:39s. The other 10 minutes went into importing the template and creating the 4 VMs from it, along with the shared storage configuration. This consists of a complete Oracle 11gR2 RAC database with ASM, CRS and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.

    Read the article

  • Code sample after Job offer?

    - by mdominick
    I was verbally offered a job and the manager insisted that I start two days after the interview. I left the interview unsure of the offer; the manager called me later that day and I agreed to take the position. At this point, I was told that I would get an offer letter the following day and would start the day after that. Later that evening I was asked for a code sample. I have yet to receive the offer letter, and the business day is about to close. I've been mostly contracting and usually answer technical questions or show samples at the beginning of the process, so I find this situation somewhat odd. Is this a common practice? Should I call the manager before business closes?

    Read the article

  • TFS: Work Items values from External Databases

    - by javarg
    A common question in TFS forums is how to populate list items in Work Items from external sources. Well, there is no specific functionality to integrate Work Items with external databases or systems when designing them. Instead, you will need to associate your Work Item fields with Global Lists and then have some automated process update the global list regularly. Download the ImportGlobalList.zip file. I've put together a simple class (TfsGlobalList) that you can use to update global list items from a .NET application. You could, for example, create a simple Console App and schedule it using Windows Scheduler. This app would query a database and then update a TFS Global List using the provided code. Note: the provided code must be run under an account with modify Global List permissions in TFS. Note: remember to refresh Team Explorer in order to see updates in Work Item field values. Enjoy!

    Read the article

  • Health problem of a programmer

    - by gunbuster363
    Hi all, I've been annoyed by this finger ache for quite a long time. My fingers ache because of too much mouse clicking during office hours plus playing games after work. I gave up games for a while and my fingers got better, but my right index finger still feels pressure when I click the mouse. I haven't gone to a doctor because I'm afraid the fee would be high and he would just suggest that I rest my fingers; also, I don't know what kind of doctor I should go and see. My fingers feel less pressure when I use my expensive DeathAdder at home (what a shame, I bought it for gaming, but now I use it for rest) because its buttons are softer; however, I cannot have such an expensive mouse at my office because I am afraid people would steal it. I use some tricks when using the mouse, such as single-click to open a file and adding more shortcuts to the desktop for common jobs. Do you guys have some other tips for me? Thank you.

    Read the article
