Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.

Page 217/637

  • Implicit and Explicit implementations for Multiple Interface inheritance

    The following C#.NET demo explains all the scenarios for implementing interface methods in a class. There are two ways you can implement an interface method in a class: 1. implicit implementation, 2. explicit implementation. Please go through the sample.

    using System;

    namespace ImpExpTest
    {
        class Program
        {
            static void Main(string[] args)
            {
                C o3 = new C();
                Console.WriteLine(o3.fu());

                I1 o1 = new C();
                Console.WriteLine(o1.fu());

                I2 o2 = new C();
                Console.WriteLine(o2.fu());

                var o4 = new C();       // var is inferred as C
                Console.WriteLine(o4.fu());

                var o5 = (I1)new C();   // var is inferred as I1
                Console.WriteLine(o5.fu());

                var o6 = (I2)new C();   // var is inferred as I2
                Console.WriteLine(o6.fu());

                D o7 = new D();
                Console.WriteLine(o7.fu());

                I1 o8 = new D();
                Console.WriteLine(o8.fu());

                I2 o9 = new D();
                Console.WriteLine(o9.fu());
            }
        }

        interface I1
        {
            string fu();
        }

        interface I2
        {
            string fu();
        }

        class C : I1, I2
        {
            #region Implicitly Defined Members
            public string fu()
            {
                return "Hello C";
            }
            #endregion

            #region Explicitly Defined I1 Members
            string I1.fu()
            {
                return "Hello from I1";
            }
            #endregion

            #region Explicitly Defined I2 Members
            string I2.fu()
            {
                return "Hello from I2";
            }
            #endregion
        }

        class D : C
        {
            #region Implicitly Defined Members
            // 'new' makes the hiding of C.fu() explicit.
            public new string fu()
            {
                return "Hello from D";
            }
            #endregion
        }
    }

    Output:

    Hello C
    Hello from I1
    Hello from I2
    Hello C
    Hello from I1
    Hello from I2
    Hello from D
    Hello from I1
    Hello from I2

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again here. I'm considering developing a real-time multi-user game, and I want to gather some information about the possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase. In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently captures the other person. They then return to that person's base. Play continues until everyone is part of the same team. So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for the sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base is that is online, and the two teams play their game of virtual tag. Note that the other user's closest base could be different from the one run by the current user, in which case he would be in two simultaneous battles, one with each base. When users go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have automaton logic of some sort, so they can fend for themselves in a limited way using basic rules until their user comes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic. I think this analogy is good enough to frame my question. What sort of platforms are available to develop this sort of game? I've been looking at smartfoxserver, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. I will be developing for iOS devices at first. So any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darebase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.
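
    As a minimal C# sketch of the capture rule described above (the types and names here are purely illustrative, not part of the original question):

    using System;

    class Player
    {
        public string Team;
        public DateTime LeftBaseAt;   // when this player last left a base

        public Player(string team, DateTime leftBaseAt)
        {
            Team = team;
            LeftBaseAt = leftBaseAt;
        }
    }

    static class TagRules
    {
        // When two players meet in the field, the one who left base more
        // recently captures the other; the captured player switches teams.
        public static void ResolveEncounter(Player a, Player b)
        {
            Player winner = a.LeftBaseAt > b.LeftBaseAt ? a : b;
            Player loser = ReferenceEquals(winner, a) ? b : a;
            loser.Team = winner.Team;
        }
    }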

    Read the article

  • Wireframing: A Day In the Life of UX Workshop at Oracle

    - by ultan o'broin
    The Oracle Applications User Experience team's Day in the Life (DITL) of User Experience (UX) event was run in Oracle's Redwood Shores HQ for Oracle Usability Advisory Board (OUAB) members. I was charged with putting together a wireframing session, together with Director of Financial Applications User Experience, Scott Robinson (@scottrobinson). Example of the stunning new wireframing visuals we used at the DITL events. We put on a lively show, explaining the basics of wireframing, the concepts, what it is and isn't, considerations in wireframing tool choice, and then imparting some tips and best practices. But the real energy came when the OUAB customers and partners in the room were challenged to do some wireframing of their own. Wireframing is about bringing your business and product use cases to life in real UX visual terms, by creating a low-fidelity drawing to iterate and agree on in advance of prototyping and coding what is to be finally built and rolled out for users. All the best people wireframe. Leonardo da Vinci used "cartoons" on some great works, tracing outlines first and using red ochre or charcoal dropped through holes in the tracing parchment onto the canvas to outline the subject. (Image distributed under Wikimedia Commons license.) Wireframing an application's user experience design enables you to:

    Obtain stakeholder buy-in.
    Enable faster iteration of different designs.
    Determine the task flow navigation paths (in Oracle Fusion Applications, navigation is linked with user roles).
    Develop a content strategy (readability, search engine optimization (SEO) of content, and so on).
    Lay out the pages, widgets, groups of features, and so on.
    Apply usability heuristics early (no replacement for usability testing, but a great way to do some heavy lifting up front).
    Decide upstream which functional user experience design patterns to apply (out-of-the-box solutions that expedite productivity).
    Assess which Oracle Application Development Framework (ADF) or equivalent technology components can be used (again, developer productivity is enhanced downstream).

    We ran a lively hands-on exercise where teams wireframed a choice of application scenarios using the time-honored tools of pen and paper. Scott worked the floor like a pro, pointing out great use of features, best practices, and innovations, and making sure that the whole concept of wireframing, the gestalt, transferred. "We need more buttons!" The cry of the energized. Not quite. The winning wireframe (an online shopping scenario) from the Applications UX DITL event is shown. Great fun, great energy, and great teamwork were evident in the room. Naturally, there were prizes for the best wireframe. Well, actually, prizes were handed out to the other attendees too! An exciting, slightly different aspect to the delivery of this session made the wireframing event one of the highlights of the day. And it is definitely something we will repeat when we get the chance. Thanks to everyone who attended, contributed, and helped organize.

    Read the article

  • Getting to math applications gradually

    - by den-javamaniac
    I'm currently getting a formal degree related to computation; in particular, my current focus is numerical programming, scientific computing and machine learning. I'd love to apply that knowledge in game dev and expand it with statistics, probability theory, and graph theory (probably even linear algebra). The question is: which spheres of gamedev are filled with such math, is it possible to advance in them without being part of a group of people, and how do I get into it gradually? P.S.: I've got experience with commercial Java dev and am getting my hands on C/C++ at the moment; however, I'm open to going ahead and trying Unity3D, etc.

    Read the article

  • Good application for taking notes on 12.04?

    - by Ankit
    Is there a good note-taking application for Ubuntu, like Evernote for Windows and Mac users? Requirements for a good application:

    A thumbnail/list preview for all the created notes.
    Integration with an email address so the notes can be viewed anywhere, anytime.
    An easy way to input text, video, PDF, images, etc.

    I have tried Tomboy and Basket Notes. They aren't that good. Edit: I'm trying to install Nevernote from an external source, but it has unsatisfiable dependencies. Further trying to install libssl gives the following error:

    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     libssreflect-coq : Depends: coq-8.3pl3+3.12.1 but it is not installable
     libssreflect-ocaml : Depends: libcoq-ocaml-4zyg6 but it is not installable
     libssreflect-ocaml-dev : Depends: libcoq-ocaml-dev-4zyg6 but it is not installable
    E: Unable to correct problems, you have held broken packages.

    Anything that I might be doing wrong?

    Read the article

  • Comprehensive system for documentation and handoff of developer project

    - by Uzumaki Naruto
    I work on a technology team that typically develops projects for a period of time and then hands them off to other groups for long-term maintenance and improvements. My team currently uses ad hoc methods of handing off documentation, such as diagrams, API references, etc. Is there an open source solution (or even a proprietary one) that enables us to manage:

    Infrastructure/architecture/software diagrams
    API documentation
    Directory structures/file structures
    Overall documentation summaries

    in one place? E.g., instead of using multiple systems like Swagger, wikis, etc., is there a solution that can seamlessly combine all of these and enable us to generate a package including all four key items with one click to hand off to other teams?

    Read the article

  • SQL Source Control with Vault Support Officially Released

    - by Ajarn Mark Caldwell
    HOORAY!  It is officially here!  Today, Red-Gate officially released SQL Source Control version 2.1 with support for Vault. While we have been happily and successfully running the beta version (a.k.a. the Early Access release) of Red-Gate SQL Source Control with support for Vault for quite a while, it is good to have the official RTM (or GOLD, or PROD, or whatever you call your “no-longer-in-beta”) release of the product. As a courtesy to those who have not already read the series, allow me to provide you with these links to my previous posts about this fantastic tool. Using SQL Source Control with Fortress or Vault – Part 1 – Introduction and initial thoughts about the tool and source controlling SQL code in general. Using SQL Source Control with Fortress or Vault – Part 2 – Additional details about included features and a few warnings. Source Control and SQL Development – Part 3 – How we did it in the good ol’ days before this product came along. Using SQL Source Control and Vault Professional Part 4 – A few closing thoughts.

    Read the article

  • Don't Call it a Comeback

    - by Chris Haaker
    I received the email, like most of you, about Jeff and crew stepping down and selling the blog to another company. Given that it is a long-time associate and friend of the team we have all grown to know and love, I feel much better about the move. Who cares, Chris, you haven't blogged religiously in ages! I know, and it's a crime. Blame life, Twitter, my kids, laziness or whatever else you can think of. I always tell myself I am going to make a comeback: "Don't call it a comeback - I been here for years." But after a few posts I seem to lose my steam. It's hard to explain; hell, I can't explain it. But we'll see what happens this time. Just don't call it a comeback.  2012 rMBP 15" Quad Core 2.33 GHz 16GB Memory 258GB SSD. MarsEdit 3.5 (Please, Microsoft Live Team: make LiveWriter for OS X.)

    Read the article

  • DotNetNuke 5.4.1 Released

    I am happy to announce the release of DotNetNuke 5.4.1, which corrects the major issues that slipped through the QA process for 5.4. While we try to do a good job of testing our releases, our recent efforts for 5.3 and 5.4 have fallen short of the mark. We are currently working with a small team of commercial module developers and the core team to put a better public beta testing process in place that will help augment our own internal testing. Ultimately, community testing is the only testing that...

    Read the article

  • Lifecycle of an ASP.NET MVC 5 Application

    Here you can download a PDF document that charts the lifecycle of every ASP.NET MVC 5 application, from receiving the HTTP request to sending the HTTP response back to the client. It is designed both as an educational tool for those who are new to ASP.NET MVC and as a reference for those who need to drill into specific aspects of the application. The PDF document has the following features:

    Relevant HttpApplication stages to help you understand where MVC integrates into the ASP.NET application lifecycle.
    A high-level view of the MVC application lifecycle, where you can understand the major stages that every MVC application passes through in the request processing pipeline.
    A detail view that drills down into the details of the request processing pipeline. You can compare the high-level view and the detail view to see how the lifecycle's details are collected into the various stages.
    Placement and purpose of all overridable methods on the Controller object in the request processing pipeline. You may or may not need to override any one method, but it is important for you to understand their role in the application lifecycle so that you can write code at the appropriate lifecycle stage for the effect you intend.
    Blown-up diagrams showing how each of the filter types (authentication, authorization, action, and result) is invoked.
    A link to a useful article or blog post from each point of interest in the detail view.
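
    As a quick illustration of those overridable Controller methods, here is a minimal sketch (my own example, not code from the PDF; OnActionExecuting and OnActionExecuted are the standard System.Web.Mvc overrides):

    using System.Diagnostics;
    using System.Web.Mvc;

    public class AuditedController : Controller
    {
        // Invoked just before the action method runs.
        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            Debug.WriteLine("Entering " + filterContext.ActionDescriptor.ActionName);
            base.OnActionExecuting(filterContext);
        }

        // Invoked after the action method returns, before the result executes.
        protected override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            Debug.WriteLine("Leaving " + filterContext.ActionDescriptor.ActionName);
            base.OnActionExecuted(filterContext);
        }
    }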

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to a point where we had to print out the so-called SpeedPASSes for attendees. This SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets. The problem is we have to download one PDF per attendee and print that out, and printing more than 10 docs at once is a pain. So I decided to make a little console app that would merge multiple PDF files into a single file that would be much easier to print. I used an open source PDF manipulation library called iTextSharp, version 5.4.5.0. This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you: it can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing the PDF files to be merged into a single file.

    using iTextSharp.text;
    using iTextSharp.text.pdf;
    using System;
    using System.IO;

    namespace MergeSpeedPASS
    {
        class Program
        {
            static void Main(string[] args)
            {
                if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
                {
                    Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                    Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                    Console.WriteLine("");
                    Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                    Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                    Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
                    Console.WriteLine("");
                    Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
                }
                else if (args.Length == 2)
                {
                    CreateMergedPDF(args[0], args[1]);
                }

                Console.WriteLine("");
                Console.WriteLine("Press any key to exit...");
                Console.Read();
            }

            static void CreateMergedPDF(string targetPDF, string sourceDir)
            {
                using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
                {
                    Document pdfDoc = new Document(PageSize.A4);
                    PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                    pdfDoc.Open();

                    // Picks up every file in the directory; they are all
                    // expected to be PDFs.
                    var files = Directory.GetFiles(sourceDir);
                    Console.WriteLine("Merging files count: " + files.Length);
                    int i = 1;
                    foreach (string file in files)
                    {
                        Console.WriteLine(i + ". Adding: " + file);
                        pdf.AddDocument(new PdfReader(file));
                        i++;
                    }

                    pdfDoc.Close();
                    Console.WriteLine("SpeedPASS PDF merge complete.");
                }
            }
        }
    }

    Hope it helps you, and have fun.

    Read the article

  • Handling SQL Server Errors

    This article covers the basics of TRY...CATCH error handling in T-SQL, introduced in SQL Server 2005. It includes the usage of the common functions that return information about the error, and the use of the TRY...CATCH block in stored procedures and transactions.
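
    As a minimal sketch of the idea (my own illustration, not code from the article), the following C# console program runs a T-SQL TRY...CATCH batch and reads back the error details that the CATCH block returns; the connection string is a placeholder:

    using System;
    using System.Data.SqlClient;

    class TryCatchDemo
    {
        // A T-SQL batch that traps an error and returns the standard
        // error functions as a result set.
        const string Batch = @"
    BEGIN TRY
        RAISERROR('Something went wrong.', 16, 1);
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER()   AS ErrorNumber,
               ERROR_SEVERITY() AS ErrorSeverity,
               ERROR_LINE()     AS ErrorLine,
               ERROR_MESSAGE()  AS ErrorMessage;
    END CATCH;";

        static void Main()
        {
            // Placeholder connection string; point it at your own instance.
            using (var conn = new SqlConnection(@"Server=.;Database=tempdb;Integrated Security=true"))
            using (var cmd = new SqlCommand(Batch, conn))
            {
                conn.Open();
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine("Error {0} (severity {1}, line {2}): {3}",
                            rdr["ErrorNumber"], rdr["ErrorSeverity"],
                            rdr["ErrorLine"], rdr["ErrorMessage"]);
            }
        }
    }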

    Read the article

  • [News] ASP.NET V4 will offer SEO tools

    In his latest post, Scott Guthrie presents the new features in ASP.NET V4 for managing search engine optimization (SEO), a subject that is often tricky in modern web frameworks. He discusses new SEO-related properties for generating meta tags, a new cross-page routing mechanism, and a dedicated toolkit.

    Read the article

  • Dump debugging with Parallel Stacks

    One of the areas where we fixed many bugs for Beta 2 in our parallel debugging windows is managed dump debugging. So it was really cool to see Tess use the Parallel Stacks window in that scenario in her video demo with Scott. Other than the neat ability to open managed dumps in VS2010, Parallel Stacks was the only debugging feature she needed to diagnose the issue! Check out the video; it is definitely worth 10 minutes of your time. Comments about this post are welcome at the original blog.

    Read the article

  • SQL Saturday #146 : Nashua, NH

    - by AaronBertrand
    Today was SQL Saturday #146, put on by Mike Walsh, Jack Corbett, and a host of other volunteers and organizers. Scott and I missed the speaker dinner last night, but we headed up from Rhode Island at 6:00 AM and made a good day of it. We had lots of great conversations with both existing friends and potential customers. After lunch I participated in a panel discussion with Joey D'Antoni and Andrew Kelly, led by Mike. We basically talked about various things DBAs are responsible for - and ultimately...(read more)

    Read the article

  • New Whitepaper: Deploying E-Business Suite on Exadata and Exalogic

    - by Elke Phelps (Oracle Development)
    Our E-Business Suite Performance Team recently published a new whitepaper to assist you with deploying E-Business Suite on the Oracle Exalogic Elastic Cloud and the Oracle Exadata Database Machine, together referred to as Exastack. If you are considering a migration to Exastack, this new whitepaper will assist you in understanding sizing requirements, deployment standards and migration strategies: Deploying Oracle E-Business Suite on Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine (Note 1460742.1). This whitepaper covers the following topics:

    Scalability and Sizing Examples - provides performance benchmark analysis with concurrent user counts, scaling analysis and sizing recommendations
    Deployment Standards - includes recommendations for deploying the various components of the E-Business Suite architecture on Exastack
    Migration Standards and Guidelines - includes an overview of methods for migrating from commodity hardware to Exastack

    References: Our Maximum Availability Architecture (MAA) team has a number of whitepapers that provide additional information regarding Oracle E-Business Suite on the Oracle Exadata Database Machine. Their library of whitepapers may be found here: MAA Best Practices - Oracle Applications Unlimited. Related articles: Running E-Business Suite on Exadata V2; Running Oracle E-Business Suite on Exalogic Elastic Cloud.

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^) Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone who has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be for everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply that by 8 platforms, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^) Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word; I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think of it, I may be one from time to time. :\^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • Collaboration platforms

    - by Thomas
    Are there any good collaboration platforms for game development? This would include the following features:

    An easy way to find the various people you need to build games (programmer, artist, etc.) and form a team, like CodePlex for example
    An online portfolio for users where they can offer their services (either paid or free)
    The possibility to create a game-specific blog or site with social media integration to show the world what's being created
    An easy way to manage game content/resources with sufficient online storage, version control and, if possible, source control
    A way to manage all phases of game development (startup, creating a concept, finding a team, creating a proof of concept, production, etc.) and publish phase-specific information, also on social media
    A way to manage the asset creation flow (a request for specific content like a sound, production of the sound, uploading it, notification to the requester, implementation of the file, retouching over several cycles, etc.)

    Read the article

  • What to do with my "unmounted drive"?

    - by Taylor Guistwite
    I just recently followed the tutorial at http://www.ubuntu.com/download/ubuntu/download for installing Ubuntu Server onto my 1TB Seagate external drive. I was planning to use this to install it on my MacBook, and the instructions say to perform this command: diskutil unmountDisk /dev/diskN (replace N with the disk number from the last command; in the previous example, N would be 2). Now my HD prompts "The disk you inserted was not readable by this computer". Would I just run diskutil mountDisk /dev/diskN in order to be able to access all my files again? Here is a screenshot of the instructions I followed: http://i17.photobucket.com/albums/b97/hello_screamo/Screenshot2011-11-11at113914AM.png

    Read the article

  • CentOS - Configuring Puppet to play nice with SELinux

    - by Mike Purcell
    I am running into an issue every time I attempt to start the puppetmasterd service; I receive the following error message:

    root@service1 ~ # -> /etc/init.d/puppetmaster start
    Starting puppetmaster: Could not prepare for execution: Got 1 failure(s) while initializing: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/ssl [FAILED]

    Apparently there was a known issue with this scenario as outlined in this bug report; however, the bug report states the issue has been resolved in selinux-policy-3.9.16-29.fc15, but the latest CentOS default upstream version is 3.7.19-155.el6_3.4. So I am trying to figure out the best solution. I can either create a local security policy to allow puppetmasterd the access it needs, or keep researching and install a newer version of selinux-policy outside of the default upstream channel. Anyone have any recommendations? Please don't recommend disabling SELinux...

    ----- Update -----

    Here is the puppet.conf:

    [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl

    [master]
        certname=puppetmaster.ownij.lan
        dns_alt_names=puppetmaster.ownij.lan

    [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

        server=puppetmaster.ownij.lan

    And here are the denials per the audit log:

    type=AVC msg=audit(1349751364.985:666): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751364.985:666): arch=c000003e syscall=4 success=no exit=-13 a0=1391420 a1=7fffef09ed10 a2=7fffef09ed10 a3=120c500 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751365.302:667): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751365.302:667): arch=c000003e syscall=4 success=no exit=-13 a0=1d18530 a1=7fffef0d04d0 a2=7fffef0d04d0 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751365.465:668): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751365.465:668): arch=c000003e syscall=4 success=no exit=-13 a0=1af3930 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751365.467:669): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751365.467:669): arch=c000003e syscall=4 success=no exit=-13 a0=1b17aa0 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    type=AVC msg=audit(1349751366.401:670): avc: denied { write } for pid=15093 comm="puppetmasterd" name="puppet" dev=dm-0 ino=132035 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:puppet_etc_t:s0 tclass=dir
    type=SYSCALL msg=audit(1349751366.401:670): arch=c000003e syscall=83 success=no exit=-13 a0=2d7a400 a1=1f9 a2=2d7a40f a3=7fffef0a6df0 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)

    And the audit log if I pass it through audit2allow:

    root@service1 ~ # -> fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -m puppetmasterd

    module puppetmasterd 1.0;

    require {
        type home_root_t;
        type puppetmaster_t;
        type puppet_etc_t;
        type puppet_var_run_t;
        type httpd_sys_content_t;
        class lnk_file { relabelfrom relabelto };
        class file { relabelfrom read getattr open };
        class dir { write read search getattr setattr };
    }

    #============= puppetmaster_t ==============
    allow puppetmaster_t home_root_t:dir { search getattr };
    allow puppetmaster_t httpd_sys_content_t:dir read;
    allow puppetmaster_t httpd_sys_content_t:file { read getattr open };
    #!!!! The source type 'puppetmaster_t' can write to a 'dir' of the following types:
    # puppet_log_t, puppet_var_lib_t, puppet_var_run_t, puppetmaster_tmp_t
    allow puppetmaster_t puppet_etc_t:dir { write setattr };
    allow puppetmaster_t puppet_etc_t:lnk_file { relabelfrom relabelto };
    allow puppetmaster_t puppet_var_run_t:file relabelfrom;

    Read the article

  • Corticon provides Business Rules Engines for Silverlight, WCF and .NET developers

    Corticon Business Rules Engine and Business Rules Management System users can now enjoy support for the Windows 7 operating system, as well as support for Silverlight and Windows Communication Foundation developers. The new Corticon 4.3 provides numerous performance, usability, and integration enhancements, and provides the industry-first cloud deployment option for a business rules engine.

    Read the article

  • Help make the next Summit even better

    - by Bill Graziano
    After the Summit we send out a survey to capture feedback.  We ask a consistent set of questions so we get good year over year results.  I’ve watched blog posts and email threads with ideas for a better Summit.  I got to sit with Denny and crew again on Saturday night and talk about what worked and what didn’t.  We’d like to capture those ideas in a way that you can vote on what’s important to you.  Please take a second and visit http://feedback.sqlpass.org/.  You can make suggestions, vote on the ideas already posted and add your own comments.  Help PASS make next year’s Summit “The Best Summit Ever!”

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied.  Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude.  Yes, the speakers are the main attraction; after all, they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • Join me to register for the Summit

    - by Bill Graziano
    This year the Summit registration opens at 6PM on Sunday at the Seattle convention center.  Last year we had a dozen people hanging out, watching the twitter feed on the big monitor and catching up.  All we really needed was a bar and we’d have our own little party going. So this year I’m adding a bar.  I’ve arranged for a cash bar and some stand up tables.  I’m buying the first round for the first 40 or so people that come by.  Come by, register and say Hi.  I’d especially like to encourage first-time attendees to stop by.  This is a low key way to meet some people that will be at the conference.

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
    I regularly visit client sites and review their SQL Server configurations.  I come across all kinds of strange settings.  I’ve been thinking about a way to aggregate people’s configurations and see what’s common and what’s unique.  I used to do that with polls on SQLTeam.com.  I think we can find out more interesting things if we look at combinations of settings in relation to size and volume. I’ve been working on an application for another project that is similar.  It will be fairly easy to use that code for this.  I can have something up and running in a few days – if people are interested in it.  I admit that I often come up with ideas that just don’t make sense.  This may be one of them.  One of your biggest concerns has to be how secure your data is.  My solution is not to store anything identifying.  The instance name and database names can both be “anonymized” and I don’t store the machine name or IP address or anything to do with logins. Some of the questions I’m curious about are:

    At what database size does the Enterprise Edition become prevalent?
    Given the total size of the databases, how much RAM is common?
    How many people have multiple data files?  At what size does that become prevalent?
    How common is database mirroring?  Replication?  Log shipping?
    How common is full recovery mode?  At what data size does it become prevalent?

    I think those are all questions that are easy to answer -- with the right data.  The big question is whether or not people will share their SQL Server configurations.  I understand that organizations in regulated or high-security environments can’t participate.  But I think that leaves many, many people that can.  Are you willing to share your configuration and learn about others?  I have a simple sign-up form here.  It’s actually a mailing list signup that also captures your edition, number of servers and largest database.  The list will only be used for this project.  Is your SQL Server configured correctly?  Do you wonder what the next step is as your data grows?  Take a second and sign up.
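
    As a sketch of the kind of anonymized collection this could do (entirely my own illustration; the query, hashing scheme and connection string are hypothetical, not Bill's actual implementation):

    using System;
    using System.Data.SqlClient;
    using System.Security.Cryptography;
    using System.Text;

    class ConfigSurvey
    {
        // Hash a name so nothing identifying is stored.
        static string Anonymize(string name)
        {
            using (var sha = SHA256.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(name));
                return BitConverter.ToString(hash).Replace("-", "").Substring(0, 12);
            }
        }

        static void Main()
        {
            // Recovery model and data file count per database.
            const string query = @"
    SELECT d.name,
           d.recovery_model_desc,
           (SELECT COUNT(*) FROM sys.master_files f
             WHERE f.database_id = d.database_id AND f.type = 0) AS DataFileCount
    FROM sys.databases d;";

            using (var conn = new SqlConnection(@"Server=.;Integrated Security=true"))
            using (var cmd = new SqlCommand(query, conn))
            {
                conn.Open();
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine("{0}: recovery={1}, data files={2}",
                            Anonymize((string)rdr["name"]),
                            rdr["recovery_model_desc"], rdr["DataFileCount"]);
            }
        }
    }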

    Read the article
