Search Results

Search found 43935 results on 1758 pages for 'development process'.


  • How to choose the right PHP Framework for web development?

    - by liliwang
    Hi! I'm a PHP pro, but I haven't used any PHP framework until now, so I have no clue how to choose one. Do you have some tips to help me pick the right PHP framework? I want a stable and secure PHP framework for projects of about 400 hours of development time. It should be possible to run the framework on shared-hosting webservers. I don't need AJAX support (I'm using extJS). It would be nice if the framework supported Rapid Application Development and object-relational mapping, and some of the standard functions (authentication, form validation) would be welcome too. Caching would be useful, but isn't required. My requirements for a PHP framework:
    - Runs on shared-hosting webservers
    - Suited to projects between 200 and 400 hours of work
    - Supports the Rapid Application Development model
    - Supports object-relational mapping
    - If possible: caching
    - Ready-made modules (e.g. authentication, form validation, ...)
    - Easy to learn
    Which PHP framework is the right one for me?

    Read the article

  • How to find out which process is hogging the Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to ping queries from other servers, and when I tried to log in using ssh, it took about 10 seconds. I was able to resolve the problem by some guesswork: I killed one process which I thought was the culprit, and that resolved the problem. I would still like to know the proper approach to detecting the culprit in this kind of "slow server" situation, and the proper way of resolving such slowness issues. These were the conditions when the server was slow:

    # vmstat 3 3
    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
     r  b   swpd    free    buff   cache   si   so   bi   bo    in    cs us sy id wa st
     1  1    176 6730868  285052 4899676    0    0    3    4     0     0  1  1 97  1  0
     0  0    176 6751576  285064 4899704    0    0    0  115 15307 37171  1  1 96  3  0
     0  0    176 6751948  285068 4899700    0    0    0   23 14813 39559  1  1 98  1  0

    # top
    top - 16:38:18 up 150 days, 19:36, 64 users, load average: 1.68, 1.46, 1.44
    Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie
    Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st
    Mem: 16620824k total, 9867124k used, 6753700k free, 287424k buffers
    Swap: 8193140k total, 176k used, 8192964k free, 4898996k cached

      PID USER PR  NI VIRT RES  SHR S %CPU %MEM    TIME+ COMMAND
    26258 khk  34  19 130m 47m 7088 S 11.2  0.3 385:32.42 edm

    Read the article

  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away though: I am NOT talking about creating and replacing PLSQL objects directly in a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third party source control system, right?

    Last week I was in Tampa, FL presenting at the monthly Suncoast Oracle User’s Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice. I was in the middle of talking about how it’s better to do your PLSQL work in the Procedure Editor when Chet pipes up: “Don’t do it that way, that’s wrong. Just press play to edit the PLSQL directly in the database.” Or something along those lines. I didn’t get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PLSQL. After a few back-and-forths I got to what Chet’s main objection was, and again I’m going to paraphrase: “You should develop offline in your SQL worksheet. Don’t do anything in the database until it’s done.” I didn’t understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc. in these offline scripts? No, please give Chet more credit than that.

    What is the ideal Oracle Development Environment? If I were back in the ‘real world’ of database development, I would do all of my development outside of the ‘dev’ instance. My development process looks a little something like this:
    - Do I have a program that already does something like this – copy and paste
    - Has some smart person already written something like this – copy and paste
    - Start typing in the white-screen-of-panic and bungle along until I get something that half-works
    - Tweak, debug, test until I have fooled my subconscious into thinking that it’s ‘good’
    As you might understand, I don’t want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn’t have a job anymore (don’t remind me that I already worked myself out of development.) So here’s what I like to do:

    Run a Local Instance of Oracle on my Machine and Develop My Code Privately. I take a copy of development – that’s what source control is for after all – and run it where no one else can see it. I now get to be my own DBA. If I need a trace – no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that’s all on me. Now when I get my code ‘up to snuff,’ then I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program, to a mostly complete program. Is this right? If not, it doesn’t seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he’s of the same opinion.

    So what’s so wrong with coding directly into a development instance? I think ‘wrong’ is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind – and I’m sure Chet could add many more as my memory fails me at the moment.
    But here goes:
    - The development instance isn’t properly backed up – you would hate to lose that work
    - Development is wiped once a week and copied over from Prod – don’t laugh
    - Someone clobbers your code
    - You accidentally on purpose clobber someone else’s code
    - The more developers you have in a single fish pond, the greater the chance something ‘bad’ will happen

    This Isn’t One of Those Posts Where I Tell You What You Should Be Doing. I realize many shops won’t be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway – with or without your tacit approval. SQL Developer can do local file tracking, but you should be using Source Control too! I will say that I think it’s imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants. I’d love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of this strip by one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover…and so on. We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed. Every day, there are many steps to perform to maintain the database infrastructure, including: checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports. And yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day.

    Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during “production testing” that certain processes need to be changed because the written requirements barely resembled the actual customer requirements. Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code. Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill.

    So what to do? Well, if it is at all possible, simplify the problem: either go into the code and refactor the complex code into something simple, or take all of the processes and break them down into small, independent, easily-tested steps. The former approach usually requires agreement on changing underlying structures, which requires countless mind-numbing meetings; the latter can generally be done to any complex process without the same frustration or anger. Though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will be definitely less unpleasant.)

    We all know the principle behind simplifying a sequence of processes because we learned it in math classes in our early years of school. In my 4 years (ok, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and as an occasional indentured DBA: “show your work”. This process of showing your work was my first lesson in simplification. Each step in the process was, in fact, far simpler than the entire process.
    When you were working an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly. So often I would make an error in the first few lines of a problem, which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised. I passed, yet missed every problem on the test. But why? While I got the fact that 1+1=2 wrong in every problem, the teacher could see that I was using the right process.

    In a computer process, it is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (if you did your job of testing the other steps better than the one you had to repair) that the rest of the process works. If, instead, you have 100 steps and store the state of the process exactly once, the resulting testable chunk of code will be far more complex, and finding the error will require checking all 100 steps as one; it would usually be easier to find a specific needle in a stack of similarly shaped needles.

    The goal is to strive for simplicity either in the solution, or at least by simplifying every process down to as many independent, testable, simple tasks as possible. For the tasks that really can’t be done completely independently, at a minimum take those tasks and break them down into simpler steps that can be tested independently. Like working out division problems longhand, have each step of the larger problem verified and tested.
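    The idea of each step storing its intermediate value and reporting status back is easy to sketch in code. The following is a minimal illustration of the point (my example, not the author's; C# and the step names and values are purely illustrative): every step persists its output and announces completion, so a failure is localized to one small, independently testable unit.

    using System;
    using System.Collections.Generic;

    class Pipeline
    {
        // Each step is small enough to unit-test on its own.
        static readonly List<(string Name, Func<int, int> Work)> Steps =
            new List<(string, Func<int, int>)>
        {
            ("extract",   x => x + 10),  // stand-ins for real process steps
            ("transform", x => x * 2),
            ("load",      x => x - 5),
        };

        static void Main()
        {
            int state = 1;
            foreach (var (name, work) in Steps)
            {
                state = work(state);
                // "Show your work": record the intermediate value so each
                // step can be verified independently after the fact.
                Console.WriteLine($"step {name} complete, intermediate = {state}");
            }
        }
    }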

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    Originally published in CoDe Magazine Editorial.

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express
    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or classic ASP), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0
    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new – it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests (a minimal sketch of this drop-in usage follows below). Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
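    To make that drop-in model concrete, here is a minimal sketch (mine, not from the article) of using SQL Compact from plain ADO.NET, assuming the System.Data.SqlServerCe provider assembly sits in the application's bin folder; the database file and table names are made up:

    using System;
    using System.Data.SqlServerCe;  // SQL Server Compact ADO.NET provider

    class SqlCeDemo
    {
        static void Main()
        {
            // A file-based connection string - no server installation involved.
            using (var conn = new SqlCeConnection("Data Source=FirstApp.sdf"))
            {
                conn.Open();
                using (var cmd = new SqlCeCommand(
                           "SELECT FirstName, LastName FROM Customers", conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0} {1}",
                            reader["FirstName"], reader["LastName"]);
                }
            }
        }
    }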
    Having a local data store in desktop applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine
    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like, along with some comment annotations:

    <!DOCTYPE html>
    <html>
        <head>
            <title></title>
        </head>
        <body>
        <h1>Razor Test</h1>

        <!-- simple expressions -->
        @DateTime.Now
        <hr />

        <!-- method expressions -->
        @DateTime.Now.ToString("T")

        <!-- code blocks -->
        @{
            List<string> names = new List<string>();
            names.Add("Rick");
            names.Add("Markus");
            names.Add("Claudio");
            names.Add("Kevin");
        }

        <!-- structured block statements -->
        <ul>
        @foreach(string name in names){
            <li>@name</li>
        }
        </ul>

        <!-- Conditional code -->
        @if(true) {
            <!-- Literal Text embedding in code -->
            <text>
            true
            </text>;
        }
        else
        {
            <!-- Literal Text embedding in code -->
            <text>
            false
            </text>;
        }
        </body>
    </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation, as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>), and it’s easier to understand aesthetically what’s happening in the markup code. Razor is also a better fit for Microsoft’s vision of ASP.NET MVC: it’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach, where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor became the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment
    To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0: all code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access
    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

    @{
        var db = Database.OpenFile("FirstApp.sdf");
        string sql = "select * from customers where Id > @0";
    }
    <ul>
    @foreach(var row in db.Query(sql, 1)){
        <li>@row.FirstName @row.LastName</li>
    }
    </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters passed as simple values. The result is returned as a collection of rows, where each row object exposes dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note that these queries use string-based SQL rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn’t included in .NET from the beginning, instead of making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax – simple, static data-object methods that perform simple data tasks with one line of code – is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files – I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies, as required.

    The Good, the Bad, the Obnoxious – It’s Still .NET
    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a set of HTML helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy; there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who Needs WebMatrix? Uhm… Good Question
    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks – and other hardcore PHP developers – back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they’re not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers are trying to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn’t made any easier by WebMatrix; it’s just been shifted a tad bit further along in your development endeavor, to the point where you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting, and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these, or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished.
    And that in the end may be WebMatrix’s main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing – especially debugging and IntelliSense, but also general editor support. It’s not clear whether this is because the product is still in an early alpha release, or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had and complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented Web Forms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. If you’re an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later, there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step – or several steps – back, take another look, and realize just how far we’ve come.
    WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7.

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), Start/Stop (STSP). Note the Field values/codes defined for each event.

    CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

    So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event? System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.

    There are a few programs which have routines to first validate the completion of disconnection field activities that were raised as a result of customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.
    The Post Cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter."

    Notice the quoted text: this algorithm implicitly checks for Field Activities with completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude: if you introduce new customer events that extend or simulate base customer events (the ones that are included in the base product), ensure that there is no other impact, either direct or indirect, on other business functions that the application has to offer.

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), Start/Stop (STSP). Note the Field values/codes defined for each event.

    CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

    So what exactly was happening? Now we come to the actual question: what are the limitations of having a user-defined customer event? System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.

    There are a few programs which have routines to first validate the completion of disconnection field activities that were raised as a result of customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.
    The Post Cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter."

    Notice the quoted text: this algorithm implicitly checks for Field Activities with completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude: if you introduce new customer events, be aware that you should not extend or simulate base customer events (the ones that are included in the base product), as they are further used to provide/validate additional business functions.

    Read the article

  • SpringBatch Jaxb2Marshaller: different name of class and xml attribute

    - by user588961
    I am trying to read an XML file as input for Spring Batch.

    The Java class:

    package de.example.schema.processes.standardprocess;

    @XmlAccessorType(XmlAccessType.FIELD)
    @XmlType(name = "Process", namespace = "http://schema.example.de/processes/process", propOrder = {
        "input"
    })
    public class Process implements Serializable {

        @XmlElement(namespace = "http://schema.example.de/processes/process")
        protected ProcessInput input;

        public ProcessInput getInput() {
            return input;
        }

        public void setInput(ProcessInput value) {
            this.input = value;
        }
    }

    The Spring Batch dev-job.xml:

    <bean id="exampleReader" class="org.springframework.batch.item.xml.StaxEventItemReader" scope="step">
        <property name="fragmentRootElementName" value="input" />
        <property name="resource" value="file:#{jobParameters['dateiname']}" />
        <property name="unmarshaller" ref="jaxb2Marshaller" />
    </bean>

    <bean id="jaxb2Marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>de.example.schema.processes.standardprocess.Process</value>
                <value>de.example.schema.processes.standardprocess.ProcessInput</value>
                ...
            </list>
        </property>
    </bean>

    The input file:

    <?xml version="1.0" encoding="UTF-8"?>
    <process:process xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:process="http://schema.example.de/processes/process">
        <process:input>
            ...
        </process:input>
    </process:process>

    It fires the following exception:

    javax.xml.bind.UnmarshalException: unexpected element (uri:"http://schema.example.de/processes/process", local:"input"). Expected elements are <{http://schema.example.de/processes/process}processInput>
        at org.springframework.oxm.jaxb.JaxbUtils.convertJaxbException(JaxbUtils.java:92)
        at org.springframework.oxm.jaxb.AbstractJaxbMarshaller.convertJaxbException(AbstractJaxbMarshaller.java:143)
        at org.springframework.oxm.jaxb.Jaxb2Marshaller.unmarshal(Jaxb2Marshaller.java:428)

    If I change <process:input> to <process:processInput> in the XML, it works fine. Unfortunately I can change neither the XML nor the Java class. Is there a possibility to make Jaxb2Marshaller map the element 'input' to the class 'ProcessInput'?

    Read the article

  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
    Have you used SugarCRM for custom development successfully? If so, did you work programmatically or through the Module Builder? Were you successful? If not, why not?

    I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server, and other errors that I can't recall now. Two years later, I'm picking it up for a project once again, and I'm feeling like I should have developed the whole thing from scratch myself. Some examples:
    - I couldn't install it on the server (again). I had to install it locally, then copy the files and database over to the server and manually edit the config file.
    - I constantly get deployment errors from the Module Builder. One reason is that SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting that record and it keeps coming back corrupt. I get other deployment errors that I have not figured out, and then I have to roll back all files and the database to try again.
    - I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore, producing PHP warnings all over the place.
    - Quick create for custom modules does not appear; a hack is needed.
    - Its whole cache directory is a joke: permanent data/files are stored there.
    - The Module Builder interface loses required fields.
    - Edit the wrong thing and the Module Builder won't deploy again; then pray that Quick Repair and/or Rebuild Relationships do the trick.

    My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote: "I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!"

    I figured that perhaps some of you have had similar experiences, and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were - or what your current situations are - to make up my own mind, since the project I'm working on is long term and I'm feeling SugarCRM will be more an obstacle than an aid.

    Update: After further failed attempts to continue using this software, I kept stumbling upon dead ends when using the module editor; I could only recover from these errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we had been using SugarCRM with only its out-of-the-box modules we would have stuck with it.

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, then check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started:

    KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed, you will have the option to select the following templates: KinectDepth, KinectSkeleton, KinectVideo. Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. (Figure: Kinect templates after installing the template pack.) The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for MainWindow.xaml in the “Video” template:

    <Window x:Class="KinectVideoApplication1.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="480" Width="640">
        <Grid>
            <Image Name="videoImage"/>
        </Grid>
    </Window>

    and MainWindow.xaml.cs:

    using System;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using Microsoft.Research.Kinect.Nui;

    namespace KinectVideoApplication1
    {
        public partial class MainWindow : Window
        {
            //Instantiate the Kinect runtime. Required to initialize the device.
            //IMPORTANT NOTE: You can pass the device ID here, in case more than
            //one Kinect device is connected.
            Runtime runtime = new Runtime();

            public MainWindow()
            {
                InitializeComponent();
                //Runtime initialization is handled when the window is opened. When
                //the window is closed, the runtime MUST be uninitialized.
                this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
                this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);
                //Handle the content obtained from the video camera, once received.
                runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
            }

            void MainWindow_Unloaded(object sender, RoutedEventArgs e)
            {
                runtime.Uninitialize();
            }

            void MainWindow_Loaded(object sender, RoutedEventArgs e)
            {
                //Since only a color video stream is needed, RuntimeOptions.UseColor is used.
                runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);
                //You can adjust the resolution here.
                runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
            }

            void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
            {
                PlanarImage image = e.ImageFrame.Image;
                BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96,
                    PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
                videoImage.Source = source;
            }
        }
    }

    You will find this template pack very handy, especially for those new to Kinect development.

    Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several extension methods that can help you save an image, for example (the original snippet was lost; see the sketch after this post). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.

    Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking at how to structure their first Kinect application.
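    As a stand-in for the lost save-an-image snippet above, here is a minimal sketch of the kind of one-liner the Coding4Fun Kinect Toolkit enables. The ToBitmapSource()/Save() extension names and the ImageFormat enum are as I remember them from the beta toolkit, so treat this as an assumption rather than the toolkit's exact API:

    using Coding4Fun.Kinect.Wpf;           // toolkit extension methods (assumed)
    using Microsoft.Research.Kinect.Nui;

    public partial class MainWindow
    {
        // Inside a VideoFrameReady handler: convert the frame to a
        // BitmapSource and save it to disk in one line. ImageFormat here
        // is the enum the toolkit's Save overload expects.
        void runtime_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
        {
            e.ImageFrame.ToBitmapSource().Save("snapshot.jpg", ImageFormat.Jpeg);
        }
    }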
    Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news. Subscribe to my feed.

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical, and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending.

    Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way.

    As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface.

    The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit them to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system.

    This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now, I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or if an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary.

    Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
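    For readers who want to picture the test being described, here is a minimal sketch of such a reflective check (my illustration, not the poster's actual code), assuming NUnit and a hypothetical SecurityLib.IAuthenticatedUser interface; a fuller version would scan every shipped assembly and also assert on the modifiers of the type's members:

    using System.Linq;
    using System.Reflection;
    using NUnit.Framework;

    [TestFixture]
    public class SecurityImplementationAudit
    {
        // The only implementation we expect to exist, by full type name.
        private static readonly string[] ExpectedImplementations =
            { "SecurityLib.Internal.AuthenticatedUser" };

        [Test]
        public void OnlyApprovedTypesImplementAuthenticatedUser()
        {
            Assembly securityLib = typeof(SecurityLib.IAuthenticatedUser).Assembly;

            var actual = securityLib.GetTypes()
                .Where(t => typeof(SecurityLib.IAuthenticatedUser).IsAssignableFrom(t)
                            && !t.IsInterface && !t.IsAbstract)
                .Select(t => t.FullName)
                .OrderBy(n => n)
                .ToArray();

            // Fails the CI build if an implementation is added, removed or moved.
            CollectionAssert.AreEqual(ExpectedImplementations, actual);
        }
    }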

    Read the article

  • A Look into How Database Replay Capture Works

    - by Liu Maclean
    Database Replay was introduced in 11g: it lets you capture a real production workload and later replay it against a test system, so that changes can be evaluated under a realistic load. Some notes on Workload Capture:

    - Capture files can grow quickly; as a rough sizing figure, capturing about ten minutes of a busy OLTP workload can produce on the order of 1GB of capture files.
    - It is recommended to restart the database with STARTUP RESTRICT before starting a capture, so that it begins from a clean, consistent point; once the capture starts, the restricted mode is lifted automatically.
    - Capture filters can be used to include or exclude specific sessions.
    - The capture metadata records the start SCN and time, so that replay can begin from a consistent point.
    - Logins as SYSDBA or SYSOPER using OS authentication are not captured.
    - The overhead is modest: on a TPC-C workload the capture overhead is about 4.5%, and each captured session uses a 64KB buffer.
    - In a RAC environment the workload capture files must be placed on storage shared by all instances before START_CAPTURE is issued.

    Because each session buffers up to 64KB of capture data, the workload capture files are written by the server processes themselves. A server process records its LOGON/LOGOFF and SQL parse/execution calls into the "WCR Capture PG" and "WCR Capture PGA" subheaps of its PGA, and flushes them to its own WCR capture file; while flushing, it waits on the 'WCR: capture file IO write' event.

    The WCR-related wait events:

        SQL> select * from v$version;

        BANNER
        --------------------------------------------------------------------------------
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        PL/SQL Release 11.2.0.3.0 - Production
        CORE    11.2.0.3.0      Production
        TNS for Linux: Version 11.2.0.3.0 - Production
        NLSRTL Version 11.2.0.3.0 - Production

        SQL> select name from v$event_name where name like '%WCR%';

        NAME
        ----------------------------------------------------------------
        WCR: replay client notify
        WCR: replay clock
        WCR: replay lock order
        WCR: replay paused
        WCR: RAC message context busy
        WCR: capture file IO write
        WCR: Sync context busy
        latch: WCR: sync
        latch: WCR: processes HT

    11g also introduces several WCR-related latches:

        SQL> select name,gets from v$latch where name like '%WCR%';

        NAME                                 GETS
        ------------------------------ ----------
        WCR: kecu cas mem                       3
        WCR: kecr File Count                   37
        WCR: MMON Create dir                    1
        WCR: ticker cache                       0
        WCR: sync                             495
        WCR: processes HT                       0
        WCR: MTS VC queue                       0

        7 rows selected.

    Let's walk through how Database Replay Capture actually works.

    1. Start a capture with dbms_workload_capture.start_capture:

        CREATE OR REPLACE DIRECTORY dbcapture AS '/home/oracle/dbcapture';
        execute dbms_workload_capture.start_capture('CAPTURE','DBCAPTURE',default_action=>'INCLUDE');

        SQL> select id,name,status,start_time,end_time,connects,user_calls,dir_path
          2    from dba_workload_captures
          3   where id = (select max(id) from dba_workload_captures);

                ID NAME      STATUS       START_TIM END_TIME    CONNECTS USER_CALLS DIR_PATH
        ---------- --------- ------------ --------- --------- ---------- ---------- ----------------------
                 1 CAPTURE   IN PROGRESS  08-DEC-12                   11        167 /home/oracle/dbcapture

    2. Examine the capture files:

        [oracle@mlab2 dbcapture]$ ls -lR
        .:
        total 8
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 cap
        drwxr-xr-x 3 oracle oinstall 4096 Dec  8 07:24 capfiles
        -rw-r--r-- 1 oracle oinstall    0 Dec  8 07:24 wcr_cap_00001.start

        ./cap:
        total 4
        -rw-r--r-- 1 oracle oinstall 91 Dec  8 07:24 wcr_scapture.wmd

        ./capfiles:
        total 4
        drwxr-xr-x 12 oracle oinstall 4096 Dec  8 07:24 inst1

        ./capfiles/inst1:
        total 40
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 08:31 aa
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 ab
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 ac
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 ad
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 ae
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 af
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 ag
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 ah
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 ai
        drwxr-xr-x 2 oracle oinstall 4096 Dec  8 07:24 aj

        ./capfiles/inst1/aa:
        total 316
        -rw-r--r-- 1 oracle oinstall   1762 Dec  8 07:25 wcr_c6cdah0000001.rec
        -rw-r--r-- 1 oracle oinstall  16478 Dec  8 07:28 wcr_c6cf1h0000002.rec
        -rw-r--r-- 1 oracle oinstall   1772 Dec  8 07:29 wcr_c6cjdh0000004.rec
        -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:29 wcr_c6cnah0000005.rec
        -rw-r--r-- 1 oracle oinstall   1821 Dec  8 07:41 wcr_c6cpfh0000007.rec
        -rw-r--r-- 1 oracle oinstall   1815 Dec  8 07:33 wcr_c6cq6h000000a.rec
        -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:34 wcr_c6cxmh000000h.rec
        -rw-r--r-- 1 oracle oinstall   1427 Dec  8 07:41 wcr_c6cxvh000000j.rec
        -rw-r--r-- 1 oracle oinstall   1425 Dec  8 07:41 wcr_c6czph000000k.rec
        -rw-r--r-- 1 oracle oinstall   2398 Dec  8 07:49 wcr_c6dqfh000000q.rec
        -rw-r--r-- 1 oracle oinstall 259321 Dec  8 08:35 wcr_c6du7h000000r.rec
        -rw-r--r-- 1 oracle oinstall      0 Dec  8 07:55 wcr_c6f6yh000000t.rec
        -rw-r--r-- 1 oracle oinstall      0 Dec  8 08:28 wcr_c6h3qh0000013.rec

        ./capfiles/inst1/ab through ./capfiles/inst1/aj:
        total 0

        [oracle@mlab2 dbcapture]$ cd ./capfiles/inst1/aa
        [oracle@mlab2 aa]$ ls -l | wc -l
        14

    So far the capture directory holds 13 .rec files (wc reports 14 lines, including the "total" line).

    3. Log on again to create a new server process:

        [oracle@mlab2 ~]$ sqlplus / as sysdba

        SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 8 08:37:40 2012
        Copyright (c) 1982, 2011, Oracle.  All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        With the Partitioning, Automatic Storage Management, OLAP, Data Mining
        and Real Application Testing options

    A new WCR file appears:

        [oracle@mlab2 aa]$ ls -ltr
        total 316
        -rw-r--r-- 1 oracle oinstall   1762 Dec  8 07:25 wcr_c6cdah0000001.rec
        -rw-r--r-- 1 oracle oinstall  16478 Dec  8 07:28 wcr_c6cf1h0000002.rec
        -rw-r--r-- 1 oracle oinstall   1772 Dec  8 07:29 wcr_c6cjdh0000004.rec
        -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:29 wcr_c6cnah0000005.rec
        -rw-r--r-- 1 oracle oinstall   1815 Dec  8 07:33 wcr_c6cq6h000000a.rec
        -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:34 wcr_c6cxmh000000h.rec
        -rw-r--r-- 1 oracle oinstall   1425 Dec  8 07:41 wcr_c6czph000000k.rec
        -rw-r--r-- 1 oracle oinstall   1427 Dec  8 07:41 wcr_c6cxvh000000j.rec
        -rw-r--r-- 1 oracle oinstall   1821 Dec  8 07:41 wcr_c6cpfh0000007.rec
        -rw-r--r-- 1 oracle oinstall   2398 Dec  8 07:49 wcr_c6dqfh000000q.rec
        -rw-r--r-- 1 oracle oinstall      0 Dec  8 07:55 wcr_c6f6yh000000t.rec
        -rw-r--r-- 1 oracle oinstall      0 Dec  8 08:28 wcr_c6h3qh0000013.rec
        -rw-r--r-- 1 oracle oinstall 259321 Dec  8 08:35 wcr_c6du7h000000r.rec
        -rw-r--r-- 1 oracle oinstall      0 Dec  8 08:37 wcr_c6hp4h0000018.rec

    The newly created file is wcr_c6hp4h0000018.rec. Find the OS PID of our server process:

        SQL> select spid from v$process
          2   where addr = (select paddr from v$session
          3                  where sid = (select distinct sid from v$mystat));

        SPID
        ------------------------
        14293

    Our server process has OS PID 14293; listing its open file descriptors shows it already holds the new .rec file open:

        [oracle@mlab2 ~]$ ls -l /proc/14293/fd
        total 0
        lr-x------ 1 oracle oinstall 64 Dec 8 08:39 0 -> /dev/null
        l-wx------ 1 oracle oinstall 64 Dec 8 08:39 1 -> /dev/null
        lrwx------ 1 oracle oinstall 64 Dec 8 08:39 10 -> /u01/app/oracle/product/11201/db_1/rdbms/audit/CRMV_ora_14293_1.aud
        l-wx------ 1 oracle oinstall 64 Dec 8 08:39 11 -> /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc
        l-wx------ 1 oracle oinstall 64 Dec 8 08:39 12 -> pipe:[34585895]
        l-wx------ 1 oracle oinstall 64 Dec 8 08:39 13 -> /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trm
        l-wx------ 1 oracle oinstall 64 Dec 8 08:39 2 -> /dev/null
        lr-x------ 1 oracle oinstall 64 Dec 8 08:39 3 -> /dev/null
        lr-x------ 1 oracle oinstall 64 Dec 8 08:39 4 -> /dev/null
        lr-x------ 1 oracle oinstall 64 Dec 8 08:39 5 -> /u01/app/oracle/product/11201/db_1/rdbms/mesg/oraus.msb
        lr-x------ 1 oracle oinstall 64 Dec 8 08:39 6 -> /proc/14293/fd
        lr-x------ 1 oracle oinstall 64 Dec 8 08:39 7 -> /dev/zero
        lrwx------ 1 oracle oinstall 64 Dec 8 08:39 8 -> /home/oracle/dbcapture/capfiles/inst1/aa/wcr_c6hp4h0000018.rec
        lr-x------ 1 oracle oinstall 64 Dec 8 08:39 9 -> pipe:[34585894]

    lsof confirms it:

        [root@mlab2 ~]# lsof | grep wcr_c6hp4h0000018.rec
        oracle 14293 oracle 8u REG 8,1 0 17629644 /home/oracle/dbcapture/capfiles/inst1/aa/wcr_c6hp4h0000018.rec

    So each server process writes its own WCR .rec file, and the file is created at LOGON.

    4. Issue some SQL:

        SQL> select 1 from dual;

                 1
        ----------
                 1

        SQL> /

                 1
        ----------
                 1

    Running strings against the .rec file shows what gets recorded: a file header identifying the shadow process, the session's NLS settings, the connect string, and the SQL text itself. Note that records are buffered in the PGA and written in batches, so the SQL text may not appear in the file immediately:

        [oracle@mlab2 aa]$ strings wcr_c6hp4h0000018.rec
        11.2.0.3.0
        *File header info. (Shadow process='14293')
        D0576B5D710A34F4E043B201A8C0ECFE
        SYS;
        NLS_LANGUAGE? AMERICAN>
        NLS_TERRITORY? AMERICA>
        NLS_CURRENCY?
        NLS_ISO_CURRENCY? AMERICA>
        NLS_NUMERIC_CHARACTERS?
        NLS_CALENDAR? GREGORIAN>
        NLS_DATE_FORMAT? DD-MON-RR>
        NLS_DATE_LANGUAGE? AMERICAN>
        NLS_CHARACTERSET? AL32UTF8>
        NLS_SORT? BINARY>
        NLS_TIME_FORMAT? HH.MI.SSXFF AM>
        NLS_TIMESTAMP_FORMAT? DD-MON-RR HH.MI.SSXFF AM>
        NLS_TIME_TZ_FORMAT? HH.MI.SSXFF AM TZR>
        NLS_TIMESTAMP_TZ_FORMAT? DD-MON-RR HH.MI.SSXFF AM TZR>
        NLS_DUAL_CURRENCY?
        NLS_SPECIAL_CHARS?
        NLS_NCHAR_CHARACTERSET? UTF8>
        NLS_COMP? BINARY>
        NLS_LENGTH_SEMANTICS? BYTE>
        NLS_NCHAR_CONV_EXCP? FALSE
        (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/oracle/product/11201/db_1/bin/oracle)(ARGV0=oracleCRMV)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(DETACH=NO))(CONNECT_DATA=(CID=(PROGRAM=sqlplus)(HOST=mlab2.oracle.com)(USER=oracle))))
        ,[email protected] (TNS V1-V3)U
        tselect spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat))
        ` _
        select 1 from dual
        select 1 from dual

    After running some more work (a CREATE TABLE and a PL/SQL loop), strings shows the same pattern; the remaining buffered records are flushed when the server process logs off:

        [oracle@mlab2 aa]$ strings wcr_c6hp4h0000018.rec
        9`9_^B
        create table vva(t1 int)
        `:_i
        :`:_iB
        `;_^
        ;`;_^B
        create table vva(t1 int)
        `_i
        >`>_iB
        FusC
        `?_^
        ?`?_^B
        FvWC
        _begin for i in 1..50000 loop execute immediate 'select 1 from dual where 2='||i; end loop; end;
        C`E_ B
        k^2C

    So the server process records its parse and execution calls into the WCR buffer in its PGA and writes them out in batches rather than immediately; this batching is what keeps the capture overhead low.

    5. Dump the process state:

        SQL> oradebug setmypid
        Statement processed.
        SQL> oradebug dump processstate 10;
        Statement processed.
        SQL> oradebug tracefile_name
        /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc

    The process state dump shows the server process waiting on 'WCR: capture file IO write' while it flushes WCR data:

        3: waited for 'SQL*Net message to client'
           driver id=0x62657100, #bytes=0x1, =0x0
           wait_id=139 seq_num=140 snap_id=1
           wait times: snap=0.000007 sec, exc=0.000007 sec, total=0.000007 sec
           wait times: max=infinite
           wait counts: calls=0 os=0
           occurred after 0.934091 sec of elapsed time
        4: waited for 'latch: shared pool'
           address=0x60106b20, number=0x133, tries=0x0
           wait_id=138 seq_num=139 snap_id=1
           wait times: snap=0.000066 sec, exc=0.000066 sec, total=0.000066 sec
           wait times: max=infinite
           wait counts: calls=0 os=0
           occurred after 1.180690 sec of elapsed time
        5: waited for 'WCR: capture file IO write'
           =0x0, =0x0, =0x0
           wait_id=137 seq_num=138 snap_id=1
           wait times: snap=0.000189 sec, exc=0.000189 sec, total=0.000189 sec
           wait times: max=infinite
           wait counts: calls=0 os=0
           occurred after 3.122783 sec of elapsed time
        6: waited for 'WCR: capture file IO write'
           =0x0, =0x0, =0x0
           wait_id=136 seq_num=137 snap_id=1
           wait times: snap=0.000191 sec, exc=0.000191 sec, total=0.000191 sec
           wait times: max=infinite
           wait counts: calls=0 os=0
           occurred after 3.053132 sec of elapsed time
        7: waited for 'WCR: capture file IO write'

    6. Dump the PGA:

        SQL> oradebug dump heapdump 536870917;
        Statement processed.

        $ grep WCR /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc
        Chunk 7fb1b606bfc0 sz= 65600 freeable "WCR Capture PG " ds=0x7fb1b6115f90
        Chunk 7fb1b6111e18 sz=  4224 freeable "WCR Capture PG " ds=0x7fb1b6115f90
        Chunk 7fb1b6112e98 sz=  4184 freeable "WCR Capture PG " ds=0x7fb1b6115f90
        Chunk 7fb1b6113ef0 sz=  4224 freeable "WCR Capture PG " ds=0x7fb1b6115f90
        Chunk 7fb1b6114f70 sz=  4104 recreate "WCR Capture PG " latch=(nil)
        Chunk 7fb1b6115f78 sz=   160 freeable "WCR Capture PGA"
        Chunk 7fb1b6116018 sz=  3248 freeable "WCR Capture PGA"
        Subheap ds=0x7fb1b6115f90 heap name= WCR Capture PG size= 82336
        HEAP DUMP heap name="WCR Capture PG" desc=0x7fb1b6115f90
        FIVE LARGEST SUB HEAPS for heap name="WCR Capture PG" desc=0x7fb1b6115f9

    The PGA contains freeable and recreatable chunks in the "WCR Capture PG" and "WCR Capture PGA" subheaps. The 65600-byte chunk is the 64KB capture buffer mentioned earlier, so each captured session really does carry a 64KB WCR buffer in its PGA.

    7. The hidden parameters. Two underscore parameters control WCR capture:

        SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
          2    FROM SYS.x$ksppi x, SYS.x$ksppcv y
          3   WHERE x.inst_id = USERENV ('Instance')
          4     AND y.inst_id = USERENV ('Instance')
          5     AND x.indx = y.indx
          6     AND x.ksppinm in ('_capture_buffer_size','_wcr_control');

        NAME                 VALUE                DESCRIB
        -------------------- -------------------- ------------------------------------------------------------
        _wcr_control         0                    Oracle internal test WCR parameter used ONLY for testing!
        _capture_buffer_size 65536                To set the size of the PGA I/O recording buffers

    _capture_buffer_size sets the size of the per-process WCR recording buffer in the PGA; the default is 64KB. _wcr_control is an internal Oracle parameter used only for testing.

    To summarize:

    1. During a WCR workload capture, the capture files are written by the server processes themselves, not by a background process.
    2. Each server process writes its own WCR capture file.
    3. A server process records its LOGON, LOGOFF and SQL calls into its WCR buffer.
    4. The server process does not write in immediate mode: records accumulate in the PGA "WCR Capture" subheap and are flushed in batches (or on a timeout).
    5. Because the capture piggybacks on the existing parse and execution calls and buffers in the PGA, the overhead is low, about 4.5% on a TPC-C workload.
    6. _capture_buffer_size sets the size of the PGA WCR buffer; the default is 64KB.
    7. The WCR capture files are binary, but as shown above much of their content (header, NLS settings, connect data, SQL text) is readable with strings.
    8. 'WCR: capture file IO write' is the wait event posted while a server process writes its WCR capture file.

    Read the article

  • In Ruby, how do I read memory values from an external process?

    - by grg-n-sox
    So all I simply want to do is make a Ruby program that reads some values from known memory addresses in another process's virtual memory. Through my research and basic knowledge of hex editing a running process's x86 assembly in memory, I have found the base address and offsets for the values in memory I want. I do not want to change them; I just want to read them.

    I asked a developer of a memory editor how to approach this, abstract of language and assuming a Windows platform. He told me the Win32API calls for OpenProcess, CreateProcess, ReadProcessMemory, and WriteProcessMemory were the way to go, using either C or C++. I think that the way to go would be just using the Win32API class and mapping two instances of it: one for either OpenProcess or CreateProcess, depending on whether the user already has the process running or not, and another instance mapped to ReadProcessMemory. I probably still need to find the function for getting the list of running processes, so I know which running process is the one I want if it is already running. This would take some work to put all together, but I am figuring it wouldn't be too bad to code up. It is just a new area of programming for me, since I have never worked this low-level from a high-level language (well, higher level than C anyway).

    I am just wondering about the ways to approach this. I could just use a bunch of Win32API calls, but that means having to deal with a bunch of string and array packing and unpacking that is system-dependent. I want to eventually make this work cross-platform, since the process I am reading from is produced from an executable that has multiple platform builds. (I know the memory addresses change from system to system. The idea is to have a flat file that contains all memory mappings, so the Ruby program can just match the current platform environment to the matching memory mapping.) But from the looks of things, I'll just have to make a class that wraps whatever the current platform's system shared-library memory-related function calls are. For all I know, there could already exist a Ruby gem that takes care of all of this for me that I am just not finding.

    I could also possibly try editing the executables for each build so that whenever the memory values I want to read are written to by the process, it also writes a copy of the new value to a space in shared memory; I would then somehow have Ruby make an instance of a class that is, under the hood, a pointer to that shared memory address, and somehow signal to the Ruby program that the value was updated and should be reloaded. Basically an interrupt-based system would be nice, but since the purpose of reading these values is just to send them to a scoreboard broadcast from a central server, I could just stick to a polling-based system that sends updates at fixed time intervals. I also could just abandon Ruby altogether and go for C or C++, but I do not know those nearly as well. I actually know more x86 than C++, and I only know C as far as system-independent ANSI C; I have never dealt with shared system libraries before.

    So is there a gem or lesser-known module available that has already done this? If not, then any additional information as to how to accomplish this would be nice. I guess, long story short: how do I do all this? Thanks in advance, Grg

    PS: Also a confirmation that those Win32API calls should be aimed at the kernel32.dll library would be nice.
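
    Not an authoritative answer, but to make the Win32 route concrete, here is a rough C sketch of the two calls that matter (the PID and address below are placeholders, not real values). From memory, the same functions can be wrapped from Ruby with something like Win32API.new('kernel32', 'OpenProcess', ['L','I','L'], 'L') and a sibling for ReadProcessMemory, unpacking the buffer with String#unpack; and yes, both live in kernel32.dll:

        #include <windows.h>
        #include <stdio.h>

        /* Hypothetical sketch: read a 4-byte value from another process.
         * The PID and address are placeholders you would fill in yourself. */
        int main(void)
        {
            DWORD   pid   = 1234;                 /* placeholder: target PID   */
            LPCVOID addr  = (LPCVOID)0x00400000;  /* placeholder: address      */
            DWORD   value = 0;
            SIZE_T  got   = 0;

            HANDLE h = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                                   FALSE, pid);
            if (h == NULL) {
                fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
                return 1;
            }
            if (ReadProcessMemory(h, addr, &value, sizeof(value), &got))
                printf("read %lu bytes: 0x%08lx\n",
                       (unsigned long)got, (unsigned long)value);
            else
                fprintf(stderr, "ReadProcessMemory failed: %lu\n", GetLastError());

            CloseHandle(h);
            return 0;
        }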

    Read the article

  • How can I tell the size of my app during development?

    - by Newbyman
    My programming decisions are directly related to how much room I have left, or worse, perhaps how much I need to shave off in order to get under the 10MB limit. I have read that Apple has quietly increased the 3G & EDGE download limit from 10MB up to 20MB in preparation for the iPad in April. Either way, my real question is: how can I get a rough estimate of how large my app will end up while I'm still in the development phase? Is the file size of my development folder roughly a 1-to-1 indicator? Is the compressed file size of my development folder a better approximation?

    My .xcodeproj file is only a couple hundred kB, but the size of my folder is 11.8 MB. I have a .sqlite database, fewer than 20 small PNG images, and a Settings.bundle. The rest are unknown Xcode files related to build, build for iPhone OS, simulator, etc. My source code is rather large, with around 1000 lines in most of the major controllers, all in all around 48 .h & .m files. But my Classes folder inside my development folder is less than 800kB. Digging around inside my build folder, there are lots of iPhone simulator files and debugging files which I don't think will contribute to the final product. The application file states that it is around 2.3 MB. However, this is such a large difference from the 11.8 MB that I have to wonder if this is just another piece of the equation.

    I have the app on my device; I'm in the testing phase. Therefore I thought that I would try to see how large the working version was on the device by checking in iTunes. However, my development app is visible in the right-hand applications pane of the iPhone screen, but with no information about the app, most importantly its size. I also checked in Organizer: I used the lower portion of the screen (Applications), found my application, and selected the drop-down arrow, which gave me "Application Data" and a download arrow button to the right to save a file on my desktop, named with the unique Apple ID. Inside the folder it had three folders (documents, library, tmp): the documents folder had a copy of my .sqlite database, the library folder a few more files but not anything obvious or of size, and the tmp folder was empty. All in all the entire folder was only 164kB, which tells me that this is not the right place to find the size either.

    I understand that the size is considered to be the size of my binary plus all the additional files and images that I have added. Does anyone have an effective way of gauging how large the binary is, or of relating the development folder size to what the final App Store application size will be? I know that questions have been posted with similar aspects, but I could not find any answered post that really described what files to look at, or how to determine the size specifically. I know that this question looks like a book, but I just wanted to be specific in conveying exactly what I'm looking for and the attempts thus far.

    *Note: all files are unzipped and still in regular working Xcode order for a single app, with no brought-in builds or referenced projects. I'm sure that this is straightforward; I just don't know where to look.
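
    One rough yardstick, sketched with placeholder paths and names: the .app bundle produced by a Release build for the device is a far better size predictor than the project folder, which is padded with simulator builds and debug symbols, and zipping the bundle approximates the compression applied when the app is submitted:

        # Hypothetical paths; substitute your own project and configuration names
        du -sh build/Release-iphoneos/MyApp.app

        # Zipping approximates what the store upload will weigh
        zip -qry MyApp.zip build/Release-iphoneos/MyApp.app
        du -h MyApp.zip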

    Read the article

  • NServiceBus Generic Host and mqsvc.exe high CPU

    - by Michael Stephenson
    We have been doing some work with NServiceBus recently and observed some unusual behaviour which was caused by our mistake, and it seemed worthy of a small post.

    The Scenario

    In our solution we were doing some standard NServiceBus stuff by pushing a message to a queue using NServiceBus. We had a direct send/receive scenario rather than a publish/subscribe one. The background process which was meant to collect the message and then process it was a normal NServiceBus message handler. We would run NServiceBus.Host.exe, which would find the handler and then do the usual NServiceBus magic.

    The Problem

    In this solution we were creating some automated tests around this module of the integration process to ensure that it would work well. We had two tests.

    Test 1: This test would start NServiceBus.Host.exe using the Process object, then seed a message to the queue via our web service facade sitting above the queue which wrapped NServiceBus. The background process would then process the message and the test would check the message had been processed fine. If all was well, the NServiceBus.Host.exe process was stopped.

    Test 2: In test 2 we would do a very similar thing, except that instead of starting the process, the test would install NServiceBus.Host.exe as a Windows service, start the service before the test, and stop the service once the test had executed.

    The Results of the Tests

    Test 1 worked really well. In test 2, however, we found that it didn't really work at all: instead of running the background process, we found that between mqsvc.exe and NServiceBus.Host.exe the CPU on the machine was maxed out and nothing was really happening.

    The Solution

    After trying a few things we found that the permissions on the queue were not set correctly. Once this was resolved it all worked fine: CPU was not excessive and it ran just like the console application.

    I think the couple of takeaways from this are:

    Make sure you set the Windows service for the NServiceBus Generic Host to the right credentials. When you install the Generic Host as a Windows service, by default it will use the default Windows credentials. For any production-like scenario you should be using a domain account to run the process via the Windows service.

    Make sure you have the queue set with the right permissions. For the credentials you have used to configure the Generic Host as a Windows service, you should ensure that this user has the appropriate permissions for any queues it will interact with.

    Make sure you turn on the right logging configuration in NServiceBus. When this wasn't working correctly we didn't know there was an issue; we were just experiencing the high CPU condition. I am a little surprised that there wasn't something logged and that the process didn't crash. I guess this could be by design, bearing in mind that the process could be monitoring many queues. My point is just that originally we didn't have all of the log4net logging which is available from NServiceBus turned on. It's probably a good idea to have this turned on and configured until you are happy your solution is working fine.

    Thanks to Ahmed Hashmi on my team who got this working in the end.

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blog, and simplifying the basics is a challenge. In the real world we always see checks and queues for a process: railway reservations, banks, customer support and so on all use lines and queues to serve everyone. The shorter the queue, the higher the efficiency of the system (that is, the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data.
    - Concurrency is also affected when multiple processes attempt to change the same data simultaneously.
    - There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic Concurrency
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other processes can modify that data.
    - Also acquires locks on data being modified, so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic Concurrency
    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data rows are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction Processing

    A transaction is the basic unit of work in SQL Server. A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction, marked with a BEGIN TRAN and ended by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases. It ensures all transactions are performed as a single unit of work regardless of hardware or system failure. A - Atomicity, C - Consistency, I - Isolation, D - Durability.

    - Atomicity: Each transaction is treated as all or nothing; it either commits or aborts.
    - Consistency: Ensures that a transaction won't allow the system to arrive at an incorrect logical state; the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: Separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors, known as dependency or consistency problems.

    - Lost Updates: Occur when two processes read the same data, both manipulate the data, changing its value, and both then try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty Reads: Occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: Occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports five isolation levels that control the behavior of read operations.

    Read Uncommitted
    - All the behaviors above except lost updates are possible.
    - Implemented by allowing read operations to take no locks; because of this, they won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you read a row before it is updated and an update then moves the row to a higher page number, where your scan encounters it again later.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely. This can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed
    - There are two varieties of read committed isolation: optimistic and pessimistic (the default).
    - Ensures that a read never reads data that another application hasn't committed. If another transaction is updating data and holds exclusive locks on it, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read
    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no such changes can be made by other transactions. It does, however, allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot
    - Snapshot isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be.
    - It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable
    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read: all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock not only data that has been read, but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock.
    - Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking, and deadlocks, because they are the fundamental blocks that make concurrency possible. Now, if you are with me, let us continue learning about SQL Server locking basics.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft’s offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register | Download this syllabus | Download the Scrum Guide

    What is the Professional Scrum Developer course all about?

    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?

    This course is suitable for any member of a software development team: architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?

    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:

    - Form effective teams
    - Explore and understand legacy “Brownfield” architecture
    - Define quality attributes, acceptance criteria, and “done”
    - Create automated builds
    - How to handle software hotfixes
    - Verify that bugs are identified and eliminated
    - Plan releases and sprints
    - Estimate product backlog items
    - Create and manage a sprint backlog
    - Hold an effective sprint review
    - Improve your process by using retrospectives
    - Use emergent architecture to avoid technical debt
    - Use Test Driven Development as a design tool
    - Setup and leverage continuous integration
    - Use Test Impact Analysis to decrease testing times
    - Manage SQL Server development in an Agile way
    - Use .NET and T-SQL refactoring effectively
    - Build, deploy, and test SQL Server databases
    - Create and manage test plans and cases
    - Create, run, record, and play back manual tests
    - Setup a branching strategy and branch code
    - Write more maintainable code
    - Identify and eliminate people and process dysfunctions
    - Inspect and improve your team’s software development process

    What does the week look like?

    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.

    The Sprints

    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the half-day sprints will roughly follow this schedule:

        Component                 Description                                                             Minutes
        Instruction               Presentation and demonstration of new and relevant tools & practices        60
        Sprint planning meeting   Product owner presents backlog; each team commits to delivering
                                  functionality                                                                10
        Sprint planning meeting   Each team determines how to build the functionality                          10
        The Sprint                The team self-organizes and self-manages to complete their tasks            120
        Sprint Review meeting     Each team will present their increment of functionality to the other
                                  teams                                                                        30
        Sprint Retrospective      A group retrospective meeting will be held to inspect and adapt              10

    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.

    Module 1: INTRODUCTION

    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin.

    - Trainer and student introductions
    - Professional Scrum Developer program
    - Agenda
    - Logistics
    - Team formation
    - Retrospective

    Module 2: SCRUMDAMENTALS

    This module provides a level-setting understanding of the Scrum framework, including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings.

    - Scrum overview
    - Scrum roles
    - Scrum timeboxes (ceremonies)
    - Scrum artifacts
    - Simulation
    - Retrospective

    It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course.

    Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010

    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation, this time using Visual Studio to manage their product development.

    - Mapping Scrum to Visual Studio 2010
    - User Story work items
    - Task work items
    - Bug work items
    - Demonstration
    - Simulation
    - Retrospective

    Module 4: THE CASE STUDY

    In this module the team is introduced to their problem domain for the week. A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and the what of the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported.

    - Introduction to the case study
    - Download the source code, build, and explore the application
    - Define the quality attributes for the project
    - Define “done”
    - How to file effective bugs in Visual Studio 2010
    - Retrospective

    Module 5: HOTFIX

    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug.

    - How to use Architecture Explorer to visualize and explore
    - Create a unit test to validate the existence of a bug
    - Find and fix the bug
    - Validate and close the bug
    - Retrospective

    Module 6: PLANNING

    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information.

    - Release vs. Sprint planning
    - Release planning and the Product Backlog
    - Product Backlog prioritization
    - Acceptance criteria and tests
    - Sprint planning and the Sprint Backlog
    - Creating and linking Sprint tasks
    - Retrospective

    At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.

    Module 7: EMERGENT ARCHITECTURE

    This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint.

    - Architecture and Scrum
    - Emergent architecture
    - Principles, patterns, and practices
    - Visual Studio 2010 modeling tools
    - UML and layer diagrams
    - SPRINT 1
    - Retrospective

    Module 8: TEST DRIVEN DEVELOPMENT

    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring.

    - Continuous integration
    - Team Foundation Build
    - Test Driven Development (TDD)
    - Refactoring
    - Test Impact Analysis
    - SPRINT 2
    - Retrospective

    Module 9: AGILE DATABASE DEVELOPMENT

    This module lets the SQL Server database developers in on a little secret: they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle.

    - Agile database development
    - Visual Studio database projects
    - Importing schema and scripts
    - Building and deploying
    - Generating data
    - Unit testing
    - SPRINT 3
    - Retrospective

    Module 10: SHIP IT

    Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline.

    - Acceptance criteria
    - Testing in Visual Studio 2010
    - Microsoft Test Manager
    - Writing and running manual tests
    - Branching
    - SPRINT 4
    - Retrospective

    Module 11: OVERCOMING DYSFUNCTION

    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class.

    - Scrum-butts and flaccid Scrum
    - Best practices working as a team
    - Team challenges
    - ScrumMaster challenges
    - Product Owner challenges
    - Stakeholder challenges
    - Course Retrospective

    What will be expected of you and your team?

    This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:

    - Pay attention to all lectures and demonstrations
    - Participate in team and group discussions
    - Work collaboratively with other team members
    - Obey the timebox for each activity
    - Commit to work and do your best to deliver

    All teams should have these skills:

    - Understanding of Scrum
    - Familiarity with Visual Studio 2010
    - C#, .NET 4.0 & ASP.NET 4.0 experience*
    - SQL Server 2008 development experience
    - Software testing experience

    * Check with the instructor ahead of time for the exact technologies.

    Self-organising teams

    Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?

    Because of the nature of this course, as explained above, certain types of people should probably not attend:

    - Students requiring command-and-control style instruction: there are no prescriptive, step-by-step (think traditional Microsoft Learning) labs in this course
    - Students who are unwilling to work within a timebox
    - Students who are unwilling to work collaboratively on a team
    - Students who don’t have any skill in any of the software development disciplines
    - Students who are unable to commit fully to their team: not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience

    Find a course and register | Download this syllabus | Download the Scrum Guide

    Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • What functional language is most suited to create games with?

    - by Ricket
    I have had my eye on functional programming languages for a while, but am hesitating to actually get into them. I think it's about time I at least start glancing in that direction, to make sure I'm ready for anything. I've seen talk of Haskell, F#, Scala, and so on. But I have no clue about the differences between the languages and their communities, nor do I particularly care, except in the context of game development. So, from a game development standpoint, which functional programming language has the most features suited for game programming? For example, are there any functional game development libraries/engines/frameworks or graphics engines for functional languages? Is there a language that better handles the data structures which are commonly used in game development? Bottom line: which functional programming language is best for functional game programming, and why? I believe/hope this question will declare a clear best language; therefore I haven't marked it CW despite its subjective tendency.

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to what slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how?

    What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • Why does distance field text rendering have a clear outline?

    - by jinhwan
    http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf

    The overall process for doing distance field rendering is clear to me, but how it actually works is not. It looks like the distance field pixels created around the original pixels may affect the 2D texture sampling interpolation process, but I can't understand that interpolation process. I've read that distance field rendering is processed under nearest-neighbour interpolation. If that is true, shouldn't distance field rendering produce a non-interpolated result? In my mind, it should look like retro-style pixel art. What am I misunderstanding in this process? So far, I see no difference from alpha testing: both throw away all pixels that are not inside. How do the extra distance field pixels affect rendering under nearest-neighbour interpolation?

    Read the article

  • With Monit, how do I restart a process when a directory timestamp check fails?

    - by Alterscape
    In my /etc/monit/monitrc I have the following lines:

        check process foo_server with pidfile /var/run/bwam_server.pid
            start program = "/Users/foo/foo_server.sh start"
            stop program = "/Users/foo/foo_server.sh stop"

        check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
            if timestamp > 1 minute then alert
            #if timestamp > 1 minute then restart foo_server

    I know I shouldn't have some of this stuff in my home directory, but this aside: if I uncomment the last line, Monit tells me syntax error on foo_server -- but I am, as far as I understand, correctly defining the process -- how else do I reference it?
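
    A hedged guess at the cause: monit's restart action takes no argument, since it always acts on the service whose check block it appears in, which would explain the syntax error on foo_server. One commonly suggested workaround (untested here, and the path to the monit binary is an assumption) is to shell out to monit itself with an exec action:

        check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
            if timestamp > 1 minute then exec "/usr/bin/monit restart foo_server"

    Depending on the monit version, a depends on relationship between the two checks may achieve a similar effect; the manual for your installed version is the authority here.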

    Read the article

  • Is it possible to interrupt or cancel the trash-delete process?

    - by pete
    I am connected to the company server, onsite not remote, which is Windows-based. When, in the OS X Finder, I go to delete a file or directory that resides on the Windows-based server, the trash progress panel appears as usual. Not infrequently it will just hang there, appearing to be active, but no progress occurs on the progress bar. So the X is chosen to stop or cancel the process. A new panel appears and, while appearing to be active, again just hangs there indefinitely. My lack of reputation prevents posting a screen shot - sorry. The question then is this: is there a way, via the Activity Monitor, Terminal, etc., to interrupt this deleting process in the Mac OS? Currently, I have to force a shutdown and reboot to eradicate the ever-deleting panel. Relaunching the Finder does not solve the problem and only eliminates the Finder until I force a reboot. Any assistance is appreciated. Thanks, Pete. iMac, Mac OS X (10.6.8), i7 2.8GHz, 8GB RAM

    Read the article

  • Why does RunDLL32 process exit early on Windows 7?

    - by Vicky
    On Windows XP and Vista, I can run this code:

        STARTUPINFO si;
        PROCESS_INFORMATION pi;
        BOOL bResult = FALSE;

        ZeroMemory(&pi, sizeof(pi));
        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(STARTUPINFO);
        si.dwFlags = STARTF_USESHOWWINDOW;
        si.wShowWindow = SW_SHOW;

        bResult = CreateProcess(NULL,
                                "rundll32.exe shell32.dll,Control_RunDLL modem.cpl",
                                NULL, NULL, FALSE, NORMAL_PRIORITY_CLASS,
                                NULL, NULL, &si, &pi);
        if (bResult)
        {
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hProcess);
            CloseHandle(pi.hThread);
        }

    and it operates as I would expect, i.e. the WaitForSingleObject does not return until the Modem Control Panel window has been closed by the user. On Windows 7, with the same code, WaitForSingleObject returns straight away (with a return code of 0 indicating that the object signalled the requested state). Similarly, if I take it to the command line, on XP and Vista I can run:

        start /wait rundll32.exe shell32.dll,Control_RunDLL modem.cpl

    and it does not return control to the command prompt until the Control Panel window is closed, but on Windows 7 it returns immediately. Is this a change in RunDll32? I know MS made some changes to RunDll32 in Windows 7 for UAC, and it looks from these experiments as though one of those changes might have involved spawning an additional process to display the window, and allowing the originating process to exit. The only thing that makes me think this might not be the case is that, using a process explorer that shows the creation and destruction of processes, I do not see anything additional being created beyond the called rundll32 process itself. Any other way I can solve this? I just don't want the function to return until the control panel window is closed.
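
    I can't say what changed inside rundll32 on Windows 7, but if it is handing the window off to a second process and exiting, one way to preserve the wait-until-closed behaviour is to wait on the whole process tree instead of a single handle, using a job object. A rough C sketch of that idea follows (error handling omitted; note that AssignProcessToJobObject can fail if the launching process is itself already inside a job, which is common on newer Windows):

        #include <windows.h>

        /* Hypothetical sketch: wait until rundll32 AND anything it spawns
         * have all exited, by placing it in a job object and waiting for
         * the job's active-process count to reach zero. */
        int main(void)
        {
            STARTUPINFOA si = { sizeof(si) };
            PROCESS_INFORMATION pi = { 0 };
            char cmd[] = "rundll32.exe shell32.dll,Control_RunDLL modem.cpl";

            HANDLE hJob  = CreateJobObjectA(NULL, NULL);
            HANDLE hPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1);

            JOBOBJECT_ASSOCIATE_COMPLETION_PORT acp;
            acp.CompletionKey  = hJob;
            acp.CompletionPort = hPort;
            SetInformationJobObject(hJob, JobObjectAssociateCompletionPortInformation,
                                    &acp, sizeof(acp));

            /* Start suspended so the child can't spawn-and-exit before it
             * has been assigned to the job. */
            CreateProcessA(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                           NULL, NULL, &si, &pi);
            AssignProcessToJobObject(hJob, pi.hProcess); /* may fail if we are
                                                            already in a job */
            ResumeThread(pi.hThread);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);

            /* Pump job notifications until no processes remain in the job. */
            DWORD        msg;
            ULONG_PTR    key;
            LPOVERLAPPED ov;
            while (GetQueuedCompletionStatus(hPort, &msg, &key, &ov, INFINITE) &&
                   msg != JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO)
                ;

            CloseHandle(hPort);
            CloseHandle(hJob);
            return 0;
        }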

    Read the article

  • Change the default port of IIS and let another process listen on port 80 (Windows Server 2008)

    - by aleroot
    I have an installation of Windows Server 2008 running IIS 6 with a website listening on port 8080. Even though I have moved the website to listen on 8080, port 80 is still kept in use by IIS (in fact, by the kernel process: System, PID 4). I want to let another process listen on port 80 without uninstalling or disabling IIS; that is, I want to keep IIS listening on port 8080 and another service on port 80. Is there a way to do it? I saw another similar thread here on Server Fault, but the solution (using httpcfg.exe delete iplisten -i 0.0.0.0:80) only works on 2003, because on 2008 the httpcfg.exe utility doesn't exist and it seems that it cannot be installed. Does anyone have a solution to get rid of the kernel listening on port 80 in Windows Server 2008 with IIS running?
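
    For what it's worth, on Server 2008 the httpcfg.exe functionality moved into netsh. Whether it cures this particular case I can't promise, but the equivalent commands look like this (the IP address is a placeholder, and since iplisten entries are per-address rather than per-port, the usual approach is to pin http.sys to one IP and give the other service a second IP):

        rem Show what http.sys is currently bound to, and which apps own what
        netsh http show iplisten
        netsh http show servicestate

        rem Restrict http.sys to the IP used by IIS (placeholder address)
        netsh http add iplisten ipaddress=192.168.1.10

        rem Restart the HTTP service stack so the change takes effect
        net stop http /y
        net start w3svc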

    Read the article

  • How to troubleshoot down terminal service/Citrix sessions when the process just won't terminate?

    - by Chad
    Running a Citrix Presentation Server farm, version 4.5.6, on Windows 2003 SP2. In the Citrix Access Management Console, I sometimes get a session that shows it's in a down state, but has none of the normal info associated with it (user name, applications, client name, idle time, etc.). It does say which server it's on, so I check out that server's Terminal Services Manager. I can see the down session but cannot reset it. I get: (Error 7024 - the requested operation cannot be completed because the terminal connection is currently busy processing a connect, disconnect, reset, or delete operation.) So I go to the Task Manager and look up the processes running under that session ID. I see it's one of my published apps, but when I try to end the process, it simply does nothing and the process remains. Any way to get rid of these sessions without a server reboot?
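
    Two command-line angles that are sometimes worth trying before a reboot (the server name, session ID and PID below are placeholders): resetting the session remotely, and force-killing the process by PID. Be warned that if the process is stuck in a kernel-mode call, which the 7024 error hints at, even taskkill /f may fail and a reboot remains the only cure:

        rem List sessions on the affected server
        query session /server:CTXSRV01

        rem Reset the stuck session by its ID
        reset session 7 /server:CTXSRV01

        rem Force-kill the hung published-app process by PID
        taskkill /s CTXSRV01 /f /pid 4712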

    Read the article

< Previous Page | 350 351 352 353 354 355 356 357 358 359 360 361  | Next Page >