Search Results

Search found 97411 results on 3897 pages for 'code analysis tool'.


  • Record and Play your WebLogic Console Tasks Like a DVR

    - by james.bayer
    Automation using WebLogic Scripting Tool

    Today on the Oracle internal mailing list for WebLogic Server questions, someone asked how to automate the configuration of the thread model for WebLogic Server, and they were having trouble with the jython scripting syntax.  I’ve previously written about this feature, called Work Managers, and the associated constraints.  However, I did not show how to automate the process of configuring this without the console using WebLogic Scripting Tool – the jython scripting automation environment abbreviated as WLST.  I’ve written some very basic introductions to WLST before, and there is also an Oracle By Example on the subject, but this is a bit more advanced.  Fear not, because there is a really easy-to-use feature of the WLS console that lets you “Record” user actions, just like a DVR.  Using these recordings of the web-based console, you can easily create a script even if you are unfamiliar with the WLST syntax and API.  I’m a big fan of both DVRs and automation, as can be evidenced by this old Halloween picture taken during simpler times.  Obviously the Cast Away and The Big Lebowski references show some age.  I was a big Tivo fan-boy back in the day and I still think it’s the best DVR.

    I strongly believe that WebLogic Scripting Tool (WLST) is an absolutely essential tool for automating administration tasks in anything beyond a development environment.  Even in development environments you can make a case that it makes sense to start the automation for environments downstream.  I promise you that once you start using it for any tasks that you do even semi-regularly, you won’t go back to clicking through the console.  It’s simply so much more efficient and less error-prone to run a script.

    Let’s say you need to create a Work Manager and MaxThreadsConstraint – the easy way to do it is to configure it in the WLS console first while capturing the commands with a recording.  See the images for the simple steps – click to enlarge.

        Record Console Configurations to a File
        Review the Recordings and Make Slight Modifications

    In order to make the recorded .py file directly callable as a stand-alone script, I added calls to the connect() and edit() functions at the beginning and calls to disconnect() and exit() at the end – otherwise the main section of the script was provided by the console recording.  Below is the resulting file I saved as d:/temp/wm.py

        connect('weblogic','welcome1', 't3://localhost:7001')
        edit()
        startEdit()

        cd('/SelfTuning/wl_server')
        cmo.createMaxThreadsConstraint('MaxThreadsConstraint-0')

        cd('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))
        cmo.setCount(5)
        cmo.unSet('ConnectionPoolName')

        cd('/SelfTuning/wl_server')
        cmo.createWorkManager('WorkManager-0')
        cd('/SelfTuning/wl_server/WorkManagers/WorkManager-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))

        cmo.setMaxThreadsConstraint(getMBean('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0'))
        cmo.setIgnoreStuckThreads(false)

        activate()
        disconnect()
        exit()

    Run the Script

    If you want to test it, be sure to delete the Work Manager and MaxThreadsConstraint that you had previously created in the console.  Do something like the following – set up the environment and tell WLST to execute the script, which happens in the first two lines; the rest doesn’t require any user input:

        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server\bin>setDomainEnv.cmd
        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST d:\temp\wm.py

        Initializing WebLogic Scripting Tool (WLST) ...
        Welcome to WebLogic Server Administration Scripting Shell
        Type help() for help on available commands

        Connecting to t3://localhost:7001 with userid weblogic ...
        Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.

        Warning: An insecure protocol was used to connect to the server.
        To ensure on-the-wire security, the SSL port or Admin port should be used instead.

        Location changed to edit tree. This is a writable tree with DomainMBean as the root.
        To make changes you will need to start an edit session via startEdit().
        For more help, use help(edit)

        Starting an edit session ...
        Started edit session, please be sure to save and activate your changes once you are done.
        Activating all your changes, this may take a while ...
        The edit lock associated with this edit session is released once the activation is completed.
        Activation completed
        Disconnected from weblogic server: examplesServer

        Exiting WebLogic Scripting Tool.

    Now if you go back and look in the console, the changes have been made and we now have a complete script.  Of course there is a full MBean reference and you can learn the nuances of jython and WLST, but why not let the WLS console do most of the work for you!  Happy scripting.

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many pages with technical data for each model code are linked from the product style page:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex tag to, as I imagine this might stop them from cannibalizing keyword authority for more important, content-rich pages on the site. Any advice appreciated.

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

    I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them.

    Effort Required

    Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application!

    Plumbing Problems

    A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty.  In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP .NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources. Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

    Mix and Match

    So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers, which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face.

    Implementation

    As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows:

    Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.

    Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.

    Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.

    I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line. Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

        public class CustomerTools
        {
            [Verb]
            public void UpdateName(int customerId, string firstName, string lastName)
            {
                //invoke internal services/domain objects/whatever to perform update
            }
        }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

        Parser.Run(args, new CustomerTools());

    Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

        SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

    ‘SomeExe’ in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around:

    Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would (a rough sketch of this follows below).

    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.

    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.

    Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
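    As a rough sketch of that argument-splitting point (the helper name is mine, and this is not part of CLAP itself), a minimal quote-aware splitter could look something like this:

        // Minimal sketch of a quote-aware splitter (hypothetical helper, not part of CLAP):
        // turn a command string into an args array the way a console would.
        using System.Collections.Generic;
        using System.Text;

        public static class CommandLineSplitter
        {
            public static string[] SplitCommandLine(string input)
            {
                var args = new List<string>();
                var current = new StringBuilder();
                bool inQuotes = false;

                foreach (char c in input)
                {
                    if (c == '"')
                    {
                        inQuotes = !inQuotes;            // toggle quoted mode, drop the quote itself
                    }
                    else if (char.IsWhiteSpace(c) && !inQuotes)
                    {
                        if (current.Length > 0)          // whitespace outside quotes ends a token
                        {
                            args.Add(current.ToString());
                            current.Clear();
                        }
                    }
                    else
                    {
                        current.Append(c);
                    }
                }

                if (current.Length > 0)
                {
                    args.Add(current.ToString());
                }

                return args.ToArray();
            }
        }

    Something along those lines would let the text posted from the web page be turned into the same kind of string[] a console application receives, and then handed straight to Parser.Run.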
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • Symbolic Regular Expression Exploration

    - by Robz / Fervent Coder
    This is a pretty sweet little tool. Rex (Regular Expression Exploration) is a tool that allows you to give it a regular expression, and it returns matching strings. The example below creates 10 strings that start and end with a digit and have at least 2 characters:

        > rex.exe "^\d.*\d$" /k:10

    This is something I could use to validate/generate the regular expressions I have created with both UppercuT and RoundhousE. Check out the video below.

        Margus Veanes - Rex - Symbolic Regular Expression Exploration
        Margus Veanes, a researcher from the RiSE group at Microsoft Research, gives an overview of Rex, a tool that generates matching strings from .NET regular expressions. Rex turns regular expres...

    Read the article

  • What is PowerPivot?

    - by Enrique Lima
    Let’s start with this: it is a great way to visualize data and transform it into information.  It is intended to be a self-service business intelligence tool provided as an add-in for a tool we already know … Microsoft Excel … 2010. If you have been wondering where to go, what to do, and how do I do it, I am including links that will help you on your way to a better understanding of the topic and tool (links to TechNet documentation):

        Introduction to PowerPivot
        Walkthrough: Create your first PowerPivot Workbook
        Get to know the UI
        PowerPivot for Excel Central
        Get some samples

    Read the article

  • Camtasia Studio

    - by Andy Morrison
    Have you heard about this tool? This is a capture tool that records your actions as you perform them on your workstation, remote desktop, etc.  It's made by the same company that makes SnagIt, etc. http://www.techsmith.com/camtasia.asp I have used this at several customers to record our install and configurations of BizTalk and more - this can come in very handy to help reduce differences in environments (because you can go back and review exactly what you did in the previous environment) as well as to supplement your installation docs.   The product also includes an editing studio and various media types for export. I haven't been paid to write this - I just think the tool is very nice.

    Read the article

  • SharePoint, HTTP Modules, and Page Validation

    - by Damon Armstrong
    Sometimes I really believe that SharePoint actively thwarts my attempts to get it to do what I want.  First you look at something and say, wow, that should work.  Then you realize it doesn’t.  Then you have an epiphany and see a workaround.  And when you almost have that work around working… well then SharePoint says no again.  Then it’s off on another whirl-wind adventure to find a work around for the workaround.  I had one of those issues today, but I think I finally got past the last roadblock. So, I was writing an HTTP module as a workaround for another problem.  Everything looked like it was working great because I had been slowly adding code into the HTTP module bit by bit in a prototyping effort.  Finally I put in the last bit of code in place… and I started to get an error: “The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again.” This is not an uncommon error – it normally occurs when you are updating an item on a GET request and you have not marked the web containing the item with AllowUnsafeUpdates.  One issue, however, is that I wasn’t updating anything in my code.  I was, however, getting an SPWeb object so I decided to set the AllowUnsafeUpdates property on it to true for good measure. Once that was in place, I ran it again… “The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again.” WTF?!?!  I really expected that setting the AllowUnsafeUpdates property on the SPWeb would fix the issue, but clearly that was not the case.  I have had occasion to disassemble some SharePoint code with .NET Reflector in the past, and one of the things SharePoint abuses a bit more than it should is the HttpContext.  One way to avoid this abuse is to clear out the HttpContext while your code runs and then set it back once you are done.  I tried this next, and everything worked out just like I had expected.  So, if you are building an HTTP Module for SharePoint and some code that you are running ends up giving you a security validation error, remember to try running that code with AllowUnsafeUpdates turned on and try running the code with the HttpContext nulled out (just remember to set it back after your code runs or else you’ll really jack things up).
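    For illustration only, here is a minimal sketch of the pattern described above, assuming using directives for System.Web and Microsoft.SharePoint and that this runs inside the module's event handler (the site URL is made up):

        // Sketch only: run SPWeb updates from inside an HTTP module with AllowUnsafeUpdates
        // turned on and HttpContext.Current temporarily nulled out, then restore everything.
        HttpContext savedContext = HttpContext.Current;
        try
        {
            HttpContext.Current = null;                  // keep the page security validation out of the picture
            using (SPSite site = new SPSite("http://server/sites/demo"))   // made-up URL
            using (SPWeb web = site.OpenWeb())
            {
                web.AllowUnsafeUpdates = true;
                try
                {
                    // ... perform the item updates here ...
                }
                finally
                {
                    web.AllowUnsafeUpdates = false;
                }
            }
        }
        finally
        {
            HttpContext.Current = savedContext;          // always put the context back
        }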

    Read the article

  • Gnome extensions stay in the list after being removed

    - by SingerOfTheFall
    I've got a little issue with gnome shell extensions. After installing some of them, I understood I didn't like them and decided to remove them. The extensions themselves (their folders in /home/username/.local/share/gnome-shell/extensions) were deleted successfully. However, the deleted extensions were not removed from the list of installed extensions at extensions.gnome.org. They also were not removed from the list in gnome-tweak-tool. So now in my list I have a bunch of extensions that I have already deleted. The funny thing is that I can't reinstall them too, since both the gnome-tweak-tool and the website think they are still there. This isn't a big deal of course, but I find it to be a little annoying. Reinstalling gnome-tweak-tool didn't help. Is there a way to somehow update the status of installed extensions?

    Read the article

  • ORA-7445 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 153788.1 ORA-600/ORA-7445 Lookup Tool
    QUICKLINK: Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Have you observed an ORA-07445 error reported in your alert log? While the ORA-600 error is "captured" as a handled exception in the Oracle source code, the ORA-7445 is an unhandled exception error due to an OS exception, which should result in the creation of a core file.  An ORA-7445 is a generic error and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces.

    Looking for the best way to diagnose? Whenever an ORA-7445 error is raised a core file is generated.  There may be a trace file generated with the error as well.  Prior to 11g, the core files are located in the CORE_DUMP_DEST directory.  Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data.  Diagnostic files are written into a root directory for all diagnostic data called the ADR home.  Core files at 11g will go to the ADR_HOME/cdump directory.  For more information on the Oracle 11g Diagnosability feature see:

        Note 453125.1 11g Diagnosability Frequently Asked Questions
        Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]

    NOTE: While the core file is captured in the Diagnosability infrastructure, the file may not be included with a diagnostic package.

    1.  Check the Alert Log

    The alert log may indicate additional errors or other internal errors at the time of the problem.  In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors.  The ORA-7445 error can be a side effect of the other problems, and you should review the first error and associated core file or trace file and work down the list of errors.

        Note 1020463.6 DIAGNOSING ORA-3113 ERRORS
        Note 1812.1 TECH: Getting a Stack Trace from a CORE file
        Note 414966.1 RDA Documentation Index

    If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If you see the message "MAX DUMP FILE SIZE EXCEEDED" at the end of the file, the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. There could be vital diagnostic information missing in the file, and discovering the root issue may be very difficult.  Set the MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information. For pointers on deeper analysis of these errors see:

        Note 390293.1 Introduction to 600/7445 Internal Error Analysis
        Note 211909.1 Customer Introduction to ORA-7445 Errors

    2.  Search the 600/7445 Lookup Tool

    Visit My Oracle Support to access the ORA-00600 Lookup tool (Note 153788.1). The ORA-600/ORA-7445 Lookup tool may lead you to applicable content in My Oracle Support on the problem, and can be used to investigate the problem with argument data from the error message, or you can pull out key stack pointers from the associated trace file to match up against known bugs.

    3.  "Fine tune" searches in the Knowledge Base

    As the ORA-7445 error indicates an unhandled exception in the Oracle source code, your search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set.  The more you can learn about the environment and code leading to the errors, the easier it will be to narrow the hit list to match your problem.

        Note 153788.1 ORA-600/ORA-7445 Troubleshooter
        Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    NOTE: If no trace file is captured, see Note 1812.1 TECH: Getting a Stack Trace from a CORE file.  Core files are managed through 11g Diagnosability, but are not packaged with other diagnostic data automatically.  The core files can be quite large, but may be useful during analysis within Oracle Support.

    4.  If assistance is required from Oracle

    Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum:

        The alert log
        Associated trace file(s) or incident package at 11g
        Patch level information
        Core file(s)
        Information about changes in configuration and/or application prior to the issues
        If the error is reproducible, a self-contained reproducible testcase: Note 232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors
        RDA report or Oracle Configuration Manager information
            Oracle Configuration Manager Quick Start Guide
            Note 548815.1 My Oracle Support Configuration Management FAQ
            Note 414966.1 RDA Documentation Index

    *** For reference to the content in this blog, refer to Note 1092832.1 Master Note for Diagnosing ORA-600

    Read the article

  • Benchmark for website speed optimization

    - by gowri
    I am working on website speed optimization. I mostly use 3 tools for analyzing speed.

    Speed analysis tools:

        Google PageSpeed tool
        YSlow Firefox extension
        Web Page Performance Test

    I am measuring performance using the above tools and benchmarking the results before and after optimization.

    Before optimization:

        Google PageSpeed Insights score: 53/100
        Web Page Performance Test: 55/100 (First View: 10.710s, Repeat View: 6.387s)
        Yahoo overall performance score: 68

    Stage 1, after optimization:

        Google PageSpeed Insights score: 88/100
        Web Page Performance Test: 88/100 (First View: 6.733s, Repeat View: 1.908s)
        Yahoo overall performance score: 80

    My questions: Am I doing this the correct way? What is the best way to benchmark speed optimization? Is there any standard? Is there any better tool for analyzing speed?

    Read the article

  • Testing loses its effectiveness if not all programmers use the tests

    - by Jeff O
    Let's assume you are convinced that the extra time spent unit testing has merit and improves production. Does that still hold up when not everyone working on the same code uses the tests? This question makes me wonder if fixing tests that not everyone uses is a waste of time. If you correct a test so the new code will pass, you're assuming the new code is correct. The person updating the test had better have a firm understanding of the reasoning behind the code change and decide whether the test or the new code needs to be fixed. This much inconsistency in a team when it comes to testing is probably an indication of other problems as well. There is a certain amount of risk involved that someone else on the team will alter code that is covered by testing. Is this the point where testing becomes counter-productive?

    Read the article

  • Multiple vulnerabilities in Mozilla Firefox

    - by chandan
    The following vulnerabilities were fixed. Component: Firefox web browser. Product and resolution: Solaris 11 11/11 SRU 3; Solaris 10 Contact Support.

        CVE-2011-2372 - Permissions, Privileges, and Access Controls vulnerability - CVSSv2 base score 3.5
        CVE-2011-2995 - Denial of Service (DoS) vulnerability - CVSSv2 base score 10.0
        CVE-2011-2997 - Denial of Service (DoS) vulnerability - CVSSv2 base score 10.0
        CVE-2011-3000 - Improper Control of Generation of Code ('Code Injection') vulnerability - CVSSv2 base score 4.3
        CVE-2011-3001 - Permissions, Privileges, and Access Controls vulnerability - CVSSv2 base score 4.3
        CVE-2011-3002 - Denial of Service (DoS) vulnerability - CVSSv2 base score 9.3
        CVE-2011-3003 - Denial of Service (DoS) vulnerability - CVSSv2 base score 10.0
        CVE-2011-3004 - Improper Input Validation vulnerability - CVSSv2 base score 4.3
        CVE-2011-3005 - Denial of Service (DoS) vulnerability - CVSSv2 base score 9.3
        CVE-2011-3232 - Improper Control of Generation of Code ('Code Injection') vulnerability - CVSSv2 base score 9.3
        CVE-2011-3648 - Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability - CVSSv2 base score 4.3
        CVE-2011-3650 - Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability - CVSSv2 base score 9.3
        CVE-2011-3651 - Denial of Service (DoS) vulnerability - CVSSv2 base score 10.0
        CVE-2011-3652 - Denial of Service (DoS) vulnerability - CVSSv2 base score 10.0
        CVE-2011-3654 - Denial of Service (DoS) vulnerability - CVSSv2 base score 10.0
        CVE-2011-3655 - Improper Control of Generation of Code ('Code Injection') vulnerability - CVSSv2 base score 9.3

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • How to credit other authors in an open source project

    - by erik
    I have a pet project that I am planning to release as open source at some point in the not-too-distant future. A couple of the files use or are mostly code that was taken from a project released under the New BSD License. While I have changed it to fit my needs and added some small stuff, the algorithm and the functionality are basically exactly the same. I want to make sure that the author of the code gets credit and that the license is not broken, but I also want to make the reader aware that this is not the code as it was originally released. How should I approach this? Should I isolate the code as much as possible and just retain the original license? Maybe put all the files that contain foreign code in their own folder and add a readme explaining what has been added/removed? There must be tons of projects using other open source code. What is the standard approach to this?
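    For what it's worth, one common approach is to keep the original copyright notice and New BSD license text with the affected files and add a short provenance note on top; a purely hypothetical header comment might look like this:

        // Portions of this file are derived from <original project name>,
        // Copyright (c) <year> <original author>. Used under the New BSD License
        // (see LICENSE.original in this folder for the full license text).
        // Local modifications: adapted the algorithm to this project's needs and
        // added minor helpers; these changes are (c) <year> <your name>.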

    Read the article

  • Cumulative Feature Overviews For PeopleSoft 9.2 Now Available

    - by John Webb
    Cumulative Feature Overviews (aka CFOs) are a great tool to start your fit-gap analysis for PeopleSoft 9.2.  Built as an Excel spreadsheet, a CFO enables you to quickly understand major changes that have occurred across multiple releases for any given product.  For example, if you are on PeopleSoft Accounts Payable 8.9 and are looking for the changes that have occurred between 8.9 and 9.2, the CFO tool provides a list of these changes, with detailed descriptions, for all releases since PeopleSoft 8.9.  Customers and partners can now download the 9.2 version of the CFOs in My Oracle Support at the link below.

    PeopleSoft Cumulative Feature Overview Tool Homepage [ID 1117033.1]

    Read the article


  • 32 bit ODBC drivers on 64 bit Windows

    - by guybarrette
    It’s been a while since I had to use an ODBC driver.  Today I learned…

    That when you install a 32 bit ODBC driver on 64 bit Windows, it doesn’t show up in the Data Sources admin tool, because this tool displays only 64 bit drivers.

    That you can manage a 32 bit ODBC driver on 64 bit Windows using the 32 bit Data Sources admin tool located here: C:\Windows\SysWOW64\odbcad32.exe

    That 64 bit software can’t use 32 bit ODBC drivers.

    That 32 bit software installed on 64 bit Windows can use 32 bit ODBC drivers.
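    As a small illustration of those last two points, a .NET program can check its own bitness before deciding which ODBC driver/DSN it can actually reach; this is only a sketch, and the DSN name below is made up:

        using System;
        using System.Data.Odbc;

        class OdbcBitnessCheck
        {
            static void Main()
            {
                // A 64 bit process only sees 64 bit ODBC drivers/DSNs, and vice versa.
                Console.WriteLine(Environment.Is64BitProcess
                    ? "Running as a 64 bit process: only 64 bit ODBC drivers are visible."
                    : "Running as a 32 bit process (WOW64): 32 bit ODBC drivers are visible.");

                // "MyLegacyDsn" is a made-up name; it must have been created with the matching
                // (32 bit or 64 bit) Data Sources admin tool for this connection to succeed.
                using (var connection = new OdbcConnection("DSN=MyLegacyDsn"))
                {
                    connection.Open();
                    Console.WriteLine("Connected via driver: " + connection.Driver);
                }
            }
        }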

    Read the article

  • Can I find out the number of searches on a given keyword, per state?

    - by Philippe
    I know that Google tells you how many times a certain keyword is used in a search. You can use the Google Keyword Tool for that. This tool also allows you to find out the number of "local" searches: this is the number of times a person from a given country searches for this keyword. My questions: can you also find out how many searches originate from a given American state? In the Keyword Tool, I can only select countries, not states. Are there any other systems I can use to determine where people are searching for a given keyword?

    Read the article

  • How to write good programming logic?

    - by user106616
    I recently got a job as a Java developer, and now I have been assigned a project too. I want to know: what is good logic? When I check in my code, my team lead says it is good code, but when it comes to my project manager, he says it is bad code, and he changes it. After his changes, when I look at his code, it really is very good and even simple. Can you please tell me how to develop good programs and good logic? What is the best way to structure a problem in terms of code?

    Read the article

  • PostgreSQL data diff

    - by skanatek
    Note: this question is not about syncing database schema/structure Problem In my web application I have a PostgreSQL database server (PGS) and a (separate machine) business logic server (BLS) which regularly (every minute or two) queries 'SELECT ALL' against PGS. The problem is that the 'SELECT ALL' query can easily return 50-200 MB each time. It is obvious that it would be not so good architecture-wise to transfer so much data so frequently over the web. Possible solution What I would like to do is to run some diff tool on PGS and compare the new query with the previous query (all this should be done on PGS). Once the comparison is done I would like to get a dump from PGS and transfer it to BLS. I expect that a diff-based dump would be much, much smaller than the whole 'SELECT ALL' query. Question Is there any data diff tool for PostgreSQL that can do diffs that compare PostgreSQL data between 2 tables or 2 dumps? Note: I would prefer some open-source software tool.

    Read the article

  • Unit testing - getting started

    - by higgenkreuz
    I am just getting started with unit testing, but I am not sure I really understand the point of it all. I have read tutorials and books on the subject, but I just have two quick questions:

    1. I thought the purpose of unit testing is to test code we actually wrote. However, it seems to me that in order to just be able to run the test, we have to alter the original code, at which point we are not really testing the code we wrote but rather the code we wrote for testing.

    2. Most of our code relies on external sources. Upon refactoring our code, however, even if it would break the original code, our tests would still run just fine, since the external sources are just mock-ups inside our test cases. Doesn't that defeat the purpose of unit testing?

    Sorry if I sound dumb here, but I thought someone could enlighten me a bit. Thanks in advance.
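    For what it's worth, the "alteration" in question is usually just introducing a seam (an interface) so the external source can be swapped for a fake in tests; here is a minimal hand-rolled sketch with invented names, not tied to any particular mocking framework:

        using System;

        // The production code depends on an abstraction instead of the concrete external source.
        public interface IRateProvider
        {
            decimal GetRate(string currency);            // in production this might call a web service
        }

        public class InvoiceCalculator
        {
            private readonly IRateProvider _rates;
            public InvoiceCalculator(IRateProvider rates) { _rates = rates; }

            // This is the logic we actually wrote and want to test.
            public decimal Total(decimal amount, string currency)
            {
                return amount * _rates.GetRate(currency);
            }
        }

        // In a test, the external source is replaced by a trivial fake.
        public class FixedRateProvider : IRateProvider
        {
            public decimal GetRate(string currency) { return 2m; }
        }

        public static class CalculatorTest
        {
            public static void Main()
            {
                var calc = new InvoiceCalculator(new FixedRateProvider());
                // With a real test framework this would be an Assert.AreEqual(20m, ...).
                Console.WriteLine(calc.Total(10m, "EUR") == 20m ? "PASS" : "FAIL");
            }
        }

    The test still exercises the code you wrote (the calculation); the fake only stands in for the part you didn't write, which gets covered separately by integration tests.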

    Read the article

  • What are the pro/cons of Unity3D as a choice to make games ?

    - by jokoon
    We are doing our school project with Unity3D, since they were using Shiva the previous year (which seems horrible to me), and I wanted to know your point of view on this tool.

    Pros:

        multi-platform; I even heard Google is going to implement it in Chrome
        everything you need is there
        scripting languages make it a good choice for people who are not programming gurus

    Cons:

        multiplayer?
        proprietary; you are totally dependent on Unity and its limits and can't extend it
        it's less "making a game from scratch"; C++ would have been a cool thing

    I really think this kind of tool is interesting, but is it worth using at school for a project that involves more than 3 programmers? What do we really learn in terms of programming from using this kind of tool (I'm OK with Python and JS, but I hate C#)? We could have used Ogre instead, even though we were only starting to learn DirectX in January...

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Good floor planner program?

    - by torbengb
    I'm planning to have a house built. I want to draw up some sketches on the computer, so I am looking for a program that can help me do this. It doesn't have to be a professional architecture tool; in fact that would be too complicated just yet (but maybe later on, for the detailed work...?). A simpler tool would be better. Features should include such things as ability to draw and move walls (not just using simple boxes), calculate room/floor area, add windows and doors, and the like. That's why Inkscape or OOo Presentation won't do. On Windows, a friend would just download a cracked professional architecture tool but that is not what I want to do. Suggestions?

    Read the article

  • Architecture for subscription based application

    - by John
    This is about the architecture of my application, I think. I have a Rails application where companies can administer all things related to their clients. Companies can buy a subscription, and their users can access the application online. Hopefully I will get multiple companies subscribing to my application/service. The thing is, what should I do with my code and database?

        1. Separate app code base and database per company
        2. One app code base but separate database per company
        3. One app code base and one database

    The decision I have to make involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should have good performance) and scalability (again, if successful, it should have good performance, but also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the one code base. For the database I really don't know at this moment. So what do you think is the best option?

    Read the article

  • Efficient inline templates and C++

    - by Darryl Gove
    I've talked before about calling inline templates from C++, and I've also talked about calling inline templates efficiently. This time I want to talk about efficiently calling inline templates from C++. The obvious starting point is that I need to declare the inline templates as being extern "C":

        extern "C"
        {
          int mytemplate(int);
        }

    This enables us to call it, but the call may not be very efficient because the compiler will treat it as a function call, and may produce suboptimal code based on that premise. So we need to add the no_side_effect pragma:

        extern "C"
        {
          int mytemplate(int);
        #pragma no_side_effect(mytemplate)
        }

    However, this may still not produce optimal code. We've discussed how the no_side_effect pragma cannot be combined with exceptions; we know that the code cannot produce exceptions, but the compiler doesn't know that. If we tell the compiler that information it may be able to produce even better code. We can do this by adding the "throw()" keyword to the template declaration:

        extern "C"
        {
          int mytemplate(int) throw();
        #pragma no_side_effect(mytemplate)
        }

    The following is an example of how these changes might improve performance. We can take our previous example code and migrate it to C++, adding the use of a try...catch construct:

        #include <iostream>

        extern "C"
        {
          int lzd(int);
        #pragma no_side_effect(lzd)
        }

        int a;
        int c=0;

        class myclass
        {
          int routine();
        };

        int myclass::routine()
        {
          try
          {
            for(a=0; a<1000; a++)
            {
              c=lzd(c);
            }
          }
          catch(...)
          {
            std::cout << "Something happened" << std::endl;
          }
          return 0;
        }

    Compiling this produces a slightly suboptimal code sequence in the hot loop:

        $ CC -O -xtarget=T4 -S t.cpp t.il
        ...
        /* 0x0014 23 */  lzd %o0,%o0
        /* 0x0018 21 */  add %l6,1,%l6
        /* 0x001c    */  cmp %l6,1000
        /* 0x0020    */  bl,pt %icc,.L77000033
        /* 0x0024 23 */  st %o0,[%l7]

    There's a store in the delay slot of the branch, so we're repeatedly storing data back to memory. If we change the function declaration to include "throw()", we get better code:

        $ CC -O -xtarget=T4 -S t.cpp t.il
        ...
        /* 0x0014 21 */  add %i1,1,%i1
        /* 0x0018 23 */  lzd %o0,%o0
        /* 0x001c 21 */  cmp %i1,999
        /* 0x0020    */  ble,pt %icc,.L77000019
        /* 0x0024    */  nop

    The store has gone, but the code is still suboptimal - there's a nop in the delay slot rather than useful work. However, it's good enough for this example. The point I'm making is that the compiler produces the better code with both the "throw()" and the no_side_effect pragma.

    Read the article
