Search Results

Search found 9299 results on 372 pages for 'policy and procedure manual'.


  • Manually closing a port from the command line

    - by codingfreak
    I want to close an open port that is in listening mode between my client and server application. Is there any manual command-line option in Linux to close a port? NOTE: I came to know that "only the application which owns the connected socket should close it, which will happen when the application terminates." I don't understand why this is only possible from the application that opened it... but I am still eager to know if there is any other way to do it.
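
    For reference, the usual approach is not to close the socket itself but to find and stop the process that owns it — a minimal sketch, assuming root privileges and that 8080 is the port in question:

        ss -ltnp | grep ':8080'     # show which process is listening (or: lsof -i TCP:8080 -s TCP:LISTEN)
        kill <pid>                  # ask that process to shut down cleanly
        fuser -k 8080/tcp           # or kill whatever currently holds the port in one step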

    Read the article

  • SVN hook on commit leaves the working copy locked

    - by dynback.com
    My post-commit hook looks like this:

        pushd C:\Websites\Project
        svn update

    I am updating the server's working copy of the repository. When I commit, the client stops on "sending content" and seems locked, or it is waiting for something — I don't know. So when I cancel and try to update manually on the server, I see: Working copy "." locked. And only after a manual cleanup and another update do I get the revision that was actually committed. What am I doing wrong?
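
    For reference, a more defensive post-commit hook would use absolute paths and discard output so the commit is never left waiting on the hook — a sketch only; the svn.exe path is an assumption:

        @echo off
        rem post-commit.bat - sketch; adjust the svn.exe path for your installation
        pushd C:\Websites\Project
        "C:\Program Files\Subversion\bin\svn.exe" cleanup . >NUL 2>&1
        "C:\Program Files\Subversion\bin\svn.exe" update . --non-interactive >NUL 2>&1
        popd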

    Read the article

  • Easily view a list of changes in upgraded packages

    - by D Connors
    So, let's say I run sudo apt-get upgrade on my Lucid Lynx machine and it upgrades a couple of packages I'm interested in. Is there a command I can run that will open some kind of info page or manual telling me what changes were made in the new version of a package? For instance, if apt-get upgrade installs a new version of empathy, do I have to go over to their site to review the changes made in that version, or is there a quicker command-line way?
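
    For reference, a couple of command-line routes that may cover this (a sketch; empathy is just the example package):

        aptitude changelog empathy                          # fetch and display the package's Debian changelog
        zless /usr/share/doc/empathy/changelog.Debian.gz    # changelog shipped with the installed package
        sudo apt-get install apt-listchanges                # shows changelogs automatically during future upgrades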

    Read the article

  • iptables management utility (character-based): Suggestions?

    - by samsmith
    I need iptables, like everyone else, but I don't use it enough to really keep my knowledge complete and fresh. The setup utility in CentOS is too basic (it doesn't seem to allow me to open custom ports, just standard ones). Suggestions for a tool or utility to simplify iptables management? I looked over APF (http://www.rfxn.com/projects/advanced-policy-firewall/). Is that something that is useful? Thanks!
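
    For reference, if a front end turns out to be overkill, opening a custom port directly is only a couple of commands — a sketch for CentOS, assuming TCP port 8080:

        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT   # allow the custom port
        iptables -L -n --line-numbers                     # review the current rules
        service iptables save                             # persist the rules across reboots (CentOS)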

    Read the article

  • How do I upgrade MongoDB 1.8 to 2.2 on Ubuntu?

    - by Alex Waters
    Following this guide: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/ I ended up with MongoDB 1.8 on my Ubuntu 10.04. I've just read the MongoDB 2.2 release notes on upgrading, and they say to just replace the binary. Would that be /usr/lib/mongodb/mongod? It looks like the 2.2 tar has several files I might need to copy over: mongo, mongod, mongoexport, mongodump, mongofiles, mongoimport, mongorestore, mongos, xulwrapper. Can I just copy all of those over to replace the old versions of those files?
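
    For reference, a package installed from the 10gen repository usually puts its binaries in /usr/bin rather than /usr/lib/mongodb, so one hedged way to proceed is to stop the server, back up, and then either let the package manager pull 2.2 or copy the whole bin/ directory from the tarball — a sketch, not a tested upgrade path, and the tarball name is an assumption:

        sudo service mongodb stop                  # stop the running 1.8 instance (service name may differ)
        mongodump --out ~/mongo-backup             # back up before touching anything
        sudo apt-get update && sudo apt-get install mongodb-10gen                 # upgrade via the 10gen repository...
        sudo cp mongodb-linux-x86_64-2.2.0/bin/* "$(dirname "$(which mongod)")"   # ...or overwrite the old binaries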

    Read the article

  • Console program settings

    - by sahana
    I have a generalized program that is meant to run on both Windows and Linux. When I run the program in the Windows console, it halts at the point where a .sh file is to be executed: a window pops up asking "which program to use", and it requires manual intervention to cancel. My question is: how do I change the console setting so that execution does not halt when a file with an unknown extension is encountered?
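
    For reference, that pop-up is Windows asking for a file association; one way to avoid the halt is to register .sh with an interpreter (or have the program invoke the interpreter explicitly) so the shell never needs to ask — a sketch, with the bash path as an assumption:

        assoc .sh=shscript
        ftype shscript="C:\cygwin\bin\bash.exe" "%1" %*
        rem or call the interpreter explicitly instead of the script itself:
        rem C:\cygwin\bin\bash.exe myscript.sh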

    Read the article

  • Can I automatically map Chrome's bookmarks bar to its jump list?

    - by Alex Nye
    I would like the contents of my bookmarks bar to be present in my Google Chrome jump list, without the manual tedium of managing the bar's organization and contents and those of the jump list separately. If it's possible to manage jump lists automatically in a way that makes this work, I'd be delighted. I don't think I'm quite ready to attempt programming such an extension myself. Edit: it appears this is not possible. I have submitted a feature request to the Chrome team.

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'. I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;) This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.) Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM): cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn" C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt   I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)  
Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that. Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated. Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which files it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons. One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found: hekaton_slow_param_passing Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path The reason why Hekaton parameter passing code took the slow code path hekaton_slow_param_pass_reason sp_deploy_hekaton_database sp_undeploy_hekaton_database sp_drop_hekaton_database sp_checkpoint_hekaton_database sp_restore_hekaton_database e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)? 
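
(For reference, the grep/find/findstr step mentioned above is just a one-liner against the extract files — a sketch, assuming the sqldll.txt and sqlexe.txt files created by the earlier strings commands:)

findstr /i /n "MaxDOPSetToOne" sqldll.txt sqlexe.txt
findstr /i /n "hekaton" sqldll.txt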
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions: SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%' SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%' Which revealed: name ------------------------ (0 row(s) affected) name ------------------------ sp_createstats sp_recompile sp_updatestats (3 row(s) affected)   Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.) OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton": sp_createstats: -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables   OK, that's interesting, let's go looking down a little further: ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table   Wellllll, that tells us a few new things: There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!) They are not standard user tables and probably not in memory UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post. The OBJECTPROPERTY function has an undocumented TableIsInMemory option Let's check out sp_recompile: -- (3) Must not be a Hekaton procedure.   And once again go a little further: if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND ObjectProperty(@objid, 'IsInlineFunction') = 0 AND ObjectProperty(@objid, 'IsView') = 0 AND -- Hekaton procedure cannot be recompiled -- Make them go through schema version bumping branch, which will fail ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)   And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.) This is neat! Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for: SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'   I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice! Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.  
While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring! So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.) Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there: Attach Matrix Database DM_MATRIX_COMM_PIPELINES MATRIXXACTPARTICIPANTS dm_matrix_agents   Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek: ALTER FEDERATION DROP ALTER FEDERATION SPLIT DROP FEDERATION   #tsql2sday

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but.... Actually, having implemented it many times I belong to the later group. Here is how. Caveat Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g. Assumptions The whole process should be done in one day. I assume that all domains and instances are started during setup, if you need to restart them on demand or purpose, be sure to have proper start/stop scripts, I don't mention them. Preparation It goes without saying, that you always should do a proper backup before you change anything on your production environment. With proper backup, I also mean a tested and verified restore process. If you dared to test it before, do it now. It pays off. Requirements For OAM 11g to work properly you need a LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try, in this case OID 11g is a good choice.During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME. Installation vs Configuration With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and the Configuration of the instances (the runtime). This is really great as you can install all the software and create new instances when needed. In the following we adhere to this scheme and install the software first and then configure the instances later. Installation Steps The Oracle documentation contains all the necessary steps for the installation of all pieces of software. But some hints help to avoid traps and pitfalls. Step 1 The Database Start the installation with the database. It is quite obvious but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, just install at least a Oracle 10.2.0.4 version. This database can be on a different host. Step 2 The Repository Creation Utility The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema. Step 3 The Foundation With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as its foundation for all OFM 11g installation. We therefore install it first. Depending on your operating system, it might be possible, that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either but if you have both for your platform you are ready to go. Step 3a The JDK To make things interesting, Oracle currently has two JDKs in its portfolio. The Sun JDK and the JRockit JDK. 
Both are available for a number of platforms. If you are lucky and both are available for your platform, install both in a separate directory (and not one of your ORACLE_HOMEs) each, You can use the later as you like. Step 3b Install WLS for OID and OAM With the JDK installed, we start the generic installer with java -jar wls_generic.jar.STOP! Before you do this, check the version first. It should be 1.6.0_18 or later and not the GCC one (Some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME. Step 4 Install OID Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and just select the installation only step. This will install the software only and does not configure the instance. Use the IAM_HOME as the target directory. Step 5 Install SOA Suite The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries. Configuration will be done later. Step 6 Install OAM Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software, the instance will be created later. Step 7 Backup the Installation At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good when you need to go back to this point. Step 8 Configure OID The software is installed and now we need instances to run it. This process is called configuration. For OID use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter some issues check the Oracle Support site for help. This configuration will also start the OID instance. Step 9 Install the Oracle AS SSO Schema Before we install the Forms software we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go, do not reboot your host during the whole procedure. As a precaution, you should make a backup of the OID instance before you start the procedure. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps and will succeed with a working solution. Step 10 Configure OAM Reached this step? Great. You are ready to create an OAM instance. Use the $IAM_HOME/iam/common/binconfig.sh for this. This will open the WLS Domain Creation Wizard and asks for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance. Step 11 Install WLS for Forms 11g It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages. Therefore we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply. 
We call this one FORMS_HOME. Step 12 Install Forms In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is a install only step. Configuration starts with the next step. Step 13 Configure Forms To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and starts them automatically.Step 14 Setup your Forms SSO EnvironmentOnce you have implemented and tested your Forms 11g instance, you can configured it for SSO. Yes, this requires the old Oracle AS SSO solution, OIDDAS for creating and assigning users and SSO to setup your partner applications. In this step you should consider to create every user necessary for use within the environment. When done, do not forget to test it. Step 15 Migrate the SSO Repository Since the final goal is to get rid of the old SSO implementation we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua or ua.bat or ua.cmd) on the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/.Note: At the time of this writing, this step only works if everything is on the same host (ie. OID, OAM, etc.). This restriction might be lifted in later releases. Step 16 Change your OHS sso.conf and shut down OC4J_SECURITY In Step 14 we verified that SSO for our Forms environment works fine. Now, we are shutting the old system done and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf  to osso.conf.10g. Now we change the moduleconf/mod_osso.conf  to point to the new osso.conf file. Copy the new osso.conf  file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS, test forms by using the same forms links. OAM should now kick in and show the login dialog to ask for your user credentials.Done. Now your Forms environment is successfully integrated with OAM 11g.Enjoy. What's Next? This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on. References Nearly everything is documented. Use the documentation! Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1 Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14 Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10   

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags this is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure. UPDATE [DataSource]    SET       [Flags] = [Flags] & 0x7FFFFFFD, -- broken link       [Link] = NULL FROM    [Catalog] AS C    INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link] WHERE    (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*') Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. 
Sorry, rant over. I should log a Connect item – I’ll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10 get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of broken reports are – in case you get a surprise from a different data source being broken in the past. SELECT c.Path, ds.Name FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything). UPDATE ds SET [Flags] = [Flags] | 2, [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/ OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley

    Read the article

  • OPEN CURSOR and BULK COLLECT

    - by Liu Maclean
    ????T.askmaclean.com?????bulk collect?open cursor???, ?????????  ??????: ???? OPEN_CURSOR ????SQL?? ???????. ?????? ????? ???????????????? ????? test_soruce create table zengfankun_temp01 as select * from dba_objects;select count(*) from zengfankun_temp01;–12,6826analyze table zengfankun_temp01 compute statistics; create or replace procedure test_open_cursor istype type_owner is table of zengfankun_temp01.owner%type index by binary_integer;type type_object_name is table of zengfankun_temp01.object_name%type index by binary_integer;type type_object_id is table of zengfankun_temp01.object_id%type index by binary_integer;type type_object_type is table of zengfankun_temp01.object_type%type index by binary_integer;type type_last_ddl_time is table of zengfankun_temp01.last_ddl_time%type index by binary_integer; l_ary_owner type_owner;l_ary_object_name type_object_name;l_ary_object_id type_object_id;l_ary_object_type type_object_type;l_ary_last_ddl_time type_last_ddl_time; cursor cur_object isselect owner,object_name,object_id,object_type,last_ddl_timefrom zengfankun_temp01order by owner,object_name,object_type,last_ddl_time;OPEN_START number;OPEN_END number;FETCH_START number;FETCH_END number;beginDBMS_OUTPUT.ENABLE (buffer_size=>null) ;OPEN_START:=dbms_utility.get_time();open cur_object;OPEN_END :=dbms_utility.get_time();dbms_output.put_line(‘OPEN_TIME:’||TO_CHAR(OPEN_END-OPEN_START));loopFETCH_START:=dbms_utility.get_time();fetch cur_object bulk collect intol_ary_owner,l_ary_object_name,l_ary_object_id,l_ary_object_type,l_ary_last_ddl_timelimit 10000;FETCH_END:=dbms_utility.get_time();dbms_output.put_line(‘FETCH_TIME:’||TO_CHAR(FETCH_END-FETCH_START)||’ ROWCOUNT:’||cur_object%rowCount); exit when cur_object%notfound or cur_object%notfound is null;end loop;end test_open_cursor; OPEN_TIME:12FETCH_TIME:21 ROWCOUNT:10000FETCH_TIME:3 ROWCOUNT:20000FETCH_TIME:3 ROWCOUNT:30000FETCH_TIME:3 ROWCOUNT:40000FETCH_TIME:3 ROWCOUNT:50000FETCH_TIME:3 ROWCOUNT:60000FETCH_TIME:3 ROWCOUNT:70000FETCH_TIME:3 ROWCOUNT:80000FETCH_TIME:3 ROWCOUNT:90000FETCH_TIME:3 ROWCOUNT:100000FETCH_TIME:3 ROWCOUNT:110000FETCH_TIME:3 ROWCOUNT:120000FETCH_TIME:1 ROWCOUNT:126826 ???? OPEN_TIME:0FETCH_TIME:18 ROWCOUNT:10000FETCH_TIME:3 ROWCOUNT:20000FETCH_TIME:3 ROWCOUNT:30000FETCH_TIME:3 ROWCOUNT:40000FETCH_TIME:3 ROWCOUNT:50000FETCH_TIME:3 ROWCOUNT:60000FETCH_TIME:3 ROWCOUNT:70000FETCH_TIME:3 ROWCOUNT:80000FETCH_TIME:3 ROWCOUNT:90000FETCH_TIME:3 ROWCOUNT:100000FETCH_TIME:3 ROWCOUNT:110000FETCH_TIME:3 ROWCOUNT:120000FETCH_TIME:2 ROWCOUNT:126826 SQL?????????, ????????????.??OPEN CURSOR ????0???????????3??.??N? ??????. ???? ?N? ?????????? ??????. ??????????????? ??????????. ?????????10000??? ???????????????????clear???, ???????????: ?OPEN CURSOR ?????, PL/SQL????SQL????PARSE SQL????????, ??????OPEN CURSOR????SNAPSHOT SCN ??SCN, ??Oracle?????FETCH?????,???????????????? ????FETCH ??????????????,???????Current Block, The most recent version of block , ?????SCN >> Snapshot scn, ????UNDO???? ???SCN ???Best Block ,???Read Consistentcy;???? ???UNDO SNAPSHOT???????????????Best Block??,???????ORA-1555??? ????????, ??????????,???????????????char(2000)????, ???????????????,????bulk collect fetch??fetch 10 ???,????????OPEN CURSOR?????PARSE??SQL????????, ??????????fetch bulk collect??????????10????,??”_trace_pin_time”????Server Process?pin CR block???,??????????Fetch Bulk Collect limit 10??10?buffer?pin? [oracle@nas ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Wed Aug 1 11:36:52 2012 Copyright (c) 1982, 2011, Oracle. 
All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- http://www.askmaclean.com SQL> create table maclean (t1 char(2000)) tablespace users pctfree 99; Table created. SQL> begin 2 for i in 1..200 loop 3 insert into maclean values('MACLEAN'); 4 commit ; 5 end loop; 6 end; 7 / PL/SQL procedure successfully completed. SQL> exec dbms_stats.gather_table_stats('','MACLEAN'); PL/SQL procedure successfully completed. SQL> select count(*) from maclean; COUNT(*) ---------- 200 SQL> select blocks,num_rows from dba_tables where table_name='MACLEAN'; BLOCKS NUM_ROWS ---------- ---------- 244 200 SQL> alter system set "_trace_pin_time"=1 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 3140026368 bytes Fixed Size 2232472 bytes Variable Size 1795166056 bytes Database Buffers 1325400064 bytes Redo Buffers 17227776 bytes Database mounted. Database opened. SQL> alter session set events '10046 trace name context forever,level 12'; Session altered. SQL> SQL> SQL> declare 2 cursor v_cursor is 3 select * from sys.maclean; 4 type v_type is table of sys.maclean%rowtype index by binary_integer; 5 rec_tab v_type; 6 begin 7 open v_cursor; 8 dbms_lock.sleep(30); 9 loop 10 fetch v_cursor bulk collect 11 into rec_tab limit 10; 12 dbms_lock.sleep(10); 13 exit when v_cursor%notfound; 14 end loop; 15 end; 16 / ?????10046 trace+ pin trace: PARSING IN CURSOR #47499559136872 len=337 dep=0 uid=0 oct=47 lid=0 tim=1343836146412056 hv=496860239 ad='11a11dbb0' sqlid='4zh7954ftuz2g' declare cursor v_cursor is select * from sys.maclean; type v_type is table of sys.maclean%rowtype index by binary_integer; rec_tab v_type; begin open v_cursor; dbms_lock.sleep(30); loop fetch v_cursor bulk collect into rec_tab limit 10; dbms_lock.sleep(10); exit when v_cursor%notfound; end loop; end; END OF STMT PARSE #47499559136872:c=0,e=346,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1343836146412051 ===================== PARSING IN CURSOR #47499559126280 len=25 dep=1 uid=0 oct=3 lid=0 tim=1343836146414939 hv=3296884535 ad='11a11d250' sqlid='2mb1493284xtr' SELECT * FROM SYS.MACLEAN END OF STMT PARSE #47499559126280:c=1999,e=2427,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=2568761675,tim=1343836146414937 EXEC #47499559126280:c=0,e=55,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2568761675,tim=1343836146415104 ????? ? SELECT * FROM SYS.MACLEAN? PARSE ????? , ????FETCH???????pin ????????, ????OPEN CURSOR????? *** 2012-08-01 11:49:36.424 WAIT #47499559136872: nam='PL/SQL lock timer' ela= 30009361 duration=0 p2=0 p3=0 obj#=-1 tim=1343836176424782 ???30s pin ktewh26: kteinpscan dba 0x10a6202:4 time 1039048805 pin ktewh27: kteinmap dba 0x10a6202:4 time 1039048847 pin kdswh11: kdst_fetch dba 0x10a6203:1 time 1039048898 pin kdswh11: kdst_fetch dba 0x10a6204:1 time 1039048961 pin kdswh11: kdst_fetch dba 0x10a6205:1 time 1039049004 pin kdswh11: kdst_fetch dba 0x10a6206:1 time 1039049042 pin kdswh11: kdst_fetch dba 0x10a6207:1 time 1039049089 pin kdswh11: kdst_fetch dba 0x10a6208:1 time 1039049123 pin kdswh11: kdst_fetch dba 0x10a6209:1 time 1039049159 pin kdswh11: kdst_fetch dba 0x10a620a:1 time 1039049191 pin kdswh11: kdst_fetch dba 0x10a620b:1 time 1039049225 pin kdswh11: kdst_fetch dba 0x10a620c:1 time 1039049260 kdst_fetch???fetch??????? , ??fetch?10?? 
???????FETCH FETCH #47499559126280:c=0,e=536,p=0,cr=12,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836176425542 *** 2012-08-01 11:49:46.428 WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002694 duration=0 p2=0 p3=0 obj#=-1 tim=134383618642829 ????10s pin kdswh11: kdst_fetch dba 0x10a620d:1 time 1049052211 pin kdswh11: kdst_fetch dba 0x10a620e:1 time 1049052264 pin kdswh11: kdst_fetch dba 0x10a620f:1 time 1049052299 pin kdswh11: kdst_fetch dba 0x10a6211:1 time 1049052332 pin kdswh11: kdst_fetch dba 0x10a6212:1 time 1049052364 pin kdswh11: kdst_fetch dba 0x10a6213:1 time 1049052398 pin kdswh11: kdst_fetch dba 0x10a6214:1 time 1049052430 pin kdswh11: kdst_fetch dba 0x10a6215:1 time 1049052462 pin kdswh11: kdst_fetch dba 0x10a6216:1 time 1049052494 pin kdswh11: kdst_fetch dba 0x10a6217:1 time 1049052525 FETCH #47499559126280:c=0,e=371,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836186428807 ??pin 10????, ???fetch ?? WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002864 duration=0 p2=0 p3=0 obj#=-1 tim=1343836196431754 pin kdswh11: kdst_fetch dba 0x10a6218:1 time 1059055662 pin kdswh11: kdst_fetch dba 0x10a6219:1 time 1059055714 pin kdswh11: kdst_fetch dba 0x10a621a:1 time 1059055748 pin kdswh11: kdst_fetch dba 0x10a621b:1 time 1059055781 pin kdswh11: kdst_fetch dba 0x10a621c:1 time 1059055815 pin kdswh11: kdst_fetch dba 0x10a621d:1 time 1059055848 pin kdswh11: kdst_fetch dba 0x10a621e:1 time 1059055883 pin kdswh11: kdst_fetch dba 0x10a621f:1 time 1059055915 pin kdswh11: kdst_fetch dba 0x10a6221:1 time 1059055953 pin kdswh11: kdst_fetch dba 0x10a6222:1 time 1059055992 FETCH #47499559126280:c=0,e=385,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836196432274 ???? ??????? DBA????? ............................ ???? WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002933 duration=0 p2=0 p3=0 obj#=-1 tim=1343836366495589 pin kdswh11: kdst_fetch dba 0x10a62f6:1 time 1229119497 pin kdswh11: kdst_fetch dba 0x10a62f7:1 time 1229119545 pin kdswh11: kdst_fetch dba 0x10a62f8:1 time 1229119576 pin kdswh11: kdst_fetch dba 0x10a62f9:1 time 1229119610 pin kdswh11: kdst_fetch dba 0x10a62fa:1 time 1229119644 pin kdswh11: kdst_fetch dba 0x10a62fb:1 time 1229119671 pin kdswh11: kdst_fetch dba 0x10a62fc:1 time 1229119703 pin kdswh11: kdst_fetch dba 0x10a62fd:1 time 1229119730 pin kdswh11: kdst_fetch dba 0x10a62fe:1 time 1229119760 pin kdswh11: kdst_fetch dba 0x10a62ff:1 time 1229119787 FETCH #47499559126280:c=0,e=340,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836366496067 ??????DBA? 0x10a6203 , ??DBA ? 0x10a62ff ???????DBA??MACLEAN????????,???DBA???Maclean????? getbfno?????dba??????????? CREATE OR REPLACE FUNCTION getbfno (p_dba IN VARCHAR2) RETURN VARCHAR2 IS l_str VARCHAR2 (255) DEFAULT NULL; l_fno VARCHAR2 (15); l_bno VARCHAR2 (15); BEGIN l_fno := DBMS_UTILITY.data_block_address_file (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx' ) ); l_bno := DBMS_UTILITY.data_block_address_block (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx' ) ); l_str := 'datafile# is:' || l_fno || CHR (10) || 'datablock is:' || l_bno || CHR (10) || 'dump command:alter system dump datafile ' || l_fno || ' block ' || l_bno || ';'; RETURN l_str; END; / Function created. 
SQL> select getbfno('0x10a6203') from dual; GETBFNO('0X10A6203') -------------------------------------------------------------------------------- datafile# is:4 datablock is:680451 dump command:alter system dump datafile 4 block 680451; SQL> select getbfno('0x10a62ff') from dual; GETBFNO('0X10A62FF') -------------------------------------------------------------------------------- datafile# is:4 datablock is:680703 dump command:alter system dump datafile 4 block 680703; SQL> select dbms_rowid.rowid_block_number(min(rowid)),dbms_rowid.rowid_relative_fno(min(rowid)) from maclean; DBMS_ROWID.ROWID_BLOCK_NUMBER(MIN(ROWID)) ----------------------------------------- DBMS_ROWID.ROWID_RELATIVE_FNO(MIN(ROWID)) ----------------------------------------- 680451 4 SQL> select dbms_rowid.rowid_block_number(max(rowid)),dbms_rowid.rowid_relative_fno(max(rowid)) from maclean; DBMS_ROWID.ROWID_BLOCK_NUMBER(MAX(ROWID)) ----------------------------------------- DBMS_ROWID.ROWID_RELATIVE_FNO(MAX(ROWID)) ----------------------------------------- 680703 4 ???????3???: 1.?OPEN CURSOR ?????, PL/SQL????SQL????PARSE SQL????????, ??????OPEN CURSOR????SNAPSHOT SCN ??SCN, ??Oracle?????FETCH?????,???????????????? 2.????FETCH ?????????????? 3. ???open cursor+ fetch bulk collect???”?????????”

    Read the article

  • A class meant for an Alfresco behavior and its bean: how do they work, and how are they deployed through Eclipse?

    - by MrHappy
    (This is a partial repost of a question asked 10 days ago because only 1 part was answered(not included), I've rewritten it into a way better question and added 3 more tags) where do I put the DeleteAsset.class or why isn't it being found? I've put the compiled class from the bin of the workspace of eclipse into alfresco-4.2.c/tomcat/webapps/alfresco/WEB-INF/classes/com/openerp/behavior/ and right now it's giving me Error loading class [com.openerp.behavior.DeleteAsset] for bean with name 'deletionBehavior' defined in URL [file:/home/openerp/alfresco-4.2.c/tomcat/shared/classes/alfresco/extension/cust??om-web-context.xml]: problem with class file or dependent class; nested exception is java.lang.NoClassDefFoundError: com/openerp/behavior/DeleteAsset (wrong name: DeleteAsset) when I put it in there. (See bean below!) The code(I'd trying to work without the model class, idk if I made any silly mistakes on that): package com.openerp.behavior; import java.util.List; import java.net.*; import java.io.*; import org.alfresco.repo.node.NodeServicePolicies; import org.alfresco.repo.policy.Behaviour; import org.alfresco.repo.policy.JavaBehaviour; import org.alfresco.repo.policy.PolicyComponent; import org.alfresco.repo.policy.Behaviour.NotificationFrequency; import org.alfresco.repo.security.authentication.AuthenticationUtil; import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork; import org.alfresco.service.cmr.repository.ChildAssociationRef; import org.alfresco.service.cmr.repository.NodeRef; import org.alfresco.service.cmr.repository.NodeService; import org.alfresco.service.namespace.NamespaceService; import org.alfresco.service.namespace.QName; import org.alfresco.service.transaction.TransactionService; import org.apache.log4j.Logger; //this is the newer version //import com.openerp.model.openerpJavaModel; public class DeleteAsset implements NodeServicePolicies.BeforeDeleteNodePolicy { private PolicyComponent policyComponent; private Behaviour beforeDeleteNode; private NodeService nodeService; public void init() { this.beforeDeleteNode = new JavaBehaviour(this,"beforeDeleteNode",NotificationFrequency.EVERY_EVENT); this.policyComponent.bindClassBehaviour(QName.createQName("http://www.someco.com/model/content/1.0","beforeDeleteNode"), QName.createQName("http://www.someco.com/model/content/1.0","sc:doc"), this.beforeDeleteNode); } public setNodeService(NodeService nodeService){ this.nodeService = nodeService; } @Override public void beforeDeleteNode(NodeRef node) { System.out.println("beforeDeleteNode!"); try { QName attachmentID1= QName.createQName("http://www.someco.com/model/content/1.0", "OpenERPattachmentID1"); // this could/shoul be defined in your OpenERPModel-class int attachmentid = (Integer)nodeService.getProperty(node, attachmentID1); //int attachmentid = 123; URL oracle = new URL("http://0.0.0.0:1885/delete/%20?attachmentid=" + attachmentid); URLConnection yc = oracle.openConnection(); BufferedReader in = new BufferedReader(new InputStreamReader( yc.getInputStream())); String inputLine; while ((inputLine = in.readLine()) != null) //System.out.println(inputLine); in.close(); } catch(Exception e) { e.printStackTrace(); } } } This is my full custom-web-context file: <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN' 'http://www.springframework.org/dtd/spring-beans.dtd'> <beans> <!-- Registration of new models --> <bean id="smartsolution.dictionaryBootstrap" parent="dictionaryModelBootstrap" depends-on="dictionaryBootstrap"> 
<property name="models"> <list> <value>alfresco/extension/scOpenERPModel.xml</value> </list> </property> </bean> <!-- deletion of attachments within openERP when delete is initiated in Alfresco--> <bean id="DeleteAsset" class="com.openerp.behavior.DeleteAsset" init-method="init"> <property name="NodeService"> <ref bean="NodeService" /> </property> <property name="PolicyComponent"> <ref bean="PolicyComponent" /> </property> </bean> and content type: <type name="sc:doc"> <title>OpenERP Document</title> <parent>cm:content</parent> There's also this when I open share An error has occured in the Share component: /share/service/components/dashlets/my-sites. It responded with a status of 500 - Internal Error. Error Code Information: 500 - An error inside the HTTP server which prevented it from fulfilling the request. Error Message: 09230001 Failed to execute script 'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js': 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 Server: Alfresco Spring WebScripts - v1.2.0 (Release 1207) schema 1,000 Time: Oct 23, 2013 11:40:06 AM Click here to view full technical information on the error. Exception: org.alfresco.error.AlfrescoRuntimeException - 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:108) org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:59) org.alfresco.web.scripts.ImapServerStatus.getEnabled(ImapServerStatus.java:49) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:606) org.mozilla.javascript.MemberBox.invoke(MemberBox.java:155) org.mozilla.javascript.JavaMembers.get(JavaMembers.java:117) org.mozilla.javascript.NativeJavaObject.get(NativeJavaObject.java:113) org.mozilla.javascript.ScriptableObject.getProperty(ScriptableObject.java:1544) org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1375) org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1364) org.mozilla.javascript.gen.c6._c1(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:4) org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.mozilla.javascript.optimizer.OptRuntime.callName0(OptRuntime.java:108) org.mozilla.javascript.gen.c6._c0(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:51) org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:393) org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:2834) 
org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.mozilla.javascript.gen.c6.exec(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:318) org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:192) org.springframework.extensions.webscripts.AbstractWebScript.executeScript(AbstractWebScript.java:1305) org.springframework.extensions.webscripts.DeclarativeWebScript.execute(DeclarativeWebScript.java:86) org.springframework.extensions.webscripts.PresentationContainer.executeScript(PresentationContainer.java:70) org.springframework.extensions.webscripts.LocalWebScriptRuntimeContainer.executeScript(LocalWebScriptRuntimeContainer.java:240) org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:377) org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:209) org.springframework.extensions.webscripts.WebScriptProcessor.executeBody(WebScriptProcessor.java:310) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.process(RenderService.java:599) org.springframework.extensions.surf.render.RenderService.renderSubComponent(RenderService.java:505) org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1284) org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IfBlock.accept(IfBlock.java:82) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.process(Environment.java:199) org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171) org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204) org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86) org.springframework.extensions.surf.render.RenderService.processComponent(RenderService.java:432) org.springframework.extensions.surf.render.bean.ComponentRenderer.body(ComponentRenderer.java:94) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.RenderService.renderComponent(RenderService.java:961) 
org.springframework.extensions.surf.render.RenderService.renderRegionComponents(RenderService.java:900) org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1263) org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.process(Environment.java:199) org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171) org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204) org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86) org.springframework.extensions.surf.render.bean.RegionRenderer.body(RegionRenderer.java:99) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.RenderService.renderRegion(RenderService.java:851) org.springframework.extensions.directives.RegionDirectiveData.render(RegionDirectiveData.java:91) org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179) freemarker.core.Environment.visit(Environment.java:428) freemarker.core.IteratorBlock.accept(IteratorBlock.java:102) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179) freemarker.core.Environment.visit(Environment.java:428) freemarker.core.IteratorBlock.accept(IteratorBlock.java:102) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Macro$Context.runMacro(Macro.java:172) freemarker.core.Environment.visit(Environment.java:614) freemarker.core.UnifiedCall.accept(UnifiedCall.java:106) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IfBlock.accept(IfBlock.java:82) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Macro$Context.runMacro(Macro.java:172) freemarker.core.Environment.visit(Environment.java:614) freemarker.core.UnifiedCall.accept(UnifiedCall.java:106) 
freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment$3.render(Environment.java:246) org.springframework.extensions.surf.extensibility.impl.DefaultExtensibilityDirectiveData.render(DefaultExtensibilityDirectiveData.java:119) org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.visit(Environment.java:406) freemarker.core.BodyInstruction.accept(BodyInstruction.java:93) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Macro$Context.runMacro(Macro.java:172) freemarker.core.Environment.visit(Environment.java:614) freemarker.core.UnifiedCall.accept(UnifiedCall.java:106) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.process(Environment.java:199) org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171) org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.processTemplate(RenderService.java:721) org.springframework.extensions.surf.render.bean.TemplateInstanceRenderer.body(TemplateInstanceRenderer.java:140) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.bean.PageRenderer.body(PageRenderer.java:85) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.RenderService.renderPage(RenderService.java:762) org.springframework.extensions.surf.mvc.PageView.dispatchPage(PageView.java:411) org.springframework.extensions.surf.mvc.PageView.renderView(PageView.java:306) org.springframework.extensions.surf.mvc.AbstractWebFrameworkView.renderMergedOutputModel(AbstractWebFrameworkView.java:316) org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250) org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1047) org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:817) org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719) org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644) org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549) javax.servlet.http.HttpServlet.service(HttpServlet.java:621) javax.servlet.http.HttpServlet.service(HttpServlet.java:722) 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.alfresco.web.site.servlet.MTAuthenticationFilter.doFilter(MTAuthenticationFilter.java:74) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.alfresco.web.site.servlet.SSOAuthenticationFilter.doFilter(SSOAuthenticationFilter.java:374) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929) org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002) org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585) org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:1771) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:724) Exception: org.springframework.extensions.webscripts.WebScriptException - 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:324) Exception: org.springframework.extensions.webscripts.WebScriptException - 09230001 Failed to execute script 'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js': 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:200) UPDATE: I think I've found the problem. Being a newbie to eclipse I haven't managed the dependecies well I think. Could anyone link me to a tutorial describing how to get org.alfresco.repo.node.NodeServicePolicies; as seen in import org.alfresco.repo.node.NodeServicePolicies; and other such imports into eclipse, I've got the alfresco source from svn but the tutorial I've found seems to fail me. 
The compiled DeleteAsset.class only contains a java.lang.Error listing unresolved compilation problems; the readable messages recovered from the binary dump are:
The declared package "com.openerp.behavior" does not match the expected package "java.com.openerp.behavior"
The import org.alfresco cannot be resolved (repeated for each org.alfresco import)
The import org.apache cannot be resolved
The import com.openerp cannot be resolved
NodeServicePolicies cannot be resolved to a type
PolicyComponent cannot be resolved to a type
Behaviour cannot be resolved to a type
JavaBehaviour cannot be resolved to a type
NotificationFrequency cannot be resolved to a variable
NodeService cannot be resolved to a type
NodeRef cannot be resolved to a type
QName cannot be resolved / cannot be resolved to a type
Return type for the method is missing
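For context, these unresolved imports just mean the Alfresco repository classes are not on the Eclipse build path; the usual fix is to add the jars from tomcat/webapps/alfresco/WEB-INF/lib (or the Alfresco SDK artifacts) to the project's build path. The behaviour class itself is conventional. Below is a minimal sketch of what a delete behaviour wired to the bean definition above typically looks like against the Alfresco 4.x Java API; the sc: namespace URI and the choice of the beforeDeleteNode policy are assumptions, not taken from the question.

    package com.openerp.behavior;

    import org.alfresco.repo.node.NodeServicePolicies;
    import org.alfresco.repo.policy.Behaviour;
    import org.alfresco.repo.policy.Behaviour.NotificationFrequency;
    import org.alfresco.repo.policy.JavaBehaviour;
    import org.alfresco.repo.policy.PolicyComponent;
    import org.alfresco.service.cmr.repository.NodeRef;
    import org.alfresco.service.cmr.repository.NodeService;
    import org.alfresco.service.namespace.QName;

    public class DeleteAsset implements NodeServicePolicies.BeforeDeleteNodePolicy {

        // Namespace of the sc:doc type; assumed here, use the URI declared in scOpenERPModel.xml
        private static final String SC_URI = "http://www.someco.com/model/content/1.0";

        private NodeService nodeService;
        private PolicyComponent policyComponent;

        // Invoked by Spring through init-method="init" in the bean definition above
        public void init() {
            Behaviour beforeDelete = new JavaBehaviour(this, "beforeDeleteNode",
                    NotificationFrequency.EVERY_EVENT);
            policyComponent.bindClassBehaviour(
                    NodeServicePolicies.BeforeDeleteNodePolicy.QNAME,
                    QName.createQName(SC_URI, "doc"),
                    beforeDelete);
        }

        @Override
        public void beforeDeleteNode(NodeRef nodeRef) {
            // Call out to OpenERP here to remove the matching attachment (details omitted)
        }

        public void setNodeService(NodeService nodeService) {
            this.nodeService = nodeService;
        }

        public void setPolicyComponent(PolicyComponent policyComponent) {
            this.policyComponent = policyComponent;
        }
    }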

    Read the article

  • Using gnome-boxes with Remote Desktop Connection

    - by lviggiani
    In the past I used to connect to my Windows server from Ubuntu with Remmina (over the RDP protocol). I've recently installed Ubuntu GNOME Remix (12.10.1) and gnome-boxes, and I'm wondering if I can do the same with gnome-boxes. I've tried creating a new item in gnome-boxes and entering "rdp://x.x.x.x" as the URL, as I used to do in Remmina, but it says that the RDP protocol is not supported. Unfortunately I cannot find any manual or documentation listing the supported protocols or giving instructions on how to connect to a remote desktop. Can someone help me please?

    Read the article

  • Add Intellisense when using the URL rewritingnet config file

    - by Vizioz Limited
    I often use the URL re-writing engine that comes with Umbraco, which is from urlrewriting.net, and I have always found it very fiddly to edit the configuration file. I wish I had known it was possible to add Intellisense to Visual Studio, and I guessed that most people would also not realise this; after all, who reads the manual, right?! So, if you are someone who edits the urlrewriting.config without Intellisense but would like to use it, this is how you do it :)
    1) Download the URL rewriting source code files from urlrewriting.net
    2) Unzip the source files, find the urlwritingnet.xsd file, and put it into your web project, or the directory where your urlrewriting.config lives.
    3) Open up the web project and then open your config file, and hey presto! You should find you now have Intellisense!
    So, the next question is: are there XSD files for the rest of the Umbraco config files, and more importantly for the Umbraco.xml file? If not, does anyone fancy creating them? I am sure Intellisense for all these files would be very helpful :)
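    For the Intellisense to kick in, the namespace declared on the config's root element has to match the targetNamespace of the XSD you dropped into the project. A minimal sketch is below; the namespace URI and attribute names follow the urlrewriting.net samples, so double-check them against the version you downloaded.

        <?xml version="1.0" encoding="utf-8"?>
        <!-- urlrewriting.config: the root namespace must match the XSD's targetNamespace -->
        <urlrewritingnet xmlns="http://www.urlrewriting.net/schemas/config/2006/07">
          <rewrites>
            <add name="productPage"
                 virtualUrl="^~/products/(.*)\.aspx"
                 destinationUrl="~/product.aspx?id=$1"
                 ignoreCase="true" />
          </rewrites>
        </urlrewritingnet>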

    Read the article

  • Security in OBIEE 11g, Part 2

    - by Rob Reynolds
    Continuing the series on OBIEE 11g, our guest blogger this week is Pravin Janardanam. Here is Part 2 of his overview of Security in OBIEE 11g. OBIEE 11g Security Overview, Part 2 by Pravin Janardanam In my previous blog on Security, I discussed the OBIEE 11g changes regarding Authentication mechanism, RPD protection and encryption. This blog will include a discussion about OBIEE 11g Authorization and other Security aspects. Authorization: Authorization in 10g was achieved using a combination of Users, Groups and association of privileges and object permissions to users and Groups. Two keys changes to Authorization in OBIEE 11g are: Application Roles Policies / Permission Groups Application Roles are introduced in OBIEE 11g. An application role is specific to the application. They can be mapped to other application roles defined in the same application scope and also to enterprise users or groups, and they are used in authorization decisions. Application roles in 11g take the place of Groups in 10g within OBIEE application. In OBIEE 10g, any changes to corporate LDAP groups require a corresponding change to Groups and their permission assignment. In OBIEE 11g, Application roles provide insulation between permission definitions and corporate LDAP Groups. Permissions are defined at Application Role level and changes to LDAP groups just require a reassignment of the Group to the Application Roles. Permissions and privileges are assigned to Application Roles and users in OBIEE 11g compared to Groups and Users in 10g. The diagram below shows the relationship between users, groups and application roles. Note that the Groups shown in the diagram refer to LDAP Groups (WebLogic Groups by default) and not OBIEE application Groups. The following screenshot compares the permission windows from Admin tool in 10g vs 11g. Note that the Groups in the OBIEE 10g are replaced with Application Roles in OBIEE 11g. The same is applicable to OBIEE web catalog objects.    The default Application Roles available after OBIEE 11g installation are BIAdministrator, BISystem, BIConsumer and BIAuthor. Application policies are the authorization policies that an application relies upon for controlling access to its resources. An Application Role is defined by the Application Policy. The following screenshot shows the policies defined for BIAdministrator and BISystem Roles. Note that the permission for impersonation is granted to BISystem Role. In OBIEE 10g, the permission to manage repositories and Impersonation were assigned to “Administrators” group with no control to separate these permissions in the Administrators group. Hence user “Administrator” also had the permission to impersonate. In OBI11g, BIAdministrator does not have the permission to impersonate. This gives more flexibility to have multiple users perform different administrative functions. Application Roles, Policies, association of Policies to application roles and association of users and groups to application roles are managed using Fusion Middleware Enterprise Manager (FMW EM). They reside in the policy store, identified by the system-jazn-data.xml file. The screenshots below show where they are created and managed in FMW EM. The following screenshot shows the assignment of WebLogic Groups to Application Roles. The following screenshot shows the assignment of Permissions to Application Roles (Application Policies). Note: Object level permission association to Applications Roles resides in the RPD for repository objects. 
Permissions and privileges for web catalog objects reside in the OBIEE Web Catalog. Wherever Groups were used in the web catalog and the RPD, they have been replaced with Application Roles in OBIEE 11g. The following are the tools used in OBIEE 11g security administration:
· Users and Groups are managed in the Oracle WebLogic Administration Console (by default). If WebLogic is integrated with other LDAP products, then Users and Groups need to be managed using the interface provided by the respective LDAP vendor – New in OBIEE 11g
· Application Roles and Application Policies are managed in Oracle Enterprise Manager - Fusion Middleware Control – New in OBIEE 11g
· Repository object permissions are managed in the OBIEE Administration tool – Same as 10g, but the assignment is to Application Roles instead of Groups
· Presentation Services Catalog permissions and privileges are managed in the OBI Application administration page – Same as 10g, but the assignment is to Application Roles instead of Groups
Credential Store: The Credential Store is a single consolidated service provider to store and manage application credentials securely. The credential store contains credentials that are either user supplied or system generated. The credential store in OBIEE 10g is file based and is managed using the cryptotools utility. In 11g, the credential store can be managed directly from FMW Enterprise Manager and is stored in the cwallet.sso file. By default, the Credential Store stores passwords for deployed RPDs, BI Publisher data sources and the BISystem user. In addition, the credential store can be LDAP based, but only Oracle Internet Directory is supported right now. As you can see, OBIEE security is integrated with the Oracle Fusion Middleware security architecture. This provides a common security framework for all components of Business Intelligence and Fusion Middleware applications.
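To make the policy store a little more concrete, an application role and its mapped WebLogic group end up as an entry along these lines in system-jazn-data.xml. This is a hand-written sketch for illustration only: the BIAuthors group name is invented, and in practice you maintain these entries through Fusion Middleware Control rather than by editing the file.

    <app-role>
      <name>BIAuthor</name>
      <display-name>BI Author Role</display-name>
      <class>oracle.security.jps.service.policystore.ApplicationRole</class>
      <members>
        <member>
          <class>weblogic.security.principal.WLSGroupImpl</class>
          <name>BIAuthors</name>
        </member>
      </members>
    </app-role>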

    Read the article

  • Batch Best Practices and Technical Best Practices Updated

    - by ACShorten
    The Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) and Technical Best Practices for Oracle Utilities Application Framework Based Products (Doc Id: 560367.1) have been updated with new and revised advice for the various versions of the Oracle Utilities Application Framework based products. These documents cover the following products:
    Oracle Utilities Customer Care And Billing (V2 and above)
    Oracle Utilities Meter Data Management (V2 and above)
    Oracle Utilities Mobile Workforce Management (V2 and above)
    Oracle Utilities Smart Grid Gateway (V2 and above) – All editions
    Oracle Enterprise Taxation Management (all versions)
    Oracle Enterprise Taxation and Policy Management (all versions)
    Whilst there is new advice, some of which has been posted on this blog, many sections have also been revised based upon feedback from customers, partners, consultants, our development teams and our hard-working Support personnel. All whitepapers are available from My Oracle Support.

    Read the article

  • Identifying Data Model Changes Between EBS 12.1.3 and Prior EBS Releases

    - by Steven Chan
    The EBS 12.1.3 Release Content Document (RCD, Note 561580.1) summarizes the latest functional and technology stack-related updates in a specific release. The E-Business Suite Electronic Technical Reference Manual (eTRM) summarizes the database objects in a specific EBS release. Those are useful references, but sometimes you need to find out which database objects have changed between one EBS release and another. This kind of information about the differences or deltas between two releases is useful if you have customized or extended your EBS instance and plan to upgrade to EBS 12.1.3. Where can you find that information? Answering that question has just gotten a lot easier. You can now use a new EBS Data Model Comparison Report tool: EBS Data Model Comparison Report Overview (Note 1290886.1). This new tool lists the database object definition changes between the following source and target EBS releases:
    EBS 11.5.10.2 and EBS 12.1.3
    EBS 12.0.4 and EBS 12.1.3
    EBS 12.1.1 and EBS 12.1.3
    EBS 12.1.2 and EBS 12.1.3
    For example, here's part of the report comparing Bill of Materials changes between 11.5.10.2 and 12.1.3:

    Read the article

  • mysql not starting

    - by Eiriks
    I have a server running on rackspace.com, it been running for about a year (collecting data for a project) and no problems. Now it seems mysql froze (could not connect either through ssh command line, remote app (sequel pro) or web (pages using the db just froze). I got a bit eager to fix this quick and rebooted the virtual server, running ubuntu 10.10. It is a small virtual LAMP server (10gig storage - I'm only using 1, 256mb ram -has not been a problem). Now after the reboot, I cannot get mysql to start again. service mysql status mysql stop/waiting I believe this just means mysql is not running. How do I get this running again? service mysql start start: Job failed to start No. Just typing 'mysql' gives: mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) There is a .sock file in this folder, 'ls -l' gives: srwxrwxrwx 1 mysql mysql 0 2012-12-01 17:20 mysqld.sock From googleing this for a while now, I see that many talk about the logfile and my.cnf. Logs Not sure witch ones I should look at. This log-file is empty: 'var/log/mysql/error.log', so is the 'var/log/mysql.err' and 'var/log/mysql.log'. my.cnf is located in '/etc/mysql' and looks like this. Can't see anything clearly wrong with it either. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! 
#general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ I need the data in the database (so i'd like to avoid reinstalling), and I need it back up running again. All hint, tips and solutions are welcomed and appreciated.
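A few diagnostic commands usually narrow down this kind of failure; the paths follow the Debian/Ubuntu layout in the config above, and this is a starting point rather than a guaranteed fix:

    # Upstart's "Job failed to start" hides the real error; the MySQL error log usually has it
    sudo tail -n 50 /var/log/mysql/error.log

    # A full disk is a common cause on a small 10 GB VPS
    df -h /var/lib/mysql /tmp

    # Check whether the kernel OOM killer took mysqld down (the box only has 256 MB RAM)
    dmesg | grep -i -E 'oom|mysql'

    # Run the server in the foreground to see the startup failure directly
    sudo -u mysql /usr/sbin/mysqld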

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcon's I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, security scanner, app permissions, and screened Android app market. The Android permission checker has fine-grain resource control, policy enforcement. Android static analysis also includes a static analysis app checker (bouncer), and a vulnerablity checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (the're not aware, not paranoid). All they want is functionality, extensibility, mobility Android had no "proper" encryption before Android 3.0 No built-in protection against social engineering and web tricks Alternative Android app markets are unsafe. Simply visiting some markets can infect Android Aditya classified Android Malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or kernel Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors What are the threats from Android malware? These incude leak info (contacts), banking fraud, corporate network attacks, malware advertising, malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions. Android malware is frequently "masquerated". That is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
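For anyone who wants to poke at the tables this talk abuses, the stock binutils tools are enough to inspect them on any dynamically linked binary; this is purely illustrative, run against a local copy rather than the system ping:

    # Dynamic symbol table (.dynsym)
    readelf --dyn-syms ./ping

    # Dynamic relocation entries (.rela.dyn / .rela.plt) that the loader processes
    readelf -r ./ping

    # Section headers, showing where the loader-visible metadata lives
    readelf -S ./ping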
A BF tool compiles "executable" by modifying the symbol table in an existing ELF executable. The tool modifies .dynsym and .rela.dyn table, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround to "recollate" data to other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC). FTC is trying to improve mobile app privacy, but FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique as carriers have extra control over iPhones they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electrnic Privacy Info Center), although more obsecure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need incentive to grant users control for those who want it, by holding them liable and responsible for breeches on their clock. Location information should be added current CPNI privacy protection, and require "Pen/trap" judicial order to obtain (and would still be a lower standard than 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the Whitehouse. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - hardware security device to store hashs and hardware-protected keys "secure boot" can control at firmware level what boot images can boot "measured boot" OS feature that tracks hashes (from BIOS, boot loader, krnel, early drivers). "remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but method is same, technological specialties. For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: Person or issue of pubic interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining Journalists, use of coding and web development for Journalists, and Journalists busted for hacking (Murdock). Info gathering by investigative journalists include Public records laws. Federal Freedom of Information Act (FOIA) is good, but slow. California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often need to sue (especially FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks) Journalists want: leads, protecting ourselves, our sources, and adapting tools for news gathering (Google hacking). Anonomity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption, want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real world accessibility). mostly refers to blind users and screenreaders, for our purpose. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example MS Word has executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about web browser you use and are bad at supporting other browsers. Uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executible format, embedded flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags, untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem. 
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust" Not much help though, except for "Robust", but here's some gems: * all information should be parsable (paraphrasing) * if not parsable, cannot be converted to alternate formats * maximize compatibility in new document formats Executible webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: Accessibility and security problems come from same source Expose "better user experience" myth Keep your corner of Internet parsable Remember "Halting Problem"—recognize false solutions (checking and verifying tools) Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (a IT guy), 960 hours/year was spent patching POSs in 850 restaurants. Often applying some patches make no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds). Scanners give a false sense of security. In reality, breeches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide:on action and timeline Test: pre-test patches (stability, functionality, rollback) for change control Install: notify, change control, tickets McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if they pass their remote scanning. The problem is a removal of trustmarks act as flags that you're vulnerable. Easy to view status change by viewing McAfee list on website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If their certification image changes to a 1x1 pixel image, then they are longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is change in TrustGuard status is a flag for hackers to attack your site. Entire idea of seals is silly—you're raising a flag saying if you're vulnerable.
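The scripted check the speakers describe boils down to fetching the seal image served for a site each day and diffing its size; a one-liner sketch with a hypothetical seal URL (requires curl and ImageMagick):

    # A 1x1 result means the trustmark has been pulled, i.e. the site currently fails the scan
    curl -s "https://example.com/path/to/mcafee-secure-seal.png" | identify -format '%wx%h\n' -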

    Read the article

  • SQL SERVER – Auto Complete and Format T-SQL Code – Devart SQL Complete

    - by pinaldave
    Some people call it laziness, some will call it efficiency, some think it is the right thing to do. At any rate, tools are meant to make a job easier, and I like to use various tools. If we consider the history of the world, if we all wanted to keep traditional practices, we would have never invented the wheel.  But as time progressed, people wanted convenience and efficiency, which then led to laziness. Wanting a more efficient way to do something is not inherently lazy.  That’s how I see any efficiency tools. A few days ago I found Devart SQL Complete.  It took less than a minute to install, and after installation it just worked without needing any tweaking.  Once I started using it I was impressed with how fast it formats SQL code – you can write down any terms or even copy and paste.  You can start typing right away, and it will complete keywords, object names, and fragmentations. It completes statement expressions.  How many times do we write insert, update, delete?  Take this example: to alter a stored procedure name, we don’t remember the code written in it, you have to write it over again, or go back to SQL Server Studio Manager to create and alter which is very difficult.  With SQL Complete , you can write “alter stored procedure,” and it will finish it for you, and you can modify as needed. I love to write code, and I love well-written code.  When I am working with clients, and I find people whose code have not been written properly, I feel a little uncomfortable.  It is difficult to deal with code that is in the wrong case, with no line breaks, no white spaces, improper indents, and no text wrapping.  The worst thing to encounter is code that goes all the way to the right side, and you have to scroll a million times because there are no breaks or indents.  SQL Complete will take care of this for you – if a developer is too lazy for proper formatting, then Devart’s SQL formatter tool will make them better, not lazier. SQL Management Studio gives information about your code when you hover your mouse over it, however SQL Complete goes further in it, going into the work table, and the current rate idea, too. It gives you more information about the parameters; and last but not least, it will just take you to the help file of code navigation.  It will open object explorer in a document viewer.  You can start going through the various properties of your code – a very important thing to do. Here are are interesting Intellisense examples: 1) We are often very lazy to expand *however, when we are using SQL Complete we can just mouse over the * and it will give us all the the column names and we can select the appropriate columns. 2) We can put the cursor after * and it will give us option to expand it to all the column names by pressing the Tab key. 3) Here is one more Intellisense feature I really liked it. I always alias my tables and I always select the alias with special logic. When I was using SQL Complete I selected just a tablename (without schema name) and…(just like below image) … and it autocompleted the schema and alias name (the way I needed it). I believe using SQL Complete we can work faster.  It supports all versions of SQL Server, and works SQL formatting.  Many businesses perform code review and have code standards, so why not use an efficiency tool on everyone’s computer and make sure the code is written correctly from the first time?  If you’re interested in this tool, there are free editions available.  If you like it, you can buy it.  I bought it because it works. 
 I love it, and I want to hear all your opinions on it, too. You can get the product for FREE.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
Identifying business use cases

Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them. A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

Creating an effective IRM classification model

Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines access to secured documents within that use case. Oracle IRM has an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business

Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this.

- Document security classification decisions are simple. You only have one context to choose from!
- User provisioning is simple: just make sure everyone has a role in the only context in the business.
- Administration is very low. If you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

There are however some obvious downsides to this model.

- All users have access to all IRM-secured content. So potentially a sales person could access sensitive mergers and acquisitions documents, if they can get their hands on a copy that is.
- You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
- Changing a user's role affects every single document ever secured.

Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case

Now let's think about the total opposite of a single-context design. What if you created a context for each and every defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.

- Q1FY2010 Restricted Internal - Engineering Group 1 - Research
- Q1FY2010 Restricted Internal - Engineering Group 1 - Design
- Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
- Q1FY2010 Restricted External - Engineering Group 1 - Research
- Q1FY2010 Restricted External - Engineering Group 1 - Design
- Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
- Q1FY2010 Confidential Internal - Engineering Group 1 - Research
- Q1FY2010 Confidential Internal - Engineering Group 1 - Design
- Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
- Q1FY2010 Confidential External - Engineering Group 1 - Research
- Q1FY2010 Confidential External - Engineering Group 1 - Design
- Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
- Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
- Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
- Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
- Q1FY2010 Top Secret External - Engineering Group 1 - Research
- Q1FY2010 Top Secret External - Engineering Group 1 - Design
- Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

That is 18 contexts for a single engineering group in a single quarter; multiply by the 6 groups and you are managing 108. And because the rights model is reviewed every financial quarter, a single group alone accumulates 72 contexts over a year, and the department as a whole over 400.
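The arithmetic is easy to check. The snippet below is a throwaway illustration rather than anything IRM-specific; it simply enumerates the naming scheme used above to show how quickly per-quarter, per-level, per-audience, per-department contexts multiply.

```python
from itertools import product

quarters    = ["Q1FY2010", "Q2FY2010", "Q3FY2010", "Q4FY2010"]
levels      = ["Restricted", "Confidential", "Top Secret"]
audiences   = ["Internal", "External"]
groups      = [f"Engineering Group {n}" for n in range(1, 7)]
departments = ["Research", "Design", "Manufacturing"]

# Generate every context name the scheme implies.
contexts = [
    f"{q} {level} {aud} - {grp} - {dept}"
    for q, level, aud, grp, dept in product(quarters, levels, audiences, groups, departments)
]

per_group_per_quarter = len(levels) * len(audiences) * len(departments)
print(per_group_per_quarter)      # 18 contexts for one group in one quarter
print(per_group_per_quarter * 6)  # 108 across the six groups each quarter
print(per_group_per_quarter * 4)  # 72 for a single group over a year
print(len(contexts))              # 432 in total for the department per year
```

Every one of those names is a context someone has to administer, and every user's rights must be mapped onto them, which is exactly the overhead described next.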
What would be the advantages of such a complex classification model?

- You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for internal access, and his role will be reviewed on a very frequent basis.
- Your business may have very complex rights requirements, and mapping them directly to IRM may seem an obvious exercise.

The disadvantages of such a classification model are significant...

- Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
- From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
- Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all of their offline rights.
- Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts: too many contexts are unmanageable and too few contexts do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements for securing access to confidential information. Some customers I have worked with can tell me exactly which documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to the information which helps you define a context. First ask these questions about a set of documents:

- What is the topic?
- Who are the legitimate contributors on this topic?
- Who are the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context (a small illustrative sketch of this grouping heuristic appears at the end of this section). Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better for a document to be sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling the technology out wider across the business.
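As promised above, here is a small, purely hypothetical sketch of the three-question heuristic. It is not an Oracle IRM feature; it simply shows that once you can describe each set of documents by its topic, its legitimate contributors and its authorized readership, candidate contexts fall out naturally: document sets that share all three answers collapse into one context, and any set that differs significantly in one of them earns its own.

```python
from collections import defaultdict

# Each candidate document set described by the three questions:
# topic, legitimate contributors, authorized readership.
document_sets = [
    {"name": "M&A target analysis",    "topic": "mergers",     "contributors": {"m&a team"},    "readers": {"board", "m&a team"}},
    {"name": "M&A board briefings",    "topic": "mergers",     "contributors": {"m&a team"},    "readers": {"board", "m&a team"}},
    {"name": "Gearbox research notes", "topic": "engineering", "contributors": {"research"},    "readers": {"research", "design"}},
    {"name": "Supplier price list",    "topic": "purchasing",  "contributors": {"procurement"}, "readers": {"procurement", "finance"}},
]

def context_key(doc_set):
    """Two document sets belong in the same context only if the answers
    to all three questions are effectively the same."""
    return (doc_set["topic"],
            frozenset(doc_set["contributors"]),
            frozenset(doc_set["readers"]))

candidate_contexts = defaultdict(list)
for ds in document_sets:
    candidate_contexts[context_key(ds)].append(ds["name"])

for key, members in candidate_contexts.items():
    print(key[0], "->", members)
# mergers     -> ['M&A target analysis', 'M&A board briefings']  (one shared context)
# engineering -> ['Gearbox research notes']
# purchasing  -> ['Supplier price list']
```

In practice "significantly different" is a judgement call; when in doubt, merge the sets and prefer the broader context, exactly as advised above.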
Deciding on the use of roles in the context

Once you have decided on that first initial use case and the context to create, let's look at the details you need to decide upon. For each context, identify:

Administrative roles

- Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
- Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
- Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document-related roles

- Contributors: the people who create and edit documents in this context.
- Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on published work and work in progress.)
- Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle in gathering the information for our first context is to look at the permissions people will have over sealed documents. First think about why you are protecting the documents in the first place: it is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security later. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity.
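To make this starting point concrete, the small Python structure below sketches the kind of simple, permissive first rights model being recommended. It is illustrative only, not actual Oracle IRM configuration, and the feature names are assumptions rather than the product's exact feature list: every legitimate reader can open content from day one, while the more contentious rights start out restricted and can be loosened or tightened later.

```python
# A hypothetical starting rights model for a first context.
# Feature names ("open", "edit", "seal", "print") are illustrative.
initial_roles = {
    "Contributor": {"open", "edit", "seal", "print"},
    "Reviewer":    {"open", "edit"},   # can review, but not trusted to seal new content
    "Reader":      {"open"},           # no print or copy until the business asks for it
}

def rights_for(role_name: str) -> set:
    """Look up the feature set for a role; unknown roles get nothing."""
    return initial_roles.get(role_name, set())

# While the model stays this small, audit questions such as "who can print?"
# remain trivial to answer.
printers = [role for role, feats in initial_roles.items() if "print" in feats]
print(printers)  # ['Contributor']
```

Starting this permissive keeps user acceptance high; tightening a role later is a server-side change that never requires touching the sealed documents themselves.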
Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for granting people with legitimate needs appropriate access will be rewarded with acceptance from the user community. These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.

- Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access and the classification of the document. This information resides in the printed copy, making it easier to trace who printed it.
- Printed documents are slower to distribute in comparison to their digital counterparts, so time-sensitive information in printed format may present a lower risk.
- Print activity is audited, therefore you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity needed to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • Where are some good resources to learn Game Development with OpenGL ES 2.X

    - by Mahbubur R Aaman
    Background (from http://www.khronos.org/opengles/2_X/): OpenGL ES 2.0 combines a version of the OpenGL Shading Language for programming vertex and fragment shaders that has been adapted for embedded platforms, together with a streamlined API from OpenGL ES 1.1 that has removed any fixed functionality that can be easily replaced by shader programs, to minimize the cost and power consumption of advanced programmable graphics subsystems.

Related resources:
- The OpenGL ES 2.0 specification, header files, and optional extension specifications
- The OpenGL ES 2.0 Online Manual Pages
- The OpenGL ES 3.0 Shading Language Online Reference Pages
- The OpenGL ES 2.0 Quick Reference Card
- OpenGL ES 1.X
- OpenGL ES 2.0

From http://www.cocos2d-iphone.org/archives/2003: Cocos2d version 2 has been released, and one of the primary points noted is OpenGL ES 2.0 support. From http://www.h-online.com/open/news/item/Compiz-now-supports-OpenGL-ES-2-0-1674605.html: Compiz now supports OpenGL ES 2.0.

My question: as a game developer (I have to work with several game engines: Cocos2d, Unity), what resources would help me get up to speed with OpenGL ES 2.X for better results while developing games?

    Read the article

< Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >