Search Results

Search found 9467 results on 379 pages for 'tutor flowchart policy policy and procedure manual procedure'.

Page 145/379 | < Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >

  • Best Free software for hosting user guides

    - by Hippyjim
    Hi all. After having to clean up spam from a MediaWiki install for the umpteenth time, despite a reCAPTCHA plugin "preventing" automated signups, I'm wondering if MediaWiki is the right choice of CMS for hosting user manuals and guides. I've always loved the way wikis let guides be edited and commented on collaboratively, but I'm getting tired of dealing with automated vandals. I've disabled edits & signups for now, but as I'm having to go through the pain of cleaning thousands of junk pages, I'm beginning to think I should cut my losses and look for a better alternative. Does anyone know of a suitable FOSS application (preferably PHP/MySQL based) that would be simple for a non-coder (our manual writer) to edit, but that has all the interconnectivity and searchability of a wiki? Or should I just bite the bullet again and lock the wiki down even further?
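    If you do decide to lock the wiki down further, here is a minimal sketch of what that looks like in LocalSettings.php (these are standard MediaWiki settings; tune the policy to taste):

        # LocalSettings.php - lockdown sketch
        $wgGroupPermissions['*']['edit'] = false;           // no anonymous edits
        $wgGroupPermissions['*']['createaccount'] = false;  // sysops create accounts manually
        $wgEmailConfirmToEdit = true;                        // registered users must confirm email first

    With account creation restricted to sysops, the spam-bot signup surface disappears while legitimate collaborators keep full wiki editing.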

    Read the article

  • How to set owner and permissions on a cryptsetup-made device?

    - by Antoine Rodriguez
    I have an encrypted loopback volume that I mount and unmount manually, so I use cryptsetup luksOpen and cryptsetup luksClose. However, when I invoke luksOpen, the /dev/mapper device pops up in every desktop session under GNOME/Xfce/KDE/Unity..., and it then lets any user mount (with a password), eject and unmount the volume. That is quite annoying on a multi-user server (you are working on your files and the volume gets unmounted under you). How can I define ownership and permissions on the device? I've tried the chown and chmod approach, which changes nothing. Cryptsetup doesn't have any option for this, and crypttab auto-mounts the filesystem on boot, which is unwanted (I want manual mounts only).
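    One possible avenue, offered as a sketch rather than a tested recipe: a udev rule keyed on the device-mapper name can both set ownership on the node and ask the desktop automounter to ignore it. The volume name "securevol", the user "antoine" and the rule file name are placeholders; UDISKS_PRESENTATION_HIDE targets udisks1 and UDISKS_IGNORE targets udisks2, so which key matters depends on your stack:

        # /etc/udev/rules.d/99-crypt-private.rules (hypothetical file name)
        # Match the mapped device created by "cryptsetup luksOpen ... securevol"
        ENV{DM_NAME}=="securevol", OWNER="antoine", GROUP="antoine", MODE="0600"
        # Hide it from the desktop automounter (udisks1 / udisks2 keys respectively)
        ENV{DM_NAME}=="securevol", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"

    Reload the rules with udevadm control --reload-rules, then re-run luksOpen to test.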

    Read the article

  • Oracle Solaris 11.1 Security Lab

    - by user12608073
    Recently I developed a set of lab exercises for an Oracle OpenWorld Hands-On Lab, entitled HOL10201, "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications". It explored the new Extended Policy for privilege assignments in Oracle Solaris 11.1. Oracle Solaris 11.1 has now been officially released via the Package Repository. The release and branch are numbered 0.5.11-0.175.1.0.0.24.2, which means it is based on build 24b of 11.1, which is, in turn, based on build 175a of 11.0. There is a good summary of new features available here: Oracle Solaris 11.1 - What's New. Pages 5 through 7 give an overview of some of the new security enhancements. There is much more information available in the newly published documentation for Oracle Solaris 11.1. I plan to explore some of these enhancements in a series of blog entries. Meanwhile, I've published a copy of the lab materials, which you can try out with this new release.

    Read the article

  • Tor check failed though Vidalia shows green onion

    - by Wolter Hellmund
    I have installed Tor successfully, and Vidalia shows it is running without problems; however, when I check whether I am using Tor on this website, I get an error message saying I am not. I have tried two things to fix this: I installed ProxySwitchy in Google Chrome and created a profile for Tor (address 127.0.0.1, port 8118), but enabling the proxy doesn't change the result on the Tor check website linked above. I also changed my network proxy settings through System Settings > Network from None to Manual, setting the address to 127.0.0.1 and the port to 8118 everywhere except for the SOCKS host, where I entered 9050 instead. This makes the internet stop working completely. How can I fix this problem?
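    One assumption worth checking before anything else: port 8118 is conventionally Privoxy/Polipo, and nothing listens there unless one of those is installed and chained to Tor; Tor itself only exposes SOCKS on 9050. You can test the SOCKS path directly from a terminal:

        # Ask check.torproject.org through Tor's SOCKS port; --socks5-hostname resolves DNS via Tor too
        curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/ | grep -i congratulations

    If that prints the "Congratulations" line, Tor is fine and the problem is the missing HTTP proxy in front of it.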

    Read the article

  • In the Groove: PASS Board Year 1, Q3

    - by Denise McInerney
    It's nine months into my first year on the PASS Board and I feel like I've found my rhythm. I've accomplished one of the goals I set out for the year and have made progress on others. Here's a recap of the last few months.

    Anti-Harassment Policy & Process Completed: In April I began work on a Code of Conduct for the PASS Summit. The Board had several good discussions and various PASS members provided feedback. You can read more about that in this blog post. Since the document was focused on issues of harassment, we renamed it the "Anti-Harassment Policy" and it was approved by the Board in August. The next step was to refine the guidelines and process for enforcement of the AHP. A subcommittee worked on this and presented an update to the Board at the September meeting. You can read more about that in this post, and you can find the process document here.

    Global Growth: Expanding PASS's reach and making the organization relevant to SQL Server communities around the world has been a focus of the Board's work in 2012. We took the Global Growth initiative out to the community for feedback, and everyone on the Board participated, via Twitter chats, Town Hall meetings, feedback forums and in-person discussions. This community participation helped shape and refine our plans. Implementing the vision for Global Growth cuts across all portfolios. The Virtual Chapters are well-positioned to help the organization move forward in this area. One outcome of the Global Growth discussions with the community is the expansion of two of the VCs from country-specific to language-specific. Thanks to the leadership in Brazil & Mexico for taking the lead here. I look forward to continued success for the Portuguese- and Spanish-language Virtual Chapters. Together with the Global Chinese VC, PASS is off to a good start in making the VCs truly global.

    Virtual Chapters: The VCs continue to grow and expand. Volunteers recently rebooted the Azure and Virtualization VCs, and a new Education VC will be launching soon. Every week VCs offer excellent free training on a variety of topics. It's the dedication of the VC leaders and volunteers that makes all this possible, and I thank them for it.

    Board Meeting: The Board had an in-person meeting in September in San Diego, CA. As usual we covered a number of topics, including governance changes to support Global Growth, the upcoming Summit, 2013 events and the (then) upcoming PASS election.

    Next Up: Much of the last couple of months has been focused on preparing for the PASS Summit in Seattle Nov. 6-9. I'll be there all week; feel free to stop me if you have a question or concern, or just to introduce yourself. Here are some of the places you can find me:
    VC Leaders Meeting: Tuesday at 8:00 am the VC leaders will have a meeting. We'll review some of the year's highlights and talk about plans for the next year.
    Welcome Reception: The VCs will be at the Welcome Reception in the new VC Lounge. Come by, learn more about what the VCs have to offer and meet others who share your interests.
    Exceptional DBA Awards Party: I'm looking forward to seeing PASS Women in Tech VC leader Meredith Ryan receive her award at this event sponsored by Red Gate.
    Session Presentation: I will be presenting a spotlight session entitled "Stop Bad Data in Its OLTP Tracks" on Wednesday at 3:00 pm.
    Exhibitor Reception: This reception Wednesday evening in the Expo Hall is a great opportunity to learn more about tools and solutions that can help you in your job.
    Women in Tech Luncheon: This year marks the 10th WIT Luncheon at PASS. I'm honored to be on the panel with Stefanie Higgins, Kevin Kline, Kendra Little and Jen Stirrup. This event is on Thursday at 11:30 am.
    Community Appreciation Party: Thursday evening, don't miss this event thanking all of you for everything you do for PASS and the community. This year we will be at the Experience Music Project and it promises to be a fun party.
    Board Q&A: Friday 9:45-11:15 am the members of the Board will be available to answer your questions. If you have a question for us, or want to hear what other members are thinking about, come by room 401 Friday morning.

    Read the article

  • Java Cryptography Extension

    - by Adam Tannon
    I was told that in order to support AES256 encryption inside my Java app I would need the JCE Unlimited Strength Jurisdiction Policy Files. I downloaded them from Oracle and unzipped the archive, and I'm only seeing 2 JARs: local_policy.jar and US_export_policy.jar. I just want to confirm I'm not missing anything here! My understanding (after reading the README.txt) is that I just drop these two into my <JAVA_HOME>/lib/security/ directory and they are installed. By the names of these JARs I have to assume that it's not the Java crypto API that cannot handle AES256, but that it's in fact a legal issue, perhaps? And that these two JARs basically tell the JRE, "yes, it's legally acceptable to run this level of crypto (AES256)." Am I correct or off base?
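    That reading is correct: the limit is a legal/export-control matter, not a capability of the crypto API. A quick way to verify the policy files took effect is the standard javax.crypto call below; it prints 128 under the default policy and 2147483647 once the unlimited-strength files are in place:

        import javax.crypto.Cipher;

        public class CryptoPolicyCheck {
            public static void main(String[] args) throws Exception {
                // 128 = default (restricted) policy; Integer.MAX_VALUE = unlimited policy installed
                System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
            }
        }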

    Read the article

  • Ubuntu One Bookmark sync not working.

    - by Rob
    Everything in Ubuntu One sync works great except bookmark sync. I tried the wiki answer that said to run:

    killall beam.smp beam
    rm ~/.config/desktop-couch/desktop-couchdb.ini
    dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort

    This is what my terminal came back with:

    robin@robin-MIDWAY:~$ killall beam.smp beam
    beam: no process found
    robin@robin-MIDWAY:~$ rm ~/.config/desktop-couch/desktop-couchdb.ini
    rm: cannot remove `/home/robin/.config/desktop-couch/desktop-couchdb.ini': No such file or directory
    robin@robin-MIDWAY:~$ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
    Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    robin@robin-MIDWAY:~$

    I'm a computer "newbie", so it's possible I'm doing something wrong. Are there any tutorials out there on how to use CouchDB? I have Bindwood installed.

    Read the article

  • Installing a downgraded version of Firefox 16 from PPA

    - by Mikko Ohtamaa
    I'd like to fetch and install the old FF16 instead of FF17 on an Ubuntu 10.04 LTS server. Currently FF17 is the default, and FF17 is incompatible with Selenium 2.26: http://stackoverflow.com/questions/13600247/unable-to-run-selenium-suite-on-firefox-17. How can one install an old version of Firefox with apt-get? Can one pin down this version so that it is not automatically updated? A static FF16 installation, if one exists, would also be a solution.

    apt-cache policy firefox
    firefox:
      Installed: 17.0.1+build1-0ubuntu0.10.04.1
      Candidate: 17.0.1+build1-0ubuntu0.10.04.1
      Version table:
     *** 17.0.1+build1-0ubuntu0.10.04.1 0
            500 http://dk.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages
            500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages
            100 /var/lib/dpkg/status
         3.6.3+nobinonly-0ubuntu4 0
            500 http://dk.archive.ubuntu.com/ubuntu/ lucid/main Packages
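    A sketch of the apt side, assuming a 16.x build is still reachable in an archive or PPA (the exact version string below is illustrative; check apt-cache madison firefox first):

        # Install a specific available version
        sudo apt-get install firefox=16.0.2+build1-0ubuntu0.10.04.1

        # /etc/apt/preferences - hold firefox at the 16.x series so
        # lucid-security does not pull 17 back in
        Package: firefox
        Pin: version 16.*
        Pin-Priority: 1001

    A pin priority above 1000 allows apt to keep (or even downgrade to) the pinned version despite newer candidates.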

    Read the article

  • Fan running constantly on a Dell D420 laptop

    - by Halik
    I'm running the latest Ubuntu 12.04 beta on a Dell D420 laptop. The fan spins almost constantly: even after turning the PC off, letting it cool down and turning it back on, the fan will come on after some idle time, not to mention under any CPU load. The CPU temps are normal, in the range of 50-55 °C idle and up to 70 °C under some load. It wouldn't be an issue, but the same PC running Fedora or Arch Linux had a much more modest fan profile: the temps were kept in check while you seldom heard the fan. To counter the problem, I currently use the i8k tools with manually set temperature thresholds, which seems to have worked, but i8kmon has a tendency to cycle the fan between the lower and higher state at roughly one-second intervals, which is extremely annoying. As far as I can tell, I did not run any special software (besides laptop-mode-tools) or any additional kernel modules under Arch Linux, and I can't say about Fedora.

    Read the article

  • Monitor SQL Server Replication Jobs

    - by Yaniv Etrogi
    The Replication infrastructure in SQL Server is implemented using SQL Server Agent to execute the various components involved, in the form of jobs (e.g. the LogReader agent job, the Distribution agent job, the Merge agent job). SQL Server jobs execute a binary executable file which is basically C++ code. You can download all the scripts for this article here.

    SQL Server Job Schedules: By default each job has only one schedule, set to start automatically when SQL Server Agent starts. This schedule ensures that whenever the SQL Server Agent service is started, all the replication components are also put into action. This is OK and makes sense, but there is one problem with this default configuration that needs improvement: if for any reason one of the components fails, it remains down in a stopped state. Unless you monitor the status of each component, you will typically get to know about such a failure from a customer complaint, as a result of missing data or data that is not up to date at the subscriber level. Furthermore, having any of these components in a stopped state can lead to more severe problems if not corrected within a short time. The action required to improve on these default settings is in fact very simple: adding a second schedule, set as a daily recurring schedule that runs every 1 minute, does the trick. SQL Server Agent's scheduler module knows how to handle overlapping schedules, so if the job is already being executed by another schedule it will not get executed again at the same time. So, in the event of a failure, the failed job remains down for at most 60 seconds. Many DBAs are not aware of this capability and so search for more complex solutions, such as an additional dedicated job running external code in VBScript or another scripting language that detects replication jobs in a stopped state and starts them, but there is no need to seek such external solutions when what is needed can be accomplished with T-SQL code.

    SQL Server Jobs Status: In addition to the 1-minute schedule we also want to ensure that key components of the replication are enabled, so I search for those components by their category and set their status to enabled in case they are disabled, by executing the stored procedure MonitorEnableReplicationAgents. The jobs that I typically handle are listed below, but you may want to extend this, so here is the query to return all jobs along with their category:

    SELECT category_id, name FROM msdb.dbo.syscategories ORDER BY category_id;

    Distribution Cleanup
    LogReader Agent
    Distribution Agent

    Snapshot Agent Jobs: By default, when a publication is created, a snapshot agent job also gets created with a daily schedule. I see more organizations where the snapshot agent job does not need to be executed automatically by the SQL Server Agent scheduler than organizations who need a new snapshot generated automatically. To assure this setting is in place I created the stored procedure MonitorSnapshotAgentsSchedules, which disables snapshot agent jobs and also deletes the job schedule. It is worth mentioning that when the publication property immediate_sync is turned off, the snapshot files are not created when the Snapshot agent is executed by the job. You control this property when the publication is created, with a parameter called @immediate_sync passed to sp_addpublication; for an existing publication you can use sp_changepublication.

    Implementation: The scripts assume the existence of a database named PerfDB.
    Steps:
    1. Run the scripts to create the stored procedures in the PerfDB database.
    2. Create a job that executes the stored procedures every hour.

    -- Verify that the 1_Minute schedule exists.
    EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 10; /* Distribution */
    EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 13; /* LogReader */
    -- Verify all replication agents are enabled.
    EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 10; /* Distribution */
    EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 13; /* LogReader */
    EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 11; /* Distribution clean up */
    -- Verify that Snapshot agents are disabled and have no schedule.
    EXEC PerfDB.dbo.MonitorSnapshotAgentsSchedules;

    Want to read more about replication? Check out the replication posts on my blog.
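    For reference, a sketch of how the article's 1-minute schedule can be attached to an agent job with msdb's standard sp_add_jobschedule (the job name below is a placeholder):

        -- Add a daily schedule that recurs every 1 minute
        EXEC msdb.dbo.sp_add_jobschedule
            @job_name = N'MyPublication-LogReader',  -- placeholder: your agent job name
            @name = N'1_Minute',
            @enabled = 1,
            @freq_type = 4,              -- daily
            @freq_interval = 1,          -- every day
            @freq_subday_type = 4,       -- subday unit: minutes
            @freq_subday_interval = 1;   -- every 1 minute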

    Read the article

  • A Plea for Doug

    - by user12652314
    Doug was a key leader in the JCP and did all his research on SPARC/Solaris. That is, until we changed the free patch policy that supported academics & research post-CIC, and he and many others left in droves, entirely pissed off. Well, we're working on a fix now so that all faculty can set up a server environment, get free patch support, and innovate on our stack, from OS to virtualization to toolsets, in support of research, academic use and teaching. Hopefully, just maybe, we can start to bring Doug and the others back home as a result.

    Read the article

  • Why is it impossible to produce truly random numbers?

    - by Vinoth Kumar
    I was trying to solve a hobby problem that required generating a million random numbers, but I quickly realized it was becoming difficult to make them unique. I picked up the Algorithm Design Manual to read about random number generation. It has the following paragraph, which I am not fully able to understand: "Unfortunately, generating random numbers looks a lot easier than it really is. Indeed, it is fundamentally impossible to produce truly random numbers on any deterministic device. Von Neumann [Neu63] said it best: 'Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.' The best we can hope for are pseudo-random numbers, a stream of numbers that appear as if they were generated randomly." Why is it impossible to produce truly random numbers on any deterministic device? What does this sentence mean?
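    To make the quoted point concrete, a tiny sketch in Python: a linear congruential generator (the classic textbook PRNG, here with the Numerical Recipes constants) seeded twice with the same value yields an identical "random" stream, because a deterministic machine can only ever compute a function of its inputs.

        from itertools import islice

        def lcg(seed, a=1664525, c=1013904223, m=2**32):
            """Linear congruential generator (Numerical Recipes constants)."""
            x = seed
            while True:
                x = (a * x + c) % m
                yield x

        run1 = list(islice(lcg(42), 5))
        run2 = list(islice(lcg(42), 5))
        print(run1 == run2)  # True: same seed, same "random" stream

    The only way to change the output is to change the input (the seed), which is why true randomness has to come from outside the deterministic computation.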

    Read the article

  • WIF, ADFS 2 and WCF – Part 6: Chaining multiple Token Services

    - by Your DisplayName here!
    See the previous posts first. So far we looked at the (simpler) scenario where a client acquires a token from an identity provider and uses that for authentication against a relying-party WCF service. Another common scenario is that the client first requests a token from an identity provider, and then uses this token to request a new token from a Resource STS or a partner's federation gateway. This sounds complicated, but is actually very easy to achieve using WIF's WS-Trust client support. The sequence is like this:

    1. Request a token from an identity provider. You use some "bootstrap" credential for that, like Windows integrated, UserName or a client certificate. The realm used for this request is the identifier of the Resource STS/federation gateway.
    2. Use the resulting token to request a new token from the Resource STS/federation gateway. The realm for this request would be the ultimate service you want to talk to.
    3. Use this resulting token to authenticate against the ultimate service.

    Step 1 is very much the same as the code I have shown in the last post. In the following snippet, I use a client certificate to get a token from my STS:

    private static SecurityToken GetIdPToken()
    {
        var factory = new WSTrustChannelFactory(
            new CertificateWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            idpEndpoint);
        factory.TrustVersion = TrustVersion.WSTrust13;

        factory.Credentials.ClientCertificate.SetCertificate(
            StoreLocation.CurrentUser,
            StoreName.My,
            X509FindType.FindBySubjectDistinguishedName,
            "CN=Client");

        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            AppliesTo = new EndpointAddress(rstsRealm),
            KeyType = KeyTypes.Symmetric
        };

        var channel = factory.CreateChannel();
        return channel.Issue(rst);
    }

    Using a token to request another token is slightly different. First, the IssuedTokenWSTrustBinding is used; second, the channel factory extension methods are used to send the identity provider token to the Resource STS:

    private static SecurityToken GetRSTSToken(SecurityToken idpToken)
    {
        var binding = new IssuedTokenWSTrustBinding();
        binding.SecurityMode = SecurityMode.TransportWithMessageCredential;

        var factory = new WSTrustChannelFactory(
            binding,
            rstsEndpoint);
        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.SupportInteractive = false;

        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            AppliesTo = new EndpointAddress(svcRealm),
            KeyType = KeyTypes.Symmetric
        };

        factory.ConfigureChannelFactory();
        var channel = factory.CreateChannelWithIssuedToken(idpToken);
        return channel.Issue(rst);
    }

    For this particular case I chose an ADFS endpoint for issued token authentication (see part 1 for more background). Calling the service now works exactly like I described in my last post. You may now wonder if the same thing can also be achieved using configuration only – absolutely. But there are some gotchas. First of all, the configuration file becomes quite complex. As we discussed in part 4, the bindings must be nested for WCF to unwind the token call-stack. But in this case svcutil cannot resolve the first hop, since it cannot use metadata to inspect the identity provider. This binding must be supplied manually. The other issue is around the value for the realm/appliesTo when requesting a token for the R-STS.
    Using the manual approach you have full control over that parameter, and you can simply use the R-STS issuer URI. Using the configuration approach, the exact address of the R-STS endpoint will be used. This means that you may have to register multiple R-STS endpoints in the identity provider. Another issue you will run into is that ADFS, by default, only accepts its configured issuer URI as a known realm. You'd have to manually add more audience URIs for the specific endpoints using the ADFS PowerShell cmdlets. I prefer the "manual" approach. That's it. Hope this is useful information.

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions along the lines of "why do I not see a parallel plan while Auto DOP is on (I think)...?" It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The place to start is to do an explain plan on your statement and to look at the parameter settings on the system.

    Parameter Settings to Look At: First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing, and we will only do the auto magic if your objects are set to default parallel (so no degree specified). Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance, you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought. See here for more information. In general, stick with either CPU or a specific number. For now avoid the IO setting, as I've seen some mixed results with that... In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a:

    SQL> select * from V$IO_CALIBRATION_STATUS;

    STATUS        CALIBRATION_TIME
    ------------- ----------------------------------------------------------------
    READY         04-JAN-11 10.04.13.104 AM

    You should see that your IO Calibrate is READY, and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step, you will get the following note in the explain plan:

    Note
    -----
       - automatic DOP: skipped because of IO calibrate statistics are missing

    One more note on calibrate_io: if you do not have asynchronous IO enabled, you will see:

    ERROR at line 1:
    ORA-56708: Could not find any datafiles with asynchronous i/o capability
    ORA-06512: at "SYS.DBMS_RMIN", line 463
    ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
    ORA-06512: at line 7

    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.

    Explain Plan Explanation: To see the notes that are shown and explained here (and the little snippet above), you can use a simple explain plan mechanism. There should be no need to add +parallel etc.

    explain plan for <statement>
    SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    Auto DOP: The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this:

    Automatic degree of parallelism is disabled: <reason>

    These are the reason codes:
    Parameter - parallel_degree_policy = MANUAL, which will not allow Auto DOP to kick in
    Hint - One of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL)
    Outline - A SQL outline of an older version (before 11.2) is used
    SQL property restriction - The statement type does not allow for parallel processing
    Rule-based mode - Instead of the cost-based optimizer, the system is using the RBO
    Recursive SQL statement - The statement type does not allow for parallel processing
    pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled
    Limited mode but no parallel objects referenced - Your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree.
    In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.

    Parallel Degree Limited: When Auto DOP does do its work, you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:

    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). Right now it looks like this:

    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling: With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

    Note
    -----
    - dynamic sampling used for this statement (level=5)
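    As a starting point for the parameter check described above, a plain query against the standard data dictionary is enough (or "show parameter parallel_degree" in SQL*Plus):

        SELECT name, value
        FROM   v$parameter
        WHERE  name IN ('parallel_degree_policy', 'parallel_degree_limit');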

    Read the article

  • SQL SERVER – Recover the Accidentally Renamed Table

    - by pinaldave
    I have no answer to the following question. I saw a desperate email marked as urgent delivered in my mailbox: "I accidentally renamed a table in my SSMS. I was scrolling very fast and I made a mistake. It was either because I double-clicked or pressed F2 (the shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this. I am in big trouble. Help me get my original table name back."

    I have seen many similar scenarios in my life, and they give me a very good opportunity to preach wisdom, but when the house is burning we cannot talk about how we should have conserved the water earlier. The goal at that point is to put out the fire as fast as we can. I decided to answer this email with my best knowledge. If you have renamed the table, I think you are pretty much out of luck. Here are a few things you can do which, if you are lucky, can give you an idea of what your table name was.

    Method 1 (not recommended, but try your luck): Check the naming conventions of your system. I have often seen organizations name their indexes IX_TableName_Cols or their keys FK_TableName1_TableName2_Cols. If your organization follows the same convention, you can get the table's name from its keys. Again, note that it is quite possible that your table had already been renamed once and the keys were not updated; this can easily lead you to select an incorrect name. Follow this only if you are confident, or move on to the next method.

    Method 2 (not recommended, but try your luck): This method is also based on your organization's naming conventions. If you use the name of the table in any column name (some organizations use the table name in their incremental identity column name), you can get the name from there.

    Method 3 (not recommended, but try your luck): If you know where your table was used in your stored procedures, you can script the stored procedure and find the name of the table there.

    Method 4 (try your luck): All the best organizations first create a data model of the schema, and there is a good chance that this table is documented there; you should take your chances and refer to the original document. If your organization is good at managing docs or source code, you will get the name of the table back for sure.

    Method 5 (it WORKS, but try it on a development server): There is no sure way to get back the name of a table you accidentally renamed; however, there is one way which will work for sure. You need to take your latest full backup and restore it on your development server (remember: not on production, or wherever you renamed the table). Now restore the latest differential file of the full backup. Now restore all the log files one by one, making sure that you stop before the point in time at which you renamed the table. Now explore the restored database, and it will give you the name of the table which you renamed. If you are confident that the same table existed with the same name when the last full backup was made, you do not have to go through all the steps; you can just get the name of the table directly from the last backup's restore. Read the article about Backup Timeline.

    Wisdom: How can I miss the chance to preach wisdom when I get the opportunity to do so? Here are a few points to remember. Use a different account to explore the production environment; do not use the account which has all the rights and permissions all the time. Use an account which has read-only permissions if no modifications are required. Use policy-based management to prevent accidental changes: if there were a policy enforcing valid names, an accidental rename of the table would not be possible, only intentional, deliberate changes. Have proper auditing of the system in place; you can use DDL triggers, but be careful with their usage (get it reviewed properly first). (Add your suggestion here.)

    I guess Method 5 will work all the time (using point-in-time restore). Everything else is a matter of luck, and if your luck is bad you will end up with yet another incorrect name. Now go back and read the first line of this blog. Out of the five methods, four are just lucky guesses. Method 5 will work, but again, it is a lengthy process if the size of the database is huge or if you do not have a full backup. Did I miss anything obvious? Please leave a comment and I will publish your answer with due credit.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
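    Appendix: for Method 5, a sketch of the point-in-time restore sequence in T-SQL. Database, file and path names and the timestamp are placeholders; run it only on a development server:

        -- Restore the last full backup without recovering (placeholder names throughout)
        RESTORE DATABASE MyDb_Copy
            FROM DISK = N'C:\Backups\MyDb_Full.bak'
            WITH MOVE N'MyDb' TO N'C:\Dev\MyDb_Copy.mdf',
                 MOVE N'MyDb_log' TO N'C:\Dev\MyDb_Copy.ldf',
                 NORECOVERY;

        -- (Apply the latest differential here, also WITH NORECOVERY, if you have one.)

        -- Apply log backups, stopping just before the rename happened
        RESTORE LOG MyDb_Copy
            FROM DISK = N'C:\Backups\MyDb_Log1.trn'
            WITH STOPAT = N'2012-06-01 10:30:00', RECOVERY;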

    Read the article

  • Welcome

    - by Jiandong Guo
    In this blog I plan to provide you with information about OWSM, Oracle Web Services Manager. I joined the Platform Security and OWSM team in Oracle's identity management organization in February 2010. Before that I had been working on Metro, an open-source Web services project, in Sun's GlassFish organization for 5 years, as one of the architects for security. I am continuing that work here at Oracle on OWSM, focusing on developing and evangelizing our enterprise Web services security, identity and policy management offerings. To start with, I plan to write a series of posts on some of the new features for OWSM in Oracle Fusion Middleware 11g R1 PS3. Thank you all for your interest.

    Read the article

  • Partner Webcast - Upgrade Oracle Forms to 11g version

    - by Dmitry Nefedkin
    Description: Oracle Forms, a component of Oracle Fusion Middleware, is Oracle's long-established technology for designing and building enterprise applications quickly and efficiently. The latest Oracle Forms release is 11g, and it's the only release supported right now. During this webinar we are going to explore the architecture and the new features of Oracle Forms 11g and describe the steps required to upgrade from previous Oracle Forms releases. If you're still on Forms 10g or earlier, you should not miss the chance to join this webinar.

    Agenda:
    Oracle Forms releases/support/license policy
    Oracle Forms 11g architecture
    Oracle Forms 11g new features
    Oracle Forms 11gR2 upgrade steps
    Installation of the Forms 11g environment
    Configuring the environment
    Upgrading Forms modules
    Useful links

    Content: The recording is available; use the following link. Or you can just watch the slides below. Upgrade Oracle Forms to 11g. View more presentations from Oracle ISV Migration Center.

    Read the article

  • Internet Explorer 10 Release Preview now available for Windows 7 SP1!

    - by KeithMayer
    This week, the IE team released IE10 Release Preview for Windows 7 SP1 and Windows Server 2008 R2 SP1! You can download IE10 Release Preview for evaluation and testing (remember, it's still pre-release software) from the following location... Download IE10 Release Preview: http://windows.microsoft.com/en-US/internet-explorer/downloads/ie-10/worldwide-languages. You can get an overview of What's New in Internet Explorer 10 at: Internet Explorer 10 FAQ for IT Pros. Of course, you can also get the full release of IE10 by downloading Windows 8 at http://aka.ms/dlw8rtm. What's Next? After downloading IE10 Release Preview, begin setting up your lab environment to plan how you'll customize and deploy IE10 in your environment when it's released, with these resources: IE10 Customization and Administration; Internet Explorer Administration Kit (IEAK) 10; Group Policy Settings Reference. Hope this helps! Keith. Build Your Lab! Download Windows Server 2012. Don't Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines. Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group.

    Read the article

  • An Approach to Incremental Conversion

    - by Paula Speranza-Hadley
    It is common for Oracle Enterprise Taxation and Policy Management (ETPM) customers to implement in multiple phases. This results in a need for incremental conversion, where part of the data is in production and new data is now being added. Some of the new data can be new persons, accounts and their children, but some may be new tax types for existing taxpayers. This document describes a methodology for adding incremental data to ETPM. It does not address every possible data scenario, but offers a path to achieving incremental conversion without the need for code changes. https://blogs.oracle.com/tax/resource/IncrementalConversion.pdf

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities for the company's third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new-construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of "high cost" and an "overcomplicated process". Since they had no preconceptions from the old TFS versions ('05 & '08), it was fun explaining how simple it is to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits.
    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later, or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.
    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear: this is very intuitive, and MSDN has a ton of lesson-based labs and videos.
    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (previously known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.
    5. Wait to fully integrate the whole workitem/project-management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing, and naturally everyone will want workitems to help them organize the chaos!
    6. Management will get limited value from the reports until you have a fully fledged implementation of project planning, construction, build, deployment and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system, which enables tracking of all activities produced by a user, whether a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case" or "change request". Workitems drive the work effort of the individual assigned to them and also play a key role in defining what needs to be done; they are the things the team needs to do to accomplish a goal.

    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce": with every test run, attachments including the recorded session, captured environment configurations and settings, screenshots, IntelliTrace (debugging history) and, in some cases, if the lab manager is being used, a snapshot of the tested environment are available.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge-conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Seeing what any developer did with the code, and resolving issues, has become much easier.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" at each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • Improved Database Threat Management with Oracle Audit Vault and ArcSight ESM

    - by roxana.bradescu
    Data represents one of the most valuable assets in any organization, making databases the primary target of today's attacks. It is important that organizations adopt a database security defense-in-depth approach that includes data encryption and masking, access control for privileged users and applications, activity monitoring and auditing. With Oracle Audit Vault, organizations can reliably monitor database activity enterprise-wide and alert on any security policy exceptions. The new integration between Oracle Audit Vault and ArcSight Enterprise Security Manager, allows organizations to take advantage of enterprise-wide, real-time event aggregation, correlation and response to attacks against their databases. Join us for this live SANS Tool Talk event to learn more about this new joint solution and real-world attack scenarios that can now be quickly detected and thwarted.

    Read the article

  • Data maintenance/migrations in image-based systems

    - by User
    Web applications usually have a database; the code and the database work hand in hand. That is why frameworks like Ruby on Rails and Django create migration files. But there are also servers written in Self or Smalltalk or other image-based systems that face the same problem: code is not written on the server but in a separate image belonging to the programmer. How do these systems deal with a changing schema and changing classes/prototypes? Which way do the migrations go? Example: what is the process for a new attribute to travel from the programmer's idea to the server code and to all existing objects? I found the GemStone/S manual, chapter 8, but it does not really talk about the process of shipping code to the server.
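    For contrast with the image-based world, this is roughly what the file-based migration mentioned above looks like in Django (app, model and field names are invented for illustration):

        # migrations/0002_add_nickname.py -- hypothetical app "accounts"
        from django.db import migrations, models

        class Migration(migrations.Migration):
            dependencies = [("accounts", "0001_initial")]
            operations = [
                # The new attribute travels as data in a versioned file,
                # checked into source control and replayed on the server
                migrations.AddField(
                    model_name="userprofile",
                    name="nickname",
                    field=models.CharField(max_length=40, default=""),
                ),
            ]

    In an image-based system there is no such file: the class and its live instances must be changed in place, which is exactly what the question is about.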

    Read the article

  • Help needed in customizing Ubuntu 13 distribution

    - by Bilal Wajid
    I have been using Ubuntu for over 3 years. I want to custom-build an Ubuntu distribution, and I am trying to do so with Ubuntu 13.04. I have tried the following:

    Remastersys - failed.
    Relinux - failed.
    Ubuntu Customization Kit - failed.
    LiveCDCustomizationFromScratch (help.ubuntu.com) - failed.

    Kindly guide me on how to do this, given the failed attempts above. I think a manual on how to do it would be very, very helpful. Thanks and best wishes, B. Wajid
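    One more route that may be worth trying, hedged accordingly since I have not verified it on 13.04: Debian's live-build tooling, which Ubuntu also packages. A minimal session looks roughly like this (flags vary between live-build versions, so treat it as a sketch):

        sudo apt-get install live-build
        mkdir mylive && cd mylive
        lb config --mode ubuntu --distribution raring   # raring = 13.04 codename
        # ...customize config/ (package lists, hooks, files to include)...
        sudo lb build                                   # produces a bootable ISO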

    Read the article

  • Accessibility bus warning when opening files in Eclipse from command line (Ubuntu 13.10)

    - by Reese
    Similar to the closed issue "Gnome Menu Broken?": when opening a file from the command line for editing in Eclipse, I get this warning:

    ** (eclipse:nnnn): WARNING **: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

    The 4-digit number in (eclipse:nnnn) changes each time I issue an 'eclipse some/file.ext' command. The file opens, but the warning is an annoyance that shouldn't be happening, and it may be indicative of some other problem. Updated Ubuntu 13.10 64-bit, updated Eclipse Luna.
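    A commonly suggested workaround for this GTK/AT-SPI warning is to suppress registration with the accessibility bus via the NO_AT_BRIDGE environment variable (note that this silences the message rather than fixing the bus):

        # Per invocation:
        NO_AT_BRIDGE=1 eclipse some/file.ext

        # Or persistently, e.g. in ~/.profile:
        export NO_AT_BRIDGE=1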

    Read the article

  • Gracefully terminate a request-based service on the server

    - by Jatin
    In our web application, each HTTP request triggers a lot of computation on the back end. The output can take anywhere from 10 seconds to 1 hour to produce. In the meantime, while it is being computed, "Waiting..." is shown on the website for the respective user. But it can happen that a user cuts the service off partway through. So what can be done on the back end so that the computation is stopped midway to save resources? What different tactics can be applied here? And preferably, instead of killing the thread directly, a graceful termination policy would work wonders.
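    A sketch of the cooperative-cancellation tactic in Java (all names hypothetical): submit the work as a task, keep the Future keyed by the request or session, and have the computation poll the interrupt flag at safe points, so that cancel() produces a graceful, resource-releasing exit instead of a killed thread. The essential design choice is that the computation itself decides where it is safe to stop.

        import java.util.concurrent.*;

        public class CancellableComputation {
            private static final ExecutorService pool = Executors.newFixedThreadPool(8);

            public static Future<String> start() {
                return pool.submit(() -> {
                    StringBuilder result = new StringBuilder();
                    for (int step = 0; step < 1_000_000; step++) {
                        // Safe point: bail out cleanly if the client went away
                        if (Thread.currentThread().isInterrupted()) {
                            // release locks, delete temp files, etc., then give up
                            throw new CancellationException("client cancelled");
                        }
                        result.append(compute(step));
                    }
                    return result.toString();
                });
            }

            private static String compute(int step) { return ""; } // placeholder work

            // Called when the HTTP client disconnects:
            public static void cancel(Future<String> f) {
                f.cancel(true);  // true => interrupt the running thread
            }
        }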

    Read the article
