Search Results

Search found 13222 results on 529 pages for 'security gate'.


  • Find Knowledge Quickly

    - by Get Proactive Customer Adoption Team
    Get to relevant knowledge on the Oracle products you use in a few quick steps! Customers tell us that the volume of search results returned can make it difficult to find the information they need, especially when similar Oracle products exist. These simple tips show you how to filter, browse, search, and refine your results to get relevant answers faster.

    Filter first: PowerView is your best friend. PowerView is an often-ignored feature of My Oracle Support that enables you to control the information displayed on the Dashboard, the Knowledge tab and regions, and the Service Request tab based on one or more parameters. You can define a PowerView to limit information based on product, product line, support ID, platform, hostname, system name, and more. Using PowerView allows you to restrict:
    - your search results, to the filters you have set
    - the product list, when selecting your products in Search & Browse and when creating service requests
    The PowerView menu is at the top of My Oracle Support, near the title. You turn PowerView on by clicking the "PowerView is Off" button. When PowerView is on and filters are active, clicking the button again toggles PowerView off. Click the arrow to the right to create new filters, edit filters, remove a filter, or choose from the list of previously created filters.

    You can create a PowerView in three simple steps:
    1. Turn PowerView on and select New from the PowerView menu.
    2. Select your filter from the Select Filter Type dropdown list and make selections from the other two menus. Hint: while there are many filter options, selecting your product line or your list of products will give you an effective filter. Click the plus sign (+) to add more filters, or the minus sign (-) to remove one.
    3. Click Create to save and activate the filter(s).
    You'll notice that "PowerView is On" now displays along with the active filters. For more information about PowerView's capabilities, click the "Learn more about PowerView…" menu item or view a short video.

    Browse & Refine: Access the Best Match Fast for Your Product and Task. In the Knowledge Browse region of the Knowledge or Dashboard tabs, pick your product, pick your task, and select a version, if applicable. A best match document – a collection of knowledge articles and resources specific to your selections – may display, offering you a one-stop shop. The best match document, called an "information center," is an aggregate of dynamically updated links to information pertinent to the product, task, and version (if applicable) you chose. These documents are refreshed every 24 hours to ensure that you have the most current information at your fingertips. Note: not all products have information centers. If no information center appears as a best match, click Search to see a list of search results. From the information center, you can access topics ranging from a product overview to security information, as shown in the left menu.

    Just want to search? That's easy too! Again, pick your product, pick your task, select a version, if applicable, enter a keyword term, and click Search. Hint: in this example, you'll notice that PowerView is on and set to PeopleSoft Enterprise. When PowerView is on and you select a product from the Knowledge Base product list, the listed products are limited to the active PowerView filter. (Products you've previously picked are also listed at the top of the dropdown list.) Your search results are displayed based on the parameters you entered. It's that simple!
    Related Information: My Oracle Support - User Resource Center [ID 873313.1] | My Oracle Support Community | For more tips on using My Oracle Support, check out these short video training modules: My Oracle Support Speed Video Training [ID 603505.1]

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: mission accomplished. And it was fun! Having used the resources listed in my previous article about learning content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight.

    Local Prometric testing centre. Back in November I chose a local testing centre which was the easiest to access from my office, despite the horrible traffic you might experience here on the island. Actually, it was not the closest one. But due to their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my priority. Boy, would I regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easier part, but unfortunately, due to heavier traffic than usual, I arrived only 20 minutes early. Not too bad, but more was to come. The building, called 'le Hub', is nicely renovated and provides the right environment for an IT group of companies like FRCI. I think they currently have five independent IT departments over there. Even the handling at the reception was straightforward, welcoming, and put me at ease. But then... first shock: "We don't have any exam registration for today." - Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. Next shock: a couple of minutes later, another guy tried to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." - Are you kidding me? Here's the official confirmation from Prometric, and I don't find it funny that I took a day off only to hear this kind of blubbering nonsense. I thought I'd be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally came back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I was allowed to sit down and take the exam.

    Exam details. Well, you know the rules. Signing an NDA doesn't allow me to provide you any details about the questions or topics that were covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-)

    The result. "Congratulations! You have passed this Microsoft Certification exam." - In general, I have to admit that the parts on HTML5 and CSS3 were the easiest after all, and that I have to get a little more familiar with certain JavaScript features like class definitions, inheritance, and data security. Anyway, exam passed - who cares about the details?

    Next goal. Of course, the journey to Microsoft certifications continues, and my next goal is to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript - and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the MCSD: Windows Store Apps using HTML5 certification. I guess during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • Receiving and organizing results without server-side script (JavaScript)

    - by Aaron
    I have been working on a very large form project for the past few days. I finally managed to get tables to work properly within a JavaScript file that opens a new display window. Now the issue at hand is that I can't seem to get CSS to work within the JavaScript I have created. Before everyone starts thinking "just use server-side script, idiot", I have a few conditions and some info about the file: The file is only run locally due to confidential information risks. Once again, no option for server access. The intranet the computers are on is already high security, and this wouldn't exactly be a company-wide program. The code below is obviously just a demo with a simple form... the real file has six pages of highly confidential information. Only certain fields on this form will actually be gathered (for example, address doesn't appear in the results). The display page will contain data compiled into tables for easier viewing. I need to be able to write CSS rules that make certain information easy to spot where it applies, and that match the design of the original form. Here is the code:

        <html>
        <head>
        <title>Form Example</title>
        <script type="text/javascript">
        // Collects the relevant form values and displays them as a table in a new window.
        function display() {
            var DispWin = window.open('', 'NewWin',
                'toolbar=no,status=no,width=800,height=600');
            var message = "<body>";
            message += "<table border=1 width=100%>";
            message += "<tr>";
            message += "<th colspan=2 align=center><font face=stencil color=black><h1>Results</h1><h4>one</h4></font></th>";
            message += "</tr>";
            message += "<tr>";
            message += "<td width=50% align=left>";
            message += "<ul><li><b><font face=calibri color=red>NAME:</font></b> " + document.form1.yourname.value + "</ul>";
            message += "</td>";
            message += "<td width=50% align=left>";
            message += "<ul><li><b>PHONE: </b>" + document.form1.phone.value + "</ul>";
            message += "</td>";
            message += "</tr>";
            message += "</table>";
            message += "</body>";
            DispWin.document.write(message);
            // Apply CSS to the new window's document; note 'color: blue'
            // ('#blue' is not a valid CSS colour value).
            DispWin.document.body.style.cssText = 'color: blue;';
        }
        </script>
        </head>
        <body>
        <h1>Form Example</h1>
        Enter the following information:
        <form name="form1">
        <p><b>Name:</b> <input type="text" size="20" name="yourname"></p>
        <p><b>Address:</b> <input type="text" size="30" name="address"></p>
        <p><b>Phone:</b> <input type="text" size="15" name="phone"></p>
        <p><input type="button" value="Display" onclick="display();"></p>
        </form>
        </body>
        </html>

    Read the article

  • Snap App Windows to Pre-Defined Screen Sections with Acer GridVista

    - by Asian Angel
    The window snapping feature in Windows 7 and the ability to organize monitor(s) into specific gridded sections have both become popular lately. If you love the idea of having both combined in a single piece of software, then join us as we look at Acer GridVista. Note: Acer GridVista works with Windows XP, Vista, & 7. It will also work with dual monitors.

    Setup: Acer GridVista comes in a zip file format, and at first you might assume that it is portable in nature, but it is not. Once you unzip the enclosed folder you will need to double-click on "Setup.exe" to install the program.

    Acer GridVista in Action: Once you have installed the program and started it up, all that you will notice at first is the new "System Tray Icon". Here you can see the "Context Menu"… The menu command that you will likely use most of the time is the "Grid Configuration" command. Notice that for our single-monitor setup it lists "Display 1". The "Single" setting is enabled by default, and you can easily choose the layout that best suits your needs. The enabled layout style will always be highlighted in yellow for easy reference. For our example we chose the "Triple (primary at right)" layout style. Each section is specifically numbered as shown here. Do not worry… the grid and numbers only appear for a moment and then become invisible again until you move an app window into that section/area of your screen. On every regular app window you open, you will notice three new buttons in the upper right corner. Here is what each of these new buttons does: Acer GridVista Extensions (Transparent, Send To Window Grid, About Acer GridVista): viewable in a drop-down menu. Lock To Grid (Enable/Disable): enabled by default. Note: set this to disable on a particular window to keep it free of the grid-locking function. Always On Top (Enable/Disable): disabled by default. A good look at the "Extensions" drop-down menu, where you can set an app window to be transparent or send it to a specific screen section on your monitor(s). If you open an app it will not automatically lock into a specific section. To lock the window into a specific section, drag and drop the app window into the desired section. Notice the red outline and highlighted number on "Section 2" below. The red outline and highlighted number serve as an indicator that if you release the app window at that moment, it will lock into the outlined/highlighted section. Now that Notepad is locked into "Section 2", you can see that it is maximized within that section. Continue to drag and drop your app windows into the appropriate sections as desired… apps can still be minimized to the Taskbar the same as before.

    Options: These are the options available for Acer GridVista…

    Conclusion: If you have been wanting the ability to "snap" windows and organize them into specific screen areas, then Acer GridVista is definitely a program you should try out.
    Links: Download Acer GridVista at Softpedia | View detailed information at the Acer GridVista Homepage

    Read the article

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT is losing favour. (Note that ACS doesn't currently support JWT encryption; if you want encrypted tokens you need to go SAML.) In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims just with string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt:

        <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
          <t:Lifetime>
            <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>
            <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>
          </t:Lifetime>
          <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
            <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
              <Address>http://localhost/x.y.z</Address>
            </EndpointReference>
          </wsp:AppliesTo>
          <t:RequestedSecurityToken>
            <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ]</wsse:BinarySecurityToken>
          </t:RequestedSecurityToken>
          <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>
          <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
          <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
        </t:RequestSecurityTokenResponse>

    The token as a whole needs to be base-64 decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base-64 encoded, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm:

        {"typ":"JWT","alg":"HS256"}

    The payload contains the same data as in the SWT, but in JSON rather than querystring format:

        {"aud":"http://localhost/x.y.z",
         "iss":"https://adfstest-bhw.accesscontrol.windows.net/",
         "nbf":1346398795,
         "exp":1346404795,
         "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z",
         "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows",
         "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz",
         "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]",
         "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]",
         "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"}

    The signature is in the third part of the token. Unlike SWT, which is fixed to HMAC-SHA-256, JWT can support other algorithms (the one in use is specified as the "alg" value in the header).
    The MSDN article How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base-64 decoding part, it's very similar to SWT extraction. I've wrapped the basic SWT and JWT handling in a ClaimInspector.aspx page on GitHub here: SWT and JWT claim inspector. You can drop it into any ASP.NET site and set the URL to be your redirect page in ACS. Swap ACS to issue SWT or JWT, and using the same page you can inspect the claims that come out.
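    To make the string-manipulation approach concrete, here is a minimal C# sketch of pulling the header and payload out of a raw JWT, assuming the token string has already been extracted from the BinarySecurityToken element. The class and method names are purely illustrative, and signature validation is deliberately left out:

        using System;
        using System.Text;

        static class JwtInspector
        {
            // JWT parts are base64url encoded ("-" and "_" instead of "+" and "/")
            // with the padding stripped, so re-map the characters and re-pad
            // before handing the string to the standard decoder.
            static string Base64UrlDecode(string input)
            {
                string s = input.Replace('-', '+').Replace('_', '/');
                switch (s.Length % 4)
                {
                    case 2: s += "=="; break;
                    case 3: s += "=";  break;
                }
                return Encoding.UTF8.GetString(Convert.FromBase64String(s));
            }

            public static void Inspect(string jwt)
            {
                // header.payload.signature, dot-separated
                string[] parts = jwt.Split('.');
                Console.WriteLine(Base64UrlDecode(parts[0])); // e.g. {"typ":"JWT","alg":"HS256"}
                Console.WriteLine(Base64UrlDecode(parts[1])); // the JSON claim set
                // parts[2] is the signature -- validate it before trusting any claims
            }
        }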

    Read the article

  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
    We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS. Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection, this data can be sent back to the server. Multiple photographs can be taken of each item, but they will be held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back so that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters. Firstly, the web service architecture: we currently have two operations, GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture, we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty, and we would also have to guarantee delivery order from the app. For example, we need to make sure that containing rooms are added before their items. Although this seems much more RESTful, our perception is that these extra calls are expensive (security checks, database calls, etc.). Does the type of web API (operations focus versus resource focus) determine chunky vs. chatty? Since this is mobile (3G), are we better off handling lots of smaller messages, or a few large ones? Secondly, the iOS side: what is the current advice on how to manage data synchronization within the iOS (5) app itself? We need multiple queues, and we need to guarantee delivery order in each queue (and, technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and, when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates. We don't want any of this in the app itself. Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent, but the original entity tree has been removed and replaced. Obviously, updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice iOS libraries out there to do such things. I welcome any advice you might have.

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the standard approach was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and intensive on the network, especially when dealing with larger amounts of data. Then along came robocopy, an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well that it made it into the public realm. At least now we had the ability to determine what files had changed and replicate only those. Well, over time there has been an evolution of this idea. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache… however, you don't always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, that either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy. SyncToy is from Microsoft, seemingly released in 2009, and wraps a nice GUI around setting up a paired set of folders (remember the mobile briefcase from Windows 98?), allowing you the choice of synchronization methods: one-way or two-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click Run. Scheduling is even easier: Microsoft has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, -R as an argument, and the time schedule. No more complicated command lines or scripts. I find this especially useful when I use MS Backup to back up a system volume, but only want subsets of backup information from a data share, and only when that dataset has changed - not relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies… one copy elsewhere suffices. At home, it is very useful for your pictures, videos, music, etc. - the backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS. Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings you can indeed lose that file as the destination updates, so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene. Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155
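    As a rough sketch of that scheduling step (the install path, task name, and start time here are assumptions - adjust them to your setup), creating the scheduled task from the command line might look like:

        schtasks /create /tn "Nightly SyncToy Run" /sc daily /st 01:00 /tr "\"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe\" -R"

    The -R argument runs your saved folder pair(s), as described above; you can of course set up the same task through the Task Scheduler GUI if you prefer.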

    Read the article

  • New Version 3.1 Endeca Information Discovery Now Available

    - by Mike.Hallett(at)Oracle-BI&EPM
    Business User Self-Service Data Mash-up Analysis and Discovery, integrated with OBI 11g and Hadoop. Oracle Endeca Information Discovery 3.1 (OEID) is a major release that incorporates significant new self-service discovery capabilities for business users, including agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI.
    · Self-Service Data Mashup and Discovery Dashboards: business users can combine information from multiple sources, including their own uploaded spreadsheets, to conduct analysis on the complete set. Creating discovery dashboards has been made even easier by intuitive drag-and-drop layouts and wizard-based configuration. Business users can now build new discovery applications in minutes, without depending on IT.
    · Enhanced Integration with Oracle BI: OEID 3.1 enhances its native integration with Oracle Business Intelligence Foundation. Business users can now incorporate information from trusted BI warehouses, leveraging dimensions and attributes defined in Oracle's Common Enterprise Information Model, but evolve them based on the varying day-to-day demands and requirements that they personally manage.
    · Deep Unstructured Analysis: business users can gain new insights from a wide variety of enterprise and public sources, helping companies to build an actionable Big Data strategy. With OEID's long-standing differentiation in correlating unstructured information with structured data, business users can now perform their own text mining to identify hidden concepts, without having to request support from IT. They can augment these insights with best-in-class keyword search and pattern matching, all in the context of rich, interactive visualizations and analytic summaries.
    · Enterprise-Class Self-Service Discovery: OEID 3.1 enables IT to provide a powerful self-service platform to the business as part of a broader business analytics strategy, preserving the value of existing investments in data quality, governance, and security. Business users can take advantage of IT-curated information to drive discovery across high volumes and varieties of data, and share insights with colleagues at a moment's notice.
    · Harvest Content from the Web with the Endeca Web Acquisition Toolkit: Oracle now provides best-of-breed data access to website content through the Oracle Endeca Web Acquisition Toolkit. This provides an agile, graphical interface for developers to rapidly access and integrate any information exposed through a web front-end. Organizations can now cost-effectively include content from consumer sites, industry forums, government or supplier portals, cloud applications, and myriad other web sources as part of their overall strategy for data discovery and unstructured analytics.
    For more information: OEID 3.1 OTN Software and Documentation Download | Endeca available for download on the Software Delivery Cloud (eDelivery) | New OEID 3.1 Videos on YouTube | Oracle.com Endeca Site

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with. Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology, per-user access exceptions within my system. The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, since I will have a users table in the DB anyway (and will be managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity of implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether. On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to code the authentication myself if it is done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication. I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
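    For what it's worth, the password-hash part of rolling your own authentication is small if you lean on the framework. A minimal C# sketch using .NET's built-in PBKDF2 implementation (Rfc2898DeriveBytes), where the iteration count and storage format are purely illustrative assumptions, might look like:

        using System;
        using System.Linq;
        using System.Security.Cryptography;

        static class PasswordHasher
        {
            const int Iterations = 10000; // illustrative; tune for your hardware

            // Returns "iterations.salt.hash" -- everything you need to store per user.
            public static string Hash(string password)
            {
                var salt = new byte[16];
                using (var rng = new RNGCryptoServiceProvider())
                    rng.GetBytes(salt);
                byte[] hash;
                using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                    hash = kdf.GetBytes(32);
                return Iterations + "." + Convert.ToBase64String(salt) + "." + Convert.ToBase64String(hash);
            }

            public static bool Verify(string password, string stored)
            {
                var parts = stored.Split('.');
                int iterations = int.Parse(parts[0]);
                byte[] salt = Convert.FromBase64String(parts[1]);
                byte[] expected = Convert.FromBase64String(parts[2]);
                using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
                    return kdf.GetBytes(32).SequenceEqual(expected); // note: not constant-time
            }
        }

    That said, the hashing is the easy part; account lifecycle, lockout, and recovery workflows are where the real effort in self-managed authentication tends to go.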

    Read the article

  • Cream of the Crop

    - by KemButller
    JD Edwards has been working hard to ensure that you shouldn't have to work so hard! Yet there are still JD Edwards customers that may not be up to speed on all the new and improved tools and utilities we have delivered, all designed to make your life easier. So today I want to share what I consider to be the cream of the crop… those items that every customer should know about and leverage to make ERP life just a little bit (or A LOT) easier! These are my top picks, the cream of a very good crop. Explore and enjoy, and gain some of your time back to do with as you please.
    · www.runjde.com - It's where to go when you need to know! The Resource Kits available on www.runjde.com are comprehensive guides organized by user type. They provide brief descriptions of the wide array of resources available to the JD Edwards ecosystem, and links to each of those resources.
    · My Oracle Support (MOS) Information Centers - This link will take you to an index designed to provide simple and quick navigation to the available EnterpriseOne Information Centers. The index links to: EnterpriseOne application-specific Information Centers; EnterpriseOne Tools and Technology Information Centers; the EnterpriseOne Performance Information Center; and the EnterpriseOne 9.1 and 9.0 Information Centers. Information Centers give Oracle the ability to aggregate content for a given focus area and present it in categories for easy browsing. They offer a variety of focused, dynamic content organized around one or more of the following tasks: Overview; Use; Troubleshooting; Patching and Maintenance; Install and Configure; Upgrade; Optimize Performance; Security; Certify.
    · JD Edwards Newsletters - Be in the know by reading the Global Customer Support product newsletters. They are PACKED with news and information covering a wide range of topics; a must-read if you want to know what's happening in the JD Edwards universe! Read the latest EnterpriseOne newsletter; read the latest World newsletter; learn how to receive notification when a new newsletter edition is published.
    · Oracle Learning Library (OLL) - The Oracle Learning Library is the place to go for easy access to JD Edwards application and tools training. For a comprehensive view of the training available for a specific product or functional area, explore the Knowledge Paths. For net-change (new feature) training, explore the TOI sessions (TOI stands for Transfer Of Information). Tip: be sure to experiment with the search filters!
    · www.upgradejde.com - The site designed to help customers and partners with the process of upgrading JD Edwards. The site is a wealth of information, tools, and resources designed to assist in the evaluation, planning, and execution steps required when upgrading. Of note is the wildly successful upgrade strategy known as "The Art of the Possible", wherein JD Edwards and many of our partners hold free workshops to teach customers how to conduct upgrades in 100 days or less. Equally important, on www.upgradejde.com customers can gain visibility into planned enhancements using the Product and Technology Feature Catalogs. The catalogs are great for creating customer-specific reports about the net change between older releases and current or planned releases.
    Examples of other key resources on www.upgradejde.com are the product database changes between releases, extensibility guides (formerly known as programmer's guides), whitepapers, ROI calculators, and much more!

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end-user experiences. The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above, the why, the how (technically focused), and the impact are outlined, with the intent of providing the most value to a small team.
    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize the end-user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.
    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.
    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - the requesting party is forced to think critically about the commitment needed to complete their project and what gap must be closed with regard to educating those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.
    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.
    Request Phase Summary: With a simple form that outlines the ownership of a project and its content, its business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system stays clean and relevant to end users, allowing it to deliver the most value possible. The key here is to make the request straightforward and let the content management technology manage as much as possible through metadata, retention policies, and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering the security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • Oracle Executive Strategy Brief: Enterprise-Grade Cloud Applications

    - by B Shashikumar
    Cloud computing has clearly evolved into one of the dominant secular trends in the industry. Organizations are looking to the cloud to change how they buy and consume IT. And it's no longer just about lower up-front costs: the cloud promises to deliver greater agility and free up resources to focus on innovation rather than running and maintaining systems. But are organizations actually realizing these benefits? The full promise of cloud is not being realized by customers who entrust their business to multiple niche cloud providers. While almost 9 out of 10 companies expect more IT agility with cloud, only 47% are actually getting it (source: 2011 State of Cloud Survey by Symantec). These niche cloud customers have also seen the promises of lower costs, efficiency gains, improved security, and compliance go unfulfilled. Having one cloud provider for customer relationship management (CRM) and another for human capital management (HCM), and then trying to glue these proprietary systems together while integrating with a back-office financial system, can add to complexity and long-term costs. Completing a business process or generating an integrated report is cumbersome, and leverages incomplete data. Why can't niche cloud providers deliver on the full promise of cloud? It's simple: you still need to complete business processes. You still need reporting that enables you to take action using data from multiple systems. You still have to comply with SOX and other industry regulations. These requirements don't go away just because you deploy in the cloud. Delivering lower up-front costs by enabling customers to buy software as a service (SaaS) is the easy part. To get real value that lasts longer than your quarterly report, it's important to realize the benefits of cloud without compromising on functionality, and while having the right level of control and flexibility. This is the true promise of cloud. Oracle's cloud strategy centers around delivering the benefits of cloud without compromise. We uniquely empower our customers with complete solutions and choice: from the richest functionality to integrated reporting and a great user experience, it's all available in the cloud. And it works not just with other Oracle cloud applications, but with your existing Oracle and third-party systems as well. This helps protect your current investments and extend their value as you journey to the cloud. We've made the necessary investments not only in our applications but also in the underlying technology that makes it all run, from the platform down to the hardware and operating system. We make it all. And we've engineered it to work together and be highly optimized for our customers, in the cloud. With Oracle enterprise-grade cloud applications, you get the benefits of cloud plus more power, more choice, and more confidence. Read more about how you can realize the true advantage of cloud with Oracle enterprise-grade cloud applications in the Oracle Executive Strategy Brief here. You can also attend an Oracle Cloud Conference event at a city near you. Register here.

    Read the article

  • Welcome to BlogEngine.NET 2.9 using Microsoft SQL Server

    If you see this post it means that BlogEngine.NET 2.9 is running and the hard part of creating your own blog is done. There are only a few things left to do.
    Write Permissions: To be able to log in to the blog and write posts, you need to enable write permissions on the App_Data folder. If your blog is hosted at a hosting provider, you can either log into your account's admin page or call the support line. You need write permissions on the App_Data folder because all posts, comments, and blog attachments are saved as XML files placed in the App_Data folder. If you wish to use a database to store your blog data, we still encourage you to enable this write access for any images you may wish to store for your blog posts. If you are interested in using Microsoft SQL Server, MySQL, SQL CE, or another database, please see the BlogEngine wiki to get started.
    Security: Once you have write permissions on the App_Data folder, you need to change the username and password. Find the sign-in link, located either at the bottom or top of the page depending on your current theme, and click it. Now enter "admin" in both the username and password fields and click the button. You will see an admin menu appear; it has a link to the "Users" admin page, from which you can change the username and password. Passwords are hashed by default, so if you lose your password, please see the BlogEngine wiki for information on recovery.
    Configuration and Profile: Now that you have your blog secured, take a look through the settings and give your new blog a title. BlogEngine.NET 2.9 is set up to take full advantage of many semantic formats and technologies, such as FOAF, SIOC, and APML. This means the content stored in your BlogEngine.NET installation will be fully portable and auto-discoverable. Be sure to fill in your author profile to take better advantage of this.
    Themes, Widgets & Extensions: One last thing to consider is customizing the look of your blog. We have a few themes available right out of the box, including two fully set up to use our new widget framework. The widget framework allows drag-and-drop placement on your sidebar, as well as editing and configuration right in the widget while you are logged in. Extensions allow you to extend and customize the behavior of your blog. Be sure to check the BlogEngine.NET Gallery at dnbegallery.org as the go-to location for downloading widgets, themes, and extensions.
    On the Web: You can find BlogEngine.NET on the official website. There you'll find tutorials, documentation, tips and tricks, and much more. The ongoing development of BlogEngine.NET can be followed at CodePlex, where the daily builds are published for anyone to download. Again, new themes, widgets, and extensions can be downloaded at the BlogEngine.NET gallery. Good luck and happy writing. The BlogEngine.NET team

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Your boss says you need to? To give you a warm fuzzy feeling inside? Obfuscating code correctly can be tricky; it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear - there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating.

    Security through obfuscation? Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack - software, hardware, and the internet connection - can you have even a chance of being uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time. So, then, why use obfuscation? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application - all of which you need to understand in order to grasp how the application operates. This is completely different from cracking an application, where you simply have to find a single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing. However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, or else it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation. It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the new protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and reconstruct the original structure from. So then, by all means, obfuscate your application if you want to protect its algorithms and what the application does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time.
The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones. Cross-posted from Simple Talk.

    Read the article

  • Custom Text and Binary Payloads using WebSocket (TOTD #186)

    - by arungupta
    TOTD #185 explained how to process text and binary payloads in a WebSocket endpoint. In summary, a text payload may be received as:

        public void receiveTextMessage(String message) {
            . . .
        }

    And a binary payload may be received as:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    As you realize, both of these methods receive the text and binary data in raw format. However, you may like to receive and send the data using a POJO. This marshaling and unmarshaling can be done in the method implementation, but the JSR 356 API provides a cleaner way. For encoding and decoding a text payload into a POJO, the Decoder.Text (for inbound payload) and Encoder.Text (for outbound payload) interfaces need to be implemented. The sample implementation below shows how a text payload consisting of JSON structures can be encoded and decoded:

        public class MyMessage implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {

            private JsonObject jsonObject;

            @Override
            public MyMessage decode(String string) throws DecodeException {
                this.jsonObject = new JsonReader(new StringReader(string)).readObject();
                return this;
            }

            @Override
            public boolean willDecode(String string) {
                return true;
            }

            @Override
            public String encode(MyMessage myMessage) throws EncodeException {
                return myMessage.jsonObject.toString();
            }

            public JsonObject getObject() {
                return jsonObject;
            }
        }

    In this implementation, the decode method decodes the incoming text payload to MyMessage, the encode method encodes MyMessage for the outgoing text payload, and the willDecode method returns true or false depending on whether the message can be decoded. The encoder and decoder implementation classes need to be specified in the WebSocket endpoint as:

        @WebSocketEndpoint(value="/endpoint",
                           encoders={MyMessage.class},
                           decoders={MyMessage.class})
        public class MyEndpoint {
            public MyMessage receiveMessage(MyMessage message) {
                . . .
            }
        }

    Notice the updated method signature, where the application is working with MyMessage instead of the raw string. Note that the encoder and decoder implementations just illustrate the point and provide no validation or exception handling. Similarly, the Encoder.Binary and Decoder.Binary interfaces need to be implemented for encoding and decoding binary payloads. Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds); TOTD #183 - Getting Started with WebSocket in GlassFish; TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark; TOTD #185 - Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket. Subsequent blogs will discuss the following topics (not necessarily in that order): error handling, interface-driven WebSocket endpoints, the Java client API, client and server configuration, security, subprotocols, extensions, and other topics from the API.

    Read the article

  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions:
    - ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll().
    - ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    - ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    - SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    - A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios.
    - No need for a custom service host factory or the federation behavior anymore: WCF can be switched into "WIF mode" with the useIdentityConfiguration switch (odd name though).
    - Tooling has become better, and the new test STS makes it very easy to get started.
    On the other hand - and that was kind of expected - to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:
    - The assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    - No IClaimsPrincipal and IClaimsIdentity anymore.
    - The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    - Claim.ClaimType is now Claim.Type.
    - ClaimCollection is now IEnumerable<Claim>.
    - IsSessionMode is now IsReferenceMode.
    - Bootstrap token handling is different now.
    - ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here).
    - Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    - SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    - Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    - The WCF WS-Trust bindings are gone. I think this is a pity; they were *really* useful when doing work with WSTrustChannelFactory.
    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you want to make the move.
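    To illustrate the new query methods, here is a minimal C# sketch (the claim values are made up for the example):

        using System;
        using System.Security.Claims;

        class ClaimsDemo
        {
            static void Main()
            {
                var identity = new ClaimsIdentity(new[]
                {
                    new Claim(ClaimTypes.Name, "alice"),
                    new Claim(ClaimTypes.Role, "admin"),
                    new Claim(ClaimTypes.Role, "user")
                }, "custom");

                var principal = new ClaimsPrincipal(identity);

                // The query methods work across all contained identities,
                // and ClaimsPrincipal.Current replaces the old
                // Thread.CurrentPrincipal casting dance.
                if (principal.HasClaim(ClaimTypes.Role, "admin"))
                {
                    Claim name = principal.FindFirst(ClaimTypes.Name);
                    Console.WriteLine(name.Value); // alice
                }

                foreach (Claim role in principal.FindAll(ClaimTypes.Role))
                    Console.WriteLine(role.Value); // admin, user
            }
        }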

    Read the article

  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in WordPress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients' bandwidth requirements are low).

    Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it.

    It seems like all of the popular options (Heroku, etc.) have nice automation with git and are set up in the "Rails way". I get that (sort of). But it's terribly expensive: a single dyno, a helper, and the cheapest database that isn't limited to 5 MB (which they say is mainly suitable for testing) runs $51. This is for ONE app! Throw in a "production" DB and you're over $200. That's about the same price as getting a server somewhere, right?

    Meanwhile, going back to what I guess is a "traditional" hosting environment with HostGator, their server only has Ruby 1.8.7 and Rails 2.3.5 - no Rails 3. And no Passenger (not that I really understand the difference between CGI and mod_rails or whatever, but they say Passenger is the simplest). So am I to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static HTML and/or PHP stuff. So what now? How do I get all of this under one simple (and affordable) roof?

    Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but it entails learning server admin stuff and security, and it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) plan may be inadequate for large-scale apps that use a lot of bandwidth. But what about those of us who are building real (but small and low-bandwidth) apps with Rails and who want to deploy them simply and cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical, I'm just trying to learn here. So:

    1) I'd appreciate it if someone could give me a good rundown of how to understand deployment in Rails vs. PHP.
    2) I'd appreciate it if someone could address my issue with running a hosting/web business around reseller hosting (HostGator) while also being able to host Rails apps. Can it be done? And how can a company like HostGator completely ignore what's current in Rails/Ruby?

    Thanks.

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views. The application allows users to register a "domain", which gives them a unique URL containing the domain name and access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, there are some administrative reports that include all domains, etc.).

    The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind:

    Before

    in models.py:

    class User(Document):
        domain = StringProperty()

    class Group(Document):
        domain = StringProperty()
        name = StringProperty()
        user_ids = StringListProperty()

        # method that returns a related document set
        def users(self):
            return [User.get(id) for id in self.user_ids]

        # method that queries a couch view optimized for a specific lookup
        @classmethod
        def by_name(cls, domain, name):
            # the view method is provided by couchdbkit; it handles wrapping
            # JSON CouchDB results as Python objects and can take various
            # parameters modifying behavior
            return cls.view('groups/by_name', key=[domain, name])

        # method that creates a related document
        def get_new_user(self):
            user = User(domain=self.domain)
            user.save()
            self.user_ids.append(user._id)
            return user

    in views.py:

    from models import User, Group

    # there are tons of views like this, (request, domain, ...)
    def create_new_user_in_group(request, domain, group_name):
        group = Group.by_name(domain, group_name)[0]
        user = User(domain=domain)
        user.save()
        group.user_ids.append(user._id)
        group.save()

    in group/by_name/map.js:

    function (doc) {
        if (doc.doc_type == "Group") {
            emit([doc.domain, doc.name], null);
        }
    }

    After

    models.py:

    class DomainDocument(Document):
        domain = StringProperty()

        @classmethod
        def domain_view(cls, *args, **kwargs):
            kwargs['key'] = [cls.domain.default] + kwargs['key']
            return super(DomainDocument, cls).view(*args, **kwargs)

        @classmethod
        def get(cls, validate_domain=True, *args, **kwargs):
            ret = super(DomainDocument, cls).get(*args, **kwargs)
            if validate_domain and ret.domain != cls.domain.default:
                raise Exception()
            return ret

        def models(self):
            # a mapping of all models in the application; accessing one
            # returns the equivalent of:
            #
            #     class BoundUser(User):
            #         domain = StringProperty(default=self.domain)
            ...

    class User(DomainDocument):
        pass

    class Group(DomainDocument):
        name = StringProperty()
        user_ids = StringListProperty()

        def users(self):
            return [self.models.User.get(id) for id in self.user_ids]

        @classmethod
        def by_name(cls, name):
            return cls.domain_view('groups/by_name', key=[name])

        def get_new_user(self):
            user = self.models.User()
            user.save()
            self.user_ids.append(user._id)
            return user

    views.py:

    # @domain_view is a decorator that sets request.models to the same sort
    # of object that is returned by DomainDocument.models and removes the
    # domain argument from the URL router
    @domain_view
    def create_new_user_in_group(request, group_name):
        group = request.models.Group.by_name(group_name)
        user = request.models.User()
        user.save()
        group.user_ids.append(user._id)
        group.save()

    group/by_name/map.js (might be better to leave the abstraction leaky here, in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key, or some other similar solution):

    function (doc) {
        if (doc.doc_type == "Group") {
            emit([doc.name], null);
        }
    }

    Pros and Cons

    So what are the pros and cons of this? A language-neutral sketch of the idea follows after the lists.

    Pros:

    - DRYer
    - prevents you from creating related documents but forgetting to set the domain
    - prevents you from accidentally writing a Django view / couch view execution path that leads to a security breach
    - doesn't prevent you from accessing the underlying self.domain and the normal Document.view() method
    - potentially gets rid of the need for a lot of sanity checks verifying that two documents whose domains we expect to be equal actually are

    Cons:

    - adds some complexity
    - hides what's really happening
    - requires that no two model modules have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine, because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique.
    - removes explicit dependency documentation (from group.models import Group)
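    The approach described here - binding the domain to every read, write, and view query so individual call sites cannot forget it - resembles what multi-tenant designs often call tenant scoping, and it is not Python-specific. Here is a minimal sketch of the same idea in Java; every type and name in it is a hypothetical illustration of the technique, not part of couchdbkit or the poster's code:

    import java.util.List;

    // Hypothetical store interface standing in for CouchDB access.
    interface DocumentStore {
        List<Doc> view(String viewName, Object... key);
        Doc get(String id);
        void save(Doc doc);
    }

    class Doc {
        String id;
        String domain;
    }

    // The "bound" accessor: every query gets the domain prepended to its
    // key, every fetch is validated, and every new document is stamped,
    // so call sites can no longer forget the domain.
    class DomainScope {
        private final String domain;
        private final DocumentStore store;

        DomainScope(String domain, DocumentStore store) {
            this.domain = domain;
            this.store = store;
        }

        List<Doc> view(String viewName, Object... key) {
            Object[] domainKey = new Object[key.length + 1];
            domainKey[0] = domain;                 // same trick as domain_view above
            System.arraycopy(key, 0, domainKey, 1, key.length);
            return store.view(viewName, domainKey);
        }

        Doc get(String id) {
            Doc doc = store.get(id);
            if (!domain.equals(doc.domain)) {      // same check as the overridden get()
                throw new IllegalStateException("document belongs to another domain");
            }
            return doc;
        }

        Doc newDocument() {
            Doc doc = new Doc();
            doc.domain = domain;                   // stamping replaces manual assignment
            store.save(doc);
            return doc;
        }
    }

    The design choice is the same one weighed in the pros and cons: callers see a narrower, safer API at the cost of hiding the domain plumbing.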

    Read the article

  • How Data Guard detects and resolves a redo gap

    - by JaneZhang
    (Translated from Chinese; the original text was partially lost to mis-encoding, so the prose is reconstructed from the surrounding structure and logs.)

    In Oracle Data Guard, a redo gap arises when the standby fails to receive a range of redo from the primary, for example after a network outage; the missing redo must be fetched before managed recovery can continue. The processes involved are:

    - ARC: the archiver process, which ships archived redo from the primary
    - MRP: the Media Recovery Process, which applies redo on the standby
    - RFS: the Remote File Server process, which receives redo on the standby
    - FAL: Fetch Archive Log, the mechanism the standby uses to request missing archived logs

    Purpose: manually create a gap and observe how it is detected and resolved.
    Environment: Oracle 11.2.0.2 on Linux 5.

    Steps:

    1. Check the current maximum log sequence on both databases:

    Primary:  MAX(SEQUENCE#) = 86
    Standby:  MAX(SEQUENCE#) = 86

    2. Break the network connection to create a gap. On the standby host, take the network interface down:

    # ifconfig eth0 down

    Then switch logfiles repeatedly on the primary:

    SQL> alter system switch logfile;
    SQL> alter system switch logfile;
    ...

    Primary: MAX(SEQUENCE#) = 96

    The primary alert log reports errors shipping redo to the standby:

    TNS-00513: Destination host unreachable
       nt secondary err code: 101
       nt OS err code: 0
    Error 12543 received logging on to the standby
    FAL[server, ARCp]: Error 12543 creating remote archivelog file 'STANDBY'
    FAL[server, ARCp]: FAL archive failed, see trace file.
    ARCH: FAL archive failed. Archiver continuing
    ORACLE Instance orcl - Archival Error. Archiver continuing.

    3. On the primary, move some of the archived logs generated in the meantime to another directory, so that they cannot all be shipped automatically once the network returns:

    mv *.arc ../

    4. Restore the network on the standby host:

    # ifconfig eth0 up

    5. Once connectivity returns, the ARC processes ship the available logs, and the standby detects the missing range and starts gap fetching. From the standby alert log:

    Thu Mar 29 19:58:49 2012
    Media Recovery Waiting for thread 1 sequence 87 (in transit)   <== recovery waits, starting at 87
    ...
    Thu Mar 29 20:08:45 2012
    ...
    Media Recovery Waiting for thread 1 sequence 94
    Thu Mar 29 20:11:01 2012
    RFS[61]: Assigned to RFS process 13643
    RFS[61]: Opened log for thread 1 sequence 97 dbid 1285401128 branch 757620395
    Archived Log entry 80 added for thread 1 sequence 97 rlc 757620395 ID 0x4c9d8928 dest 2:
    Thu Mar 29 20:11:02 2012
    RFS[62]: Assigned to RFS process 13645
    RFS[62]: Selected log 4 for thread 1 sequence 98 dbid 1285401128 branch 757620395
    Thu Mar 29 20:11:02 2012
    Primary database is in MAXIMUM PERFORMANCE mode
    Re-archiving standby log 4 thread 1 sequence 98
    Thu Mar 29 20:11:02 2012
    Archived Log entry 81 added for thread 1 sequence 98 ID 0x4c9d8928 dest 1:
    RFS[63]: Assigned to RFS process 13647
    RFS[63]: Selected log 4 for thread 1 sequence 99 dbid 1285401128 branch 757620395
    Thu Mar 29 20:11:05 2012
    Fetching gap sequence in thread 1, gap sequence 94-96   <== the gap is detected

    6. The MRP trace shows the MRP fetching the gap:

    *** 2012-03-29 20:08:45.375 4265 krsh.c
    Media Recovery Waiting for thread 1 sequence 94
    *** 2012-03-29 20:11:05.543
    *** 2012-03-29 20:11:05.543 4265 krsh.c
    Fetching gap sequence in thread 1, gap sequence 94-96   <== the MRP requests the gap
    Redo shipping client performing standby login
    *** 2012-03-29 20:11:05.593 4595 krsu.c
    Logged on to standby successfully
    Client logon and security negotiation successful!

    7. After the missing logs arrive, the RFS processes write them on the standby and the MRP applies them:

    Thu Mar 29 20:12:06 2012
    RFS[64]: Assigned to RFS process 13649
    RFS[64]: Opened log for thread 1 sequence 94 dbid 1285401128 branch 757620395
    Archived Log entry 82 added for thread 1 sequence 94 rlc 757620395 ID 0x4c9d8928 dest 2:
    Thu Mar 29 20:12:06 2012
    RFS[65]: Assigned to RFS process 13651
    RFS[65]: Opened log for thread 1 sequence 95 dbid 1285401128 branch 757620395
    Thu Mar 29 20:12:06 2012
    RFS[66]: Assigned to RFS process 13653
    RFS[66]: Opened log for thread 1 sequence 96 dbid 1285401128 branch 757620395
    Archived Log entry 83 added for thread 1 sequence 95 rlc 757620395 ID 0x4c9d8928 dest 2:
    Archived Log entry 84 added for thread 1 sequence 96 rlc 757620395 ID 0x4c9d8928 dest 2:
    Thu Mar 29 20:12:16 2012
    Media Recovery Log /home/oracle/arch1/standby/1_94_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_95_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_96_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_97_757620395.arc
    Media Recovery Log /home/oracle/arch1/standby/1_98_757620395.arc

    Conclusion: under normal circumstances, once the network recovers, the ARC processes detect and resolve the gap by re-shipping the missing archived logs; only if that fails does the MRP, upon needing the next log to apply, fetch the gap itself via FAL. (Note: in 11g, the ARC processes on the primary work together with the RFS and MRP processes on the standby to detect and resolve gaps automatically.)

    8. If the MRP does have to fetch the gap via FAL and that fails, the MRP trace records the FAL client error:

    *** 2012-03-29 21:18:15.964 4265 krsh.c
    Error 1031 received logging on to the standby
    *** 2012-03-29 21:18:15.964 4265 krsh.c
    FAL[client, MRP0]: Error 1031 connecting to PRIMARY for fetching gap sequence

    Read the article

  • [OTN Japan portal digest (original Japanese title lost to mis-encoding)]

    - by Yusuke.Yamamoto
    This entry is an OTN Japan portal digest whose Japanese text did not survive re-encoding; only fragments remain legible. The recoverable items are event links (Oracle OpenWorld Tokyo 2012, JavaOne Tokyo 2012), download links (Oracle Database 11g R2, Oracle WebLogic Server 11g R1, Oracle Enterprise Manager 11g R1), and link lists covering Oracle Database features (RAC, Data Guard/Active Data Guard, Partitioning, RMAN, Advanced Compression, Advanced Security, APEX, ASM, TimesTen In-Memory Database, Exadata Database Machine, Database Appliance, Database Firewall, SQL Developer), Oracle Fusion Middleware (Java, Oracle Coherence, Oracle Data Integrator, Oracle GoldenGate, Oracle JRockit JVM, Oracle WebLogic Server), Oracle Linux, Oracle Solaris (DTrace, ZFS), Oracle VM Server for x86, MySQL, and OTN community resources.

    Read the article

  • tightvnc authentication failure

    - by broiyan
    When I run a TightVNC client to establish a VNC session, I sometimes receive an error message that suggests there are repeated failed VNC login attempts or a brute-force attack. The message dialog title is "unsupported security type" and the text content is "too many authentication failures, try another connection? yes/no".

    This problem goes away if I reboot the Ubuntu server, reload the VNC server program, and try again. From that point, it will work for multiple VNC sessions. My VNC sessions are typically about 20 minutes. At some time in the future the problem may recur, so it seems correlated with how long the server has been up or how long tightvnc has been loaded. Typically it takes only a day or so before the problem comes back.

    I am using tightvnc 1.3 on a server running Ubuntu 12.04. The version of vncserver is rather dated because that seems to be all that is available from tightvnc for Linux servers. On the client side I use the newest Java-based VNC client (version 2.5) for both Windows access and Ubuntu access. All my VNC sessions are via SSH. I am the only user and I will typically use only the same client computer. How can I stop this problem from recurring?

    Edit: I found the log file. This is a small excerpt of what I am seeing. Essentially, various IPs, not my own, are attempting to connect. What is the practical solution for this?

    05/06/12 20:07:32 Got connection from client 69.194.204.90
    05/06/12 20:07:32 Non-standard protocol version 3.4, using 3.3 instead
    05/06/12 20:07:32 Too many authentication failures - client rejected
    05/06/12 20:07:32 Client 69.194.204.90 gone
    05/06/12 20:07:32 Statistics:
    05/06/12 20:07:32   framebuffer updates 0, rectangles 0, bytes 0
    05/06/12 20:24:56 Got connection from client 79.161.16.40
    05/06/12 20:24:56 Non-standard protocol version 3.4, using 3.3 instead
    05/06/12 20:24:56 Too many authentication failures - client rejected
    05/06/12 20:24:56 Client 79.161.16.40 gone
    05/06/12 20:24:56 Statistics:
    05/06/12 20:24:56   framebuffer updates 0, rectangles 0, bytes 0
    05/06/12 20:29:27 Got connection from client 109.230.246.54
    05/06/12 20:29:27 Non-standard protocol version 3.4, using 3.3 instead
    05/06/12 20:29:28 rfbVncAuthProcessResponse: authentication failed from 109.230.246.54
    05/06/12 20:29:28 Client 109.230.246.54 gone
    05/06/12 20:29:28 Statistics:
    05/06/12 20:29:28   framebuffer updates 0, rectangles 0, bytes 0
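    Editor's aside on the log excerpt: the usual remedies for this pattern are to stop exposing the VNC port to the internet at all (bind it to localhost and connect only through the SSH tunnel already in use) or to ban repeat offenders, fail2ban-style. As a toy illustration of the counting half of the second approach, here is a sketch that tallies rejected connections per IP from a log in the format shown above; the log path and ban threshold are assumptions:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy fail2ban-style counter: reads a TightVNC log and reports IPs with
    // repeated authentication failures. Path and threshold are illustrative.
    public class VncFailureCounter {
        private static final Pattern CLIENT =
                Pattern.compile("Got connection from client (\\d+\\.\\d+\\.\\d+\\.\\d+)");

        public static void main(String[] args) throws IOException {
            Map<String, Integer> failures = new HashMap<>();
            String lastClient = null;

            for (String line : Files.readAllLines(Paths.get("/var/log/vnc.log"))) {
                Matcher m = CLIENT.matcher(line);
                if (m.find()) {
                    lastClient = m.group(1);   // remember who just connected
                } else if (lastClient != null
                        && (line.contains("authentication failed")
                            || line.contains("client rejected"))) {
                    failures.merge(lastClient, 1, Integer::sum);
                }
            }

            // Print candidates for a firewall ban (threshold of 3 is arbitrary).
            failures.forEach((ip, count) -> {
                if (count >= 3) {
                    System.out.println(ip + " failed " + count + " times");
                }
            });
        }
    }

    In practice you would feed the offending IPs to iptables or hosts.deny rather than just printing them.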

    Read the article

  • Group Policy error 1006 with an error code 52

    - by Bernesto
    I have a Hyper-V cluster running on Windows Server 2008 R2 in a 2003 forest. These servers are at our NOC, with a DC that connects to our PDC at HQ via a persistent VPN. The cluster boxes are reporting the event ID 1006 error shown below. The DC is also reporting an error 5805, also shown below.

    I have found numerous posts regarding 1006 errors, but none for error code 52. It's weird: I can ping, and I can browse network shares on the DC from each box. I thought it might be a DNS or network issue, but nslookup works too.

    Event 1006:

    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-GroupPolicy" Guid="{AEA1B4FA-97D1-45F2-A64C-4D69FFFD92C9}" />
        <EventID>1006</EventID>
        <Version>0</Version>
        <Level>2</Level>
        <Task>0</Task>
        <Opcode>1</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2013-12-17T00:08:19.582292600Z" />
        <EventRecordID>41808</EventRecordID>
        <Correlation ActivityID="{26B10592-6228-4A3E-845B-E04B49702A54}" />
        <Execution ProcessID="964" ThreadID="1384" />
        <Channel>System</Channel>
        <Computer>NEOREEFVH1.neoreef.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="SupportInfo1">1</Data>
        <Data Name="SupportInfo2">5012</Data>
        <Data Name="ProcessingMode">0</Data>
        <Data Name="ProcessingTimeInMilliseconds">1138</Data>
        <Data Name="ErrorCode">52</Data>
        <Data Name="ErrorDescription">Unavailable</Data>
        <Data Name="DCName" />
      </EventData>
    </Event>

    Event 5805:

    Event Type: Error
    Event Source: NETLOGON
    Event Category: None
    Event ID: 5805
    Date: 12/16/2013
    Time: 2:32:01 PM
    User: N/A
    Computer: NEOREEFSRV15
    Description:
    The session setup from the computer NEOREEFVH3 failed to authenticate. The following error occurred: Access is denied.
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Data:
    0000: 22 00 00 c0    "..À

    Here are the networks on the hosts (screenshot omitted): any adapter marked "Enabled" is a virtual switch.

    Read the article

  • apache eats up too much ram per child

    - by mrc4r7m4n
    Hello everyone. I've got the following problem: Apache eats too much RAM per child. The following commands show the situation:

    1. cat /etc/redhat-release -- Fedora release 8 (Werewolf)

    2. free -m:

                 total       used       free     shared    buffers     cached
    Mem:          3566       3136        429          0        339       1907
    -/+ buffers/cache:        889       2676
    Swap:         4322          0       4322

    I know you will say there is nothing to worry about because swap is not used, but I think it's just not used yet.

    3. httpd -v: Server version: Apache/2.2.14 (Unix)

    4. httpd -l, compiled-in modules:

    core.c mod_authn_file.c mod_authn_default.c mod_authz_host.c mod_authz_groupfile.c mod_authz_user.c mod_authz_default.c mod_auth_basic.c mod_include.c mod_filter.c mod_log_config.c mod_env.c mod_setenvif.c mod_version.c mod_ssl.c prefork.c http_core.c mod_mime.c mod_status.c mod_autoindex.c mod_asis.c mod_cgi.c mod_negotiation.c mod_dir.c mod_actions.c mod_userdir.c mod_alias.c mod_rewrite.c mod_so.c

    5. Loaded dynamic modules:

    LoadModule authz_host_module modules/mod_authz_host.so
    LoadModule include_module modules/mod_include.so
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule setenvif_module modules/mod_setenvif.so
    LoadModule mime_module modules/mod_mime.so
    LoadModule autoindex_module modules/mod_autoindex.so
    LoadModule vhost_alias_module modules/mod_vhost_alias.so
    LoadModule negotiation_module modules/mod_negotiation.so
    LoadModule dir_module modules/mod_dir.so
    LoadModule alias_module modules/mod_alias.so
    LoadModule rewrite_module modules/mod_rewrite.so
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule cgi_module modules/mod_cgi.so

    6. My prefork directive:

    <IfModule prefork.c>
        StartServers 8
        MinSpareServers 5
        MaxSpareServers 25
        ServerLimit 80
        MaxClients 80
        MaxRequestsPerChild 4000
    </IfModule>
    KeepAliveTimeout 6
    MaxKeepAliveRequests 100
    KeepAlive On

    7. top -u apache, sorted by memory (Shift+M):

    top - 09:19:42 up 2 days, 19 min, 2 users, load average: 0.85, 0.87, 0.80
    Tasks: 113 total, 1 running, 112 sleeping, 0 stopped, 0 zombie
    Cpu(s): 7.3%us, 15.7%sy, 0.0%ni, 75.7%id, 0.0%wa, 0.7%hi, 0.7%si, 0.0%st
    Mem: 3652120k total, 3149964k used, 502156k free, 348048k buffers
    Swap: 4425896k total, 0k used, 4425896k free, 1944952k cached

      PID USER   PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
    16956 apache 20 0 700m 135m 100m S  0.0  3.8 2:16.78 httpd
    16953 apache 20 0 565m 130m  96m S  0.0  3.7 1:57.26 httpd
    16957 apache 20 0 587m 129m 102m S  0.0  3.6 1:47.41 httpd
    16955 apache 20 0 567m 126m  93m S  0.0  3.6 1:43.60 httpd
    17494 apache 20 0 626m 125m  96m S  0.0  3.5 1:58.77 httpd
    17515 apache 20 0 540m 120m  88m S  0.0  3.4 1:45.57 httpd
    17516 apache 20 0 573m 120m  88m S  0.0  3.4 1:50.51 httpd
    16954 apache 20 0 551m 120m  88m S  0.0  3.4 1:52.47 httpd
    17493 apache 20 0 586m 120m  94m S  0.0  3.4 1:51.02 httpd
    17279 apache 20 0 568m 117m  87m S 16.0  3.3 1:51.87 httpd
    17302 apache 20 0 560m 116m  90m S  0.3  3.3 1:59.06 httpd
    17495 apache 20 0 551m 116m  89m S  0.0  3.3 1:47.51 httpd
    17277 apache 20 0 476m 114m  81m S  0.0  3.2 1:37.14 httpd
    30097 apache 20 0 536m 113m  83m S  0.0  3.2 1:47.38 httpd
    30112 apache 20 0 530m 112m  81m S  0.0  3.2 1:40.15 httpd
    17513 apache 20 0 516m 112m  85m S  0.0  3.1 1:43.92 httpd
    16958 apache 20 0 554m 111m  82m S  0.0  3.1 1:44.18 httpd
     1617 apache 20 0 487m 111m  85m S  0.0  3.1 1:31.67 httpd
    16952 apache 20 0 461m 107m  75m S  0.0  3.0 1:13.71 httpd
    16951 apache 20 0 462m 103m  76m S  0.0  2.9 1:28.05 httpd
    17278 apache 20 0 497m 103m  76m S  0.0  2.9 1:31.25 httpd
    17403 apache 20 0 537m 102m  79m S  0.0  2.9 1:52.24 httpd
    25081 apache 20 0 412m 101m  70m S  0.0  2.8 1:01.74 httpd

    I guess that's all the information needed to help me solve this problem. I think the virtual memory is too big, and the same goes for the resident memory. The RAM consumption is increasing all the time. Maybe it's a memory leak, since I see so many static modules compiled in. Could someone help me with this issue? Thank you in advance.

    8. ldd /usr/sbin/httpd:

    linux-gate.so.1 => (0x0012d000)
    libm.so.6 => /lib/libm.so.6 (0x0012e000)
    libpcre.so.0 => /lib/libpcre.so.0 (0x00157000)
    libselinux.so.1 => /lib/libselinux.so.1 (0x0017f000)
    libaprutil-1.so.0 => /usr/lib/libaprutil-1.so.0 (0x0019a000)
    libcrypt.so.1 => /lib/libcrypt.so.1 (0x001b4000)
    libldap-2.3.so.0 => /usr/lib/libldap-2.3.so.0 (0x001e6000)
    liblber-2.3.so.0 => /usr/lib/liblber-2.3.so.0 (0x00220000)
    libdb-4.6.so => /lib/libdb-4.6.so (0x0022e000)
    libexpat.so.1 => /lib/libexpat.so.1 (0x00370000)
    libapr-1.so.0 => /usr/lib/libapr-1.so.0 (0x00391000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x003b9000)
    libdl.so.2 => /lib/libdl.so.2 (0x003d2000)
    libc.so.6 => /lib/libc.so.6 (0x003d7000)
    /lib/ld-linux.so.2 (0x00110000)
    libuuid.so.1 => /lib/libuuid.so.1 (0x00530000)
    libresolv.so.2 => /lib/libresolv.so.2 (0x00534000)
    libsasl2.so.2 => /usr/lib/libsasl2.so.2 (0x00548000)
    libssl.so.6 => /lib/libssl.so.6 (0x00561000)
    libcrypto.so.6 => /lib/libcrypto.so.6 (0x005a6000)
    libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x006d9000)
    libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00707000)
    libcom_err.so.2 => /lib/libcom_err.so.2 (0x0079a000)
    libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x0079d000)
    libz.so.1 => /lib/libz.so.1 (0x007c3000)
    libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x007d6000)
    libkeyutils.so.1 => /lib/libkeyutils.so.1 (0x007df000)

    Currently I can't restart Apache. I work in a company and it's rush hour right now; I will do that about 5 pm. Current top -u apache (Shift+M):

    top - 12:31:33 up 2 days, 3:30, 1 user, load average: 0.73, 0.80, 0.79
    Tasks: 114 total, 1 running, 113 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.3%us, 4.7%sy, 0.0%ni, 90.0%id, 1.3%wa, 0.3%hi, 0.3%si, 0.0%st
    Mem: 3652120k total, 3169720k used, 482400k free, 353372k buffers
    Swap: 4425896k total, 0k used, 4425896k free, 1978688k cached

      PID USER   PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
    16957 apache 20 0 708m 145m 117m S  0.0  4.1 2:11.32 httpd
    16956 apache 20 0 754m 142m 107m S  0.0  4.0 2:33.94 httpd
    16955 apache 20 0 641m 136m 103m S  5.3  3.8 1:58.37 httpd
    17515 apache 20 0 624m 131m  99m S  0.0  3.7 2:03.90 httpd
    16954 apache 20 0 627m 130m  98m S  0.0  3.6 2:13.87 httpd
    17302 apache 20 0 625m 124m  97m S  0.0  3.5 2:10.80 httpd
    17403 apache 20 0 624m 114m  91m S  0.0  3.2 2:08.85 httpd
    16952 apache 20 0 502m 114m  81m S  0.0  3.2 1:23.78 httpd
    16186 apache 20 0 138m  61m  35m S  0.0  1.7 0:15.54 httpd
    16169 apache 20 0 111m  49m  17m S  0.0  1.4 0:06.00 httpd
    16190 apache 20 0 126m  48m  24m S  0.0  1.4 0:11.44 httpd
    16191 apache 20 0 109m  48m  19m S  0.0  1.4 0:04.62 httpd
    16163 apache 20 0 114m  48m  21m S  0.0  1.4 0:09.60 httpd
    16183 apache 20 0 127m  48m  23m S  0.0  1.3 0:11.23 httpd
    16189 apache 20 0 109m  47m  17m S  0.0  1.3 0:04.55 httpd
    16201 apache 20 0 106m  47m  17m S  0.0  1.3 0:03.90 httpd
    16193 apache 20 0 103m  46m  20m S  0.0  1.3 0:10.76 httpd
    16188 apache 20 0 107m  45m  18m S  0.0  1.3 0:04.85 httpd
    16168 apache 20 0 103m  44m  17m S  0.0  1.2 0:05.61 httpd
    16187 apache 20 0 118m  41m  21m S  0.0  1.2 0:08.50 httpd
    16184 apache 20 0 111m  41m  19m S  0.0  1.2 0:09.28 httpd
    16206 apache 20 0 110m  41m  20m S  0.0  1.2 0:11.69 httpd
    16199 apache 20 0 108m  40m  17m S  0.0  1.1 0:07.76 httpd
    16166 apache 20 0 104m  37m  18m S  0.0  1.0 0:04.31 httpd
    16185 apache 20 0 99.3m 36m  16m S  0.0  1.0 0:04.16 httpd

    As you can see, the memory usage grows, e.g. RES from 135m to 145m, and it will keep growing until memory runs out. Are you sure the options I set up (the prefork directive shown in point 6 above) are correct? Maybe I should decrease some of them? (A back-of-envelope sizing check follows below.)

    Another question that bothers me: I have, for example, the static module mod_negotiation.c compiled into Apache, and the same module loaded as dynamic. Is it normal that I've loaded a duplicated module? When I try to remove the dynamic module (mod_negotiation.c) from httpd.conf and restart Apache, an error appears. I can't tell you the error message right now because I can't restart Apache :(

    Hello again :) This is the memory usage just after restarting Apache:

    top - 16:19:12 up 2 days, 7:18, 3 users, load average: 1.08, 0.91, 0.91
    Tasks: 109 total, 2 running, 107 sleeping, 0 stopped, 0 zombie
    Cpu(s): 17.0%us, 25.7%sy, 51.0%ni, 4.7%id, 0.0%wa, 0.3%hi, 1.3%si, 0.0%st
    Mem: 3652120k total, 2762516k used, 889604k free, 361552k buffers
    Swap: 4425896k total, 0k used, 4425896k free, 2020980k cached

      PID USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
    13569 apache 20 0 93416  43m  15m S  0.0  1.2 0:02.55 httpd
    13575 apache 20 0 98356  38m  16m S 32.3  1.1 0:02.55 httpd
    13571 apache 20 0 86808  33m  12m S  0.0  0.9 0:02.60 httpd
    13568 apache 20 0 86760  33m  12m S  0.0  0.9 0:00.81 httpd
    13570 apache 20 0 83480  33m  12m S  0.0  0.9 0:00.51 httpd
    13572 apache 20 0 63520 5916 1548 S  0.0  0.2 0:00.02 httpd
    13573 apache 20 0 63520 5916 1548 S  0.0  0.2 0:00.02 httpd
    13574 apache 20 0 63520 5916 1548 S  0.0  0.2 0:00.02 httpd
    13761 apache 20 0 63388 5128  860 S  0.0  0.1 0:00.01 httpd
    13762 apache 20 0 63388 5128  860 S  0.0  0.1 0:00.01 httpd
    13763 apache 20 0 63388 5128  860 S  0.0  0.1 0:00.00 httpd

    I will try to compile Apache from source to the newest version. Thanks for the help, guys.
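    Editor's aside on the sizing question: a common sanity check for prefork is to multiply the worst-case per-child resident size by MaxClients and compare it with the physical RAM left after the OS. A minimal sketch with the figures from the top output above (the headroom figure is an assumption, and counting full RSS per child ignores shared pages, so the real ceiling is somewhat higher):

    // Back-of-envelope prefork sizing: how many ~120 MB children fit in RAM?
    // All figures are illustrative assumptions drawn from the post's top output.
    public class PreforkSizing {
        public static void main(String[] args) {
            double totalRamMb    = 3566; // free -m "total" from the post
            double reservedMb    = 600;  // assumed headroom for the OS and other daemons
            double perChildRssMb = 120;  // worst-case per-child RES seen in top

            int ceiling = (int) ((totalRamMb - reservedMb) / perChildRssMb);
            System.out.println("Conservative MaxClients ceiling: " + ceiling); // ~24

            // MaxClients 80 * 120 MB is roughly 9.4 GB worst case, far beyond
            // 3.5 GB of RAM, so either MaxClients must drop or
            // MaxRequestsPerChild must recycle children before they grow that large.
        }
    }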

    Read the article

  • "The system time has changed" events after waking from sleep

    - by Damir Arh
    Sometimes when my computer running Windows 7 wakes up from sleep, it has to adjust the time. When this happens, the following system event is logged:

    <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
      <System>
        <Provider Name='Microsoft-Windows-Kernel-General' Guid='{A68CA8B7-004F-D7B6-A698-07E2DE0F1F5D}'/>
        <EventID>1</EventID>
        <Version>0</Version>
        <Level>4</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000010</Keywords>
        <TimeCreated SystemTime='2010-03-06T19:09:57.500000000Z'/>
        <EventRecordID>10672</EventRecordID>
        <Correlation/>
        <Execution ProcessID='4' ThreadID='56'/>
        <Channel>System</Channel>
        <Computer>GAME</Computer>
        <Security/>
      </System>
      <EventData>
        <Data Name='NewTime'>2010-03-06T19:09:57.500000000Z</Data>
        <Data Name='OldTime'>2010-03-06T17:34:32.870117200Z</Data>
      </EventData>
      <RenderingInfo Culture='sl-SI'>
        <Message>The system time has changed to 2010-03-06T19:09:57.500000000Z from 2010-03-06T17:34:32.870117200Z.</Message>
        <Level>Information</Level>
        <Task></Task>
        <Opcode>Info</Opcode>
        <Channel>System</Channel>
        <Provider>Microsoft-Windows-Kernel-General</Provider>
        <Keywords>
          <Keyword>Time</Keyword>
        </Keywords>
      </RenderingInfo>
    </Event>

    When this happens (I have noticed it twice so far), the old time always corresponds to the time the computer entered sleep. The problem is that if Windows Media Center is scheduled to record during this period, it just skips the recording as if the computer were turned off. I never had this problem running Windows Vista on the same machine. Any ideas on what could be causing this problem and how to solve it are welcome.

    Read the article

< Previous Page | 480 481 482 483 484 485 486 487 488 489 490 491  | Next Page >