Search Results

Search found 15914 results on 637 pages for 'physical security'.


  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation.

    A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme-performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general-purpose storage, and ultra-fast InfiniBand switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes; a modified version of Solaris 11 in the ZFS Storage Appliance; a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells; another Linux derivative in the InfiniBand switches; etc. It has an InfiniBand data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex.

    The system is highly engineered, but it's designed to run general-purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, operating system versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or systems integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols that could be used with InfiniBand, myriad ways the components could be interconnected, myriad tunable settings, etc.

    But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities - sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd-party storage, issues with 3rd-party multi-pathing products, etc. - are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer.

    Example 1: We found a node-eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (it mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query InfiniBand's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction. In several daily joint debugging sessions between the Solaris, InfiniBand, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases was just 10 days.

    Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams identified the root cause as a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root-cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.

    If this had occurred in a "home-grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side followed by the final fix on the Solaris side highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.


  • Is Data Science “Science”?

    - by BuckWoody
    I hold the term “science” in very high esteem. I grew up on the Space Coast in Florida, and eventually worked at the Kennedy Space Center, surrounded by very intelligent people who worked in various scientific fields. Recently a new term has entered the computing dialog – “Data Scientist”. Since it’s not a standard term, it has a lot of definitions, and in fact has been disputed as a correct term. After all, the reasoning goes, if there’s no such thing as “Data Science” then how can there be a Data Scientist? This argument has been made before, albeit with a different term – “Computer Science”. In Peter Denning’s excellent article “Is Computer Science Science?” (Communications of the ACM, April 2005, Vol. 48, No. 4) there are many points that separate “science” from “engineering” and even “art”. I won’t repeat the content of that article here (I recommend you read it on your own) but will leverage the points he makes there.

    Definition of Science

    To ask the question “is data science ‘science’?” we need to start with a definition of terms. Various references put the definition into the same basic areas: the study of the physical world, and the systematic and/or disciplined study of a subject area ...and then they include the things studied, the bodies of knowledge, and so on. The word itself comes from Latin, and means merely “to know” or “to study to know”. Greek divides knowledge further into “truth” (episteme) and practical use or effects (tekhne). Normally computing falls into the second realm.

    Definition of Data Science

    And now a more controversial definition: Data Science. This term is so new, and perhaps so niche, that the major dictionaries haven’t yet picked it up (my OED reference is older – can’t afford to pop for the online registration at present). Researching the term’s general use, I created an amalgam of the definitions this way: “Studying and applying mathematical and other techniques to derive information from complex data sets.” Using this definition, data science certainly seems to be science - it’s learning about and studying some object or area using systematic methods. But implicit within the definition is the word “application”, which makes the process more akin to engineering, or even technology, than science. In fact, I find that using these techniques – and data itself – is part of science, not science itself. I leave out the concept of studying data patterns or algorithms as part of this discipline. That is actually a domain I see within research, mathematics, or computer science. That of course is a type of science, but it does not seek practical applications.

    As part of the argument against calling it “Data Science”, some point to the scientific method of creating a hypothesis, testing with controls, testing results against the hypothesis, and documenting for repeatability. These are not steps that we often take in working with data. We normally start with a question, and fit patterns and algorithms to predict outcomes and find correlations. In this way Data Science is more akin to statistics (and in fact it makes heavy use of statistics) in its process, rather than starting with an assumption and following on from it.

    So, is Data Science “Science”? I’m uncertain – and I’m uncertain it matters. Even if we are facing rampant “title inflation” these days (does anyone introduce themselves as a secretary or supervisor anymore?), I can tolerate the term, at least with the intent that we use data to study problems across a wide spectrum rather than restricting it to a single domain. And I also understand those who have worked hard to achieve the very honorable title of “scientist” and who have issues with those who borrow the term without asking. What do you think? Science, or not? Does it matter?


  • Hiring New IT Employees versus Promoting Internally for IT Positions

    Recently I was asked my opinion regarding the hiring of IT professionals, specifically the option of hiring new IT employees versus promoting internally for IT positions. After thinking a little more about this staffing question, I think my answer is that it truly depends on the situation; however, in most cases I would side with promoting internally. The key factors in this decision should be based on a company's or department's current values, culture, attitude, and existing priorities.

    For example, if a company values retaining all of its hard-earned business knowledge, then it would tend to promote existing employees internally over hiring a new employee. The company will, however, have to pay to train an existing employee to learn a new technology, and the learning curve for some technologies can be very steep. Conversely, if a company values new technologies and technical proficiency over business knowledge, then it would tend to hire new employees because they may already have experience with a technology that the company is planning on using. In this scenario, the company would have to take on the additional overhead of allowing a new employee to learn how the business operates before being fully effective.

    To illustrate my points above, let us look at a contractor that builds in-ground pools. He has the option to hire employees that are very strong but use small shovels to dig, or employees weak in physical strength but who use large shovels to dig. Which employee should the contractor use to dig a hole for a new in-ground pool? If we compare the possible candidates for this job, we will find that they are very similar to an internal promotion versus a new hire. The first candidate represents the existing workers: very strong in their understanding of how the business operates and why it operates in a specific manner. However, this employee could be potentially weaker than an outsider pertaining to specific technologies and would need some time to build their technical prowess for a new position, much like the strong worker upgrading their shovels in order to remove more dirt at once when digging. The other candidate is very similar to hiring a new person that may already have the large shovel but will need to increase their strength in order to use the shovel properly and efficiently so that they can move a maximum amount of dirt in a minimal amount of time. This can be compared to a new employee learning how a business operates before they can be fully functional and integrated into the company or department.

    Another key factor in this dilemma pertains to existing employees and their passion for their work, their ability to accept new responsibility when given, and their willingness to take on responsibilities when they see a need in the business. As much as possible should be considered in this decision, down to the mood of the team, the quality of the existing staff, the learning curve for both technology and business, and the potential side effects on the existing staff. In addition, there are many more considerations based on the current team's, department's, or company's culture and mood. There are several factors that need to be considered when promoting an individual or hiring new blood for a team. Both can provide great benefits as well as create controversy within a group.

    Personally, I think staffing, especially in the IT world, is like building a large-scale system in that all of the components and modules must fit together and perform as one cohesive system, in the same way a team must come together using their individually acquired skills so that they can work as one team. If a module is out of place or is nonexistent, then the rest of the team will suffer until all of its issues are addressed and resolved.

    Benefits of Promoting Internally

    - Internal promotions give employees a reason to constantly upgrade their technology, business, and communication skills if they want to further their career
    - Employees can control their own destiny based on personal desires
    - The employee already knows how the business operates
    - Companies can save money by promoting internally, because the initial overhead of allowing new hires to learn how a company operates is very expensive
    - Newly promoted employees can assist in training their replacements while transitioning to their new role within a company
    - Existing employees already have a proven track record in regards to fitting in with the business culture; this is always an unknown with all new hires

    Benefits of a New Hire

    - New employees can energize and excite existing employees
    - New employees can bring new ideas and advancements in technology
    - New employees can offer a different perspective on existing issues based on their past experience

    As you can see, the decision to promote an existing employee from within a company versus hiring a new person should be based on several factors that should ultimately place the business in the best possible situation for the immediate and long-term future. How would you handle this situation? Would you hire a new employee or promote from within?


  • What Counts for a DBA: Humility

    - by drsql
    In football (the American sort, naturally) there is a select group of players who really hope to never have their names called during the game. They are the members of the offensive line, and their job is to protect other players so they can deliver the ball to the goal to score points. When you do hear their name called, it is usually because they made a mistake and the player that they were supposed to protect ended up flat on his back, admiring the clouds in the sky instead of advancing towards the goal to score points. Even on the rare occasion their name is called for a good reason, it is usually because they were making up for a teammate's mistake and covered for him.

    The role of offensive lineman is a very good analogy for the role of the admin DBA. As a DBA, you are called on to be barely visible and rarely heard, protecting the company's data assets tenaciously, even though the enemies to our craft surround us on all sides:

    Developers: Cries of ‘foul!’ often ensue when the DBA says that they want data integrity to be stringently enforced and that documentation is needed so they can support systems, mostly because every error occurrence in the enterprise will be initially blamed on the database and fall to the DBA to troubleshoot. Insisting too loudly may bring those cries of ‘foul’ that somewhat remind you of when your 2-year-old daughter didn't want to go to bed. The result of this petulance is that the next "enemy" gets involved.

    Managers: The concerns that motivate DBAs to argue will not excite the kind of manager who gets his technical knowledge from a glossy magazine filled with buzzwords, charts, and pretty pictures. However, the other programmers in the organization will constantly tickle the buzzword void with a stream of new-sounding ideas and technologies, along with warnings that if we did care about data integrity and document things, the budget would explode! In contrast, the arguments for integrity of data and supportability tend to be about as exciting as watching grass grow, and far too many manager types seem to prefer to smoke it than watch it.

    Packaged Applications: The DBA is rarely given a chance to review a new application that is being demonstrated for the enterprise, and rarer still is the DBA that gets to veto an application because the database it uses has clearly been created by an architect who won't read a data modeling book because he is already married. More often than not this leads to hours of work for the DBA trying to performance-tune a database with a menagerie of rules that must be followed to stay within the application support agreement, such as no changing indexes on a third-party schema, even though there are 10 billion rows now instead of the 10 thousand there were when the system was last optimized.

    Hardware Failures: Physical disks, networking devices, memory, and backup devices all come with a measure known as ‘mean time before failure’, and it is never listed in centuries or eons. More like years, and the term ‘mean’ indicates that half of the devices are expected to fail before that, which by my calendar means it can fail any hour of any day that it wants to.

    But the DBA sucks it up and does the task at hand with a humility that makes them nearly invisible to all but the most observant person in the organization. The best DBAs I know are so proactive in their relentless pursuit of perfection that they detect many of the bugs (which they seldom caused) in the system well before they become a problem. In the end, the DBA gets noticed for one of the same two reasons as the offensive lineman: you make a mistake, like dropping a critical production database that had never been backed up; or a system crashes for any reason whatsoever and you are on the spot with troubleshooting and system restoration plans that have been well thought out, tested, and tested again. Not because there is any glory in it, but because it is what they do.

    Note: The characteristics of the professions referred to in this blog are meant to be overstated stereotypes for humorous effect, and even some DBAs aren't quite this perfect. If you are reading this far and haven’t handwritten a 10-page flaming comment about how you are a _______ and you aren’t like this, that is awesome. Not every situation applies to everyone, but if you have never worked with a bad packaged app, a magazine-trained manager, programmers that aren’t team players, or hardware that occasionally failed, relax and go have a unicorn sandwich before you wake up.


  • WEBLOGIC 12C HANDS-ON BOOTCAMP

    - by agallego
    JOIN THE ORACLE WEBLOGIC PARTNER COMMUNITY AND ATTEND A WEBLOGIC 12C HANDS-ON BOOTCAMP

    Dear partner,

    As a valued partner, we would like to invite you to join the WebLogic Partner Community and attend our WebLogic 12c hands-on bootcamps - free of charge! Please first log in at http://partner.oracle.com and then visit: WebLogic Partner Community. (If you need support with your account, please contact the Oracle Partner Business Center.) The goal of the WebLogic Partner Community is to provide you with the latest information on Oracle's offerings and to facilitate the exchange of experience among community members.

    WebLogic 12c hands-on workshops

    We offer free 3-day hands-on WebLogic 12c workshops for Oracle partners who want to become Application Grid Specialized:

    - Germany: 3-5 April 2012, Oracle Düsseldorf
    - France: 24-26 April 2012, Oracle Colombes
    - Spain: 08-10 May 2012, Oracle Madrid
    - Netherlands: 22-24 May 2012, Oracle Amsterdam
    - United Kingdom: 06-08 June 2012, Oracle Reading
    - Italy: 19-21 June 2012, Oracle Cinisello Balsamo
    - Portugal: 10-12 July 2012, Oracle Lisbon

    Skill requirements: Attendees need to have the following skills, as required by the product set and to make sure they get the most out of the training:

    - Basic knowledge of Java and Java EE
    - Understanding of the application server concept
    - Basic knowledge of older releases of WebLogic Server (beneficial)
    - Membership in the WebLogic Partner Community; for registration please visit http://www.oracle.com/partners/goto/wls-emea

    Hardware requirements: Every participant works on his own notebook. The minimal hardware requirements are 4 GB of physical RAM (we will boot the image with 2 GB of RAM), a dual-core CPU, and 15 GB of free disk space.

    Software requirements: Please install Oracle VM VirtualBox 4.1.8.

    Follow-up and certification: With the workshop registration you agree to the following next steps: attend the follow-up training and pass the Oracle Application Grid Certified Implementation Specialist assessment. For details and registration please visit: Register Here.

    Free WebLogic certification: For all WebLogic experts, we offer free vouchers worth $195 for the Oracle Application Grid Certified Implementation Specialist assessment. To demonstrate your WebLogic knowledge you first have to pass the free online assessment Oracle Application Grid PreSales Specialist. For free vouchers, please send an e-mail with a screenshot of your Oracle Application Grid PreSales certificate to [email protected], including your name, company, e-mail address, and country. Note: this offer is limited to partners from Europe, the Middle East, and Africa. Partners from other countries, please contact your Oracle partner manager.

    WebLogic Specialization: To become specialized in Application Grid, please make sure that you access the Application Grid Specialization Guide and the Application Grid Specialization Checklist. If you have any questions, please contact the Oracle Partner Business Center.

    Oracle WebLogic Server 12c key new capabilities:

    - Java EE 6 and developer productivity
    - Simplified deployment and management with virtualization
    - Integrated traffic management
    - Enhanced high availability and disaster recovery
    - Much higher performance

    For more information please visit: the presentation from the WebLogic 12c launch, the technical presentation from the WebLogic 12c launch, the WebLogic OTN website, the WebLogic 12c Virtual Conference Environment, and the WebLogic Partner Community. For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Best regards,
    Jürgen Kress
    WebLogic Partner Adoption EMEA
    Tel. +49 89 1430 1479
    E-Mail: [email protected]


  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many different examples where users failed to restore their database because they made some mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version, the Microsoft team has cleared up the confusion with additional information on the backup and restore screens themselves. Now that this additional information is there, a few more people are confused because they have no clue what it means. Previously they did not see this as an issue, and now they are finding the tail log to be a new thing to learn. Linchpin People are database coaches and wellness experts for a data-driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log.

    Many times when restoring a database over an existing database, SQL Server will warn you about needing to take a tail-end-of-the-log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log.

    Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log.

    In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change its state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR. As its name says, it does not try to truncate the log. This is a great demo; however, how could I back up the tail end of the log if the failure destroyed my entire instance of SQL Server and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.

    If I am performing proper backups, then my most recent full, differential, and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file, and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration of how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do this just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL backup and recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
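    To make the steps Tim describes concrete, here is a minimal T-SQL sketch of the tail-log backup and restore sequence. The database name SalesDB and the backup paths are hypothetical placeholders, not from his demo, and the exact file locations will differ on your server:

        -- Back up the tail of the log even though the data file (MDF) is lost or corrupt.
        -- Per the article, NO_TRUNCATE behaves like COPY_ONLY plus CONTINUE_AFTER_ERROR.
        BACKUP LOG [SalesDB]
        TO DISK = N'D:\Backup\SalesDB_tail.trn'
        WITH NO_TRUNCATE;

        -- Restore the most recent full backup (and any differential/log backups)
        -- without recovering, so further log backups can still be applied...
        RESTORE DATABASE [SalesDB]
        FROM DISK = N'D:\Backup\SalesDB_full.bak'
        WITH NORECOVERY, REPLACE;

        -- ...then apply the tail-log backup and bring the database online.
        RESTORE LOG [SalesDB]
        FROM DISK = N'D:\Backup\SalesDB_tail.trn'
        WITH RECOVERY;

    In the best case this recovers right up to the moment of failure; if the log file itself was lost along with the rest of the storage, you fall back to the most recent log backup and accept that window of data loss.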


  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, the newly added rows are physically stored in other sections of the allocated database storage space. It's kind of like when you load your Christmas shopping into the trunk of your car: once the trunk is full, you continue to load some onto the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index. Defragmentation fixes that by creating one contiguous set of pages for the index.

    What does the fragmentation affect? Depending, of course, on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, consider that when you rebuild a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table: DBCC DBREINDEX, given a table name and a blank index name, rebuilds all of the indexes on that table; it will not, however, automatically rebuild all of the indexes in a database.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]    -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
        SELECT table_name FROM information_schema.tables
        WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)    -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database. Each index is rebuilt with a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it will create a shared lock on the tables, although it will allow read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor specified when the index was created, becoming the new default for the index and for any other non-clustered indexes rebuilt because a clustered index was rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the ID of both the table and the index being examined. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE
            @ID int,
            @IndexID int,
            @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table
        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density; ideally it should be 100%. As time goes by, it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%; there is no fragmentation left on the clustered index. Note that since we rebuilt the clustered index, all the other indexes were also rebuilt.
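    A side note not from the original post: DBCC SHOWCONTIG and DBCC DBREINDEX are deprecated in more recent SQL Server releases. Assuming SQL Server 2005 or later, a rough modern equivalent uses the sys.dm_db_index_physical_stats DMV to measure fragmentation and ALTER INDEX to rebuild; here is a minimal sketch, reusing the Tickets table from the example above:

        -- List fragmentation per index in the current database;
        -- higher avg_fragmentation_in_percent means more fragmentation.
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
            ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 25;  -- rough threshold; tune to taste

        -- Rebuild all indexes on one table with a 90% fill factor,
        -- equivalent in effect to DBCC DBREINDEX(@tableName, ' ', 90).
        ALTER INDEX ALL ON dbo.Tickets REBUILD WITH (FILLFACTOR = 90);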


  • Pirates, Treasure Chests and Architectural Mapping

    Pirate 1: Why do pirates create treasure maps?
    Pirate 2: I do not know.
    Pirate 1: So they can find their gold.

    Yes, that was a bad joke, but it does illustrate a point. Pirates are known for drawing treasure maps to their most prized possessions. These documents detail the decisions pirates made in order to hide and find their chests of gold. The map allows them to retrace the steps they took originally to hide their treasure so that they may return to it. As software engineers, programmers, and architects we need to treat software implementations much like our treasure chest. Why is software like a treasure chest?

    - It costs money, time, and resources to develop (usually)
    - It can make or save money, time, and resources (hopefully)

    If we operate under the assumption that software is like a treasure chest, then wouldn't it make sense to document the steps, rationale, concerns, and decisions about how it was designed? Pirates are notorious for documenting where they hide their treasure. Shouldn't we, as creators of software, do the same? Documenting our design decisions and the rationale behind them will help others understand and maintain implemented systems. This can only be done if the design decisions are correctly mapped to their corresponding implementation. This allows architectural decisions to be traced from the conceptual model, through the architectural design, and finally to the implementation. Mapping gives software professionals a method to trace the reason why specific areas of code were developed versus other options. Just like the pirates, we need to be able to trace our steps from the start of a project to its implementation, so that we will understand why specific choices were made. The traceability of a software implementation that actually maps back to its originating design decisions is invaluable for ensuring that architectural drift and erosion do not take place. The drift and erosion are prevented by allowing others to understand the rationale for why an implementation was created in a specific manner or methodology.

    The process of mapping distinct design concerns/decisions to the location of their implementation is called traceability. In this context, traceability is defined as a method for connecting distinctive software artifacts. This process allows architectural design models and decisions to be directly connected with their physical implementation. The process of mapping architectural design concerns to a software implementation can be very complex. However, most design decisions can be placed in a few generalized categories.

    Commonly mapped design decisions:

    - Design rationale
    - Components and connectors
    - Interfaces
    - Behaviors/properties

    Design rationale is one of the hardest categories to map directly to an implementation. Typically this rationale is mapped or documented in code via comments. These comments consist of general design decisions and reasoning, because they do not directly refer to a specific part of an application; they typically focus more on the higher-level concerns. Components and connectors can be directly mapped to architectural concerns. Typically, concerns subdivide an application into distinct functional areas, and these functional areas then map directly back to their originating concerns. Interfaces can be mapped back to design concerns in one of two ways: interfaces that pertain to specific function definitions can be directly mapped back to their originating concern(s); however, more complicated interfaces require additional analysis to ensure that the proper mappings are created. Depending on the complexity, some behaviors/properties can be translated directly into a generic implementation structure that is ready for business logic; in addition, some behaviors can be translated directly into an actual implementation, depending on the complexity and the architectural tools used.

    Mapping design concerns to an implementation is a lot of work to maintain, but it is doable. In order to ensure that concerns are mapped correctly and that an implementation correctly reflects its design concerns, one of two standard approaches is usually used.

    All changes come from architecture: By forcing all application changes to come through the architectural model prior to implementation, the existing mappings can be used to locate where in the implementation changes need to occur.

    Allow changes from implementation or architecture: By allowing changes to come from the implementation and/or the architecture, the other area must be kept in sync. This methodology is more complex compared to the previous approach. One reason to justify the added complexity for an application is the fact that this approach tends to detect and prevent architectural drift and erosion. Additionally, this approach is usually maintained via software because of the complexity.

    Reference: Taylor, R. N., Medvidovic, N., & Dashofy, E. M. (2009). Software Architecture: Foundations, Theory, and Practice. Hoboken, NJ: John Wiley & Sons.


  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run: HR, payroll, legal, accounts payable. And while they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly.

    The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about every company around the world, involving a wide variety of important business processes? Probably so.

    Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes, and many more like them, rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well-paid employees spending more than one day per week not doing their regular jobs while they search for, or re-create, what already exists.

    Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new-patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process, and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.

    We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new-patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could bring significant improvement.

    As you evaluate how content and process flow through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result, over the years, is disparate applications with separate information repositories, and in many cases these contain duplicate information, or worse, slightly different versions of the same information.

    This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms, and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective and more cost-efficient, and to manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three, and that you will find them informative. Click here to learn more about these sessions and to register for them.

    There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms, and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM, and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned, and what specific areas you are interested in.


  • WD MBWE II (White Strip Light) 2TB - unable to access data

    - by user210477
    I have a WD MBWE II (White Strip Light) 2TB (WD20000H2NC-00). It was working fine until a few days ago. I guess there was a power failure, and after that I am unable to access the 'Public' or the 'Download' folder anymore. I have been searching for answers everywhere but came up empty-handed. The web GUI still works, and SSH works. I hooked up both drives to my PC, and UFS Explorer sees the drive, but so far I am unable to retrieve any of my data. I do not remember what RAID setting I used when I first got the drive; I can see from the GUI that it is set as "Stripe". The drive contains 10 years of family pictures which I really do not want to lose. Sadly and stupidly, I didn't even keep a backup of this drive. Can somebody please help or point me in the right direction? Thank you in advance. Disk Utility on Ubuntu reports 1405 bad sectors on one drive. How can I retrieve my data? Logs below:

    ~ # mdadm --detail /dev/md[012345678]
    /dev/md0:
            Version : 0.90
            Creation Time : Wed Jul 15 08:36:17 2009
            Raid Level : raid1
            Array Size : 1959872 (1914.26 MiB 2006.91 MB)
            Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Fri Nov 1 13:53:29 2013
            State : clean
            Active Devices : 2
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 0
            UUID : 04f7a661:98983b3b:26b29e4f:9b646adb
            Events : 0.266
            Number  Major  Minor  RaidDevice  State
            0       8      1      0           active sync   /dev/sda1
            1       8      17     1           active sync   /dev/sdb1
    /dev/md1:
            Version : 0.90
            Creation Time : Wed Jul 15 08:36:18 2009
            Raid Level : raid1
            Array Size : 256896 (250.92 MiB 263.06 MB)
            Used Dev Size : 256896 (250.92 MiB 263.06 MB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Wed Oct 30 22:08:21 2013
            State : clean
            Active Devices : 2
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 0
            UUID : aaa7b859:c475312d:efc5a766:6526b867
            Events : 0.10
            Number  Major  Minor  RaidDevice  State
            0       8      2      0           active sync   /dev/sda2
            1       8      18     1           active sync   /dev/sdb2
    /dev/md2:
            Version : 0.90
            Creation Time : Sat Sep 25 10:01:26 2010
            Raid Level : raid0
            Array Size : 1947045760 (1856.85 GiB 1993.77 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Fri Nov 1 13:30:53 2013
            State : active
            Active Devices : 2
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 0
            Chunk Size : 64K
            UUID : 01dae60a:6831077b:77f74530:8680c183
            Events : 0.97
            Number  Major  Minor  RaidDevice  State
            0       8      4      0           active sync   /dev/sda4
            1       8      20     1           active sync   /dev/sdb4
    /dev/md3:
            Version : 0.90
            Creation Time : Wed Jul 15 08:36:18 2009
            Raid Level : raid1
            Array Size : 987904 (964.91 MiB 1011.61 MB)
            Used Dev Size : 987904 (964.91 MiB 1011.61 MB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 3
            Persistence : Superblock is persistent
            Update Time : Fri Nov 1 13:26:33 2013
            State : clean
            Active Devices : 2
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 0
            UUID : 3f4099f2:72e6171b:5ba962fd:48464a62
            Events : 0.54
            Number  Major  Minor  RaidDevice  State
            0       8      3      0           active sync   /dev/sda3
            1       8      19     1           active sync   /dev/sdb3
    mdadm: md device /dev/md4 does not appear to be active.
    mdadm: md device /dev/md5 does not appear to be active.
    mdadm: md device /dev/md6 does not appear to be active.
    mdadm: md device /dev/md7 does not appear to be active.
    mdadm: md device /dev/md8 does not appear to be active.
    ~ # cat /etc/mtab
    securityfs /sys/kernel/security securityfs rw 0 0
    /dev/md2 /DataVolume xfs rw,usrquota 0 0
    /dev/md4 /ExtendVolume xfs rw,usrquota 0 0

    ~ # df -k
    Filesystem 1k-blocks Used Available Use% Mounted on
    /dev/md0 1929044 145092 1685960 8% /
    /dev/md3 972344 123452 799500 13% /var
    /dev/ram0 63412 20 63392 0% /mnt/ram

    ~ # mdadm -D /dev/md2
    /dev/md2:
            Version : 0.90
            Creation Time : Sat Sep 25 10:01:26 2010
            Raid Level : raid0
            Array Size : 1947045760 (1856.85 GiB 1993.77 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Fri Nov 1 13:30:53 2013
            State : active
            Active Devices : 2
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 0
            Chunk Size : 64K
            UUID : 01dae60a:6831077b:77f74530:8680c183
            Events : 0.97
            Number  Major  Minor  RaidDevice  State
            0       8      4      0           active sync   /dev/sda4
            1       8      20     1           active sync   /dev/sdb4

    ~ # mdadm -D /dev/md4
    mdadm: md device /dev/md4 does not appear to be active.

    ~ # mount
    /dev/root on / type ext3 (rw,noatime,data=ordered)
    proc on /proc type proc (rw)
    sys on /sys type sysfs (rw)
    /dev/pts on /dev/pts type devpts (rw)
    securityfs on /sys/kernel/security type securityfs (rw)
    /dev/md3 on /var type ext3 (rw,noatime,data=ordered)
    /dev/ram0 on /mnt/ram type tmpfs (rw)

    ~ # cat /var/log/messages
    Oct 29 18:04:50 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 29 18:17:45 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 00:50:11 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 16:29:47 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 18:27:22 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 19:06:03 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down.
    Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 19:14:58 shmotashNAS daemon.warn wixEvent[3462]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
    Oct 30 19:20:05 shmotashNAS daemon.alert wixEvent[3462]: Thermal Alarm - System temperature exceeded threshold.(66 degrees)
    Oct 30 19:58:29 shmotashNAS daemon.alert wixEvent[3462]: HDD SMART - HDD 1 SMART Health Status: Failed.
    Oct 30 22:05:39 shmotashNAS daemon.info init: Starting pid 13043, console /dev/null: '/usr/bin/killall'
    Oct 30 22:05:39 shmotashNAS syslog.info System log daemon exiting.
    Oct 30 22:08:09 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
    Oct 30 22:08:09 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:08:19 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:08:25 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:08:37 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:08:44 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:08:46 [Version 01.09.00.96] ++++++++++++++
    Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: mc_db_init ...
    Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: ****** database does not exist. ret = -1, creating path
    Oct 30 22:08:49 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
    Oct 30 22:08:50 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
    Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
    Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === inotify init done.
    Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
    Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
    Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === Walking directory done.
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4605, console /dev/null: '/bin/touch'
    Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4607, console /dev/ttyS0: '/sbin/getty'
    Oct 30 22:09:10 shmotashNAS daemon.info wixEvent[3557]: System Startup - System startup.
    Oct 30 22:09:16 shmotashNAS daemon.warn wixEvent[3557]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
    Oct 30 22:14:14 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down.
    Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:29:36 shmotashNAS daemon.warn wixEvent[3557]: System Reboot - System will reboot.
    Oct 30 22:29:40 shmotashNAS daemon.info init: Starting pid 5974, console /dev/null: '/usr/bin/killall'
    Oct 30 22:29:40 shmotashNAS syslog.info System log daemon exiting.
    Oct 30 22:47:56 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
    Oct 30 22:47:56 shmotashNAS daemon.warn wixEvent[3461]: Network Link - NIC 1 link is down.
    Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:48:09 [Version 01.09.00.96] ++++++++++++++
    Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: mc_db_init ...
    Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0
    Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
    Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
    Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
    Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === inotify init done.
    Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
    Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
    Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === Walking directory done.
    Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4079, console /dev/null: '/bin/touch'
    Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4080, console /dev/ttyS0: '/sbin/getty'
    Oct 30 22:48:28 shmotashNAS daemon.info wixEvent[3461]: System Startup - System startup.
    Oct 30 22:49:01 shmotashNAS daemon.warn wixEvent[3461]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
    Oct 30 23:51:11 shmotashNAS daemon.warn wixEvent[3461]: System Reboot - System will reboot.
    Oct 30 23:51:16 shmotashNAS daemon.info init: Starting pid 6498, console /dev/null: '/usr/bin/killall'
    Oct 30 23:51:16 shmotashNAS syslog.info System log daemon exiting.
    Oct 30 23:54:19 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
    Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 23:55:44 [Version 01.09.00.96] ++++++++++++++
    Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: mc_db_init ...
    Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0
    Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
    Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
    Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
    Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === inotify init done.
    Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
    Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
    Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === Walking directory done.
    Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4115, console /dev/null: '/bin/touch'
    Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4116, console /dev/ttyS0: '/sbin/getty'
    Oct 30 23:55:58 shmotashNAS daemon.info wixEvent[3476]: System Startup - System startup.
    Oct 30 23:56:33 shmotashNAS daemon.warn wixEvent[3476]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
    Oct 31 00:29:14 shmotashNAS auth.info sshd[5409]: Server listening on 0.0.0.0 port 22.
    Oct 31 00:31:25 shmotashNAS auth.info sshd[5486]: Accepted password for root from 192.168.1.100 port 50785 ssh2
    Oct 31 00:33:44 shmotashNAS auth.info sshd[5565]: Accepted password for root from 192.168.1.100 port 50817 ssh2
    Oct 31 00:36:39 shmotashNAS daemon.info init: Starting pid 5680, console /dev/null: '/usr/bin/killall'
    Oct 31 00:36:39 shmotashNAS syslog.info System log daemon exiting.
    Oct 31 00:40:44 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
    Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:31 - 00:41:00 [Version 01.09.00.96] ++++++++++++++
    Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: mc_db_init ...
    Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0
    Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
    Oct 31 00:41:01 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
    Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
    Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === inotify init done.
    Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
    Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
    Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === Walking directory done.
    Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4101, console /dev/null: '/bin/touch'
    Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4102, console /dev/ttyS0: '/sbin/getty'
    Oct 31 00:41:15 shmotashNAS daemon.info wixEvent[3464]: System Startup - System startup.
    Oct 31 00:41:47 shmotashNAS daemon.warn wixEvent[3464]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
    Oct 31 01:13:19 shmotashNAS daemon.info init: Starting pid 5385, console /dev/null: '/usr/bin/killall'
    Oct 31 01:13:19 shmotashNAS syslog.info System log daemon exiting.
    Nov 1 13:26:25 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
    Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network Link - NIC 1 link is up 100 Mbps full duplex.
    Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network IP Address - NIC 1 use static IP address 192.168.1.102
    Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:11:01 - 13:26:38 [Version 01.09.00.96] ++++++++++++++
    Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: mc_db_init ...
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === inotify init done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === Walking directory done. Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4078, console /dev/null: '/bin/touch' Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4079, console /dev/ttyS0: '/sbin/getty' Nov 1 13:26:52 shmotashNAS daemon.info wixEvent[3471]: System Startup - System startup. Nov 1 13:27:28 shmotashNAS daemon.warn wixEvent[3471]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Nov 1 13:44:48 shmotashNAS auth.info sshd[5375]: Accepted password for root from 192.168.1.103 port 50217 ssh2 Nov 1 13:51:08 shmotashNAS auth.info sshd[5894]: Accepted password for root from 192.168.1.103 port 50380 ssh2

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently I set up an Ubuntu 12.04 VPS (512 MB RAM, 1 GHz CPU) with Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog gets about 3-4k daily hits and uses about 180 MB of RAM and 8-20% CPU. Everything seems to be working insanely fast: page load is really good, about 16x faster than any of our competitors'. But there is one problem. When we upload an image, WordPress doesn't resize it, so all we can do is insert the full-size image into the post. If the image is, say, 30 KB, it resizes fine; but if the image is 100 KB or more, it won't. In the Nginx error log I see this:

        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&"

    It seems to be related to the issue, but I don't know for sure. Once that timeout happens, I start getting it when trying to view a post too:

        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/"

    and only a restart of php5-fpm fixes it. I tried increasing some timeouts, but it did not work, so I guess it's some kind of limit I haven't figured out yet. Could someone help me with it, please?

    /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 1;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
            multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 15;
            keepalive_requests 2000;
            types_hash_max_size 2048;
            server_tokens off;
            server_name_in_redirect off;
            open_file_cache max=1000 inactive=300s;
            open_file_cache_valid 360s;
            open_file_cache_min_uses 2;
            open_file_cache_errors off;
            server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            client_body_buffer_size 128K;
            client_header_buffer_size 1k;
            client_max_body_size 2m;
            large_client_header_buffers 4 8k;
            client_body_timeout 10m;
            client_header_timeout 10m;
            send_timeout 10m;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            error_log /var/log/nginx/error.log;
            access_log off;

            ##
            # CloudFlare's IPs (uncomment when site goes live)
            ##
            set_real_ip_from 204.93.240.0/24;
            set_real_ip_from 204.93.177.0/24;
            set_real_ip_from 199.27.128.0/21;
            set_real_ip_from 173.245.48.0/20;
            set_real_ip_from 103.22.200.0/22;
            set_real_ip_from 141.101.64.0/18;
            set_real_ip_from 108.162.192.0/18;
            set_real_ip_from 190.93.240.0/20;
            real_ip_header CF-Connecting-IP;
            set_real_ip_from 127.0.0.1/32;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 9;
            gzip_min_length 1000;
            gzip_proxied expired no-cache no-store private auth;
            gzip_buffers 32 8k;
            # gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $https;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 4k;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    /etc/nginx/sites-available/default:

        ##
        # DEFAULT HANDLER
        # ubuntubrsc.com
        ##
        server {
            listen 8080;
            # Make site available from main domain
            server_name www.ubuntubrsc.com;
            # Root directory
            root /var/www;
            index index.php index.html index.htm;
            include /var/www/nginx.conf;
            access_log off;
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { allow all; log_not_found off; access_log off; }
            location ~ /\. { deny all; access_log off; log_not_found off; }
            location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; }
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;
            error_page 404 = @wordpress;
            log_not_found off;
            location @wordpress {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_NAME /index.php;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            }
            location ~ \.php$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                if (-f $request_filename) {
                    fastcgi_pass unix:/var/run/php5-fpm.sock;
                }
            }
        }
        server {
            listen 8080;
            server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in;
            return 301 $scheme://www.ubuntubrsc.com$request_uri;
        }

    /var/www/nginx.conf:

        # BEGIN W3TC Minify cache
        location ~ /wp-content/w3tc/min.*\.js$ {
            types {}
            default_type application/x-javascript;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ /wp-content/w3tc/min.*\.css$ {
            types {}
            default_type text/css;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ /wp-content/w3tc/min.*js\.gzip$ {
            gzip off;
            types {}
            default_type application/x-javascript;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header Content-Encoding gzip;
        }
        location ~ /wp-content/w3tc/min.*css\.gzip$ {
            gzip off;
            types {}
            default_type text/css;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header Content-Encoding gzip;
        }
        # END W3TC Minify cache

        # BEGIN W3TC Browser Cache
        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
        }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            try_files $uri $uri/ $uri.html /index.php?$args;
        }
        location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
        }
        # END W3TC Browser Cache

        # BEGIN W3TC Minify core
        rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
        set $w3tc_enc "";
        if ($http_accept_encoding ~ gzip) {
            set $w3tc_enc .gzip;
        }
        if (-f $request_filename$w3tc_enc) {
            rewrite (.*) $1$w3tc_enc break;
        }
        rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
        # END W3TC Minify core

        # BEGIN W3TC Skip 404 error handling by WordPress for static files
        if (-f $request_filename) {
            break;
        }
        if (-d $request_filename) {
            break;
        }
        if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") {
            break;
        }
        if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) {
            return 404;
        }
        # END W3TC Skip 404 error handling by WordPress for static files

        # BEGIN Better WP Security
        location ~ /\.ht { deny all; }
        location ~ wp-config.php { deny all; }
        location ~ readme.html { deny all; }
        location ~ readme.txt { deny all; }
        location ~ /install.php { deny all; }
        set $susquery 0;
        set $rule_2 0;
        set $rule_3 0;
        rewrite ^wp-includes/(.*).php /not_found last;
        rewrite ^/wp-admin/includes(.*)$ /not_found last;
        if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; }
        set $rule_0 0;
        if ($request_method ~ "POST"){ set $rule_0 1; }
        if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; }
        if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; }
        if ($rule_0 = "421"){ return 403; }
        if ($args ~* "\.\./") { set $susquery 1; }
        if ($args ~* "boot.ini") { set $susquery 1; }
        if ($args ~* "tag=") { set $susquery 1; }
        if ($args ~* "ftp:") { set $susquery 1; }
        if ($args ~* "http:") { set $susquery 1; }
        if ($args ~* "https:") { set $susquery 1; }
        if ($args ~* "(<|%3C).*script.*(>|%3E)") { set $susquery 1; }
        if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; }
        if ($args ~* "base64_encode") { set $susquery 1; }
        if ($args ~* "(%24&x)") { set $susquery 1; }
        if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; }
        if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; }
        if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; }
        if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; }
        if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; }
        if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; }
        if ($susquery = 12) { return 403; }
        # END Better WP Security

    /etc/php5/fpm/php-fpm.conf:

        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        emergency_restart_threshold = 3
        emergency_restart_interval = 1m
        process_control_timeout = 10s
        events.mechanism = epoll

    /etc/php5/fpm/php.ini (only the options I changed):

        open_basedir = "/var/www/"
        disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpAds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget
        expose_php = off
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        display_errors = Off
        post_max_size = 2M
        allow_url_fopen = off
        default_socket_timeout = 60

    APC settings:

        [APC]
        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 64M
        apc.optimization = 0
        apc.num_files_hint = 4096
        apc.ttl = 60
        apc.user_ttl = 7200
        apc.gc_ttl = 0
        apc.cache_by_default = 1
        apc.filters = ""
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 10M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.localcache = 0
        apc.localcache.size = 512
        apc.coredump_unmap = 0
        apc.stat_ctime = 0

    /etc/php5/fpm/pool.d/www.conf:

        user = www-data
        group = www-data
        listen = /var/run/php5-fpm.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0666
        pm = ondemand
        pm.max_children = 5
        pm.process_idle_timeout = 3s;
        pm.max_requests = 50

    I also started to get 404 errors on the front page when I use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine until some days ago, and then, out of nowhere, it started to happen. Tonight I will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict between them. And to finish all this off, I have been getting this error:

        PHP Warning: apc_store(): Unable to allocate memory for pool. in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41

    I already modified my APC settings, but no success. So... could anyone help me with these issues, please? Oh, and if it helps, I installed PHP like this:

        sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl

    and Nginx from the official PPA. Sorry for my bad English, and thanks for your time, people! (:
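    Given the settings above, one plausible culprit is the worker pool rather than Nginx itself: pm = ondemand with pm.max_children = 5 and pm.max_requests = 50 leaves very few PHP workers, and resizing a large image is slow, CPU-heavy work, so once all the workers are busy (or being recycled mid-request) Nginx waits on the socket until fastcgi_read_timeout expires. A sketch of settings to experiment with; the numbers here are assumptions for a 512 MB VPS, not a verified fix:

        ; /etc/php5/fpm/pool.d/www.conf (sketch)
        pm = dynamic
        pm.max_children = 8                ; more headroom than 5, still bounded by RAM
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        pm.max_requests = 500              ; recycle workers less aggressively than 50
        request_terminate_timeout = 300s   ; kill a stuck resize instead of hanging a worker

        # /etc/nginx/fastcgi_params (sketch): give slow uploads/resizes more time
        fastcgi_read_timeout 300;

    The apc_store() warning also hints that apc.shm_size = 64M may be too small once W3 Total Cache starts storing objects in it, so raising that value is another experiment worth trying.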

    Read the article

  • Activation Error while testing Exception Handling Application Block

    - by CletusLoomis
    I'm getting the following error while testing my EHAB implementation:

        {"Activation error occured while trying to get instance of type ExceptionPolicyImpl, key "LogPolicy""} System.Exception

    Stack trace:

        at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 53
        at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance[TService](String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 103
        at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.GetExceptionPolicy(Exception exception, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 131
        at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException(Exception exceptionToHandle, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 55
        at Blackbox.Exception.ExceptionMain.LogException(Exception pException) in C:_Work_Black Box\Blackbox.Exception\ExceptionMain.vb:line 14
        at BlackBox.Business.BusinessMain.TestExceptionHandling() in C:_Work_Black Box\BlackBox.Business\BusinessMain.vb:line 16
        at Blackbox.Service.Service1.TestExceptionHandling() in C:_Work_Black Box\Blackbox.Service\Service.svc.vb:line 43

    Inner exception:

        {"Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl", name = "LogPolicy".
        Exception occurred while: Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter).
        Exception is: ArgumentException - Event log names must consist of printable characters and cannot contain \, *, ?, or spaces
        At the time of the exception, the container was:
        Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl,LogPolicy
        Resolving parameter "policyEntries" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl(System.String policyName, System.Collections.Generic.IEnumerable`1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] policyEntries)
        Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry,LogPolicy.All Exceptions
        Resolving parameter "handlers" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry(System.Type exceptionType, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.PostHandlingAction postHandlingAction, System.Collections.Generic.IEnumerable`1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] handlers, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Instrumentation.IExceptionHandlingInstrumentationProvider instrumentationProvider)
        Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler,LogPolicy.All Exceptions.Logging Exception Handler (mapped from Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, LogPolicy.All Exceptions.Logging Exception Handler)
        Resolving parameter "writer" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler(System.String logCategory, System.Int32 eventId, System.Diagnostics.TraceEventType severity, System.String title, System.Int32 priority, System.Type formatterType, Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter writer)
        Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl,LogWriter.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter, (none))
        Resolving parameter "structureHolder" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl(Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder structureHolder, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator updateCoordinator)
        Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder,LogWriterStructureHolder.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder, (none))
        Resolving parameter "traceSources" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder(System.Collections.Generic.IEnumerable`1[[Microsoft.Practices.EnterpriseLibrary.Logging.Filters.ILogFilter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] filters, System.Collections.Generic.IEnumerable`1[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceSourceNames, System.Collections.Generic.IEnumerable`1[[Microsoft.Practices.EnterpriseLibrary.Logging.LogSource, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] traceSources, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource allEventsTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource notProcessedTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource errorsTraceSource, System.String defaultCategory, System.Boolean tracingEnabled, System.Boolean logWarningsWhenNoCategoriesMatch, System.Boolean revertImpersonation)
        Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogSource,General
        Resolving parameter "traceListeners" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogSource(System.String name, System.Collections.Generic.IEnumerable`1[[System.Diagnostics.TraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceListeners, System.Diagnostics.SourceLevels level, System.Boolean autoFlush, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider)
        Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper,Event Log Listener (mapped from System.Diagnostics.TraceListener, Event Log Listener)
        Resolving parameter "wrappedTraceListener" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper(System.Diagnostics.TraceListener wrappedTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator coordinator)
        Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener,Event Log Listener?implementation (mapped from System.Diagnostics.TraceListener, Event Log Listener?implementation)
        Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter)
        "} System.Exception

    My web.config is as follows:

        <?xml version="1.0"?>
        <configuration>
          <configSections>
            <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" />
            <section name="exceptionHandling" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Configuration.ExceptionHandlingSettings, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" />
          </configSections>
          <loggingConfiguration name="" tracingEnabled="true" defaultCategory="General">
            <listeners>
              <add name="Event Log Listener"
                   type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FormattedEventLogTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   source="Enterprise Library Logging" formatter="Text Formatter"
                   log="C:\Blackbox.log" machineName="."
                   traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" />
            </listeners>
            <formatters>
              <add type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   template="Timestamp: {timestamp}{newline}&#xA;Message: {message}{newline}&#xA;Category: {category}{newline}&#xA;Priority: {priority}{newline}&#xA;EventId: {eventid}{newline}&#xA;Severity: {severity}{newline}&#xA;Title:{title}{newline}&#xA;Machine: {localMachine}{newline}&#xA;App Domain: {localAppDomain}{newline}&#xA;ProcessId: {localProcessId}{newline}&#xA;Process Name: {localProcessName}{newline}&#xA;Thread Name: {threadName}{newline}&#xA;Win32 ThreadId:{win32ThreadId}{newline}&#xA;Extended Properties: {dictionary({key} - {value}{newline})}"
                   name="Text Formatter" />
            </formatters>
            <categorySources>
              <add switchValue="All" name="General">
                <listeners>
                  <add name="Event Log Listener" />
                </listeners>
              </add>
            </categorySources>
            <specialSources>
              <allEvents switchValue="All" name="All Events" />
              <notProcessed switchValue="All" name="Unprocessed Category" />
              <errors switchValue="All" name="Logging Errors &amp; Warnings">
                <listeners>
                  <add name="Event Log Listener" />
                </listeners>
              </errors>
            </specialSources>
          </loggingConfiguration>
          <exceptionHandling>
            <exceptionPolicies>
              <add name="LogPolicy">
                <exceptionTypes>
                  <add name="All Exceptions" type="System.Exception, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="NotifyRethrow">
                    <exceptionHandlers>
                      <add name="Logging Exception Handler"
                           type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                           logCategory="General" eventId="100" severity="Error" title="Enterprise Library Exception Handling"
                           formatterType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.TextExceptionFormatter, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling"
                           priority="0" />
                    </exceptionHandlers>
                  </add>
                </exceptionTypes>
              </add>
              <add name="WcfExceptionShielding">
                <exceptionTypes>
                  <add name="InvalidOperationException" type="System.InvalidOperationException, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="ThrowNewException">
                    <exceptionHandlers>
                      <add type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.FaultContractExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                           exceptionMessageResourceType="" exceptionMessageResourceName="This is the message"
                           exceptionMessage="This is the exception"
                           faultContractType="Blackbox.Service.WCFFault, Blackbox.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
                           name="Fault Contract Exception Handler">
                        <mappings>
                          <add source="{Guid}" name="Id" />
                          <add source="{Message}" name="MessageText" />
                        </mappings>
                      </add>
                    </exceptionHandlers>
                  </add>
                </exceptionTypes>
              </add>
            </exceptionPolicies>
          </exceptionHandling>
          <connectionStrings>
            <add name="CompassEntities" connectionString="metadata=~\bin\CompassModel.csdl|~\bin\CompassModel.ssdl|~\bin\CompassModel.msl;provider=Devart.Data.Oracle;provider connection string=&quot;User Id=foo;Password=foo;Server=foo64mo;Home=OraClient11g_home1;Persist Security Info=True&quot;" providerName="System.Data.EntityClient" />
            <add name="BlackboxEntities" connectionString="metadata=~\bin\BlackboxModel.csdl|~\bin\BlackboxModel.ssdl|~\bin\BlackboxModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=sqldev1\cps;Initial Catalog=FundServ;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />
          </connectionStrings>
          <system.web>
            <compilation debug="true" strict="false" explicit="true" targetFramework="4.0" />
          </system.web>
          <system.serviceModel>
            <behaviors>
              <serviceBehaviors>
                <behavior>
                  <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                  <serviceMetadata httpGetEnabled="true"/>
                  <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                  <serviceDebug includeExceptionDetailInFaults="false"/>
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
          </system.serviceModel>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true"/>
          </system.webServer>
        </configuration>

    My code is as follows:

        Public Shared Function LogException(ByVal pException As System.Exception) As Boolean
            Return ExceptionPolicy.HandleException(pException, "LogPolicy")
        End Function

    Any assistance is appreciated.

    Read the article

  • Converting a byte array to a X.509 certificate

    - by ddd
    I'm trying to port a piece of Java code into .NET that takes a Base64-encoded string, converts it to a byte array, and then uses it to make an X.509 certificate to get the modulus & exponent for RSA encryption. This is the Java code I'm trying to convert:

        byte[] externalPublicKey = Base64.decode("base 64 encoded string");
        KeyFactory keyFactory = KeyFactory.getInstance("RSA");
        EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(externalPublicKey);
        Key publicKey = keyFactory.generatePublic(publicKeySpec);
        RSAPublicKey pbrtk = (java.security.interfaces.RSAPublicKey) publicKey;
        BigInteger modulus = pbrtk.getModulus();
        BigInteger pubExp = pbrtk.getPublicExponent();

    I've been trying to figure out the best way to convert this into .NET. So far, I've come up with this:

        byte[] bytes = Convert.FromBase64String("base 64 encoded string");
        X509Certificate2 x509 = new X509Certificate2(bytes);
        RSA rsa = (RSA)x509.PrivateKey;
        RSAParameters rsaParams = rsa.ExportParameters(false);
        byte[] modulus = rsaParams.Modulus;
        byte[] exponent = rsaParams.Exponent;

    To me this looks like it should work, but it throws an exception when I use the Base64-encoded string from the Java code to generate the X509 certificate. Is Java's X.509 implementation just incompatible with .NET's, or am I doing something wrong in my conversion from Java to .NET? Or is there simply no conversion from Java to .NET in this case?
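    The mismatch here is less Java vs. .NET than what the bytes represent: Java's X509EncodedKeySpec takes a bare SubjectPublicKeyInfo blob (just a public key in its X.509 encoding), while .NET's X509Certificate2 expects a complete certificate, which is why the constructor throws. One way around it, sketched below using the third-party BouncyCastle library (an assumption; the original code does not use it), is to parse the key blob directly:

        using System;
        using Org.BouncyCastle.Crypto.Parameters;
        using Org.BouncyCastle.Math;
        using Org.BouncyCastle.Security;

        class PublicKeyImport
        {
            static void Main()
            {
                // The same bytes Java feeds to X509EncodedKeySpec: a DER-encoded
                // SubjectPublicKeyInfo structure, not a full certificate.
                byte[] externalPublicKey = Convert.FromBase64String("base 64 encoded string");

                // PublicKeyFactory understands SubjectPublicKeyInfo, mirroring
                // KeyFactory.generatePublic() on the Java side.
                RsaKeyParameters publicKey =
                    (RsaKeyParameters)PublicKeyFactory.CreateKey(externalPublicKey);

                BigInteger modulus = publicKey.Modulus;
                BigInteger exponent = publicKey.Exponent;

                Console.WriteLine(modulus);
                Console.WriteLine(exponent);
            }
        }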

    Read the article

  • PHP Codeigniter error: call to undefined method ci_db_mysql_driver::result()

    - by Ronnie
    I was trying to create an XML response using CodeIgniter. The following error gets thrown when I run the code:

        This page contains the following errors:
        error on line 1 at column 48: Extra content at the end of the document

        <?php
        class Api extends CI_Controller {

            function index()
            {
                $this->load->helper('url', 'xml', 'security');
                echo '<em>oops! no parameters selected.</em>';
            }

            function authorize($email = 'blank', $password = 'blank')
            {
                header("content-type: text/xml");
                echo '<?xml version="1.0" encoding="ISO-8859-1"?>';
                echo '<node>';
                if ($email == 'blank' AND $password == 'blank') {
                    echo '<response>failed</response>';
                } else {
                    $this->db->where('email_id', $email);
                    $this->db->limit(1);
                    $query = $this->db->from('lp_user_master');
                    $this->get();
                    $count = $this->db->count_all_results();
                    if ($count > 0) {
                        foreach ($query->result() as $row) {
                            echo '<ip>'.$row->title.'</ip>';
                        }
                    }
                }
                echo '</node>';
            }
        }
        ?>

    Read the article

  • Installing VSTO 4.0 Causes VSTO 3.0 Addin to quit working

    - by Jacob Adams
    I just installed Visual Studio 2010 yesterday. As part of that I installed VSTO 4.0. Now when I run any Office application, my VSTO 3.0 add-ins fail to load. The error in the event log is:

        Customization URI: file:///H:/PathToMyAddin/MyAddin.vsto
        Exception: Customization does not have the permissions required to create an application domain.

        ***** Exception Text *****
        Microsoft.VisualStudio.Tools.Applications.Runtime.CannotCreateCustomizationDomainException: Customization does not have the permissions required to create an application domain. ---> System.Security.SecurityException: Customized functionality in this application will not work because the administrator has listed file:///H:/PathToMyAddin/MyAddin.vsto as untrusted. Contact your administrator for further assistance.
        at Microsoft.VisualStudio.Tools.Office.Runtime.RuntimeUtilities.VerifySolutionUri(Uri uri)
        at Microsoft.VisualStudio.Tools.Office.Runtime.DomainCreator.CreateCustomizationDomainInternal(String solutionLocation, String manifestName, String documentName, Boolean showUIDuringDeployment, IntPtr hostServiceProvider, IntPtr& executor)

        The Zone of the assembly that failed was: MyComputer

    Read the article

  • Refresh RadGridview when Insert,Update and Delete Operation done on Database in WPF

    - by patelriki13
    WPF and C#. Problems:

    1. How do I refresh a RadGridView when I insert, update, or delete a record in the database?
    2. When I insert or update a record, how do I get that row selected in the RadGridView?

    I am using SQL Server 2005. I set the data source of the RadGridView like this: radgridview1.ItemsSource = ds; (ds is a DataSet). I am a beginner, so if possible please show me the code; that is easier to understand. Please help me as early as possible. This is the code I am using to update the RadGridView:

        con.ConnectionString = @"Data Source=(local);Initial Catalog=DigiDms;Integrated Security=True";
        cmd1.Connection = con;
        con.Open();
        cmd1.CommandType = CommandType.StoredProcedure;
        cmd1.CommandText = "Pro_Insurance_Master_Select";
        da1.SelectCommand = cmd1;
        da1.Fill(ds1);
        con.Close();
        //dataGrid.clear();
        //dsGrid.Reset();
        //dsGrid = dataGrid.GetData("Pro_Insurance_Master_Select");
        //set datasource of gridview
        gridShowData.ItemsSource = null;
        gridShowData.ItemsSource = ds1;

    When I do a delete or update, the following error is generated:

        Error: "Object reference not set to an instance of an object"

    It happens when I execute gridShowData.ItemsSource = null;. When I do an insert, the error is not generated and the RadGridView updates. So please help me as early as possible; I am a beginner. My email address is [email protected]
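    A common pattern for this, sketched below under some assumptions (the RefreshGrid method name, the "Insurance" table name, and the "InsuranceID" key column are hypothetical; gridShowData, ds1 and da1 are the asker's own objects), is to re-fill the bound table and re-select the affected row by key instead of nulling out ItemsSource:

        // Sketch: call after every insert/update/delete, passing the key of the
        // row that should end up selected.
        private void RefreshGrid(object selectKeyValue)
        {
            ds1.Tables.Clear();            // drop stale rows instead of re-adding them
            da1.Fill(ds1, "Insurance");    // re-run Pro_Insurance_Master_Select

            // Binding to the DefaultView avoids setting ItemsSource to null,
            // which is what triggers the NullReferenceException here.
            gridShowData.ItemsSource = ds1.Tables["Insurance"].DefaultView;

            // Re-select the inserted/updated row by its primary key.
            foreach (System.Data.DataRowView row in ds1.Tables["Insurance"].DefaultView)
            {
                if (Equals(row["InsuranceID"], selectKeyValue))
                {
                    gridShowData.SelectedItem = row;
                    break;
                }
            }
        }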

    Read the article

  • Any alternative to hide querystring from Html.actionlink on ASP.NET MVC Page?

    - by Madhavi
    Hi, I have a page called SearchDocuments.aspx that displays all the matching documents as hyperlinks in a table, as below. When the user clicks on any particular document, the sample URL is:

        http://localhost:52483/Home/ShowDocument?docID=280

    so the DocumentID is passed as a query string to the controller method ShowDocument, and the ID is visible in the URL. Now, for security purposes, I want to hide this way of passing the query string parameters. What are the alternatives for hiding the DocID from the URL? Appreciate your responses. Thanks.

    Code in the View:

        <tbody>
        <% foreach (var item in Model) { %>
            <tr>
                <% string actionTitle = item.DocumentType.ToLower() == "letter" ? "Request References" : "Request Slides"; %>
                <td>
                    <%= Html.ActionLink(actionTitle, MVC.Home.ShowDocument(item.DocumentID)) %>
                </td>
            </tr>
        <% } %>
        </tbody>

    Code in the Controller:

        [Authorize]
        [HttpGet]
        public virtual ActionResult ShowDocument(int docID)
        {
            Document document = miEntity.GetDocumentByID(docID);
            switch (document.DocType.ToLower())
            {
                case "slide":
                    SlideRequestViewModel slide = new SlideRequestViewModel(docID);
                    return View(MVC.Home.Views.ShowSlideRequest, slide);
                case "letter":
                    RefRequestViewModel rrq = new RefRequestViewModel(docID);
                    return DoShowRefRequest(rrq);
                default:
                    break;
            }
            // Here - let's get back home
            return RedirectToAction(MVC.Home.Default());
        }
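    Worth noting: hiding the ID (via POST, session state, or an encrypted token) only obscures it; whatever the URL looks like, the real protection is an authorization check inside the action itself. A minimal sketch (the CurrentUserOwns helper is hypothetical, since the question doesn't show how document ownership is tracked):

        [Authorize]
        [HttpGet]
        public virtual ActionResult ShowDocument(int docID)
        {
            Document document = miEntity.GetDocumentByID(docID);

            // Server-side check: even if a user edits the query string to some
            // other docID, they only ever see documents they are entitled to.
            if (document == null || !CurrentUserOwns(document))
                return new HttpUnauthorizedResult();

            // ... dispatch on document.DocType as in the original action ...
            return RedirectToAction(MVC.Home.Default());
        }

    With that in place, leaving the ID visible in the query string is usually harmless.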

    Read the article

  • How to implement Gmail OAuth API to send email (especially via SMTP)?

    - by Curtis Gibby
    I'm developing a web application that will send emails on behalf of a logged-in user. I'm trying to use the new Gmail OAuth protocol, announced and described here, to send these emails through the user's Gmail account (preferably using SMTP rather than IMAP, but I'm easy). However, the sample PHP code gives me a couple of problems:

    1. All of the sample code is based on IMAP, not SMTP. Why "support" the SMTP protocol if you're not going to show people how to use it?

    2. The sample code gives me a fatal error from an uncaught Zend exception: it can't find the "INBOX" folder.

        Fatal error: Uncaught exception 'Zend_Mail_Storage_Exception' with message 'cannot change folder, maybe it does not exist' in path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php:467
        Stack trace:
        #0 path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php(248): Zend_Mail_Storage_Imap->selectFolder('INBOX')
        #1 path\to\xoauth-php-samples\three-legged.php(184): Zend_Mail_Storage_Imap->__construct(Object(Zend_Mail_Protocol_Imap))
        #2 {main}

        Next exception 'Zend_Mail_Storage_Exception' with message 'cannot select INBOX, is this a valid transport?' in path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php:254
        Stack trace:
        #0 path\to\xoauth-php-samples\three-legged.php(184): Zend_Mail_Storage_Imap->__construct(Object(Zend_Mail_Protocol_Imap))
        #1 {main}
        thrown in path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php on line 254

    I've verified that I'm getting good OAuth tokens back; I just don't know how to make the actual email transaction happen. This protocol is still rather new, so there's not much unofficial community documentation about it out there, and the official docs are unhelpfully dry stuff about the SMTP RFC. So if anyone can help get this going, I'd greatly appreciate it.

    Note: I've already been able to connect to Gmail's SMTP server via SSL and successfully send an email, provided that the user has given my application his/her Gmail username and password. I'd like to avoid this method, because it encourages phishing and security-minded users won't accept it. This question is not about that.

    Read the article

  • How to Authenticate to Active Directory Services (ADs) using .NET 3.5 / C#

    - by Ranger Pretzel
    After much struggling, I've figured out how to authenticate to my company's Active Directory using just 2 lines of code with the domain, username, and password in .NET 2.0 (in C#):

        // set domain, username, password, and security parameters
        DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, username, password,
            AuthenticationTypes.Secure | AuthenticationTypes.SecureSocketsLayer);
        // force Bind to AD server to authenticate
        object obj = entry.NativeObject;

    If the 2nd line throws an exception, then the credentials and/or parameters were bad (the specific reason can be found in the exception). If there is no exception, then the credentials are good. Trying to do this in .NET 3.5 looks like it should be easy, but has me at a roadblock instead. Specifically, I've been working with this example:

        PrincipalContext domainContext = new PrincipalContext(ContextType.Domain, domain);
        using (domainContext)
        {
            return domainContext.ValidateCredentials(UserName, Password);
        }

    Unfortunately, this doesn't work for me, as I don't have both ContextOptions set to Sealed/Secure and SSL (like I did in the .NET 2.0 code above). There is an alternate constructor for PrincipalContext that allows setting the ContextOptions, but this also requires supplying the Distinguished Name (DN) of a container object, and I don't know exactly what mine is or how I would find out:

        public PrincipalContext(ContextType contextType, string name, string container, ContextOptions options);

        // container:
        //     The container on the store to use as the root of the context. All queries
        //     are performed under this root, and all inserts are performed into this container.
        //     For System.DirectoryServices.AccountManagement.ContextType.Domain and
        //     System.DirectoryServices.AccountManagement.ContextType.ApplicationDirectory
        //     context types, this parameter is the distinguished name of a container object.

    Any suggestions?
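    One way to discover the container DN is to ask the domain itself: the RootDSE entry exposes the default naming context. A sketch, under the assumption that the domain root is an acceptable container ("corp.example.com" is a placeholder, and the ContextOptions flags should be adjusted to what the directory actually requires):

        using System.DirectoryServices;
        using System.DirectoryServices.AccountManagement;

        class AdAuthenticator
        {
            public static bool Authenticate(string userName, string password)
            {
                // Read the domain's default naming context, e.g. "DC=corp,DC=example,DC=com",
                // instead of hard-coding the distinguished name.
                DirectoryEntry rootDse = new DirectoryEntry("LDAP://corp.example.com/RootDSE");
                string container = (string)rootDse.Properties["defaultNamingContext"].Value;

                using (PrincipalContext ctx = new PrincipalContext(
                    ContextType.Domain,
                    "corp.example.com",
                    container,
                    ContextOptions.Negotiate | ContextOptions.Sealing))
                {
                    // Sealing encrypts the channel much as AuthenticationTypes.Secure did
                    // in the .NET 2.0 version; ContextOptions.SecureSocketLayer can be
                    // added if the directory requires SSL.
                    return ctx.ValidateCredentials(userName, password);
                }
            }
        }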

    Read the article

  • System.Net.WebClient doesn't work with Windows Authentication

    - by Peter Hahndorf
    I am trying to use System.Net.WebClient in a WinForms application to upload a file to an IIS6 server which has Windows Authentication as its only authentication method:

        WebClient myWebClient = new WebClient();
        myWebClient.Credentials = new System.Net.NetworkCredential(@"boxname\peter", "mypassword");
        byte[] responseArray = myWebClient.UploadFile("http://localhost/upload.aspx", fileName);

    I get 'The remote server returned an error: (401) Unauthorized'; actually it is a 401.2. Both client and IIS are on the same Windows Server 2003 dev machine. When I try to open the page in Firefox and enter the same correct credentials as in the code, the page comes up. However, when using IE8, I get the same 401.2 error. I tried Chrome and Opera and they both work. I have 'Enable Integrated Windows Authentication' enabled in the IE Internet options. The Security event log has a Failure Audit:

        Logon Failure:
            Reason:                 An error occurred during logon
            User Name:              peter
            Domain:                 boxname
            Logon Type:             3
            Logon Process:          ÈùÄ
            Authentication Package: NTLM
            Workstation Name:       boxname
            Status code:            0xC000006D
            Substatus code:         0x0
            Caller User Name:       -
            Caller Domain:          -
            Caller Logon ID:        -
            Caller Process ID:      -
            Transited Services:     -
            Source Network Address: 127.0.0.1
            Source Port:            1476

    I used Process Monitor and Fiddler to investigate, but to no avail. Why would this work for third-party browsers but not with IE or System.Net.WebClient?
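    The pattern (third-party browsers succeed, IE and WebClient fail, source address 127.0.0.1) is consistent with Windows' loopback-check protection, which the integrated NTLM clients honor and which is documented in KB896861; testing from a second machine, or the DisableLoopbackCheck workaround from that article, would confirm it. Independent of that, it can help to register the credential for the NTLM scheme explicitly instead of assigning a bare NetworkCredential, so the client never attempts the wrong scheme. A sketch, reusing the question's own URL and credentials (the file name is a placeholder):

        using System;
        using System.Net;

        class UploadTest
        {
            static void Main()
            {
                WebClient myWebClient = new WebClient();

                // Bind the credential to the NTLM scheme for this URI prefix.
                CredentialCache cache = new CredentialCache();
                cache.Add(new Uri("http://localhost/"), "NTLM",
                          new NetworkCredential("peter", "mypassword", "boxname"));
                myWebClient.Credentials = cache;

                byte[] responseArray = myWebClient.UploadFile(
                    "http://localhost/upload.aspx", "c:\\file-to-upload.txt");
                Console.WriteLine(System.Text.Encoding.ASCII.GetString(responseArray));
            }
        }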

    Read the article

  • Testing movie with Flash IDE fails to load file from localhost

    - by davgothic
    Hi, I'm just wondering if anybody can help me with my simple but frustrating problem. I have created an SWF that loads an XML file from http://localhost/flash/Projects/MEL/Quiz/Quiz/bin/xml/quiz.xml, but I get this error when running the movie using Test Movie in the Flash IDE:

        Error #2044: Unhandled ioError:. text=Error #2032: Stream Error. URL: http://localhost/flash/Projects/MEL/Quiz/Quiz/bin/xml/quiz.xml
        at Main/loadConfig()[D:\www\webroot\flash\Projects\MEL\Quiz\Quiz\src\Main.as:126]
        at Main/configLoadError()[D:\www\webroot\flash\Projects\MEL\Quiz\Quiz\src\Main.as:143]
        at flash.events::EventDispatcher/dispatchEventFunction()
        at flash.events::EventDispatcher/dispatchEvent()
        at flash.net::URLLoader/onComplete()

    The error I get if I handle the exception is:

        [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032: Stream Error. URL: http://localhost/flash/Projects/MEL/Quiz/Quiz/bin/xml/quiz.xml"]

    The trouble is that running the SWF in a browser locally does work; it only throws these errors in the Flash IDE. I have tried adding a wildcard crossdomain.xml file in my root web directory and setting the SWF publish properties for local playback security to "Allow network only", but neither of these has solved my problem. I know Windows 7 handles localhost name resolution differently compared to previous versions of Windows, but I have even added "127.0.0.1 localhost" to my hosts file, to no avail. Can anyone shed any light on this issue?

    Read the article

  • AWS Amazon EC2 - password-less SSH login for non-root users using PEM keypairs

    - by Mark White
    We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL), and we've set up scripts to allow new slave instances to be auto-included into the clusters by updating their IPs in config files held on S3, then SSHing to the master instance to kick it into downloading the revised config and restarting the service. It's all working nicely, but in testing we're using our master pem for SSH, which means it needs to be stored on an instance. Not good. I want a non-root user that can use an AWS keypair and that has sudo access to run the download-config-and-restart scripts, but nothing else. rbash seems to be the way to go, but I understand this can be insecure unless set up correctly. So what security holes are there in this approach:

    - New AWS keypair created for user.pem (not really called 'user')
    - New user on instances: user
    - Public key for user is in ~user/.ssh/authorized_keys (taken by creating a new instance with user.pem, and copying it from /root/.ssh/authorized_keys)
    - Private key for user is in ~user/.ssh/user.pem
    - 'user' has a login shell of /home/user/bin/rbash
    - ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo
    - /etc/sudoers has entry "user ALL=(root) NOPASSWD:
    - ~user/.bashrc sets PATH to /home/user/bin/ only
    - ~user/.inputrc has 'set disable-completion on' to prevent double-tabbing from 'sudo /' to find paths
    - ~user/ -R is owned by root with read-only access for user, except for ~user/.ssh, which has write access for user (for writing known_hosts), and ~user/bin/*, which are +x
    - Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@<host> sudo <command>'

    Any thoughts would be welcome. Mark...

    Read the article

  • Using Ghostscript in a Webapplication (PDF Thumbnails)

    - by cpt.oneeye
    Hello, I am using the GhostscriptSharp wrapper for C# and Ghostscript. I want to generate thumbnails from PDF files. Further information on the sample code is given here. Various methods are imported from the Ghostscript C DLL "gsdll32.dll":

        [DllImport("gsdll32.dll", EntryPoint = "gsapi_new_instance")]
        private static extern int CreateAPIInstance(out IntPtr pinstance, IntPtr caller_handle);

        [DllImport("gsdll32.dll", EntryPoint = "gsapi_init_with_args")]
        private static extern int InitAPI(IntPtr instance, int argc, IntPtr argv);

        //...and so on

    I am using the GhostscriptWrapper, which uses the methods imported above, to generate the thumbnails in a web application (.NET 2.0):

        protected void Page_Load(object sender, EventArgs e)
        {
            GhostscriptWrapper.GeneratePageThumb("c:\\sample.pdf", "c:\\sample.jpg", 1, 100, 100);
        }

    When I debug the web application in Visual Studio 2008 by hitting F5, it works fine (a new instance of the web server is generated). When I create a Windows Forms application, it works too; the thumbnails get generated. But when I access the application with the web browser directly (http://localhost/mywebapplication/...) it doesn't work: no thumbnails are generated, but no exception is thrown either. I placed gsdll32.dll in the system32 folder of Windows XP. The Ghostscript runtime is installed too. I have given full access in the IIS web project (.NET 2.0). Does anybody know why I can't access Ghostscript from my web application? Are there any security issues with accessing DLL files on the IIS server? Greetings, Klaus
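    The difference between the two working cases and the failing one is the account the code runs under: with F5 and the WinForms app it runs as the logged-in user, while under IIS it runs as the worker-process identity (typically ASPNET on Windows XP), which may lack rights to read the DLL and to write c:\sample.jpg. A quick way to test that theory, sketched here as a diagnostic rather than a production recommendation (the user name and password are placeholders), is to impersonate a known account in web.config:

        <!-- web.config sketch: run requests under a fixed identity so Ghostscript
             inherits that account's file permissions. -->
        <system.web>
          <identity impersonate="true" userName="MACHINE\SomeUser" password="secret" />
        </system.web>

    If that makes the thumbnails appear, granting the worker account access to the DLL and to a writable output folder (for example, a subfolder of the application itself rather than the root of C:\) would be the cleaner fix.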

    Read the article

  • java.lang.NoSuchMethodException: javax.ejb.EJBHome.getHomeHandle()

    - by brianegge
    I'm trying to figure out why I'm getting the following exception when a client app connects to JBoss. It happens on startup, when the client attempts to connect to the server.

        java.lang.ExceptionInInitializerError
        at org.jboss.proxy.ejb.HomeInterceptor.<clinit>(HomeInterceptor.java:77)
        at sun.misc.Unsafe.ensureClassInitialized(Native Method)
        at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:25)
        at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:122)
        at java.lang.reflect.Field.acquireFieldAccessor(Field.java:917)
        at java.lang.reflect.Field.getFieldAccessor(Field.java:898)
        at java.lang.reflect.Field.getLong(Field.java:527)
        at java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1559)
        at java.io.ObjectStreamClass.access$600(ObjectStreamClass.java:47)
        at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:381)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:373)
        at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:268)
        at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:504)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1546)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1460)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1693)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:339)
        at org.jboss.proxy.ClientContainer.readExternal(ClientContainer.java:142)
        at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1753)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1711)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1912)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1836)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1713)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:339)
        at java.rmi.MarshalledObject.get(MarshalledObject.java:135)
        at org.jnp.interfaces.MarshalledValuePair.get(MarshalledValuePair.java:57)
        at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:637)
        at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:572)
        at javax.naming.InitialContext.lookup(InitialContext.java:351)
        at ...start of my code...
        Caused by: java.lang.NoSuchMethodException: javax.ejb.EJBHome.getHomeHandle()
        at java.lang.Class.getMethod(Class.java:1581)
        at org.jboss.proxy.ejb.HomeInterceptor.<clinit>(HomeInterceptor.java:64)
        ... 39 more

    Read the article

  • Unable to debug WCF service in VS2008 after UserNamePasswordValidator fault

    - by lsb
    Hi! I have a WCF service that I secure with a custom UserNamePasswordValidator and message security running over wsHttpBinding. The release code works great. Unfortunately, when I try to run in debug mode after having previously used invalid credentials (the current credentials ARE valid!), VS2008 displays an annoying dialog box (more on this below). A simplified version of the Validate method from my validator might look like the following:

        public override void Validate(string userName, string password)
        {
            if (password != "ABC123")
                throw new FaultException("The password is invalid!");
        }

    The client receives a MessageSecurityException with InnerException set to the FaultException I explicitly threw. This is workable, since my client can display the message text of the original FaultException I wanted the user to see. Unfortunately, in all subsequent service calls VS2008 displays an "Unable to automatically debug..." dialog. The only way I can stop this from happening is to exit VS2008, get back in, and connect to my service using correct credentials. I should also add that this occurs even when I create a brand-new proxy on each and every call, so there's no chance MY channel is faulted when I make a call. It's likely, however, that VS2008 hangs on to the previously faulted channel and tries to use it for debugging purposes. Needless to say, this sucks! The entire reason I'm entering bad credentials is to test the bad-credential handling. Anyway, if anyone has any ideas as to how I can get around this bug (?!?), I'd be very, very appreciative...
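    For what it's worth, one defensive pattern on the client side, sketched below with placeholder names (IMyService, SomeOperation and the "myEndpoint" configuration are assumptions, not the asker's actual contract), is to Abort() rather than Close() a proxy once a security failure surfaces, so no faulted channel ever survives into the next call:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Security;

        [ServiceContract]
        interface IMyService
        {
            [OperationContract]
            void SomeOperation();
        }

        class Client
        {
            static void CallOnce()
            {
                ChannelFactory<IMyService> factory = new ChannelFactory<IMyService>("myEndpoint");
                factory.Credentials.UserName.UserName = "user";
                factory.Credentials.UserName.Password = "ABC123";

                IMyService proxy = factory.CreateChannel();
                try
                {
                    proxy.SomeOperation();
                    ((ICommunicationObject)proxy).Close();
                }
                catch (MessageSecurityException ex)
                {
                    // Close() throws on a faulted channel; Abort() tears it down
                    // cleanly so nothing faulted lingers.
                    ((ICommunicationObject)proxy).Abort();
                    Console.WriteLine(ex.InnerException != null ? ex.InnerException.Message : ex.Message);
                }
            }
        }

    Whether this silences the VS2008 dialog is another matter; if the debugger itself is caching the faulted channel, disabling WCF debugging for the test run may be the only workaround.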

    Read the article
