Search Results

Search found 2981 results on 120 pages for 'recommended'.


  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
Relationship between SRUs and Update releases

As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract. SRUs primarily deliver bug fixes. They may also deliver low-risk feature enhancements. Solaris Updates are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized. They also contain a significant number of new bug fixes, for issues found internally in Oracle and for complex customer bug fixes which require significant "soak" time to ensure their efficacy prior to release.

Changes to SRU and Update Naming Conventions

We're changing the naming convention of Update releases from a date-based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1 for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with those conventions.

No Blackout Periods on Bug Fix Releases

The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers. Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries. This made for a very simple update message - that which releases later is always a superset of that which was delivered previously. However, it had a downside. Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update. That wasn't good for customers awaiting critical bug fixes.

We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain continuous, with the first SRU released after the Update release effectively being a superset of both the Update release and all preceding SRUs*. That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows:

Updates: 11.0                                         11.1                        11.2   etc.
              \                                            \                          \
SRUs:      11.0.1, 11.0.2, ..., 11.0.12, 11.0.13, 11.1.1, 11.1.2, ..., 11.1.x, 11.2.1, etc.

For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e. SRU1 after Solaris 11.1) or later, rather than to the Solaris 11.1 release itself. This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example, to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533

For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU, as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort.

*Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based. Their dependencies will, however, effectively pull in the Update content. Customers maintaining a local repo (e.g. behind their firewall) need to add both the 11.1 content and the relevant SRU content to their repo, to enable the SRU's dependencies to be resolved. Both will be available from the standard Support Repo and from MOS. This is no different from existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the repo to resolve dependencies.
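For reference, a minimal sketch of what this looks like with IPS commands. The exact "entire" version string below is illustrative only - always take it from the README of the SRU you are applying - and the local repository path is an assumption:

# update the system to a specific SRU by pinning the "entire" incorporation
# (version string is illustrative; copy it from the SRU README)
pkg update --accept entire@0.5.11-0.175.1.1.0.4.0

# populate a local repo with both the Update and the SRU content so the
# SRU's dependencies on the Update packages can be resolved
pkgrecv -s https://pkg.oracle.com/solaris/support -d /export/repoSolaris11 '*'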

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007

Find Table without Clustered Index – Find Table with no Primary Key
A clustered index is a very important concept for any table; it impacts performance very heavily. Here is a quick script to find tables without a clustered index.

Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types
Question: "Is VARCHAR(MAX) big enough to store the TEXT field?" Answer: "Yes, VARCHAR(MAX) is big enough to accommodate a TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server. SQL Server 2005 provides backward compatibility for these data types, but it is recommended to use the new data types, which are VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX)."

Limiting Result Sets by Using TABLESAMPLE – Examples
Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and they are not in any order. The sampling can be based on a percentage or on a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set.

User Defined Functions (UDF) Limitations
UDFs have their own advantages and usage, but in this article we will see the limitations of UDFs - things a UDF cannot do, and why stored procedures are considered more flexible than UDFs. This blog post is a good read to learn what the limitations of UDFs are.

Change Database Compatible Level – Backward Compatibility
For a long time SQL Server stayed on compatibility level 80, which is that of SQL Server 2000. However, as soon as SQL Server 2005 was introduced, compatibility became a major issue. Since then Microsoft has been releasing new versions every 2-3 years, so changing the compatibility level is an ever-popular topic. In this blog post, we learn how to do it using T-SQL. We can also do the same using SSMS, and here is the blog post for that: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio.

Constraint on VARCHAR(MAX) Field To Limit It Certain Length
How can I limit a VARCHAR(MAX) field to a maximum length of 12,500 characters only? The question was valid, as the application in question allowed only 12,500 characters. First of all, this requirement is a bit strange, but if someone wants to do the same, they can do it as described in this blog post.

2008

UNPIVOT Table Example
Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the concept in very simple words.

Create Default Constraint Over Table Column
A simple, straight-to-script blog post - I still use it quite often for my own reference.

UDF – Get the Day of the Week Function
It took me 4 iterations to find this very simple function which can immediately get the day of the week in a single line.

2009

Find Hostname and Current Logged In User Name
There are two tricks listed in this blog post where users can find out the hostname and currently logged-in user name immediately and very easily.

Interesting Observation of Logon Trigger On All Servers
When I was doing a project, I made an interesting observation of a logon trigger executing multiple times. It was absolutely unexpected for me! As I was logging in only once, naturally, I was expecting the entry only once. However, it appeared multiple times on different threads - indeed an eccentric phenomenon at first sight!

Difference Between Candidate Keys and Primary Key
One needs to be very careful in selecting the Primary Key, as an incorrect selection can adversely impact the database architecture and future normalization. For a Candidate Key to qualify as a Primary Key, it should be non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed. I would like to have your feedback on not changing a Primary Key.

Create Multiple Filegroup For Single Database
Why should one create multiple filegroups for a database, and what are the advantages of doing so? In this blog post, I explain this in detail.

List All Objects Created on All Filegroups in Database
In this blog post we discuss the essential question - "How can I find which object belongs to which filegroup? Is there any way to know this?"

2010

DATE and TIME in SQL Server 2008
When DATE is converted to DATETIME, it adds the time of midnight. When TIME is converted to DATETIME, it adds the date of 1900. This is something to consider if you are going to run scripts from SQL Server 2008 against an earlier version with CONVERT.

Disabled Index and Update Statistics
If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time statistics are updated for the system, the statistics for disabled indexes are also updated.

Precision of SMALLDATETIME – A 1 Minute Precision
The precision of the datatype SMALLDATETIME is 1 minute. It discards the seconds by rounding up or rounding down any seconds greater than zero.

2011

Getting Columns Headers without Result Data – SET FMTONLY ON
SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the result set has only the headers of the results but no data.

Copy Database from Instance to Another Instance – Copy Paste in SQL Server
SQL Server has a feature which copies a database from one instance to another, and it can be automated as well using SSIS. Make sure you have SQL Server Agent turned on, as this feature will create a job.

Puzzle – SELECT * vs SELECT COUNT(*)
If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, then you should read this blog post.

Creating All New Database with Full Recovery Model
This blog post is based on a very interesting story where the user wants to do something by default for every single new database created. The model database is a secret weapon which should be used very carefully and with proper evaluation. If used carefully, it can be very beneficial when we need every newly created database to behave in a certain fashion.

2012

In the year 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a three-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz.

Guess the Next Value – Puzzle 1
Guess the Next Value – Puzzle 2
Guess the Next Value – Puzzle 3

Can anyone remember their final day of schooling? This is probably a silly question because - of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read the five-part series on the subject of developer training:

Developer Training - Importance and Significance - Part 1
Developer Training – Employee Morals and Ethics – Part 2
Developer Training – Difficult Questions and Alternative Perspective - Part 3
Developer Training – Various Options for Developer Training – Part 4
Developer Training – A Conclusive Summary - Part 5

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
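To make two of the items above concrete - TABLESAMPLE and the length-capped VARCHAR(MAX) column - here is a small T-SQL sketch; the table and column names (dbo.Orders, dbo.Articles, Body) are made up for illustration:

-- Random sampling of roughly 10 percent of a table (SQL Server 2005+).
-- Sampling is page-based, so the row count returned varies from run to run.
SELECT *
FROM dbo.Orders TABLESAMPLE (10 PERCENT);

-- Cap a VARCHAR(MAX) column at 12,500 characters with a CHECK constraint.
ALTER TABLE dbo.Articles
ADD CONSTRAINT ck_Articles_Body_MaxLen CHECK (LEN(Body) <= 12500);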

    Read the article

  • Best Practices - which domain types should be used to run applications

    - by jsavit
This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers, Oracle VM Server for SPARC and the advent of SPARC SuperCluster have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment.

Review: division of labor and types of domain

Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with its own role:

Control domain - the management control point for the server, used to configure domains and manage resources. It is the first domain to boot on a power-up, is an I/O domain, and is usually a service domain as well.

I/O domain - a domain that has been assigned physical I/O devices: a PCIe root complex, a PCI device, or an SR-IOV (Single Root I/O Virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer.

Service domain - provides virtual network and disk devices to guest domains.

Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

Typical deployment

A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain, which is also the sole I/O and service domain, and some number of guest domains using virtual I/O; a minimal sketch of such a setup follows below. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device doesn't result in an application outage. This is also used for "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
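For illustration, a hedged sketch of creating a simple guest domain served by the control domain; the domain, service, volume, and backend device names (guest1, primary-vds0, the ZFS volume path) are made up, and resource sizes are arbitrary:

# create virtual disk and network services in the control domain ("primary")
ldm add-vds primary-vds0 primary
ldm add-vsw net-dev=net0 primary-vsw0 primary

# export a disk backend, then build a guest that uses only virtual I/O
ldm add-domain guest1
ldm add-vcpu 8 guest1
ldm add-memory 8G guest1
ldm add-vdsdev /dev/zvol/dsk/rpool/guest1 vol1@primary-vds0
ldm add-vdisk vdisk1 vol1@primary-vds0 guest1
ldm add-vnet vnet1 primary-vsw0 guest1
ldm bind guest1
ldm start guest1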
Changing application deployment patterns

The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with two I/O buses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. A domain with either a DIO or SR-IOV device is an I/O domain. In summary: not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for the performance of their own applications.

However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided.

To recap:

Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration.

I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains.

Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.

The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications optimized for performance.

These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

Summary

Higher-capacity T-series servers have made it more attractive to use them for applications with high resource requirements.
New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide higher performance for critical applications.
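As a companion to the summary, a hedged sketch of assigning a physical device to make a domain an I/O domain; the device name and domain name (appdom) are illustrative, and DIO/SR-IOV availability depends on platform and firmware, so take the actual device name from the ldm list-io output:

# list PCIe buses, root complexes and endpoint devices available for assignment
ldm list-io

# Direct I/O: assign a PCIe endpoint device to a stopped, unbound domain,
# making it an I/O domain with native access to that device
ldm add-io /SYS/MB/PCIE5 appdom
ldm bind appdom
ldm start appdom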

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

Once upon a time in a country far, far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past, but it never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendees' survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information - like that children should come, too - and I was sold on this community gathering. According to other long-term members of the LUGM, it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great!

LUGM - user group meeting on 15.06.2013 in L'Escalier

Quick overview of Linux & the LUGM

With a little bit of delay the LUGM meeting officially started with a quick overview and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity around the island some years ago, but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities - especially taking into consideration that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune.

OpenELEC Mediacenter

Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR), as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote-controlled via infra-red thanks to Debian, VDR and LIRC. With its EPG, scheduled recordings and general multimedia-centre features it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.
LUGM - our next generation of Linux users (15.06.2013)

Project Evil Genius (PEG)

Don't be scared by the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius: it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he has had with other Linux users from all over the world. Within the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in - highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu).

Exploring the beach of L'Escalier after the meeting

'After-party' at the beach of L'Escalier

Puh, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic.

Resumé & outlook

It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well-chosen, with enough space for everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and the preparation of lunch. I'm kind of sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • Notifications for Expiring DBSNMP Passwords

    - by Courtney Llamas
Most user accounts these days have a password profile on them that automatically expires the password after a set number of days. Depending on your company's security requirements, this may be as little as 30 days or as long as 365 days, although typically it falls between 60-90 days. For a normal user, this can cause a small interruption in your day as you have to go get your password reset by an admin. When this happens to privileged accounts, such as the DBSNMP account that is responsible for monitoring database availability, it can cause bigger problems. In Oracle Enterprise Manager 12c you may notice the error message "ORA-28002: the password will expire within 5 days" when you connect to a target, or worse you may get "ORA-28001: the password has expired". If you wait too long, your monitoring will fail because the password is locked out. Wouldn't it be nice if we could get an alert 10 days before our DBSNMP password expired? Thanks to Oracle Enterprise Manager 12c Metric Extensions (ME), you can! See the Oracle Enterprise Manager Cloud Control Administrator's Guide for more information on Metric Extensions.

To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click on Create. On the General Properties screen select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single-instance databases, you may need to create one for each. In this example we will create a Cluster Database metric. Enter a Name for the ME and a Display Name, then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will collect this metric every 1 day. Notice that for a metric collected every day, we can determine the exact time we want to collect.

On the Adapter page, enter the query that you wish to execute. In this example we will use the query below, which specifically checks for the DBSNMP user expiring within 10 days. Of course, you can adjust this query to alert for any user that can cause an outage, such as an application account or a service account such as RMAN.

select username, account_status, trunc(expiry_date-sysdate) days_to_expire
from dba_users
where username = 'DBSNMP'
and expiry_date is not null;

The next step is to create the columns to store the data returned from the query. Click Add and add a column for each of the fields in the same order that data is returned. The table below will help you complete the column additions.

Name           Display Name           Column Type  Value Type  Metric Category  Unit
Username       User Name              Key          String      Security
AccountStatus  Account Status         Data         String      Security
DaysToExpire   Days Until Expiration  Data         Number      Security         Days

When creating the DaysToExpire column, you can add default thresholds here for Warning and Critical (say < 10 and < 5). When all columns have been added, click Next.

On the Credentials page, you can choose to use the default monitoring credentials or specify new credentials. We will use the default credentials established for our target (dbsnmp).

The next step is to test your Metric Extension. Click on Add to select a target for testing, then click Select. Now click the button Run Test to execute the test against the selected target(s). We can see in the example below that the Metric Extension has executed and returned a value of 68 days to expire. Click Next to proceed. Review the metric extension in the final screen and click Finish. The metric will be created in Editable status.
Select the metric, click Actions and select Deployable Draft. You can do this once more to move it to Published. Finally, we want to apply this metric to a target. When managing many targets, it's best to add your metric to a template; for details on adding a Metric Extension to a template, see the Administrator's Guide. For this example, we will deploy it to a target directly. Select Actions / Deploy to Targets. Click Add, select the target you wish to deploy to, and click Submit. Once deployment is complete, we can go to the target and view the Metric & Collection Settings to see the new metric and its thresholds.

After some time, you will find the metric has collected, and the days to expiration for the DBSNMP user can be seen in the All Metrics view. For metrics collected once per day, you may have to wait up to 24 hours to see the metric and current severity. In the example below, the current severity is Clear (green check) as it is not scheduled to expire within 10 days.

To test the notification, we can edit the thresholds for the new metric so they trigger an alert. Our password expires in 139 days, so we'll change our Warning to 140 and leave Critical at 5; in our example we also changed the collection time to every 5 minutes. At the next collection, you'll find that the current severity changes to a Warning, and any related Incident Rules would be triggered to create an Incident or Notification as desired.

Now that you get a notification that your DBSNMP password is about to expire, you can use the OEM Command Line Interface (EM CLI) verb update_db_password to change it at both the database target and the OEM target in one step. The caveat is that you must know the existing password to use the update_db_password command. To learn more about EM CLI, see the Oracle Enterprise Manager Command Line Interface Guide. Below is an example of changing the password with the update_db_password verb.

$ ./emcli update_db_password -target_name=emrep -target_type=oracle_database -user_name=dbsnmp -change_at_target=yes -change_all_references=yes
Enter value for old_password :
Enter value for new_password :
Enter value for retype_new_password :
Successfully submitted a job to change the password in Enterprise Manager and on the target database: "emrep"
Execute "emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84" to check the status of the job.
Search for job name "CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84" on the Jobs home page to check job execution details.

The subsequent job created will typically run quickly enough that a blackout is not needed; however, if you submit a script with many targets to change, your job may run slower, so adding a blackout to the script is recommended (see the sketch below).

$ ./emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84
Name                                             Type            Job ID                            Execution ID                      Scheduled            Completed            TZ Offset  Status     Status ID  Owner   Target Type      Target Name
CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84  ChangePassword  FA66C1C4D663297FE0437656F20ACC84  FA66C1C4D665297FE0437656F20ACC84  2014-05-28 09:39:12  2014-05-28 09:39:18  GMT-07:00  Succeeded  5          SYSMAN  oracle_database  emrep

After implementing the above Metric Extension and using the EM CLI update_db_password verb, you will be able to stay on top of your DBSNMP password changes without experiencing an unplanned monitoring outage.
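For reference, a hedged sketch of wrapping the password change in a blackout with EM CLI. The blackout name, duration, and reason are made up, and the exact -schedule syntax varies by EM CLI release, so verify it with emcli help create_blackout before use:

$ ./emcli create_blackout -name="dbsnmp_pwd_change" -add_targets="emrep:oracle_database" -reason="DBSNMP password change" -schedule="duration:0:30"
$ ./emcli update_db_password -target_name=emrep -target_type=oracle_database -user_name=dbsnmp -change_at_target=yes -change_all_references=yes
$ ./emcli stop_blackout -name="dbsnmp_pwd_change"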

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

Configuring the size of the pool in the data source is somewhere between an art and a science - this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically, the unused connections were all released) was changed to shrink by half of the unused connections.

The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.
When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason - that there are times when the workload is lighter - there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of an unused existing connection and get a new one with the correct state when needed.

So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The application should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum number of connections concurrently used. In general, you want the size to be big enough so that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended. In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
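To make the knobs concrete, here is a hedged sketch of the relevant pool-sizing elements in a WLS JDBC data source module descriptor. The values are illustrative, not recommendations; the data source name is made up; and min-capacity requires WLS 10.3.6 or later:

<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>myDS</name>
  <jdbc-connection-pool-params>
    <!-- defer connection creation out of server boot -->
    <initial-capacity>0</initial-capacity>
    <!-- floor that shrinking will not go below (10.3.6+) -->
    <min-capacity>5</min-capacity>
    <!-- sized from the observed ActiveConnectionsHighCount plus spike headroom -->
    <max-capacity>25</max-capacity>
    <!-- how often unused connections are considered for shrinking, in seconds -->
    <shrink-frequency-seconds>900</shrink-frequency-seconds>
  </jdbc-connection-pool-params>
</jdbc-data-source>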

    Read the article

  • Three key recommendations for stabilizing your RAC cluster environment

    - by Allen Gao
Over the years, customers have often asked what they can do proactively to keep their Oracle RAC clusters stable. Based on the experience of Oracle Support, the MOS note "Top 11 Things to do NOW to Stabilize your RAC Cluster Environment" (DOC ID 1344678.1) collects the most effective of these measures. This post highlights three of them: doing these three things well will prevent a large share of the cluster problems we see, and when problems do occur, they will make diagnosis much faster. They are worth the attention of every RAC DBA.

1. Proactively apply the latest Patch Set Update (PSU)

Oracle periodically releases Patch Set Updates (PSUs) containing recommended, well-tested fixes, including fixes for cluster-related bugs. Staying current on PSUs avoids hitting problems that have already been fixed. Two points deserve attention when patching a RAC cluster:

A PSU must be applied to the Grid Infrastructure (GI) home as well as the RDBMS home. The GI PSU includes the corresponding RDBMS PSU content, but applying a PSU only to the GI home does not patch the databases; make sure both homes end up at the same PSU level.

RAC PSUs support rolling installation: the GI and RDBMS homes on each node can be patched in turn while the rest of the cluster stays in service, so applying a PSU does not require a full cluster outage.

For more information on PSUs, see the following MOS notes:
NOTE 854428.1   Intro to Patch Set Updates (PSU)
NOTE 1082394.1  11.2.0.X Grid Infrastructure PSU Known Issues
NOTE 756671.1   Oracle Recommended Patches -- Oracle Database
NOTE 161549.1   Oracle Database, Networking and Grid Agent Patches for Microsoft Platforms
NOTE 810394.1   RAC and Oracle Clusterware Best Practices and Starter Kit

2. On pre-11gR2 clusters, set diagwait to 13

As of 2012, roughly 45% of customers were still running pre-11gR2 clusters, and for those releases Oracle recommends setting diagwait to 13. The setting serves two purposes. By default, OPROCD runs with a 1-second interval and a 0.5-second margin, so a scheduling stall of just over 1.5 seconds can reboot a node. Setting diagwait to 13 increases the OPROCD margin to 10 seconds (diagwait minus the CSS reboot time, which defaults to 3 seconds), meaning a node must stall for roughly 11 seconds (the 1-second interval plus the 10-second margin) before OPROCD reboots it; this greatly reduces false node evictions on heavily loaded systems. It also gives the clusterware more time to flush diagnostic data to disk before a node goes down, which makes eviction problems much easier to diagnose.

In 11gR2 (starting with 11.2.0.1), this setting is no longer needed: the clusterware was rearchitected so that the relevant timeouts are managed internally, and diagwait should not be changed. The setting is stored in the Oracle Cluster Registry (OCR); you can check the current value with:

# $CLUSTERWARE_HOME/bin/crsctl get css diagwait

For more information on diagwait, see the following MOS notes:
NOTE 567730.1  Changes in Oracle Clusterware on Linux with the 10.2.0.4 Patchset
NOTE 559365.1  Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions
NOTE 810394.1  RAC and Oracle Clusterware Best Practices and Starter Kit

3. Install OS Watcher Black Box (OSWbb) and Cluster Health Monitor (CHM)

Many cluster problems, node evictions in particular, can only be diagnosed with OS-level data from the time of the incident, and that data is lost unless something was collecting it. OS Watcher Black Box (OSWbb) is a lightweight set of shell scripts that periodically samples OS statistics (CPU, memory, I/O, network, process data) and keeps a rolling archive; by default it samples every 30 seconds, and for RAC clusters Oracle recommends running it with a 20-second interval. Its overhead is negligible compared to the diagnostic value it provides. Starting with 11.2.0.3, Cluster Health Monitor (CHM) is installed with Grid Infrastructure on most platforms (HP-UX excepted) and collects similar OS data; even where CHM is present, Oracle still recommends running OSWbb as well. Configure OSWbb to start automatically at boot (see NOTE 580513.1 "How To Start OS Watcher Black Box Every System Boot") so the data is there when you need it.

For more information on OSWbb and CHM, see the following MOS notes:
NOTE 301137.1   OS Watcher Black Box User Guide
NOTE 1328466.1  Cluster Health Monitor (CHM) FAQ
NOTE 810394.1   RAC and Oracle Clusterware Best Practices and Starter Kit

Summary

These are the top three recommendations for keeping a RAC / Oracle Clusterware environment stable. For the full list of recommendations, see:
NOTE 1344678.1  Top 11 Things to do NOW to Stabilize your RAC Cluster Environment
You can also visit the MOS RAC/Scalability community to discuss RAC and Oracle Clusterware questions with Oracle support engineers.
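For pre-11gR2 clusters, a hedged sketch of the diagwait change procedure based on NOTE 559365.1 follows. The clusterware must be down cluster-wide while the value is changed; verify the exact steps in the note for your version before running this:

# stop CRS on every node (run as root on each node)
crsctl stop crs

# from one node, set diagwait and verify the new value
crsctl set css diagwait 13 -force
crsctl get css diagwait

# restart CRS on every node
crsctl start crs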

    Read the article

  • OpenVPN Error : TLS Error: local/remote TLS keys are out of sync: [AF_INET]

    - by Lucidity
First off, thanks for reading this; I appreciate any and all suggestions. I am having some serious problems reconnecting to my OpenVPN client using Riseup.net's VPN. I have spent a few days banging my head against the wall in attempts to set this up on my iOS devices... but that is a whole other issue. I was however able to set it up on my Mac OS X - specifically on my Windows Vista 32-bit Boot Camp VM - with relatively little trouble. To originally connect I only had to modify the recommended config file very slightly (config file included at the end of this post):

- I had to enter the code directly into my config file
- And change "dev tap" to "dev tun"

So I was connected. (Note - I did test to ensure the VPN was actually working after I originally connected; it was. I also verified the .pem file (inserted as the coding in my config file) for authenticity.) I left the VPN running. My computer went to sleep. Today I went to use the internet expecting (possibly incorrectly - I am now unsure if I was wrong to leave it running) to still be connected to the VPN. However I saw immediately I was not. I went to reconnect. And was (am) unable to. My logs after attempting to connect (and getting a connection failed dialog box) show everything working as it should (as far as I can tell) until the end, where I get the following lines:

Mon Sep 23 21:07:49 2013 us=276809 Initialization Sequence Completed
Mon Sep 23 21:07:49 2013 us=276809 MANAGEMENT: >STATE:1379995669,CONNECTED,SUCCESS, OMITTED
Mon Sep 23 21:22:50 2013 us=390350 Authenticate/Decrypt packet error: packet HMAC authentication failed
Mon Sep 23 21:23:39 2013 us=862180 TLS Error: local/remote TLS keys are out of sync: [AF_INET] VPN IP OMITTED [2]
Mon Sep 23 21:23:57 2013 us=395183 Authenticate/Decrypt packet error: packet HMAC authentication failed
Mon Sep 23 22:07:41 2013 us=296898 TLS: soft reset sec=0 bytes=513834601/0 pkts=708032/0
Mon Sep 23 22:07:41 2013 us=671299 VERIFY OK: depth=1, C=US, O=Riseup Networks, L=Seattle, ST=WA, CN=Riseup Networks, [email protected]
Mon Sep 23 22:07:41 2013 us=671299 VERIFY OK: depth=0, C=US, O=Riseup Networks, L=Seattle, ST=WA, CN=vpn.riseup.net
Mon Sep 23 22:07:46 2013 us=772508 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Mon Sep 23 22:07:46 2013 us=772508 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Mon Sep 23 22:07:46 2013 us=772508 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Mon Sep 23 22:07:46 2013 us=772508 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Mon Sep 23 22:07:46 2013 us=772508 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA

So I have searched for a solution online and I have included what I have attempted below; however, I fear (know) I am not knowledgeable enough in this area to fix this myself. I apologize in advance for my ignorance. I do tech support for a living, but not this kind of tech support, unfortunately.

Other notes and troubleshooting done:
- Windows Firewall is disabled completely, as well as other anti-virus programs
- Tor is disabled completely
- No proxies running
- Time is correct in all locations
- Router firmware is up to date
- Able to connect to the internet, and as far as I can tell all necessary ports are open
- No settings have been altered since I was able to connect successfully
- Ethernet as well as wifi connections attempted; both resulted in the same error
I also tried adding the following lines to my config file (without success or change in error):

persist-key
persist-tun
proto tcp (after reading that this error generally occurs on UDP connections, and is extremely rare on TCP)
resolv-retry infinite (thinking the connection may have timed out, since the issues occurred after leaving the VPN connected during about 10 hrs of the computer in sleep mode)

All attempts resulted in the exact same error code included at the top of this post. The original suggestions I found online stated:

(regarding the TLS error) - This error should resolve itself within 60 seconds; if not, wait 120 seconds and try again. (Which isn't the case here...)

(regarding the "out of sync" error) - If you continue to get "out of sync" errors and the link does not come up, then it means that something is probably wrong with your config file. You must use either ping and ping-restart on both sides of the connection, or keepalive on the server side of a client/server connection, in order to gracefully recover from "local/remote TLS keys are out of sync" errors.

I wouldn't be surprised if my config file is lacking or not correct. However, I can confirm I followed the instructions to a tee, and I was able to connect originally (and have not modified my settings or config file between when I was able to connect and when the error began occurring). I have a very simple config file:

client
dev tun
tun-mtu 1500
remote vpn.riseup.net
auth-user-pass
ca RiseupCA.pem
redirect-gateway
verb 4

<ca>
-----BEGIN CERTIFICATE-----
[OMITTED]
-----END CERTIFICATE-----
</ca>

I would really appreciate any help or suggestions. I am at a total loss here; I know I'm asking a lot. Though I am a new user on this site, I help others on many forums, including Microsoft's support community and especially Apple's support communities, so I will definitely pass on anything I learn here to help others. Thanks so so so much in advance for reading this.
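For what it's worth, the ping-based recovery that the quoted advice refers to would look something like the following on the client side. The interval values are illustrative, and the server (riseup's end) would need keepalive or matching ping/ping-restart directives for a full two-sided recovery:

# send a ping over the tunnel every 10 seconds; if nothing is received
# for 60 seconds, restart the connection (forcing a clean TLS renegotiation)
ping 10
ping-restart 60
persist-key
persist-tun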

    Read the article

  • Can't get FTP to work on centOS 5.6

    - by josi
    Hi guys, I have been trying for a few hours to install FTP and get it to work. I did yum install ftp and yum install vsftpd; they both installed and are running, but when I try to use FileZilla or some other client I just can't connect. I've tried connecting on port 21 and port 990... nothing! These are my iptables rules:

        # Firewall configuration written by system-config-securitylevel
        # Manual customization of this file is not recommended.
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
        -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
        -A RH-Firewall-1-INPUT -p esp -j ACCEPT
        -A RH-Firewall-1-INPUT -p ah -j ACCEPT
        -A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 25 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 110 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 990 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 465 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 646 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 993 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 995 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 3306 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 10009 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 7778 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5000 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 25566 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8765 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8192 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8123 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 20 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 23877 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 9091 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 51413 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 10011 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 30033 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    Any help would be much appreciated! lsof -i :21 by itself shows nothing; running it with a stray "." argument just lists working directories:

        [root@ks3000420 ~]# lsof -i :21 .
        COMMAND     PID USER  FD   TYPE DEVICE SIZE/OFF   NODE NAME
        bash       9964 root  cwd   DIR    8,1     4096 483329 .
        bash      11608 root  cwd   DIR    8,1     4096 483329 .
        bash      13550 root  cwd   DIR    8,1     4096 483329 .
        vi        14117 root  cwd   DIR    8,1     4096 483329 .
        sftp-serv 15261 root  cwd   DIR    8,1     4096 483329 .
        sftp-serv 15477 root  cwd   DIR    8,1     4096 483329 .
        bash      19074 root  cwd   DIR    8,1     4096 483329 .
        lsof      19100 root  cwd   DIR    8,1     4096 483329 .
        lsof      19101 root  cwd   DIR    8,1     4096 483329 .
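    Nothing listening on port 21 usually means vsftpd is not actually bound there, or the control connection works while passive data connections are dropped by the firewall. A hedged sketch of the usual checks and fix; the 40000-40100 passive range is an arbitrary example, not anything vsftpd requires:

        # Confirm vsftpd is really up and bound to port 21
        service vsftpd status
        netstat -tlnp | grep :21

        # Load the FTP connection-tracking helper so RELATED data
        # connections match the existing ESTABLISHED,RELATED rule
        modprobe ip_conntrack_ftp

        # Pin passive-mode data connections to a known port range
        echo 'pasv_enable=YES'     >> /etc/vsftpd/vsftpd.conf
        echo 'pasv_min_port=40000' >> /etc/vsftpd/vsftpd.conf
        echo 'pasv_max_port=40100' >> /etc/vsftpd/vsftpd.conf

        # Open that range in the firewall, then restart the daemon
        iptables -I RH-Firewall-1-INPUT -p tcp -m state --state NEW \
                 -m tcp --dport 40000:40100 -j ACCEPT
        service vsftpd restart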

    Read the article

  • Making Thunar the default file browser without hiding the desktop icons

    - by Manu
    I really dislike Ubuntu's default file browser, Nautilus, and decided to opt for a lighter alternative (Thunar or Xfe). I used the following script to change the default to Thunar, but now all my icons are gone from the desktop! The files are still there, in /home/myid/Desktop, but they do not appear. Is there a way to show them, or is this a consequence of removing Nautilus as the default file browser? Can I modify the following script (copied from https://help.ubuntu.com/...) in order to keep the icons?

        ## Originally written by aysiu from the Ubuntu Forums
        ## This is GPL'ed code
        ## So improve it and re-release it

        ## Define portion to make Thunar the default if that appears to be the appropriate action
        makethunardefault() {
        ## I went with --no-install-recommends because
        ## I didn't want to bring in a whole lot of junk,
        ## and Jaunty installs recommended packages by default.
        echo -e "\nMaking sure Thunar is installed\n"
        sudo apt-get update && sudo apt-get install thunar --no-install-recommends

        ## Does it make sense to change to the directory?
        ## Or should all the individual commands just reference the full path?
        echo -e "\nChanging to application launcher directory\n"
        cd /usr/share/applications

        echo -e "\nMaking backup directory\n"
        ## Does it make sense to create an entire backup directory?
        ## Should each file just be backed up in place?
        sudo mkdir nonautilusplease

        echo -e "\nModifying folder handler launcher\n"
        sudo cp nautilus-folder-handler.desktop nonautilusplease/
        ## Here I'm using two separate sed commands
        ## Is there a way to string them together to have one
        ## sed command make two replacements in a single file?
        sudo sed -i 's/nautilus --no-desktop/thunar/g' nautilus-folder-handler.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-folder-handler.desktop

        echo -e "\nModifying browser launcher\n"
        sudo cp nautilus-browser.desktop nonautilusplease/
        sudo sed -i 's/nautilus --no-desktop --browser/thunar/g' nautilus-browser.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-browser.desktop

        echo -e "\nModifying computer icon launcher\n"
        sudo cp nautilus-computer.desktop nonautilusplease/
        sudo sed -i 's/nautilus --no-desktop/thunar/g' nautilus-computer.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-computer.desktop

        echo -e "\nModifying home icon launcher\n"
        sudo cp nautilus-home.desktop nonautilusplease/
        sudo sed -i 's/nautilus --no-desktop/thunar/g' nautilus-home.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-home.desktop

        echo -e "\nModifying general Nautilus launcher\n"
        sudo cp nautilus.desktop nonautilusplease/
        sudo sed -i 's/Exec=nautilus/Exec=thunar/g' nautilus.desktop

        ## This last bit I'm not sure should be included
        ## See, the only thing that doesn't change to the
        ## new Thunar default is clicking the files on the desktop,
        ## because Nautilus is managing the desktop (so technically
        ## it's not launching a new process when you double-click
        ## an icon there).
        ## So this kills the desktop management of icons completely
        ## Making the desktop pretty useless... would it be better
        ## to keep Nautilus there instead of nothing? Or go so far
        ## as to have Xfce manage the desktop in Gnome?
        echo -e "\nChanging base Nautilus launcher\n"
        sudo dpkg-divert --divert /usr/bin/nautilus.old --rename /usr/bin/nautilus && sudo ln -s /usr/bin/thunar /usr/bin/nautilus

        echo -e "\nRemoving Nautilus as desktop manager\n"
        killall nautilus

        echo -e "\nThunar is now the default file manager. To return Nautilus to the default, run this script again.\n"
        }

        restorenautilusdefault() {
        echo -e "\nChanging to application launcher directory\n"
        cd /usr/share/applications

        echo -e "\nRestoring backup files\n"
        sudo cp nonautilusplease/nautilus-folder-handler.desktop .
        sudo cp nonautilusplease/nautilus-browser.desktop .
        sudo cp nonautilusplease/nautilus-computer.desktop .
        sudo cp nonautilusplease/nautilus-home.desktop .
        sudo cp nonautilusplease/nautilus.desktop .

        echo -e "\nRemoving backup folder\n"
        sudo rm -r nonautilusplease

        echo -e "\nRestoring Nautilus launcher\n"
        sudo rm /usr/bin/nautilus && sudo dpkg-divert --rename --remove /usr/bin/nautilus

        echo -e "\nMaking Nautilus manage the desktop again\n"
        nautilus --no-default-window &
        ## The only change that isn't undone is the installation of Thunar
        ## Should Thunar be removed? Or just kept in?
        ## Don't want to load the script with too many questions?
        }

        ## Make sure that we exit if any commands do not complete successfully.
        ## Thanks to nanotube for this little snippet of code from the early
        ## versions of UbuntuZilla
        set -o errexit
        trap 'echo "Previous command did not complete successfully. Exiting."' ERR

        ## This is the main code
        ## Is it necessary to put an elseif in here? Or is
        ## redundant, since the directory pretty much
        ## either exists or it doesn't?
        ## Is there a better way to keep track of whether
        ## the script has been run before?
        if [[ -e /usr/share/applications/nonautilusplease ]]; then
        restorenautilusdefault
        else
        makethunardefault
        fi
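    As a lighter-touch alternative, the folder association can be switched without diverting the nautilus binary at all, which is the step that takes the desktop icons down with it. A hedged sketch; the desktop file may be named Thunar.desktop or thunar.desktop depending on the package, so check /usr/share/applications first:

        # Make Thunar the handler for folders; Nautilus stays installed
        xdg-mime default Thunar.desktop inode/directory

        # Verify the association
        xdg-mime query default inode/directory

        # If the icons have vanished, hand the desktop back to Nautilus
        nautilus --no-default-window &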

    Read the article

  • straight to grub prompt on boot

    - by cheshirekow
    I am very lost. I did a fresh install of Ubuntu 10.04 on a laptop. The first reboot was fine. I ran all the recommended upgrades, and now every time I start I get just a grub>_ prompt. No error message, just the prompt, and a little banner at the top giving GRUB's version and telling me that I have minimal bash-style editing. I've tried:

    1) Re-installing grub via sudo grub-install sda (there is only one disk with only two partitions, one primary and one for swap)

    2) Changing

        GRUB_HIDDEN_TIMEOUT=10
        GRUB_TIMEOUT=30
        GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=90"
        GRUB_CMDLINE_LINUX="rootdelay=90"

    in /etc/default/grub. No luck. I can boot with the following:

        grub> set root=(hd0,1)
        grub> probe (hd0,1) -u c00fadde-f7e8-45e7-a4da-0235605f756
        grub> linux /boot/vmlinuz-2.6.32-21-generic root=UUID=c00fadde-f7e8-45e7-a4da-0235605f756 rootdelay=90
        grub> initrd /boot/initrd.img-2.6.32-21-generic
        grub> boot

    and then everything seems to be fine from there. From the grub prompt, if I try

        configfile /boot/grub/grub.cfg

    the screen clears and I get another grub prompt. So, seriously, what could the problem be?

    edit: Full text of /boot/grub/grub.cfg:

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by /usr/sbin/grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          load_env
        fi
        set default="0"
        if [ ${prev_saved_entry} ]; then
          set saved_entry=${prev_saved_entry}
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z ${boot_once} ]; then
            saved_entry=${chosen}
            save_env saved_entry
          fi
        }
        function recordfail {
          set recordfail=1
          if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi
        }
        insmod ext2
        set root='(hd0,1)'
        search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=640x480
          insmod gfxterm
          insmod vbe
          if terminal_output gfxterm ; then true ; else
            # For backward compatibility with versions of terminal.mod that don't
            # understand terminal_output
            terminal gfxterm
          fi
        fi
        insmod ext2
        set root='(hd0,1)'
        search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
        set locale_dir=($root)/boot/grub/locale
        set lang=en
        insmod gettext
        if [ ${recordfail} = 1 ]; then
          set timeout=-1
        else
          set timeout=30
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        set menu_color_normal=white/black
        set menu_color_highlight=black/light-gray
        ### END /etc/grub.d/05_debian_theme ###

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          linux /boot/vmlinuz-2.6.32-21-generic root=UUID=c00fadde-f7e8-45e7-a4da-0235c605f756 ro rootdelay=90 rootdelay=90
          initrd /boot/initrd.img-2.6.32-21-generic
        }
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          echo 'Loading Linux 2.6.32-21-generic ...'
          linux /boot/vmlinuz-2.6.32-21-generic root=UUID=c00fadde-f7e8-45e7-a4da-0235c605f756 ro single rootdelay=90
          echo 'Loading initial ramdisk ...'
          initrd /boot/initrd.img-2.6.32-21-generic
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        menuentry "Memory test (memtest86+)" {
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          linux16 /boot/memtest86+.bin
        }
        menuentry "Memory test (memtest86+, serial console 115200)" {
          insmod ext2
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set c00fadde-f7e8-45e7-a4da-0235c605f756
          linux16 /boot/memtest86+.bin console=ttyS0,115200n8
        }
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        if [ ${timeout} != -1 ]; then
          if sleep --verbose --interruptible 10 ; then
            set timeout=0
          fi
        fi
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

    output of update-grub:

        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.32-21-generic
        Found initrd image: /boot/initrd.img-2.6.32-21-generic
        Found memtest86+ image: /boot/memtest86+.bin
        done

    contents of /boot:

        total 14280
        -rw-r--r-- 1 root root  640617 2010-04-16 09:01 abi-2.6.32-21-generic
        -rw-r--r-- 1 root root  115847 2010-04-16 09:01 config-2.6.32-21-generic
        drwxr-xr-x 3 root root    4096 2010-09-08 02:42 grub
        -rw-r--r-- 1 root root 7968754 2010-09-02 01:49 initrd.img-2.6.32-21-generic
        -rw-r--r-- 1 root root  160280 2010-03-23 05:37 memtest86+.bin
        -rw-r--r-- 1 root root 1687378 2010-04-16 09:01 System.map-2.6.32-21-generic
        -rw-r--r-- 1 root root    1196 2010-04-16 09:03 vmcoreinfo-2.6.32-21-generic
        -rw-r--r-- 1 root root 4029792 2010-04-16 09:01 vmlinuz-2.6.32-21-generic
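    One detail that stands out is "grub-install sda": grub-install expects a device path, so the bare name may not have updated the MBR of the disk that actually boots. A hedged sketch of the usual repair, assuming the disk is /dev/sda and the root partition /dev/sda1:

        # Write the GRUB boot code to the disk's MBR (note the /dev/ prefix)
        sudo grub-install /dev/sda

        # Regenerate grub.cfg from /etc/default/grub and /etc/grub.d
        sudo update-grub

        # Compare the real filesystem UUID against what grub.cfg searches for;
        # a single wrong character makes every menu entry fail silently
        sudo blkid /dev/sda1
        grep fs-uuid /boot/grub/grub.cfg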

    Read the article

  • How to convert html textfield/area data to server-side txt file? [closed]

    - by olijake
    How can I make a script that will take the text/data in an HTML textfield/textarea, send it to the server, and save it there as a .txt file for storage?

    NOTE: I am hosting a website (for testing purposes) using Apache 2.2 on a Windows 7 machine. I downloaded PHP version 5.4.7, but have not yet installed it on my server (not sure if I will need it, but also not sure how to install it).

    1st problem: saving text to the server. An HTML page/section with a title textfield, a text textarea, and a submit button: you would enter a title and the text/notes you need, then press the submit button to have the contents of the textarea stored on the server as a .txt file named after the title.

    2nd problem: opening text from the server. HTML with a list of all txt files, or a textfield for entering a title plus a submit button that sends the title of the requested .txt file to the server, which would then load it up on the page.

    Here is what I have so far (let me know if there is something that I should change or if something just isn't correct in the index.html code I have right now):

        <!DOCTYPE HTML>
        <html>
        <head>
            <title>Insert Title</title>
            <meta http-equiv="Content-Type" content="Text/HTML; charset=UTF-8"/>
        </head>
        <body>
            <form method="post" action="save.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data">
                <input type="text" name="title"><br/>
                <textarea rows="20" cols="100" id="text" name="text"></textarea><br/>
                <input type="submit" name="submit" value="Submit Text to Server">
            </form><br/>
            <hr style="width: 100%; height: 4px;"><br/>
            <form method="post" action="open.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data">
                <input type="text" name="title"><br/>
                <input type="submit" name="submit" value="Submit Txt File Request">
            </form><br/>
            <div>Opened text file displays here or goes on another page</div>
        </body>
        </html>

    I plan on using a server-side language/script, but ANYTHING that gets the job done is fine. I already tried looking into ASP/JScript/PHP, but have had some trouble implementing them on my server (i.e. getting the modules loaded and telling the server which file types to parse). I know this may be an extremely easy fix, but in that case, hopefully you wouldn't mind helping me out a little :). If it turns out that this is MUCH more complicated than I expect, feel free to let me know, so I don't waste my time running in circles.

    I appreciate any help/assistance that you can provide. Thanks, Jake

    EDIT: Wrong Apache version.

    In response to the comments/closing of this thread, my question is: "How exactly do I install the PHP module on the Apache server? Is this even possible? Is this even recommended?" (in case I wasn't clear enough already).

    To clarify: I understand the basics of PHP; I just have trouble with INSTALLING PHP on the Apache server. (I have used PHP before, but never successfully on Apache, so far...)

    For my script, I wrote something similar to this already (using fopen() and a few other commands):

        <?php
        // open the existing notes.txt read-only (the handle went unused in my attempt)
        $fh = fopen("notes.txt", "r");
        // overwrite notes.txt with a test string
        file_put_contents("notes.txt", "teststring1");
        ?>

    I have used JavaScript for this task before as well (although I prefer PHP and server-side languages):

        <script language="javascript">
        function WriteToFile(){
            // ActiveX file writing works only in Internet Explorer
            var fso = new ActiveXObject("Scripting.FileSystemObject");
            var s = fso.CreateTextFile("C:\\NewFile.txt", true);
            var text = document.getElementById("TextArea1").innerText;
            s.WriteLine(text);
            s.WriteLine('***********************');
            s.Close();
        }
        </script>
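    Once the PHP module is working, the save side of the form above is only a few lines. A minimal sketch, assuming the first form's action is changed to save.php and the second form's to open.php; the file is named after the title field, and basename() keeps the name from escaping the directory:

        <?php
        // save.php -- store the posted textarea as <title>.txt
        $title = isset($_POST['title']) ? $_POST['title'] : 'untitled';
        $text  = isset($_POST['text'])  ? $_POST['text']  : '';
        $file  = basename($title) . '.txt';   // strip any path components
        file_put_contents($file, $text);
        echo 'Saved ' . htmlspecialchars($file);
        ?>

        <?php
        // open.php -- echo a previously saved note back to the browser
        $file = basename(isset($_POST['title']) ? $_POST['title'] : '') . '.txt';
        echo is_readable($file)
            ? nl2br(htmlspecialchars(file_get_contents($file)))
            : 'No such note.';
        ?>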

    Read the article

  • My linux server takes more than an hour to boot. Suggestions?

    - by jamieb
    I am building a CentOS 5.4 system that boots off a compact flash card, using a card reader that emulates an IDE drive. It literally takes about an hour to boot. The ultra-slow part occurs while GRUB is loading the kernel; once that's done, the rest of the boot process only takes about a minute to get to a login prompt. Does anyone have any suggestions? I suspect that it may have to do with UDMA. Everything IDE-related in my BIOS seems to check out. hdparm reports a read performance of 1.77 MB/s. Ouch! (But even at that rate, it still shouldn't take an hour to decompress and load the kernel.)

        [root@server ~]# hdparm -tT /dev/hdc

        /dev/hdc:
         Timing cached reads:   2444 MB in 2.00 seconds = 1222.04 MB/sec
         Timing buffered disk reads:   6 MB in 3.39 seconds = 1.77 MB/sec

    Trying to enable DMA is a no-go though:

        [root@server ~]# hdparm -d1 /dev/hdc

        /dev/hdc:
         setting using_dma to 1 (on)
         HDIO_SET_DMA failed: Operation not permitted
         using_dma = 0 (off)

    Here are some command outputs that might help.

    System:

        [root@server ~]# uname -a
        Linux server.localdomain 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:47:32 EDT 2009 i686 i686 i386 GNU/Linux

    PCI info:

        [root@server ~]# lspci -v
        00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02)
                Subsystem: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub
                Flags: bus master, fast devsel, latency 0
                Capabilities: [e0] Vendor Specific Information

        00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
                Subsystem: Intel Corporation 82945G/GZ Integrated Graphics Controller
                Flags: bus master, fast devsel, latency 0, IRQ 10
                Memory at fdf00000 (32-bit, non-prefetchable) [size=512K]
                I/O ports at ff00 [size=8]
                Memory at d0000000 (32-bit, prefetchable) [size=256M]
                Memory at fdf80000 (32-bit, non-prefetchable) [size=256K]
                Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable-
                Capabilities: [d0] Power Management version 2

        00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI])
                Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1
                Flags: bus master, medium devsel, latency 0, IRQ 16
                I/O ports at fe00 [size=32]

        00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI])
                Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2
                Flags: bus master, medium devsel, latency 0, IRQ 17
                I/O ports at fd00 [size=32]

        00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI])
                Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3
                Flags: bus master, medium devsel, latency 0, IRQ 18
                I/O ports at fc00 [size=32]

        00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI])
                Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4
                Flags: bus master, medium devsel, latency 0, IRQ 19
                I/O ports at fb00 [size=32]

        00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI])
                Subsystem: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller
                Flags: bus master, medium devsel, latency 0, IRQ 16
                Memory at fdfff000 (32-bit, non-prefetchable) [size=1K]
                Capabilities: [50] Power Management version 2
                Capabilities: [58] Debug port

        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode])
                Flags: bus master, fast devsel, latency 0
                Bus: primary=00, secondary=01, subordinate=01, sec-latency=32
                I/O behind bridge: 0000d000-0000dfff
                Memory behind bridge: fde00000-fdefffff
                Prefetchable memory behind bridge: 00000000fdd00000-00000000fdd00000
                Capabilities: [50] #0d [0000]

        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
                Subsystem: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge
                Flags: bus master, medium devsel, latency 0
                Capabilities: [e0] Vendor Specific Information

        00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01) (prog-if 80 [Master])
                Subsystem: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller
                Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 17
                I/O ports at <unassigned>
                I/O ports at <unassigned>
                I/O ports at <unassigned>
                I/O ports at <unassigned>
                I/O ports at f800 [size=16]
                Capabilities: [70] Power Management version 2

        00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
                Subsystem: Intel Corporation 82801G (ICH7 Family) SMBus Controller
                Flags: medium devsel, IRQ 17
                I/O ports at 0500 [size=32]

        01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
                Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
                Flags: bus master, medium devsel, latency 32, IRQ 18
                I/O ports at de00 [size=256]
                Memory at fdeff000 (32-bit, non-prefetchable) [size=256]
                Capabilities: [50] Power Management version 2

        01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
                Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
                Flags: bus master, medium devsel, latency 32, IRQ 17
                I/O ports at dc00 [size=256]
                Memory at fdefe000 (32-bit, non-prefetchable) [size=256]
                Capabilities: [50] Power Management version 2

        01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
                Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
                Flags: bus master, medium devsel, latency 32, IRQ 19
                I/O ports at da00 [size=256]
                Memory at fdefd000 (32-bit, non-prefetchable) [size=256]
                Capabilities: [50] Power Management version 2

    hdparm output:

        [root@server ~]# hdparm /dev/hdc

        /dev/hdc:
         multcount    = 0 (off)
         IO_support   = 0 (default 16-bit)
         unmaskirq    = 0 (off)
         using_dma    = 0 (off)
         keepsettings = 0 (off)
         readonly     = 0 (off)
         readahead    = 256 (on)
         geometry     = 8146/16/63, sectors = 8211168, start = 0

        [root@server ~]# hdparm -I /dev/hdc

        /dev/hdc:

        ATA device, with non-removable media
                Model Number:       InnoDisk Corp. - iCF4000 4GB
                Serial Number:      20091023AACA70000753
                Firmware Revision:  081107
        Standards:
                Supported: 5
                Likely used: 6
        Configuration:
                Logical         max     current
                cylinders       8146    8146
                heads           16      16
                sectors/track   63      63
                --
                CHS current addressable sectors:    8211168
                LBA    user addressable sectors:    8211168
                device size with M = 1024*1024:        4009 MBytes
                device size with M = 1000*1000:        4204 MBytes (4 GB)
        Capabilities:
                LBA, IORDY(can be disabled)
                Standby timer values: spec'd by Vendor
                R/W multiple sector transfer: Max = 2   Current = 2
                DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4
                     Cycle time: min=120ns recommended=120ns
                PIO: pio0 pio1 pio2 pio3 pio4
                     Cycle time: no flow control=120ns  IORDY flow control=120ns
        Commands/features:
                Enabled Supported:
                   *    Power Management feature set
                   *    WRITE_BUFFER command
                   *    READ_BUFFER command
                   *    NOP cmd
                   *    CFA feature set
                   *    Mandatory FLUSH_CACHE
        HW reset results:
                CBLID- above Vih
                Device num = 0
        CFA power mode 1:
                enabled and required by some commands
                Maximum current = 100ma
        Checksum: correct
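    Interestingly, the -I output shows the card itself advertising DMA modes up to udma4 (with udma2 currently selected), so the refusal is more likely in the adapter or the kernel's IDE driver than in the card. A hedged diagnostic sketch to separate the pieces before changing anything:

        # Time a raw sequential read about the size of a kernel+initrd; if
        # this is also ~1.77 MB/s, the bottleneck is the adapter/driver
        # path rather than something GRUB-specific
        dd if=/dev/hdc of=/dev/null bs=1M count=64

        # See what the IDE layer decided about DMA at boot
        dmesg | grep -i -e dma -e hdc

        # Check the transfer mode the driver actually negotiated
        grep -i dma /proc/ide/hdc/settings 2>/dev/null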

    Read the article

  • How to install MariaDB rpms in CentOS 6.4 using rpm (not yum cmd) + handling mysql-libs conflicts

    - by Pat C
    I need to script the installation of MariaDB using the rpm command on CentOS 6.4. I can't use yum, since it's going to be an offline install with no access to the repository. The only MySQL package installed is mysql-libs, as various other CentOS packages depend on it. When I did a test install of MariaDB with yum, it correctly accounted for mysql-libs and uninstalled it at the end, since MariaDB could handle the dependencies once installed:

        [root@new-host-6 ~]# yum install MariaDB-client MariaDB-common MariaDB-compat MariaDB-devel MariaDB-server MariaDB-shared
        Loaded plugins: downloadonly, fastestmirror, refresh-packagekit, security, verify
        Loading mirror speeds from cached hostfile
         * base: mirrors.kernel.org
         * extras: mirror.keystealth.org
         * updates: mirror.umd.edu
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package MariaDB-client.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-common.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-compat.x86_64 0:5.5.32-1 will be obsoleting
        ---> Package MariaDB-devel.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-server.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-shared.x86_64 0:5.5.32-1 will be obsoleting
        ---> Package mysql-libs.x86_64 0:5.1.66-2.el6_3 will be obsoleted
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
         Package           Arch      Version     Repository   Size
        ================================================================================
        Installing:
         MariaDB-client    x86_64    5.5.32-1    mariadb      10 M
         MariaDB-common    x86_64    5.5.32-1    mariadb      23 k
         MariaDB-compat    x86_64    5.5.32-1    mariadb     2.7 M
             replacing  mysql-libs.x86_64 5.1.66-2.el6_3
         MariaDB-devel     x86_64    5.5.32-1    mariadb     5.6 M
         MariaDB-server    x86_64    5.5.32-1    mariadb      34 M
         MariaDB-shared    x86_64    5.5.32-1    mariadb     1.1 M
             replacing  mysql-libs.x86_64 5.1.66-2.el6_3

        Transaction Summary
        ================================================================================
        Install       6 Package(s)

        Total download size: 53 M
        Is this ok [y/N]: y
        Downloading Packages:
        (1/6): MariaDB-5.5.32-centos6-x86_64-client.rpm    |  10 MB    00:06
        (2/6): MariaDB-5.5.32-centos6-x86_64-common.rpm    |  23 kB    00:00
        (3/6): MariaDB-5.5.32-centos6-x86_64-compat.rpm    | 2.7 MB    00:02
        (4/6): MariaDB-5.5.32-centos6-x86_64-devel.rpm     | 5.6 MB    00:06
        (5/6): MariaDB-5.5.32-centos6-x86_64-server.rpm    |  34 MB    00:23
        (6/6): MariaDB-5.5.32-centos6-x86_64-shared.rpm    | 1.1 MB    00:00
        --------------------------------------------------------------------------------
        Total                                     1.3 MB/s |  53 MB    00:40
        warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY
        Retrieving key from https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
        Importing GPG key 0x1BB943DB:
         Userid: "Daniel Bartholomew (Monty Program signing key) <[email protected]>"
         From : https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
        Is this ok [y/N]: y
        Running rpm_check_debug
        Running Transaction Test
        Transaction Test Succeeded
        Running Transaction
        Warning: RPMDB altered outside of yum.
          Installing : MariaDB-compat-5.5.32-1.x86_64    1/7
          Installing : MariaDB-common-5.5.32-1.x86_64    2/7
          Installing : MariaDB-server-5.5.32-1.x86_64    3/7
        chown: cannot access `/var/lib/mysql': No such file or directory

        PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
        To do so, start the server, then issue the following commands:

        '/usr/bin/mysqladmin' -u root password 'new-password'
        '/usr/bin/mysqladmin' -u root -h new-host-6 password 'new-password'

        Alternatively you can run:
        '/usr/bin/mysql_secure_installation'

        which will also give you the option of removing the test databases and
        anonymous user created by default. This is strongly recommended for
        production servers.

        See the MariaDB Knowledgebase at http://kb.askmonty.org or the
        MySQL manual for more instructions.

        Please report any problems with the '/usr/bin/mysqlbug' script!
        The latest information about MariaDB is available at http://mariadb.org/.
        You can find additional information about the MySQL part at:
        http://dev.mysql.com
        Support MariaDB development by buying support/new features from
        Monty Program Ab. You can contact us about this at [email protected].
        Alternatively consider joining our community based development effort:
        http://kb.askmonty.org/en/contributing-to-the-mariadb-project/

          Installing : MariaDB-devel-5.5.32-1.x86_64     4/7
          Installing : MariaDB-client-5.5.32-1.x86_64    5/7
          Installing : MariaDB-shared-5.5.32-1.x86_64    6/7
          Erasing    : mysql-libs-5.1.66-2.el6_3.x86_64  7/7
          Verifying  : MariaDB-common-5.5.32-1.x86_64    1/7
          Verifying  : MariaDB-server-5.5.32-1.x86_64    2/7
          Verifying  : MariaDB-devel-5.5.32-1.x86_64     3/7
          Verifying  : MariaDB-client-5.5.32-1.x86_64    4/7
          Verifying  : MariaDB-compat-5.5.32-1.x86_64    5/7
          Verifying  : MariaDB-shared-5.5.32-1.x86_64    6/7
          Verifying  : mysql-libs-5.1.66-2.el6_3.x86_64  7/7

        Installed:
          MariaDB-client.x86_64 0:5.5.32-1    MariaDB-common.x86_64 0:5.5.32-1
          MariaDB-compat.x86_64 0:5.5.32-1    MariaDB-devel.x86_64 0:5.5.32-1
          MariaDB-server.x86_64 0:5.5.32-1    MariaDB-shared.x86_64 0:5.5.32-1

        Replaced:
          mysql-libs.x86_64 0:5.1.66-2.el6_3

        Complete!

    My question is: what is the equivalent way to install the MariaDB packages using only the rpm command, as opposed to yum? If I do rpm -ivh MariaDB*.rpm, I get a ton of messages like the following about conflicts with mysql-libs:

        file /etc/my.cnf from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64
        file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64

    I then used the --force option to install the MariaDB rpms and uninstalled mysql-libs. I didn't get any weird messages, but I'm not sure that is the cleanest way to handle the conflicts and do the install. So can someone confirm that installing MariaDB with the following rpm commands is equivalent to using yum to install the packages and handle the mysql-libs conflicts/removal:

        rpm -ivh --force MariaDB*.rpm
        rpm -e mysql-libs

    Thanks for any input!
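    For what it's worth, plain rpm can usually resolve this the same way yum did, without --force, because MariaDB-compat declares that it obsoletes mysql-libs. A hedged sketch; whether the Obsoletes tag is honored in a single transaction depends on the exact package metadata, so test it on a scratch box first:

        # Upgrade mode (-U) processes Obsoletes:, so mysql-libs should be
        # erased in the same transaction that installs MariaDB-compat
        rpm -Uvh MariaDB-*.rpm

        # If rpm still reports conflicts, remove mysql-libs first without
        # chasing its dependents (MariaDB-compat restores the shared
        # libraries they need), then install everything in one transaction
        rpm -e --nodeps mysql-libs
        rpm -ivh MariaDB-*.rpm

        # Sanity check: nothing should still require a missing libmysqlclient
        rpm -q --whatrequires 'libmysqlclient.so.16()(64bit)'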

    Read the article

  • Windows 7 cannot join samba domain

    - by Antonis Christofides
    I have a 3.5.6 samba server with an LDAP backend (both on Debian 6.0). I've been successfully adding Windows XP machines to the domain for years. I am now trying to add Windows 7. I have made the recommended registry changes, but I don't have any success so far. Here is what happens:

    1. I go to computer name, select "Domain" instead of "Workgroup", type in the domain name, and click OK. It asks me for the username and password of an account that can add computers to the domain; I enter them. After about 40 seconds, I get the following message:

        The following error occurred attempting to join the domain "ITIA":
        The specified computer account could not be found. Contact an administrator
        to verify the account is in the domain. If the account has been deleted
        unjoin, reboot, and rejoin the domain.

    Despite this, the samba server successfully creates the computer account.

    2. Therefore, if I try again a second time, without deleting the already created computer account, I get a different error:

        The following error occurred attempting to join the domain "ITIA":
        The specified account already exists.

    (Note that until a while ago samba wasn't configured to automatically create computer accounts. What I did whenever I wanted an XP machine to join was to create the account manually. When I first attempted to solve the Windows 7 join problem, I set samba up to do this automatically, as this is what most people do, as I understand it, and I thought it might be related. I haven't attempted to add an XP machine since I made this change, so I don't know if it works, but whether it works or not, the problem remains.)

    Update 1: Here are the relevant parts of smb.conf:

        [global]
            panic action = /usr/share/samba/panic-action %d
            workgroup = ITIA
            server string = Itia file server
            announce as = NT
            interfaces = 147.102.160.1
            volume = %h
            passdb backend = ldapsam:ldap://ldap.itia.ntua.gr:389
            ldap admin dn = uid=samba,ou=daemons,dc=itia,dc=ntua,dc=gr
            ldap ssl = off
            ldap suffix = dc=itia,dc=ntua,dc=gr
            ldap user suffix = ou=people
            ldap group suffix = ou=groups
            ldap machine suffix = ou=computers
            unix password sync = no
            add machine script = smbldap-useradd -w -i %u
            log file = /var/log/samba/samba-log.all
            log level = 3
            max log size = 5000
            syslog = 2
            socket options = SO_KEEPALIVE TCP_NODELAY
            encrypt passwords = true
            password level = 1
            security = user
            domain master = yes
            local master = no
            wins support = yes
            domain logons = yes
            idmap gid = 1000-2000

    Update 2: The server has a single network interface, eth1 (plus an unused eth0 that shows up only in the kernel boot messages), and two IP addresses: the main one, 147.102.160.1, and an additional one, 147.102.160.37, that comes up with "ip addr add 147.102.160.37/32 dev eth1" (used only for a web site that has a different certificate than the other web sites served from the same machine). One of the problems I recently faced was that samba was using the latter IP address. I fixed that by adding the "interfaces = 147.102.160.1" statement in smb.conf. Now:

        acheloos:/etc/apache2# tcpdump host 147.102.160.40 and not port 5900
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
        13:13:56.549048 IP lithaios.itia.civil.ntua.gr.netbios-dgm > 147.102.160.255.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.549056 ARP, Request who-has acheloos2.itia.civil.ntua.gr tell lithaios.itia.civil.ntua.gr, length 46
        13:13:56.549091 ARP, Reply acheloos2.itia.civil.ntua.gr is-at 00:10:4b:b4:9e:59 (oui Unknown), length 28
        13:13:56.549324 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.549608 IP lithaios.itia.civil.ntua.gr.netbios-dgm > acheloos2.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.549741 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.550364 IP lithaios.itia.civil.ntua.gr.netbios-dgm > acheloos.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.550468 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)

    (acheloos2 is the second IP address, 147.102.160.37.) The above dump occurs when I click "OK" (to join the domain), up until it asks me for the username and password of a user that can join the domain. I don't know why the client is contacting the second IP address. I tried temporarily deactivating it, but I still saw some related ARP traffic (though I think no IP traffic).
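    For reference, the "recommended registry changes" for joining Windows 7 to an NT4-style Samba domain are usually these two LanmanWorkstation values; a hedged sketch, worth re-checking against the Samba wiki for 3.5.x:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters]
        "DomainCompatibilityMode"=dword:00000001
        "DNSNameResolutionRequired"=dword:00000000

    If those are already in place, the join traffic hitting the secondary address is worth ruling out: alongside the existing interfaces line, "bind interfaces only = yes" in [global] keeps smbd and nmbd off 147.102.160.37 entirely.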

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy against ping, but still pingable

    - by skriatok
    I'm in DMZ mode, so I'm firewalling myself; the stealth tests mostly pass, but I get failing results from Shields Up saying that I answer pings. Yesterday I couldn't make a connection to game servers work because ping blocking was enabled (on the router). I disabled it, but the pinging persists even despite my firewall. What is the connection between me and my router in DMZ mode (it applies to my machine; there is a bunch of others behind the router firewall too)? Why does the router setting affect whether I'm pingable, and why do the rules in my iptables for this scenario not work once the router stops blocking ping? Please ignore the commented rules; I uncomment them as I need them. These two should do the job, right?

        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
        echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

    Here are my iptables:

        #!/bin/sh
        # Begin /bin/firewall-start

        # Insert connection-tracking modules (not needed if built into the kernel).
        #modprobe ip_tables
        #modprobe iptable_filter
        #modprobe ip_conntrack
        #modprobe ip_conntrack_ftp
        #modprobe ipt_state
        #modprobe ipt_LOG

        # allow local-only connections
        iptables -A INPUT -i lo -j ACCEPT
        # free output on any interface to any ip for any service
        # (equal to -P ACCEPT)
        iptables -A OUTPUT -j ACCEPT
        # permit answers on already established connections
        # and permit new connections related to established ones (eg active-ftp)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        #Gamespy&NWN
        #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT

        # Log everything else: What's Windows' latest exploitable vulnerability?
        iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT"

        # set a sane policy: everything not accepted > /dev/null
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

        # be verbose on dynamic ip-addresses (not needed in case of static IP)
        echo 2 > /proc/sys/net/ipv4/ip_dynaddr

        # disable ExplicitCongestionNotification - too many routers are still
        # ignorant
        echo 0 > /proc/sys/net/ipv4/tcp_ecn

        #ping death
        echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

        # If you are frequently accessing ftp-servers or enjoy chatting you might
        # notice certain delays because some implementations of these daemons have
        # the feature of querying an identd on your box for your username for
        # logging. Although there's really no harm in this, having an identd
        # running is not recommended because some implementations are known to be
        # vulnerable.
        # To avoid these delays you could reject the requests with a 'tcp-reset':
        #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset
        #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT

        # To log and drop invalid packets, mostly harmless packets that came in
        # after netfilter's timeout, sometimes scans:
        #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix "FIREWALL:INVALID"
        #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP

        # End /bin/firewall-start

    Active ruleset:

        bash-4.1# iptables -L -n -v
        Chain INPUT (policy DROP 38 packets, 2228 bytes)
         pkts bytes target prot opt in  out source    destination
            0     0 ACCEPT all  --  lo  *   0.0.0.0/0 0.0.0.0/0
          844  542K ACCEPT all  --  *   *   0.0.0.0/0 0.0.0.0/0  state RELATED,ESTABLISHED
           38  2228 LOG    all  --  *   *   0.0.0.0/0 0.0.0.0/0  LOG flags 0 level 4 prefix `FIREWALL:INPUT'
            0     0 ACCEPT all  --  lo  *   0.0.0.0/0 0.0.0.0/0
            0     0 ACCEPT all  --  *   *   0.0.0.0/0 0.0.0.0/0  state RELATED,ESTABLISHED
           38  2228 LOG    all  --  *   *   0.0.0.0/0 0.0.0.0/0  LOG flags 0 level 4 prefix `FIREWALL:INPUT'

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
         1158  111K ACCEPT all  --  *  *   0.0.0.0/0 0.0.0.0/0
            0     0 ACCEPT all  --  *  *   0.0.0.0/0 0.0.0.0/0

    Active ruleset (after editing iptables into the suggested form):

        bash-4.1# iptables -L -n -v
        Chain INPUT (policy DROP 2567 packets, 172K bytes)
         pkts bytes target prot opt in  out source    destination
           49  4157 ACCEPT all  --  lo  *   0.0.0.0/0 0.0.0.0/0
         412K  441M ACCEPT all  --  *   *   0.0.0.0/0 0.0.0.0/0  state RELATED,ESTABLISHED
         2567  172K LOG    all  --  *   *   0.0.0.0/0 0.0.0.0/0  LOG flags 0 level 4 prefix `FIREWALL:INPUT'
            0     0 DROP   icmp --  *   *   0.0.0.0/0 0.0.0.0/0  icmp type 8

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 312K packets, 25M bytes)
         pkts bytes target prot opt in out source destination

    Simultaneous screenshots from my phone (the pinger) and from my laptop (being pinged):

        http://dl.dropbox.com/u/4160051/slckwr/pingfrom%20mobile.jpg
        http://dl.dropbox.com/u/4160051/slckwr/tailsyslog.jpg
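    Two things are worth separating here. In DMZplus mode the 2Wire still owns the public address at the link level, so it answers ARP and can answer ICMP itself, meaning an outside tester may see ping replies that never reach this machine. And in the script above, the echo-request DROP is appended after the LOG rule, so any ping that does arrive is at least logged first. A hedged sketch for making the host side airtight and observable; eth0 is an assumption, substitute the real interface:

        # Put the ICMP drop at the very top of INPUT so nothing outruns it
        iptables -I INPUT 1 -p icmp --icmp-type echo-request -j DROP

        # Watch the packet counter on that rule while running Shields Up;
        # if it stays at zero but the test still sees replies, the router
        # is the one answering
        iptables -L INPUT -n -v | grep icmp

        # Confirm on the wire which echo-requests actually arrive
        tcpdump -ni eth0 'icmp[icmptype] = icmp-echo'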

    Read the article

  • Accessing a broken mdadm raid

    - by CarstenCarsten
    Hi! I used a Western Digital MyBook World (a SOHO NAS running Linux) as backup for my Linux box. Suddenly, the MyBook World does not boot up any more. So I opened the box, removed the hard disk, put it into an external USB HDD case, and connected it to my Linux box.

        [  530.640301] usb 2-1: new high speed USB device using ehci_hcd and address 3
        [  530.797630] scsi7 : usb-storage 2-1:1.0
        [  531.794844] scsi 7:0:0:0: Direct-Access     WDC WD75 00AAKS-00RBA0 PQ: 0 ANSI: 2
        [  531.796490] sd 7:0:0:0: Attached scsi generic sg3 type 0
        [  531.797966] sd 7:0:0:0: [sdc] 1465149168 512-byte logical blocks: (750 GB/698 GiB)
        [  531.800317] sd 7:0:0:0: [sdc] Write Protect is off
        [  531.800327] sd 7:0:0:0: [sdc] Mode Sense: 38 00 00 00
        [  531.800333] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [  531.803821] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [  531.803836]  sdc: sdc1 sdc2 sdc3 sdc4
        [  531.815831] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [  531.815842] sd 7:0:0:0: [sdc] Attached SCSI disk

    The dmesg output looks normal, but I was wondering why the hard disk was not mounted at all, and why there are four different partitions on it. fdisk showed the following:

        root@ubuntu:/home/ubuntu# fdisk /dev/sdc

        WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
        switch off the mode (command 'c') and change display units to sectors (command 'u').

        Command (m for help): p

        Disk /dev/sdc: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00007c00

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               4         369     2939895   fd  Linux raid autodetect
        /dev/sdc2             370         382      104422+  fd  Linux raid autodetect
        /dev/sdc3             383         505      987997+  fd  Linux raid autodetect
        /dev/sdc4             506       91201   728515620   fd  Linux raid autodetect

    Oh no! Everything seems to have been created as mdadm software RAID. Calling mdadm --examine on the different partitions seems to confirm that. I think the only partition I am interested in is /dev/sdc4 (because it is the largest), but nevertheless I called mdadm --examine on every partition.

        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : 5626a2d8:070ad992:ef1c8d24:cd8e13e4
          Creation Time : Wed Feb 20 00:57:49 2002
             Raid Level : raid1
          Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
             Array Size : 2939776 (2.80 GiB 3.01 GB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 1
            Update Time : Sun Nov 21 11:05:27 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 4c90bc55 - correct
                 Events : 16682

              Number   Major   Minor   RaidDevice State
        this     0       8        1        0      active sync   /dev/sda1
           0     0       8        1        0      active sync   /dev/sda1
           1     1       0        0        1      faulty removed

        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc2
        /dev/sdc2:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : 9734b3ee:2d5af206:05fe3413:585f7f26
          Creation Time : Wed Feb 20 00:57:54 2002
             Raid Level : raid1
          Used Dev Size : 104320 (101.89 MiB 106.82 MB)
             Array Size : 104320 (101.89 MiB 106.82 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 2
            Update Time : Wed Oct 27 20:19:08 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 55560b40 - correct
                 Events : 9884

              Number   Major   Minor   RaidDevice State
        this     0       8        2        0      active sync   /dev/sda2
           0     0       8        2        0      active sync   /dev/sda2
           1     1       0        0        1      faulty removed

        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc3
        /dev/sdc3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : 08f30b4f:91cca15d:2332bfef:48e67824
          Creation Time : Wed Feb 20 00:57:54 2002
             Raid Level : raid1
          Used Dev Size : 987904 (964.91 MiB 1011.61 MB)
             Array Size : 987904 (964.91 MiB 1011.61 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 3
            Update Time : Sun Nov 21 11:05:27 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 39717874 - correct
                 Events : 73678

              Number   Major   Minor   RaidDevice State
        this     0       8        3        0      active sync
           0     0       8        3        0      active sync
           1     1       0        0        1      faulty removed

        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4
        /dev/sdc4:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : febb75ca:e9d1ce18:f14cc006:f759419a
          Creation Time : Wed Feb 20 00:57:55 2002
             Raid Level : raid1
          Used Dev Size : 728515520 (694.77 GiB 746.00 GB)
             Array Size : 728515520 (694.77 GiB 746.00 GB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 4
            Update Time : Sun Nov 21 11:05:27 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 2f36a392 - correct
                 Events : 519320

              Number   Major   Minor   RaidDevice State
        this     0       8        4        0      active sync
           0     0       8        4        0      active sync
           1     1       0        0        1      faulty removed

    If I read the output correctly, the second member of each mirror was marked faulty and removed. Is there ANY way to see the contents of the largest partition? Or to see which files are broken? I see that everything is RAID1, which is only mirroring, so this should essentially be a normal partition. I am anxious about doing anything with mdadm, for fear of destroying the data on the hard disk. I would be very thankful for any help.
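    Since these arrays use version 0.90 metadata, the superblock sits at the end of each partition and the filesystem starts at offset 0, so a read-only look is fairly low-risk. A hedged sketch; the filesystem type is an assumption until blkid reports it:

        # Identify the filesystem on the big data partition
        blkid /dev/sdc4

        # Option 1: assemble the degraded raid1 and mount it read-only
        mdadm --assemble --run /dev/md4 /dev/sdc4
        mount -o ro /dev/md4 /mnt

        # Option 2: because 0.90 superblocks live at the END of the
        # partition, the member can usually be mounted directly, read-only
        mount -o ro /dev/sdc4 /mnt

        # Either way, a no-change filesystem check shows how healthy it is
        fsck -n /dev/sdc4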

    Read the article

  • Parse JSON in C#

    - by Ender
    I'm trying to parse some JSON data from the Google AJAX Search API. I have this URL and I'd like to break it down so that the results are displayed. I've currently written this code, but I'm pretty lost regarding what to do next, although there are a number of examples out there with simplified JSON strings. Being new to C# and .NET in general, I've struggled to get a genuine text output for my ASP.NET page, so I've been recommended to give JSON.NET a try. Could anyone point me in the right direction to just simply writing some code that'll take in JSON from the Google AJAX Search API and print it out to the screen?

    EDIT: I think I've made some progress in regards to getting some code working using DataContractJsonSerializer. Here is the code I have so far. Any advice on whether this is close to working and/or how I would output my results in a clean format?

    EDIT 2: I've followed the advice from Dreas Grech and the StackOverflowException has gone. However, now I am getting no output. Any ideas on where to go next?

    EDIT 3: ALL FIXED! All results are working fine. Thank you again, Dreas Grech!

        using System;
        using System.Data;
        using System.Configuration;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Web.UI.HtmlControls;
        using System.ServiceModel.Web;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.IO;
        using System.Text;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                GoogleSearchResults g1 = new GoogleSearchResults();
                const string json = @"{""responseData"": {""results"":[{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.cheese.com/"",""url"":""http://www.cheese.com/"",""visibleUrl"":""www.cheese.com"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:bkg1gwNt8u4J:www.cheese.com"",""title"":""\u003cb\u003eCHEESE\u003c/b\u003e.COM - All about \u003cb\u003echeese\u003c/b\u003e!."",""titleNoFormatting"":""CHEESE.COM - All about cheese!."",""content"":""\u003cb\u003eCheese\u003c/b\u003e - everything you want to know about it. Search \u003cb\u003echeese\u003c/b\u003e by name, by types of milk, by textures and by countries.""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://en.wikipedia.org/wiki/Cheese"",""url"":""http://en.wikipedia.org/wiki/Cheese"",""visibleUrl"":""en.wikipedia.org"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:n9icdgMlCXIJ:en.wikipedia.org"",""title"":""\u003cb\u003eCheese\u003c/b\u003e - Wikipedia, the free encyclopedia"",""titleNoFormatting"":""Cheese - Wikipedia, the free encyclopedia"",""content"":""\u003cb\u003eCheese\u003c/b\u003e is a food consisting of proteins and fat from milk, usually the milk of cows, buffalo, goats, or sheep. It is produced by coagulation of the milk \u003cb\u003e...\u003c/b\u003e""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.ilovecheese.com/"",""url"":""http://www.ilovecheese.com/"",""visibleUrl"":""www.ilovecheese.com"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:GBhRR8ytMhQJ:www.ilovecheese.com"",""title"":""I Love \u003cb\u003eCheese\u003c/b\u003e!, Homepage"",""titleNoFormatting"":""I Love Cheese!, Homepage"",""content"":""The American Dairy Association\u0026#39;s official site includes recipes and information on nutrition and storage of \u003cb\u003echeese\u003c/b\u003e.""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.gnome.org/projects/cheese/"",""url"":""http://www.g
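    Since the thread never shows the final shape of GoogleSearchResults, here is a minimal self-contained sketch of the DataContractJsonSerializer approach it converges on. The class and property names below are assumptions modeled on the JSON fields above, not Google's official client types:

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class SearchResponse
        {
            [DataMember(Name = "responseData")]
            public ResponseData Data { get; set; }
        }

        [DataContract]
        public class ResponseData
        {
            [DataMember(Name = "results")]
            public Result[] Results { get; set; }
        }

        [DataContract]
        public class Result
        {
            [DataMember(Name = "titleNoFormatting")]
            public string Title { get; set; }

            [DataMember(Name = "url")]
            public string Url { get; set; }
        }

        public static class JsonDemo
        {
            public static void Main()
            {
                const string json = @"{""responseData"":{""results"":[{""titleNoFormatting"":""CHEESE.COM - All about cheese!."",""url"":""http://www.cheese.com/""}]}}";

                // DataContractJsonSerializer maps JSON members onto the
                // [DataMember] names declared above
                var serializer = new DataContractJsonSerializer(typeof(SearchResponse));
                using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    var response = (SearchResponse)serializer.ReadObject(ms);
                    foreach (Result r in response.Data.Results)
                        Console.WriteLine("{0} -> {1}", r.Title, r.Url);
                }
            }
        }

    Unrecognized JSON members are simply ignored, so the classes only need to declare the fields being displayed.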

    Read the article

  • Is there a C pre-processor which eliminates #ifdef blocks based on values defined/undefined?

    - by Jonathan Leffler
    Original Question

    What I'd like is not a standard C pre-processor, but a variation on it which would accept from somewhere - probably the command line via -DNAME1 and -UNAME2 options - a specification of which macros are defined, and would then eliminate dead code. It may be easier to understand what I'm after with some examples:

        #ifdef NAME1
        #define ALBUQUERQUE "ambidextrous"
        #else
        #define PHANTASMAGORIA "ghostly"
        #endif

    If the command were run with '-DNAME1', the output would be:

        #define ALBUQUERQUE "ambidextrous"

    If the command were run with '-UNAME1', the output would be:

        #define PHANTASMAGORIA "ghostly"

    If the command were run with neither option, the output would be the same as the input. This is a simple case - I'd be hoping that the code could handle more complex cases too. To illustrate with a real-world but still simple example:

        #ifdef USE_VOID
        #ifdef PLATFORM1
        #define VOID void
        #else
        #undef VOID
        typedef void VOID;
        #endif /* PLATFORM1 */
        typedef void * VOIDPTR;
        #else
        typedef mint VOID;
        typedef char * VOIDPTR;
        #endif /* USE_VOID */

    I'd like to run the command with -DUSE_VOID -UPLATFORM1 and get the output:

        #undef VOID
        typedef void VOID;
        typedef void * VOIDPTR;

    Another example:

        #ifndef DOUBLEPAD
        #if (defined NT) || (defined OLDUNIX)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    Ideally, I'd like to run with -UOLDUNIX and get the output:

        #ifndef DOUBLEPAD
        #if (defined NT)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    This may be pushing my luck!

    Motivation: a large, ancient code base with lots of conditional code. Many of the conditions no longer apply - the OLDUNIX platform, for example, is no longer made and no longer supported, so there is no need to have references to it in the code. Other conditions are always true. For example, features are added with conditional compilation so that a single version of the code can be used for both older versions of the software where the feature is not available and newer versions where it is available (more or less). Eventually, the old versions without the feature are no longer supported - everything uses the feature - so the condition on whether the feature is present or not should be removed, and the 'when feature is absent' code should be removed too. I'd like to have a tool to do the job automatically because it will be faster and more reliable than doing it manually (which is rather critical when the code base includes 21,500 source files).

    (A really clever version of the tool might read #include'd files to determine whether the control macros - those specified by -D or -U on the command line - are defined in those files. I'm not sure whether that's truly helpful except as a backup diagnostic. Whatever else it does, though, the pseudo-pre-processor must not expand macros or include files verbatim. The output must be source similar to, but usually simpler than, the input code.)

    Status Report (one year later)

    After a year of use, I am very happy with 'sunifdef', recommended by the selected answer. It hasn't made a mistake yet, and I don't expect it to. The only quibble I have with it is stylistic. Given an input such as:

        #if (defined(A) && defined(B)) || defined(C) || (defined(D) && defined(E))

    and run with '-UC' (C is never defined), the output is:

        #if defined(A) && defined(B) || defined(D) && defined(E)

    This is technically correct because '&&' binds tighter than '||', but it is an open invitation to confusion. I would much prefer it to include parentheses around the sets of '&&' conditions, as in the original:

        #if (defined(A) && defined(B)) || (defined(D) && defined(E))

    However, given the obscurity of some of the code I have to work with, for that to be the biggest nit-pick is a strong compliment; it is a valuable tool to me.

    The New Kid on the Block

    Having checked the URL for inclusion in the information above, I see that (as predicted) there is a new program called Coan that is the successor to 'sunifdef'. It is available on SourceForge and has been since January 2010. I'll be checking it out... further reports later this year, or maybe next year, or sometime, or never.
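    For readers arriving at this Q&A now, the tools mentioned are all driven from the command line in much the same way. A hedged sketch of typical invocations; option spellings vary a little between versions, so check each tool's man page:

        # unifdef: the classic BSD tool; writes the simplified source to stdout
        unifdef -DUSE_VOID -UPLATFORM1 platform.c > platform-simplified.c

        # sunifdef: handles compound conditions like the examples above
        sunifdef -DNAME1 -UNAME2 < example.c > example-simplified.c

        # coan, sunifdef's successor: the "source" subcommand rewrites source
        coan source -DUSE_VOID -UPLATFORM1 platform.c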

    Read the article

  • no namenode error in pseudo-mode

    - by Anshu Basia
    I'm new to Hadoop and in the learning phase. Following the Hadoop Definitive Guide, I have set up Hadoop in pseudo-distributed mode and everything was working fine. I was even able to execute all the examples from chapter 3 yesterday. Today, when I rebooted my machine and ran start-dfs.sh, then tried http://localhost:50070, it shows an error, and when I try to stop dfs (stop-dfs.sh) it says no namenode to stop. I have been googling the issue but with no result. Also, when I format my namenode again, everything starts working fine and I'm able to connect to localhost:50070 and even replicate files and directories in HDFS, but as soon as I restart Linux and try to connect to HDFS the same problem comes up. Below is the error log:

    ************************************************************/
    2011-06-22 15:45:55,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = ubuntu/127.0.1.1
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 0.20.203.0
    STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
    ************************************************************/
    2011-06-22 15:45:56,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2011-06-22 15:45:56,455 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
    2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
    2011-06-22 15:45:57,007 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
    2011-06-22 15:45:57,031 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
    2011-06-22 15:45:57,059 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
    2011-06-22 15:45:57,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
    2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
    2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
    2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
    2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
    2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=anshu
    2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
    2011-06-22 15:45:57,868 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
    2011-06-22 15:45:57,869 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    2011-06-22 15:45:58,769 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
    2011-06-22 15:45:58,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
    2011-06-22 15:45:58,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anshu/dfs/name does not exist.
    2011-06-22 15:45:58,827 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
    2011-06-22 15:45:58,828 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
    2011-06-22 15:45:58,829 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
    ************************************************************/

    Any help is appreciated. Thank you.
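    The log itself points at the likely culprit: the NameNode storage directory defaults to /tmp/hadoop-anshu/dfs/name, and Ubuntu clears /tmp on reboot, which is exactly consistent with "format fixes it until the next restart". A minimal sketch of the usual remedy, assuming a hypothetical persistent directory /home/anshu/hdfs that you create and chown to the user running the daemons yourself, is to override the storage paths in conf/hdfs-site.xml (or move hadoop.tmp.dir in core-site.xml) and then reformat one last time:

    <!-- conf/hdfs-site.xml: a sketch, not the poster's actual config.
         /home/anshu/hdfs is a hypothetical path; create it first and make
         sure the user running the Hadoop daemons owns it. -->
    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/anshu/hdfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/anshu/hdfs/data</value>
      </property>
    </configuration>

    After editing, run hadoop namenode -format once more and restart with start-dfs.sh; since the metadata no longer lives under /tmp, it should survive reboots.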

    Read the article

  • Change TextView without completely re-drawing layout?

    - by twk
    I've found that updating a TextView every second in my app burns a lot of CPU. The TextView is in a horizontal LinearLayout, which is in turn inside a vertical LinearLayout. Switching to a RelativeLayout (as recommended to improve performance) is not an option right now (I tried to get that working originally, but it was too complicated). The horizontal LinearLayout has three elements: the outer ones are TextViews with a layout_weight of 0, and the middle one is a progress bar with a layout_weight of 1 to make it expand and take up most of the space. I'm changing the contents of the leftmost TextView every second. So, is there a way to change the contents of the text view without re-drawing everything? Or can I force the TextViews to use a fixed amount of space to simplify the layout? Other tips for speeding up a LinearLayout are greatly appreciated as well. For reference, here is my entire layout; the field I'm updating is the timeIn one (a possible fix is sketched after the layout).

    <?xml version="1.0" encoding="utf-8"?>
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content">
      <TextView android:text="Artist Name" android:id="@+id/curArtist"
          android:textSize="8pt" android:layout_width="fill_parent"
          android:layout_height="wrap_content" android:gravity="center_horizontal"
          android:paddingTop="5dp" />
      <TextView android:text="Song Name" android:id="@+id/curSong"
          android:textSize="10pt" android:textStyle="bold"
          android:layout_below="@id/curArtist" android:layout_width="fill_parent"
          android:layout_height="wrap_content" android:gravity="center_horizontal" />
      <TextView android:text="Album Name" android:id="@+id/curAlbum"
          android:textSize="8pt" android:layout_below="@id/curSong"
          android:layout_width="fill_parent" android:layout_height="wrap_content"
          android:gravity="center_horizontal" />
      <LinearLayout android:layout_width="fill_parent"
          android:layout_height="fill_parent" android:layout_below="@id/curAlbum"
          android:orientation="vertical">
        <LinearLayout android:id="@+id/seekWrapper"
            android:layout_width="fill_parent" android:layout_height="wrap_content"
            android:minHeight="10dp" android:maxHeight="10dp"
            android:orientation="horizontal">
          <TextView android:text="0:00" android:id="@+id/timeIn"
              android:textSize="4pt" android:paddingLeft="10dp"
              android:gravity="center_vertical" android:layout_weight="0"
              android:layout_gravity="left|center_vertical"
              android:layout_width="wrap_content"
              android:layout_height="fill_parent" />
          <ProgressBar android:layout_below="@id/curAlbum"
              android:id="@+id/progressBar"
              android:paddingLeft="7dp" android:paddingRight="7dp"
              android:layout_width="fill_parent" android:layout_height="fill_parent"
              android:maxHeight="10dp" android:minHeight="10dp"
              android:indeterminate="false" android:layout_weight="1"
              android:layout_gravity="center_horizontal|center_vertical"
              style="?android:attr/progressBarStyleHorizontal" />
          <TextView android:text="0:00" android:id="@+id/timeLeft"
              android:paddingRight="10dp" android:textSize="4pt"
              android:layout_gravity="right|center_vertical"
              android:layout_weight="0" android:layout_width="wrap_content"
              android:layout_height="fill_parent" />
        </LinearLayout>
        <ImageView android:id="@+id/albumArt" android:layout_weight="1"
            android:padding="5dp" android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:src="@drawable/blank_album_art" />
        <LinearLayout android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal">
          <ImageButton android:id="@+id/prev" android:layout_width="wrap_content"
              android:layout_height="wrap_content" android:layout_gravity="left"
              android:src="@drawable/button_prev" android:paddingLeft="10dp"
              android:background="@null" />
          <ImageButton android:id="@+id/playPause"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content"
              android:layout_gravity="center_horizontal"
              android:src="@drawable/button_play" android:layout_weight="1"
              android:background="@null" />
          <ImageButton android:id="@+id/next" android:layout_width="wrap_content"
              android:layout_height="wrap_content"
              android:src="@drawable/button_next" android:layout_gravity="right"
              android:paddingRight="10dp" android:background="@null" />
        </LinearLayout>
      </LinearLayout>
    </RelativeLayout>
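    One way to get the fixed-amount-of-space behavior asked for above, sketched under the assumption that timeIn only ever holds short m:ss strings: pin the TextView's measured width with matching minEms/maxEms. TextView.setText() skips the full requestLayout() pass when the view's width cannot change (on most framework versions this includes the wrap_content case where the min and max widths are equal), so the per-second update becomes a redraw of just that one view rather than a layout pass over the whole tree:

    <!-- Sketch of a fixed-width timeIn; the ems value of 3 is an assumption
         sized for "m:ss" strings, adjust it to your longest expected text. -->
    <TextView android:text="0:00" android:id="@+id/timeIn"
        android:textSize="4pt" android:paddingLeft="10dp"
        android:gravity="center_vertical" android:layout_weight="0"
        android:layout_gravity="left|center_vertical"
        android:minEms="3" android:maxEms="3"
        android:layout_width="wrap_content"
        android:layout_height="fill_parent" />

    The same trick applies to timeLeft, and the parent row already pins its height via minHeight/maxHeight, which keeps the vertical measurement stable too.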

    Read the article

  • Multiple "pages" in GWT with human friendly URLs

    - by Andreas Borglin
    Hi. I'm playing with a GWT/GAE project which will have three different "pages", although they are not really pages in the GWT sense. The top views (one for each page) will have completely different layouts, but some of the widgets will be shared. One of the pages is the main page, which is loaded by the default URL (http://www.site.com), but the other two need additional URL information to differentiate the page type. They also need a name parameter (like http://www.site.com/project/project-name). There are at least two solutions to this that I'm aware of: 1. use the GWT history mechanism and let the page type and parameters (such as project name) be part of the history token, or 2. use servlets with url-mapping patterns (like /project/*).

    The first choice might seem obvious at first, but it has several drawbacks. First, a user should be able to easily remember and type the URL directly to a project, and it is hard to produce a human-friendly URL with history tokens. Second, I'm using gwt-presenter, and this approach would mean we need to support subplaces in one token, which I'd rather avoid. Third, a user will typically stay on one page, so it makes more sense for the page information to be part of the "static" URL. Using servlets solves all these problems, but also creates other ones. So my first question is: what is the best solution here?

    If I were to go for the servlet solution, new questions pop up. It might make sense to split the GWT app into three separate modules, each with an entry point. Each servlet that is mapped to a certain page would then simply forward the request to the GWT module that handles that page. Since a user typically stays on one page, the browser only needs to load the JS for that page. Based on what I've read, though, this solution is not really recommended. I could also stick with one module, but then GWT needs to find out which page it should display. It could either query the server or parse the URL itself. If I stick with one GWT module, I need to keep the page information stored on the server side. Naturally I thought about sessions, but I'm not sure it's a good idea to mix page information with user data; a session usually lives between user login and logout, and in this case it would need different behavior. Would it be bad practice to handle this via sessions? The one-GWT-module-plus-servlet solution also leads to another problem: if a user goes from a project page to the main page, how will GWT know that this has happened? The app will not be reloaded, so it will be treated as a simple state change, and it seems rather inefficient to have to check page info for every state change. Anyone care to guide me out of the foggy darkness that surrounds me? :-)
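    To make the servlet option concrete, here is a rough sketch (the class and JSP names ProjectPageServlet and /WEB-INF/host.jsp are invented, not a prescription): map a servlet to /project/*, lift the project name out of the path, and let the host page inject it, so a single GWT module knows which page it is serving at load time without an extra round trip:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch only; web.xml would map this servlet to the url-pattern /project/*.
    public class ProjectPageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // For http://www.site.com/project/project-name, getPathInfo()
            // returns "/project-name" (the part after the /project mapping).
            String path = req.getPathInfo();
            String projectName =
                    (path != null && path.length() > 1) ? path.substring(1) : null;
            req.setAttribute("pageType", "project");
            req.setAttribute("projectName", projectName);
            // The host JSP writes these into the page as a JavaScript object,
            // e.g. var pageInfo = { pageType: "project", projectName: "..." };
            req.getRequestDispatcher("/WEB-INF/host.jsp").forward(req, resp);
        }
    }

    On the client side, com.google.gwt.i18n.client.Dictionary can read the injected object at startup, which sidesteps both the session question and the per-state-change page check: the page type is fixed for the lifetime of the loaded page.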

    Read the article

  • Extending XHTML

    - by Daniel Schaffer
    I'm playing around with writing a jQuery plugin that uses an attribute to define form validation behavior (yes, I'm aware there's already a validation plugin; this is as much a learning exercise as something I'll be using). Ideally, I'd like to have something like this:

    Example 1 - input:

    <input id="name" type="text" v:onvalidate="return this.value.length > 0;" />

    Example 2 - wrapper:

    <div v:onvalidate="return $(this).find('[value]').length > 0;">
      <input id="field1" type="text" />
      <input id="field2" type="text" />
      <input id="field3" type="text" />
    </div>

    Example 3 - predefined:

    <input id="name" type="text" v:validation="not empty" />

    The goal here is to allow my jQuery code to figure out which elements need to be validated (this is already done) and still have the markup be valid XHTML, which is what I'm having a problem with. I'm fairly sure this will require a combination of both DTD and XML Schema, but I'm not really sure how to execute it. Based on this article, I've created the following DTD:

    <!ENTITY % XHTML1-formvalidation1 PUBLIC
        "-//W3C//DTD XHTML 1.1 +FormValidation 1.0//EN"
        "http://new.dandoes.net/DTD/FormValidation1.dtd" >
    %XHTML1-formvalidation1;
    <!ENTITY % Inlspecial.extra "%div.qname; " >
    <!ENTITY % xhmtl-model.mod SYSTEM "formvalidation-model-1.mod" >
    <!ENTITY % xhtml11.dtd PUBLIC "-//W3C//DTD XHTML 1.1//EN"
        "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" >
    %xhtml11.dtd;

    And here is "formvalidation-model-1":

    <!ATTLIST %div.qname;
        %onvalidation CDATA #IMPLIED
        %XHTML1-formvalidation1.xmlns.extra.attrib; >

    I've never done DTD before, so I'm not even really sure what I'm doing. When I run my page through the W3C XHTML validator, I get 80+ errors because it's getting duplicate definitions of all the XHTML elements. Am I at least on the right track? Any suggestions?

    EDIT: I removed this section from my custom DTD, because it turned out that it was actually self-referencing, and the code I got the template from was really for combining two DTDs into one, not appending specific items to one:

    <!ENTITY % XHTML1-formvalidation1 PUBLIC
        "-//W3C//DTD XHTML 1.1 +FormValidation 1.0//EN"
        "http://new.dandoes.net/DTD/FormValidation1.dtd" >
    %XHTML1-formvalidation1;

    I also removed this, because it wasn't validating and didn't seem to be doing anything:

    <!ENTITY % Inlspecial.extra "%div.qname; " >

    Additionally, I decided that since I'm only adding a handful of additional items, the separate-files model recommended by the W3C doesn't really seem that helpful, so I've put everything into the DTD file, the content of which is now this:

    <!ATTLIST div onvalidate CDATA #IMPLIED>
    <!ENTITY % xhtml11.dtd PUBLIC "-//W3C//DTD XHTML 1.1//EN"
        "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" >
    %xhtml11.dtd;

    So now I'm not getting any DTD-related validation errors, but the onvalidate attribute is still not valid.

    Update: I've ditched the DTD and added a schema: http://schema.dandoes.net/FormValidation/1.0.xsd Using v:onvalidate appears to validate in Visual Studio, but the W3C service still doesn't like it. Here's a page where I'm using it so you can look at the source: http://new.dandoes.net/auth And here's the link to the W3C validation result: http://validator.w3.org/check?uri=http://new.dandoes.net/auth&charset=(detect+automatically)&doctype=Inline&group=0 Is this about as close as I'll be able to get with this, or am I still doing something wrong?
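    For comparison, the namespace-declaration route (schema rather than DTD) usually looks like the minimal sketch below; the v prefix is bound here to the schema URL already mentioned above, though any URI keeps the document well-formed. This makes the markup well-formed XML and selectable by jQuery, but a DTD-driven validator such as the W3C service will still flag v:onvalidate, because under an XHTML 1.1 DOCTYPE only the attributes the DTD declares are valid, which matches the behavior described above:

    <!-- Sketch: the namespace binding makes this well-formed XML, but DTD
         validation against the XHTML 1.1 DOCTYPE will still reject it. -->
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:v="http://schema.dandoes.net/FormValidation/1.0">
      <body>
        <input id="name" type="text"
               v:onvalidate="return this.value.length > 0;" />
      </body>
    </html>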

    Read the article

  • Real-time graphing in Java

    - by thodinc
    I have an application which updates a variable between 5 and 50 times a second, and I am looking for some way of drawing a continuous XY plot of this change in real time. Though JFreeChart is not recommended for such a high update rate, many users still say that it works for them. I've tried using this demo and modified it to display a random variable, but it seems to use up 100% CPU all the time. Even if I ignore that, I do not want to be restricted to JFreeChart's UI classes (such as ApplicationFrame) for constructing forms (though I'm not sure what their capabilities are exactly). Would it be possible to integrate it with Java's "forms" and drop-down menus (as are available in VB)? Otherwise, are there any alternatives I could look into?

    EDIT: I'm new to Swing, so I've put together some code just to test the functionality of JFreeChart with it (while avoiding the use of JFree's ApplicationFrame class, since I'm not sure how that will work with Swing's combo boxes and buttons). Right now the graph is being updated immediately and CPU usage is high. Would it be possible to buffer the value with new Millisecond() and update it maybe twice a second? Also, can I add other components to the rest of the JFrame without disrupting JFreeChart? How would I do that? frame.getContentPane().add(new Button("Click")) seems to overwrite the graph.

    package graphtest;

    import java.util.Random;
    import javax.swing.JFrame;
    import org.jfree.chart.ChartFactory;
    import org.jfree.chart.ChartPanel;
    import org.jfree.chart.JFreeChart;
    import org.jfree.chart.axis.ValueAxis;
    import org.jfree.chart.plot.XYPlot;
    import org.jfree.data.time.Millisecond;
    import org.jfree.data.time.TimeSeries;
    import org.jfree.data.time.TimeSeriesCollection;

    public class Main {
        static TimeSeries ts = new TimeSeries("data", Millisecond.class);

        public static void main(String[] args) throws InterruptedException {
            gen myGen = new gen();
            new Thread(myGen).start();

            TimeSeriesCollection dataset = new TimeSeriesCollection(ts);
            JFreeChart chart = ChartFactory.createTimeSeriesChart(
                "GraphTest", "Time", "Value", dataset, true, true, false);
            final XYPlot plot = chart.getXYPlot();
            ValueAxis axis = plot.getDomainAxis();
            axis.setAutoRange(true);
            axis.setFixedAutoRange(60000.0);

            JFrame frame = new JFrame("GraphTest");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            ChartPanel label = new ChartPanel(chart);
            frame.getContentPane().add(label);
            // Suppose I add combo boxes and buttons here later
            frame.pack();
            frame.setVisible(true);
        }

        static class gen implements Runnable {
            private Random randGen = new Random();
            public void run() {
                while (true) {
                    int num = randGen.nextInt(1000);
                    System.out.println(num);
                    ts.addOrUpdate(new Millisecond(), num);
                    try {
                        Thread.sleep(20);
                    } catch (InterruptedException ex) {
                        System.out.println(ex);
                    }
                }
            }
        }
    }
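    Both follow-up questions have fairly standard Swing answers. The content pane's default BorderLayout puts every unconstrained add() into CENTER, which is why add(new Button("Click")) replaces the chart; giving each component its own region fixes that. For throttling, a javax.swing.Timer fires on the event dispatch thread, so the worker can just record the newest sample and let the timer publish it twice a second. A self-contained sketch with invented names (ThrottledGraph, latest), untested but using only JFreeChart calls already shown above:

    package graphtest;

    import java.awt.BorderLayout;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.util.Random;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.Timer;
    import org.jfree.chart.ChartFactory;
    import org.jfree.chart.ChartPanel;
    import org.jfree.chart.JFreeChart;
    import org.jfree.data.time.Millisecond;
    import org.jfree.data.time.TimeSeries;
    import org.jfree.data.time.TimeSeriesCollection;

    public class ThrottledGraph {
        public static void main(String[] args) {
            final TimeSeries ts = new TimeSeries("data", Millisecond.class);
            final AtomicInteger latest = new AtomicInteger();

            // Worker thread only records the newest sample; no chart work here.
            new Thread(new Runnable() {
                public void run() {
                    Random r = new Random();
                    while (true) {
                        latest.set(r.nextInt(1000));
                        try { Thread.sleep(20); }
                        catch (InterruptedException e) { return; }
                    }
                }
            }).start();

            JFreeChart chart = ChartFactory.createTimeSeriesChart("GraphTest",
                    "Time", "Value", new TimeSeriesCollection(ts),
                    true, true, false);
            chart.getXYPlot().getDomainAxis().setFixedAutoRange(60000.0);

            JFrame frame = new JFrame("GraphTest");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            // Explicit BorderLayout regions keep the chart and the controls
            // from clobbering each other: an unconstrained add() defaults to
            // CENTER both times, so the second add replaces the first.
            frame.getContentPane().add(new ChartPanel(chart), BorderLayout.CENTER);
            JPanel controls = new JPanel(); // default FlowLayout
            controls.add(new JButton("Click"));
            frame.getContentPane().add(controls, BorderLayout.SOUTH);
            frame.pack();
            frame.setVisible(true);

            // Swing Timer fires on the EDT: publish the latest value every
            // 500 ms, so the chart redraws 2x/s instead of 50x/s.
            new Timer(500, new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    ts.addOrUpdate(new Millisecond(), latest.get());
                }
            }).start();
        }
    }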

    Read the article

  • View Generated Source (After AJAX/JavaScript) in C#

    - by Michael La Voie
    Is there a way to view the generated source of a web page (the code after all AJAX calls and JavaScript DOM manipulations have taken place) from a C# application without opening up a browser from the code? Viewing the initial page using a WebRequest or WebClient object works OK, but if the page makes extensive use of JavaScript to alter the DOM on page load, then these don't provide an accurate picture of the page. I have tried using the Selenium and WatiN UI testing frameworks, and they work perfectly, supplying the generated source as it appears after all JavaScript manipulations are completed. Unfortunately, they do this by opening up an actual web browser, which is very slow. I've implemented a Selenium server which offloads this work to another machine, but there is still a substantial delay. Is there a .NET library that will load and parse a page (like a browser) and spit out the generated code? Clearly, Google and Yahoo aren't opening up browsers for every page they want to spider (of course, they may have more resources than me...). Is there such a library, or am I out of luck unless I'm willing to dissect the source code of an open source browser?

    SOLUTION: Well, thank you everyone for your help. I have a working solution that is about 10x faster than Selenium. Woo! Thanks to this old article from beansoftware I was able to use the System.Windows.Forms.WebBrowser control to download the page, parse it, and give me the generated source. Even though the control is in Windows.Forms, you can still run it from ASP.NET (which is what I'm doing); just remember to add System.Windows.Forms to your project references. There are two notable things about the code. First, the WebBrowser control is called in a new thread, because it must run in a single-threaded apartment. Second, the GeneratedSource variable is set in two places. This is not due to an intelligent design decision :) I'm still working on it and will update this answer when I'm done. wb_DocumentCompleted() is called multiple times: first when the initial HTML is downloaded, then again when the first round of JavaScript completes. Unfortunately, the site I'm scraping has three different loading stages: 1) load the initial HTML, 2) do the first round of JavaScript DOM manipulation, 3) pause for half a second, then do a second round of JS DOM manipulation. For some reason, the second round isn't caught by the wb_DocumentCompleted() function, but it is always caught when wb.ReadyState == Complete. So why not remove it from wb_DocumentCompleted()? I'm still not sure why it isn't caught there, and that's where the beansoftware article recommended putting it. I'm going to keep looking into it. I just wanted to publish this code so anyone who's interested can use it. Enjoy!
    using System.Threading;
    using System.Windows.Forms;

    public class WebProcessor
    {
        private string GeneratedSource { get; set; }
        private string URL { get; set; }

        public string GetGeneratedHTML(string url)
        {
            URL = url;
            Thread t = new Thread(new ThreadStart(WebBrowserThread));
            t.SetApartmentState(ApartmentState.STA);
            t.Start();
            t.Join();
            return GeneratedSource;
        }

        private void WebBrowserThread()
        {
            WebBrowser wb = new WebBrowser();
            wb.Navigate(URL);
            wb.DocumentCompleted +=
                new WebBrowserDocumentCompletedEventHandler(wb_DocumentCompleted);
            while (wb.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();
            // Added this line, because the final HTML takes a while to show up
            GeneratedSource = wb.Document.Body.InnerHtml;
            wb.Dispose();
        }

        private void wb_DocumentCompleted(object sender,
            WebBrowserDocumentCompletedEventArgs e)
        {
            WebBrowser wb = (WebBrowser)sender;
            GeneratedSource = wb.Document.Body.InnerHtml;
        }
    }
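    For completeness, a minimal usage sketch (the target URL is a placeholder, and as noted above the project needs a reference to System.Windows.Forms):

    // Usage sketch: example.com is a placeholder, not a tested target.
    class Program
    {
        static void Main()
        {
            WebProcessor wp = new WebProcessor();
            string html = wp.GetGeneratedHTML("http://www.example.com/");
            System.Console.WriteLine(html);
        }
    }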

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >