Search Results

Search found 40031 results on 1602 pages for 'command message'.


  • What is start_daemon?

    - by David Parks
    I'm trying to understand start_daemon in the following /etc/init.d/nagios-nrpe-server startup script: start) if [ "$INETD" = 1 ]; then exit 1 fi log_daemon_msg "Starting $DESC" "$NAME" start_daemon -p $PIDDIR/nrpe.pid $NICENESS $DAEMON -c $CONFIG -d $DAEMON_OPTS log_end_msg $? ;; In particular, when I start this service it isn't writing a PID file as expected, so the service nagios-nrpe-server stop command is not working (I need to kill the processes manually). I'm trying to figure out how to troubleshoot the problem, but I can't run start_daemon ... from the command line. I want to reproduce what the script is doing manually so I can work out what the problem is.
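    A minimal sketch of reproducing that call by hand, assuming the usual Debian/Ubuntu layout where start_daemon is an LSB helper defined in /lib/lsb/init-functions (the nrpe paths below are assumptions; take the real values from the init script or /etc/default/nagios-nrpe-server):

        # make the LSB helper functions, including start_daemon, available in this shell
        . /lib/lsb/init-functions
        DAEMON=/usr/sbin/nrpe            # assumed nrpe binary location
        CONFIG=/etc/nagios/nrpe.cfg      # assumed nrpe config location
        PIDDIR=/var/run/nagios
        # same invocation as the init script (minus $NICENESS and $DAEMON_OPTS)
        start_daemon -p $PIDDIR/nrpe.pid $DAEMON -c $CONFIG -d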

    Read the article

  • Using the Windows Explorer Context Menu to reset Umbraco Directory Permissions

    - by Vizioz Limited
    Hi All, as Umbraco matures I am assuming that needing to reset directory permissions might well become a thing of the past, but at the moment it is still something I often find myself doing when I copy sites between machines. As it's 4:30am I thought there must be a better way than having to open up a DOS prompt, navigate to a directory and then run a batch file passing in the IIS root folder location. Well.. there is :) I googled for a way of adding a command to the context menu within Windows Explorer; I found a way of doing this for XP, but it seems the functionality was removed from Windows 7. However, I found a very neat freeware application called File Menu Tools which does work perfectly! I have now added a command to my context menu that enables me to right-click an IIS site root folder and then call my batch script, automatically passing in the directory. This will save me a bunch of time :)
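    For reference, the batch script being wired up here can be as small as a single icacls call that grants the web server account modify rights on the folder passed in; this is only a hypothetical sketch (the account name and rights depend on the Umbraco version and the app pool identity):

        icacls "%~1" /grant "IIS_IUSRS:(OI)(CI)M" /T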

    Read the article

  • Issue installing FLEXnet on ubuntu for program: Geneious

    - by jon_shep
    Afternoon, I can successfully run my Geneious Pro software, but I am required to have FLEXnet installed for the licensing process. The prompt I am given is: To install FLEXnet on Linux, run the following command from your shell as root: /home/shep/Geneious/licensing_service/install_fnp.sh "/home/shep/Geneious/licensing_service/linux64/FNPLicensingService" When you have done this, you can activate your license in Geneious. As root: root@Jon:/home/shep/Geneious/licensing_service# sh install_fnp.sh Unable to locate anchor service to install, please specify correctly on command line also root@Jon:/home/shep/Geneious/licensing_service/linux64# sh FNPLicensingService FNPLicensingService: 2: FNPLicensingService: Syntax error: Unterminated quoted string Anyone have further ideas? I tried to find the software online directly, but that was no good either. ~Jon
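    Two things stand out in the transcript above: install_fnp.sh was run without the argument the prompt asks for, which is what produces the "Unable to locate anchor service" message, and FNPLicensingService is most likely a binary rather than a shell script, which would explain the quoting error when it is fed to sh. A sketch of the command exactly as the prompt spells it out (run as root):

        /home/shep/Geneious/licensing_service/install_fnp.sh \
            "/home/shep/Geneious/licensing_service/linux64/FNPLicensingService"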

    Read the article

  • Adobe flash player not working with Amazon Prime

    - by Alex
    I have Ubuntu 12.04 64-bit using Google Chrome. I had Chromium from the app center; then today Amazon Prime video stopped working. It told me to update Flash. So I uninstalled Chromium and installed Google Chrome. Didn't work. Then I downloaded Flash for Ubuntu via apt. That one gave me a "flash version isn't supported" message. The Flash version was 11. Next I tried http://apt.ubuntu.com/p/flashplugin-installer. It worked, but right in the middle of the video it popped the error message again: "Sorry we are unable to stream this video. This is likely because your Flash Player needs to be updated." So I don't know what happened.

    Read the article

  • Integrating OSB - B2B for a healthcare scenario

    - by Ramesh Nittur
    Use case 1: Admin sends an HL7 message to Pharmacy. OSB uses B2B to translate the XML document to an HL7 native document using the translation web service exposed by B2B. B2B configuration: the Oracle B2B 11g PS2 release exposes a web service to translate an XML document to a native document. This service needs an outbound agreement configured with "HL7 Message Facility ID" as the identifier. Document type and revision can be identified from the document itself. The B2B translation web service can be used in two modes: one for translation only and another for translation and routing. The OSB-B2B integration sample is developed based on the "b2b-005-hl7" sample on OTN. We are not going to discuss the B2B metadata configuration details, as they are dealt with in detail in the OTN sample document. OSB Configuration Steps to create the OSB configuration sample: Create an OSB project with the name OSB-B2B. Create a BusinessService with the name B2BBusinessService to consume the B2B TranslateService at URL http://<host>:8001/b2b/services/TranslateService

    Read the article

  • Ctrl + C doesn't abort programs in terminal

    - by jake
    I changed the keyboard shortcut in terminal so that Ctrl + C would copy text. I realized I can't abort a program I am running, since Ctrl + C used to be the abort command. I know that Ctrl + Shift + C works but want it switched back. Is there a way to revert the keyboard shortcuts to the real defaults from before I decided to mess with them? What is the abort command defined as in keyboard shortcuts? Not a big problem if I can't, but it would be nice to know.
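    As a side note on the last question: the abort behaviour is not a terminal-emulator shortcut but the terminal driver's interrupt character, which sends SIGINT to the foreground program. A quick sketch for checking and resetting it, independent of whatever the emulator binds to copy:

        stty -a | grep intr     # shows the current interrupt character, e.g. "intr = ^C"
        stty intr '^C'          # binds the interrupt character back to Ctrl+C for this terminal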

    Read the article

  • C#/.NET Little Wonders: The EventHandler and EventHandler&lt;TEventArgs&gt; delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the last two weeks, we examined the Action family of delegates (and delegates in general), and the Func family of delegates and how they can be used to support generic, reusable algorithms and classes. So this week, we are going to look at a handy pair of delegates that can be used to eliminate the need for defining custom delegates when creating events: the EventHandler and EventHandler<TEventArgs> delegates. Events and delegates Before we begin, let’s quickly consider events in .NET.  According to the MSDN: An event in C# is a way for a class to provide notifications to clients of that class when some interesting thing happens to an object. So, basically, you can create an event in a type so that users of that type can subscribe to notifications of things of interest.  How is this different than some of the delegate programming that we talked about in the last two weeks?  Well, you can think of an event as a special access modifier on a delegate.  Some differences between the two are: Events are a special access case of delegates They behave much like delegates instances inside the type they are declared in, but outside of that type they can only be (un)subscribed to. Events can specify add/remove behavior explicitly If you want to do additional work when someone subscribes or unsubscribes to an event, you can specify the add and remove actions explicitly. Events have access modifiers, but these only specify the access level of those who can (un)subscribe A public event, for example, means anyone can (un)subscribe, but it does not mean that anyone can raise (invoke) the event directly.  Events can only be raised by the type that contains them In contrast, if a delegate is visible, it can be invoked outside of the object (not even in a sub-class!). Events tend to be for notifications only, and should be treated as optional Semantically speaking, events typically don’t perform work on the the class directly, but tend to just notify subscribers when something of note occurs. My basic rule-of-thumb is that if you are just wanting to notify any listeners (who may or may not care) that something has happened, use an event.  However, if you want the caller to provide some function to perform to direct the class about how it should perform work, make it a delegate. Declaring events using custom delegates To declare an event in a type, we simply use the event keyword and specify its delegate type.  For example, let’s say you wanted to create a new TimeOfDayTimer that triggers at a given time of the day (as opposed to on an interval).  We could write something like this: 1: public delegate void TimeOfDayHandler(object source, ElapsedEventArgs e); 2:  3: // A timer that will fire at time of day each day. 4: public class TimeOfDayTimer : IDisposable 5: { 6: // Event that is triggered at time of day. 7: public event TimeOfDayHandler Elapsed; 8:  9: // ... 10: } The first thing to note is that the event is a delegate type, which tells us what types of methods may subscribe to it.  
The second thing to note is the signature of the event handler delegate, according to the MSDN: The standard signature of an event handler delegate defines a method that does not return a value, whose first parameter is of type Object and refers to the instance that raises the event, and whose second parameter is derived from type EventArgs and holds the event data. If the event does not generate event data, the second parameter is simply an instance of EventArgs. Otherwise, the second parameter is a custom type derived from EventArgs and supplies any fields or properties needed to hold the event data. So, in a nutshell, the event handler delegates should return void and take two parameters: An object reference to the object that raised the event. An EventArgs (or a subclass of EventArgs) reference to event specific information. Even if your event has no additional information to provide, you are still expected to provide an EventArgs instance.  In this case, feel free to pass the EventArgs.Empty singleton instead of creating new instances of EventArgs (to avoid generating unneeded memory garbage). The EventHandler delegate Because many events have no additional information to pass, and thus do not require custom EventArgs, the signature of the delegates for subscribing to these events is typically: 1: // always takes an object and an EventArgs reference 2: public delegate void EventHandler(object sender, EventArgs e) It would be insane to recreate this delegate for every class that had a basic event with no additional event data, so there already exists a delegate for you called EventHandler that has this very definition!  Feel free to use it to define any events which supply no additional event information: 1: public class Cache 2: { 3: // event that is raised whenever the cache performs a cleanup 4: public event EventHandler OnCleanup; 5:  6: // ... 7: } This will handle any event with the standard EventArgs (no additional information).  But what of events that do need to supply additional information?  Does that mean we’re out of luck for subclasses of EventArgs?  That’s where the generic for of EventHandler comes into play… The generic EventHandler<TEventArgs> delegate Starting with the introduction of generics in .NET 2.0, we have a generic delegate called EventHandler<TEventArgs>.  Its signature is as follows: 1: public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e) 2: where TEventArgs : EventArgs This is similar to EventHandler except it has been made generic to support the more general case.  Thus, it will work for any delegate where the first argument is an object (the sender) and the second argument is a class derived from EventArgs (the event data). For example, let’s say we wanted to create a message receiver, and we wanted it to have a few events such as OnConnected that will tell us when a connection is established (probably with no additional information) and OnMessageReceived that will tell us when a new message arrives (probably with a string for the new message text). 
So for OnMessageReceived, our MessageReceivedEventArgs might look like this: 1: public sealed class MessageReceivedEventArgs : EventArgs 2: { 3: public string Message { get; set; } 4: } And since OnConnected needs no event argument type defined, our class might look something like this: 1: public class MessageReceiver 2: { 3: // event that is called when the receiver connects with sender 4: public event EventHandler OnConnected; 5:  6: // event that is called when a new message is received. 7: public event EventHandler<MessageReceivedEventArgs> OnMessageReceived; 8:  9: // ... 10: } Notice, nowhere did we have to define a delegate to fit our event definition, the EventHandler and generic EventHandler<TEventArgs> delegates fit almost anything we’d need to do with events. Sidebar: Thread-safety and raising an event When the time comes to raise an event, we should always check to make sure there are subscribers, and then only raise the event if anyone is subscribed.  This is important because if no one is subscribed to the event, then the instance will be null and we will get a NullReferenceException if we attempt to raise the event. 1: // This protects against NullReferenceException... or does it? 2: if (OnMessageReceived != null) 3: { 4: OnMessageReceived(this, new MessageReceivedEventArgs(aMessage)); 5: } The above code seems to handle the null reference if no one is subscribed, but there’s a problem if this is being used in multi-threaded environments.  For example, assume we have thread A which is about to raise the event, and it checks and clears the null check and is about to raise the event.  However, before it can do that thread B unsubscribes to the event, which sets the delegate to null.  Now, when thread A attempts to raise the event, this causes the NullReferenceException that we were hoping to avoid! To counter this, the simplest best-practice method is to copy the event (just a multicast delegate) to a temporary local variable just before we raise it.  Since we are inside the class where this event is being raised, we can copy it to a local variable like this, and it will protect us from multi-threading since multicast delegates are immutable and assignments are atomic: 1: // always make copy of the event multi-cast delegate before checking 2: // for null to avoid race-condition between the null-check and raising it. 3: var handler = OnMessageReceived; 4: 5: if (handler != null) 6: { 7: handler(this, new MessageReceivedEventArgs(aMessage)); 8: } The very slight trade-off is that it’s possible a class may get an event after it unsubscribes in a multi-threaded environment, but this is a small risk and classes should be prepared for this possibility anyway.  For a more detailed discussion on this, check out this excellent Eric Lippert blog post on Events and Races. Summary Generic delegates give us a lot of power to make generic algorithms and classes, and the EventHandler delegate family gives us the flexibility to create events easily, without needing to redefine delegates over and over.  Use them whenever you need to define events with or without specialized EventArgs.   Tweet Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Delegates, EventHandler

    Read the article

  • How do I fix dependency problems with the kernel in apt?

    - by Jon
    When trying to install new packages, either manually or with muon, I get these errors: jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get install kupfer [sudo] password for jon: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: kupfer : Depends: python-keybinder but it is not going to be installed Recommends: python-wnck but it is not going to be installed linux-headers-generic : Depends: linux-headers-3.2.0-20-generic but it is not installable linux-image-generic : Depends: linux-image-3.2.0-20-generic but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic linux-headers-generic linux-image-generic The following packages will be upgraded: linux-generic linux-headers-generic linux-image-generic 3 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 3 not fully installed or removed. Need to get 0 B/6,658 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-20-generic; however: Package linux-image-3.2.0-20-generic is not installed. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.20.22); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-headers-generic: linux-headers-generic depends on linux-headers-3.2.0-20-generic; however: Package linux-headers-3.2.0-20-generic is not installed. dpkg: error processing linux-headers-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-image-generic linux-generic linux-headers-generic E: Sub-process /usr/bin/dpkg returned an error code (1) As indicated above, I ran sudo apt-get -f install but it still tells me there are dependency issues.
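    A hedged first step for this kind of failure: the "not installable" lines suggest the package index no longer offers linux-image-3.2.0-20-generic, so it is worth checking what the archive actually provides before forcing anything. These commands only refresh and inspect state (the package names are the ones from the error output above):

        sudo apt-get update                              # refresh the package lists
        apt-cache policy linux-image-generic             # which concrete kernel the metapackage wants
        apt-cache policy linux-image-3.2.0-20-generic    # whether that kernel is available at all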

    Read the article

  • Messaging Systems – Handshaking, Reconciliation and Tracking for Data Transparency

    - by Ahsan Alam
    As many corporations build business partnerships with other organizations, the need to share information becomes necessary. Sharing large amounts of data using snail mail, email and/or fax is quickly becoming a thing of the past. More and more organizations are relying heavily on FTP and/or web services to exchange data. Corporations apply a wide range of technologies and techniques based on available resources and data transfer needs. Sometimes it involves simple home-grown applications. Other times, large investments are made in products like BizTalk, TIBCO etc. The complexity of information management also varies significantly from one organization to another. Some may deal with a handful of simple steps to process and manage shared data, whereas others may rely on fairly complex processes with heavy interaction with internal and external systems in order to serve the business needs. It is not surprising that many of these systems end up becoming black boxes over a period of time. Consequently, people and the business start to rely more and more on developers and support personnel just to extract simple information, adding to the loss of productivity. One of the most important factors in any business is transparency of data, irrespective of technology preferences and the complexity of business processes. Not knowing the state of data could become very costly to the business. Having been involved in messaging systems for some time now, I have heard the same type of questions over and over again. Did we transmit messages successfully? Did we get responses back? What is the expected turn-around time? Did the system experience any errors? When one company transmits data to one or more companies, it may invoke a set of processes that could complete in a matter of seconds, or it could take days. As data travels from one organization to another, the uncertainty grows, and the longer it takes to track the uncertain state of the data, the costlier it gets for the business. So, in every business scenario, it's extremely important to be aware of the state of the data.   Architects of messaging systems can take several steps to aid with data transparency. Some form of data handshaking and reconciliation mechanism, as well as extensive data tracking, can be incorporated into the system to provide clear visibility to the data. What do I mean by handshaking and reconciliation? Some might consider these to be a single concept; however, I like to consider them as two distinct categories. Handshaking serves as a message receipt or acknowledgment. When one party transmits messages to another, the receiver must acknowledge each message by sending an immediate response for each transaction. Whenever we use web services, handshaking is often achieved using the request/reply pattern. Similarly, if FTP is used, a receiver can acknowledge by dropping messages for the sender as soon as the files are picked up. These forms of handshaking or acknowledgment inform the message sender and receiver that a successful transaction has occurred. I have mentioned earlier that it could take anywhere from a few seconds to a number of days before shared data is completely processed. In addition, whenever a batched transaction is used, the processing time for each data element inside the batch could also vary significantly. So, in order to successfully manage data processing, reconciliation becomes extremely important; otherwise it may result in data loss or, in some cases, a hefty penalty. Reconciliation can be done in many ways.
    Partner organizations can share and compare ad hoc reports to achieve reconciliation. On the other hand, partners can agree on some type of systematic reconciliation messages. Systems within the responsible parties can trigger messages to partners as soon as the data processing completes.   The next step in data transparency is extensive data tracking. Some products such as BizTalk and TIBCO provide built-in functionality for data tracking; however, built-in functionality may not always be adequate. Sometimes an additional tracking system (or database) needs to be built in order to monitor all types of data flow, including message transactions, handshaking, reconciliation, system errors and many more. If these types of data are captured, they can be presented to business users in any form or fashion. When business users are empowered with such information, the reliance on developers and support teams decreases dramatically.   In today's collaborative world of information sharing, data transparency is key to the success of every business. The state of business data will constantly change. However, when people have easier access to the various states of data, it allows them to make better and quicker decisions. Therefore, I feel that data handshaking, reconciliation and tracking are very important aspects of messaging systems.

    Read the article

  • Windows Azure Evolution &ndash; Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, there are many new features and updates in the Windows Azure platform. In the coming several posts I will try to cover some of them. And in this first post I would like to just have a quick walkthrough of the new preview developer portal.   History of the Developer Portal If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features. I still have an impression of using that old one: the layout was not that attractive and you had very limited features. In November 2010, along with the SDK 1.3 release, the developer portal got a big jump. In order to provide more usability and features it was rebuilt on Silverlight. Hence it runs like a desktop application with many windows, lists, commands and context menus. From 2010 till now many features have been added to this portal, such as remote desktop, co-admin, virtual connect, VM role, etc., and the portal itself became more and more complicated. But using Silverlight brought some problems. The first one is browser capability. As you know, on most mobile and tablet devices the browser doesn't allow rich content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even though what they need may just be to restart a hosted service or view the status of their databases. Another problem is performance. Silverlight provides a rich experience to the users, but it also needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML, with JavaScript, as a mobile-friendly, cross-browser, interactive web site.   Preview Portal vs. Silverlight Portal Before I start to talk about the new preview portal I'd better highlight that this preview portal is a PREVIEW version, which means that even though you can do almost everything that is already in the old one, along with some cool new features I will mention in the coming several posts, there are some things still being developed and migrated. So sometimes you need to switch back to the old one. For example, in the preview portal there is no co-admin management function, no remote desktop function, and the SQL database management function will take you back to the old SQL Azure Management Portal. But as Microsoft said, these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to this preview one, you need to click the preview button on top of the page and click the "Take me to the previous portal" link to go back.   Overview There are four parts in the preview portal. On the top is the header, which shows the account you are currently logged in with. If you click on the header it will show the top menu of Windows Azure, where you can navigate to the Windows Azure home page, the price information page, community and account, etc. The navigation bar is on the left hand side, with the categories listed below. ALL ITEMS All items in your Windows Azure account, including the web sites, services, databases, etc. WEB SITES The web sites in your Windows Azure account. It will only show the web sites you have. The linked resources will be shown if you drill down into a web site. VIRTUAL MACHINES The virtual machines that you have deployed to Azure.
CLOUD SERVICES All windows azure hosted services in your account. SQL DATABASES All SQL databases (SQL Azure) in your account. STORAGE All windows azure storage services in your account. NETWORKS The virtual network (Windows Azure Connect) you had been created. The available items will be listed in the main part of the page based on which category your currently selected. If there’s no item it will show the link to you to quick create. At the bottom of the page there will be the command and information bar. Based on what is selected and what is performed by the user, it will show the related information and commands. For example, in the image below when I was creating a new web site, the information bar told me that my web site is being provisioned; and there are two commands in the command bar. And once it ready the command bar will show some commands that I can do to my new web site. The “Web Sites” is a new feature introduced alone with this upgrade. It gives us an easier and quicker way to establish a website from the scratch or from some existing library. I will introduce it more details in the coming next post. Also in the command bar you can create a service by clicking the NEW button. It will slide the creation panel up to you.   Where’s My Hosted Services The Windows Azure Hosted Services had been renamed to the Cloud Services. Create a new service would be very easy. Just click the NEW button at the bottom of the page, and select the CLOUD SERVICE and QIUICK CREATE. This will create a blank hosted service without deployment and certificate. It just needs you to specify the service URL and the affinity/region. Then the service will be shown in the list. If you clicked the item all information will be shown in the main part. Since there’s no package deployed to this service so currently we cannot see any information about it. But we can upload the package by using the command at the bottom. And as you can see, we could manage the configuration, instances, certificates and we can scale up and down (change the VM size), in and out (increase and decrease the instance count) to our service. Assuming I had created an ASP.NET MVC 3 web role project in Visual Studio and completed the package. Then I can click the UPLOAD button in this page to deploy my package. In the popping up window I just specify my deployment name, package file and configure file. Also I can check the box below so that it will NOT warn me if only one instance of this deployment. Once we clicked the OK button our package will be uploaded and provisioned by the platform. After a while we can see the service was ready from the information bar. We can have the basic information about this service and deployment if we to the dashboard page. For example the usage overview diagram, status, URL, public IP address, etc.. In the configure page we can view and change the CSCFG content such as the monitor setting, connection strings, OS family. In scale page we can increase and decrease the count of the instances. And in the instances page we can view all instances status. And, if your services is using some SQL databases and storages they will be shown as the linked resources under the linked resources page. And you can manage the certificates of this service as well under the certificates page.   How About My Storage Services The storage service can be managed by clicking into the STORAGES link in the navigation bar. And we can create a new storage service from the NEW button. 
    After specifying the storage name and region it will be provisioned by the platform. If you want to copy or manage the storage key you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration. Click the storage item in the list and navigate to the configure page. As you can see in the page, you can enable monitoring for blob, table and queue. You can also enable logging for any requests that come to the storage. But as the tooltip in the page shows, enabling the monitoring and logging will increase the usage of the storage, which means increasing the bill for it. So make sure you enable them appropriately.   And My SQL Databases (SQL Azure) The last thing I want to quickly introduce is the SQL databases, formerly named SQL Azure. You can create a new SQL Database Server and a new database by clicking the ADD button under the SQL Database navigation item. In the pop-up window just specify the database name, the edition, size, collation and the server. You can select an existing SQL Database Server if you have one, or create a new one. If you selected to create a new server, there will be another step you need to do, which is to specify the server login, password and the region. Once it is ready you can manage your databases as well as the servers in the portal. In a particular server you can update the firewall settings in its Configure page. So, What Else There are some other areas of the preview portal I didn't cover, such as the virtual machines, virtual network and web sites. Regarding the virtual machines and web sites, I will talk about them in future separate posts. Regarding the virtual network, it is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development. Some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open the remote desktop on your hosted services, and you cannot navigate directly to the Windows Azure Service Bus, Access Control and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to navigate back to the old portal. So in the coming several months we might need to use both of these two sites.   Summary In this post I quickly introduced the new Windows Azure developer portal. Since it had been rearranged and renamed, I demonstrated some features that existed in the old portal, such as how to create and deploy a hosted service, and how to provision a storage service and SQL database. All features in the old portal have been, are being, or will be migrated into this new portal, but some of them are in a different category and page that we need to figure out.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far: In part 1, I went over the basic process that I intended to show with installing an ODEE on a green field server. I walked you through the basic installation of Oracle 11g database In part 2, I covered the installation of WebLogic application server. In part 3, I showed you how to install SOA Suite for WebLogic. In part 4, we did the first part of the installation of ODEE itself. What remains after all of that, is the deployment of the ODEE components onto the database and application server - so let's get to it! DATABASE First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to login to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g.Run SQLPLUS as SYSDBA and execute dmkr_admin.sql:  sqlplus / as sysdba @dmkr_admin.sql Execute dmkr_asline.sql, dmkr_admin_correspondence_example.sql.  If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish). APPLICATION SERVER Next, we'll deploy the WebLogic domain and it's components - Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to login to the application server in your green field environment. 1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set location of MIDDLEWARE_HOME and ODEE HOME. If you have used the defaults you’ll probably need to change the E: to C: and that’s it. Save the changes.3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you’ll probably need to just change E: to C: and that’s it. Save the changes.4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors and correct them, and rerun the script.5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again this should run to completion. 6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.  a. Note: if you saw database connection errors, you probably didn’t make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and login with the the weblogic credential you provided in the ODEE installation process. b. Once you're logged in, open Services?Data Sources. Select dmkr_admin and click Connection Pool.  c. The end of the URL should match the connection type you chose. 
If you chose ServiceName, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<serviceName> and if you chose SID, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<SIDname> d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com". (It does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER. e. Save the change and repeat for the data source dmkr_asline.  f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines and make the appropriate change, then save the file.  7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd. 8. In the same directory, execute create_users_groups_correspondence_example.cmd. 9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output like this: 10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials to start the server. You can prevent this by providing the credential in the startManagedwebLogic.cmd file for the WLS_USER and WLS_PASS values. Note that the credential will be stored in cleartext. To start the server, type in the command shown. a. Start the JMS Server: ./startManagedWebLogic.cmd jms_server b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server SOA Composites  If you're planning on testing out the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.1. Stop the servers listed in the previous section (Step 10) in the reverse order that they were started.2. Run the Domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.3. Select Extend and click next. 4. Select the iDocumaker Domain and click Next. 5. Select the Oracle SOA Suite – 11.1.1.0 (this may automatically select other components which is OK). Click Next. 6. View the Configure JDBC resources screen. You should not make any changes. Click Next. 7. Check both connections and click Test Connections. After successful test, click Next. If the tests fail, something is broken. Go back to configure JDBC resources and check your service name/SID. 8. Check all schemas. Set a password (will be the same for all schemas). Enter the database information (service name, host name, port). Click Next. 9. Connections should test successfully. If not, go back and fix any errors. Click Next. 10. Click Next to pass through Optional Configuration. 11. Click Extend. 12. Click Done. 13. Open a terminal window and navigate to/execute: ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd14. Start the WebLogic Servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see the previous section Step 10. 
Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again. 15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server116. Open a browser to http://localhost:7001/console and login. 17. Navigate to Services?Data Sources and select DMKR_ASLINE. 18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source. 19. Open a command prompt and navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd. That's it! (As if that wasn't enough?) DOCUMAKER Deploy the sample MRL resources by navigating to/executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database. Start the Factory Services. Start?Run?services.msc. Locate the service named "ODDF xxxx" and right-click, select Start. Note that each Assembly Line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line, so this allows for easy scripting to control all the services if you choose to do so. Repeat for the Docupresentment service. Note that each Assembly Line has a separate Docupresentment. Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input and select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear! Open browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is ok. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from.  So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities: clustering WebLogic services configuring WebLogic for redundancy configuring Oracle 11g for RAC adding additional Factory servers for redundancy/processing capacity setting up a real MRL (instead of the sample resources) testing Documaker Web Services for job submission and more!  I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy 

    Read the article

  • Google Now is One Step Closer to Becoming Active in Google Chrome

    - by Akemi Iwaya
    Many people have been eager to have Google Now working in their Chrome browsers and this week that dream got one step closer to reality. The first teasers that the new feature is becoming active have started to appear, so now is a good time to activate the switch for it and be ready for its arrival. You will need to be running the Dev Channel on your computer and enable the Google Now switch via Chrome Flags (chrome://flags/) if you have not already done so. The switch will be towards the bottom of the list. Once that is done restart your browser. After the browser has restarted you will see a notification window pop up as seen in the first screenshot above. Click Yes and a second small pop up message window will appear letting you know more about the freshly enabled feature. Unfortunately we were not able to catch a screenshot of the second message window before it disappeared.    

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1  , our Migration Checklist continues: Step 5: Update statistics It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database.  However, a word of caution: running the sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistics of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from the SSMS. Step 6: Set database options You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or the Management Studio since you can change the properties. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database’s status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and latter databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page’s contents and writes it to the page header before storing it in disk. When the page is read from the disk, the checksum is computed again and compared with the checksum stored in the header.  Torn page detection works much like the same way in that it stores a bit in the page header for every 512 byte sector. When data is read from the page, the torn page bits stored in the header is compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header.  This may not be something you would want to set unless there is a very specific reason.  Microsoft suggests using the CHECKSUM page verify option as this offers more protection. Step 7: Map database users to logins A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters. 
Depending on what you want to do, you may be using less than four though. The first parameter, @Action, can take three values. When you specify @Action = ‘Report’, the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be ‘Update_One’. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called “bob” and there is already a SQL Server login with the same name and you want to map the user to the login, you will execute a query like the following: sp_change_users_login         @Action = ‘Update_One’,         @UserNamePattern = ‘bob’,         @LoginName = ‘bob’ If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login and the value of the @Action parameter will be ‘Auto_Fix’. If the login already exists, it will be automatically mapped to the user account. Unfortunately sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map the database user accounts.  Continues…
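    To make the 'Auto_Fix' case above concrete, here is a small sketch in the same style as the earlier example (the user name and password are placeholders; the password is only used if the login has to be created):

        EXEC sp_change_users_login
                @Action = 'Auto_Fix',
                @UserNamePattern = 'bob',
                @Password = 'placeholder_password'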

    Read the article

  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
    Every now and then you come across a problem where there is nothing in the "troubleshooting manual" which can help you. Instead you need to think outside the box. This happened to me two or three years back. Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue, and worse, the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc. I was still no nearer the solution. And then the light bulb moment! OHS is based on Apache - what happens if I attempt to start HTTP Server using the native apache command? Trouble was, the OHS 11g solution has its binaries and configuration files in separate "home" directories: ORACLE_HOME contains the binaries, ORACLE_INSTANCE contains the configuration files. How to set the environment so that native apache commands run without error? Eventually, with help from a colleague, the knowledge article How to Start Oracle HTTP Server 11g Without Using opmnctl [ID 946532.1] was born! To be honest, I cannot remember the exact cause and solution of that OHS problem two or three years ago. But I do remember that an attempt to start HTTP Server using the native apache command threw back an error to the console which led me to discover the culprit was some unusual filesystem fault. The other day, I was asked to review and publish a new knowledge article which described how to use the apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The result: How To Run Native Apache Commands in an Oracle HTTP Server 11g Environment [ID 946532.1] Highlights: Title change Improved environment setting scripts Interactive, so there should be no need to manually edit the scripts (although readers are welcome to do so) Automatically dump out some diagnostic information Inclusion of some links to other troubleshooting collateral To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below. MS Windows: Wrapper cmd script - calls main cmd script [After download, remove the ".txt" file extension] Main cmd script - sets OHS 11g environment to run Apache commands [After download, remove the ".txt" file extension] Unix: Shell script - sets OHS 11g environment to run Apache commands on Unix Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or fixes will be applied to the scripts attached to the knowledge article. Lastly, to find out more about native apache commands, refer to the Apache documentation: apachectl - Apache HTTP Server Control Interface [http://httpd.apache.org/docs/2.2/programs/apachectl.html] httpd - Apache Hypertext Transfer Protocol Server [http://httpd.apache.org/docs/2.2/programs/httpd.html]
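    For a rough idea of what those environment-setting scripts automate, the sketch below sets the two homes and runs the native httpd binary against the instance's configuration to dump the loaded modules. All paths and the component name (ohs1) are assumptions for a typical OHS 11g install, not values taken from the knowledge article:

        export ORACLE_HOME=/u01/app/oracle/middleware/Oracle_WT1     # binaries (assumed path)
        export ORACLE_INSTANCE=/u01/app/oracle/middleware/asinst_1   # configuration (assumed path)
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
        # -M lists the static and shared modules mentioned above
        $ORACLE_HOME/ohs/bin/httpd -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf -M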

    Read the article

  • aspnet_regiis -lk is not listing the site I need

    - by Luke Duddridge
    I am trying to release a site to run under framework 4 on a server that also hosts framework 2 sites. By default the app has defaulted to framework 2, but when I try to change its framework to 4 I get a message saying that the following action will cause IIS to reset. The problem I have is that there are several active sites that I do not want to interrupt with a restart. The message goes on to say you can avoid restarting by running the following: aspnet_regiis -norestart -s [IIS Virtual Path] I have been attempting to find the site's virtual path, but when I run aspnet_regiis -lk the site I am after does not appear to be listed. My first thought was that it has something to do with the app pool, but I'm sure I saw sites that are inactive listed, and after creating a basic site to get it to run under framework 2, the site still did not appear in the -lk list. Can anyone tell me if there is an alternative to -lk where I can find the specific information relating to the IIS Virtual Path?
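    For context on the path format: the argument that aspnet_regiis -s expects is a metabase-style path of the form W3SVC/<site id>/ROOT[/appName]. A hypothetical sketch, assuming IIS 7.x and assuming appcmd reports the site's id as 5 (neither value comes from the question):

        %windir%\system32\inetsrv\appcmd list site
        aspnet_regiis -norestart -s W3SVC/5/ROOT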

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know a weblogic domain consists of set of servers running independently or in a cluster mode, sharing the distributed resources. And in most environments weblogic  cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability.  These servers can run on the same machine, or be located on different machines.  It's a common task to increase a cluster's capacity by adding new machines to the cluster to host the new server instances.  You can do it by manually installing weblogic binaries to the new host and use pack/unpack commands to add a managed server to this new host.  But with Enterprise Manager Grid Control 11gR1 (EMGC) there is  another way - Fusion Middleware Domain Scale Up  procedure. I'm going to show you how it works.Here is a picture of  my medrec_oradb weblogic domain, what is registered in EMGC. It contains an admin server and a cluster MedRecCluster with  the single managed server MS1. Both admin and managed servers are on the same host oel46-vmware, it's a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure.  And here are the application deployments, note that couple of applications are deployed to the cluster.First of all I have to prepare a new machine that will host new managed sever of my cluster. I created new VM with OEL 5.4 using the corresponding Oracle VM template available in Oracle E-Delivery site for Oracle Linux and Oracle VM and named it wls1032. Next step is to install Oracle EM Grid Control 11gR1 Agent to this new host.  You can download it from the OTN page and install it manually,  or you can use Agent Installation Deployment procedure available in EMGC  (Deployments->Agent Installation->Install Agent). Anyway, when you agent is up and running on the new machine, you will see it in EMGC Console in the Targets->Hosts subtab.Now we are ready to scale up our weblogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select a Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process.Select the domain from list, enter the working directory on the admin server host, and also fill the weblogic credentials for the administration server console and the OS credentials for the  admin server host.  Click Next button.  The next step allows you to configure you domain, to add a new manager server to the cluster you should select the cluster in the tree and click Add Server button. Select the newly added server in a tree, choose the target host and  enter the configuration details of your managed server. You can also add new machine and node manager details.  Please note that you cannot change the values in  Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host.   Working directory on the target host should have enough free space to store FMW home binaries and domain configuration files.  In my experience the working directories should have at least 3 Gb of free space.  The last thing you should fill is the OS credentials for the target host. The next steps allows you to schedule the execution of the procedure, it is started immediately in my example. The last step is just a review the configuration for the domain scale up. Click Submit to launch the process. 
    You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console. As you can see in the picture below, the procedure consists of many steps, and I'm going to share my experience about the issues that I had at some of the steps. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking the Retry button.
    Check OUI Prerequisites - this step may fail if the target host does not pass the prerequisite checks for Weblogic Server installation, such as the amount of RAM, Linux packages installed, etc.
    Create FMW Clone Archive - this step may fail if you do not have enough free space in the working directory on the administration server host.
    Transfer cloning archive to targets - this step may fail if the EMGC agents on the admin server host or on the target host are not secured. You should secure the agent by issuing the ./emctl secure agent command from the $AGENT_HOME/bin directory and entering the agent registration password.
    Both the Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host.
    The most complicated issue I had was on the Run Inventory Collection step. The step failed and I noticed that the agent on the target server had also failed, with the following error in the $AGENT_HOME/sysman/log/emagent.trc log file:
    2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.
    2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
    2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/uploadFATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.
    2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 3
    2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now
    2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures
    2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
    2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminating
    I checked the timezone of my domain target inside the EMGC repository:
    select timezone_region
    from mgmt_targets
    where target_type = 'weblogic_domain'
      and display_name = 'medrec_oradb'
    "TIMEZONE_REGION"
    "America/Los_Angeles"
    Then I checked the timezone of my agents and indeed, they differed:
    select target_name, timezone_region
    from mgmt_targets
    where type_display_name = 'Agent'
    "TARGET_NAME"    "TIMEZONE_REGION"
    "oel46-vmware:3872"    "America/Los_Angeles"
    "wls1032.imc.fors.ru:3872"    "America/New_York"
    So I had to change the timezone on the wls1032 host and propagate this change to the agent and to the EMGC repository.
    Here were the steps:
    - issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles"
    - propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory
    - connected to the EMGC repository as sysman and executed the following PL/SQL block:
       begin
          mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
          commit;
       end;
    After that I had to clear the pending uploads on wls1032.imc.fors.ru:
      rm -r $AGENT_HOME/sysman/emd/state/*
      rm -r $AGENT_HOME/sysman/emd/collection/*
      rm -r $AGENT_HOME/sysman/emd/upload/*
      rm $AGENT_HOME/sysman/emd/lastupld.xml
      rm $AGENT_HOME/sysman/emd/agntstmp.txt
      $AGENT_HOME/bin/emctl start agent
      $AGENT_HOME/bin/emctl clearstate agent
    The last part of this solution was to resync the agent in the EMGC console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked in the next screen).
    After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one:
    EMD upload error: Failed to upload file A0000004.xml: HTTP error.Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]
    So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB. To increase the max file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:
     ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
     ./emctl stop oms
     ./emctl start oms
    After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my weblogic domain scale-up in the EMGC Console.
    Deployments
    So, now the weblogic cluster contains 2 managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is a part of the WebLogic Server Management Pack Enterprise Edition.
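    For quick reference, the agent-side part of that timezone fix, collected into one sequence (a sketch only - it restates the commands above; $AGENT_HOME and the host name are the ones used in this post, so adjust them for your own environment):
      # on the target host wls1032, as the agent owner, after fixing the OS timezone with system-config-date
      $AGENT_HOME/bin/emctl resetTZ agent
      rm -r $AGENT_HOME/sysman/emd/state/* $AGENT_HOME/sysman/emd/collection/* $AGENT_HOME/sysman/emd/upload/*
      rm $AGENT_HOME/sysman/emd/lastupld.xml $AGENT_HOME/sysman/emd/agntstmp.txt
      $AGENT_HOME/bin/emctl start agent
      $AGENT_HOME/bin/emctl clearstate agent
      # once the agent has been resynchronized in the EMGC console:
      $AGENT_HOME/bin/emctl upload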

    Read the article

  • How to make Connect Communications VPN connection in 10.10?

    - by Bilal Mohammad Qazi
    These steps were sent by my ISP admin for version 10.10, and I'm using 11.10... I implemented Step 1 successfully up to point 7; after that, the problems are marked after '//'. Step 2 I cannot do completely.
    How to make a Connect Communications VPN connection in Ubuntu 10.10.
    1st Step:
    1- Go to System > Administration > Synaptic Package Manager
    2- Search for "PPTP", check "network-manager-pptp" and click "Apply"
    3- Click on the Network Manager tray icon with your right mouse button and choose "Edit Connections…".
    4- Go to the "VPN" tab and click "Add".
    5- Choose "Point-to-Point Tunneling Protocol (PPTP)" as the VPN Connection Type
    6- Check the VPN Connection Type and click "Create".
    7- Give your VPN connection a name and assign all the necessary information:
       • Gateway = blue.connect.net.pk if you got the Blue Package, or
       • Gateway = green.connect.net.pk if you got the Green Package, or
       • Gateway = blueplus.connect.net.pk if you got the BluePlus Package, or
       • Gateway = red.connect.net.pk if you got the Red Package
       • User name = Connect Communications Userid
       • Password = Connect Communications Password
    8- Now click on "Advanced".
       Authentication:
       • Uncheck "PAP" // cannot uncheck
       • Uncheck "MSCHAP" // cannot uncheck
       • Uncheck "CHAP"
       • Check only "MSCHAPv2" // EAP is shown in ver 11.10 and cannot be unchecked
       Security And Compression:
       • Uncheck "Use Point-to-Point encryption (MPPE)".
       • Uncheck "Allow stateful encryption".
       • Uncheck "Allow BSD data Compression".
       • Uncheck "Allow Deflate data Compression".
       • Uncheck "Use TCP Header Compression".
       • Uncheck "Send PPP echo Packets"
       Then press "OK", then "Apply".
    9- Now you are able to connect to the specified VPN connection via the Network Manager. Then you can connect to the VPN in the menu bar, and your Internet icon will have a lock when the connection is successful.
    2nd Step:
    Open a terminal window (Applications > Accessories > Terminal).
    Run the command "sudo". Now give the root password.
    Then run the command "netstat -r -n". It will show some lines; for example, from the last line pick the IP from the 2nd column, like 10.111.0.1:
       0.0.0.0 10.111.0.1 0.0.0.0 UG 0 0 0 eth0
    Now run the following command:
       echo "route add -net 10.101.8.0 netmask 255.255.252.0 gw 10.152.24.1" > /etc/rc.local
    Note: 10.111.0.1 is an example IP.
    Now run "sh /etc/rc.local"
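    For the 2nd step on a newer release it may be easier to test the route by hand first and only persist it once it works. A rough sketch, using the example subnet and netmask from the instructions above together with whatever gateway netstat -r -n reports on your own machine (the 10.111.0.1 value here is just the example IP, and appending to /etc/rc.local is my suggestion rather than the ISP's exact wording, since ">" would overwrite the file):
      sudo route add -net 10.101.8.0 netmask 255.255.252.0 gw 10.111.0.1
      # if that works, add the same "route add ..." line to /etc/rc.local
      # (before the final "exit 0") so it is applied again at boot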

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! 
Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, least of all you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications! 
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @[email protected]('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.
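    As a starting point for the "identify the DAC" exercise above, one possible approach (a sketch of mine, not from the original post) is to look the offending session up by its endpoint rather than parsing the event data:
      -- sessions that came in over the Dedicated Admin Connection endpoint
      SELECT s.session_id, s.login_name, s.program_name, s.login_time
      FROM sys.dm_exec_sessions AS s
      JOIN sys.endpoints AS e ON s.endpoint_id = e.endpoint_id
      WHERE e.name = N'Dedicated Admin Connection';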

    Read the article

  • How do I install MATLAB R2012a?

    - by Mehdi
    I have downloaded MATLAB R2012a for the Unix platform and I want to install it on my Ubuntu 11.10. To install, I try this command:
    /<matlab_installation_file_directory>/install
    and it says: install: missing file operand
    According to its manual I must give it an input file, so I created an input file like this to install in 'Stand Alone' mode:
    destinationFolder=usr/local/R2012a
    fileInstallationKey=xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
    agreeToLicense=yes
    outputFile=/tmp/mathworks_usr.log
    mode=interactive
    activationPropertiesFile=home/.../lic_standalone.dat
    Actually, I'm not sure what file the "activationPropertiesFile" field requires, so I supposed it requires the license file. I saved this file in txt format in the same directory where the installation files are. Then I tried this command:
    install -inputFile my_input_file.txt
    and it gets this error: install: invalid option -- 'i'
    I know there is some help on other websites and also some questions here about this topic, but I can't figure out what the problem is. Please help me, I'm a real noob on Linux. Thank you guys
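    The second error suggests the shell is running the system's /usr/bin/install (from coreutils) instead of the MATLAB installer script, because the installer directory is not on the PATH. That is an assumption based on the error text rather than something stated in the question, but the usual workaround is to call the installer by an explicit path from its own directory, roughly like this (paths are illustrative):
      cd /<matlab_installation_file_directory>
      sudo ./install -inputFile /full/path/to/my_input_file.txt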

    Read the article

  • Hosting a website on Heroku... I know how to, but I'm running into problems!

    - by Thomas Miller
    I'm starting to learn more on the back-end scale of programing. Recently I started up Heroku for the second or third time. This time I actually installed the Git update to my Mac and installed Heroku in the terminal. I wanted to upload a static html site with the Sinatra gem. Everything worked out fine inside the terminal, though I added Sinatra after I got everything working and the file with the site hooked up to Heroku. In my logs I did see that I was missing the Sinatra gem, so I installed it. My site contains both the proper app.rb and config.ru files. I have nothing showing up online. Just a blank screen! Contacting Heroku on this problem has been very difficult. I get a response every day, and on every day I respond with a question to the answer that didn't help me at all. 2011-05-18T00:25:20+00:00 app[web.1]: 71.198.0.51 - - [17/May/2011 17:25:20] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T00:25:20+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313 2011-05-18T00:25:26+00:00 app[web.1]: 71.198.0.51 - - [17/May/2011 17:25:26] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T00:25:26+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=5ms bytes=313 2011-05-17T18:25:51-07:00 heroku[web.1]: Idling 2011-05-17T18:26:01-07:00 heroku[web.1]: State changed from up to down 2011-05-18T01:26:01+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T01:26:01+00:00 app[web.1]: >> Stopping ... 2011-05-18T01:26:02+00:00 heroku[web.1]: Process exited 2011-05-17T20:12:46-07:00 heroku[web.1]: Unidling 2011-05-17T20:12:47-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T03:12:48+00:00 heroku[web.1]: Starting process with command: `thin -p 40055 -e production -R /home/heroku_rack/heroku.ru start` 2011-05-18T03:12:49+00:00 app[web.1]: >> Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T03:12:49+00:00 app[web.1]: >> Maximum connections set to 1024 2011-05-18T03:12:49+00:00 app[web.1]: >> Listening on 0.0.0.0:40055, CTRL+C to stop 2011-05-18T03:12:50+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=9954ms service=6ms bytes=565 2011-05-18T03:12:50+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:12:50] "GET /style.css HTTP/1.1" 200 - 0.0012 2011-05-18T03:12:50+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-17T20:12:50-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T03:12:51+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:12:51] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T03:12:51+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313 2011-05-18T03:13:05+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=5ms bytes=565 2011-05-18T03:13:05+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:13:05] "GET / HTTP/1.1" 200 293 0.0011 2011-05-18T03:13:05+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313 2011-05-18T03:13:05+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:13:05] "GET /favicon.ico HTTP/1.1" 404 18 0.0007 2011-05-18T03:57:05+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 20:57:05] "GET / HTTP/1.1" 200 293 0.0007 2011-05-18T03:57:05+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=4ms bytes=565 2011-05-18T03:57:05+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - 
[17/May/2011 20:57:05] "GET /style.css HTTP/1.1" 200 - 0.0007 2011-05-18T03:57:05+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-18T03:57:08+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 20:57:08] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-17T21:58:27-07:00 heroku[web.1]: Idling 2011-05-18T04:58:30+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T04:58:30+00:00 app[web.1]: >> Stopping ... 2011-05-18T04:58:30+00:00 heroku[web.1]: Process exited 2011-05-17T21:58:33-07:00 heroku[web.1]: State changed from up to down 2011-05-17T23:11:58-07:00 heroku[web.1]: Unidling 2011-05-17T23:11:58-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T06:12:00+00:00 heroku[web.1]: Starting process with command: `thin -p 40091 -e production -R /home/heroku_rack/heroku.ru start` 2011-05-18T06:12:01+00:00 app[web.1]: >> Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T06:12:01+00:00 app[web.1]: >> Maximum connections set to 1024 2011-05-18T06:12:01+00:00 app[web.1]: >> Listening on 0.0.0.0:40091, CTRL+C to stop 2011-05-18T06:12:01+00:00 app[web.1]: 183.97.156.226 - - [17/May/2011 23:12:01] "GET / HTTP/1.1" 200 293 0.0017 2011-05-18T06:12:02+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=3209ms service=5ms bytes=565 2011-05-18T06:12:03+00:00 app[web.1]: 183.97.156.226 - - [17/May/2011 23:12:03] "GET /style.css HTTP/1.1" 200 - 0.0019 2011-05-17T23:12:08-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T00:13:13-07:00 heroku[web.1]: Idling 2011-05-18T00:13:16-07:00 heroku[web.1]: State changed from up to down 2011-05-18T07:13:16+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T07:13:16+00:00 app[web.1]: >> Stopping ... 
2011-05-18T07:13:17+00:00 heroku[web.1]: Process exited 2011-05-18T01:54:21-07:00 heroku[web.1]: Unidling 2011-05-18T01:54:21-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T08:54:23+00:00 heroku[web.1]: Starting process with command: `thin -p 59491 -e production -R /home/heroku_rack/heroku.ru start` 2011-05-18T08:54:24+00:00 app[web.1]: >> Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T08:54:24+00:00 app[web.1]: >> Maximum connections set to 1024 2011-05-18T08:54:24+00:00 app[web.1]: >> Listening on 0.0.0.0:59491, CTRL+C to stop 2011-05-18T01:54:28-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=6943ms service=6ms bytes=565 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET / HTTP/1.1" 200 293 0.0018 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /style.css HTTP/1.1" 200 - 0.0014 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=1ms bytes=313 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=1ms bytes=313 2011-05-18T02:55:23-07:00 heroku[web.1]: Idling 2011-05-18T02:55:33-07:00 heroku[web.1]: State changed from up to down 2011-05-18T09:55:34+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T09:55:34+00:00 app[web.1]: >> Stopping ... 
2011-05-18T09:55:34+00:00 heroku[web.1]: Process exited 2011-05-18T07:23:10-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T14:23:12+00:00 heroku[web.1]: Starting process with command: `thin -p 20560 -e production -R /home/heroku_rack/heroku.ru start` 2011-05-18T14:23:13+00:00 app[web.1]: >> Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T14:23:13+00:00 app[web.1]: >> Maximum connections set to 1024 2011-05-18T14:23:13+00:00 app[web.1]: >> Listening on 0.0.0.0:20560, CTRL+C to stop 2011-05-18T07:23:13-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET / HTTP/1.1" 200 293 0.0018 2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=7ms bytes=565 2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET /style.css HTTP/1.1" 200 - 0.0015 2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET /favicon.ico HTTP/1.1" 404 18 0.0009 2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313 2011-05-18T08:24:03-07:00 heroku[web.1]: Idling 2011-05-18T08:24:07-07:00 heroku[web.1]: State changed from up to down 2011-05-18T15:24:07+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T15:24:07+00:00 app[web.1]: >> Stopping ... 2011-05-18T17:34:27-07:00 heroku[web.1]: Unidling 2011-05-18T17:34:28-07:00 heroku[web.1]: State changed from created to starting 2011-05-19T00:34:29+00:00 heroku[web.1]: Starting process with command: `thin -p 57621 -e production -R /home/heroku_rack/heroku.ru start` 2011-05-18T17:34:31-07:00 heroku[web.1]: State changed from starting to up 2011-05-19T00:34:32+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=5ms bytes=565 2011-05-19T00:34:32+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:32] "GET / HTTP/1.1" 200 293 0.0016 2011-05-19T00:34:32+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:32] "GET /style.css HTTP/1.1" 200 - 0.0011 2011-05-19T00:34:32+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-19T00:34:34+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313 2011-05-19T00:34:34+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:34] "GET /favicon.ico HTTP/1.1" 404 18 0.0007 2011-05-18T18:35:48-07:00 heroku[web.1]: Idling 2011-05-18T18:35:51-07:00 heroku[web.1]: State changed from up to down
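    The logs above show the dyno booting and answering GET / with a 200 and a small body, so the app appears to be served; a blank page would then point at the page content itself rather than at Heroku. For comparison, a minimal Sinatra setup for a static index page looks roughly like the sketch below (file names and contents are illustrative, not the asker's actual code):
      # app.rb
      require 'sinatra'

      get '/' do
        # serve the static page kept in ./public
        send_file File.join(File.dirname(__FILE__), 'public', 'index.html')
      end

      # config.ru
      require './app'
      run Sinatra::Application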

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and Solaris kernel module sha1. The optimized code is used automatically on systems with a x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include Nehalem, Westmere, Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm—NIST recommends using at least SHA-256, SHA-1 is still widely used and will be with us for awhile more. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily though in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits is used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically for round i it XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round is a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 32-bit result of the final round, W[79], is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the results of round i-3, W[i-3], one can vectorize 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds could be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture, although the microprocessor has to support vectorization to use it, and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. 
The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions: movdqa W_minus_04, W_TMP pxor W_minus_28, W // W equals W[i-32:i-29] before XOR // W = W[i-32:i-29] ^ W[i-28:i-25] palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from // W[i-4:i-1] and W[i-8:i-5] vectors pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) movdqa W, W_TMP // 4 dwords in W are rotated left by 2 psrld $30, W // rotate left by 2 W = (W >> 30) | (W << 2) pslld $2, W_TMP por W, W_TMP movdqa W_TMP, W // four new W values W[i:i+3] are now calculated paddd (K_XMM), W_TMP // adding 4 current round's values of K movdqa W_TMP, (WK(i)) // storing for downstream GPR instructions to read A window of the 32 previous results, W[i-1] to W[i-32] is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is like this (each "R" represents 1 round of SHA-1 computation): RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR With vectorization, 4 rounds can be computed in parallel: RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR Optimization: AVX The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, a input and an output. AVX allows three operands, two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above: pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) With AVX they can be combined in one instruction: vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader, AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more? Optimization: Solaris-specific In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: Increased the digest(1) and mac(1) command's buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well. Optimized encode functions, which byte swap the input and output data, to copy/byte-swap 4 or 8 bytes at-a-time instead of 1 byte-at-a-time. Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits) Performance I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement. 
    Graphs are better than words and numbers, so here they are:
    The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds.
    The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second.
    Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64 Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second.
    Availability
    This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:
    nehalem $ isainfo -v
    64-bit amd64 applications
        sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
        sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:
    sandybridge $ isainfo -v
    64-bit amd64 applications
        avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
        avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine if they are running on SSSE3- or AVX-capable machines and execute the correctly-tuned code for that microprocessor.
    Summary
    The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, they come automatically "under the hood" with the current Solaris release.
    References
    "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010) - the source for the SHA-1 optimizations used in Solaris
    "SHA-1", Wikipedia - a good overview of SHA-1
    FIPS 180-1, the SHA-1 standard (FIPS, 1995)
    NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)
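    As a closing footnote to the vectorization discussion earlier in this post, the refactored message-schedule recurrence can be sanity-checked in a few lines of Python (an illustration of mine, not part of the Solaris code): expand a random 16-word block with the standard SHA-1 recurrence, then confirm the alternate form gives the same words for rounds 32 and up.
      import random

      def rol(x, n):
          # 32-bit rotate left
          return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

      # a random 512-bit block as 16 32-bit words
      W = [random.getrandbits(32) for _ in range(16)]

      # standard schedule: W[i] = (W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16]) rotated left 1
      for i in range(16, 80):
          W.append(rol(W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16], 1))

      # refactored form used for vectorization: W[i] = (W[i-6] ^ W[i-16] ^ W[i-28] ^ W[i-32]) rotated left 2
      for i in range(32, 80):
          assert W[i] == rol(W[i-6] ^ W[i-16] ^ W[i-28] ^ W[i-32], 2)

      print("both recurrences agree for rounds 32..79")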

    Read the article

  • MYSQL – Identifying Current Version of MySQL Server Installation – Part 2

    - by Pinal Dave
    Earlier I wrote an article about Detecting the Current Version of MySQL Server Installation. After the post I received quite a few emails in which various users suggested that there are many more ways to figure out the version of MySQL. Here are a few of the methods which I received in those emails.
    Method 1: This method retrieves the value with the help of Information Functions.
    SELECT VERSION();
    Method 2: This method is very similar to SQL Server.
    SELECT @@Version
    Method 3: You can connect to MySQL with the command prompt and type the following command:
    STATUS;
    Method 4: Please refer to my earlier blog post.
    SHOW VARIABLES LIKE "%version%";
    Let me know if you know any more methods and I will extend this blog post.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
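    One more method, offered here as an editorial suggestion rather than something from the original post: the client and server binaries also report their version straight from the shell, without opening a SQL session.
      mysql --version
      # or, for the server binary itself (path may vary, e.g. /usr/sbin/mysqld):
      mysqld --version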

    Read the article

  • Doing Time Limited Flight Recordings Using Start Up Parameters

    - by Marcus Hirt
    Just like with the old JRockit Runtime Analyzer, it is possible to start up recordings using command line parameters to JRockit. The parameter is called -XX:StartFlightRecording in R28. Below is an example that starts a flight recording half a minute after the JVM has been started. The recording will last for a minute. The name when viewing the ongoing recordings will be MyRecording, and the resulting file will be written to C:\tmp\myrecording.jfr. The recording will use the settings in jre\lib\jfr\profile.jfs. -XX:StartFlightRecording=delay=30s,duration=60s,name=MyRecording,filename=C:\tmp\myrecording.jfr,settings=profile For more information, see the JRockit R28 command line parameter documentation.
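    Put on a complete command line, the parameter above would be used roughly like this (the application jar name is just a placeholder; everything after -XX:StartFlightRecording= is the same as in the example above):
      java -XX:StartFlightRecording=delay=30s,duration=60s,name=MyRecording,filename=C:\tmp\myrecording.jfr,settings=profile -jar MyApplication.jar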

    Read the article

  • How to install Eclipse in Ubuntu 12.04?

    - by Ant's
    I downloaded the Eclipse setup from their homepage, and I followed the instructions on this page. But I couldn't follow the last instruction, which asks me to run:
    /opt/eclipse/eclipse -clean
    If I do so, I get an error message like this:
    sudo: /opt/eclipse/eclipse: command not found
    Notably, I can see the Eclipse icon in my Dash home (if I search for "Eclipse"), but clicking on that icon doesn't open the IDE. Where am I making the mistake?
    Also, running this command in a terminal:
    eclipse
    throws this output:
    /usr/bin/eclipse: 5: /usr/bin/eclipse: /opt/eclipse/eclipse: Permission denied
    Thanks in advance.
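    Both error messages suggest that /opt/eclipse/eclipse either isn't where the wrapper script expects it or isn't marked executable. A reasonable first check (an educated guess based on the messages above, not a confirmed fix for this exact setup):
      ls -l /opt/eclipse/eclipse          # does the launcher exist, and does it have the execute bit?
      sudo chmod +x /opt/eclipse/eclipse  # restore the execute bit if it is missing
      /opt/eclipse/eclipse -clean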

    Read the article

  • How to copy a folder from /home/kevin to /opt

    - by lambda23
    I have a new computer installed with Ubuntu 12.04, and I want to install a wireless driver named compat-wireless-3.5-3. The driver folder is currently in /home/kevin, but I want to install the driver in the /opt directory, so before installing it I want to copy the driver folder from /home/kevin to /opt. I tried an ordinary copy (right click > Copy, then Paste), but the Paste option is greyed out. After that, I tried this in a terminal:
    sudo cp /home/kevin/compat-wireless-3.5-3 /opt
    But I get this message:
    cp: omitting directory `home/kevin/compat-wireless-3.5-3'
    What does the message mean? I still can't copy the driver.
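    The message means that cp skips directories unless it is told to copy them recursively. Assuming the paths above, the usual fix is the -r (or -a) flag:
      sudo cp -r /home/kevin/compat-wireless-3.5-3 /opt
      # or, to also preserve permissions and timestamps:
      sudo cp -a /home/kevin/compat-wireless-3.5-3 /opt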

    Read the article
