Search Results

Search found 73305 results on 2933 pages for 'copy run start'.


  • problem with PHP class function ()

    - by lusey
    Hello, this is my first question here and I hope you can help me. I am trying to solve the Towers of Hanoi problem with three search strategies (BFS, DFS and IDS), so I use a "state" class which is defined by five member variables, as here:

        class state {
            var $tower1 = array();
            var $tower2 = array();
            var $tower3 = array();
            var $depth;
            var $neighbors = array();

    It also has many functions. One of them is getneighbors(), which is supposed to fill the array $neighbors with the state's neighbors, which are themselves of type "state". Here is the function:

        function getneighbors() {
            $temp = $this->copy();
            $neighbor1 = $this->copy();
            $neighbor2 = $this->copy();
            $neighbor3 = $this->copy();
            $neighbor4 = $this->copy();
            $neighbor5 = $this->copy();
            $neighbor6 = $this->copy();
            if (!Empty($temp->tower1)) {
                if (!Empty($neighbor1->tower2)) {
                    if (end($neighbor1->tower1) < end($neighbor1->tower2)) {
                        array_unshift($neighbor1->tower2, array_pop($neighbor1->tower1));
                        array_push($neighbors, $neighbor1);
                    }
                } else {
                    array_unshift($neighbor1->tower2, array_pop($neighbor1->tower1));
                    array_push($neighbors, $neighbor1);
                }
                if (!Empty($neighbor2->tower3)) {
                    if (end($neighbor2->tower1) < end($neighbor2->tower3)) {
                        array_unshift($neighbor2->tower3, array_pop($neighbor2->tower1));
                        array_push($neighbors, $neighbor2);
                    }
                } else {
                    array_unshift($neighbor2->tower3, array_shift($neighbor2->tower1));
                    array_push($neighbors, $neighbor2);
                }
            }
            if (!Empty($temp->tower2)) {
                if (!Empty($neighbor3->tower1)) {
                    if (end($neighbor3->tower2) < end($neighbor3->tower1)) {
                        array_unshift($neighbor3->tower1, array_shift($neighbor3->tower2));
                        array_push($neighbors, $neighbor3);
                    }
                } else {
                    array_unshift($neighbor3->tower1, array_shift($neighbor3->tower2));
                    array_push($neighbors, $neighbor3);
                }
                if (!Empty($neighbor4->tower3)) {
                    if (end($neighbor4->tower2) < end($neighbor4->tower3)) {
                        array_unshift($neighbor4->tower1, array_shift($neighbor4->tower2));
                        array_push($neighbors, $neighbor4);
                    }
                } else {
                    array_unshift($neighbor4->tower3, array_shift($neighbor4->tower2));
                    array_push($neighbors, $neighbor4);
                }
            }
            if (!Empty($temp->tower3)) {
                if (!Empty($neighbor5->tower1)) {
                    if (end($neighbor5->tower3) < end($neighbor5->tower1)) {
                        array_unshift($neighbor5->tower1, array_shift($neighbor5->tower3));
                        array_push($neighbors, $neighbor5);
                    }
                } else {
                    array_unshift($neighbor5->tower1, array_shift($neighbor5->tower3));
                    array_push($neighbors, $neighbor5);
                }
                if (!Empty($neighbor6->tower2)) {
                    if (end($neighbor6->tower3) < end($neighbor6->tower2)) {
                        array_unshift($neighbor6->tower2, array_shift($neighbor6->tower3));
                        array_push($neighbors, $neighbor6);
                    }
                } else {
                    array_unshift($neighbor6->tower2, array_shift($neighbor6->tower3));
                    array_push($neighbors, $neighbor6);
                }
            }
            return $neighbors;
        }

    Note that toString(), equals() and copy() are defined too. The problem is that when I call getneighbors() it returns an empty $neighbors array. Can you please tell me what the problem is?
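    A likely explanation, offered here as a hedged note rather than a verified fix: every array_push($neighbors, ...) inside getneighbors() writes to a plain local variable named $neighbors, which is never initialized and is unrelated to the $this->neighbors member declared in the class, so return $neighbors has nothing useful to return. A minimal sketch of the two obvious corrections (untested against the rest of the class):

        // Option 1: keep the result local, but initialize the accumulator first
        function getneighbors() {
            $neighbors = array();   // was missing, so array_push() had no valid array to push onto
            // ... same move-generation code as above ...
            return $neighbors;
        }

        // Option 2: accumulate into the member the class already declares
        array_push($this->neighbors, $neighbor1);
        // ... repeat for the other neighbors ...
        return $this->neighbors;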

    Read the article

  • The Best Tools for Enhancing and Expanding the Features of the Windows Clipboard

    - by Lori Kaufman
    The Windows clipboard is like a scratch pad used by the operating system and all running applications. When you copy or cut some text or a graphic, it is temporarily stored in the clipboard and then retrieved later when you paste the data. We’ve previously shown you how to store multiple items to the clipboard (using Clipboard Manager) in Windows, how to copy a file path to the clipboard, how to create a shortcut to clear the clipboard, and how to copy a list of files to the clipboard. There are some limitations of the Windows clipboard. Only one item can be stored at a time: each time you copy something, the current item in the clipboard is replaced. The data on the clipboard also cannot be viewed without pasting it into an application. In addition, the data on the clipboard is cleared when you log out of your Windows session. NOTE: The clipboard viewer from Windows XP (clipbrd.exe) is not available in Windows 7 or Vista; however, you can download the file from deviantART and run it to view the current entry in the clipboard in Windows 7. Here are some additional useful tools that help enhance or expand the features of the Windows clipboard and make it more useful.

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested, backups, companies have gone completely out of business or suffered traumatic financial loss. That’s just a simple fact (outlined with a few examples here). So you want to get backups right. That’s a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you’ll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let’s talk about them. Guidance If you’re a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But, if you’re not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type you’ll see that only those databases that are in Full Recovery will be listed: This makes it very easy to be sure you have a log backup set up for all the databases you should and none of the databases where you won’t be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you’ll notice little green question marks. You can see two in the screen above and more in each of the screens in other topics below this one. Clicking on these will open a window with additional information about the topic in question which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs. Here’s an example: Backup Copies As a part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup, after completing the backup so it doesn’t cause any additional blocking or resource use within the backup process, to the network location you define. Creating a copy acts as a mechanism of protection for your backups. You can then backup that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine. Offsite Storage Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site, protection of your backups. It’s a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard or even the command line calls to SQL Backup, so it’s part a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you’ll need to do a little setup, but it’s built right into the product to get this done. You’ll be directed to the web site for our hosted storage where you can set up an account. 
Compression If you have SQL Server 2008 Enterprise, or you’re on SQL Server 2008R2 or greater and you have a Standard or Enterprise license, then you have backup compression. It’s built right in and works well. But, if you need even more compression then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can just sacrifice some CPU time and get even more compression. You decide. For just a simple example I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53mb. Our file was 33mb. That’s a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system: So for this test, if you wanted maximum compression with minimum CPU use you’d probably want to go with Level 2 which gets you almost as much compression as Level 3 but will use fewer resources. And that compression is still better than the native one by 10%. Restore Testing Backups are vital. But, a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you’ll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You’ll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But, this doesn’t do a complete test of the backup. The only complete test is a restore. So, what you really need is a process that tests your backups. This is something you’ll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule, all done through our wizard which gives you as much guidance as you get when running backups, you get the option of creating a reminder to create a job to test your restores. You can enable this or disable it as you choose when creating your scheduled backups. Once you’re ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you’re going to restore to a specified copy or to the original database: If you’re doing your tests on a new server (probably the best choice) you can just overwrite the original database if it’s there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks. So we have DBCC built right into the process. You can even decide how you want DBCC run, which error messages to include, limit or add to the checks being run. With this you could offload some DBCC checks from your production system so that you only run the physical checks on your production box, but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well: Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of the tests passing. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don’t just have backups, but that you have tested backups. 
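    For reference, the native CHECKSUM and VERIFYONLY steps mentioned above look roughly like this in plain T-SQL; the file path is a placeholder, not a value taken from the article:

        -- Back up with page checksums validated (and native compression where licensed)
        BACKUP DATABASE AdventureWorks2012
        TO DISK = N'D:\Backups\AdventureWorks2012.bak'
        WITH CHECKSUM, COMPRESSION, INIT;

        -- Check the backup header and checksums without actually restoring
        RESTORE VERIFYONLY
        FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
        WITH CHECKSUM;

    As the article stresses, VERIFYONLY is still not a complete test; only an actual restore, ideally followed by DBCC CHECKDB, proves the backup is usable.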
Single Point of Management If you have more than one server to maintain, getting backups setup could be a tedious process. But, with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases and all your servers backups from a single location. You’ll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface without having to connect to different servers. Log Shipping Wizard If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process including setting up alerts so you’ll know should your log shipping fail. Summary You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • What do you think of this generator syntax?

    - by ChaosPandion
    I've been working on an ECMAScript dialect for quite some time now and have reached a point where I am comfortable adding new language features. I would love to hear some thoughts and suggestions on the syntax.

    Example

        generator {
            yield 1;
            yield 2;
            yield 3;
            if (true) {
                yield break;
            }
            yield continue generator {
                yield 4;
                yield 5;
                yield 6;
            };
        }

    Syntax

        GeneratorExpression:
            generator { GeneratorBody }
        GeneratorBody:
            GeneratorStatements(opt)
        GeneratorStatements:
            StatementList(opt) GeneratorStatement GeneratorStatements(opt)
        GeneratorStatement:
            YieldStatement
            YieldBreakStatement
            YieldContinueStatement
        YieldStatement:
            yield Expression ;
        YieldBreakStatement:
            yield break ;
        YieldContinueStatement:
            yield continue Expression ;

    Semantics

    The YieldBreakStatement allows you to end iteration early. This helps avoid deeply indented code. You'll be able to write something like this:

        generator {
            yield something1();
            if (condition1 && condition2) yield break;
            yield something2();
            if (condition3 && condition4) yield break;
            yield something3();
        }

    instead of:

        generator {
            yield something1();
            if (!condition1 && !condition2) {
                yield something2();
                if (!condition3 && !condition4) {
                    yield something3();
                }
            }
        }

    The YieldContinueStatement allows you to combine generators:

        function generateNumbers(start) {
            return generator {
                yield 1 + start;
                yield 2 + start;
                yield 3 + start;
                if (start < 100) {
                    yield continue generateNumbers(start + 1);
                }
            };
        }
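    For readers mapping this proposal onto standard ECMAScript: yield continue is close in spirit to the yield* delegation that standard ES6 generators provide, and yield break plays roughly the role a bare return plays inside a function* body. As a rough, hedged comparison only (the dialect above is the author's own, so this is an approximation rather than an equivalent implementation), the generateNumbers example could be written in standard syntax like this:

        function* generateNumbers(start) {
            yield 1 + start;
            yield 2 + start;
            yield 3 + start;
            if (start < 100) {
                // delegation, playing the role of "yield continue"
                yield* generateNumbers(start + 1);
            }
        }

        // Usage: take a few values and stop early
        for (const n of generateNumbers(1)) {
            if (n > 10) break;
            console.log(n);
        }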

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For upcoming posts it seemed to be a good idea to dedicate some time for cluster basic concepts and theory. This post misses a lot of details that would explode the articlesize, should you have questions, do not hesitate to ask them in the comments.  The goal here is to get some concepts straight. I can't promise to give you an overall complete definitions of cluster, cluster agent, quorum, voting, fencing, split brain condition, so the following is more of an explanation. Here we go. -------- Cluster, HA, failover, switchover, scalability -------- An attempted definition of a Cluster: A cluster is a set (2+) server nodes dedicated to keep application services alive, communicating through the cluster software/framework with eachother, test and probe health status of servernodes/services and with quorum based decisions and with switchover/failover techniques keep the application services running on them available. That is, should a node that runs a service unexpectedly lose functionality/connection, the other ones would take over the and run the services, so that availability is guaranteed. To provide availability while strictly sticking to a consistent clusterconfiguration is the main goal of a cluster.  At this point we have to add that this defines a HA-cluster, a High-Availability cluster, where the clusternodes are planned to run the services in an active-standby, or failover fashion. An example could be a single instance database. Some applications can be run in a distributed or scalable fashion. In the latter case instances of the application run actively on separate clusternodes serving servicerequests simultaneously. An example for this version could be a webserver that forwards connection requests to many backend servers in a round-robin way. Or a database running in active-active RAC setup.  -------- Cluster arhitecture, interconnect, topologies -------- Now, what is a cluster made of? Servers, right. These servers (the clusternodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the clusternodes. These connection are called interconnects.How many clusternodes are in a cluster? There are different cluster topologies. The most simple one is a clustered pair topology, involving only two clusternodes:  There are several more topologies, clicking the image above will take you to the relevant documentation. Also, to answer the question Solaris Cluster allows you to run up to 16 servers in a cluster. Where shall these clusternodes be placed? A very important question. The right answer is: It depends on what you plan to achieve with the cluster. Do you plan to avoid only a server outage? Then you can place them right next to eachother in the datacenter. Do you need to avoid DataCenter outage? In that case of course you should place them at least in different fire zones. Or in two geographically distant DataCenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched- or campus cluster, the clusternodes being several kilometers away from eachother. To cover really large distances, you probably need to move to a GeoCluster, which is a different kind of animal.  What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two, separate (locally-HA) clusters.  -------- Cluster resource types, agents, resources, resource groups -------- So how does the cluster manage my applications? 
The cluster needs to start, stop and probe your applications. If you application runs, the cluster needs to check regularly if the application state is healthy, does it respond over the network, does it have all the processes running, etc. This is called probing. If the cluster deems the application is in a faulty state, then it can try to restart it locally or decide to switch (stop on node A, start on node B) the service. Starting, stopping and probing are the three actions that a cluster agent does. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, or the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle Weblogic, Zones, LDoms, NFS, DNS, etc.We also need to clarify the difference between a cluster resource and the cluster resource group.A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resourcegroups, and switch these groups together from one node to another. To stick to the example above, to move an Oracle DB service from one node to another, you have to switch the group between nodes, and the agents of the cluster resources in the group will do the following:  On node A Shut down the DB Unconfigure the LogicalHost IP the DB Listener listens on unmount the filesystem   Then, on node B: mount the FS configure the IP  startup the DB -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia -------- How do the clusternodes agree upon their action? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing.Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem, clusternodes think very much all alike. Still, every action needs to be governed upon in a productive system, and has to be agreed upon. Agreeing is easy as long as the clusternodes all behave and talk to eachother over the interconnect. But if the interconnect is gone/down, this all gets tricky and confusing. Clusternodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!". The problem is, as I have already mentioned, clusternodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster they both have only 50% of the votes, that is, they themselves alone are not allowed to run a cluster.  This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons.". They need additional votes to run the cluster. For this requirement a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone, (this is what we call a split brain condition) both nodes start to race and try to reserve the quorum device to themselves. 
They do this, because the quorum device bears an additional vote, that could ensure majority (50% +1). The one that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI reserves it) wins the right to build/run a cluster, the other one - realizing he was late - panics/reboots to ensure the cluster config stays consistent.  Losing the interconnect isn't only endangering the availability of services, but it also endangers the cluster configuration consistence. Just imagine node A being down and during that the cluster configuration changes. Now node B goes down, and node A comes up. It isn't uptodate about the cluster configuration's changes so it will refuse to start a cluster, since that would lead to cluster amnesia, that is the cluster had some changes, but now runs with an older cluster configuration repository state, that is it's like it forgot about the changes.  Also, to ensure application data consistence, the clusternode that wins the race makes sure that a server that isn't part of or can't currently join the cluster can access the devices. This procedure is called fencing. This usually happens to storage LUNs via SCSI reservation.  Now, another important question: Where do I place the quorum disk?  Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south part of it. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk/server? If you put it into the north DC, and that gets hit by a meteor, you lose one clusternode, which isn't a problem, but you also lose your quorum, and the south clusternode can't keep the cluster running lacking the votes. This problem can't be solved with two sites and a campus cluster. You will need a third site to either place the quorum server to, or a third clusternode. Otherwise, lacking majority, if you lose the site that had your quorum, you lose the cluster. Okay, we covered the very basics. We haven't talked about virtualization support, CCR, ClusterFilesystems, DID devices, affinities, storage-replication, management tools, upgrade procedures - should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a followup post too. Now I really want to move on to the second part in the series: ClusterInstallation.  Oh, as for additional source of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html
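    To make the resource-group switchover described earlier a little more concrete, here is a hedged sketch using the Oracle Solaris Cluster command line; the group and node names are invented, and the exact options are quoted from memory, so verify them against the documentation linked at the end of the post:

        # Where do the resource groups and resources currently run?
        clrg status
        clrs status

        # Switch the whole group (database, LogicalHost IP, HAStoragePlus) to node B;
        # the agents run the stop steps on node A and the start steps on node B
        clrg switch -n nodeB oracle-rg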

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but.... Actually, having implemented it many times I belong to the later group. Here is how. Caveat Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g. Assumptions The whole process should be done in one day. I assume that all domains and instances are started during setup, if you need to restart them on demand or purpose, be sure to have proper start/stop scripts, I don't mention them. Preparation It goes without saying, that you always should do a proper backup before you change anything on your production environment. With proper backup, I also mean a tested and verified restore process. If you dared to test it before, do it now. It pays off. Requirements For OAM 11g to work properly you need a LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try, in this case OID 11g is a good choice.During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME. Installation vs Configuration With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and the Configuration of the instances (the runtime). This is really great as you can install all the software and create new instances when needed. In the following we adhere to this scheme and install the software first and then configure the instances later. Installation Steps The Oracle documentation contains all the necessary steps for the installation of all pieces of software. But some hints help to avoid traps and pitfalls. Step 1 The Database Start the installation with the database. It is quite obvious but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, just install at least a Oracle 10.2.0.4 version. This database can be on a different host. Step 2 The Repository Creation Utility The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema. Step 3 The Foundation With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as its foundation for all OFM 11g installation. We therefore install it first. Depending on your operating system, it might be possible, that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either but if you have both for your platform you are ready to go. Step 3a The JDK To make things interesting, Oracle currently has two JDKs in its portfolio. The Sun JDK and the JRockit JDK. 
Both are available for a number of platforms. If you are lucky and both are available for your platform, install both in a separate directory (and not one of your ORACLE_HOMEs) each, You can use the later as you like. Step 3b Install WLS for OID and OAM With the JDK installed, we start the generic installer with java -jar wls_generic.jar.STOP! Before you do this, check the version first. It should be 1.6.0_18 or later and not the GCC one (Some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME. Step 4 Install OID Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and just select the installation only step. This will install the software only and does not configure the instance. Use the IAM_HOME as the target directory. Step 5 Install SOA Suite The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries. Configuration will be done later. Step 6 Install OAM Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software, the instance will be created later. Step 7 Backup the Installation At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good when you need to go back to this point. Step 8 Configure OID The software is installed and now we need instances to run it. This process is called configuration. For OID use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter some issues check the Oracle Support site for help. This configuration will also start the OID instance. Step 9 Install the Oracle AS SSO Schema Before we install the Forms software we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go, do not reboot your host during the whole procedure. As a precaution, you should make a backup of the OID instance before you start the procedure. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps and will succeed with a working solution. Step 10 Configure OAM Reached this step? Great. You are ready to create an OAM instance. Use the $IAM_HOME/iam/common/binconfig.sh for this. This will open the WLS Domain Creation Wizard and asks for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance. Step 11 Install WLS for Forms 11g It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages. Therefore we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply. 
We call this one FORMS_HOME. Step 12 Install Forms In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is a install only step. Configuration starts with the next step. Step 13 Configure Forms To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and starts them automatically.Step 14 Setup your Forms SSO EnvironmentOnce you have implemented and tested your Forms 11g instance, you can configured it for SSO. Yes, this requires the old Oracle AS SSO solution, OIDDAS for creating and assigning users and SSO to setup your partner applications. In this step you should consider to create every user necessary for use within the environment. When done, do not forget to test it. Step 15 Migrate the SSO Repository Since the final goal is to get rid of the old SSO implementation we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua or ua.bat or ua.cmd) on the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/.Note: At the time of this writing, this step only works if everything is on the same host (ie. OID, OAM, etc.). This restriction might be lifted in later releases. Step 16 Change your OHS sso.conf and shut down OC4J_SECURITY In Step 14 we verified that SSO for our Forms environment works fine. Now, we are shutting the old system done and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf  to osso.conf.10g. Now we change the moduleconf/mod_osso.conf  to point to the new osso.conf file. Copy the new osso.conf  file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS, test forms by using the same forms links. OAM should now kick in and show the login dialog to ask for your user credentials.Done. Now your Forms environment is successfully integrated with OAM 11g.Enjoy. What's Next? This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on. References Nearly everything is documented. Use the documentation! Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1 Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14 Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10   
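    Since the JDK check in Step 3b is easy to get wrong on Linux distributions that ship GCJ, here is a small consolidated sketch of the commands described above (the JDK path is only an example):

        # The version must be 1.6.0_18 or later and the output must not mention "gcj"
        java -version

        # If the default java is the GCC one, call the real JDK by absolute path instead
        /opt/java/jdk1.6.0_23/bin/java -version

        # Start the WebLogic Server generic installer with that JDK
        /opt/java/jdk1.6.0_23/bin/java -jar wls_generic.jar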

    Read the article

  • Bash script throws, "syntax error near unexpected token `}'" when ran

    - by Tab00
    I am trying to write a script to monitor some battery statuses on a laptop running as a server. To accomplish this, I have already started to write this code: #! /bin/bash # A script to monitor battery statuses and send out email notifications #take care of looping the script for (( ; ; )) do #First, we check to see if the battery is present... if(cat /proc/acpi/battery/BAT0/state | grep 'present: *' == present: yes) { #Code to execute if battery IS present #No script needed for our application #you may add scripts to run } else { #if the battery IS NOT present, run this code sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is either missing, or removed. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #Second, we check into the current state of the battery if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charging') { #Code to execute if battery is charging sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is charging. This MIGHT mean that something just happened" -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #If it isn't charging, is it discharging? else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: discharging') { #Code to run if the battery is discharging sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is discharging. This shouldn't be happening. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #If it isn't charging or discharging, is it charged? else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charged') { #Code to run if battery is charged } done I'm pretty sure that most of the other stuff works correctly, but I haven't been able to try it because it will not run. whenever I try and run the script, this is the error that I get: ./BatMon.sh: line 15: syntax error near unexpected token `}' ./BatMon.sh: ` }' is the error something super simple like a forgotten semicolon? Thanks -Tab00
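    The reported syntax error is consistent with the C-style if (...) { ... } blocks: bash wants if/then/elif/fi, and the battery checks need a real test such as grep -q rather than an == comparison written inside the condition. A minimal sketch of that structure, with the sendemail calls left as placeholders so no account details are repeated here:

        #!/bin/bash
        # Sketch: monitor the battery state and react to it; grep -q sets the exit
        # status, which is exactly what "if" evaluates.
        STATE_FILE=/proc/acpi/battery/BAT0/state

        while true; do
            if ! grep -q 'present: *yes' "$STATE_FILE"; then
                echo "battery missing or removed"   # replace with the sendemail call
            elif grep -q 'charging state: *charging' "$STATE_FILE"; then
                echo "battery charging"             # replace with the sendemail call
            elif grep -q 'charging state: *discharging' "$STATE_FILE"; then
                echo "battery discharging"          # replace with the sendemail call
            elif grep -q 'charging state: *charged' "$STATE_FILE"; then
                :                                   # charged - nothing to do
            fi
            sleep 60
        done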

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Install dependencies
    4. Run tests

    In a Dockerized workflow, it would be something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Build docker image
    4. Run docker image as container
    5. Run tests
    6. Kill docker container

    My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing different kinds of services (for example, a background service that only communicates with a database), a different approach would be required. What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation and do something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Build docker image
    4. Run docker image as container
    5. Open SSH session into container
    6. Run tests
    7. Kill docker container

    SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
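    One common way to keep the isolation without opening SSH into the container is to make the tests the container's own job: either run the test command as the container command, or execute the test runner inside the already-running container with docker exec. A hedged sketch (the image name and test script path are placeholders, not part of the question):

        # Build the image from the freshly pulled source
        docker build -t myapp:ci .

        # Option 1: the tests are the container's command; the exit code comes
        # straight back to the CI job
        docker run --rm myapp:ci ./run_tests.sh

        # Option 2: start the container (plus any database it needs), then run
        # the tests inside it without SSH
        docker run -d --name myapp-ci myapp:ci
        docker exec myapp-ci ./run_tests.sh
        docker rm -f myapp-ci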

    Read the article

  • apply non-hierarchial transforms to hierarchial skeleton?

    - by user975135
    I use Blender3D, but the answer might not API-exclusive. I have some matrices I need to assign to PoseBones. The resulting pose looks fine when there is no bone hierarchy (parenting) and messed up when there is. I've uploaded an archive with sample blend of the rigged models, text animation importer and a test animation file here: http://www.2shared.com/file/5qUjmnIs/sample_files.html Import the animation by selecting an Armature and running the importer on "sba" file. Do this for both Armatures. This is how I assign the poses in the real (complex) importer: matrix_bases = ... # matrix from file animation_matrix = matrix_basis * pose.bones['mybone'].matrix.copy() pose.bones[bonename].matrix = animation_matrix If I go to edit mode, select all bones and press Alt+P to undo parenting, the Pose looks fine again. The API documentation says the PoseBone.matrix is in "object space", but it seems clear to me from these tests that they are relative to parent bones. Final 4x4 matrix after constraints and drivers are applied (object space) I tried doing something like this: matrix_basis = ... # matrix from file animation_matrix = matrix_basis * (pose.bones['mybone'].matrix.copy() * pose.bones[bonename].bone.parent.matrix_local.copy().inverted()) pose.bones[bonename].matrix = animation_matrix But it looks worse. Experimented with order of operations, no luck with all. For the record, in the old 2.4 API this worked like a charm: matrix_basis = ... # matrix from file animation_matrix = armature.bones['mybone'].matrix['ARMATURESPACE'].copy() * matrix_basis pose.bones[bonename].poseMatrix = animation_matrix pose.update() Link to Blender API ref: http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.BlendData.html#bpy.types.BlendData http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.PoseBone.html#bpy.types.PoseBone

    Read the article

  • How to install an init.d script in ubuntu?

    - by suhail
    I am trying to install an init.d script to run Celery for scheduling tasks. Here are the steps I followed:

    - copied the file celeryd into the folder /etc/init.d/
    - created a configuration file celeryd in the folder /etc/default/

    Now when I try to start it with sudo /etc/init.d/celeryd start, it throws the error:

        sudo: /etc/init.d/celeryd: command not found

    I googled how to install an init.d script and found this SO question. It says to issue uname -a, and when I do I get this:

        Linux capsonesystem8-desktop 3.2.0-43-generic-pae #68-Ubuntu SMP Wed May 15 03:55:10 UTC 2013 i686 i686 i386 GNU/Linux

    It also says to use utilities like insserv to enable the init.d script, so I tried insserv /etc/init.d/celeryd, but it throws the error "insserv: command not found". So I tried to install it with sudo apt-get install insserv, but it says it is already installed:

        insserv is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 222 not upgraded.

    So how do I install the init.d script? Any help will be appreciated.

    Update 1: when I ran $ sh -x /etc/init.d/celeryd start it revealed some errors. Maybe that is why the service won't start.

    Update 2: I cleared all the errors shown by $ sh -x /etc/init.d/celeryd start, but sudo /etc/init.d/celeryd start still throws the "command not found" error.
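    Given the "command not found" symptom, two usual suspects are a missing execute bit and a broken shebang line (for example, Windows line endings); once the script itself runs, it can be registered with update-rc.d. A hedged sketch of those checks, assuming the script really lives at /etc/init.d/celeryd:

        # The script must be executable and its first line should be a shebang such as #!/bin/sh
        sudo chmod 755 /etc/init.d/celeryd
        head -n 1 /etc/init.d/celeryd

        # Running it through a shell explicitly shows the real errors
        sudo sh -x /etc/init.d/celeryd start

        # Register it for the default runlevels
        sudo update-rc.d celeryd defaults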

    Read the article

  • Upstart: best way for shutdown hook?

    - by Binarus
    Hi, since Ubuntu relies on upstart for some time now, I would like to use an upstart job to gracefully shutdown certain applications on system shutdown or reboot. It is essential that the system's shutdown or reboot is stalled until these applications are shut down. The applications will be started manually on occasion, and on system shutdown should automatically be ended by a script (which I already have). Since the applications can't be ended reliably without (nearly all) other services running, ending the applications has to be done before the rest of the shutdown begins. I think I can solve this by an upstart job which will be triggered on shutdown, but I am unsure which events I should use in which manner. So far, I have read the following (partly contradicting) statements: There is no general shutdown event in upstart Use a stanza like "start on starting shutdown" in the job definition Use a stanza like "start on runlevel [06S]" in the job definition Use a stanza like "start on starting runlevel [06S]" in the job definition Use a stanza like "start on stopping runlevel [!06S]" in the job definition From these recommendations, the following questions arise: Is there or is there not a general shutdown event in Ubuntu's upstart? What is the recommended way to implement a "shutdown hook"? When are the events runlevel [x] triggered; is this when having entered the runlevel or when entering the runlevel? Can we use something like "start on starting runlevel [x]" or "start on stopping runlevel [x]"? What would be the best solution for my problem? Thank you very much, Binarus
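    For what it is worth, a minimal sketch of the kind of job this would need, written from memory of upstart's stanza syntax (the job name and script path are invented, and the event choice should be verified against your release's upstart documentation, which is exactly the ambiguity raised above):

        # /etc/init/stop-my-apps.conf  (hypothetical)
        description "Gracefully stop custom applications before halt or reboot"

        # run when the system enters runlevel 0 (halt) or 6 (reboot)
        start on runlevel [06]

        # a short-lived job: run the script once and wait for it to finish
        task

        exec /usr/local/bin/stop-my-apps.sh

    Whether this reliably stalls the rest of the shutdown until the script returns depends on how the other services' own stop jobs are ordered, which is the heart of the question.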

    Read the article


  • Why does 12.04 upgrade abort with out of space error when I have lots of it?

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" is opened. At the top it says: Total filesystem capacity: 47.0 GB (used: 13.5 GB available: 33.4 GB)

        Folder -- Usage -- Size
        /      -- 100%  -- 12.5 GB
        usr    -- 44.8% -- 5.6 GB
        home   -- 30.3% -- 3.8 GB
        lib    -- 13.0% -- 1.6 GB
        var    -- 9.1%  -- 1.1 GB
        boot   -- 2.5%  -- 309.5 MB

    and a lot of small contributors like etc, opt, sbin, bin etc. I do not really understand this problem, since the analyser at the top says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:

        Filesystem Inodes  IUsed   IFree   IUse% Mounted on
        /dev/sda7  610800  576874  33926   95%   /
        udev       213451  563     212888  1%    /dev
        tmpfs      218524  486     218038  1%    /run
        none       218524  3       218521  1%    /run/lock
        none       218524  7       218517  1%    /run/shm
        /dev/sda8  2264752 16371   2248381 1%    /home

    The output of df -h:

        Filesystem Size  Used  Avail Use% Mounted on
        /dev/sda7  9,3G  7,8G  1,1G  88%  /
        udev       993M  4,0K  993M  1%   /dev
        tmpfs      401M  884K  400M  1%   /run
        none       5,0M  0     5,0M  0%   /run/lock
        none       1003M 152K  1002M 1%   /run/shm
        /dev/sda8  35G   4,0G  29G   13%  /home
        /dev/sda2  101G  64G   37G   64%  /media/A2C8E28BC8E25CD3

    Running sudo fdisk -l gives:

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000080

        Device Boot    Start      End        Blocks     Id System
        /dev/sda1      63         96389      48163+     de Dell Utility
        /dev/sda2 *    98304      210434488  105168092+ 7  HPFS/NTFS/exFAT
        /dev/sda3      210436094  312576704  51070305+  f  W95 Ext'd (LBA)
        /dev/sda5      306279288  312576704  3148708+   dd Unknown
        /dev/sda6      210436096  214341631  1952768    82 Linux swap / Solaris
        /dev/sda7      214343680  233873407  9764864    83 Linux
        /dev/sda8      233875456  306278399  36201472   83 Linux

        Partition table entries are not in disk order
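    Since / is both nearly full (88%) and almost out of inodes (95% IUse), old kernel packages are a common culprit. A minimal sketch of commands to check and clean up; keep the currently running kernel:

        # see which kernels are installed and which one is running
        dpkg -l 'linux-image*' | grep ^ii
        uname -r
        # remove packages that are no longer needed (includes old kernels on recent releases)
        sudo apt-get autoremove
        # clear the package cache
        sudo apt-get clean
        # find the biggest directories on the root filesystem
        sudo du -xh --max-depth=1 / | sort -h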

    Read the article

  • Is my JS/Jquery methodology good?

    - by absentx
    I always struggle with which of the stack sites is best to post "questions of theory" like this, but I think Programmers is the best; if not, as usual a mod will move it etc... I am seeking critique on what has become my normal methodology of writing JavaScript. I have become heavily reliant on the jQuery library, but I think this has also helped me learn the native language better. Anyway, please critique the following style of JS coding... buried in it are a lot of questions of scope; if you could point out the strengths and weaknesses of this style I would appreciate it.

        var critique = {
            start: function(){
                globalness = 'GLOBAL-GLOBAL'; // available to all critique's methods
                var notglobalness = 'LOCAL-LOCAL'; // only available to critique's start method
                // am I using the "method" terminology properly here??

                $('#stuff').on('click', 'a.closer-target', function(){
                    $target = $(this);
                    if($target.hasClass('active')){
                        $target.removeClass('active');
                    }
                    else{
                        $target.addClass('active');
                        critique.madness($target);
                    }
                })

                console.log(notglobalness + ': at least I am useful at home');
                console.log('note here that: ' + notglobalness + ' is no longer available after this point, lets continue on:');
                critique.madness(notglobalness);
            },
            madness: function($e){
                // do a bunch of awesomeness with $e
                // but continue to keep it separate because you think it's best to keep things isolated.
                // send to the next function when complete here
                console.log('here is globalness, which is still available from the start method of critique!! ' + globalness);
                console.log('lets see if the globalness carries on to a new var object!!');
                console.log('the locally isolated variable of NOTGLOBALNESS is available here because it was passed to this method, lets show it:' + $e);
                carryOn.start();
            }
        } // end critique

        var carryOn = {
            start: function(){
                console.log('any chance critique.globalness will work here??? lets see: ' + globalness);
                console.log('it absolutely does');
            }
        }

        $(document).ready(critique.start);

    Read the article

  • Configuring log4j on weblogic server for web applications.

    - by adejuanc
    To configure WebLogic server:
    1.- Read the following link: How to Use Log4j with WebLogic Logging Services http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html#wp1014610 Here are the steps:
    2.- Go to WL_HOME/server/lib and copy wllog4j.jar to the server CLASSPATH; to do this, copy the file into DOMAIN_NAME/lib.
    3.- Download the log4j jar (in my case I did not have the file) from http://logging.apache.org/log4j/1.2/download.html (in this case the latest available version is log4j-1.2.17.jar) and copy the file into DOMAIN_NAME/lib (as in step 2).
    4.- In this case I activated log4j using WLST (the WebLogic Scripting Tool), as below:
    4.1.- As you're using Windows, open a command prompt, go to DOMAIN_NAME/bin and run the file setDomainEnv.cmd (this file sets up the environment to run Java).
    4.2.- Execute the following commands:

        C:\>java weblogic.WLST
        wls:/offline> connect('username','password')
        wls:/mydomain/serverConfig> edit()
        wls:/mydomain/edit> startEdit()
        wls:/mydomain/edit !> cd("Servers/$YOUR_SERVER_NAME/Log/$YOUR_SERVER_NAME")
        wls:/mydomain/edit/Servers/myserver/Log/myserver !> cmo.setLog4jLoggingEnabled(true)
        wls:/mydomain/edit/Servers/myserver/Log/myserver !> save()
        wls:/mydomain/edit/Servers/myserver/Log/myserver !> activate()

    You can use ls() to list the objects under the WLS directory. This will activate log4j for use with WLS. See also: Configuring WebLogic Logging Services http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html
    To configure applications:
    1. Create a log4j.properties file as below:

        log4j.debug=TRUE
        log4j.rootLogger=INFO, R
        log4j.appender.R=org.apache.log4j.RollingFileAppender
        log4j.appender.R.File=/home/server.log
        log4j.appender.R.MaxFileSize=100KB
        log4j.appender.R.MaxBackupIndex=5
        log4j.appender.R.layout=org.apache.log4j.PatternLayout
        log4j.appender.R.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSSS} %p %t %c - %m%n

    2. Copy the file to the /WEB-INF/classes directory of your application.
    3. Also perform the WLST steps above to activate log4j on WLS.

    Read the article

  • Can't install or update Ubuntu after using parameter acpi_osi = Linux

    - by Lucas Leitão
    I recently had an issue with my acer 4736z notebook because I was having a blank screen after booting the OS, then someone told me to use the parameter GRUB_CMDLINE_LINUX_DEFAULT = "quiet splash acpi_osi = Linux" after quiet splash inside the grub. It worked for me, but since then I can't install a thing or update anything on Linux because it says Removing linux-image-extra-3.5.0-17-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-17-generic /boot/vmlinuz-3.5.0-17-generic update-initramfs: Deleting /boot/initrd.img-3.5.0-17-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-17-generic /boot/vmlinuz-3.5.0-17-generic /usr/sbin/grub-mkconfig: 11: /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT: not found run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 127 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-extra-3.5.0-17-generic.postrm line 328. dpkg: erro ao processar linux-image-extra-3.5.0-17-generic (--remove): sub-processo script post-removal instalado retornou estado de saída de erro 1 Removendo linux-image-3.5.0-17-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-17-generic /boot/vmlinuz-3.5.0-17-generic update-initramfs: Deleting /boot/initrd.img-3.5.0-17-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-17-generic /boot/vmlinuz-3.5.0-17-generic /usr/sbin/grub-mkconfig: 11: /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT: not found run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 127 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.5.0-17-generic.postrm line 328. dpkg: erro ao processar linux-image-3.5.0-17-generic (--remove): sub-processo script post-removal instalado retornou estado de saída de erro 1 Erros foram encontrados durante o processamento de: linux-image-extra-3.5.0-17-generic linux-image-3.5.0-17-generic E: Sub-process /usr/bin/dpkg returned an error code (1) I already tried to remove older kernels but it gives me the same message. Do you have a clue about what should I do?
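    A likely cause of the /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT: not found error is that the file was edited with spaces around the equals signs; /etc/default/grub is parsed as a shell script, so assignments must not contain spaces. A sketch of the corrected line (the kernel parameter value is taken from the question) and the follow-up commands:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux"

        # then regenerate the grub config and retry the broken package operations
        sudo update-grub
        sudo apt-get -f install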

    Read the article

  • Copying photos from camera stalls - how to track down issue?

    - by Hamish Downer
    When I copy files from my camera (connected via USB) to the SSD in my laptop a few files get copied and then the copy stalls. I'm not sure why, any ideas or things to investigate appreciated, or bug reports to go and look at. I have read this answer - the camera (Canon 40D in case that matters) mounts fine using gvfs. I can see the photos in Nautilus, or in the terminal (in /run/user/username/gvfs/... ) and I can copy a few photos, but not many. Using the terminal or Nautilus the process hangs until the camera goes to sleep. Digikam fails to copy any at all, as does Rapid Photo Downloader. Shotwell did manage it in the end, but that is very much a work around for me. I have disabled thumbnail generation by nautilus. Load average stays about 1 while this is happening, while CPU usage is half idle, half wait (and a little user/sys for other programs). None of the programs at the top of the cpu list in top are related to copying photos. There is not much in the logs - from /var/log/syslog Dec 2 16:20:52 mishtop dbus[945]: [system] Activating service name='org.freedesktop.UDisks' (using servicehelper) Dec 2 16:20:52 mishtop dbus[945]: [system] Successfully activated service 'org.freedesktop.UDisks' Dec 2 16:21:24 mishtop kernel: [ 2297.180130] usb 2-2: new high-speed USB device number 4 using ehci_hcd Dec 2 16:21:24 mishtop kernel: [ 2297.314272] usb 2-2: New USB device found, idVendor=04a9, idProduct=3146 Dec 2 16:21:24 mishtop kernel: [ 2297.314278] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 Dec 2 16:21:24 mishtop kernel: [ 2297.314283] usb 2-2: Product: Canon Digital Camera Dec 2 16:21:24 mishtop kernel: [ 2297.314287] usb 2-2: Manufacturer: Canon Inc. Dec 2 16:21:24 mishtop mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2" Dec 2 16:21:24 mishtop mtp-probe: bus: 2, device: 4 was not an MTP device This problem has only started recently and I've had all the hardware for ages. I have also recently upgraded to 12.10, though I'm not sure if the problem started when I upgraded or after the upgrade. I also note this similar question but it is currently unanswered and I'm providing more detail
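    As a possible workaround while tracking this down, the photos can be pulled over the same USB connection with gphoto2 instead of the gvfs mount; a sketch, assuming the camera is first unmounted in Nautilus so gphoto2 can claim the device:

        sudo apt-get install gphoto2
        gphoto2 --auto-detect        # confirm the camera is detected
        mkdir ~/photo-import && cd ~/photo-import
        gphoto2 --get-all-files      # download every file from the camera

    If gphoto2 stalls at the same point, that would point at the USB link or the camera itself rather than at gvfs.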

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration. Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after the University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provided no security and transmitted data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks.

    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes in the /etc/ssh/sshd_config file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port to which you can direct traffic that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies a local dynamic application-level port forward and the -N specifies not to execute a remote command.

        ssh -D 8989 [email protected] -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.

        ssh -L 8080:farwebserver.com:80 [email protected]

    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.
        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts. To make your shell scripts even more useful and to automate logins you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key based authentication. The first step in setting up key based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system you should copy it to the target server into a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key based authentication you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system. An even easier way to move keys back and forth is to use ssh-copy-id.
    Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.
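    Following the permissions note above, a quick way to enforce them on the remote account (a minimal sketch):

        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    sshd is strict about this: if the directory or file is group- or world-writable, key based logins may silently fall back to password prompts.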

    Read the article

  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop -- not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project? Currently, I created a new Monogame project, added a folder called "Content," and added images under there; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy ones. It seems strange, because my "raw" XNA project just last week had a Content project in it (XNA Framework Content Pipeline, according to VS2010), which compiled my images to XNB (I think). It seems like Monogame doesn't use the same content pipeline, but I'm not sure. Edit: My question is not about "how do I get the XNA content pipeline to work with Monogame." My question is "why would I want to use the XNA content pipeline in Monogame?" Because there are (at least) two solutions (that I see today): Add the images to the Monogame project and set the Copy to Output Directory options to copy. Add a XNA content pipeline project and add my images to that instead; reference it from my MOnogame project. Which solution should I use, and why? I currently have a working version with the first option.

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB) OR Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word Document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of a file uploaded, just a stripped text version in the textbox. What I'm wondering now is, are they only going to look at the text version of my resume? If that's the case then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such atrocities from the conversion as "Other Skills: LTEX". I can pretty little text files without too much effort, so this isn't that big of deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of that, then I certainly don't want to override it. Has anyone here either gone through the Amazon hiring process or interviewed candidates and know how this works? (i.e. When on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar)

    Read the article

  • Webmatrix The Site has Stopped Fix

    - by Tarun Arora
    I just got started with Azure WebSites by creating a website from the WordPress template. Next I tried to install WebMatrix so that I could run the website locally. Every time I tried to run my website from WebMatrix I hit the message "The following site has stopped 'xxx'".
    Step 00 – Analysis
    It took a bit of time to figure out that WebMatrix makes use of IIS Express, but it was easy to see that IIS Express was not showing up in the system tray when I started WebMatrix. This was a good indication that IIS Express was having trouble starting up. So, I opened a CMD prompt and tried to run IISExpress.exe; this resulted in the below error message. So, I ran IISExpress.exe /trace:Error, which gave a more detailed reason for the failure.
    Step 1 – Fixing "The following site has stopped 'xxx'"
    Further analysis revealed that the IIS Express config file had been corrupted. So, I navigated to C:\Users\<UserName>\Documents\IISExpress\config and deleted the files applicationhost.config, aspnet.config and redirection.config (please take a backup of these files before deleting them). Back in CMD, I ran IISExpress /trace:Error again; this time IIS Express successfully started and parked itself in the system tray. I opened up WebMatrix and clicked Run, and this time the default site successfully loaded in the browser without any failures.
    Step 2 – Download the WordPress Azure WebSite using WebMatrix
    Because the config files 'applicationhost.config', 'aspnet.config' and 'redirection.config' were deleted, I lost the settings of my Azure based WordPress site that I had downloaded to run from WebMatrix. This was simple to sort out:
    1. Open up WebMatrix, go to the Remote tab, and click on Download.
    2. Export the PublishSettings file from the Azure Management Portal.
    3. Upload it in the pop-up you get after clicking Download in the previous step.
    Now you should have your Azure WordPress website all set up & running from WebMatrix. Enjoy!
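    For reference, a sketch of backing up the IIS Express configuration files before deleting them, from a Windows command prompt (paths as in the steps above):

        cd %USERPROFILE%\Documents\IISExpress\config
        mkdir backup
        copy applicationhost.config backup\
        copy aspnet.config backup\
        copy redirection.config backup\
        del applicationhost.config aspnet.config redirection.config

    IIS Express recreates default copies of these files the next time it starts, which is why deleting the corrupted ones fixes the error.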

    Read the article

  • Allow non-sudo group to control Upstart job

    - by Angle O'Saxon
    I'm trying to set up an Upstart job to run on system startup that can also be started/stopped by members of a group other than sudo. With a previous version, I used update-rc.d and scripts stored in /etc/init.d/ to get this working by adding %Group ALL = NOPASSWD: /etc/init.d/scriptname to my sudoers file, but I can't seem to get an equivalent working for Upstart. I tried adding %Group ALL = NOPASSWD: /sbin/initctl start jobname to the sudoers file, but trying to run the command start jobname produces this error: start: Rejected send message, 1 matched rules; type="method_call", sender=":1.21" (uid=1000 pid=5148 comm="start jobname " interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init") As near as I can tell, that's a complaint that my user account isn't given the power to send 'Start' messages in the D-Bus config file for Upstart. I haven't been able to find any information on how to edit that file to give a group permission to access a specific service--does such an option exist? Is there a way to edit the sudoers file so I can run the job without editing the config file? Am I better off just sticking with the previous version?
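    One detail worth checking (an assumption, since the sudoers line itself looks plausible): running start jobname without sudo talks to Upstart's D-Bus interface directly as the unprivileged user, which is what produces the rejection; the sudoers entry only helps if the command is actually run through sudo. A sketch, with mygroup and jobname as placeholders:

        # /etc/sudoers.d/jobname (edit with visudo -f)
        %mygroup ALL = (root) NOPASSWD: /sbin/initctl start jobname, /sbin/initctl stop jobname

        # members of mygroup would then run
        sudo initctl start jobname
        sudo initctl stop jobname

    The point is that sudo must appear on the command line, since start is just a symlink to initctl.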

    Read the article

  • Client Side Prediction for a Look Vector

    - by Mike Sawayda
    So I am making a first person networked shooter. I am working on client-side prediction, where I am predicting player position and look vectors client-side based on input messages received from the server. Right now I am only worried about the look vectors though. I am receiving the correct look vector from the server about 20 times per second and I am checking that against the look vector that I have client side. I want to interpolate the client's look vector towards the correct server-side one over a period of time, so that no matter how far you are away from the server's look vector you will interpolate to it over the same amount of time. For example, if you were 10 degrees off it would take the same amount of time as if you were 2 degrees off to be correctly lined up with the server copy. My code looks something like this, but the problem is that the amount by which you are changing the client's copy gets infinitesimally small, so you will actually never reach the server's copy. This is because I am always calculating the difference and only moving by a percentage of that every frame. Does anyone have any suggestions on how to interpolate towards the server's copy correctly?

        if(rotationDiffY > ClientSideAttributes::minRotation)
        {
            if(serverRotY > clientRotY)
            {
                playerObjects[i]->collisionObject->rotation.y += (rotationDiffY * deltaTime);
            }
            else
            {
                playerObjects[i]->collisionObject->rotation.y -= (rotationDiffY * deltaTime);
            }
        }

    Read the article

  • Need help: jBPM installation issue

    - by Kouky
    This is Ms Kahina. I'm getting started with jBPM & Java EE, and I tried to install jBPM 5.4 by following all the steps stated in the jBPM user guide, as below: 1- I installed Java (JDK 1.7) and ant (Apache-ant-1.9.4). 2- I set both the JAVA_HOME and ANT_HOME environment variables. 3- I downloaded the jBPM 5.4 full installer and ran the installation script "ant install.demo" and after that the start script "ant start.demo". But unfortunately jBPM 5.4 is not working. When running the installation script I get a success message, but with the following warnings: [copy] Warning: Could not find file C:\jbpm-installer\db\task-persistence.xml to copy. [copy] Warning: Could not find file C:\jbpm-installer\db\Taskorm.xml to copy. The start demo was successful. However, when I try to open any tool provided with jBPM, such as the jBPM-console for example, I get the message Address not found, or sometimes an HTTP status 404, even though the welcome page of JBoss AS opens at http://localhost:8080/. I need your assistance to sort out this issue so I can move forward in my project, as I've been blocked at the installation stage for more than a week now; I don't know if this is related to JBoss AS7 or to something else I haven't found out. Looking forward to hearing from you. Thanks, Kahina
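    For reference, a sketch of setting the environment variables and re-running the installer from a Windows command prompt; the JDK and ant install paths are assumptions, and the jbpm-console URL is assumed from the jBPM 5.x documentation, so adjust both to your setup:

        set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_25
        set ANT_HOME=C:\apache-ant-1.9.4
        set PATH=%JAVA_HOME%\bin;%ANT_HOME%\bin;%PATH%
        cd C:\jbpm-installer
        ant install.demo
        ant start.demo
        rem then open the console in a browser
        start http://localhost:8080/jbpm-console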

    Read the article

< Previous Page | 308 309 310 311 312 313 314 315 316 317 318 319  | Next Page >